Column schema:
  query_id            string   (length 32-32)
  query               string   (length 6-3.9k)
  positive_passages   list     (length 1-21)
  negative_passages   list     (length 10-100)
  subset              string   (7 distinct values)

Each example row below gives these five fields in order: query_id, query, positive_passages, negative_passages, subset.
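For readers who want to work with rows in this shape, the following is a minimal sketch of how such a dump might be read and iterated. The JSONL filename is a placeholder, and the assumption that each row is stored as one JSON object per line is made for illustration only; the original export format is not stated here.

```python
import json

# Hypothetical local export of the rows shown below: one JSON object per
# line, carrying the five fields from the schema above.
path = "scidocsrr_sample.jsonl"  # placeholder filename

with open(path, "r", encoding="utf-8") as f:
    for line in f:
        row = json.loads(line)
        # Each passage entry is a dict with "docid", "text", and "title",
        # as in the example rows shown in this dump.
        print(row["query_id"], "-", row["query"][:60])
        print(" ", len(row["positive_passages"]), "positive /",
              len(row["negative_passages"]), "negative passages,",
              "subset:", row["subset"])
```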
940c36f4f9d57e4367b7b6e5388d9f2c
Deep neural network based speech separation for robust speech recognition
[ { "docid": "08dcf41de314afe40b4430132be40380", "text": "Robust speech recognition in everyday conditions requires the solution to a number of challenging problems, not least the ability to handle multiple sound sources. The specific case of speech recognition in the presence of a competing talker has been studied for several decades, resulting in a number of quite distinct algorithmic solutions whose focus ranges from modeling both target and competing speech to speech separation using auditory grouping principles. The purpose of the monaural speech separation and recognition challenge was to permit a large-scale comparison of techniques for the competing talker problem. The task was to identify keywords in sentences spoken by a target talker when mixed into a single channel with a background talker speaking similar sentences. Ten independent sets of results were contributed, alongside a baseline recognition system. Performance was evaluated using common training and test data and common metrics. Listeners’ performance in the same task was also measured. This paper describes the challenge problem, compares the performance of the contributed algorithms, and discusses the factors which distinguish the systems. One highlight of the comparison was the finding that several systems achieved near-human performance in some conditions, and one out-performed listeners overall. 2009 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "c2daec5b85a4e8eea614d855c6549ef0", "text": "An audio-visual corpus has been collected to support the use of common material in speech perception and automatic speech recognition studies. The corpus consists of high-quality audio and video recordings of 1000 sentences spoken by each of 34 talkers. Sentences are simple, syntactically identical phrases such as \"place green at B 4 now\". Intelligibility tests using the audio signals suggest that the material is easily identifiable in quiet and low levels of stationary noise. The annotated corpus is available on the web for research use.", "title": "" } ]
[ { "docid": "fc935bf600e49db18c0a89f0945bac59", "text": "Psychological positive health and health complaints have long been ignored scientifically. Sleep plays a critical role in children and adolescents development. We aimed at studying the association of sleep duration and quality with psychological positive health and health complaints in children and adolescents from southern Spain. A randomly selected two-phase sample of 380 healthy Caucasian children (6–11.9 years) and 304 adolescents (12–17.9 years) participated in the study. Sleep duration (total sleep time), perceived sleep quality (morning tiredness and sleep latency), psychological positive health and health complaints were assessed using the Health Behaviour in School-aged Children questionnaire. The mean (standard deviation [SD]) reported sleep time for children and adolescents was 9.6 (0.6) and 8.8 (0.6) h/day, respectively. Sleep time ≥10 h was significantly associated with an increased likelihood of reporting no health complaints (OR 2.3; P = 0.005) in children, whereas sleep time ≥9 h was significantly associated with an increased likelihood of overall psychological positive health and no health complaints indicators (OR ~ 2; all P < 0.05) in adolescents. Reporting better sleep quality was associated with an increased likelihood of reporting excellent psychological positive health (ORs between 1.5 and 2.6; all P < 0.05). Furthermore, children and adolescents with no difficulty falling asleep were more likely to report no health complaints (OR ~ 3.5; all P < 0.001). Insufficient sleep duration and poor perceived quality of sleep might directly impact quality of life in children, decreasing general levels of psychological positive health and increasing the frequency of having health complaints.", "title": "" }, { "docid": "05be15ff272c075da77a6c4101dec789", "text": "Until recently, much of the recent upsurge in interest in physician health has been motivated by concerns about improving patient care and patient safety and reducing medical errors. Increasingly, more attention has turned to examining how the management of mental illness among physicians might be improved within the medical profession and one key direction for change is the reduction of stigma associated with mental illness. I begin this article by presenting a brief overview of the stigma process from the general sociological literature. Next, I provide evidence that illustrates how the stigma of mental illness thrives in the medical profession as a result of the culture of medicine and medical training, perceptions of physicians and their colleagues, and expectations and responses of health care systems and organizations. Lastly, I discuss what needs to change by proposing ways of educating and raising awareness regarding mental illness among physicians, discussing approaches to assessing and identifying mental health concerns for physicians and by examining how safe and confidential support and treatment can be offered to physicians in need. I rely on strategically selected studies to effectively draw attention to and support the central themes of this article.", "title": "" }, { "docid": "bc7333c22df5568fa81c3c179ce17f59", "text": "Recursive Neural Networks have recently obtained state of the art performance on several natural language processing tasks. However, because of their feedforward architecture they cannot correctly predict phrase or word labels that are determined by context. 
This is a problem in tasks such as aspect-specific sentiment classification which tries to, for instance, predict that the word Android is positive in the sentence Android beats iOS. We introduce global belief recursive neural networks (GB-RNNs) which are based on the idea of extending purely feedforward neural networks to include one feedbackward step during inference. This allows phrase level predictions and representations to give feedback to words. We show the effectiveness of this model on the task of contextual sentiment analysis. We also show that dropout can improve RNN training and that a combination of unsupervised and supervised word vector representations performs better than either alone. The feedbackward step improves F1 performance by 3% over the standard RNN on this task, obtains state-of-the-art performance on the SemEval 2013 challenge and can accurately predict the sentiment of specific entities.", "title": "" }, { "docid": "6a8a849bc8272a7b73259e732e3be81b", "text": "Northrop Grumman is developing an atom-based magnetometer technology that has the potential for providing a global position reference independent of GPS. The NAV-CAM sensor is a direct outgrowth of the Nuclear Magnetic Resonance Gyro under development by the same technical team. It is capable of providing simultaneous measurements of all 3 orthogonal axes of magnetic vector field components using a single compact vapor cell. The vector sum determination of the whole-field scalar measurement achieves similar precision to the individual vector components. By using a single sensitive element (vapor cell) this approach eliminates many of the problems encountered when using physically separate sensors or sensing elements.", "title": "" }, { "docid": "c821e5d4cc3705b0c3180e802e25b591", "text": "This paper discusses the issue of profit shifting and ‘aggressive’ tax planning by multinational firms. The paper makes two contributions. Firstly, we provide some background information to the debate by giving a brief overview over existing empirical studies on profit shifting and by describing arrangements for IP-based profit shifting which are used by the companies currently accused of avoiding taxes. We then show that preventing this type of tax avoidance is, in principle, straightforward. Secondly, we argue that, in the short term, policy makers should focus on extending withholding taxes in an internationally coordinated way. Other measures which are currently being discussed, in particular unilateral measures like limitations on interest and license deduction, fundamental reforms of the international tax system and country-by-country reporting, are either economically harmful or need to be elaborated much further before their introduction can be considered. JEL Classification: H20, H25, F23, K34", "title": "" }, { "docid": "e1ada58b1ae0e92f12d4fb049de5a4bb", "text": "We propose a perspective on knowledge compilation which calls for analyzing different compilation approaches according to two key dimensions: the succinctness of the target compilation language, and the class of queries and transformations that the language supports in polytime. We then provide a knowledge compilation map, which analyzes a large number of existing target compilation languages according to their succinctness and their polytime transformations and queries. We argue that such analysis is necessary for placing new compilation approaches within the context of existing ones. 
We also go beyond classical, flat target compilation languages based on CNF and DNF, and consider a richer, nested class based on directed acyclic graphs (such as OBDDs), which we show to include a relatively large number of target compilation languages.", "title": "" }, { "docid": "cc57e023628ec7ca1bfc91c40fc58341", "text": "The design of electromagnetic interference (EMI) input filters, needed for switched power converters to fulfill the regulatory standards, is typically associated with high development effort. This paper presents a guideline for a simplified differential-mode (DM) filter design. First, a procedure to estimate the required filter attenuation based on the total input rms current using only a few equations is given. Second, a volume optimization of the needed DM filter based on the previously calculated filter attenuation and volumetric component parameters is introduced. It is shown that a minimal volume can be found for a certain optimal number of filter stages. The considerations are exemplified for two single-phase power factor correction converters operated in continuous and discontinuous conduction modes, respectively. Finally, EMI measurements done with a 300-W power converter prototype prove the proposed filter design method.", "title": "" }, { "docid": "e9d987351816570b29d0144a6a7bd2ae", "text": "One’s state of mind will influence her perception of the world and people within it. In this paper, we explore attitudes and behaviors toward online social media based on whether one is depressed or not. We conducted semistructured face-to-face interviews with 14 active Twitter users, half of whom were depressed and the other half non-depressed. Our results highlight key differences between the two groups in terms of perception towards online social media and behaviors within such systems. Non-depressed individuals perceived Twitter as an information consuming and sharing tool, while depressed individuals perceived it as a tool for social awareness and emotional interaction. We discuss several design implications for future social networks that could better accommodate users with depression and provide insights towards helping depressed users meet their needs through online social media.", "title": "" }, { "docid": "e177c04d8eb729046d368965dbcedd4c", "text": "This study investigated biased message processing of political satire in The Colbert Report and the influence of political ideology on perceptions of Stephen Colbert. Results indicate that political ideology influences biased processing of ambiguous political messages and source in late-night comedy. Using data from an experiment (N = 332), we found that individual-level political ideology significantly predicted perceptions of Colbert’s political ideology. Additionally, there was no significant difference between the groups in thinking Colbert was funny, but conservatives were more likely to report that Colbert only pretends to be joking and genuinely meant what he said while liberals were more likely to report that Colbert used satire and was not serious when offering political statements. Conservatism also significantly predicted perceptions that Colbert disliked liberalism. 
Finally, a post hoc analysis revealed that perceptions of Colbert’s political opinions fully mediated the relationship between political ideology and individual-level opinion.", "title": "" }, { "docid": "0aff3b047f483216e02644f130fb8151", "text": "Blockchain methods are emerging as practical tools for validation, record-keeping, and access control in addition to their early applications in cryptocurrency. This column explores the options for use of blockchains to enhance security, trust, and compliance in a variety of industry settings and explores the current state of blockchain standards.", "title": "" }, { "docid": "b3c83fc9495387f286ea83d00673b5b3", "text": "A new walk compensation method for a pulsed time-of-flight rangefinder is suggested. The receiver channel operates without gain control using leading edge timing discrimination principle. The generated walk error is compensated for by measuring the pulse length and knowing the relation between the walk error and pulse length. The walk compensation is possible also at the range where the signal is clipped and where the compensation method by amplitude measurement is impossible. Based on the simulations walk error can be compensated within the dynamic range of 1:30 000.", "title": "" }, { "docid": "80a5000d821771be9bfbbf0d22b7fda0", "text": "The diode-grounded scheme for stray-current collection in some systems, such as the Taipei rapid transit systems (TRTS), has been constructed to gather the stray current leaking from the running rails and avoid corrosion damage to the system as well as the surrounding metallic objects. During operation of the TRTS, a high potential between the negative return bus and system earth bus at traction substations, referred to as rail potential, has been observed on the Blue line between BL13 and BL16. Since the Blue and Red-Green lines have their running rails and stray-current collector mats in junction at the G11 station, the TRTS suspects that the impedance bond at G11 is the cause of rail potential rise. This paper presents the results of field tests for studying whether the impedance bond at G11 of the tie line has an impact on rail potential and stray currents in TRTS. The results show the rail potential can be reduced by disconnecting the impedance bond at G11 of the tie line so that the negative return current of the Blue line cannot flow to the rails of the Red-Green Line, and vice-versa. In addition, rail potential and stray currents occurring at a station of the Blue line are numerically simulated by using a distributed two-layer ladder circuit model. The simulation results are compared with the field-test results and they are consistent with each other.", "title": "" }, { "docid": "d775cdc31c84d94d95dc132b88a37fae", "text": "Image guided filtering has been widely used in many image processing applications. However, it is a local filtering method and has limited propagation ability. In this paper, we propose a new image filtering method: nonlocal image guided averaging (NLGA). Derived from a nonlocal linear model, the proposed method can utilize the nonlocal similarity of the guidance image, so that it can propagate nonlocal information reliably. Consequently, NLGA can obtain a sharper filtering results in the edge regions and more smooth results in the smooth regions. 
It shows superiority over image guided filtering in different applications, such as image dehazing, depth map super-resolution and image denoising.", "title": "" }, { "docid": "7d43cf2e0fcc795f6af4bdbcfb56d13e", "text": "Vehicular Ad hoc Networks is a special kind of mobile ad hoc network to provide communication among nearby vehicles and between vehicles and nearby fixed equipments. VANETs are mainly used for improving efficiency and safety of (future) transportation. There are chances of a number of possible attacks in VANET due to open nature of wireless medium. In this paper, we have classified these security attacks and logically organized/represented in a more lucid manner based on the level of effect of a particular security attack on intelligent vehicular traffic. Also, an effective solution is proposed for DOS based attacks which use the redundancy elimination mechanism consists of rate decreasing algorithm and state transition mechanism as its components. This solution basically adds a level of security to its already existing solutions of using various alternative options like channel-switching, frequency-hopping, communication technology switching and multiple-radio transceivers to counter affect the DOS attacks. Proposed scheme enhances the security in VANETs without using any cryptographic scheme.", "title": "" }, { "docid": "0808637a7768609502b63bff5ffda1cb", "text": "Blur is a key determinant in the perception of image quality. Generally, blur causes spread of edges, which leads to shape changes in images. Discrete orthogonal moments have been widely studied as effective shape descriptors. Intuitively, blur can be represented using discrete moments since noticeable blur affects the magnitudes of moments of an image. With this consideration, this paper presents a blind image blur evaluation algorithm based on discrete Tchebichef moments. The gradient of a blurred image is first computed to account for the shape, which is more effective for blur representation. Then the gradient image is divided into equal-size blocks and the Tchebichef moments are calculated to characterize image shape. The energy of a block is computed as the sum of squared non-DC moment values. Finally, the proposed image blur score is defined as the variance-normalized moment energy, which is computed with the guidance of a visual saliency model to adapt to the characteristic of human visual system. The performance of the proposed method is evaluated on four public image quality databases. The experimental results demonstrate that our method can produce blur scores highly consistent with subjective evaluations. It also outperforms the state-of-the-art image blur metrics and several general-purpose no-reference quality metrics.", "title": "" }, { "docid": "c37e48459d24f7802bfb863c731ecff4", "text": "The need to evaluate a function f (A) ∈ C n×n of a matrix A ∈ C n×n arises in a wide and growing number of applications, ranging from the numerical solution of differential equations to measures of the complexity of networks. We give a survey of numerical methods for evaluating matrix functions, along with a brief treatment of the underlying theory and a description of two recent applications. The survey is organized by classes of methods, which are broadly those based on similarity transformations, those employing approximation by polynomial or rational functions, and matrix iterations. 
Computation of the Fréchet derivative, which is important for condition number estimation, is also treated, along with the problem of computing f (A)b without computing f (A). A summary of available software completes the survey.", "title": "" }, { "docid": "844116dc8302aac5076c95ac2218b5bd", "text": "Virtual reality and augmented reality technology has existed in various forms for over two decades. However, high cost proved to be one of the main barriers to its adoption in education, outside of experimental studies. The creation and widespread sale of low-cost virtual reality devices using smart phones has made virtual reality technology available to the common person. This paper reviews how virtual reality and augmented reality has been used in education, discusses the advantages and disadvantages of using these technologies in the classroom, and describes how virtual reality and augmented reality technologies can be used to enhance teaching at the United States Military Academy.", "title": "" }, { "docid": "71296a25cda3991333cd78fba7a85fa7", "text": "In the last few years, researchers in the field of High Dynamic Range (HDR) Imaging have focused on providing tools for expanding Low Dynamic Range (LDR) content for the generation of HDR images due to the growing popularity of HDR in applications, such as photography and rendering via Image-Based Lighting, and the imminent arrival of HDR displays to the consumer market. LDR content expansion is required due to the lack of fast and reliable consumer level HDR capture for still images and videos. Furthermore, LDR content expansion, will allow the re-use of legacy LDR stills, videos and LDR applications created, over the last century and more, to be widely available. The use of certain LDR expansion methods, those that are based on the inversion of tone mapping operators, has made it possible to create novel compression algorithms that tackle the problem of the size of HDR content storage, which remains one of the major obstacles to be overcome for the adoption of HDR. These methods are used in conjunction with traditional LDR compression methods and can evolve accordingly. The goal of this report is to provide a comprehensive overview on HDR Imaging, and an in depth review on these emerging topics.", "title": "" }, { "docid": "e5f0bca200dc4ef5a806feb06b4cf2a4", "text": "Supply chain finance is a new financing model that makes the industry chain as an organic whole chain to develop financing services. Its purpose is to combine with financial institutions, companies and third-party logistics companies to achieve win-win situation. The supply chain is designed to maximize the financial value. The supply chain finance business in our country is still in its early stages. Conducting the research on risk assessment and control of the supply chain finance business has an important significance for the promotion of the development of our country supply chain finance business. The paper investigates the dynamic multiple attribute decision making problems, in which the decision information, provided by decision makers at different periods, is expressed in intuitionistic fuzzy numbers. We first develop one new aggregation operators called dynamic intuitionistic fuzzy Hamacher weighted averaging (DIFHWA) operator. 
Moreover, a procedure based on the DIFHWA and IFHWA operators is developed to solve the dynamic multiple attribute decision making problems where all the decision information about attribute values takes the form of intuitionistic fuzzy numbers collected at different periods. Finally, an illustrative example for risk assessment of supply chain finance is given to verify the developed approach and to demonstrate its practicality and effectiveness.", "title": "" }, { "docid": "c63421313f4ed9c1689da4e937a07962", "text": "The life-long learning architecture attempts to create an adaptive agent through the incorporation of prior knowledge over the lifetime of a learning agent. Our paper focuses on task transfer in reinforcement learning and specifically in Q-learning. There are three main model free methods for performing task transfer in Qlearning: direct transfer, soft transfer and memoryguided exploration. In direct transfer Q-values from a previous task are used to initialize the Q-values of the next task. Soft transfer initializes the Q-values of the new task with a weighted average of the standard initialization value and the Q-values of the previous task. In memory-guided exploration the Q-values of previous tasks are used as a guide in the initial exploration of the agent. The weight that the agent gives to its past experience decreases over time. We explore stability issues related to the off-policy nature of memory-guided exploration and compare memory-guided exploration to soft transfer and direct transfer in three different envi-", "title": "" } ]
scidocsrr
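To make the structure of a row like the one above concrete, the sketch below flattens a single row into (query, passage text, relevance label) tuples, the form commonly fed to a pointwise reranker. The 1/0 labelling of positive versus negative passages is an assumed interpretation of the two lists, not something the dump itself states.

```python
def row_to_pairs(row):
    """Flatten one row into (query, passage_text, label) tuples.

    Assumes positive passages are relevant (label 1) and negative
    passages are non-relevant (label 0); each passage is a dict with
    "docid", "text", and "title" keys, as in the rows shown here.
    """
    pairs = []
    for passage in row.get("positive_passages", []):
        pairs.append((row["query"], passage["text"], 1))
    for passage in row.get("negative_passages", []):
        pairs.append((row["query"], passage["text"], 0))
    return pairs
```

A listwise or pairwise training setup would instead keep the per-query grouping intact rather than flattening it.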
49f4f4166a7904705d511eba48007f9a
The Role of Morphology of the Thumb in Anthropomorphic Grasping: A Review
[ { "docid": "2440e4a18413a6fb0c66327b2e30baca", "text": "In studying grasping and manipulation we find two very different approaches to the subject: knowledge-based approaches based primarily on empirical studies of human grasping and manipulation, and analytical approaches based primarily on physical models of the manipulation process. This chapter begins with a review of studies of human grasping, in particular our development of a grasp taxonomy and an expert system for predicting human grasp choice. These studies show how object geometry and task requirements (as well as hand capabilities and tactile sensing) combine to dictate grasp choice. We then consider analytic models of grasping and manipulation with robotic hands. To keep the mathematics tractable, these models require numerous simplifications which restrict their generality. Despite their differences, the two approaches can be correlated. This provides insight into why people grasp and manipulate objects as they do, and suggests different approaches for robotic grasp and manipulation planning. The results also bear upon such issues such as object representation", "title": "" } ]
[ { "docid": "6315288620132b456feeb78f36362ca7", "text": "Autonomous systems such as unmanned vehicles are beginning to operate within society. All participants in society are required to follow specific regulations and laws. An autonomous system cannot be an exception. Inevitably an autonomous system will find itself in a situation in which it needs to not only choose to obey a rule or not, but also make a complex ethical decision. However, there exists no obvious way to implement the human understanding of ethical behaviour in computers. Even if we enable autonomous systems to distinguish between more and less ethical alternatives, how can we be sure that they would choose right? We consider autonomous systems with a hybrid architecture in which the highest level of reasoning is executed by a rational (BDI) agent. For such a system, formal verification has been used successfully to prove that specific rules of behaviour are observed when making decisions. We propose a theoretical framework for ethical plan selection that can be formally verified. We implement a rational agent that incorporates a given ethical policy in its plan selection and show that we can formally verify that the agent chooses to execute, to the best of its beliefs, the most ethical available plan. © 2015 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).", "title": "" }, { "docid": "db522c506b07614a5fdc5707530107bb", "text": "An energy-efficient Deep Neural Network (DNN) processor is proposed for high-speed Visual Attention (VA) engine in a mobile vision SoC. The proposed embedded DNN realizes VA to rapidly find ROI tiles of potential target objects reducing ~70% of recognition workloads of vision processor. Compared to previous VA, the DNN VA reduces execution time by 90%, which results in 73.4% overall OR time reduction. Highly-parallel 200-way PEs are implemented in the DNN processor with 2D image sliding architecture, and only 3ms of DNN VA latency can be obtained. Also, the dual-mode PE configuration is proposed for both DNN and multi-layer-perceptron (MLP) to share same hardware for high energy efficiency. As a result, the proposed work achieves only 1.9nJ/pixel energy efficiency which is 7.7x smaller than state-of-the-art VA accelerator.", "title": "" }, { "docid": "7fdee49774800eca8256f2b85b8a587b", "text": "Internet of Things (IoT) concept enables the possibility of information discovery about a tagged object or a tagged person by browsing an internet addresses or database entry that corresponds to a particular active RFID with sensing capability. It is a media for information retrieval from physical world to a digital world. With cooperative wireless communication, the wireless node entities can increase their effective quality of service (QoS) via cooperation. In developing countries the death rates due to lack of timely available medical treatments are quite high as compared to other developed countries. The majority of these deaths are preventable through quality care. This paper proposes a cooperative IoT approach for the better health monitoring and control of rural and poor human being's health parameters like blood pressure (BP), hemoglobin (HB), blood sugar, abnormal cellular growth in any part of the body, etc.", "title": "" }, { "docid": "3603e3d676a3ccae0c2ad18dc914b6a1", "text": "In large storage systems, it is crucial to protect data from loss due to failures. 
Erasure codes lay the foundation of this protection, enabling systems to reconstruct lost data when components fail. Erasure codes can however impose significant performance overhead in two core operations: encoding, where coding information is calculated from newly written data, and decoding, where data is reconstructed after failures. This paper focuses on improving the performance of encoding, the more frequent operation. It does so by scheduling the operations of XOR-based erasure codes to optimize their use of cache memory. We call the technique XORscheduling and demonstrate how it applies to a wide variety of existing erasure codes. We conduct a performance evaluation of scheduling these codes on a variety of processors and show that XOR-scheduling significantly improves upon the traditional approach. Hence, we believe that XORscheduling has great potential to have wide impact in practical storage systems.", "title": "" }, { "docid": "7d909fed27732e8334cb4030112df9ab", "text": "Attention mechanism has been used as an ancillary means to help RNN or CNN. However, the Transformer (Vaswani et al., 2017) recently recorded the state-of-theart performance in machine translation with a dramatic reduction in training time by solely using attention. Motivated by the Transformer, Directional Self Attention Network (Shen et al., 2017), a fully attention-based sentence encoder, was proposed. It showed good performance with various data by using forward and backward directional information in a sentence. But in their study, not considered at all was the distance between words, an important feature when learning the local dependency to help understand the context of input text. We propose Distance-based Self-Attention Network, which considers the word distance by using a simple distance mask in order to model the local dependency without losing the ability of modeling global dependency which attention has inherent. Our model shows good performance with NLI data, and it records the new state-of-the-art result with SNLI data. Additionally, we show that our model has a strength in long sentences or documents.", "title": "" }, { "docid": "b58a04bbb5d69e6d2e48392d389383a7", "text": "Automatic generation of natural language from images has attracted extensive attention. In this paper, we take one step further to investigate generation of poetic language (with multiple lines) to an image for automatic poetry creation. This task involves multiple challenges, including discovering poetic clues from the image (e.g., hope from green), and generating poems to satisfy both relevance to the image and poeticness in language level. To solve the above challenges, we formulate the task of poem generation into two correlated sub-tasks by multi-adversarial training via policy gradient, through which the cross-modal relevance and poetic language style can be ensured. To extract poetic clues from images, we propose to learn a deep coupled visual-poetic embedding, in which the poetic representation from objects, sentiments \\footnoteWe consider both adjectives and verbs that can express emotions and feelings as sentiment words in this research. and scenes in an image can be jointly learned. Two discriminative networks are further introduced to guide the poem generation, including a multi-modal discriminator and a poem-style discriminator. 
To facilitate the research, we have released two poem datasets by human annotators with two distinct properties: 1) the first human annotated image-to-poem pair dataset (with $8,292$ pairs in total), and 2) to-date the largest public English poem corpus dataset (with $92,265$ different poems in total). Extensive experiments are conducted with 8K images, among which 1.5K image are randomly picked for evaluation. Both objective and subjective evaluations show the superior performances against the state-of-the-art methods for poem generation from images. Turing test carried out with over $500$ human subjects, among which 30 evaluators are poetry experts, demonstrates the effectiveness of our approach.", "title": "" }, { "docid": "91c3734125249659df4098ba02f2d5e5", "text": "Good performance and efficiency, in terms of high quality of service and resource utilization for example, are important goals in a cloud environment. Through extensive measurements of an n-tier application benchmark (RUBBoS), we show that overall system performance is surprisingly sensitive to appropriate allocation of soft resources (e.g., server thread pool size). Inappropriate soft resource allocation can quickly degrade overall application performance significantly. Concretely, both under-allocation and over-allocation of thread pool can lead to bottlenecks in other resources because of non-trivial dependencies. We have observed some non-obvious phenomena due to these correlated bottlenecks. For instance, the number of threads in the Apache web server can limit the total useful throughput, causing the CPU utilization of the C-JDBC clustering middleware to decrease as the workload increases. We provide a practical iterative solution approach to this challenge through an algorithmic combination of operational queuing laws and measurement data. Our results show that soft resource allocation plays a central role in the performance scalability of complex systems such as n-tier applications in cloud environments.", "title": "" }, { "docid": "093465aba11b82b768e4213b23c5911b", "text": "This paper describes the generation of large deformation diffeomorphisms phi:Omega=[0,1]3<-->Omega for landmark matching generated as solutions to the transport equation dphi(x,t)/dt=nu(phi(x,t),t),epsilon[0,1] and phi(x,0)=x, with the image map defined as phi(.,1) and therefore controlled via the velocity field nu(.,t),epsilon[0,1]. Imagery are assumed characterized via sets of landmarks {xn, yn, n=1, 2, ..., N}. The optimal diffeomorphic match is constructed to minimize a running smoothness cost parallelLnu parallel2 associated with a linear differential operator L on the velocity field generating the diffeomorphism while simultaneously minimizing the matching end point condition of the landmarks. Both inexact and exact landmark matching is studied here. Given noisy landmarks xn matched to yn measured with error covariances Sigman, then the matching problem is solved generating the optimal diffeomorphism phi;(x,1)=integral0(1)nu(phi(x,t),t)dt+x where nu(.)=argmin(nu.)integral1(0) integralOmega parallelLnu(x,t) parallel2dxdt +Sigman=1N[yn-phi(xn,1)] TSigman(-1)[yn-phi(xn,1)]. Conditions for the existence of solutions in the space of diffeomorphisms are established, with a gradient algorithm provided for generating the optimal flow solving the minimum problem. 
Results on matching two-dimensional (2-D) and three-dimensional (3-D) imagery are presented in the macaque monkey.", "title": "" }, { "docid": "25d14017403c96eceeafcbda1cbdfd2c", "text": "We introduce a neural network model that marries together ideas from two prominent strands of research on domain adaptation through representation learning: structural correspondence learning (SCL, (Blitzer et al., 2006)) and autoencoder neural networks (NNs). Our model is a three-layer NN that learns to encode the non-pivot features of an input example into a lowdimensional representation, so that the existence of pivot features (features that are prominent in both domains and convey useful information for the NLP task) in the example can be decoded from that representation. The low-dimensional representation is then employed in a learning algorithm for the task. Moreover, we show how to inject pre-trained word embeddings into our model in order to improve generalization across examples with similar pivot features. We experiment with the task of cross-domain sentiment classification on 16 domain pairs and show substantial improvements over strong baselines.1", "title": "" }, { "docid": "21f45ec969ba3852d731a2e2119fc86e", "text": "When a large number of people with heterogeneous knowledge and skills run a project together, it is important to use a sensible engineering process. This especially holds for a project building an intelligent autonomously driving car to participate in the 2007 DARPA Urban Challenge. In this article, we present essential elements of a software and systems engineering process for the development of artificial intelligence capable of driving autonomously in complex urban situations. The process includes agile concepts, like test first approach, continuous integration of every software module and a reliable release and configuration management assisted by software tools in integrated development environments. However, the most important ingredients for an efficient and stringent development are the ability to efficiently test the behavior of the developed system in a flexible and modular simulator for urban situations.", "title": "" }, { "docid": "81928a29f210e68815022fcb634c414d", "text": "Reactions to stress vary between individuals, and physiological and behavioral responses tend to be associated in distinct suites of correlated traits, often termed stress-coping styles. In mammals, individuals exhibiting divergent stress-coping styles also appear to exhibit intrinsic differences in cognitive processing. A connection between physiology, behavior, and cognition was also recently demonstrated in strains of rainbow trout (Oncorhynchus mykiss) selected for consistently high or low cortisol responses to stress. The low-responsive (LR) strain display longer retention of a conditioned response, and tend to show proactive behaviors such as enhanced aggression, social dominance, and rapid resumption of feed intake after stress. Differences in brain monoamine neurochemistry have also been reported in these lines. In comparative studies, experiments with the lizard Anolis carolinensis reveal connections between monoaminergic activity in limbic structures, proactive behavior in novel environments, and the establishment of social status via agonistic behavior. 
Together these observations suggest that within-species diversity of physiological, behavioral and cognitive correlates of stress responsiveness is maintained by natural selection throughout the vertebrate sub-phylum.", "title": "" }, { "docid": "14dbf1851016161633e847e55e93cad3", "text": "Direct drive permanent magnet generators(PMGs) are increasingly capturing the global wind market in large onshore and offshore applications. The aim of this paper is to provide a quick overview of permanent magnet generator design and related control issues for large wind turbines. Generator systems commonly used in wind turbines, the permanent magnet generator types, and control methods are reviewed in the paper. The current commercial PMG wind turbine on market is surveyed. The design of a 5 MW axial flux permanent magnet (AFPM) generator for large wind turbines is discussed and presented in detail.", "title": "" }, { "docid": "a1d1d61c61d1941329cdbc38639bd487", "text": "America’s critical infrastructure is becoming “smarter” and increasingly dependent on highly specialized computers called industrial control systems (ICS). Networked ICS components now called the industrial Internet of Things (IIoT) are at the heart of the “smart city”, controlling critical infrastructure, such as CCTV security networks, electric grids, water networks, and transportation systems. Without the continuous, reliable functioning of these assets, economic and social disruption will ensue. Unfortunately, IIoT are hackable and difficult to secure from cyberattacks. This leaves our future smart cities in a state of perpetual uncertainty and the risk that the stability of our lives will be upended. The Local government has largely been absent from conversations about cybersecurity of critical infrastructure, despite its importance. One reason for this is public administrators do not have a good way of knowing which assets and which components of those assets are at the greatest risk. This is further complicated by the highly technical nature of the tools and techniques required to assess these risks. Using artificial intelligence planning techniques, an automated tool can be developed to evaluate the cyber risks to critical infrastructure. It can be used to automatically identify the adversarial strategies (attack trees) that can compromise these systems. This tool can enable both security novices and specialists to identify attack pathways. We propose and provide an example of an automated attack generation method that can produce detailed, scalable, and consistent attack trees–the first step in securing critical infrastructure from cyberattack.", "title": "" }, { "docid": "4ed74450320dfef4156013292c1d2cbb", "text": "This paper describes the decisions by which teh Association for Computing Machinery integrated good features from the Los Alamos e-print (physics) archive and from Cornell University's Networked Computer Science Technical Reference Library to form their own open, permanent, online “computing research repository” (CoRR). Submitted papers are not refereed and anyone can browse and extract CoRR material for free, so Corr's eventual success could revolutionize computer science publishing. 
But several serious challenges remain: some journals forbid online preprints, teh CoRR user interface is cumbersome, submissions are only self-indexed, (no professional library staff manages teh archive) and long-term funding is uncertain.", "title": "" }, { "docid": "24e3d8c81dfdc05d20114f1acf6feb81", "text": "The development of machinery health monitoring technologies has taken center stage within the DoD community in recent years. Existing health monitoring systems, such as the Integrated Condition Assessment System (ICAS) for NAVSEA, enable the diagnosis of mission critical problems using fault detection and diagnostic technologies. These technologies, however, have not specifically focused on the automated prediction of future condition (prognostics) of a machine based on the current diagnostic state of the machinery and its available operating and failure history data. Current efforts are focused on developing a generic architecture for the development of prognostic systems that will enable “plug and play” capabilities within existing systems. The designs utilize Open System Architecture (OSA) guidelines, such as OSA-CBM (Condition Based Maintenance), to provide these capabilities and enhance reusability of the software modules. One such implementation, which determines the optimal water wash interval to mitigate gas turbine compressor performance degradation due to salt deposit ingestion, is the focus of this paper. The module utilizes advanced probabilistic modeling and analysis technologies to forecast the future performance characteristics of the compressor and yield the optimal Time To Wash (TTW) from a cost/benefit standpoint. This paper describes the developed approach and architecture for developing prognostics using the gas turbine module.", "title": "" }, { "docid": "1f8783ae21826b9281d4652ad1d29c15", "text": "Short posts on micro-blogs are characterized by high ambiguity and non-standard language. We focus on detecting life events from such micro-blogs, a type of event which have not been paid much attention so far. We discuss the corpus we assembled and our experiments. Simpler models based on unigrams perform better than models that include history, number of retweets and semantic roles.", "title": "" }, { "docid": "080a14f6eb96b04c11c0cb65897dadd2", "text": "Enterococcus faecalis is a microorganism commonly detected in asymptomatic, persistent endodontic infections. Its prevalence in such infections ranges from 24% to 77%. This finding can be explained by various survival and virulence factors possessed by E. faecalis, including its ability to compete with other microorganisms, invade dentinal tubules, and resist nutritional deprivation. Use of good aseptic technique, increased apical preparation sizes, and inclusion of 2% chlorhexidine in combination with sodium hypochlorite are currently the most effective methods to combat E. faecalis within the root canal systems of teeth. In the changing face of dental care, continued research on E. faecalis and its elimination from the dental apparatus may well define the future of the endodontic specialty.", "title": "" }, { "docid": "d8802a7fcdbd306bd474f3144bc688a4", "text": "Shape from defocus (SFD) is one of the most popular techniques in monocular 3D vision. While most SFD approaches require two or more images of the same scene captured at a fixed view point, this paper presents an efficient approach to estimate absolute depth from a single defocused image. 
Instead of directly measuring defocus level of each pixel, we propose to design a sequence of aperture-shape filters to segment a defocused image by defocus level. A boundary-weighted belief propagation algorithm is employed to obtain a smooth depth map. We also give an estimation of depth error. Extensive experiments show that our approach outperforms the state-of-the-art single-image SFD approaches both in precision of the estimated absolute depth and running time.", "title": "" }, { "docid": "d2b64916f8dc49bdf9c2e4379f5851f9", "text": "The lower mobility parallel robots have been applied in lots of industrial fields, and the Jacobian plays an important role in the robotic performance analysis such as velocity analysis, precision analysis, and stiffness analysis. As for lower mobility parallel robots with the special limb structures, a novel approach based on actuating wrenches is proposed for the Jacobian analysis, which is much concise compared with the existing methods and significant for the proposed physical concept of the actuating wrench. The characteristics of the special limb structures are explored, and these structures are frequently adopted to constitute the lower mobility parallel robots that are extensively utilized in industry. The Jacobian matrix is derived just through the actuating wrenches. Not only the velocity Jacobian but also the force Jacobian is analyzed, and some incorrect deduction between the two Jacobian matrices appeared in the published works is clarified. Examples show that if the origin of the global coordinate frame is set at the moving platform, the Jacobian analysis can be much simplified.", "title": "" }, { "docid": "f5ba54c76166eed39da96f86a8bbd2a1", "text": "The digital divide refers to the separation between those who have access to digital information and communications technology (ICT) and those who do not. Many believe that universal access to ICT would bring about a global community of interaction, commerce, and learning resulting in higher standards of living and improved social welfare. However, the digital divide threatens this outcome, leading many public policy makers to debate the best way to bridge the divide. Much of the research on the digital divide focuses on first order effects regarding who has access to the technology, but some work addresses the second order effects of inequality in the ability to use the technology among those who do have access. In this paper, we examine both first and second order effects of the digital divide at three levels of analysis  the individual level, the organizational level, and the global level. At each level, we survey the existing research noting the theoretical perspective taken in the work, the research methodology employed, and the key results that were obtained. We then suggest a series of research questions at each level of analysis to guide researchers seeking to further examine the digital divide and how it impacts citizens, managers, and economies.", "title": "" } ]
scidocsrr
f4a4ad1971056cb6f51932600c1196d1
AprilTag 2: Efficient and robust fiducial detection
[ { "docid": "3acb0ab9f20e1efece96a2414a9c9c8c", "text": "Artificial markers are successfully adopted to solve several vision tasks, ranging from tracking to calibration. While most designs share the same working principles, many specialized approaches exist to address specific application domains. Some are specially crafted to boost pose recovery accuracy. Others are made robust to occlusion or easy to detect with minimal computational resources. The sheer amount of approaches available in recent literature is indeed a statement to the fact that no silver bullet exists. Furthermore, this is also a hint to the level of scholarly interest that still characterizes this research topic. With this paper we try to add a novel option to the offer, by introducing a general purpose fiducial marker which exhibits many useful properties while being easy to implement and fast to detect. The key ideas underlying our approach are three. The first one is to exploit the projective invariance of conics to jointly find the marker and set a reading frame for it. Moreover, the tag identity is assessed by a redundant cyclic coded sequence implemented using the same circular features used for detection. Finally, the specific design and feature organization of the marker are well suited for several practical tasks, ranging from camera calibration to information payload delivery.", "title": "" } ]
[ { "docid": "0943628b72cff16fd50affa40e98d360", "text": "The aim of image captioning is to generate captions by machine to describe image contents. Despite many efforts, generating discriminative captions for images remains non-trivial. Most traditional approaches imitate the language structure patterns, thus tend to fall into a stereotype of replicating frequent phrases or sentences and neglect unique aspects of each image. In this work, we propose an image captioning framework with a self-retrieval module as training guidance, which encourages generating discriminative captions. It brings unique advantages: (1) the self-retrieval guidance can act as a metric and an evaluator of caption discriminativeness to assure the quality of generated captions. (2) The correspondence between generated captions and images are naturally incorporated in the generation process without human annotations, and hence our approach could utilize a large amount of unlabeled images to boost captioning performance with no additional annotations. We demonstrate the effectiveness of the proposed retrievalguided method on COCO and Flickr30k captioning datasets, and show its superior captioning performance with more discriminative captions.", "title": "" }, { "docid": "4d56abf003caaa11e5bef74a14bd44e0", "text": "The increasing importance of search engines to commercial web sites has given rise to a phenomenon we call \"web spam\", that is, web pages that exist only to mislead search engines into (mis)leading users to certain web sites. Web spam is a nuisance to users as well as search engines: users have a harder time finding the information they need, and search engines have to cope with an inflated corpus, which in turn causes their cost per query to increase. Therefore, search engines have a strong incentive to weed out spam web pages from their index.We propose that some spam web pages can be identified through statistical analysis: Certain classes of spam pages, in particular those that are machine-generated, diverge in some of their properties from the properties of web pages at large. We have examined a variety of such properties, including linkage structure, page content, and page evolution, and have found that outliers in the statistical distribution of these properties are highly likely to be caused by web spam.This paper describes the properties we have examined, gives the statistical distributions we have observed, and shows which kinds of outliers are highly correlated with web spam.", "title": "" }, { "docid": "9d33565dbd5148730094a165bb2e968f", "text": "The demand for greater battery life in low-power consumer electronics and implantable medical devices presents a need for improved energy efficiency in the management of small rechargeable cells. This paper describes an ultra-compact analog lithium-ion (Li-ion) battery charger with high energy efficiency. The charger presented here utilizes the tanh basis function of a subthreshold operational transconductance amplifier to smoothly transition between constant-current and constant-voltage charging regimes without the need for additional area- and power-consuming control circuitry. Current-domain circuitry for end-of-charge detection negates the need for precision-sense resistors in either the charging path or control loop. We show theoretically and experimentally that the low-frequency pole-zero nature of most battery impedances leads to inherent stability of the analog control loop. 
The circuit was fabricated in an AMI 0.5-μm complementary metal-oxide semiconductor process, and achieves 89.7% average power efficiency and an end voltage accuracy of 99.9% relative to the desired target 4.2 V, while consuming 0.16 mm2 of chip area. To date and to the best of our knowledge, this design represents the most area-efficient and most energy-efficient battery charger circuit reported in the literature.", "title": "" }, { "docid": "798cd7ebdd234cb62b32d963fdb51af0", "text": "The use of frontal sinus radiographs in positive identification has become an increasingly applied and accepted technique among forensic anthropologists, radiologists, and pathologists. From an evidentiary standpoint, however, it is important to know whether frontal sinus radiographs are a reliable method for confirming or rejecting an identification, and standardized methods should be applied when making comparisons. The purpose of the following study is to develop an objective, standardized comparison method, and investigate the reliability of that method. Elliptic Fourier analysis (EFA) was used to assess the variation in 808 outlines of frontal sinuses by calculating likelihood ratios and posterior probabilities from EFA coefficients. Results show that using EFA coefficient comparison to estimate the probability of a correct identification is a reliable technique, and EFA comparison of frontal sinus outlines is recommended when it may be necessary to provide quantitative substantiation for a forensic identification based on these structures.", "title": "" }, { "docid": "0618529a20e00174369a05077294de5b", "text": "In this paper we present a case study of the steps leading up to the extraction of the spam bot payload found within a backdoor rootkit known as Backdoor.Rustock.B or Spam-Mailbot.c. Following the extraction of the spam module we focus our analysis on the steps necessary to decrypt the communications between the command and control server and infected hosts. Part of the discussion involves a method to extract the encryption key from within the malware binary and use that to decrypt the communications. The result is a better understanding of an advanced botnet communications scheme.", "title": "" }, { "docid": "1a2f2e75691e538c867b6ce58591a6a5", "text": "Despite the profusion of NIALM researches and products using complex algorithms, addressing the market for low cost, compact, real-time and effective NIALM smart meters is still a challenge. This paper talks about the design of a NIALM smart meter for home appliances, with the ability to self-detect and disaggregate most home appliances. In order to satisfy the compact, real-time, low price requirements and to solve the challenge in slow transient and multi-state appliances, two algorithms are used: the CUSUM to improve the event detection and the Genetic Algorithm (GA) for appliance disaggregation. Evaluation of these algorithms has been done according to public NIALM REDD data set [6]. They are now in first stage of architecture design using Labview FPGA methodology. KeywordsNIALM, CUSUM, Genetic Algorithm, K-mean, classification, smart meter, FPGA.", "title": "" }, { "docid": "cdac5244050d0127273b8a845129257a", "text": "Existing sentence regression methods for extractive summarization usually model sentence importance and redundancy in two separate processes. They first evaluate the importance f(s) of each sentence s and then select sentences to generate a summary based on both the importance scores and redundancy among sentences. 
In this paper, we propose to model importance and redundancy simultaneously by directly evaluating the relative importance f(s|S) of a sentence s given a set of selected sentences S. Specifically, we present a new framework to conduct regression with respect to the relative gain of s given S calculated by the ROUGE metric. Besides the single sentence features, additional features derived from the sentence relations are incorporated. Experiments on the DUC 2001, 2002 and 2004 multi-document summarization datasets show that the proposed method outperforms state-of-the-art extractive summarization approaches.", "title": "" }, { "docid": "8b6116105914e3d912d4594b875e443b", "text": "Patients with neuropathic pain (NP) are challenging to manage and evidence-based clinical recommendations for pharmacologic management are needed. Systematic literature reviews, randomized clinical trials, and existing guidelines were evaluated at a consensus meeting. Medications were considered for recommendation if their efficacy was supported by at least one methodologically-sound, randomized clinical trial (RCT) demonstrating superiority to placebo or a relevant comparison treatment. Recommendations were based on the amount and consistency of evidence, degree of efficacy, safety, and clinical experience of the authors. Available RCTs typically evaluated chronic NP of moderate to severe intensity. Recommended first-line treatments include certain antidepressants (i.e., tricyclic antidepressants and dual reuptake inhibitors of both serotonin and norepinephrine), calcium channel alpha2-delta ligands (i.e., gabapentin and pregabalin), and topical lidocaine. Opioid analgesics and tramadol are recommended as generally second-line treatments that can be considered for first-line use in select clinical circumstances. Other medications that would generally be used as third-line treatments but that could also be used as second-line treatments in some circumstances include certain antiepileptic and antidepressant medications, mexiletine, N-methyl-D-aspartate receptor antagonists, and topical capsaicin. Medication selection should be individualized, considering side effects, potential beneficial or deleterious effects on comorbidities, and whether prompt onset of pain relief is necessary. To date, no medications have demonstrated efficacy in lumbosacral radiculopathy, which is probably the most common type of NP. Long-term studies, head-to-head comparisons between medications, studies involving combinations of medications, and RCTs examining treatment of central NP are lacking and should be a priority for future research.", "title": "" }, { "docid": "be852bd342e8051c01fdac3f9de9dbd3", "text": "Dimensional sentiment analysis aims to recognize continuous numerical values in multiple dimensions such as the valencearousal (VA) space. Compared to the categorical approach that focuses on sentiment classification such as binary classification (i.e., positive and negative), the dimensional approach can provide more fine-grained sentiment analysis. This study proposes a regional CNN-LSTM model consisting of two parts: regional CNN and LSTM to predict the VA ratings of texts. Unlike a conventional CNN which considers a whole text as input, the proposed regional CNN uses an individual sentence as a region, dividing an input text into several regions such that the useful affective information in each region can be extracted and weighted according to their contribution to the VA prediction. 
Such regional information is sequentially integrated across regions using LSTM for VA prediction. By combining the regional CNN and LSTM, both local (regional) information within sentences and long-distance dependency across sentences can be considered in the prediction process. Experimental results show that the proposed method outperforms lexicon-based, regression-based, and NN-based methods proposed in previous studies.", "title": "" }, { "docid": "eee48e3e78f630a78c3b7e666503d849", "text": "Few psychological concepts evoke simultaneously as much fascination and misunderstanding as psychopathic personality , or psychopathy. Typically, individuals with psychopathy are misconceived as fundamentally different from the rest of humanity and as inalterably dangerous. Popular portrayals of \" psychopaths \" are diverse and conflicting, ranging from uncommonly impulsive and violent criminal offenders to corporate figures who callously and skillfully manuever their way to the highest rungs of the social ladder. Despite this diversity of perspectives, a single well-validated measure of psychopathy, the Psychopathy Checklist-Revised (PCL-R; Hare, 1991; 2003), has come to dominate clinical and legal practice over recent years. The items of the PCL-R cover two basic content domains—an interpersonal-affective domain that encompasses core traits such as callousness and manipulativeness and an antisocial domain that entails disinhibition and chronic antisocial behavior. In most Western countries, the PCL-R and its derivatives are routinely applied to inform legal decisions about criminal offenders that hinge upon issues of dangerousness and treatability. In fact, clinicians in many cases choose the PCL-R over other, purpose-built risk-assessment tools to inform their opinions about what sentence offenders should receive, whether they should be indefinitely incarcerated as a \" dangerous offender \" or \" sexually violent predator, \" or whether they should be transferred from juvenile to adult court. The PCL-R has played an extraordinarily generative role in research and practice over the past three decades—so much so, that concerns have been raised that the measure has become equated in many minds with the psychopathy construct itself (Skeem & Cooke 2010a). Equating a measure with a construct may impede scientific progress because it disregards the basic principle that measures always imperfectly operationalize constructs and that our understanding of a construct is ever-evolving (Cronbach & Meehl, 1955). In virtually any domain, the construct-validation process is an incremental one that entails shifts in conceptualization and measurement at successive points in the process of clarifying the nature and boundaries of a hypothetical entity. Despite the predominance of the PCL-R measurement model in recent years, vigorous scientific debates have continued regarding what psychopathy is and what it is not. Should adaptive, positive-adjustment features (on one hand) and criminal and antisocial behaviors (on the other) be considered essential features of the construct? Are anxious and emotionally reactive people that are identified as psychopaths by the PCL-R and other measures truly psychopathic? More fundamentally , is psychopathy a unitary entity (i.e., a global syndrome …", "title": "" }, { "docid": "19100853a7f0f4d519e0a5513a83aa08", "text": "The authors explain how to perform software inspections to locate defects. They present metrics for inspection and examples of its effectiveness. 
The authors contend, on the basis of their experiences and those reported in the literature, that inspections can detect and eliminate faults more cheaply than testing.", "title": "" }, { "docid": "e35f6f4e7b6589e992ceeccb4d25c9f1", "text": "One of the key success factors of lending organizations in general and banks in particular is the assessment of borrower credit worthiness in advance during the credit evaluation process. Credit scoring models have been applied by many researchers to improve the process of assessing credit worthiness by differentiating between prospective loans on the basis of the likelihood of repayment. Thus, credit scoring is a very typical Data Mining (DM) classification problem. Many traditional statistical and modern computational intelligence techniques have been presented in the literature to tackle this problem. The main objective of this paper is to describe an experiment of building suitable Credit Scoring Models (CSMs) for the Sudanese banks. Two commonly discussed data mining classification techniques are chosen in this paper, namely Decision Tree (DT) and Artificial Neural Networks (ANN). In addition, Genetic Algorithms (GA) and Principal Component Analysis (PCA) are also applied as feature selection techniques. In addition to a Sudanese credit dataset, the German credit dataset is also used to evaluate these techniques. The results reveal that ANN models outperform DT models in most cases. Using GA for feature selection is more effective than the PCA technique. The highest accuracies for the German data set (80.67%) and the Sudanese credit scoring models (69.74%) are achieved by a hybrid GA-ANN model. Although DT and its hybrid models (PCA-DT, GA-DT) are outperformed by ANN and its hybrid models (PCA-ANN, GA-ANN) in most cases, they produced interpretable loan granting decisions.", "title": "" }, { "docid": "3f657657a24c03038bd402498b7abddd", "text": "We propose a system for real-time animation of eyes that can be interactively controlled in a WebGL-enabled device using a small number of animation parameters, including gaze. These animation parameters can be obtained using traditional keyframed animation curves, measured from an actor's performance using off-the-shelf eye tracking methods, or estimated from the scene observed by the character, using behavioral models of human vision. We present a model of eye movement that includes not only movement of the globes but also of the eyelids and other soft tissues in the eye region. The model includes formation of expression wrinkles in soft tissues. To our knowledge this is the first system for real-time animation of soft tissue movement around the eyes based on gaze input.", "title": "" }, { "docid": "96d2e884c65205ef458214594f8b64f5", "text": "The weak methods occur pervasively in AI systems and may form the basic methods for all intelligent systems. The purpose of this paper is to characterize the weak methods and to explain how and why they arise in intelligent systems. We propose an organization, called a universal weak method, that provides the functionality of all the weak methods.* A universal weak method is an organizational scheme for knowledge that produces the appropriate search behavior given the available task-domain knowledge. We present a problem-solving architecture, called SOAR, in which we realize a universal weak method. We then demonstrate the universal weak method with a variety of weak methods on a set of tasks.
This research was sponsored by the Defense Advanced Research Projects Agency (DOD), ARPA Order No: 3597, monitored by the Air Force Avionics Laboratory under Contract F33515-78-C-155L. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the US Government.", "title": "" }, { "docid": "cfb06477edaa39f53b1b892cdfc1621a", "text": "This paper presents ray casting as the methodological basis for a CAD/CAM solid modeling system. Solid objects are modeled by combining primitive solids, such as blocks and cylinders, using the set operators union, intersection, and difference. To visualize and analyze the composite solids modeled, virtual light rays are cast as probes. By virtue of its simplicity, ray casting is reliable and extensible. The most difficult mathematical problem is finding line-surface intersection points. So surfaces such as planes, quadrics, tori, and probably even parametric surface patches may bound the primitive solids. The adequacy and efficiency of ray casting are issues addressed here. A fast picture generation capability for interactive modeling is the biggest challenge. New methods are presented, accompanied by sample pictures and CPU times, to meet the challenge.", "title": "" }, { "docid": "a583b48a8eb40a9e88a5137211f15bce", "text": "The deterioration of cancellous bone structure due to aging and disease is characterized by a conversion from plate elements to rod elements. Consequently, the terms "rod-like" and "plate-like" are frequently used for a subjective classification of cancellous bone. In this work a new morphometric parameter called Structure Model Index (SMI) is introduced, which makes it possible to quantify the characteristic form of a three-dimensionally described structure in terms of the amount of plates and rods composing the structure. The SMI is calculated by means of three-dimensional image analysis based on a differential analysis of the triangulated bone surface. For an ideal plate and rod structure the SMI value is 0 and 3, respectively, independent of the physical dimensions. For a structure with both plates and rods of equal thickness the value lies between 0 and 3, depending on the volume ratio of rods and plates. The SMI parameter is evaluated by examining bone biopsies from different skeletal sites. The bone samples were measured three-dimensionally with a micro-CT system. Samples with the same volume density but varying trabecular architecture can uniquely be characterized with the SMI. Furthermore, the SMI values were found to correspond well with the perceived structure type.", "title": "" }, { "docid": "c66d556686c60af51f007ec36c29bd38", "text": "The main question we try to answer in this work is whether it is feasible to employ super-resolution (SR) algorithms to increase the spatial resolution of endoscopic high-definition (HD) images in order to reveal new details which may have been lost due to the limited magnification of the HD endoscope used (e.g., mucosal structures). For this purpose we compare the quality achieved by different SR methods. This is done on standard test images as well as on images obtained from endoscopic video frames. We also investigate whether compression artifacts have a noticeable effect on the SR results.
We show that, due to several limitations in the case of endoscopic videos, we are not consistently able to achieve a higher visual quality when using SR algorithms instead of bicubic interpolation.", "title": "" }, { "docid": "28c1416fd464af8543e6486339e1a483", "text": "In today's global competitive marketplace, there is intense pressure for manufacturing industries to continuously reduce and eliminate costly, unscheduled downtime and unexpected breakdowns. With the advent of the Internet and tether-free technologies, companies require dramatic changes to transform traditional "fail and fix (FAF)" maintenance practices into a "predict and prevent (PAP)" e-maintenance methodology. E-maintenance addresses the fundamental need for predictive intelligence tools to monitor degradation rather than detect faults in a networked environment and, ultimately, to optimize asset utilization in the facility. This paper introduces the emerging field of e-maintenance and its critical elements. Furthermore, performance assessment and prediction tools are introduced for continuous assessment and prediction of a particular product's performance, ultimately enabling proactive maintenance to prevent machines from breaking down. Recent advances in intelligent prognostic technologies and tools are discussed. Several case studies are introduced to validate these developed technologies and tools. © 2006 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "f398eee40f39acd2c2955287ccbb4924", "text": "One of the ultimate goals of natural language processing (NLP) systems is understanding the meaning of what is being transmitted, irrespective of the medium (e.g., written versus spoken) or the form (e.g., static documents versus dynamic dialogues). Although much work has been done in traditional language domains such as speech and static written text, little has yet been done in the newer communication domains enabled by the Internet, e.g., online chat and instant messaging. This is in part due to the fact that there are no annotated chat corpora available to the broader research community. The purpose of this research is to build a chat corpus, tagged with lexical (token part-of-speech labels), syntactic (post parse tree), and discourse (post classification) information. Such a corpus can then be used to develop more complex, statistical-based NLP applications that perform tasks such as author profiling, entity identification, and social network analysis.", "title": "" }, { "docid": "3e6e1125fcdd9757206d0d3f25039e0d", "text": "Native Language Identification, or NLI, is the task of automatically classifying the L1 of a writer based solely on his or her essay written in another language. This problem area has seen a spike in interest in recent years as it can have an impact on educational applications tailored towards non-native speakers of a language, as well as authorship profiling. While there has been a growing body of work in NLI, it has been difficult to compare methodologies because of the different approaches to pre-processing the data, different sets of languages identified, and different splits of the data used. In this shared task, the first ever for Native Language Identification, we sought to address the above issues by providing a large corpus designed specifically for NLI, in addition to providing an environment for systems to be directly compared. In this paper, we report the results of the shared task.
A total of 29 teams from around the world competed across three different sub-tasks.", "title": "" } ]
scidocsrr
898bb628f7b662de278a4f9c8ab6eea3
MuseumFinland - Finnish museums on the semantic web
[ { "docid": "ee95ad7e7243607b56e92b6cb4228288", "text": "We have developed an innovative search interface that allows non-expert users to move through large information spaces in a flexible manner without feeling lost. The design goal was to offer users a “browsing the shelves” experience seamlessly integrated with focused search. Key to achieving our goal is the explicit exposure of hierarchical faceted metadata in a manner that is intuitive and inviting to users. After several iterations of design and testing, the usability results are strikingly positive. We believe our approach marks a major step forward in search user interfaces and can serve as a model for web-based collections of up to 100,000 items. Topics: Search User Interfaces, Faceted Metadata INTRODUCTION Although general Web search is steadily improving [30], studies show that search is still the primary usability problem in web site design. A recent report by Vividence Research analyzing 69 web sites found that the most common usability problem was poorly organized search results, affecting 53% of sites studied. The second most common problem was poor information architecture, affecting 32% of sites [27]. Studies of search behavior reveal that good search involves both broadening and narrowing of the query, appropriate selection of terminology, and the ability to modify the query [31]. Still others show that users often express a concern about online search systems since they do not allow a “browsing the shelves” experience afforded by physical libraries [6] and that users like wellstructured hyperlinks but often feel lost when navigating through complex sites [23]. Our goals are to support search usability guidelines [28], while avoiding negative consequences like empty result sets or feelings of being lost. We are especially interested in large collections of similar-style items (such as product catalog sites, sites consisting of collections of images, or text documents on a topic such as medicine or law). Our approach is to follow iterative design practices from the field of human-computer interaction [29], meaning that we first assess the behavior of the target users, then prototype a system, then assess that system with target users, learn from and adjust to the problems found, and repeat until a successful interface is produced. We have applied this method to the problem of creating an information architecture that seamlessly integrates navigation and free-text search into one interface. This system builds on earlier work that shows the importance of query previews [25] for indicating next choices (thus allowing the user to use recognition over recall) and avoiding empty result sets. The approach makes use of faceted hierarchical metadata (described below) as the basis for a navigation structure showing next choices, providing alternative views, and permitting refinement and expansion in new directions, while at the same time maintaining a consistent representation of the collection’s structure [14]. This use of metadata is integrated with free-text search, allowing the user to follow links, then add search terms, then follow more links, without interrupting the interaction flow. Our most recent usability studies show strong, positive results along most measured variables. An added advantage of this framework is that it can be built using off-the-shelf database technology, and it allows the contents of the collection to be changed without requiring the web site maintainer to change the system or the interface. 
For these reasons, we believe these results should influence the design of information architecture of information-centric web sites. In the following sections we define the metadata-based terminology, describe the interface framework as applied to a collection of architectural images, report the results of usability studies, discuss related work, and discuss the implications of these results. METADATA Content-oriented category metadata has become more prevalent in the last few years, and many people are interested in standards for describing content in various fields (e.g., Dublin Core and the Semantic Web). Web directories such as Yahoo and the Open Directory Project are familiar examples of the use of metadata for navigation structures. Web search engines have begun to interleave search hits on category labels with other search results. Many individual collections already have rich metadata assigned to their contents; for example, biomedical journal articles have on average a dozen or more content attributes attached to them. Metadata for organizing content collections can be classified along several dimensions: • The metadata may be faceted, that is, composed of orthogonal sets of categories. For example, in the domain of architectural images, some possible facets might be Materials (concrete, brick, wood, etc.), Styles (Baroque, Gothic, Ming, etc.), View Types, People (architects, artists, developers, etc.), Locations, Periods, and so on. • The metadata (or an individual facet) may be hierarchical (“located in Berkeley, California, United States”) or flat (“by Ansel Adams”). • The metadata (or an individual facet) may be single-valued or multi-valued. That is, the data may be constrained so that at most one value can be assigned to an item (“measures 36 cm tall”) or it may allow multiple values to be assigned to an item (“uses oil paint, ink, and watercolor”). We note that there are a number of issues associated with creation of metadata itself which we are not addressing here. The most pressing problem is how to decide which descriptors are correct or at least most appropriate for a collection of information. Another problem relates to how to assign metadata descriptors to items that currently do not have metadata assigned. We will not be addressing these issues, in part because many other researchers already are, and because the fact remains that there are many existing, important collections whose contents have hierarchical metadata already assigned. RECIPE USABILITY STUDY We are particularly concerned with supporting non-professional searchers in rich information-seeking tasks. Specifically we aim to answer the following questions: do users like and understand flexible organizations of metadata from different hierarchies? Are faceted hierarchies preferable to single hierarchies? Do people prefer to follow category-based hyperlinks or do they prefer to issue a keyword-based query and sort through results listings? (Footnotes: 1 http://dublincore.org, http://www.w3.org/2001/sw; 2 http://www.yahoo.com, http://dmoz.org.) (Figure 1: The opening page for both interfaces shows a text search box and the first level of metadata terms. Hovering over a facet name yields a tooltip (here shown below Locations) explaining the meaning of the facet.) Before developing our system, we tested the idea of using hierarchical faceted metadata on an existing interface that exemplified some of our design goals.
This preliminary study was conducted using a commercial recipe web site called Epicurious containing five flat facets, 93 metadata terms, and approximately 13,000 recipes. We compared the three available search interfaces: (1) Simple keyword search, with unsorted results list. (2) Enhanced search form that exposes metadata using checkboxes and drop-down lists, with unsorted results list. (3) Browse interface that allows the user to navigate through the collection, implicitly building up a query consisting of an AND across facets; selecting a category within a facet (e.g., Pasta within Main Ingredient) narrows the results set, and users are shown query previews at every step. In the interests of space, we can only provide a brief summary of this small (9 participant) study: All the participants who liked the site (7 out of 9) said they were likely to use the browse interface again. Only 4 said this about enhanced search and 0 said this about simple search. Participants especially liked the browse interface for open-ended tasks such as "plan a dinner party." We took this as encouraging support for the faceted metadata approach. However, the recipe browse facility is lacking in several ways. Free-text search is not integrated with metadata browse, the collection and metadata are of only moderate size, and the metadata is organized into flat (non-hierarchical) facets. Finally, users are only allowed to refine queries; they cannot broaden 3http://eat.epicurious.com/recipes/browse home/", "title": "" } ]
[ { "docid": "47e67d50a4fa53dc2a696fc04dc84ea7", "text": "In this paper, we provide a new neural-network based perspective on multi-task learning (MTL) and multi-domain learning (MDL). By introducing the concept of a semantic descriptor, this framework unifies MDL and MTL as well as encompassing various classic and recent MTL/MDL algorithms by interpreting them as different ways of constructing semantic descriptors. Our interpretation provides an alternative pipeline for zero-shot learning (ZSL), where a model for a novel class can be constructed without training data. Moreover, it leads to a new and practically relevant problem setting of zero-shot domain adaptation (ZSDA), which is the analogous to ZSL but for novel domains: A model for an unseen domain can be generated by its semantic descriptor. Experiments across this range of problems demonstrate that our framework outperforms a variety of alternatives.", "title": "" }, { "docid": "d53a9f4452d61df9d1476ed895cfddf2", "text": "Semantic segmentation benefits robotics related applications, especially autonomous driving. Most of the research on semantic segmentation only focuses on increasing the accuracy of segmentation models with little attention to computationally efficient solutions. The few work conducted in this direction does not provide principled methods to evaluate the different design choices for segmentation. In this paper, we address this gap by presenting a real-time semantic segmentation benchmarking framework with a decoupled design for feature extraction and decoding methods. The framework is comprised of different network architectures for feature extraction such as VGG16, Resnet18, MobileNet, and ShuffleNet. It is also comprised of multiple meta-architectures for segmentation that define the decoding methodology. These include SkipNet, UNet, and Dilation Frontend. Experimental results are presented on the Cityscapes dataset for urban scenes. The modular design allows novel architectures to emerge, that lead to 143x GFLOPs reduction in comparison to SegNet. This benchmarking framework is publicly available at 11https://github.com/MSiam/TFSegmentation.", "title": "" }, { "docid": "ce901f6509da9ab13d66056319c15bd8", "text": "In this survey we overview graph-based clustering and its applications in computational linguistics. We summarize graph-based clustering as a five-part story: hypothesis, modeling, measure, algorithm and evaluation. We then survey three typical NLP problems in which graph-based clustering approaches have been successfully applied. Finally, we comment on the strengths and weaknesses of graph-based clustering and envision that graph-based clustering is a promising solution for some emerging NLP problems.", "title": "" }, { "docid": "fea31b71829803d78dabf784dfdb0093", "text": "Tag recommendation is helpful for the categorization and searching of online content. Existing tag recommendation methods can be divided into collaborative filtering methods and content based methods. In this paper, we put our focus on the content based tag recommendation due to its wider applicability. Our key observation is the tag-content co-occurrence, i.e., many tags have appeared multiple times in the corresponding content. Based on this observation, we propose a generative model (Tag2Word), where we generate the words based on the tag-word distribution as well as the tag itself. 
Experimental evaluations on real data sets demonstrate that the proposed method outperforms several existing methods in terms of recommendation accuracy, while enjoying linear scalability.", "title": "" }, { "docid": "a7f4f3d03b69fc339b4908e247a36f30", "text": "In this letter, we present a novel feature extraction method for sound event classification, based on the visual signature extracted from the sound's time-frequency representation. The motivation stems from the fact that spectrograms form recognisable images that can be identified by a human reader, with perception enhanced by pseudo-coloration of the image. The signal processing in our method is as follows. 1) The spectrogram is normalised into greyscale with a fixed range. 2) The dynamic range is quantized into regions, each of which is then mapped to form a monochrome image. 3) The monochrome images are partitioned into blocks, and the distribution statistics in each block are extracted to form the feature. The robustness of the proposed method comes from the fact that the noise is normally more diffuse than the signal and therefore the effect of the noise is limited to a particular quantization region, leaving the other regions less changed. The method is tested on a database of 60 sound classes containing a mixture of collision, action and characteristic sounds and shows a significant improvement over other methods in mismatched conditions, without the need for noise reduction.", "title": "" }, { "docid": "936048690fb043434c3ee0060c5bf7a5", "text": "This paper asks whether case-based reasoning is an artificial intelligence (AI) technology like rule-based reasoning, neural networks or genetic algorithms or whether it is better described as a methodology for problem solving that may use any appropriate technology. By describing four applications of case-based reasoning (CBR), that variously use nearest neighbour, induction, fuzzy logic and SQL, the author shows that CBR is a methodology and not a technology. The implications of this are discussed. © 1999 Elsevier Science B.V. All rights reserved.", "title": "" }, { "docid": "546d4b0455a8d11147e99a2d813ee7a5", "text": "Office delivery robots have to perform many tasks. They have to determine the order in which to visit offices, plan paths to those offices, follow paths reliably, and avoid static and dynamic obstacles in the process. Reliability and efficiency are key issues in the design of such autonomous robot systems. They must deal reliably with noisy sensors and actuators and with incomplete knowledge of the environment. They must also act efficiently, in real time, to deal with dynamic situations. Our architecture is composed of four abstraction layers: obstacle avoidance, navigation, path planning, and task scheduling. The layers are independent, communicating processes that are always active, processing sensory data and status information to update their decisions and actions. A version of our robot architecture has been in nearly daily use in our building since December 1995. As of July 1996, the robot has traveled more than 75 kilometers in service of over 1800 navigation requests that were specified using our World Wide Web interface.", "title": "" }, { "docid": "a45e7855be4a99ef2d382e914650e8bc", "text": "We propose a novel type inference technique for Python programs. Type inference is difficult for Python programs due to their heavy dependence on external APIs and the dynamic language features.
We observe that Python source code often contains a lot of type hints such as attribute accesses and variable names. However, such type hints are not reliable. We hence propose to use probabilistic inference to allow the beliefs of individual type hints to be propagated, aggregated, and eventually converge on probabilities of variable types. Our results show that our technique substantially outperforms a state-of-the-art Python type inference engine based on abstract interpretation.", "title": "" }, { "docid": "8171fc4d3b47ed79915f98269bef3c4d", "text": "The purpose of this study was to investigate the effects of loaded and unloaded plyometric training strategies on speed and power performance of elite young soccer players. Twenty-three under-17 male soccer players (age: 15.9 ± 1.2 years, height: 178.3 ± 8.1 cm, body-mass (BM): 68.1 ± 9.3 kg) from the same club took part in this study. The athletes were pair-matched in two training groups: loaded vertical and horizontal jumps using an haltere type handheld with a load of 8% of the athletes' body mass (LJ; n = 12) and unloaded vertical and horizontal plyometrics (UJ; n = 11). Sprinting speeds at 5-, 10-, and 20-m, mean propulsive power (MPP) relative to the players' BM in the jump squat exercise, and performance in the squat jump (SJ) and countermovement jump (CMJ) were assessed pre- and post-training period. During the experimental period, soccer players performed 12 plyometric training sessions across a 6-week preseason period. Magnitude based inferences and standardized differences were used for statistical analysis. A very likely increase in the vertical jumps was observed for the LJ group (99/01/00 and 98/02/00 for SJ and CMJ, respectively). In the UJ group a likely increase was observed for both vertical jumps (83/16/01 and 90/10/00, for SJ and CMJ, respectively). An almost certainly decrease in the sprinting velocities along the 20-m course were found in the LJ group (00/00/100 for all split distances tested). Meanwhile, in the UJ likely to very likely decreases were observed for all sprinting velocities tested (03/18/79, 01/13/86, and 00/04/96, for velocities in 5-, 10-, and 20-m, respectively). No meaningful differences were observed for the MPP in either training group (11/85/04 and 37/55/08 for LJ and UJ, respectively). In summary, under-17 professional soccer players increased jumping ability after a 6-week preseason training program, using loaded or unloaded jumps. Despite these positive adaptations, both plyometric strategies failed to produce worthwhile improvements in maximal speed and power performances, which is possible related to the interference of concurrent training effects. New training strategies should be developed to ensure adequate balance between power and endurance loads throughout short (and high-volume) soccer preseasons.", "title": "" }, { "docid": "6cbf3d8ac97d1764ab736481526126b3", "text": "Infrastructure-as-a-Service (IaaS) cloud providers hide available interfaces for virtual machine (VM) placement and migration, CPU capping, memory ballooning, page sharing, and I/O throttling, limiting the ways in which applications can optimally configure resources or respond to dynamically shifting workloads. Given these interfaces, applications could migrate VMs in response to diurnal workloads or changing prices, adjust resources in response to load changes, and so on. 
This article proposes a new abstraction that we call a Library Cloud and that allows users to customize the diverse available cloud resources to best serve their applications.\n We built a prototype of a Library Cloud that we call the Supercloud. The Supercloud encapsulates applications in a virtual cloud under users’ full control and can incorporate one or more availability zones within a cloud provider or across different providers. The Supercloud provides virtual machine, storage, and networking complete with a full set of management operations, allowing applications to optimize performance. In this article, we demonstrate various innovations enabled by the Library Cloud.", "title": "" }, { "docid": "ef7e973a5c6f9e722917a283a1f0fe52", "text": "We live in a digital society that provides a range of opportunities for virtual interaction. Consequently, emojis have become popular for clarifying online communication. This presents an exciting opportunity for psychologists, as these prolific online behaviours can be used to help reveal something unique about contemporary human behaviour.", "title": "" }, { "docid": "f62d941ef1f7c4e8d8a245f74b97a7e6", "text": "BACKGROUND\nThe stepped-care approach, where people with early symptoms of depression are stepped up from low-intensity interventions to higher-level interventions as needed, has the potential to assist many people with mild depressive symptoms. Self-monitoring techniques assist people to understand their mental health symptoms by increasing their emotional self-awareness (ESA) and can be easily distributed on mobile phones at low cost. Increasing ESA is an important first step in psychotherapy and has the potential to intervene before mild depressive symptoms progress to major depressive disorder. In this secondary analysis we examined a mobile phone self-monitoring tool used by young people experiencing mild or more depressive symptoms to investigate the relationships between self-monitoring, ESA, and depression.\n\n\nOBJECTIVES\nWe tested two main hypotheses: (1) people who monitored their mood, stress, and coping strategies would have increased ESA from pretest to 6-week follow-up compared with an attention comparison group, and (2) an increase in ESA would predict a decrease in depressive symptoms.\n\n\nMETHODS\nWe recruited patients aged 14 to 24 years from rural and metropolitan general practices. Eligible participants were identified as having mild or more mental health concerns by their general practitioner. Participants were randomly assigned to either the intervention group (where mood, stress, and daily activities were monitored) or the attention comparison group (where only daily activities were monitored), and both groups self-monitored for 2 to 4 weeks. Randomization was carried out electronically via random seed generation, by an in-house computer programmer; therefore, general practitioners, participants, and researchers were blinded to group allocation at randomization. Participants completed pretest, posttest, and 6-week follow-up measures of the Depression Anxiety Stress Scale and the ESA Scale. 
We estimated a parallel process latent growth curve model (LGCM) using Mplus to test the indirect effect of the intervention on depressive symptoms via the mediator ESA, and calculated 95% bias-corrected bootstrapping confidence intervals (CIs).\n\n\nRESULTS\nOf the 163 participants assessed for eligibility, 118 were randomly assigned and 114 were included in analyses (68 in the intervention group and 46 in the comparison group). A parallel process LGCM estimated the indirect effect of the intervention on depressive symptoms via ESA and was shown to be statistically significant based on the 95% bias-corrected bootstrapping CIs not containing zero (-6.366 to -0.029). The proportion of the maximum possible indirect effect estimated was κ2 = .54 (95% CI .426-.640).\n\n\nCONCLUSIONS\nThis study supported the hypothesis that self-monitoring increases ESA, which in turn decreases depressive symptoms for young people with mild or more depressive symptoms. Mobile phone self-monitoring programs are ideally suited to first-step intervention programs for depression in the stepped-care approach, particularly when ESA is targeted as a mediating factor.\n\n\nTRIAL REGISTRATION\nClinicalTrials.gov NCT00794222; http://clinicaltrials.gov/ct2/show/NCT00794222 (Archived by WebCite at http://www.webcitation.org/65lldW34k).", "title": "" }, { "docid": "9dd8d5d3af8c901dc2a822b3599eaff0", "text": "In this paper we propose a flexible and scalable distributed storage framework called flexStore that can adapt to variations in available or consumable power and demonstrate its performance in the context of deduplicated virtual machine disks. We propose and investigate smart control techniques in order to cope with the power constraints either introduced as a result of increasing node density in the storage arrays (consumable power constraints) or introduced when a mix of renewable (green) and conventional (brown) energy sources are used to power the data center. The key component in the proposed storage framework is the policy engine, which is a software layer that provides interfaces to define performance requirements of the applications (and also energy-related policies). The policy engine enforces those policies in the storage system by adjusting the allocation of storage resources. The experimental results demonstrate the ability of the framework to dynamically adapt to the changes in workload and power constraints and minimize performance impacts. Our evaluation of the prototype shows that the adaptive replication mechanisms can reduce the IO latencies by around 65% during energy-plentiful situations, and the impact of adaptation actions on IO latencies during energy-constrained situations is reduced by more than 40% compared to the case without the adaptive replication and optimized adaptation mechanisms.", "title": "" }, { "docid": "ec673efa5f837ba4c997ee7ccd845ce1", "text": "Deep Neural Networks (DNNs) are hierarchical nonlinear architectures that have been widely used in artificial intelligence applications. However, these models are vulnerable to adversarial perturbations, which introduce slight changes crafted explicitly to fool the model. Such attacks will cause the neural network to completely change its classification of data. Although various defense strategies have been proposed, existing defense methods have two limitations. First, the discovery success rate is not very high. Second, existing methods depend on the output of a particular layer in a specific learning structure.
In this paper, we propose a powerful method for adversarial samples using Large Margin Cosine Estimate (LMCE). By iteratively calculating the large-margin cosine uncertainty estimates between the model predictions, the results can be regarded as a novel measurement of model uncertainty estimation and can be used to detect adversarial samples by training a simple machine learning algorithm. Comparing it with the way in which adversarial samples are generated, it is confirmed that this measurement can better distinguish hostile disturbances. We modeled deep neural network attacks and established defense mechanisms against various types of adversarial attacks. The classifier achieves better performance than the baseline model. The approach is validated on a series of standard datasets including MNIST and CIFAR-10, outperforming the previous ensemble method with strong statistical significance. Experiments indicate that our approach generalizes better across different architectures and attacks.", "title": "" }, { "docid": "ff6b4840787027df75873f38fbb311b4", "text": "Electronic healthcare (eHealth) systems have replaced paper-based medical systems due to attractive features such as universal accessibility, high accuracy, and low cost. As a major component of eHealth systems, mobile healthcare (mHealth) applies mobile devices, such as smartphones and tablets, to enable patient-to-physician and patient-to-patient communications for better healthcare and quality of life (QoL). Unfortunately, patients' concerns about potential leakage of personal health records (PHRs) are the biggest stumbling block. In current eHealth/mHealth networks, patients' medical records are usually associated with a set of attributes like existing symptoms and ongoing treatments based on the information collected from portable devices. To guarantee the authenticity of those attributes, PHRs should be verifiable. However, due to the linkability between identities and PHRs, existing mHealth systems fail to preserve patient identity privacy while providing medical services. To solve this problem, we propose a decentralized system that leverages users' verifiable attributes to authenticate each other while preserving attribute and identity privacy. Moreover, we design authentication strategies with progressive privacy requirements in different interactions among participating entities. Finally, we have thoroughly evaluated the security and computational overheads for our proposed schemes via extensive simulations and experiments.", "title": "" }, { "docid": "fd8a677dffe737d61ebd0e30b91595e9", "text": "Despite outstanding success in vision amongst other domains, many of the recent deep learning approaches have evident drawbacks for robots. This manuscript surveys recent work in the literature that pertains to applying deep learning systems to the robotics domain, either as a means of estimation or as a tool to resolve motor commands directly from raw percepts. These recent advances are only a piece of the puzzle. We suggest that deep learning as a tool alone is insufficient in building a unified framework to acquire general intelligence.
For this reason, we complement our survey with insights from cognitive development and refer to ideas from classical control theory, producing an integrated direction for a lifelong learning architecture.", "title": "" }, { "docid": "33cab0ec47af5e40d64e34f8ffc7dd6f", "text": "This inaugural article has a twofold purpose: (i) to present a simpler and more general justification of the fundamental scaling laws of quasibrittle fracture, bridging the asymptotic behaviors of plasticity, linear elastic fracture mechanics, and Weibull statistical theory of brittle failure, and (ii) to give a broad but succinct overview of various applications and ramifications covering many fields, many kinds of quasibrittle materials, and many scales (from 10^-8 to 10^6 m). The justification rests on developing a method to combine dimensional analysis of cohesive fracture with second-order accurate asymptotic matching. This method exploits the recently established general asymptotic properties of the cohesive crack model and nonlocal Weibull statistical model. The key idea is to select the dimensionless variables in such a way that, in each asymptotic case, all of them vanish except one. The minimal nature of the hypotheses made explains the surprisingly broad applicability of the scaling laws.", "title": "" }, { "docid": "5525d06ad673a0099d05568f201e1195", "text": "Bitcoin is a digital payment system empowered by a distributed database called blockchain. The blockchain is an open ledger containing records of every transaction within the Bitcoin system, maintained by Bitcoin nodes all over the world. Because of its availability and robustness, the blockchain could be utilized as a record-keeping tool for information not related to any Bitcoin transactions. We propose a method of utilizing the blockchain of the Bitcoin system to publish information by embedding data of arbitrary size into Bitcoin transactions. By publishing information using the blockchain, the information also carries the characteristics of a Bitcoin transaction: anonymous, decentralized, and permanent. The proposed protocol could be used to extend the functionality of asset management systems, which are limited to a maximum of 80 bytes of data. The proposed method also improves transaction fee efficiency by 18 percent compared to the Bitcoin Messaging protocol.", "title": "" }, { "docid": "f01a19652bff88923a3141fb56d805e2", "text": "This paper presents a visible light communication system, focusing mostly on the aspects related to the hardware design and implementation. The designed system is aimed at ensuring highly reliable communication between a commercial LED-based traffic light and a receiver mounted on a vehicle. Enabling wireless data transfer between the road infrastructure and vehicles has the potential to significantly increase the safety and efficiency of the transportation system. The paper presents the advantages of the proposed system and explains some of the choices made in the implementation process.", "title": "" }, { "docid": "df97dff1e2539f192478f2aa91f69cc4", "text": "Computer systems are increasingly employed in circumstances where their failure (or even their correct operation, if they are built to flawed requirements) can have serious consequences. There is a surprising diversity of opinion concerning the properties that such "critical systems" should possess, and the best methods to develop them.
The dependability approach grew out of the tradition of ultra-reliable and fault-tolerant systems, while the safety approach grew out of the tradition of hazard analysis and system safety engineering. Yet another tradition is found in the security community, and there are further specialized approaches in the tradition of real-time systems. In this report, I examine the critical properties considered in each approach, and the techniques that have been developed to specify them and to ensure their satisfaction. Since systems are now being constructed that must satisfy several of these critical system properties simultaneously, there is particular interest in the extent to which techniques from one tradition support or conflict with those of another, and in whether certain critical system properties are fundamentally compatible or incompatible with each other. As a step toward improved understanding of these issues, I suggest a taxonomy, based on Perrow’s analysis, that considers the complexity of component interactions and tightness of coupling as primary factors. C. Perrow. Normal Accidents: Living with High Risk Technologies. Basic Books, New York, NY, 1984.", "title": "" } ]
scidocsrr
e729bbce9851f97c2387ef35d3fcd67a
Robust real-time pupil tracking in highly off-axis images
[ { "docid": "1705ba479a7ff33eef46e0102d4d4dd0", "text": "Knowing the user’s point of gaze has significant potential to enhance current human-computer interfaces, given that eye movements can be used as an indicator of the attentional state of a user. The primary obstacle of integrating eye movements into today’s interfaces is the availability of a reliable, low-cost open-source eye-tracking system. Towards making such a system available to interface designers, we have developed a hybrid eye-tracking algorithm that integrates feature-based and model-based approaches and made it available in an open-source package. We refer to this algorithm as \"starburst\" because of the novel way in which pupil features are detected. This starburst algorithm is more accurate than pure feature-based approaches yet is signi?cantly less time consuming than pure modelbased approaches. The current implementation is tailored to tracking eye movements in infrared video obtained from an inexpensive head-mounted eye-tracking system. A validation study was conducted and showed that the technique can reliably estimate eye position with an accuracy of approximately one degree of visual angle.", "title": "" }, { "docid": "06c0b39b820da9549c72ae48544d096c", "text": "Despite active research and significant progress in the last 30 years, eye detection and tracking remains challenging due to the individuality of eyes, occlusion, variability in scale, location, and light conditions. Data on eye location and details of eye movements have numerous applications and are essential in face detection, biometric identification, and particular human-computer interaction tasks. This paper reviews current progress and state of the art in video-based eye detection and tracking in order to identify promising techniques as well as issues to be further addressed. We present a detailed review of recent eye models and techniques for eye detection and tracking. We also survey methods for gaze estimation and compare them based on their geometric properties and reported accuracies. This review shows that, despite their apparent simplicity, the development of a general eye detection technique involves addressing many challenges, requires further theoretical developments, and is consequently of interest to many other domains problems in computer vision and beyond.", "title": "" } ]
[ { "docid": "b35922663b4728c409528675be15d586", "text": "High-resolution screen printing of pristine graphene is introduced for the rapid fabrication of conductive lines on flexible substrates. Well-defined silicon stencils and viscosity-controlled inks facilitate the preparation of high-quality graphene patterns as narrow as 40 μm. This strategy provides an efficient method to produce highly flexible graphene electrodes for printed electronics.", "title": "" }, { "docid": "9175794d83b5f110fb9f08dc25a264b8", "text": "We describe an investigation into e-mail content mining for author identification, or authorship attribution, for the purpose of forensic investigation. We focus our discussion on the ability to discriminate between authors for the case of both aggregated e-mail topics as well as across different e-mail topics. An extended set of e-mail document features including structural characteristics and linguistic patterns were derived and, together with a Support Vector Machine learning algorithm, were used for mining the e-mail content. Experiments using a number of e-mail documents generated by different authors on a set of topics gave promising results for both aggregated and multi-topic author categorisation.", "title": "" }, { "docid": "2c7920f53eed99e3a7380ebc036e67a5", "text": "We present an algorithm for synthesizing a context-free grammar encoding the language of valid program inputs from a set of input examples and blackbox access to the program. Our algorithm addresses shortcomings of existing grammar inference algorithms, which both severely overgeneralize and are prohibitively slow. Our implementation, GLADE, leverages the grammar synthesized by our algorithm to fuzz test programs with structured inputs. We show that GLADE substantially increases the incremental coverage on valid inputs compared to two baseline fuzzers.", "title": "" }, { "docid": "63d26f3336960c1d92afbd3a61a9168c", "text": "The location-based social networks have been becoming flourishing in recent years. In this paper, we aim to estimate the similarity between users according to their physical location histories (represented by GPS trajectories). This similarity can be regarded as a potential social tie between users, thereby enabling friend and location recommendations. Different from previous work using social structures or directly matching users’ physical locations, this approach model a user’s GPS trajectories with a semantic location history (SLH), e.g., shopping malls ? restaurants ? cinemas. Then, we measure the similarity between different users’ SLHs by using our maximal travel match (MTM) algorithm. The advantage of our approach lies in two aspects. First, SLH carries more semantic meanings of a user’s interests beyond low-level geographic positions. Second, our approach can estimate the similarity between two users without overlaps in the geographic spaces, e.g., people living in different cities. When matching SLHs, we consider the sequential property, the granularity and the popularity of semantic locations. We evaluate our method based on a realworld GPS dataset collected by 109 users in a period of 1 year. The results show that SLH outperforms a physicallocation-based approach and MTM is more effective than several widely used sequence matching approaches given this application scenario.", "title": "" }, { "docid": "a4e57fe3d24d6eb8be1d0e0659dda58a", "text": "Automated game design has remained a key challenge within the field of Game AI. 
In this paper, we introduce a method for recombining existing games to create new games through a process called conceptual expansion. Prior automated game design approaches have relied on hand-authored or crowdsourced knowledge, which limits the scope and applications of such systems. Our approach instead relies on machine learning to learn approximate representations of games. Our approach recombines knowledge from these learned representations to create new games via conceptual expansion. We evaluate this approach by demonstrating the ability for the system to recreate existing games. To the best of our knowledge, this represents the first machine learning-based automated game design system.", "title": "" }, { "docid": "3ac89f0f4573510942996ae66ef8184c", "text": "Deep convolutional neural networks have been successfully applied to image classification tasks. When these same networks have been applied to image retrieval, the assumption has been made that the last layers would give the best performance, as they do in classification. We show that for instance-level image retrieval, lower layers often perform better than the last layers in convolutional neural networks. We present an approach for extracting convolutional features from different layers of the networks, and adopt VLAD encoding to encode features into a single vector for each image. We investigate the effect of different layers and scales of input images on the performance of convolutional features using the recent deep networks OxfordNet and GoogLeNet. Experiments demonstrate that intermediate layers or higher layers with finer scales produce better results for image retrieval, compared to the last layer. When using compressed 128-D VLAD descriptors, our method obtains state-of-the-art results and outperforms other VLAD and CNN based approaches on two out of three test datasets. Our work provides guidance for transferring deep networks trained on image classification to image retrieval tasks.", "title": "" }, { "docid": "6825d0150a4fb1e1788e50bcc1d178d5", "text": "Severely stressful life events can have a substantial impact on those who experience them. For some, experience with a traumatic life event can leave them confused, withdrawn, depressed, and increasingly vulnerable to the next stressful situation that arises. The clinical literature, for example, has found various stressful life events to be risk factors for the development of depression, anxiety, and in extreme cases, posttraumatic stress disorder (PTSD). For other individuals, a traumatic experience can serve as a catalyst for positive change, a chance to reexamine life priorities or develop strong ties with friends and family. Recent research has explored the immediate and long-term positive effects of similarly severe life events, such as cancer, bereavement, and HIV-infection, to identify the factors and processes that appear to contribute to resilience and growth. These two lines of research, however, have developed largely independent of each other and a number of questions remain to be explored in their integration. For example, do the roots of these apparently divergent patterns lie in the events themselves or in the people who experience them? Do some experiences typically lead to negative outcomes, whereas others contribute to the development of positive changes? What psychological factors appear to moderate these outcomes? How do positive outcomes, such as perceptions of stress-related growth and benefit, relate to measures of negative adjustment? 
To address these questions, we begin with a review of positive outcomes that have been reported in response to stressful life events, such as the perceptions of stressrelated growth and benefit and theories that help to explain these changes. We then", "title": "" }, { "docid": "c05f825b7520423c9ff95a1a8e5d260f", "text": "Accurate detection and tracking of objects is vital for effective video understanding. In previous work, the two tasks have been combined in a way that tracking is based heavily on detection, but the detection benefits marginally from the tracking. To increase synergy, we propose to more tightly integrate the tasks by conditioning the object detection in the current frame on tracklets computed in prior frames. With this approach, the object detection results not only have high detection responses, but also improved coherence with the existing tracklets. This greater coherence leads to estimated object trajectories that are smoother and more stable than the jittered paths obtained without tracklet-conditioned detection. Over extensive experiments, this approach is shown to achieve state-of-the-art performance in terms of both detection and tracking accuracy, as well as noticeable improvements in tracking stability.", "title": "" }, { "docid": "aa2e16e6ed5d2610a567e358807834d4", "text": "As the most prevailing two-factor authentication mechanism, smart-card-based password authentication has been a subject of intensive research in the past two decades, and hundreds of this type of schemes have wave upon wave been proposed. In most of these studies, there is no comprehensive and systematical metric available for schemes to be assessed objectively, and the authors present new schemes with assertions of the superior aspects over previous ones, while overlooking dimensions on which their schemes fare poorly. Unsurprisingly, most of them are far from satisfactory—either are found short of important security goals or lack of critical properties, especially being stuck with the security-usability tension. To overcome this issue, in this work we first explicitly define a security model that can accurately capture the practical capabilities of an adversary and then suggest a broad set of twelve properties framed as a systematic methodology for comparative evaluation, allowing schemes to be rated across a common spectrum. As our main contribution, a new scheme is advanced to resolve the various issues arising from user corruption and server compromise, and it is formally proved secure under the harshest adversary model so far. In particular, by integrating “honeywords”, traditionally the purview of system security, with a “fuzzy-verifier”, our scheme hits “two birds”: it not only eliminates the long-standing security-usability conflict that is considered intractable in the literature, but also achieves security guarantees beyond the conventional optimal security bound.", "title": "" }, { "docid": "eed5c66d0302c492f2480a888678d1dc", "text": "In 1988 Kennedy and Chua introduced the dynamical canonical nonlinear programming circuit (NPC) to solve in real time nonlinear programming problems where the objective function and the constraints are smooth (twice continuously differentiable) functions. In this paper, a generalized circuit is introduced (G-NPC), which is aimed at solving in real time a much wider class of nonsmooth nonlinear programming problems where the objective function and the constraints are assumed to satisfy only the weak condition of being regular functions. 
G-NPC, which derives from a natural extension of NPC, has a neural-like architecture and also features the presence of constraint neurons modeled by ideal diodes with infinite slope in the conducting region. By using the Clarke's generalized gradient of the involved functions, G-NPC is shown to obey a gradient system of differential inclusions, and its dynamical behavior and optimization capabilities, both for convex and nonconvex problems, are rigorously analyzed in the framework of nonsmooth analysis and the theory of differential inclusions. In the special important case of linear and quadratic programming problems, salient dynamical features of G-NPC, namely the presence of sliding modes , trajectory convergence in finite time, and the ability to compute the exact optimal solution of the problem being modeled, are uncovered and explained in the developed analytical framework.", "title": "" }, { "docid": "c8c9e542c966a7c474f2a5c8d494ec23", "text": "We present ASPIER -- the first framework that combines software model checking with a standard protocol security model to automatically analyze authentication and secrecy properties of protocol implementations in C. The technical approach extends the iterative abstraction-refinement methodology for software model checking with a domain-specific protocol and symbolic attacker model. We have implemented the ASPIER tool and used it to verify authentication and secrecy properties of a part of an industrial strength protocol implementation -- the handshake in OpenSSL -- for configurations consisting of up to 3 servers and 3 clients. We have also implemented two distinct methods for reasoning about attacker message derivations, and evaluated them in the context of OpenSSL verification. ASPIER detected the \"version-rollback\" vulnerability in OpenSSL 0.9.6c source code and successfully verified the implementation when clients and servers are only willing to run SSL 3.0.", "title": "" }, { "docid": "1026fa138e36ac1ccef81c1660c9dbf9", "text": "The Java®HotSpot Virtual Machine includes a multi-tier compilation system that may invoke a compiler at any time. Lower tiers instrument the program to gather information for the highly optimizing compiler at the top tier, and this compiler bases its optimizations on these profiles. But if the assumptions made by the top-tier compiler are proven wrong (e.g., because the profile does not cover all execution paths), the method is deoptimized: the code generated for the method is discarded and the method is then executed at Tier 0 again. Eventually, after profile information has been gathered, the method is recompiled at the top tier again (this time with less-optimistic assumptions). Users of the system experience such deoptimization cycles (discard, profile, compile) as performance fluctuations and potentially as variations in the system's responsiveness. Unpredictable performance however is problematic in many time-critical environments even if the system is not a hard real-time system.\n A profile cache captures the profile of earlier executions. When the application is executed again, with a fresh VM, the top tier (highly optimizing) compiler can base its decisions on a profile that reflects prior executions and not just the recent history observed during this run. We report in this paper the design and effectiveness of a profile cache for Java applications which is implemented and evaluated as part of the multi-tier compilation system of the HotSpot Java Virtual Machine in OpenJDK version 9. 
For a set of benchmarks, profile caching reduces the number of (re)compilations by up to 23%, the number of deoptimizations by up to 90%, and thus improves performance predictability.", "title": "" }, { "docid": "7dcdf69f47a0a56d437cc8b7ea5352a6", "text": "A wide range of domain-specific languages (DSLs) has been implemented successfully by embedding them in general purpose languages. This paper reviews embedding, and summarizes how two alternative techniques—staged interpreters and templates—can be used to overcome the limitations of embedding. Both techniques involve a form of generative programming. The paper reviews and compares three programming languages that have special support for generative programming. Two of these languages (MetaOCaml and Template Haskell) are research languages, while the third (C++) is already in wide industrial use. The paper identifies several dimensions that can serve as a basis for comparing generative languages.", "title": "" }, { "docid": "4b22eaf527842e0fa41a1cd740ad9b40", "text": "Music transcription is the process of creating a written score of music from an audio recording. Musicians and musicologists use transcription to better understand music that may not have a written form, from improvised jazz solos to traditional folk music. Automatic music transcription introduces signal-processing algorithms to extract pitch and rhythm information from recordings. This speeds up and automates the process of music transcription, which requires musical training and is very time consuming even for experts. This thesis explores the still unsolved problem of automatic music transcription through an in-depth analysis of the problem itself and an overview of different techniques to solve the hardest subtask of music transcription, multiple pitch estimation. It concludes with a close study of a typical multiple pitch estimation algorithm and highlights the challenges that remain unsolved.", "title": "" }, { "docid": "69a01ea46134301abebd6159942c0b52", "text": "This paper proposes a crowd counting method. Crowd counting is difficult because of large appearance changes of a target which caused by density and scale changes. Conventional crowd counting methods generally utilize one predictor (e.g. regression and multi-class classifier). However, such only one predictor can not count targets with large appearance changes well. In this paper, we propose to predict the number of targets using multiple CNNs specialized to a specific appearance, and those CNNs are adaptively selected according to the appearance of a test image. By integrating the selected CNNs, the proposed method has the robustness to large appearance changes. In experiments, we confirm that the proposed method can count crowd with lower counting error than a CNN and integration of CNNs with fixed weights. Moreover, we confirm that each predictor automatically specialized to a specific appearance.", "title": "" }, { "docid": "f9da4bfe6dba0a6ec886758b164cd10b", "text": "Physically based deformable models have been widely embraced by the Computer Graphics community. Many problems outlined in a previous survey by Gibson and Mirtich [GM97] have been addressed, thereby making these models interesting and useful for both offline and real-time applications, such as motion pictures and video games. 
In this paper, we present the most significant contributions of the past decade, which produce such impressive and perceivably realistic animations and simulations: finite element/difference/volume methods, mass-spring systems, meshfree methods, coupled particle systems and reduced deformable models based on modal analysis. For completeness, we also make a connection to the simulation of other continua, such as fluids, gases and melting objects. Since time integration is inherent to all simulated phenomena, the general notion of time discretization is treated separately, while specifics are left to the respective models. Finally, we discuss areas of application, such as elastoplastic deformation and fracture, cloth and hair animation, virtual surgery simulation, interactive entertainment and fluid/smoke animation, and also suggest areas for future research.", "title": "" }, { "docid": "e6555beb963f40c39089959a1c417c2f", "text": "In this paper, we consider the problem of insufficient runtime and memory-space complexities of deep convolutional neural networks for visual emotion recognition. A survey of recent compression methods and efficient neural networks architectures is provided. We experimentally compare the computational speed and memory consumption during the training and the inference stages of such methods as the weights matrix decomposition, binarization and hashing. It is shown that the most efficient optimization can be achieved with the matrices decomposition and hashing. Finally, we explore the possibility to distill the knowledge from the large neural network, if only large unlabeled sample of facial images is available.", "title": "" }, { "docid": "7ce350ec696066026e094687e96fb9d4", "text": "Convergence of communication technologies and innovative product features are expanding the markets for technological products and services. Prior literature on technology acceptance and use has focused on utilitarian belief factors as predictors of rational adoption decisions and subsequent user behavior. This presupposes that consumers’ intentions to use technology are based on functional or utilitarian needs. Using netnographic evidence on iPhone usage, this study suggests that innovative consumers adopt and use new technology for not just utilitarian but also for experiential outcomes. The study presents an interpretive analysis of the consumption behavior of very early iPhone users. Apple introduced iPhone as a revolutionary mobile handset offering integrated features and converged services—a handheld computercum-phone with a touch-screen web browser, a music player, an organizer, a note-taker, and a camera. This revolutionary product opened up new possibilities to meld functional tasks, hedonism, and social signaling. The study suggests that even utilitarian users have hedonic and social factors present in their consumption patterns. © 2010 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "11510b7d7421aea0f59aee3687a12d04", "text": "In this paper, we present the Cooperative Adaptive Cruise Control (CACC) architecture, which was proposed and implemented by the team from Chalmers University of Technology, Göteborg, Sweden, that joined the Grand Cooperative Driving Challenge (GCDC) in 2011. The proposed CACC architecture consists of the following three main components, which are described in detail: 1) communication; 2) sensor fusion; and 3) control. 
Both simulation and experimental results are provided, demonstrating that the proposed CACC system can drive within a vehicle platoon while minimizing the inter-vehicle spacing within the allowed range of safety distances, tracking a desired speed profile, and attenuating acceleration shockwaves.", "title": "" }, { "docid": "bc7f80192416aa7787657aed1bda3997", "text": "In this paper we propose a deep learning technique to improve the performance of semantic segmentation tasks. Previously proposed algorithms generally suffer from the over-dependence on a single modality as well as a lack of training data. We made three contributions to improve the performance. Firstly, we adopt two models which are complementary in our framework to enrich field-of-views and features to make segmentation more reliable. Secondly, we repurpose the datasets from other tasks to the segmentation task by training the two models in our framework on different datasets. This brings the benefits of data augmentation while saving the cost of image annotation. Thirdly, the number of parameters in our framework is minimized to reduce the complexity of the framework and to avoid over-fitting. Experimental results show that our framework significantly outperforms the current state-of-the-art methods with a smaller number of parameters and better generalization ability.", "title": "" } ]
scidocsrr
f7cb103256c4ec70f6f8ddd54df67bd5
The software value map - an exhaustive collection of value aspects for the development of software intensive products
[ { "docid": "48fc7aabdd36ada053ebc2d2a1c795ae", "text": "The Value-Based Software Engineering (VBSE) agenda described in the preceding article has the objectives of integrating value considerations into current and emerging software engineering principles and practices, and of developing an overall framework in which they compatibly reinforce each other. In this paper, we provide a case study illustrating some of the key VBSE practices, and focusing on a particular anomaly in the monitoring and control area: the \"Earned Value Management System.\" This is a most useful technique for monitoring and controlling the cost, schedule, and progress of a complex project. But it has absolutely nothing to say about the stakeholder value of the system being developed. The paper introduces an example order-processing software project, and shows how the use of Benefits Realization Analysis, stake-holder value proposition elicitation and reconciliation, and business case analysis provides a framework for stakeholder-earned-value monitoring and control.", "title": "" }, { "docid": "d362b36e0c971c43856a07b7af9055f3", "text": "s (New York: ACM), pp. 1617 – 20. MASLOW, A.H., 1954,Motivation and personality (New York: Harper). MCDONAGH, D., HEKKERT, P., VAN ERP, J. and GYI, D. (Eds), 2003, Design and Emotion: The Experience of Everyday Things (London: Taylor & Francis). MILLARD, N., HOLE, L. and CROWLE, S., 1999, Smiling through: motivation at the user interface. In Proceedings of the HCI International’99, Volume 2 (pp. 824 – 8) (Mahwah, NJ, London: Lawrence Erlbaum Associates). NORMAN, D., 2004a, Emotional design: Why we love (or hate) everyday things (New York: Basic Books). NORMAN, D., 2004b, Introduction to this special section on beauty, goodness, and usability. Human Computer Interaction, 19, pp. 311 – 18. OVERBEEKE, C.J., DJAJADININGRAT, J.P., HUMMELS, C.C.M. and WENSVEEN, S.A.G., 2002, Beauty in Usability: Forget about ease of use! In Pleasure with products: Beyond usability, W. Green and P. Jordan (Eds), pp. 9 – 18 (London: Taylor & Francis). 96 M. Hassenzahl and N. Tractinsky D ow nl oa de d by [ M as se y U ni ve rs ity L ib ra ry ] at 2 1: 34 2 3 Ju ly 2 01 1 PICARD, R., 1997, Affective computing (Cambridge, MA: MIT Press). PICARD, R. and KLEIN, J., 2002, Computers that recognise and respond to user emotion: theoretical and practical implications. Interacting with Computers, 14, pp. 141 – 69. POSTREL, V., 2002, The substance of style (New York: Harper Collins). SELIGMAN, M.E.P. and CSIKSZENTMIHALYI, M., 2000, Positive Psychology: An Introduction. American Psychologist, 55, pp. 5 – 14. SHELDON, K.M., ELLIOT, A.J., KIM, Y. and KASSER, T., 2001, What is satisfying about satisfying events? Testing 10 candidate psychological needs. Journal of Personality and Social Psychology, 80, pp. 325 – 39. SINGH, S.N. and DALAL, N.P., 1999, Web home pages as advertisements. Communications of the ACM, 42, pp. 91 – 8. SUH, E., DIENER, E. and FUJITA, F., 1996, Events and subjective well-being: Only recent events matter. Journal of Personality and Social Psychology,", "title": "" } ]
[ { "docid": "8b067b1115d4bc7c8656564bc6963d7b", "text": "Sentence Function: Indicating the conversational purpose of speakers • Interrogative: Acquire further information from the user • Imperative: Make requests, instructions or invitations to elicit further information • Declarative: Make statements to state or explain something Response Generation Task with Specified Sentence Function • Global Control: Plan different types of words globally • Compatibility: Controllable sentence function + informative content", "title": "" }, { "docid": "06e58f46c989f22037f443ccf38198ce", "text": "Many biological surfaces in both the plant and animal kingdom possess unusual structural features at the micro- and nanometre-scale that control their interaction with water and hence wettability. An intriguing example is provided by desert beetles, which use micrometre-sized patterns of hydrophobic and hydrophilic regions on their backs to capture water from humid air. As anyone who has admired spider webs adorned with dew drops will appreciate, spider silk is also capable of efficiently collecting water from air. Here we show that the water-collecting ability of the capture silk of the cribellate spider Uloborus walckenaerius is the result of a unique fibre structure that forms after wetting, with the ‘wet-rebuilt’ fibres characterized by periodic spindle-knots made of random nanofibrils and separated by joints made of aligned nanofibrils. These structural features result in a surface energy gradient between the spindle-knots and the joints and also in a difference in Laplace pressure, with both factors acting together to achieve continuous condensation and directional collection of water drops around spindle-knots. Submillimetre-sized liquid drops have been driven by surface energy gradients or a difference in Laplace pressure, but until now neither force on its own has been used to overcome the larger hysteresis effects that make the movement of micrometre-sized drops more difficult. By tapping into both driving forces, spider silk achieves this task. Inspired by this finding, we designed artificial fibres that mimic the structural features of silk and exhibit its directional water-collecting ability.", "title": "" }, { "docid": "79c2623b0e1b51a216fffbc6bbecd9ec", "text": "Visual notations form an integral part of the language of software engineering (SE). Yet historically, SE researchers and notation designers have ignored or undervalued issues of visual representation. In evaluating and comparing notations, details of visual syntax are rarely discussed. In designing notations, the majority of effort is spent on semantics, with graphical conventions largely an afterthought. Typically, no design rationale, scientific or otherwise, is provided for visual representation choices. While SE has developed mature methods for evaluating and designing semantics, it lacks equivalent methods for visual syntax. This paper defines a set of principles for designing cognitively effective visual notations: ones that are optimized for human communication and problem solving. Together these form a design theory, called the Physics of Notations as it focuses on the physical (perceptual) properties of notations rather than their logical (semantic) properties. The principles were synthesized from theory and empirical evidence from a wide range of fields and rest on an explicit theory of how visual notations communicate. 
They can be used to evaluate, compare, and improve existing visual notations as well as to construct new ones. The paper identifies serious design flaws in some of the leading SE notations, together with practical suggestions for improving them. It also showcases some examples of visual notation design excellence from SE and other fields.", "title": "" }, { "docid": "e5f50bc18cefc486ead4b92f9df178dc", "text": "Mobile users of computation and communication services have been rapidly adopting battery-powered mobile handhelds, such as PocketPCs and SmartPhones, for their work. However, the limited battery-lifetime of these devices restricts their portability and applicability, and this weakness can be exacerbated by mobile malware targeting depletion of battery energy. Such malware are usually difficult to detect and prevent, and frequent outbreaks of new malware variants also reduce the effectiveness of commonly-seen signature-based detection. To alleviate these problems, we propose a power-aware malware-detection framework that monitors, detects, and analyzes previously unknown energy-depletion threats. The framework is composed of (1) a power monitor which collects power samples and builds a power consumption history from the collected samples, and (2) a data analyzer which generates a power signature from the constructed history. To generate a power signature, simple and effective noise-filtering and data-compression are applied, thus reducing the detection overhead. Similarities between power signatures are measured by the χ2-distance, reducing both false-positive and false-negative detection rates. According to our experimental results on an HP iPAQ running a Windows Mobile OS, the proposed framework achieves significant (up to 95%) storage-savings without losing the detection accuracy, and a 99% true-positive rate in classifying mobile malware.", "title": "" }, { "docid": "9637537d6aeb6545d59eefaaaf2bdafa", "text": "The swing-up maneuver of the double pendulum on a cart serves to demonstrate a new approach of inversion-based feedforward control design introduced recently. The concept treats the transition task as a nonlinear two-point boundary value problem of the internal dynamics by providing free parameters in the desired output trajectory for the cart position. A feedback control is designed with linear methods to stabilize the swing-up maneuver. The emphasis of the paper is on the experimental realization of the double pendulum swing-up, which reveals the accuracy of the feedforward/feedback control scheme. 2006 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "4b2a16c023937db4f417d52b070cc2cc", "text": "Endosomal protein trafficking is an essential cellular process that is deregulated in several diseases and targeted by pathogens. Here, we describe a role for ubiquitination in this process. We find that the E3 RING ubiquitin ligase, MAGE-L2-TRIM27, localizes to endosomes through interactions with the retromer complex. Knockdown of MAGE-L2-TRIM27 or the Ube2O E2 ubiquitin-conjugating enzyme significantly impaired retromer-mediated transport. We further demonstrate that MAGE-L2-TRIM27 ubiquitin ligase activity is required for nucleation of endosomal F-actin by the WASH regulatory complex, a known regulator of retromer-mediated transport. Mechanistic studies showed that MAGE-L2-TRIM27 facilitates K63-linked ubiquitination of WASH K220. Significantly, disruption of WASH ubiquitination impaired endosomal F-actin nucleation and retromer-dependent transport. 
These findings provide a cellular and molecular function for MAGE-L2-TRIM27 in retrograde transport, including an unappreciated role of K63-linked ubiquitination and identification of an activating signal of the WASH regulatory complex.", "title": "" }, { "docid": "8986de609f238e83623c7130a9ab9253", "text": "The color psychology literature has made a convincing case that color is not just about aesthetics, but also about meaning. This work has involved situational manipulations of color, rendering it uncertain as to whether color-meaning associations can be used to characterize how people differ from each other. The present research focuses on the idea that the color red is linked to, or associated with, individual differences in interpersonal hostility. Across four studies (N = 376 undergraduates), red preferences and perceptual biases were measured along with individual differences in interpersonal hostility. It was found that (a) a preference for the color red was higher as interpersonal hostility increased, (b) hostile people were biased to see the color red more frequently than nonhostile people, and (c) there was a relationship between a preference for the color red and hostile social decision making. These studies represent an important extension of the color psychology literature, highlighting the need to attend to person-based, as well as situation-based, factors.", "title": "" }, { "docid": "6dfb62138ad7e0c23826a2c6b7c2507e", "text": "End-to-end speech recognition systems have been successfully designed for English. Taking into account the distinctive characteristics between Chinese Mandarin and English, it is worthy to do some additional work to transfer these approaches to Chinese. In this paper, we attempt to build a Chinese speech recognition system using end-to-end learning method. The system is based on a combination of deep Long Short-Term Memory Projected (LSTMP) network architecture and the Connectionist Temporal Classification objective function (CTC). The Chinese characters (the number is about 6,000) are used as the output labels directly. To integrate language model information during decoding, the CTC Beam Search method is adopted and optimized to make it more effective and more efficient. We present the first-pass decoding results which are obtained by decoding from scratch using CTC-trained network and language model. Although these results are not as good as the performance of DNN-HMMs hybrid system, they indicate that it is feasible to choose Chinese characters as the output alphabet in the end-toend speech recognition system.", "title": "" }, { "docid": "bbe43ff06e30a5cf2e9477a60c0bb6ff", "text": "As the Internet of Things (IoT) paradigm gains popularity, the next few years will likely witness 'servitization' of domain sensing functionalities. We envision a cloud-based eco-system in which high quality data from large numbers of independently-managed sensors is shared or even traded in real-time. Such an eco-system will necessarily have multiple stakeholders such as sensor data providers, domain applications that utilize sensor data (data consumers), and cloud infrastructure providers who may collaborate as well as compete. While there has been considerable research on wireless sensor networks, the challenges involved in building cloud-based platforms for hosting sensor services are largely unexplored. In this paper, we present our vision for data quality (DQ)-centric big data infrastructure for federated sensor service clouds. 
We first motivate our work by providing real-world examples. We outline the key features that federated sensor service clouds need to possess. This paper proposes a big data architecture in which DQ is pervasive throughout the platform. Our architecture includes a markup language called SDQ-ML for describing sensor services as well as for domain applications to express their sensor feed requirements. The paper explores the advantages and limitations of current big data technologies in building various components of the platform. We also outline our initial ideas towards addressing the limitations.", "title": "" }, { "docid": "f12de00e1b3fc390d197aabd41a64f87", "text": "Wireless ad hoc sensor networks have emerged as one of the key growth areas for wireless networking and computing technologies. So far these networks/systems have been designed with static and custom architectures for specific tasks, thus providing inflexible operation and interaction capabilities. Our vision is to create sensor networks that are open to multiple transient users with dynamic needs. Working towards this vision, we propose a framework to define and support lightweight and mobile control scripts that allow the computation, communication, and sensing resources at the sensor nodes to be efficiently harnessed in an application-specific fashion. The replication/migration of such scripts in several sensor nodes allows the dynamic deployment of distributed algorithms into the network. Our framework, SensorWare, defines, creates, dynamically deploys, and supports such scripts. Our implementation of SensorWare occupies less than 180Kbytes of code memory and thus easily fits into several sensor node platforms. Extensive delay measurements on our iPAQ-based prototype sensor node platform reveal the small overhead of SensorWare to the algorithms (less than 0.3msec in most high-level operations). In return the programmer of the sensor network receives compactness of code, abstraction services for all of the node's modules, and in-built multi-user support. SensorWare with its features apart from making dynamic programming possible it also makes it easy and efficient without restricting the expressiveness of the algorithms.", "title": "" }, { "docid": "d6b213889ba6073b0987852e31b98c6a", "text": "Nowadays, large volumes of multimedia data are outsourced to the cloud to better serve mobile applications. Along with this trend, highly correlated datasets can occur commonly, where the rich information buried in correlated data is useful for many cloud data generation/dissemination services. In light of this, we propose to enable a secure and efficient cloud-assisted image sharing architecture for mobile devices, by leveraging outsourced encrypted image datasets with privacy assurance. Different from traditional image sharing, we aim to provide a mobile-friendly design that saves the transmission cost for mobile clients, by directly utilizing outsourced correlated images to reproduce the image of interest inside the cloud for immediate dissemination. First, we propose a secure and efficient index design that allows the mobile client to securely find from encrypted image datasets the candidate selection pertaining to the image of interest for sharing. We then design two specialized encryption mechanisms that support secure image reproduction from encrypted candidate selection. We formally analyze the security strength of the design. 
Our experiments explicitly show that both the bandwidth and energy consumptions at the mobile client can be saved, while achieving all service requirements and security guarantees.", "title": "" }, { "docid": "ecb06a681f7d14fc690376b4c5a630af", "text": "Diverse proprietary network appliances increase both the capital and operational expense of service providers, meanwhile causing problems of network ossification. Network function virtualization (NFV) is proposed to address these issues by implementing network functions as pure software on commodity and general hardware. NFV allows flexible provisioning, deployment, and centralized management of virtual network functions. Integrated with SDN, the software-defined NFV architecture further offers agile traffic steering and joint optimization of network functions and resources. This architecture benefits a wide range of applications (e.g., service chaining) and is becoming the dominant form of NFV. In this survey, we present a thorough investigation of the development of NFV under the software-defined NFV architecture, with an emphasis on service chaining as its application. We first introduce the software-defined NFV architecture as the state of the art of NFV and present relationships between NFV and SDN. Then, we provide a historic view of the involvement from middlebox to NFV. Finally, we introduce significant challenges and relevant solutions of NFV, and discuss its future research directions by different application domains.", "title": "" }, { "docid": "5589615ee24bf5ba1ac5def2c5bc556e", "text": "The computer industry is at a major inflection point in its hardware roadmap due to the end of a decades-long trend of exponentially increasing clock frequencies. Instead, future computer systems are expected to be built using homogeneous and heterogeneous many-core processors with 10’s to 100’s of cores per chip, and complex hardware designs to address the challenges of concurrency, energy efficiency and resiliency. Unlike previous generations of hardware evolution, this shift towards many-core computing will have a profound impact on software. These software challenges are further compounded by the need to enable parallelism in workloads and application domains that traditionally did not have to worry about multiprocessor parallelism in the past. A recent trend in mainstream desktop systems is the use of graphics processor units (GPUs) to obtain order-of-magnitude performance improvements relative to general-purpose CPUs. Unfortunately, hybrid programming models that support multithreaded execution on CPUs in parallel with CUDA execution on GPUs prove to be too complex for use by mainstream programmers and domain experts, especially when targeting platforms with multiple CPU cores and multiple GPU devices. In this paper, we extend past work on Intel’s Concurrent Collections (CnC) programming model to address the hybrid programming challenge using a model called CnC-CUDA. CnC is a declarative and implicitly parallel coordination language that supports flexible combinations of task and data parallelism while retaining determinism. CnC computations are built using steps that are related by data and control dependence edges, which are represented by a CnC graph. The CnC-CUDA extensions in this paper include the definition of multithreaded steps for execution on GPUs, and automatic generation of data and control flow between CPU steps and GPU steps. 
Experimental results show that this approach can yield significant performance benefits with both GPU execution and hybrid CPU/GPU execution.", "title": "" }, { "docid": "bb2c01181664baaf20012e321b5e1f9f", "text": "Systems able to suggest items that a user may be interested in are usually named as Recommender Systems. The new emergent field of Recommender Systems has undoubtedly gained much interest in the research community. Although Recommender Systems work well in suggesting books, movies and items of general interest, many users express today a feeling that the existing systems don’t actually identify them as individual personalities. This dissatisfaction turned the research society towards the development of new approaches on Recommender Systems, more user-centric. A methodology originated from Decision Theory is exploited herein, aiming to address to the lack of personalization in Recommender Systems by integrating the user in the recommendation process.", "title": "" }, { "docid": "f70f825996544350b21177246cb39803", "text": "The goal of our work is to develop an efficient, automatic algorithm for discovering point correspondences between surfaces that are approximately and/or partially isometric.\n Our approach is based on three observations. First, isometries are a subset of the Möbius group, which has low-dimensionality -- six degrees of freedom for topological spheres, and three for topological discs. Second, computing the Möbius transformation that interpolates any three points can be computed in closed-form after a mid-edge flattening to the complex plane. Third, deviations from isometry can be modeled by a transportation-type distance between corresponding points in that plane.\n Motivated by these observations, we have developed a Möbius Voting algorithm that iteratively: 1) samples a triplet of three random points from each of two point sets, 2) uses the Möbius transformations defined by those triplets to map both point sets into a canonical coordinate frame on the complex plane, and 3) produces \"votes\" for predicted correspondences between the mutually closest points with magnitude representing their estimated deviation from isometry. The result of this process is a fuzzy correspondence matrix, which is converted to a permutation matrix with simple matrix operations and output as a discrete set of point correspondences with confidence values.\n The main advantage of this algorithm is that it can find intrinsic point correspondences in cases of extreme deformation. During experiments with a variety of data sets, we find that it is able to find dozens of point correspondences between different object types in different poses fully automatically.", "title": "" }, { "docid": "d6628b102e8f87e8ce58c2e3483a7beb", "text": "Nowadays, Big Data platforms allow the analysis of massive data streams in an efficient way. However, the services they provide are often too raw, thus the implementation of advanced real-world applications requires a non-negligible effort for interfacing with such services. This also complicates the task of choosing which one of the many available alternatives is the most appropriate for the application at hand. In this paper, we present a comparative study of the three major opensource Big Data platforms for stream processing, as performed by using our novel RAMS framework. 
Although the results we present are specific for our use case (recognition of suspect people from massive video streams), the generality of the RAMS framework allows both considering such results as valid for similar applications and implementing different use cases on top of Big Data platforms with very limited effort.", "title": "" }, { "docid": "c0484f3055d7e7db8dfea9d4483e1e06", "text": "Metastasis, the spread of cancer cells to distant organs, is the main cause of death for cancer patients. Metastasis is often mediated by lymphatic vessels that invade the primary tumor, and an early sign of metastasis is the presence of cancer cells in the regional lymph node (the first lymph node colonized by metastasizing cancer cells from a primary tumor). Understanding the interplay between tumorigenesis and lymphangiogenesis (the formation of lymphatic vessels associated with tumor growth) will provide us with new insights into mechanisms that modulate metastatic spread. In the long term, these insights will help to define new molecular targets that could be used to block lymphatic vessel-mediated metastasis and increase patient survival. Here, we review the molecular mechanisms of embryonic lymphangiogenesis and those that are recapitulated in tumor lymphangiogenesis, with a view to identifying potential targets for therapies designed to suppress tumor lymphangiogenesis and hence metastasis.", "title": "" }, { "docid": "fb1a178c7c097fbbf0921dcef915dc55", "text": "AIMS\nThe management of open lower limb fractures in the United Kingdom has evolved over the last ten years with the introduction of major trauma networks (MTNs), the publication of standards of care and the wide acceptance of a combined orthopaedic and plastic surgical approach to management. The aims of this study were to report recent changes in outcome of open tibial fractures following the implementation of these changes.\n\n\nPATIENTS AND METHODS\nData on all patients with an open tibial fracture presenting to a major trauma centre between 2011 and 2012 were collected prospectively. The treatment and outcomes of the 65 Gustilo Anderson Grade III B tibial fractures were compared with historical data from the same unit.\n\n\nRESULTS\nThe volume of cases, the proportion of patients directly admitted and undergoing first debridement in a major trauma centre all increased. The rate of limb salvage was maintained at 94% and a successful limb reconstruction rate of 98.5% was achieved. The rate of deep bone infection improved to 1.6% (one patient) in the follow-up period.\n\n\nCONCLUSION\nThe reasons for these improvements are multifactorial, but the major trauma network facilitating early presentation to the major trauma centre, senior orthopaedic and plastic surgical involvement at every stage and proactive microbiological management, may be important factors.\n\n\nTAKE HOME MESSAGE\nThis study demonstrates that a systemised trauma network combined with evidence based practice can lead to improvements in patient care.", "title": "" }, { "docid": "ca2d9b2fe08cda70aa37410aa30e2f2a", "text": "3D human pose estimation from a single image is a challenging problem, especially for in-the-wild settings due to the lack of 3D annotated data. We propose two anatomically inspired loss functions and use them with the weakly-supervised learning framework of [41] to jointly learn from large-scale in-the-wild 2D and indoor/synthetic 3D data. 
We also present a simple temporal network that exploits temporal and structural cues present in predicted pose sequences to temporally harmonize the pose estimations. We carefully analyze the proposed contributions through loss surface visualizations and sensitivity analysis to facilitate deeper understanding of their working mechanism. Our complete pipeline improves the state-of-the-art by 11.8% and 12% on Human3.6M and MPI-INF-3DHP, respectively, and runs at 30 FPS on a commodity graphics card.", "title": "" } ]
scidocsrr
eabe324a2abbd5aa247017c3b62cc6c5
Investigation into Big Data Impact on Digital Marketing
[ { "docid": "a2047969c4924a1e93b805b4f7d2402c", "text": "Knowledge is a resource that is valuable to an organization's ability to innovate and compete. It exists within the individual employees, and also in a composite sense within the organization. According to the resourcebased view of the firm (RBV), strategic assets are the critical determinants of an organization's ability to maintain a sustainable competitive advantage. This paper will combine RBV theory with characteristics of knowledge to show that organizational knowledge is a strategic asset. Knowledge management is discussed frequently in the literature as a mechanism for capturing and disseminating the knowledge that exists within the organization. This paper will also explain practical considerations for implementation of knowledge management principles.", "title": "" }, { "docid": "0994065c757a88373a4d97e5facfee85", "text": "Scholarly literature suggests digital marketing skills gaps in industry, but these skills gaps are not clearly identified. The research aims to specify any digital marketing skills gaps encountered by professionals working in communication industries. In-depth interviews were undertaken with 20 communication industry professionals. A focus group followed, testing the rigour of the data. We find that a lack of specific technical skills; a need for best practice guidance on evaluation metrics, and a lack of intelligent futureproofing for dynamic technological change and development are skills gaps currently challenging the communication industry. However, the challenge of integrating digital marketing approaches with established marketing practice emerges as the key skills gap. Emerging from the key findings, a Digital Marketer Model was developed, highlighting the key competencies and skills needed by an excellent digital marketer. The research concludes that guidance on best practice, focusing upon evaluation metrics, futureproofing and strategic integration, needs to be developed for the communication industry. The Digital Marketing Model should be subject to further testing in industry and academia. Suggestions for further research are discussed.", "title": "" } ]
[ { "docid": "1145d2375414afbdd5f1e6e703638028", "text": "Content addressable memories (CAMs) are very attractive for high-speed table lookups in modern network systems. This paper presents a low-power dual match line (ML) ternary CAM (TCAM) to address the power consumption issue of CAMs. The highly capacitive ML is divided into two segments to reduce the active capacitance and hence the power. We analyze possible cases of mismatches and demonstrate a significant reduction in power (up to 43%) for a small penalty in search speed (4%).", "title": "" }, { "docid": "00277e4562f707d37844e6214d1f8777", "text": "Video super-resolution (SR) aims at estimating a high-resolution video sequence from a low-resolution (LR) one. Given that the deep learning has been successfully applied to the task of single image SR, which demonstrates the strong capability of neural networks for modeling spatial relation within one single image, the key challenge to conduct video SR is how to efficiently and effectively exploit the temporal dependence among consecutive LR frames other than the spatial relation. However, this remains challenging because the complex motion is difficult to model and can bring detrimental effects if not handled properly. We tackle the problem of learning temporal dynamics from two aspects. First, we propose a temporal adaptive neural network that can adaptively determine the optimal scale of temporal dependence. Inspired by the inception module in GoogLeNet [1], filters of various temporal scales are applied to the input LR sequence before their responses are adaptively aggregated, in order to fully exploit the temporal relation among the consecutive LR frames. Second, we decrease the complexity of motion among neighboring frames using a spatial alignment network that can be end-to-end trained with the temporal adaptive network and has the merit of increasing the robustness to complex motion and the efficiency compared with the competing image alignment methods. We provide a comprehensive evaluation of the temporal adaptation and the spatial alignment modules. We show that the temporal adaptive design considerably improves the SR quality over its plain counterparts, and the spatial alignment network is able to attain comparable SR performance with the sophisticated optical flow-based approach, but requires a much less running time. Overall, our proposed model with learned temporal dynamics is shown to achieve the state-of-the-art SR results in terms of not only spatial consistency but also the temporal coherence on public video data sets. More information can be found in http://www.ifp.illinois.edu/~dingliu2/videoSR/.", "title": "" }, { "docid": "c27fb42cf33399c9c84245eeda72dd46", "text": "The proliferation of technology has empowered the web applications. At the same time, the presences of Cross-Site Scripting (XSS) vulnerabilities in web applications have become a major concern for all. Despite the many current detection and prevention approaches, attackers are exploiting XSS vulnerabilities continuously and causing significant harm to the web users. In this paper, we formulate the detection of XSS vulnerabilities as a prediction model based classification problem. A novel approach based on text-mining and pattern-matching techniques is proposed to extract a set of features from source code files. The extracted features are used to build prediction models, which can discriminate the vulnerable code files from the benign ones. 
The efficiency of the developed models is evaluated on a publicly available labeled dataset that contains 9408 PHP labeled (i.e. safe, unsafe) source code files. The experimental results depict the superiority of the proposed approach over existing ones.", "title": "" }, { "docid": "b4796891108f41b1faf054636d3eefd2", "text": "Business process analysis ranges from model verification at design-time to the monitoring of processes at runtime. Much progress has been achieved in process verification. Today we are able to verify the entire reference model of SAP without any problems. Moreover, more and more processes leave their “trail” in the form of event logs. This makes it interesting to apply process mining to these logs. Interestingly, practical applications of process mining reveal that reality is often quite different from the idealized models, also referred to as “PowerPoint reality”. Future process-aware information systems will need to provide full support of the entire life-cycle of business processes. Recent results in business process analysis show that this is indeed possible, e.g., the possibilities offered by process mining tools such as ProM are breathtaking both from a scientific and practical perspective.", "title": "" }, { "docid": "76071bd6bf0874191e2cdd3b491dc6c6", "text": "Steganography is a collection of methods to hide secret information (“payload”) within non-secret information (“container”). Its counterpart, Steganalysis, is the practice of determining if a message contains a hidden payload, and recovering it if possible. Presence of hidden payloads is typically detected by a binary classifier. In the present study, we propose a new model for generating image-like containers based on Deep Convolutional Generative Adversarial Networks (DCGAN). This approach allows to generate more steganalysis-secure message embedding using standard steganography algorithms. Experiment results demonstrate that the new model successfully deceives the steganography analyzer, and for this reason, can be used in steganographic applications.", "title": "" }, { "docid": "3132db67005f04591f93e77a2855caab", "text": "Money laundering refers to activities pertaining to hiding the true income, evading taxes, or converting illegally earned money for normal use. These activities are often performed through shell companies that masquerade as real companies but where the actual purpose is to launder money. Shell companies are used in all the three phases of money laundering, namely, placement, layering, and integration, often simultaneously. In this paper, we aim to identify shell companies. We propose to use only bank transactions since that is easily available. In particular, we look at all incoming and outgoing transactions from a particular bank account along with its various attributes, and use anomaly detection techniques to identify the accounts that pertain to shell companies. Our aim is to create an initial list of potential shell company candidates which can be investigated by financial experts later. Due to lack of real data, we propose a banking transactions simulator (BTS) to simulate both honest as well as shell company transactions by studying a host of actual real-world fraud cases. We apply anomaly detection algorithms to detect candidate shell companies. 
Results indicate that we are able to identify the shell companies with a high degree of precision and recall.", "title": "" }, { "docid": "dfcc6b34f008e4ea9d560b5da4826f4d", "text": "The paper describes a Chinese shadow play animation system based on Kinect. Users, without any professional training, can personally manipulate the shadow characters to finish a shadow play performance by their body actions and get a shadow play video through giving the record command to our system if they want. In our system, Kinect is responsible for capturing human movement and voice commands data. Gesture recognition module is used to control the change of the shadow play scenes. After packaging the data from Kinect and the recognition result from gesture recognition module, VRPN transmits them to the server-side. At last, the server-side uses the information to control the motion of shadow characters and video recording. This system not only achieves human-computer interaction, but also realizes the interaction between people. It brings an entertaining experience to users and is easy to operate for all ages. Even more important is that the application background of Chinese shadow play embodies the protection of the art of shadow play animation. Keywords—Gesture recognition, Kinect, shadow play animation, VRPN.", "title": "" }, { "docid": "94e2bfa218791199a59037f9ea882487", "text": "As a developing discipline, research results in the field of human computer interaction (HCI) tend to be \"soft\". Many workers in the field have argued that the advancement of HCI lies in \"hardening\" the field with quantitative and robust models. In reality, few theoretical, quantitative tools are available in user interface research and development. A rare exception to this is Fitts' law. Extending information theory to human perceptual-motor system, Paul Fitts (1954) found a logarithmic relationship that models speed accuracy tradeoffs in aimed movements. A great number of studies have verified and / or applied Fitts' law to HCI problems, such as pointing performance on a screen, making Fitts' law one of the most intensively studied topics in the HCI literature.", "title": "" }, { "docid": "c41038d0e3cf34e8a1dcba07a86cce9a", "text": "Alzheimer's disease (AD) is a major neurodegenerative disease and is one of the most common causes of dementia in older adults. Among several factors, neuroinflammation is known to play a critical role in the pathogenesis of chronic neurodegenerative diseases. In particular, studies of brains affected by AD show a clear involvement of several inflammatory pathways. Furthermore, depending on the brain regions affected by the disease, the nature and the effect of inflammation can vary. Here, in order to shed more light on distinct and common features of inflammation in different brain regions affected by AD, we employed a computational approach to analyze gene expression data of six site-specific neuronal populations from AD patients. Our network based computational approach is driven by the concept that a sustained inflammatory environment could result in neurotoxicity leading to the disease. Thus, our method aims to infer intracellular signaling pathways/networks that are likely to be constantly activated or inhibited due to persistent inflammatory conditions. The computational analysis identified several inflammatory mediators, such as tumor necrosis factor alpha (TNF-a)-associated pathway, as key upstream receptors/ligands that are likely to transmit sustained inflammatory signals. 
Further, the analysis revealed that several inflammatory mediators were mainly region specific with few commonalities across different brain regions. Taken together, our results show that our integrative approach aids identification of inflammation-related signaling pathways that could be responsible for the onset or the progression of AD and can be applied to study other neurodegenerative diseases. Furthermore, such computational approaches can enable the translation of clinical omics data toward the development of novel therapeutic strategies for neurodegenerative diseases.", "title": "" }, { "docid": "1a962bcbd5b670e532d841a74c2fe724", "text": "In SCADA systems, many RTUs (Remote Terminal Units) are used for field data collection as well as for sending data to the master node through the communication system. In such a case, the master node presents the collected data and enables the manager to handle remote controlling activities. The RTU is the unit of data acquisition operating in a standalone manner. The processor used in an RTU is vulnerable to random faults due to the harsh environment around RTUs. Faults may lead to the failure of the RTU unit, which then becomes inaccessible for information acquisition. For long-running methods, fault tolerance has been a major concern and research problem for the last two decades. With the use of SCADA systems, the fault tolerance problem becomes more severe. Efficient fault tolerance is needed to handle faults so that message passing can be performed through all the layers of the SCADA communication system. Such faults include RTU faults, message passing layer faults in the communication system, etc. SCADA is one application of MPI. Several techniques for fault tolerance have been described for MPI, which are utilized in different applications such as SCADA. The goal of this paper is to present a study of the different fault tolerance techniques which can be used to optimize SCADA system availability by mitigating the faults in RTU devices and communication systems.", "title": "" }, { "docid": "f89107f7ae4a250af36630aba072b7a9", "text": "The new HTML5 standard provides much more access to client resources, such as user location and local data storage. Unfortunately, this greater access may create new security risks that potentially can yield new threats to user privacy and web attacks. One of these security risks lies with the HTML5 client-side database. It appears that data stored on the client file system is unencrypted. Therefore, any stored data might be at risk of exposure. This paper explains and performs a security investigation into how the data is stored on client local file systems. The investigation was undertaken using Firefox and Chrome web browsers, and Encase (a computer forensic tool) was used to examine the stored data. This paper describes how the data can be retrieved after an application deletes the client side database. Finally, based on our findings, we propose a solution to correct any potential issues and security risks, and recommend ways to store data securely on local file systems.", "title": "" }, { "docid": "e2762e01ccf8319c726f3702867eeb8e", "text": "Balance maintenance and upright posture recovery under unexpected environmental forces are key requirements for safe and successful co-existence of humanoid robots in normal human environments. In this paper we present a two-phase control strategy for robust balance maintenance under a force disturbance. 
The first phase, called the reflex phase, is designed to withstand the immediate effect of the force. The second phase is the recovery phase where the system is steered back to a statically stable “home” posture. The reflex control law employs angular momentum and is characterized by its counter-intuitive quality of “yielding” to the disturbance. The recovery control employs a general scheme of seeking to maximize the potential energy and is robust to local ground surface feature. Biomechanics literature indicates a similar strategy in play during human balance maintenance.", "title": "" }, { "docid": "bef119e43fcc9f2f0b50fdf521026680", "text": "Automatic image annotation (AIA), a highly popular topic in the field of information retrieval research, has experienced significant progress within the last decade. Yet, the lack of a standardized evaluation platform tailored to the needs of AIA, has hindered effective evaluation of its methods, especially for region-based AIA. Therefore in this paper, we introduce the segmented and annotated IAPR TC-12 benchmark; an extended resource for the evaluation of AIA methods as well as the analysis of their impact on multimedia information retrieval. We describe the methodology adopted for the manual segmentation and annotation of images, and present statistics for the extended collection. The extended collection is publicly available and can be used to evaluate a variety of tasks in addition to image annotation. We also propose a soft measure for the evaluation of annotation performance and identify future research areas in which this extended test collection is likely to make a contribution. 2009 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "dc1cfdda40b23849f11187ce890c8f8b", "text": "Controlled sharing of information is needed and desirable for many applications and is supported in operating systems by access control mechanisms. This paper shows how to extend programming languages to provide controlled sharing. The extension permits expression of access constraints on shared data. Access constraints can apply both to simple objects, and to objects that are components of larger objects, such as bank account records in a bank's data base. The constraints are stated declaratively, and can be enforced by static checking similar to type checking. The approach can be used to extend any strongly-typed language, but is particularly suitable for extending languages that support the notion of abstract data types.", "title": "" }, { "docid": "00e5acdfb1e388b149bc729a7af108ee", "text": "Sleep is a growing area of research interest in medicine and neuroscience. Actually, one major concern is to find a correlation between several physiologic variables and sleep stages. There is a scientific agreement on the characteristics of the five stages of human sleep, based on EEG analysis. Nevertheless, manual stage classification is still the most widely used approach. This work proposes a new automatic sleep classification method based on unsupervised feature classification algorithms recently developed, and on EEG entropy measures. This scheme extracts entropy metrics from EEG records to obtain a feature vector. Then, these features are optimized in terms of relevance using the Q-α algorithm. Finally, the resulting set of features is entered into a clustering procedure to obtain a final segmentation of the sleep stages. 
The proposed method reached up to an average of 80% correctly classified stages for each patient separately while keeping the computational cost low.", "title": "" }, { "docid": "b1ef75c4a0dc481453fb68e94ec70cdc", "text": "Autonomous Land Vehicles (ALVs), due to their considerable potential applications in areas such as mining and defence, are currently the focus of intense research at robotics institutes worldwide. Control systems that provide reliable navigation, often in complex or previously unknown environments, are a core requirement of any ALV implementation. Three key aspects for the provision of such autonomous systems are: 1) path planning, 2) obstacle avoidance, and 3) path following. The work presented in this thesis, under the general umbrella of the ACFR’s own ALV project, the ‘High Speed Vehicle Project’, addresses these three mobile robot competencies in the context of an ALV based system. As such, it develops both the theoretical concepts and the practical components to realise an initial, fully functional implementation of such a system. This system, which is implemented on the ACFR’s (ute) test vehicle, allows the user to enter a trajectory and follow it, while avoiding any detected obstacles along the path.", "title": "" }, { "docid": "6e4f0a770fe2a34f99957f252110b6bd", "text": "Universal Dependencies (UD) provides a cross-linguistically uniform syntactic representation, with the aim of advancing multilingual applications of parsing and natural language understanding. Reddy et al. (2016) recently developed a semantic interface for (English) Stanford Dependencies, based on the lambda calculus. In this work, we introduce UDEPLAMBDA, a similar semantic interface for UD, which allows mapping natural language to logical forms in an almost language-independent framework. We evaluate our approach on semantic parsing for the task of question answering against Freebase. To facilitate multilingual evaluation, we provide German and Spanish translations of the WebQuestions and GraphQuestions datasets. Results show that UDEPLAMBDA outperforms strong baselines across languages and datasets. For English, it achieves the strongest result to date on GraphQuestions, with competitive results on WebQuestions.", "title": "" }, { "docid": "cad54b58e3dd47e1e92078519660e71d", "text": "Web images come hand in hand with valuable contextual information. Although this information has long been mined for various uses such as image annotation, clustering of images, inference of image semantic content, etc., insufficient attention has been given to address issues in mining this contextual information. In this paper, we propose a webpage segmentation algorithm targeting the extraction of web images and their contextual information based on their characteristics as they appear on webpages. We conducted a user study to obtain a human-labeled dataset to validate the effectiveness of our method and experiments demonstrated that our method can achieve better results compared to an existing segmentation algorithm.", "title": "" }, { "docid": "7df97d3a5c393053b22255a0414e574a", "text": "Let G be a directed graph containing n vertices, one of which is a distinguished source s, and m edges, each with a non-negative cost. We consider the problem of finding, for each possible sink vertex u, a pair of edge-disjoint paths from s to u of minimum total edge cost. Suurballe has given an O(n² log n)-time algorithm for this problem. 
We give an implementation of Suurballe’s algorithm that runs in O(m log(1+m/n) n) time and O(m) space. Our algorithm builds an implicit representation of the n pairs of paths; given this representation, the time necessary to explicitly construct the pair of paths for any given sink is O(1) per edge on the paths.", "title": "" }, { "docid": "5d9112213e6828d5668ac4a33d4582f9", "text": "This paper describes four patients whose chief symptoms were steatorrhoea and loss of weight. Despite the absence of a history of abdominal pain, investigations showed that these patients had chronic pancreatitis, which responded to medical treatment. The pathological findings in two of these cases and in six which came to necropsy are reported.", "title": "" } ]
scidocsrr
35acd1125604011e93fc78e2604ea45a
Image-Based Human Age Estimation by Manifold Learning and Locally Adjusted Robust Regression
[ { "docid": "b63591acc9a15a52029860806e2b1060", "text": "Age Specific Human-Computer Interaction (ASHCI) has vast potential applications in daily life. However, automatic age estimation technique is still underdeveloped. One of the main reasons is that the aging effects on human faces present several unique characteristics which make age estimation a challenging task that requires non-standard classification approaches. According to the speciality of the facial aging effects, this paper proposes the AGES (AGing pattErn Subspace) method for automatic age estimation. The basic idea is to model the aging pattern, which is defined as a sequence of personal aging face images, by learning a representative subspace. The proper aging pattern for an unseen face image is then determined by the projection in the subspace that can best reconstruct the face image, while the position of the face image in that aging pattern will indicate its age. The AGES method has shown encouraging performance in the comparative experiments either as an age estimator or as an age range estimator.", "title": "" }, { "docid": "1e2768be2148ff1fd102c6621e8da14d", "text": "Example-based learning for computer vision can be difficult when a large number of examples to represent each pattern or object class is not available. In such situations, learning from a small number of samples is of practical value. To study this issue, the task of face expression recognition with a small number of training images of each expression is considered. A new technique based on linear programming for both feature selection and classifier training is introduced. A pairwise framework for feature selection, instead of using all classes simultaneously, is presented. Experimental results compare the method with three others: a simplified Bayes classifier, support vector machine, and AdaBoost. Finally, each algorithm is analyzed and a new categorization of these algorithms is given, especially for learning from examples in the small sample case.", "title": "" }, { "docid": "83a644ac25c7db156d787629060fb32a", "text": "In this paper we study face recognition across ages within a real passport photo verification task. First, we propose using the gradient orientation pyramid for this task. Discarding the gradient magnitude and utilizing hierarchical techniques, we found that the new descriptor yields a robust and discriminative representation. With the proposed descriptor, we model face verification as a two-class problem and use a support vector machine as a classifier. The approach is applied to two passport data sets containing more than 1,800 image pairs from each person with large age differences. Although simple, our approach outperforms previously tested Bayesian technique and other descriptors, including the intensity difference and gradient with magnitude. In addition, it works as well as two commercial systems. Second, for the first time, we empirically study how age differences affect recognition performance. Our experiments show that, although the aging process adds difficulty to the recognition task, it does not surpass illumination or expression as a confounding factor.", "title": "" } ]
[ { "docid": "66382b88e0faa573251d5039ccd65d6c", "text": "In this communication, we present a new circularly-polarized array antenna using 2×2 linearly-polarized sub grid arrays in a low temperature co-fired ceramic technology for highly-integrated 60-GHz radio. The sub grid arrays are sequentially rotated and excited with a 90°-phase increment to radiate circularly-polarized waves. The feeding network of the array antenna is based on stripline quarter-wave matched T-junctions. The array antenna has a size of 15×15×0.9 mm3. Simulated and measured results confirm wide impedance, axial ratio, pattern, and gain bandwidths.", "title": "" }, { "docid": "a924ccb5a5465c1542fea5ac34749dd9", "text": "Self-awareness facilitates a proper assessment of cost-constrained cyber-physical systems, allocating limited resources where they are most needed. Together, situation awareness and attention are key enablers for self-awareness in efficient distributed sensing and computing networks.", "title": "" }, { "docid": "7c237153bbd9e43a93bccfdf5579ecfa", "text": "Over the last decade, efforts from industries and research communities have been made in addressing the security of Supervisory Control and Data Acquisition (SCADA) systems. However, the SCADA security deployed for critical infrastructures is still a challenging issue today. This paper gives an overview of the complexity of SCADA security. Products and applications in control network security are reviewed. Furthermore, new developments in SCADA security, especially the trend in technical and theoretical studies are presented. Some important topics on SCADA security are identified and highlighted and this can be served as the guide for future works in this area.", "title": "" }, { "docid": "0c1cd807339481f3a0b6da1fbe96950c", "text": "Predictive modeling using machine learning is an effective method for building compiler heuristics, but there is a shortage of benchmarks. Typical machine learning experiments outside of the compilation field train over thousands or millions of examples. In machine learning for compilers, however, there are typically only a few dozen common benchmarks available. This limits the quality of learned models, as they have very sparse training data for what are often high-dimensional feature spaces. What is needed is a way to generate an unbounded number of training programs that finely cover the feature space. At the same time the generated programs must be similar to the types of programs that human developers actually write, otherwise the learning will target the wrong parts of the feature space. We mine open source repositories for program fragments and apply deep learning techniques to automatically construct models for how humans write programs. We sample these models to generate an unbounded number of runnable training programs. The quality of the programs is such that even human developers struggle to distinguish our generated programs from hand-written code. We use our generator for OpenCL programs, CLgen, to automatically synthesize thousands of programs and show that learning over these improves the performance of a state of the art predictive model by 1.27x. In addition, the fine covering of the feature space automatically exposes weaknesses in the feature design which are invisible with the sparse training examples from existing benchmark suites. 
Correcting these weaknesses further increases performance by 4.30x.", "title": "" }, { "docid": "8bf9455d2eea2a2a5213ba8bb58e224c", "text": "Visual servoing is a well-known task in robotics. However, there are still challenges when multiple visual sources are combined to accurately guide the robot or occlusions appear. In this paper we present a novel visual servoing approach using hybrid multi-camera input data to lead a robot arm accurately to dynamically moving target points in the presence of partial occlusions. The approach uses four RGBD sensors as Eye-to-Hand (EtoH) visual input, and an arm-mounted stereo camera as Eye-in-Hand (EinH). A Master supervisor task selects between using the EtoH or the EinH, depending on the distance between the robot and target. The Master also selects the subset of EtoH cameras that best perceive the target. When the EinH sensor is used, if the target becomes occluded or goes out of the sensor's view-frustum, the Master switches back to the EtoH sensors to re-track the object. Using this adaptive visual input data, the robot is then controlled using an iterative planner that uses position, orientation and joint configuration to estimate the trajectory. Since the target is dynamic, this trajectory is updated every time-step. Experiments show good performance in four different situations: tracking a ball, targeting a bulls-eye, guiding a straw to a mouth and delivering an item to a moving hand. The experiments cover both simple situations such as a ball that is mostly visible from all cameras, and more complex situations such as the mouth which is partially occluded from some of the sensors.", "title": "" }, { "docid": "058a9737def3c1dc46218afe02e8d9b1", "text": "Covering point process theory, random geometric graphs, and coverage processes, this rigorous introduction to stochastic geometry will enable you to obtain powerful, general estimates and bounds of wireless network performance, and make good design choices for future wireless architectures and protocols that efficiently manage interference effects. Practical engineering applications are integrated with mathematical theory, with an understanding of probability the only prerequisite. At the same time, stochastic geometry is connected to percolation theory and the theory of random geometric graphs, and is accompanied by a brief introduction to the R statistical computing language. Combining theory and hands-on analytical techniques, this is a comprehensive guide to the spatial stochastic models essential for modeling and analysis of wireless network performance.", "title": "" }, { "docid": "5d247482bb06e837bf04c04582f4bfa2", "text": "This paper provides an introduction to support vector machines, kernel Fisher discriminant analysis, and kernel principal component analysis, as examples for successful kernel-based learning methods. We first give a short background about Vapnik-Chervonenkis theory and kernel feature spaces and then proceed to kernel based learning in supervised and unsupervised scenarios including practical and algorithmic considerations. We illustrate the usefulness of kernel algorithms by discussing applications such as optical character recognition and DNA analysis.", "title": "" }, { "docid": "60a3538ec6a64af6f8fd447ed0fb79f5", "text": "Several Pinned Photodiode (PPD) CMOS Image Sensors (CIS) are designed, manufactured, characterized and exposed biased to ionizing radiation up to 10 kGy(SiO2 ). 
In addition to the usually reported dark current increase and quantum efficiency drop at short wavelengths, several original radiation effects are shown: an increase of the pinning voltage, a decrease of the buried photodiode full well capacity, a large change in charge transfer efficiency, the creation of a large number of Total Ionizing Dose (TID) induced Dark Current Random Telegraph Signal (DC-RTS) centers active in the photodiode (even when the Transfer Gate (TG) is accumulated) and the complete depletion of the Pre-Metal Dielectric (PMD) interface at the highest TID leading to a large dark current and the loss of control of the TG on the dark current. The proposed mechanisms at the origin of these degradations are discussed. It is also demonstrated that biasing (i.e., operating) the PPD CIS during irradiation does not enhance the degradations compared to sensors grounded during irradiation.", "title": "" }, { "docid": "3e4a715c040ebb38674c057de6efc680", "text": "Agricultural data have a major role in the planning and success of rural development activities. Agriculturalists, planners, policy makers, government officials, farmers and researchers require relevant information to trigger decision making processes. This paper presents our approach towards extracting named entities from real-world agricultural data from different areas of agriculture using Conditional Random Fields (CRFs). Specifically, we have created a Named Entity tagset consisting of 19 fine grained tags. To the best of our knowledge, there is no specific tag set and annotated corpus available for the agricultural domain. We have performed several experiments using different combination of features and obtained encouraging results. Most of the issues observed in an error analysis have been addressed by post-processing heuristic rules, which resulted in a significant improvement of our system’s accuracy.", "title": "" }, { "docid": "78b358d12e94a100fc17beabcb34a43d", "text": "Model-free reinforcement learning has been shown to be a promising data driven approach for automatic dialogue policy optimization, but a relatively large amount of dialogue interactions is needed before the system reaches reasonable performance. Recently, Gaussian process based reinforcement learning methods have been shown to reduce the number of dialogues needed to reach optimal performance, and pre-training the policy with data gathered from different dialogue systems has further reduced this amount. Following this idea, a dialogue system designed for a single speaker can be initialised with data from other speakers, but if the dynamics of the speakers are very different the model will have a poor performance. When data gathered from different speakers is available, selecting the data from the most similar ones might improve the performance. We propose a method which automatically selects the data to transfer by defining a similarity measure between speakers, and uses this measure to weight the influence of the data from each speaker in the policy model. The methods are tested by simulating users with different severities of dysarthria interacting with a voice enabled environmental control system.", "title": "" }, { "docid": "970a76190e980afe51928dcaa6d594c8", "text": "Despite sequences being core to NLP, scant work has considered how to handle noisy sequence labels from multiple annotators for the same text. 
Given such annotations, we consider two complementary tasks: (1) aggregating sequential crowd labels to infer a best single set of consensus annotations; and (2) using crowd annotations as training data for a model that can predict sequences in unannotated text. For aggregation, we propose a novel Hidden Markov Model variant. To predict sequences in unannotated text, we propose a neural approach using Long Short Term Memory. We evaluate a suite of methods across two different applications and text genres: Named-Entity Recognition in news articles and Information Extraction from biomedical abstracts. Results show improvement over strong baselines. Our source code and data are available online.", "title": "" }, { "docid": "e9782d003112c64c3dc41c1f2a5c641e", "text": "Osgood-Schlatter's disease is a well known entity affecting the adolescent knee. Radiologic examination of the knee has been an integral part of the diagnosis of this condition for decades. However, the soft tissue changes have not been appreciated sufficiently. Emphasis is placed on the use of optimum radiographic technique and xeroradiography in the examination of the soft tissues of the knee.", "title": "" }, { "docid": "4a4a11d2779eab866ff32c564e54b69d", "text": "Although backpropagation neural networks generally predict better than decision trees do for pattern classiication problems, they are often regarded as black boxes, i.e., their predictions cannot be explained as those of decision trees. In many applications, more often than not, explicit knowledge is needed by human experts. This work drives a symbolic representation for neural networks to make explicit each prediction of a neural network. An algorithm is proposed and implemented to extract symbolic rules from neural networks. Explicitness of the extracted rules is supported by comparing the symbolic rules generated by decision trees methods. Empirical study demonstrates that the proposed algorithm generates high quality rules from neural networks comparable with those of decision trees in terms of predictive accuracy, number of rules and average number of conditions for a rule. The symbolic rules from nerual networks preserve high predictive accuracy of original networks. An early and shorter version of this paper has been accepted for presentation at IJCAI'95.", "title": "" }, { "docid": "4301e426c7fac17358a68b815a03d2e3", "text": "What exactly is “nonconscious consumer psychology?” We use the term to describe a category of consumption behavior that is driven by processes that occur outside a consumer's conscious awareness. In other words, individuals engage in consumptionrelated cognition, motivation, decision making, emotion, and behavior without recognizing the role that nonconscious processes played in shaping them. A growing literature has documented that a wide range of consumption behaviors are strongly influenced by factors outside of people's conscious awareness. For instance, consumers are often unaware they have been exposed to an environmental cue that triggers a given consumption behavior, or are unaware of a mental process that is occurring outside conscious awareness, or are even unaware of the consumption-related outcome of such a nonconscious process (Chartrand, 2005). Such processes are often adaptive and highly functional, but at times can lead to undesirable outcomes for consumers. 
By shining a light on a wide range of nonconscious consumer psychology we hope to facilitate increased reliance on our unconscious systems in certain situations and equip consumers to defend themselves when unconscious processes can lead to negative outcomes. What exactly then is a nonconscious psychological process for a consumer? We define it as a subset of automatic processing (Bargh & Chartrand, 1999). An automatic process is one that, once set into motion, has no need of conscious intervention (Bargh, 1989). The labeling of automatic processes in social and cognitive psychology, including those set forth in dual process models, implies that processes are either automatic or they are not. Labels such as automatic/controlled, implicit/explicit, conscious/ nonconscious, spontaneous/deliberative, and System 1/System 2 by their dichotomous nature suggest that consumers are either in conscious decision making mode or have their unconscious driving their decision making entirely. However, there are", "title": "" }, { "docid": "7d43cf2e0fcc795f6af4bdbcfb56d13e", "text": "Vehicular Ad hoc Networks is a special kind of mobile ad hoc network to provide communication among nearby vehicles and between vehicles and nearby fixed equipments. VANETs are mainly used for improving efficiency and safety of (future) transportation. There are chances of a number of possible attacks in VANET due to open nature of wireless medium. In this paper, we have classified these security attacks and logically organized/represented in a more lucid manner based on the level of effect of a particular security attack on intelligent vehicular traffic. Also, an effective solution is proposed for DOS based attacks which use the redundancy elimination mechanism consists of rate decreasing algorithm and state transition mechanism as its components. This solution basically adds a level of security to its already existing solutions of using various alternative options like channel-switching, frequency-hopping, communication technology switching and multiple-radio transceivers to counter affect the DOS attacks. Proposed scheme enhances the security in VANETs without using any cryptographic scheme.", "title": "" }, { "docid": "cf073f910b70151eab2e066e13e96b94", "text": "Paying health care providers to meet quality goals is an idea with widespread appeal, given the common perception that quality of care in the United States remains unacceptably low despite a decade of benchmarking and public reporting. There has been little critical analysis of the design of the current generation of quality incentive programs. In this paper we examine public reports of paying for quality over the past five years and assess each of the identified programs in terms of key design features, including the market share of payers, the structure of the reward system, the amount of revenue at stake, and the targeted domains of health care quality.", "title": "" }, { "docid": "951213cd4412570709fb34f437a05c72", "text": "In this paper, we present directional skip-gram (DSG), a simple but effective enhancement of the skip-gram model by explicitly distinguishing left and right context in word prediction. In doing so, a direction vector is introduced for each word, whose embedding is thus learned by not only word co-occurrence patterns in its context, but also the directions of its contextual words. 
Theoretical and empirical studies on complexity illustrate that our model can be trained as efficient as the original skip-gram model, when compared to other extensions of the skip-gram model. Experimental results show that our model outperforms others on different datasets in semantic (word similarity measurement) and syntactic (partof-speech tagging) evaluations, respectively.", "title": "" }, { "docid": "37927017353dc0bab9c081629d33d48c", "text": "Generating a secret key between two parties by extracting the shared randomness in the wireless fading channel is an emerging area of research. Previous works focus mainly on single-antenna systems. Multiple-antenna devices have the potential to provide more randomness for key generation than single-antenna ones. However, the performance of key generation using multiple-antenna devices in a real environment remains unknown. Different from the previous theoretical work on multiple-antenna key generation, we propose and implement a shared secret key generation protocol, Multiple-Antenna KEy generator (MAKE) using off-the-shelf 802.11n multiple-antenna devices. We also conduct extensive experiments and analysis in real indoor and outdoor mobile environments. Using the shared randomness extracted from measured Received Signal Strength Indicator (RSSI) to generate keys, our experimental results show that using laptops with three antennas, MAKE can increase the bit generation rate by more than four times over single-antenna systems. Our experiments validate the effectiveness of using multi-level quantization when there is enough mutual information in the channel. Our results also show the trade-off between bit generation rate and bit agreement ratio when using multi-level quantization. We further find that even if an eavesdropper has multiple antennas, she cannot gain much more information about the legitimate channel.", "title": "" }, { "docid": "c82cecc94eadfa9a916d89a9ee3fac21", "text": "In this paper, we develop a supply chain network model consisting of manufacturers and retailers in which the demands associated with the retail outlets are random. We model the optimizing behavior of the various decision-makers, derive the equilibrium conditions, and establish the finite-dimensional variational inequality formulation. We provide qualitative properties of the equilibrium pattern in terms of existence and uniqueness results and also establish conditions under which the proposed computational procedure is guaranteed to converge. Finally, we illustrate the model through several numerical examples for which the equilibrium prices and product shipments are computed. This is the first supply chain network equilibrium model with random demands for which modeling, qualitative analysis, and computational results have been obtained.", "title": "" }, { "docid": "e3326903fe350778242c039856601dfa", "text": "A review was conducted on the use of thermochemical biomass gasification for producing biofuels, biopower and chemicals. The upstream processes for gasification are similar to other biomass processing methods. However, challenges remain in the gasification and downstream processing for viable commercial applications. The challenges with gasification are to understand the effects of operating conditions on gasification reactions for reliably predicting and optimizing the product compositions, and for obtaining maximal efficiencies. 
Product gases can be converted to biofuels and chemicals such as Fischer-Tropsch fuels, green gasoline, hydrogen, dimethyl ether, ethanol, methanol, and higher alcohols. Processes and challenges for these conversions are also summarized.", "title": "" } ]
scidocsrr
2cdf51107ab0af158b22f072186f0138
Online python tutor: embeddable web-based program visualization for cs education
[ { "docid": "c57d9c4f62606e8fccef34ddd22edaec", "text": "Based on research into learning programming and a review of program visualization research, we designed an educational software tool that aims to target students' apparent fragile knowledge of elementary programming which manifests as difficulties in tracing and writing even simple programs. Most existing tools build on a single supporting technology and focus on one aspect of learning. For example, visualization tools support the development of a conceptual-level understanding of how programs work, and automatic assessment tools give feedback on submitted tasks. We implemented a combined tool that closely integrates programming tasks with visualizations of program execution and thus lets students practice writing code and more easily transition to visually tracing it in order to locate programming errors. In this paper we present Jype, a web-based tool that provides an environment for visualizing the line-by-line execution of Python programs and for solving programming exercises with support for immediate automatic feedback and an integrated visual debugger. Moreover, the debugger allows stepping back in the visualization of the execution as if executing in reverse. Jype is built for Python, when most research in programming education support tools revolves around Java.", "title": "" } ]
[ { "docid": "bc262b5366f1bf14e5120f68df8f5254", "text": "BACKGROUND\nThe aim of this study was to compare the results of laparoscopy-assisted total gastrectomy with those of open total gastrectomy for early gastric cancer.\n\n\nMETHODS\nPatients with gastric cancer who underwent total gastrectomy with curative intent in three Korean tertiary hospitals between January 2003 and December 2010 were included in this multicentre, retrospective, propensity score-matched cohort study. Cox proportional hazards regression models were used to evaluate the association between operation method and survival.\n\n\nRESULTS\nA total of 753 patients with early gastric cancer were included in the study. There were no significant differences in the matched cohort for overall survival (hazard ratio (HR) for laparoscopy-assisted versus open total gastrectomy 0.96, 95 per cent c.i. 0.57 to 1.65) or recurrence-free survival (HR 2.20, 0.51 to 9.52). The patterns of recurrence were no different between the two groups. The severity of complications, according to the Clavien-Dindo classification, was similar in both groups. The most common complications were anastomosis-related in the laparoscopy-assisted group (8.0 per cent versus 4.2 per cent in the open group; P = 0.015) and wound-related in the open group (1.6 versus 5.6 per cent respectively; P = 0.003). Postoperative death was more common in the laparoscopy-assisted group (1.6 versus 0.2 per cent; P = 0.045).\n\n\nCONCLUSION\nLaparoscopy-assisted total gastrectomy for early gastric cancer is feasible in terms of long-term results, including survival and recurrence. However, a higher postoperative mortality rate and an increased risk of anastomotic leakage after laparoscopic-assisted total gastrectomy are of concern.", "title": "" }, { "docid": "1c0e441afd88f00b690900c42b40841a", "text": "Convergence problems occur abundantly in all branches of mathematics or in the mathematical treatment of the sciences. Sequence transformations are principal tools to overcome convergence problems of the kind. They accomplish this by converting a slowly converging or diverging input sequence {sn} ∞ n=0 into another sequence {s ′ n }∞ n=0 with hopefully better numerical properties. Padé approximants, which convert the partial sums of a power series to a doubly indexed sequence of rational functions, are the best known sequence transformations, but the emphasis of the review will be on alternative sequence transformations which for some problems provide better results than Padé approximants.", "title": "" }, { "docid": "ec4d4e6d6f1c95ba3e5f0369562e25c4", "text": "In this paper we merge individual census data, individual patenting data, and individual IQ data from Finnish Defence Force to look at the probability of becoming an innovator and at the returns to invention. On the former, we find that: (i) it is strongly correlated with parental income; (ii) this correlation is greatly decreased when we control for parental education and child IQ. Turning to the returns to invention, we find that: (i) inventing increases the annual wage rate of the inventor by a significant amounts over a prolonged period after the invention; (ii) coworkers in the same firm also benefit from an innovation, the highest returns being earned by senior managers and entrepreneurs in the firm, especially in the long term. 
Finally, we find that becoming an inventor enhances both, intragenerational and intergenerational income mobility, and that inventors are very likely to make it to top income brackets.", "title": "" }, { "docid": "6d405b0f6b1381cec5e1d001e1102404", "text": "Consensus is an important building block for building replicated systems, and many consensus protocols have been proposed. In this paper, we investigate the building blocks of consensus protocols and use these building blocks to assemble a skeleton that can be configured to produce, among others, three well-known consensus protocols: Paxos, Chandra-Toueg, and Ben-Or. Although each of these protocols specifies only one quorum system explicitly, all also employ a second quorum system. We use the skeleton to implement a replicated service, allowing us to compare the performance of these consensus protocols under various workloads and failure scenarios.", "title": "" }, { "docid": "cac8aa7cfd50da05a6f973b019e8c4f5", "text": "Deep learning has led to remarkable advances when applied to problems where the data distribution does not change over the course of learning. In stark contrast, biological neural networks continually adapt to changing domains, and solve a diversity of tasks simultaneously. Furthermore, synapses in biological neurons are not simply real-valued scalars, but possess complex molecular machinery enabling non-trivial learning dynamics. In this study, we take a first step toward bringing this biological complexity into artificial neural networks. We introduce a model of intelligent synapses that accumulate task relevant information over time, and exploit this information to efficiently consolidate memories of old tasks to protect them from being overwritten as new tasks are learned. We apply our framework to learning sequences of related classification problems, and show that it dramatically reduces catastrophic forgetting while maintaining computational efficiency.", "title": "" }, { "docid": "14dec918e2b6b4678c38f533e0f1c9c1", "text": "A method is presented to assess stability changes in waves in early-stage ship design. The method is practical: the calculations can be completed quickly and can be applied as soon as lines are available. The intended use of the described method is for preliminary analysis. If stability changes that result in large roll motion are indicated early in the design process, this permits planning and budgeting for direct assessments using numerical simulations and/or model experiments. The main use of the proposed method is for the justification for hull form shape modification or for necessary additional analysis to better quantify potentially increased stability risk. The method is based on the evaluation of changing stability in irregular seas and can be applied to any type of ship. To demonstrate the robustness of the method, results for ten naval ship types are presented and discussed. The proposed method is shown to identify ships with known risk for large stability changes in waves.", "title": "" }, { "docid": "244b0b0029b4b440e1c5b953bda84aed", "text": "Most recent approaches to monocular 3D pose estimation rely on Deep Learning. They either train a Convolutional Neural Network to directly regress from an image to a 3D pose, which ignores the dependencies between human joints, or model these dependencies via a max-margin structured learning framework, which involves a high computational cost at inference time. 
In this paper, we introduce a Deep Learning regression architecture for structured prediction of 3D human pose from monocular images or 2D joint location heatmaps that relies on an overcomplete autoencoder to learn a high-dimensional latent pose representation and accounts for joint dependencies. We further propose an efficient Long Short-Term Memory network to enforce temporal consistency on 3D pose predictions. We demonstrate that our approach achieves state-of-the-art performance both in terms of structure preservation and prediction accuracy on standard 3D human pose estimation benchmarks.", "title": "" }, { "docid": "4e1414ce6a8fde64b0e7a89a2ced1a7e", "text": "Several innovative healthcare executives have recently introduced a new business strategy implementation tool: the Balanced Scorecard. The scorecard's measurement and management system provides the following potential benefits to healthcare organizations: It aligns the organization around a more market-oriented, customer-focused strategy It facilitates, monitors, and assesses the implementation of the strategy It provides a communication and collaboration mechanism It assigns accountability for performance at all levels of the organization It provides continual feedback on the strategy and promotes adjustments to marketplace and regulatory changes. We surveyed executives in nine provider organizations that were implementing the Balanced Scorecard. We asked about the following issues relating to its implementation and effect: 1. The role of the Balanced Scorecard in relation to a well-defined vision, mission, and strategy 2. The motivation for adopting the Balanced Scorecard 3. The difference between the Balanced Scorecard and other measurement systems 4. The process followed to develop and implement the Balanced Scorecard 5. The challenges and barriers during the development and implementation process 6. The benefits gained by the organization from adoption and use. The executives reported that the Balanced Scorecard strategy implementation and performance management tool could be successfully applied in the healthcare sector, enabling organizations to improve their competitive market positioning, financial results, and customer satisfaction. This article concludes with guidelines for other healthcare provider organizations to capture the benefits of the Balanced Scorecard performance management system.", "title": "" }, { "docid": "bc384d12513dc76bf76f11acd04d39f4", "text": "Traffic sign detection is an important task in traffic sign recognition systems. Chinese traffic signs have their unique features compared with traffic signs of other countries. Convolutional neural networks (CNNs) have achieved a breakthrough in computer vision tasks and made great success in traffic sign classification. In this paper, we present a Chinese traffic sign detection algorithm based on a deep convolutional network. To achieve real-time Chinese traffic sign detection, we propose an end-to-end convolutional network inspired by YOLOv2. In view of the characteristics of traffic signs, we take the multiple 1 × 1 convolutional layers in intermediate layers of the network and decrease the convolutional layers in top layers to reduce the computational complexity. For effectively detecting small traffic signs, we divide the input images into dense grids to obtain finer feature maps. Moreover, we expand the Chinese traffic sign dataset (CTSD) and improve the marker information, which is available online. 
All experimental results evaluated according to our expanded CTSD and German Traffic Sign Detection Benchmark (GTSDB) indicate that the proposed method is faster and more robust. The fastest detection speed achieved was 0.017 s per image.", "title": "" }, { "docid": "fe536ac94342c96f6710afb4a476278b", "text": "The human arm has 7 degrees of freedom (DOF) while only 6 DOF are required to position the wrist and orient the palm. Thus, the inverse kinematics of a human arm has a nonunique solution. Resolving this redundancy becomes critical as the human interacts with a wearable robot and the inverse kinematics solution of these two coupled systems must be identical to guarantee a seamless integration. The redundancy of the arm can be formulated by defining the swivel angle, the rotation angle of the plane defined by the upper and lower arm around a virtual axis that connects the shoulder and wrist joints. Analyzing reaching tasks recorded with a motion capture system indicates that the swivel angle is selected such that when the elbow joint is flexed, the palm points to the head. Based on these experimental results, a new criterion is formed to resolve the human arm redundancy. This criterion was implemented into the control algorithm of an upper limb 7-DOF wearable robot. Experimental results indicate that by using the proposed redundancy resolution criterion, the error between the predicted and the actual swivel angle adopted by the motor control system is less than 5°.", "title": "" }, { "docid": "4eda25ffa01bb177a41a1d6d82db6a0c", "text": "For ontologies to be cost-effectively deployed, we require a clear understanding of the various ways that ontologies are being used today. To achieve this end, we present a framework for understanding and classifying ontology applications. We identify four main categories of ontology applications: 1) neutral authoring, 2) ontology as specification, 3) common access to information, and 4) ontology-based search. In each category, we identify specific ontology application scenarios. For each, we indicate their intended purpose, the role of the ontology, the supporting technologies, and who the principal actors are and what they do. We illuminate the similarities and differences between scenarios. We draw on work from other communities, such as software developers and standards organizations. We use a relatively broad definition of ‘ontology’, to show that much of the work being done by those communities may be viewed as practical applications of ontologies. The common thread is the need for sharing the meaning of terms in a given domain, which is a central role of ontologies. An additional aim of this paper is to draw attention to common goals and supporting technologies of these relatively distinct communities to facilitate closer cooperation and faster progress.", "title": "" }, { "docid": "c746704be981521aa38f7760a37d4b83", "text": "Myoelectric or electromyogram (EMG) signals can be useful in intelligently recognizing intended limb motion of a person. This paper presents an attempt to develop a four-channel EMG signal acquisition system as part of ongoing research in the development of an active prosthetic device. The acquired signals are used for identification and classification of six unique movements of hand and wrist, viz. hand open, hand close, wrist flexion, wrist extension, ulnar deviation and radial deviation. This information is used for actuation of the prosthetic drive. The time domain features are extracted, and their dimension is reduced using principal component analysis. 
The reduced features are classified using two different techniques: k nearest neighbor and artificial neural networks, and the results are compared.", "title": "" }, { "docid": "ad903f1d8998200d89234f0244452ad4", "text": "Within the last two decades, social media has emerged as almost an alternate world where people communicate with each other and express opinions about almost anything. This makes platforms like Facebook, Reddit, Twitter, Myspace etc. a rich bank of heterogeneous data, primarily expressed via text but reflecting all textual and non-textual data that human interaction can produce. We propose a novel attention based hierarchical LSTM model to classify discourse act sequences in social media conversations, aimed at mining data from online discussion using textual meanings beyond sentence level. The very uniqueness of the task is the complete categorization of possible pragmatic roles in informal textual discussions, contrary to extraction of question-answers, stance detection or sarcasm identification which are very much role specific tasks. An early attempt was made on a Reddit discussion dataset. We train our model on the same data, and present test results on two different datasets, one from Reddit and one from Facebook. Our proposed model outperformed the previous one in terms of domain independence; without using platform-dependent structural features, our hierarchical LSTM with word relevance attention mechanism achieved F1-scores of 71% and 66% respectively to predict discourse roles of comments in Reddit and Facebook discussions. Efficiency of recurrent and convolutional architectures in order to learn discursive representation on the same task has been presented and analyzed, with different word and comment embedding schemes. Our attention mechanism enables us to inquire into relevance ordering of text segments according to their roles in discourse. We present a human annotator experiment to unveil important observations about modeling and data annotation. Equipped with our text-based discourse identification model, we inquire into how heterogeneous non-textual features like location, time, leaning of information etc. play their roles in characterizing online discussions on Facebook.", "title": "" }, { "docid": "3ee39231fc2fbf3b6295b1b105a33c05", "text": "We address a text regression problem: given a piece of text, predict a real-world continuous quantity associated with the text’s meaning. In this work, the text is an SEC-mandated financial report published annually by a publicly traded company, and the quantity to be predicted is volatility of stock returns, an empirical measure of financial risk. We apply well-known regression techniques to a large corpus of freely available financial reports, constructing regression models of volatility for the period following a report. Our models rival past volatility (a strong baseline) in predicting the target variable, and a single model that uses both can significantly outperform past volatility. 
Interestingly, our approach is more accurate for reports after the passage of the Sarbanes-Oxley Act of 2002, giving some evidence for the success of that legislation in making financial reports more informative.", "title": "" }, { "docid": "dbb2a53d4dfbf0840d96670a25f88113", "text": "In real-world recognition/classification tasks, limited by various objective factors, it is usually difficult to collect training samples to exhaust all classes when training a recognizer or classifier. A more realistic scenario is open set recognition (OSR), where incomplete knowledge of the world exists at training time, and unknown classes can be submitted to an algorithm during testing, requiring the classifiers not only to accurately classify the seen classes, but also to effectively deal with the unseen ones. This paper provides a comprehensive survey of existing open set recognition techniques covering various aspects ranging from related definitions, representations of models, datasets, experiment setup and evaluation metrics. Furthermore, we briefly analyze the relationships between OSR and its related tasks including zero-shot, one-shot (few-shot) recognition/learning techniques, classification with reject option, and so forth. Additionally, we also overview the open world recognition which can be seen as a natural extension of OSR. Importantly, we highlight the limitations of existing approaches and point out some promising subsequent research directions in this field.", "title": "" }, { "docid": "637a1bc6dd1e3445f5ef92df562a57bd", "text": "This paper deals with the 3D reconstruction problem for dynamic non-rigid objects with a single RGB-D sensor. It is a challenging task as we consider the almost inevitable accumulation error issue in some previous sequential fusion methods and also the possible failure of surface tracking in a long sequence. Therefore, we propose a global non-rigid registration framework and tackle the drifting problem via an explicit loop closure. Our novel scheme starts with a fusion step to get multiple partial scans from the input sequence, followed by a pairwise non-rigid registration and loop detection step to obtain correspondences between neighboring partial pieces and those pieces that form a loop. Then, we perform a global registration procedure to align all those pieces together into a consistent canonical space as guided by those matches that we have established. Finally, our proposed model-update step helps fixing potential misalignments that still exist after the global registration. Both geometric and appearance constraints are enforced during our alignment; therefore, we are able to get the recovered model with accurate geometry as well as high fidelity color maps for the mesh. Experiments on both synthetic and various real datasets have demonstrated the capability of our approach to reconstruct complete and watertight deformable objects.", "title": "" }, { "docid": "8800dba6bb4cea195c8871eb5be5b0a8", "text": "Text summarization and sentiment classification, in NLP, are two main tasks implemented on text analysis, focusing on extracting the major idea of a text at different levels. Based on the characteristics of both, sentiment classification can be regarded as a more abstractive summarization task. According to the scheme, a Self-Attentive Hierarchical model for jointly improving text Summarization and Sentiment Classification (SAHSSC) is proposed in this paper. 
This model jointly performs abstractive text summarization and sentiment classification within a hierarchical end-to-end neural framework, in which the sentiment classification layer on top of the summarization layer predicts the sentiment label in the light of the text and the generated summary. Furthermore, a self-attention layer is also proposed in the hierarchical framework, which is the bridge that connects the summarization layer and the sentiment classification layer and aims at capturing emotional information at text-level as well as summary-level. The proposed model can generate a more relevant summary and lead to a more accurate summary-aware sentiment prediction. Experimental results evaluated on SNAP amazon online review datasets show that our model outperforms the state-of-the-art baselines on both abstractive text summarization and sentiment classification by a considerable margin.", "title": "" }, { "docid": "1256f0799ed585092e60b50fb41055be", "text": "So far, plant identification has posed challenges for several researchers. Various methods and features have been proposed. However, there are still many approaches that could be investigated to develop robust plant identification systems. This paper reports several experiments in using Zernike moments to build foliage plant identification systems. In this case, Zernike moments were combined with other features: geometric features, color moments and gray-level co-occurrence matrix (GLCM). To implement the identification systems, two approaches have been investigated. The first approach used a distance measure and the second used Probabilistic Neural Networks (PNN). The results show that Zernike moments have a prospect as features in leaf identification systems when they are combined with other features.", "title": "" }, { "docid": "9bfcaa86b342147a6dd88da683c9dec7", "text": "Applying popular machine learning algorithms to large amounts of data has raised new challenges for ML practitioners. Traditional ML libraries do not support processing of huge datasets well, so new approaches were needed. Parallelization using modern parallel computing frameworks, such as MapReduce, CUDA, or Dryad gained in popularity and acceptance, resulting in new ML libraries developed on top of these frameworks. We will briefly introduce the most prominent industrial and academic outcomes, such as Apache Mahout, GraphLab or Jubatus. We will investigate how the cloud computing paradigm has impacted the field of ML. The first direction is popular statistics tools and libraries (R system, Python) deployed in the cloud. A second line of products is augmenting existing tools with plugins that allow users to create a Hadoop cluster in the cloud and run jobs on it. Next on the list are libraries of distributed implementations for ML algorithms, and on-premise deployments of complex systems for data analytics and data mining. The last approach on the radar of this survey is ML as Software-as-a-Service, with several BigData start-ups (and large companies as well) already opening their solutions to the market.", "title": "" }, { "docid": "8cbe0ff905a58e575f2d84e4e663a857", "text": "Mixed reality (MR) technology development is now gaining momentum due to advances in computer vision, sensor fusion, and realistic display technologies. With most of the research and development focused on delivering the promise of MR, only a few efforts address the privacy and security implications of this technology. 
This survey paper aims to bring these risks to light and to look into the latest security and privacy work on MR. Specifically, we list and review the different protection approaches that have been proposed to ensure user and data security and privacy in MR. We extend the scope to include work on related technologies such as augmented reality (AR), virtual reality (VR), and human-computer interaction (HCI) as crucial components, if not the origins, of MR, as well as numerous related works from the larger area of mobile devices, wearables, and Internet-of-Things (IoT). We highlight the lack of investigation, implementation, and evaluation of data protection approaches in MR. Further challenges and directions on MR security and privacy are also discussed.", "title": "" } ]
scidocsrr
a3041d0fadc6fba5a081fd6f04a804bf
Jump to better conclusions: SCAN both left and right
[ { "docid": "346349308d49ac2d3bb1cfa5cc1b429c", "text": "The prevalent approach to sequence to sequence learning maps an input sequence to a variable length output sequence via recurrent neural networks. We introduce an architecture based entirely on convolutional neural networks.1 Compared to recurrent models, computations over all elements can be fully parallelized during training and optimization is easier since the number of non-linearities is fixed and independent of the input length. Our use of gated linear units eases gradient propagation and we equip each decoder layer with a separate attention module. We outperform the accuracy of the deep LSTM setup of Wu et al. (2016) on both WMT’14 EnglishGerman and WMT’14 English-French translation at an order of magnitude faster speed, both on GPU and CPU.", "title": "" } ]
[ { "docid": "d63a81df4117f2b615f6e7208a2bdb6b", "text": "Recently, Location-based Services (LBS) became proactive by supporting smart notifications in case the user enters or leaves a specific geographical area, well-known as Geofencing. However, different geofences cannot be temporally related to each other. Therefore, we introduce a novel method to formalize sophisticated Geofencing scenarios as state and transition-based geofence models. Such a model considers temporal relations between geofences as well as duration constraints for the time being within a geofence or in transition between geofences. These are two highly important aspects in order to cover sophisticated scenarios in which a notification should be triggered only in case the user crosses multiple geofences in a defined temporal order or leaves a geofence after a certain amount of time. As a proof of concept, we introduce a prototype of a suitable user interface for designing complex geofence models in conjunction with the corresponding proactive LBS.", "title": "" }, { "docid": "3508a963a4f99d02d9c41dab6801d8fd", "text": "The role of classroom discussions in comprehension and learning has been the focus of investigations since the early 1960s. Despite this long history, no syntheses have quantitatively reviewed the vast body of literature on classroom discussions for their effects on students’ comprehension and learning. This comprehensive meta-analysis of empirical studies was conducted to examine evidence of the effects of classroom discussion on measures of teacher and student talk and on individual student comprehension and critical-thinking and reasoning outcomes. Results revealed that several discussion approaches produced strong increases in the amount of student talk and concomitant reductions in teacher talk, as well as substantial improvements in text comprehension. Few approaches to discussion were effective at increasing students’ literal or inferential comprehension and critical thinking and reasoning. Effects were moderated by study design, the nature of the outcome measure, and student academic ability. While the range of ages of participants in the reviewed studies was large, a majority of studies were conducted with students in 4th through 6th grades. Implications for research and practice are discussed.", "title": "" }, { "docid": "6c1a21055e21198c2102f2601b835104", "text": "Stroke is a leading cause of adult motor disability. Despite recent progress, recovery of motor function after stroke is usually incomplete. This double blind, Sham-controlled, crossover study was designed to test the hypothesis that non-invasive stimulation of the motor cortex could improve motor function in the paretic hand of patients with chronic stroke. Hand function was measured using the Jebsen-Taylor Hand Function Test (JTT), a widely used, well validated test for functional motor assessment that reflects activities of daily living. JTT measured in the paretic hand improved significantly with non-invasive transcranial direct current stimulation (tDCS), but not with Sham, an effect that outlasted the stimulation period, was present in every single patient tested and that correlated with an increment in motor cortical excitability within the affected hemisphere, expressed as increased recruitment curves (RC) and reduced short-interval intracortical inhibition. 
These results document a beneficial effect of non-invasive cortical stimulation on a set of hand functions that mimic activities of daily living in the paretic hand of patients with chronic stroke, and suggest that this interventional strategy in combination with customary rehabilitative treatments may play an adjuvant role in neurorehabilitation.", "title": "" }, { "docid": "fab33f2e32f4113c87e956e31674be58", "text": "We consider the problem of decomposing the total mutual information conveyed by a pair of predictor random variables about a target random variable into redundant, uniqueand synergistic contributions. We focus on the relationship be tween “redundant information” and the more familiar information theoretic notions of “common information.” Our main contri bution is an impossibility result. We show that for independent predictor random variables, any common information based measure of redundancy cannot induce a nonnegative decompositi on of the total mutual information. Interestingly, this entai ls that any reasonable measure of redundant information cannot be deri ved by optimization over a single random variable. Keywords—common and private information, synergy, redundancy, information lattice, sufficient statistic, partial information decomposition", "title": "" }, { "docid": "842202ed67b71c91630fcb63c4445e38", "text": "Yaumatei Dermatology Clinic, 12/F Yaumatei Specialist Clinic (New Extension), 143 Battery Street, Yaumatei, Kowloon, Hong Kong A 46-year-old Chinese man presented with one year history of itchy verrucous lesions over penis and scrotum. Skin biopsy confirmed epidermolytic acanthoma. Epidermolytic acanthoma is a rare benign tumour. Before making such a diagnosis, exclusion of other diseases, especially genital warts and bowenoid papulosis is necessary. Treatment of multiple epidermolytic acanthoma remains unsatisfactory.", "title": "" }, { "docid": "052a83669b39822eda51f2e7222074b4", "text": "A class-E synchronous rectifier has been designed and implemented using 0.13-μm CMOS technology. A design methodology based on the theory of time-reversal duality has been used where a class-E amplifier circuit is transformed into a class-E rectifier circuit. The methodology is distinctly different from other CMOS RF rectifier designs which use voltage multiplier techniques. Power losses in the rectifier are analyzed including saturation resistance in the switch, inductor losses, and current/voltage overlap losses. The rectifier circuit includes a 50-Ω single-ended RF input port with on-chip matching. The circuit is self-biased and completely powered from the RF input signal. Experimental results for the rectifier show a peak RF-to-dc conversion efficiency of 30% measured at a frequency of 2.4 GHz.", "title": "" }, { "docid": "ea9f43aaab4383369680c85a040cedcf", "text": "Efforts toward automated detection and identification of multistep cyber attack scenarios would benefit significantly from a methodology and language for modeling such scenarios. The Correlated Attack Modeling Language (CAML) uses a modular approach, where a module represents an inference step and modules can be linked together to detect multistep scenarios. CAML is accompanied by a library of predicates, which functions as a vocabulary to describe the properties of system states and events. The concept of attack patterns is introduced to facilitate reuse of generic modules in the attack modeling process. 
CAML is used in a prototype implementation of a scenario recognition engine that consumes first-level security alerts in real time and produces reports that identify multistep attack scenarios discovered in the alert stream.", "title": "" }, { "docid": "dfb16d97d293776e255397f1dc49bbbf", "text": "Self-service automatic teller machines (ATMs) have dramatically altered the ways in which customers interact with banks. ATMs provide the convenience of completing some banking transactions remotely and at any time. AT&T Global Information Solutions (GIS) is the world's leading provider of ATMs. These machines support such familiar services as cash withdrawals and balance inquiries. Further technological development has extended the utility and convenience of ATMs produced by GIS by facilitating check cashing and depositing, as well as direct bill payment, using an on-line system. These enhanced services, discussed in this paper, are made possible primarily through sophisticated optical character recognition (OCR) technology. Developed by an AT&T team that included GIS, AT&T Bell Laboratories Quality, Engineering, Software, and Technologies (QUEST), and AT&T Bell Laboratories Research, OCR technology was crucial to the development of these advanced ATMs.", "title": "" }, { "docid": "3bb4666a27f6bc961aa820d3f9301560", "text": "The collective of autonomous cars is expected to generate almost optimal traffic. In this position paper we discuss the multi-agent models and the verification results of the collective behaviour of autonomous cars. We argue that non-cooperative autonomous adaptation cannot guarantee optimal behaviour. The conjecture is that intention aware adaptation with a constraint on simultaneous decision making has the potential to avoid unwanted behaviour. The online routing game model is expected to be the basis to formally prove this conjecture.", "title": "" }, { "docid": "30e93cb20194b989b26a8689f06b8343", "text": "We present a robust method for solving the map matching problem exploiting massive GPS trace data. Map matching is the problem of determining the path of a user on a map from a sequence of GPS positions of that user --- what we call a trajectory. Commonly obtained from GPS devices, such trajectory data is often sparse and noisy. As a result, the accuracy of map matching is limited due to ambiguities in the possible routes consistent with trajectory samples. Our approach is based on the observation that many regularity patterns exist among common trajectories of human beings or vehicles as they normally move around. Among all possible connected k-segments on the road network (i.e., consecutive edges along the network whose total length is approximately k units), a typical trajectory collection only utilizes a small fraction. This motivates our data-driven map matching method, which optimizes the projected paths of the input trajectories so that the number of the k-segments being used is minimized. We present a formulation that admits efficient computation via alternating optimization. Furthermore, we have created a benchmark for evaluating the performance of our algorithm and others alike. Experimental results demonstrate that the proposed approach is superior to state-of-art single trajectory map matching techniques. Moreover, we also show that the extracted popular k-segments can be used to process trajectories that are not present in the original trajectory set. 
This leads to a map matching algorithm that is as efficient as existing single trajectory map matching algorithms, but with much improved map matching accuracy.", "title": "" }, { "docid": "76a99c83dfbe966839dd0bcfbd32fad6", "text": "Virtually all domains of cognitive function require the integration of distributed neural activity. Network analysis of human brain connectivity has consistently identified sets of regions that are critically important for enabling efficient neuronal signaling and communication. The central embedding of these candidate 'brain hubs' in anatomical networks supports their diverse functional roles across a broad range of cognitive tasks and widespread dynamic coupling within and across functional networks. The high level of centrality of brain hubs also renders them points of vulnerability that are susceptible to disconnection and dysfunction in brain disorders. Combining data from numerous empirical and computational studies, network approaches strongly suggest that brain hubs play important roles in information integration underpinning numerous aspects of complex cognitive function.", "title": "" }, { "docid": "1e8acf321f7ff3a1a496e4820364e2a8", "text": "The liver is a central regulator of metabolism, and liver failure thus constitutes a major health burden. Understanding how this complex organ develops during embryogenesis will yield insights into how liver regeneration can be promoted and how functional liver replacement tissue can be engineered. Recent studies of animal models have identified key signaling pathways and complex tissue interactions that progressively generate liver progenitor cells, differentiated lineages and functional tissues. In addition, progress in understanding how these cells interact, and how transcriptional and signaling programs precisely coordinate liver development, has begun to elucidate the molecular mechanisms underlying this complexity. Here, we review the lineage relationships, signaling pathways and transcriptional programs that orchestrate hepatogenesis.", "title": "" }, { "docid": "c896c4c81a3b8d18ad9f8073562f5514", "text": "A fully integrated passive UHF RFID tag with embedded temperature sensor, compatible with the ISO/IEC 18000 type 6C protocol, is developed in a standard 0.18µm CMOS process, which is designed to measure the axle temperature of a running train. The consumption of RF/analog front-end circuits is 1.556µ[email protected], and power dissipation of digital part is 5µ[email protected]. The CMOS temperature sensor exhibits a conversion time under 2 ms, less than 7 µW power dissipation, resolution of 0.31°C/LSB and error of +2.3/−1.1°C with a 1.8 V power supply for range from −35°C to 105°C. Measured sensitivity of tag is −5dBm at room temperature.", "title": "" }, { "docid": "8c04758d9f1c44e007abf6d2727d4a4f", "text": "The automatic identification and diagnosis of rice diseases are highly desired in the field of agricultural information. Deep learning is a hot research topic in pattern recognition and machine learning at present, it can effectively solve these problems in vegetable pathology. In this study, we propose a novel rice diseases identification method based on deep convolutional neural networks (CNNs) techniques. Using a dataset of 500 natural images of diseased and healthy rice leaves and stems captured from rice experimental field, CNNs are trained to identify 10 common rice diseases. Under the 10-fold cross-validation strategy, the proposed CNNs-based model achieves an accuracy of 95.48%. 
This accuracy is much higher than conventional machine learning model. The simulation results for the identification of rice diseases show the feasibility and effectiveness of the proposed method. © 2017 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "23c1bd79e91f2e07b883c5cdbd97a780", "text": "BACKGROUND\nPostprandial hypertriglyceridemia and hyperglycemia are considered risk factors for cardiovascular disease. Evidence suggests that postprandial hypertriglyceridemia and hyperglycemia induce endothelial dysfunction and inflammation through oxidative stress. Statins and angiotensin type 1 receptor blockers have been shown to reduce oxidative stress and inflammation, improving endothelial function.\n\n\nMETHODS AND RESULTS\nTwenty type 2 diabetic patients ate 3 different test meals: a high-fat meal, 75 g glucose alone, and a high-fat meal plus glucose. Glycemia, triglyceridemia, endothelial function, nitrotyrosine, C-reactive protein, intercellular adhesion molecule-1, and interleukin-6 were assayed during the tests. Subsequently, diabetics took atorvastatin 40 mg/d, irbesartan 300 mg/d, both, or placebo for 1 week. The 3 tests were performed again between 5 and 7 days after the start of each treatment. High-fat load and glucose alone produced a decrease in endothelial function and increases in nitrotyrosine, C-reactive protein, intercellular adhesion molecule-1, and interleukin-6. These effects were more pronounced when high-fat load and glucose were combined. Short-term atorvastatin and irbesartan treatments significantly counterbalanced these phenomena, and their combination was more effective than either therapy alone.\n\n\nCONCLUSIONS\nThis study confirms an independent and cumulative effect of postprandial hypertriglyceridemia and hyperglycemia on endothelial function and inflammation, suggesting oxidative stress as a common mediator of such an effect. Short-term treatment with atorvastatin and irbesartan may counterbalance this phenomenon; the combination of the 2 compounds is most effective.", "title": "" }, { "docid": "2e9a0bce883548288de0a5d380b1ddf6", "text": "Three-level neutral point clamped (NPC) inverter is a widely used topology of multilevel inverters. However, the neutral point fluctuates for certain switching states. At low modulation index, the fluctuations can be compensated using redundant switching states. But, at higher modulation index and in overmodulation region, the neutral point fluctuation deteriorates the performance of the inverter. This paper proposes a simple space vector pulsewidth modulation scheme for operating a three-level NPC inverter at higher modulation indexes, including overmodulation region, with neutral point balancing. Experimental results are provided", "title": "" }, { "docid": "4fc64e24e9b080ffcc45cae168c2e339", "text": "During real time control of a dynamic system, one needs to design control systems with advanced control strategies to handle inherent nonlinearities and disturbances. This paper deals with the designing of a model reference adaptive control system with the use of MIT rule for real time control of a ball and beam system. This paper uses the gradient theory to develop MIT rule in which one or more parameters of adaptive controller needs to be adjusted so that the plant could track the reference model. A linearized model of ball and beam system is used in this paper to design the controller on MATLAB and the designed controller is then applied for real time control of ball and beam system. 
Simulations carried on SIMULINK and MATLAB show good performance of the designed adaptive controller in real time.", "title": "" }, { "docid": "25e7e22d19d786ff953c8cfa47988aa2", "text": "The world of human-object interactions is rich. While generally we sit on chairs and sofas, if need be we can even sit on TVs or top of shelves. In recent years, there has been progress in modeling actions and human-object interactions. However, most of these approaches require lots of data. It is not clear if the learned representations of actions are generalizable to new categories. In this paper, we explore the problem of zero-shot learning of human-object interactions. Given limited verb-noun interactions in training data, we want to learn a model than can work even on unseen combinations. To deal with this problem, In this paper, we propose a novel method using external knowledge graph and graph convolutional networks which learns how to compose classifiers for verbnoun pairs. We also provide benchmarks on several dataset for zero-shot learning including both image and video. We hope our method, dataset and baselines will facilitate future research in this direction.", "title": "" }, { "docid": "e6633bf0c5f2fd18f739a7f3a1751854", "text": "Image inpainting in wavelet domain refers to the recovery of an image from incomplete and/or inaccurate wavelet coefficients. To reconstruct the image, total variation (TV) models have been widely used in the literature and they produce high-quality reconstructed images. In this paper, we consider an unconstrained TV-regularized, l2-data-fitting model to recover the image. The model is solved by the alternating direction method (ADM). At each iteration, ADM needs to solve three subproblems, all of which have closed-form solutions. The per-iteration computational cost of ADM is dominated by two Fourier transforms and two wavelet transforms, all of which admit fast computation. Convergence of the ADM iterative scheme is readily obtained. We also discuss extensions of this ADM scheme to solving two closely related constrained models. We present numerical results to show the efficiency and stability of ADM for solving wavelet domain image inpainting problems. Numerical comparison results of ADM with some recent algorithms are also reported.", "title": "" }, { "docid": "2910fe6ac9958d9cbf9014c5d3140030", "text": "We present a novel variational approach to estimate dense depth maps from multiple images in real-time. By using robust penalizers for both data term and regularizer, our method preserves discontinuities in the depth map. We demonstrate that the integration of multiple images substantially increases the robustness of estimated depth maps to noise in the input images. The integration of our method into recently published algorithms for camera tracking allows dense geometry reconstruction in real-time using a single handheld camera. We demonstrate the performance of our algorithm with real-world data.", "title": "" } ]
scidocsrr
81c05160a1fdae91c7e8607538e0fc38
FINANCIAL TIME SERIES FORECASTING USING ARTIFICIAL NEURAL NETWORKS
[ { "docid": "41a19d0e799e1801bacfbab19b1da467", "text": "This paper presents a neural network model for technical analysis of stock market, and its application to a buying and selling timing prediction system for stock index. When the numbers of learning samples are uneven among categories, the neural network with normal learning has the problem that it tries to improve only the prediction accuracy of most dominant category. In this paper, a learning method is proposed for improving prediction accuracy of other categories, controlling the numbers of learning samples by using information about the importance of each category. Experimental simulation using actual price data is carried out to demonstrate the usefulness of the method.", "title": "" } ]
[ { "docid": "ed0b269f861775550edd83b1eb420190", "text": "The continuous innovation process of the Information and Communication Technology (ICT) sector shape the way businesses redefine their business models. Though, current drivers of innovation processes focus solely on a technical dimension, while disregarding social and environmental drivers. However, examples like Nokia, Yahoo or Hewlett-Packard show that even though a profitable business model exists, a sound strategic innovation process is needed to remain profitable in the long term. A sustainable business model innovation demands the incorporation of all dimensions of the triple bottom line. Nevertheless, current management processes do not take the responsible steps to remain sustainable and keep being in denial of the evolutionary direction in which the markets develop, because the effects are not visible in short term. The implications are of substantial effect and can bring the foundation of the company’s business model in danger. This work evaluates the decision process that lets businesses decide in favor of un-sustainable changes and points out the barriers that prevent the development towards a sustainable business model that takes the new balance of forces into account.", "title": "" }, { "docid": "45c04c80a5e4c852c4e84ba66bd420dd", "text": "This paper addresses empirically and theoretically a question derived from the chunking theory of memory (Chase & Simon, 1973a, 1973b): To what extent is skilled chess memory limited by the size of short-term memory (about seven chunks)? This question is addressed first with an experiment where subjects, ranking from class A players to grandmasters, are asked to recall up to five positions presented during 5 s each. Results show a decline of percentage of recall with additional boards, but also show that expert players recall more pieces than is predicted by the chunking theory in its original form. A second experiment shows that longer latencies between the presentation of boards facilitate recall. In a third experiment, a Chessmaster gradually increases the number of boards he can reproduce with higher than 70% average accuracy to nine, replacing as many as 160 pieces correctly. To account for the results of these experiments, a revision of the Chase-Simon theory is proposed. It is suggested that chess players, like experts in other recall tasks, use long-term memory retrieval structures (Chase & Ericsson, 1982) or templates in addition to chunks in short-term memory to store information rapidly.", "title": "" }, { "docid": "2f94bd95e2b17b4ff517133544087fc9", "text": "MPEG DASH is a widely used standard for adaptive video streaming over HTTP. The conceptual architecture for DASH includes a web server and clients, which download media segments from the server. Clients select the resolution of video segments by using an Adaptive Bit-Rate (ABR) strategy; in particular, a throughput-based ABR is used in the case of live video applications. However, recent papers show that these strategies may suffer from the presence of proxies/caches in the network, which are instrumental in streaming video on a large scale. To face this issue, we propose to extend the MPEG DASH architecture with a Tracker functionality, enabling client-to-client sharing of control information. 
This extension paves the way to a novel family of Tracker-assisted strategies that allow a greater design flexibility, while solving the specific issue caused by proxies/caches; in addition, its utility goes beyond the problem at hand, as it can be used by other applications as well, e.g. for peer-to-peer streaming.", "title": "" }, { "docid": "edb0edc7962f8b09495240131681db9d", "text": "A new theory of motivation is described along with its applications to addiction and aversion. The theory assumes that many hedonic, affective, or emotional states are automatically opposed by central nervous system mechanisms which reduce the intensity of hedonic feelings, both pleasant and aversive. The opponent processes for most hedonic states are strengthened by use and are weakened by disuse. These simple assumptions lead to deductions of many known facts about acquired motivation. In addition, the theory suggests several new lines of research on motivation. It argues that the establishment of some types of acquired motivation does not depend on conditioning and is nonassociative in nature. The relationships between conditioning processes and postulated opponent processes are discussed. Finally, it is argued that the data on several types of acquired motivation, arising from either pleasurable or aversive stimulation, can be fruitfully reorganized and understood within the framework provided by the opponent-process model.", "title": "" }, { "docid": "f61567cb43dfa4941a8b87dedce0b051", "text": "A single layer wideband SIW-fed differential patch array is proposed in this paper. A SIW-CPS-CMS (substrate integrated waveguide - coupled lines - coupled microstrip line) transition is designed and has a bandwidth of about 50%, covering the E-band and W-band. The differential phase deviation between the coupled microstrip lines is less than 7.5° within the operation band. A 1×4 array and a 4×4 array are designed. The antenna is composed of SIW parallel power di vider network, SIW-CPS-CMS transition, and series differential-fed patch array. Simulated results show that the bandwidth of the 1×4 array and 4×4 array are 37% and 12%, and the realized gain are 10.5-12 dB and 17.2-20.2dB within the corresponding operation band, respectively. The features of single layer and wideband on impedance and gain of the proposed SIW-fed differential patch array make it a good candidate for automotive radar or other millimeter wave applications.", "title": "" }, { "docid": "83f14923970c83a55152464179e6bae9", "text": "Urine drug screening can detect cases of drug abuse, promote workplace safety, and monitor drugtherapy compliance. Compliance testing is necessary for patients taking controlled drugs. To order and interpret these tests, it is required to know of testing modalities, kinetic of drugs, and different causes of false-positive and false-negative results. Standard immunoassay testing is fast, cheap, and the preferred primarily test for urine drug screening. This method reliably detects commonly drugs of abuse such as opiates, opioids, amphetamine/methamphetamine, cocaine, cannabinoids, phencyclidine, barbiturates, and benzodiazepines. Although immunoassays are sensitive and specific to the presence of drugs/drug metabolites, false negative and positive results may be created in some cases. Unexpected positive test results should be checked with a confirmatory method such as gas chromatography/mass spectrometry. 
Careful attention to urine collection methods and performing the specimen integrity tests can identify some attempts by patients to produce false-negative test results.", "title": "" }, { "docid": "fdd7237680ee739b598cd508c4a2ed38", "text": "Rectovaginal Endometriosis (RVE) is a severe form of endometriosis classified by Kirtner as stage 4 [1,2]. It is less frequent than peritoneal or ovarian endometriosis affecting 3.8% to 37% of patients with endometriosis [3,4]. RVE infiltrates the rectum, vagina, and rectovaginal septum, up to obliteration of the pouch of Douglas [4]. Endometriotic nodules exceeding 30 mm in diameter have 17.9% risk of ureteral involvement [5], while 5.3% to 12% of patients have bowel endometriosis, most commonly found in the recto-sigmoid involving 74% of those patients [3,4].", "title": "" }, { "docid": "884aa1d674e431e2a781eb7e861f2541", "text": "A key question facing education policymakers in many emerging economies is whether to promote the local language, as opposed to English, in elementary schools. The dilemma is particularly strong in countries that underwent rapid globalization making English a lingua franca for international as well as domestic exchange. In this paper, we estimate the English premium in globalization globalizing economy, by exploiting an exogenous language policy intervention in India. English training was revoked from the primary grades of all public schools in the state of West Bengal. In a two-way fixed effects model we combine differences across birth cohorts and districts in the exposure to English education, to estimate the effect of the language policy on wage premium. In addition, since the policy was introduced only in the state of West Bengal, we combine other states with no such intervention to address the potential threat of differential district trends confounding our two-way estimates. Our results indicate a remarkably high English skill premium in the labor market. A 1% increase in the probability of learning English raises weekly wages by 1.6%. On the average this implies a 68% reduction in wages for those that do not learn English due to the change in language policy. We provide further evidence that occupational choice played a decisive role in determining the wage gap. JEL Classifications: H4, I2, J0, O1 1 We thank Sukkoo Kim, Sebastian Galiani, Charles Moul, Bruce Petersen, and Robert Pollak for their invaluable advice and support, Barry Chiswick for his helpful comments and seminar participants at the 2008 Canadian Economic Conference and NEUDC conference for the discussions. We also thank Daifeng He and Michael Plotzke for their feedback. We are grateful to the Bradley Foundation for providing research support and Center for Research in Economics and Strategy (CRES), in the Olin Business School, Washington University in St. Louis, for travel grants. All errors are ours. 2 Department of Economics, Washington University in St Louis, email: [email protected] 3 Department of Economics, Washington University in St Louis, email: [email protected]", "title": "" }, { "docid": "72142ddc1ad3906fd0b1320ab3a1e48f", "text": "The American Herbal Pharmacopoeia (AHP) today announced the release of a section of the soon-to-be-completed Cannabis Therapeutic Compendium Cannabis in the Management and Treatment of Seizures and Epilepsy. This scientific review is one of numerous scientific reviews that will encompass the broad range of science regarding the therapeutic effects and safety of cannabis. 
In recent months there has been considerable attention given to the potential benefit of cannabis for treating intractable seizure disorders including rare forms of epilepsy. For this reason, the author of the section, Dr. Ben Whalley, and AHP felt it important to release this section, in its near-finalized form, into the public domain for free dissemination. The full release of AHP's Therapeutic Compendium is scheduled for early 2014. Dr. Whalley is a Senior Lecturer in Pharmacology and Pharmacy Director of Research at the School of Pharmacy of the University of Reading in the United Kingdom. He is also a member of the UK Epilepsy Research Network. Dr. Whalley's research interests lie in investigating neuronal processes that underlie complex physiological functions such as neuronal hyperexcitability states and their consequential disorders such as epilepsy, ataxia and dystonias, as well as learning and memory. Since 2003, Dr. Whalley has authored and co-authored numerous scientific peer-reviewed papers on the potential effects of cannabis in relieving seizure disorders and investigating the underlying pathophysiological mechanisms of these disorders. The release of this comprehensive review is timely given the growing claims being made for cannabis to relieve even the most severe forms of seizures. According to Dr. Whalley: \" Recent announcements of regulated human clinical trials of pure components of cannabis for the treatment of epilepsy have raised hopes among patients with drug-resistant epilepsy, their caregivers, and clinicians. Also, claims in the media of the successful use of cannabis extracts for the treatment of epilepsies, particularly in children, have further highlighted the urgent need for new and effective treatments. \" However, Dr. Whalley added, \" We must bear in mind that the use of any new treatment, particularly in the critically ill, carries inherent risks. Releasing this section of the monograph into the public domain at this time provides clinicians, patients, and their caregivers with a single document that comprehensively summarizes the scientific knowledge to date regarding cannabis and epilepsy and so fully support informed, evidence-based decision making. \" This release also follows recommendations of the Epilepsy Foundation, which has called for increasing medical …", "title": "" }, { "docid": "dcda412c18e92650d9791023f13e4392", "text": "Graph can straightforwardly represent the relations between the objects, which inevitably draws a lot of attention of both academia and industry. Achievements mainly concentrate on homogeneous graph and bipartite graph. However, it is difficult to use existing algorithm in actual scenarios. Because in the real world, the type of the objects and the relations are diverse and the amount of the data can be very huge. Considering of the characteristics of \"black market\", we proposeHGsuspector, a novel and scalable algorithm for detecting collective fraud in directed heterogeneous graphs.We first decompose directed heterogeneous graphs into a set of bipartite graphs, then we define a metric on each connected bipartite graph and calculate scores of it, which fuse the structure information and event probability. The threshold for distinguishing between normal and abnormal can be obtained by statistic or other anomaly detection algorithms in scores space. 
We also provide a technical solution for fraud detection in e-commerce scenario, which has been successfully applied in Jingdong e-commerce platform to detect collective fraud in real time. The experiments on real-world datasets, which has billion nodes and edges, demonstrate that HGsuspector is more accurate and fast than the most practical and state-of-the-art approach by far.", "title": "" }, { "docid": "cb7a9b816fc1b83670cb9fb377974e5d", "text": "BACKGROUND\nCare attendants constitute the main workforce in nursing homes, but their heavy workload, low autonomy, and indefinite responsibility result in high levels of stress and may affect quality of care. However, few studies have focused of this problem.\n\n\nOBJECTIVES\nThe aim of this study was to examine work-related stress and associated factors that affect care attendants in nursing homes and to offer suggestions for how management can alleviate these problems in care facilities.\n\n\nMETHODS\nWe recruited participants from nine nursing homes with 50 or more beds located in middle Taiwan; 110 care attendants completed the questionnaire. The work stress scale for the care attendants was validated and achieved good reliability (Cronbach's alpha=0.93). We also conducted exploratory factor analysis.\n\n\nRESULTS\nSix factors were extracted from the work stress scale: insufficient ability, stressful reactions, heavy workload, trouble in care work, poor management, and working time problems. The explained variance achieved 64.96%. Factors related to higher work stress included working in a hospital-based nursing home, having a fixed schedule, night work, feeling burden, inconvenient facility, less enthusiasm, and self-rated higher stress.\n\n\nCONCLUSION\nWork stress for care attendants in nursing homes is related to human resource management and quality of care. We suggest potential management strategies to alleviate work stress for these workers.", "title": "" }, { "docid": "beff14cfa1d0e5437a81584596e666ea", "text": "Graphene has exceptional optical, mechanical, and electrical properties, making it an emerging material for novel optoelectronics, photonics, and flexible transparent electrode applications. However, the relatively high sheet resistance of graphene is a major constraint for many of these applications. Here we propose a new approach to achieve low sheet resistance in large-scale CVD monolayer graphene using nonvolatile ferroelectric polymer gating. In this hybrid structure, large-scale graphene is heavily doped up to 3 × 10(13) cm(-2) by nonvolatile ferroelectric dipoles, yielding a low sheet resistance of 120 Ω/□ at ambient conditions. The graphene-ferroelectric transparent conductors (GFeTCs) exhibit more than 95% transmittance from the visible to the near-infrared range owing to the highly transparent nature of the ferroelectric polymer. Together with its excellent mechanical flexibility, chemical inertness, and the simple fabrication process of ferroelectric polymers, the proposed GFeTCs represent a new route toward large-scale graphene-based transparent electrodes and optoelectronics.", "title": "" }, { "docid": "3a3d6fecb580c2448c21838317aec3e2", "text": "The Vehicle Routing Problem with Time windows (VRPTW) is an extension of the capacity constrained Vehicle Routing Problem (VRP). The VRPTW is NP-Complete and instances with 100 customers or more are very hard to solve optimally. We represent the VRPTW as a multi-objective problem and present a genetic algorithm solution using the Pareto ranking technique. 
We use a direct interpretation of the VRPTW as a multi-objective problem, in which the two objective dimensions are number of vehicles and total cost (distance). An advantage of this approach is that it is unnecessary to derive weights for a weighted sum scoring formula. This prevents the introduction of solution bias towards either of the problem dimensions. We argue that the VRPTW is most naturally viewed as a multi-objective problem, in which both vehicles and cost are of equal value, depending on the needs of the user. A result of our research is that the multi-objective optimization genetic algorithm returns a set of solutions that fairly consider both of these dimensions. Our approach is quite effective, as it provides solutions competitive with the best known in the literature, as well as new solutions that are not biased toward the number of vehicles. A set of well-known benchmark data are used to compare the effectiveness of the proposed method for solving the VRPTW.", "title": "" }, { "docid": "60afd7bbb52b4e644258bf73466be036", "text": "This article describes the physiology of wound healing, discusses considerations and techniques for dermabrasion, and presents case studies and figures for a series of patients who underwent dermabrasion after surgeries for facial trauma.", "title": "" }, { "docid": "f50f7daeac03fbd41f91ff48c054955b", "text": "Neuronal signalling and communication underpin virtually all aspects of brain activity and function. Network science approaches to modelling and analysing the dynamics of communication on networks have proved useful for simulating functional brain connectivity and predicting emergent network states. This Review surveys important aspects of communication dynamics in brain networks. We begin by sketching a conceptual framework that views communication dynamics as a necessary link between the empirical domains of structural and functional connectivity. We then consider how different local and global topological attributes of structural networks support potential patterns of network communication, and how the interactions between network topology and dynamic models can provide additional insights and constraints. We end by proposing that communication dynamics may act as potential generative models of effective connectivity and can offer insight into the mechanisms by which brain networks transform and process information.", "title": "" }, { "docid": "f7cdf631c12567fd37b04419eb8e4daa", "text": "A multiple-beam photonic beamforming receiver is proposed and demonstrated. The architecture is based on a large port-count demultiplexer and fast tunable lasers to achieve a passive design, with independent beam steering for multiple beam operation. A single true time delay module with four independent beams is experimentally demonstrated, showing extremely smooth RF response in the -band, fast switching capabilities, and negligible crosstalk.", "title": "" }, { "docid": "cc2a7d6ac63f12b29a6d30f20b5547be", "text": "The CyberDesk project is aimed at providing a software architecture that dynamically integrates software modules. This integration is driven by a user’s context, where context includes the user’s physical, social, emotional, and mental (focus-of-attention) environments. While a user’s context changes in all settings, it tends to change most frequently in a mobile setting. 
We have used the CyberDesk system in a desktop setting and are currently using it to build an intelligent home environment.", "title": "" }, { "docid": "4ce67aeca9e6b31c5021712f148108e2", "text": "Self-endorsing—the portrayal of potential consumers using products—is a novel advertising strategy made possible by the development of virtual environments. Three experiments compared self-endorsing to endorsing by an unfamiliar other. In Experiment 1, self-endorsing in online advertisements led to higher brand attitude and purchase intention than other-endorsing. Moreover, photographs were a more effective persuasion channel than text. In Experiment 2, participants wore a brand of clothing in a high-immersive virtual environment and preferred the brand worn by their virtual self to the brand worn by others. Experiment 3 demonstrated that an additional mechanism behind self-endorsing was the interactivity of the virtual representation. Evidence for self-referencing as a mediator is presented. In this context, consumers can experience presence while interacting with three-dimensional products on Web sites (Biocca et al. 2001; Edwards and Gangadharbatla 2001; Li, Daugherty, and Biocca 2001). When users feel a heightened sense of presence and perceive the virtual experience to be real, they are more easily persuaded by the advertisement (Kim and Biocca 1997). The differing degree, or the objectively measurable property of presence, is called immersion. Immersion is the extent to which media are capable of delivering a vivid illusion of reality using rich layers of sensory input (Slater and Wilbur 1997). Therefore, different levels of immersion (objective unit) lead to different experiences of presence (subjective unit), and both concepts are closely related to interactivity. Web sites are considered to be low-immersive virtual environments because of limited interactive capacity and lack of richness in sensory input, which decreases the sense of presence, whereas virtual reality is considered a high-immersive virtual environment because of its ability to reproduce perceptual richness, which heightens the sense of feeling that the virtual experience is real. Another differentiating aspect of virtual environments is that they offer plasticity of the appearance and behavior of virtual self-representations. It is well known that virtual selves may or may not be true replications of physical appearances (Farid 2009; Yee and Bailenson 2006), but users can also be faced with situations in which they are not controlling the behaviors of their own virtual representations (Fox and Bailenson 2009). In other words, a user can see him- or herself using (and perhaps enjoying) a product he or she has never physically used. Based on these unique features of virtual platforms, the current study aims to explore the effect of viewing a virtual representation that may or may not look like the self, endorsing a brand by use. We also manipulate the interactivity of endorsers within virtual environments to provide evidence for the mechanism behind self-endorsing. THE SELF-ENDORSED ADVERTISEMENT Recent studies have confirmed that positive connections between the self and brands can be created by subtle manipulations, such as mimicry of the self's nonverbal behaviors (Tanner et al. 2008). The slightest affiliation between the self and the other can lead to positive brand evaluations.
In a study by Ferraro, Bettman, and Chartrand (2009), an unfamiliar ingroup or out-group member was portrayed in a photograph with a water bottle bearing a brand name. The simple detail of the person wearing a baseball cap with the same school logo (i.e., in-group affiliation) triggered participants to choose the brand associated with the in-group member. Thus, the self–brand relationship significantly influences brand attitude, but self-endorsing has not received scientific attention to date, arguably because it was not easy to implement before the onset of virtual environments. Prior research has studied the effectiveness of different types of endorsers and their influence on the persuasiveness of advertisements (Friedman and Friedman 1979; Stafford, Stafford, and Day 2002), but the self was not considered in these investigations as a possible source of endorsement. However, there is the possibility that the currently sporadic use of self-endorsing (e.g., www.myvirtualmodel.com) will increase dramatically. For instance, personalized recommendations are being sent to consumers based on online “footsteps” of prior purchases (Tam and Ho 2006). Furthermore, Google has spearheaded keyword search advertising, which displays text advertisements in real-time based on search words ( Jansen, Hudson, and Hunter 2008), and Yahoo has begun to display video and image advertisements based on search words (Clifford 2009). Considering the availability of personal images on the Web due to the widespread employment of social networking sites, the idea of self-endorsing may spread quickly. An advertiser could replace the endorser shown in the image advertisement called by search words with the user to create a self-endorsed advertisement. Thus, the timely investigation of the influence of self-endorsing on users, as well as its mechanism, is imperative. Based on positivity biases related to the self (Baumeister 1998; Chambers and Windschitl 2004), self-endorsing may be a powerful persuasion tool. However, there may be instances when using the self in an advertisement may not be effective, such as when the virtual representation does not look like the consumer and the consumer fails to identify with the representation. Self-endorsed advertisements may also lose persuasiveness when movements of the representation are not synched with the actions of the consumer. Another type of endorser that researchers are increasingly focusing on is the typical user endorser. Typical endorsers have an advantage in that they appeal to the similarity of product usage with the average user. For instance, highly attractive models are not always effective compared with normally attractive models, even for beauty-enhancing products (i.e., acne treatment), when users perceive that the highly attractive models do not need those products (Bower and Landreth 2001). Moreover, with the advancement of the Internet, typical endorsers are becoming more influential via online testimonials (Lee, Park, and Han 2006; Wang 2005). In the current studies, we compared the influence of typical endorsers (i.e., other-endorsing) and self-endorsers on brand attitude and purchase intentions. 
In addition to investigating the effects of self-endorsing, this work extends results of earlier studies on the effectiveness of different types of endorsers and makes important theoretical contributions by studying self-referencing as an underlying mechanism of self-endorsing.", "title": "" }, { "docid": "3476f91f068102ccf35c3855102f4d1b", "text": "Verification and validation (V&V) are the primary means to assess accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence application areas, such as, nuclear reactor safety, underground storage of nuclear waste, and safety of nuclear weapons. Although the terminology is not uniform across engineering disciplines, code verification deals with the assessment of the reliability of the software coding and solution verification deals with the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. Some fields, such as nuclear reactor safety, place little emphasis on code verification benchmarks and great emphasis on validation benchmarks that are closely related to actual reactors operating near safety-critical conditions. This paper proposes recommendations for the optimum design and use of code verification benchmarks based on classical analytical solutions, manufactured solutions, and highly accurate numerical solutions. It is believed that these benchmarks will prove useful to both in-house developed codes, as well as commercially licensed codes. In addition, this paper proposes recommendations for the design and use of validation benchmarks with emphasis on careful design of building-block experiments, estimation of experiment measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that predictive capability of a computational model is built on both the measurement of achievement in V&V, as well as how closely related are the V&V benchmarks to the actual application of interest, e.g., the magnitude of extrapolation beyond a validation benchmark to a complex engineering system of interest.", "title": "" }, { "docid": "17287942eaf5c590b0d48b73eac7bc7c", "text": "The successof the Particle Swarm Optimization (PSO) algorithm as a single-objective optimizer (mainly when dealing with continuous search spaces) hasmotivated researchers to extend the useof this bioinspired techniqueto other areas.One of them is multiobjective optimization. Despite the fact that the first proposalof a Multi-Objecti veParticle SwarmOptimizer (MOPSO) is over six years old, a considerable number of other algorithms have beenproposedsincethen. This paper presentsa comprehensi ve review of the various MOPSOsreported in the specializedliteratur e. As part of this review, we include a classificationof the approaches,and weidentify the main featuresof eachproposal. In the last part of the paper, we list someof the topicswithin this field that weconsideraspromisingareasof futur e research.", "title": "" } ]
scidocsrr
08a8ce6bea9ce053a4a2c10da877bf2f
PinOS: a programmable framework for whole-system dynamic instrumentation
[ { "docid": "b7222f86da6f1e44bd1dca88eb59dc4b", "text": "A virtualized system includes a new layer of software, the virtual machine monitor. The VMM's principal role is to arbitrate accesses to the underlying physical host platform's resources so that multiple operating systems (which are guests of the VMM) can share them. The VMM presents to each guest OS a set of virtual platform interfaces that constitute a virtual machine (VM). Once confined to specialized, proprietary, high-end server and mainframe systems, virtualization is now becoming more broadly available and is supported in off-the-shelf systems based on Intel architecture (IA) hardware. This development is due in part to the steady performance improvements of IA-based systems, which mitigates traditional virtualization performance overheads. Intel virtualization technology provides hardware support for processor virtualization, enabling simplifications of virtual machine monitor software. Resulting VMMs can support a wider range of legacy and future operating systems while maintaining high performance.", "title": "" } ]
[ { "docid": "a338df86cf504d246000c42512473f93", "text": "Natural Language Processing (NLP) has emerged with a wide scope of research in the area. The Burmese language, also called the Myanmar Language is a resource scarce, tonal, analytical, syllable-timed and principally monosyllabic language with Subject-Object-Verb (SOV) ordering. NLP of Burmese language is also challenged by the fact that it has no white spaces and word boundaries. Keeping these facts in view, the current paper is a first formal attempt to present a bibliography of research works pertinent to NLP tasks in Burmese language. Instead of presenting mere catalogue, the current work is also specifically elaborated by annotations as well as classifications of NLP task research works in NLP related categories. The paper presents the state-of-the-art of Burmese NLP tasks. Both annotations and classifications of NLP tasks of Burmese language are useful to the scientific community as it shows where the field of research in Burmese NLP is going. In fact, to the best of author’s knowledge, this is first work of its kind worldwide for any language. For a period spanning more than 25 years, the paper discusses Burmese language Word Identification, Segmentation, Disambiguation, Collation, Semantic Parsing and Tokenization followed by Part-Of-Speech (POS) Tagging, Machine Translation Systems (MTS), Text Keying/Input, Recognition and Text Display Methods. Burmese language WordNet, Search Engine and influence of other languages on Burmese language are also discussed.", "title": "" }, { "docid": "755c4c452a535f30e53f0e9e77f71d20", "text": "Learning approaches have shown great success in the task of super-resolving an image given a low resolution input. Video superresolution aims for exploiting additionally the information from multiple images. Typically, the images are related via optical flow and consecutive image warping. In this paper, we provide an end-to-end video superresolution network that, in contrast to previous works, includes the estimation of optical flow in the overall network architecture. We analyze the usage of optical flow for video super-resolution and find that common off-the-shelf image warping does not allow video super-resolution to benefit much from optical flow. We rather propose an operation for motion compensation that performs warping from low to high resolution directly. We show that with this network configuration, video superresolution can benefit from optical flow and we obtain state-of-the-art results on the popular test sets. We also show that the processing of whole images rather than independent patches is responsible for a large increase in accuracy.", "title": "" }, { "docid": "6089f02c3fc3b1760c03190818c28af1", "text": "In this paper we suggest viewing images (as well as attacks on them) as a sequence of linear operators and propose novel hashing algorithms employing transforms that are based on matrix invariants. To derive this sequence, we simply cover a two dimensional representation of an image by a sequence of (possibly overlapping) rectangles R/sub i/ whose sizes and locations are chosen randomly/sup 1/ from a suitable distribution. The restriction of the image (representation) to each R/sub i/ gives rise to a matrix A/sub i/. The fact that A/sub i/'s will overlap and are random, makes the sequence (respectively) a redundant and non-standard representation of images, but is crucial for our purposes. 
Our algorithms first construct a secondary image, derived from the input image by pseudo-randomly extracting features that approximately capture semi-global geometric characteristics. From the secondary image (which does not perceptually resemble the input), we further extract the final features which can be used as a hash value (and can be further suitably quantized). In this paper, we use spectral matrix invariants as embodied by singular value decomposition. Surprisingly, formation of the secondary image turns out to be quite important since it not only introduces further robustness (i.e., resistance against standard signal processing transformations), but also enhances the security properties (i.e. resistance against intentional attacks). Indeed, our experiments reveal that our hashing algorithms extract most of the geometric information from the images and hence are robust to severe perturbations (e.g. up to 50% cropping by area with 20 degree rotations) on images while avoiding misclassification. Our methods are general enough to yield a watermark embedding scheme, which will be studied in another paper.", "title": "" }, { "docid": "60d6869cadebea71ef549bb2a7d7e5c3", "text": "BACKGROUND\nAcne is a common condition seen in up to 80% of people between 11 and 30 years of age and in up to 5% of older adults. In some patients, it can result in permanent scars that are surprisingly difficult to treat. A relatively new treatment, termed skin needling (needle dermabrasion), seems to be appropriate for the treatment of rolling scars in acne.\n\n\nAIM\nTo confirm the usefulness of skin needling in acne scarring treatment.\n\n\nMETHODS\nThe present study was conducted from September 2007 to March 2008 at the Department of Systemic Pathology, University of Naples Federico II and the UOC Dermatology Unit, University of Rome La Sapienza. In total, 32 patients (20 female, 12 male patients; age range 17-45) with acne rolling scars were enrolled. Each patient was treated with a specific tool in two sessions. Using digital cameras, photos of all patients were taken to evaluate scar depth and, in five patients, silicone rubber was used to make a microrelief impression of the scars. The photographic data were analysed by using the sign test statistic (alpha < 0.05) and the data from the cutaneous casts were analysed by fast Fourier transformation (FFT).\n\n\nRESULTS\nAnalysis of the patient photographs, supported by the sign test and of the degree of irregularity of the surface microrelief, supported by FFT, showed that, after only two sessions, the severity grade of rolling scars in all patients was greatly reduced and there was an overall aesthetic improvement. No patient showed any visible signs of the procedure or hyperpigmentation.\n\n\nCONCLUSION\nThe present study confirms that skin needling has an immediate effect in improving acne rolling scars and has advantages over other procedures.", "title": "" }, { "docid": "bb1554d174df80e7db20e943b4a69249", "text": "Any static, global analysis of the expression and data relationships in a program requires a knowledge of the control flow of the program. Since one of the primary reasons for doing such a global analysis in a compiler is to produce optimized programs, control flow analysis has been embedded in many compilers and has been described in several papers. An early paper by Prosser [5] described the use of Boolean matrices (or, more particularly, connectivity matrices) in flow analysis. 
The use of “dominance” relationships in flow analysis was first introduced by Prosser and much expanded by Lowry and Medlock [6]. References [6,8,9] describe compilers which use various forms of control flow analysis for optimization. Some recent developments in the area are reported in [4] and in [7].\n The underlying motivation in all the different types of control flow analysis is the need to codify the flow relationships in the program. The codification may be in connectivity matrices, in predecessor-successor tables, in dominance lists, etc. Whatever the form, the purpose is to facilitate determining what the flow relationships are; in other words to facilitate answering such questions as: is this an inner loop?, if an expression is removed from the loop where can it be correctly and profitably placed?, which variable definitions can affect this use?\n In this paper the basic control flow relationships are expressed in a directed graph. Various graph constructs are then found and shown to codify interesting global relationships.", "title": "" }, { "docid": "f8ac5a0dbd0bf8228b8304c1576189b9", "text": "The importance of cost planning for solid waste management (SWM) in industrialising regions (IR) is not well recognised. The approaches used to estimate costs of SWM can broadly be classified into three categories - the unit cost method, benchmarking techniques and developing cost models using sub-approaches such as cost and production function analysis. These methods have been developed into computer programmes with varying functionality and utility. IR mostly use the unit cost and benchmarking approach to estimate their SWM costs. The models for cost estimation, on the other hand, are used at times in industrialised countries, but not in IR. Taken together, these approaches could be viewed as precedents that can be modified appropriately to suit waste management systems in IR. The main challenges (or problems) one might face while attempting to do so are a lack of cost data, and a lack of quality for what data do exist. There are practical benefits to planners in IR where solid waste problems are critical and budgets are limited.", "title": "" }, { "docid": "3e24de04f0b1892b27fc60bb8a405d0d", "text": "A power factor (PF) corrected single stage, two-switch isolated zeta converter is proposed for arc welding. This modified zeta converter is having two switches and two clamping diodes on the primary side of a high-frequency transformer. This, in turn, results in reduced switch stress. The proposed converter is designed to operate in a discontinuous inductor current mode (DICM) to achieve inherent PF correction at the utility. The DICM operation substantially reduces the complexity of the control and effectively regulates the output dc voltage. The proposed converter offers several features, such as inherent overload current limit and fast parametrical response, to the load and source voltage conditions. This, in turn, results in an improved performance in terms of power quality indices and an enhanced weld bead quality. The proposed modified zeta converter is designed and its performance is simulated in the MATLAB/Simulink environment. Simulated results are also verified experimentally on a developed prototype of the converter. 
The performance of the system is investigated in terms of its input PF, displacement PF, total harmonic distortion of ac mains current, voltage regulation, and robustness to prove its efficacy in overall performance.", "title": "" }, { "docid": "c03a2f4634458d214d961c3ae9438d1d", "text": "An accurate small-signal model of three-phase photovoltaic (PV) inverters with a high-order grid filter is derived in this paper. The proposed model takes into account the influence of both the inverter operating point and the PV panel characteristics on the inverter dynamic response. A sensitivity study of the control loops to variations of the DC voltage, PV panel transconductance, supplied power, and grid inductance is performed using the proposed small-signal model. Analytical and experimental results carried out on a 100-kW PV inverter are presented.", "title": "" }, { "docid": "28bb2aa8a05e90072e2dc4a3b5d871d5", "text": "Radio Frequency Identification (RFID) security has not been properly handled in numerous applications, such as in public transportation systems. In this paper, a methodology to reverse engineer and detect security flaws is put into practice. Specifically, the communications protocol of an ISO/IEC 14443-B public transportation card used by hundreds of thousands of people in Spain was analyzed. By applying the methodology with a hardware tool (Proxmark 3), it was possible to access private information (e.g. trips performed, buses taken, fares applied…), to capture tag-reader communications, and even emulate both tags and readers.", "title": "" }, { "docid": "7db9cf29dd676fa3df5a2e0e95842b6e", "text": "We present a novel approach to still image denoising based on effective filtering in 3D transform domain by combining sliding-window transform processing with block-matching. We process blocks within the image in a sliding manner and utilize the block-matching concept by searching for blocks which are similar to the currently processed one. The matched blocks are stacked together to form a 3D array and due to the similarity between them, the data in the array exhibit a high level of correlation. We exploit this correlation by applying a 3D decorrelating unitary transform and effectively attenuate the noise by shrinkage of the transform coefficients. The subsequent inverse 3D transform yields estimates of all matched blocks. After repeating this procedure for all image blocks in a sliding manner, the final estimate is computed as a weighted average of all overlapping block estimates. A fast and efficient algorithm implementing the proposed approach is developed. The experimental results show that the proposed method delivers state-of-the-art denoising performance, both in terms of objective criteria and visual quality.", "title": "" }, { "docid": "207e90cebdf23fb37f10b5ed690cb4fc", "text": "In the scientific digital libraries, some papers from different research communities can be described by community-dependent keywords even if they share a semantically similar topic. Articles that are not tagged with enough keyword variations are poorly indexed in any information retrieval system which limits potentially fruitful exchanges between scientific disciplines. In this paper, we introduce a novel experimentally designed pipeline for multi-label semantic-based tagging developed for open-access metadata digital libraries. The approach starts by learning from a standard scientific categorization and a sample of topic tagged articles to find semantically relevant articles and enrich its metadata accordingly. 
Our proposed pipeline aims to enable researchers reaching articles from various disciplines that tend to use different terminologies. It allows retrieving semantically relevant articles given a limited known variation of search terms. In addition to achieving an accuracy that is higher than an expanded query based method using a topic synonym set extracted from a semantic network, our experiments also show a higher computational scalability versus other comparable techniques. We created a new benchmark extracted from the open-access metadata of a scientific digital library and published it along with the experiment code to allow further research in the topic.", "title": "" }, { "docid": "d004de75764e87fe246617cb7e3259a6", "text": "OBJECTIVE\nClinical decision-making regarding the prevention of depression is complex for pregnant women with histories of depression and their health care providers. Pregnant women with histories of depression report preference for nonpharmacological care, but few evidence-based options exist. Mindfulness-based cognitive therapy has strong evidence in the prevention of depressive relapse/recurrence among general populations and indications of promise as adapted for perinatal depression (MBCT-PD). With a pilot randomized clinical trial, our aim was to evaluate treatment acceptability and efficacy of MBCT-PD relative to treatment as usual (TAU).\n\n\nMETHOD\nPregnant adult women with depression histories were recruited from obstetric clinics at 2 sites and randomized to MBCT-PD (N = 43) or TAU (N = 43). Treatment acceptability was measured by assessing completion of sessions, at-home practice, and satisfaction. Clinical outcomes were interview-based depression relapse/recurrence status and self-reported depressive symptoms through 6 months postpartum.\n\n\nRESULTS\nConsistent with predictions, MBCT-PD for at-risk pregnant women was acceptable based on rates of completion of sessions and at-home practice assignments, and satisfaction with services was significantly higher for MBCT-PD than TAU. Moreover, at-risk women randomly assigned to MBCT-PD reported significantly improved depressive outcomes compared with participants receiving TAU, including significantly lower rates of depressive relapse/recurrence and lower depressive symptom severity during the course of the study.\n\n\nCONCLUSIONS\nMBCT-PD is an acceptable and clinically beneficial program for pregnant women with histories of depression; teaching the skills and practices of mindfulness meditation and cognitive-behavioral therapy during pregnancy may help to reduce the risk of depression during an important transition in many women's lives.", "title": "" }, { "docid": "97a3c599c7410a0e12e1784585260b95", "text": "This research focuses on 3D printed carbon-epoxy composite components in which the reinforcing carbon fibers have been preferentially aligned during the micro-extrusion process. Most polymer 3D printing techniques use unreinforced polymers. By adding carbon fiber as a reinforcing material, properties such as mechanical strength, electrical conductivity, and thermal conductivity can be greatly enhanced. However, these properties are significantly influenced by the degree of fiber alignment (or lack thereof). A Design of Experiments (DOE) approach was used to identify significant process parameters affecting preferential fiber alignment in the micro-extrusion process. 
A 2D Fast Fourier Transform (FFT) was used with ImageJ software to quantify the degree of fiber alignment in micro-extruded carbon-epoxy pastes. Based on analysis of experimental results, tensile test samples were printed with fibers aligned parallel and perpendicular to the tensile axis. A standard test method for tensile properties of plastic revealed that the 3D printed test coupons with fibers aligned parallel to the tensile axis were significantly better in tensile strength and modulus. Results of this research can be used to 3D print components with locally controlled fiber alignment that is difficult to achieve via conventional composite manufacturing techniques.", "title": "" }, { "docid": "8a613c019c6b3b83d55378c3149df8f7", "text": "To meet the performance and accuracy requirements of a brushless DC motor speed control system, this paper organically integrates a neural network and the traditional PID to constitute a brushless DC motor speed control system based on BP neural network self-tuning parameter PID control. The traditional PID controller is used for the first several seconds, and the system then switches to a parameter self-tuning PID controller based on the BP neural network after a few seconds of training. A simulation model was established in Matlab/Simulink. The simulation results indicate that the neural network PID controller can improve the robustness of the system and has better adaptability to the model and environment compared with the traditional PID controller.", "title": "" }, { "docid": "22bbeceff175ee2e9a462b753ce24103", "text": "BACKGROUND\nEUS-guided FNA can help diagnose and differentiate between various pancreatic and other lesions. The aim of this study was to compare approaches among involved/relevant physicians to the controversies surrounding the use of FNA in EUS.\n\n\nMETHODS\nA five-case survey was developed, piloted, and validated. It was collected from a total of 101 physicians, who were all either gastroenterologists (GIs), surgeons or oncologists. The survey compared the management strategies chosen by members of these relevant disciplines regarding EUS-guided FNA.\n\n\nRESULTS\nFor CT operable T2N0M0 pancreatic tumors the research demonstrated variance as to whether to undertake EUS-guided FNA, at p < 0.05. For inoperable pancreatic tumors 66.7% of oncologists, 62.2% of surgeons and 79.1% of GIs opted for FNA (p < 0.05). For cystic pancreatic lesions, oncologists were more likely to send patients to surgery without FNA. For stable simple pancreatic cysts (23 mm), most physicians (66.67%) did not recommend FNA. For a submucosal gastric 19 mm lesion, 63.2% of surgeons recommended FNA, vs. 90.0% of oncologists (p < 0.05).\n\n\nCONCLUSIONS\nControversies as to ideal application of EUS-FNA persist. Optimal guidelines should reflect the needs and concerns of the multidisciplinary team who treat patients who need EUS-FNA. Multi-specialty meetings assembled to manage patients with these disorders may be enlightening and may help develop consensus.", "title": "" }, { "docid": "c5a15fd3102115aebc940cbc4ce5e474", "text": "We present a novel approach for visual detection and attribute-based search of vehicles in crowded surveillance scenes. Large-scale processing is addressed along two dimensions: 1) large-scale indexing, where hundreds of billions of events need to be archived per month to enable effective search and 2) learning vehicle detectors with large-scale feature selection, using a feature pool containing millions of feature descriptors. 
Our method for vehicle detection also explicitly models occlusions and multiple vehicle types (e.g., buses, trucks, SUVs, cars), while requiring very little manual labeling. It runs quite efficiently at an average of 66 Hz on a conventional laptop computer. Once a vehicle is detected and tracked over the video, fine-grained attributes are extracted and ingested into a database to allow future search queries such as “Show me all blue trucks larger than 7 ft. length traveling at high speed northbound last Saturday, from 2 pm to 5 pm”. We perform a comprehensive quantitative analysis to validate our approach, showing its usefulness in realistic urban surveillance settings.", "title": "" }, { "docid": "ced0328f339248158e8414c3315330c5", "text": "Novel inline coplanar-waveguide (CPW) bandpass filters composed of quarter-wavelength stepped-impedance resonators are proposed, using loaded air-bridge enhanced capacitors and broadside-coupled microstrip-to-CPW transition structures for both wideband spurious suppression and size miniaturization. First, by suitably designing the loaded capacitor implemented by enhancing the air bridges printed over the CPW structure and the resonator parameters, the lower order spurious passbands of the proposed filter may effectively be suppressed. Next, by adopting the broadside-coupled microstrip-to-CPW transitions as the fed structures to provide required input and output coupling capacitances and high attenuation level in the upper stopband, the filter with suppressed higher order spurious responses may be achieved. In this study, two second- and fourth-order inline bandpass filters with wide rejection band are implemented and thoroughly examined. Specifically, the proposed second-order filter has its stopband extended up to 13.3f0, where f0 stands for the passband center frequency, and the fourth-order filter even possesses a better stopband up to 19.04f0 with a satisfactory rejection greater than 30 dB.", "title": "" }, { "docid": "2210176bcb0f139e3f7f7716447f3920", "text": "Automatic metadata generation provides scalability and usability for digital libraries and their collections. Machine learning methods offer robust and adaptable automatic metadata extraction. We describe a Support Vector Machine classification-based method for metadata extraction from the header part of research papers and show that it outperforms other machine learning methods on the same task. The method first classifies each line of the header into one or more of 15 classes. An iterative convergence procedure is then used to improve the line classification by using the predicted class labels of its neighbor lines in the previous round. Further metadata extraction is done by seeking the best chunk boundaries of each line. We found that discovery and use of the structural patterns of the data and domain based word clustering can improve the metadata extraction performance. An appropriate feature normalization also greatly improves the classification performance. Our metadata extraction method was originally designed to improve the metadata extraction quality of the digital libraries Citeseer [17] and EbizSearch [24]. We believe it can be generalized to other digital libraries.", "title": "" }, { "docid": "15c3ddb9c01d114ab7d09f010195465b", "text": "In this paper we have described a solution for supporting independent living of the elderly by means of equipping their home with a simple sensor network to monitor their behaviour. 
Standard home automation sensors including movement sensors and door entry point sensors are used. By monitoring the sensor data, important information regarding any anomalous behaviour will be identified. Different ways of visualizing large sensor data sets and representing them in a format suitable for clustering the abnormalities are also investigated. In the latter part of this paper, recurrent neural networks are used to predict the future values of the activities for each sensor. The predicted values are used to inform the caregiver in case anomalous behaviour is predicted in the near future. Data collection, classification and prediction are investigated in real home environments with elderly occupants suffering from dementia.", "title": "" }, { "docid": "a478b6f7accfb227e6ee5a6b35cd7fa1", "text": "This paper presents the development of an ultra-high-speed permanent magnet synchronous motor (PMSM) that produces output shaft power of 2000 W at 200 000 rpm with around 90% efficiency. Due to the guaranteed open-loop stability over the full operating speed range, the developed motor system is compact and low cost since it can avoid the design complexity of a closed-loop controller. This paper introduces the collaborative design approach of the motor system in order to ensure both performance requirements and stability over the full operating speed range. The actual implementation of the motor system is then discussed. Finally, computer simulation and experimental results are provided to validate the proposed design and its effectiveness", "title": "" } ]
scidocsrr
d49032ba4809876b705c678cd2e8bdde
A Communication System for Deaf and Dumb People
[ { "docid": "b73526f1fb0abb4373421994dbd07822", "text": "in our country around 2.78% of peoples are not able to speak (dumb). Their communications with others are only using the motion of their hands and expressions. We proposed a new technique called artificial speaking mouth for dumb people. It will be very helpful to them for conveying their thoughts to others. Some peoples are easily able to get the information from their motions. The remaining is not able to understand their way of conveying the message. In order to overcome the complexity the artificial mouth is introduced for the dumb peoples. This system is based on the motion sensor. According to dumb people, for every motion they have a meaning. That message is kept in a database. Likewise all templates are kept in the database. In the real time the template database is fed into a microcontroller and the motion sensor is fixed in their hand. For every action the motion sensors get accelerated and give the signal to the microcontroller. The microcontroller matches the motion with the database and produces the speech signal. The output of the system is using the speaker. By properly updating the database the dumb will speak like a normal person using the artificial mouth. The system also includes a text to speech conversion (TTS) block that interprets the matched gestures.", "title": "" } ]
[ { "docid": "3b4fec89137f9d4690bff6470b285192", "text": "The poor contrast and the overlapping of cervical cell cytoplasm are the major issues in the accurate segmentation of cervical cell cytoplasm. This paper presents an automated unsupervised cytoplasm segmentation approach which can effectively find the cytoplasm boundaries in overlapping cells. The proposed approach first segments the cell clumps from the cervical smear image and detects the nuclei in each cell clump. A modified Otsu method with prior class probability is proposed for accurate segmentation of nuclei from the cell clumps. Using distance regularized level set evolution, the contour around each nucleus is evolved until it reaches the cytoplasm boundaries. Promising results were obtained by experimenting on ISBI 2015 challenge dataset.", "title": "" }, { "docid": "6ee55ac672b1d87d4f4947655d321fb8", "text": "Federated identity providers, e.g., Facebook and PayPal, offer a convenient means for authenticating users to third-party applications. Unfortunately such cross-site authentications carry privacy and tracking risks. For example, federated identity providers can learn what applications users are accessing; meanwhile, the applications can know the users' identities in reality.\n This paper presents Crypto-Book, an anonymizing layer enabling federated identity authentications while preventing these risks. Crypto-Book uses a set of independently managed servers that employ a (t,n)-threshold cryptosystem to collectively assign credentials to each federated identity (in the form of either a public/private keypair or blinded signed messages). With the credentials in hand, clients can then leverage anonymous authentication techniques such as linkable ring signatures or partially blind signatures to log into third-party applications in an anonymous yet accountable way.\n We have implemented a prototype of Crypto-Book and demonstrated its use with three applications: a Wiki system, an anonymous group communication system, and a whistleblower submission system. Crypto-Book is practical and has low overhead: in a deployment within our research group, Crypto-Book group authentication took 1.607s end-to-end, an overhead of 1.2s compared to traditional non-privacy-preserving federated authentication.", "title": "" }, { "docid": "9dbff74b02153ee33f23d00884d909f7", "text": "The trend in isolated DC/DC converters is increasing output power demands and higher operating frequencies. Improved topologies and semiconductors can allow for lower loss at higher frequencies. A major barrier to further improvement is the transformer design. With high current levels and high frequency effects the transformers can become the major loss component in the circuit. High values of transformer leakage inductance can also greatly degrade the performance of the converter. Matrix transformers offer the ability to reduce winding loss and leakage inductance. This paper will study the impact of increased switching frequencies on transformer size and explore the use of matrix transformers in high current high frequency isolated applications. This paper will also propose an improved integrated matrix transformer design that can decrease core loss and further improve the performance of matrix transformers.", "title": "" }, { "docid": "3a7657130cb165682cc2e688a7e7195b", "text": "The functional simulator Simics provides a co-simulation integration path with a SystemC simulation environment to create Virtual Platforms. 
With increasing complexity of the SystemC models, this platform suffers from performance degradation due to the single threaded nature of the integrated Virtual Platform. In this paper, we present a multi-threaded Simics SystemC platform solution that significantly improves performance over the existing single threaded solution. The two schedulers run independently, only communicating in a thread-safe manner through a message interface. Simics-based logging and checkpointing are preserved within SystemC and tied to the corresponding Simics' APIs for a seamless experience. The solution also scales to multiple SystemC models within the platform, each running in its own thread with an instantiation of the SystemC kernel. A second multi-cell solution is proposed, providing comparable performance to the multi-threaded solution but reducing the burden of integration on the SystemC model. Empirical data is presented showing performance gains over the legacy single threaded solution.", "title": "" }, { "docid": "af4106bc4051e01146101aeb58a4261f", "text": "In recent years a great amount of research has focused on algorithms that learn features from unlabeled data. In this work we propose a model based on the Self-Organizing Map (SOM) neural network to learn features useful for the problem of automatic natural image classification. In particular we use the SOM model to learn single-layer features from the extremely challenging CIFAR-10 dataset, containing 60,000 tiny labeled natural images, and subsequently use these features with a pyramidal histogram encoding to train a linear SVM classifier. Despite the large number of images, the proposed feature learning method requires only a few minutes on an entry-level system; however, we show that a supervised classifier trained with learned features provides significantly better results than using raw pixel values or other handcrafted features designed specifically for image classification. Moreover, exploiting the topological property of the SOM neural network, it is possible to reduce the number of features and speed up the supervised training process combining topologically close neurons, without repeating the feature learning process.", "title": "" }, { "docid": "abef10b620026b2c054ca69a3c75f930", "text": "The idea that general intelligence may be more variable in males than in females has a long history. In recent years it has been presented as a reason that there is little, if any, mean sex difference in general intelligence, yet males tend to be overrepresented at both the top and bottom ends of its overall, presumably normal, distribution. Clear analysis of the actual distribution of general intelligence based on large and appropriately population-representative samples is rare, however. Using two population-wide surveys of general intelligence in 11-year-olds in Scotland, we showed that there were substantial departures from normality in the distribution, with less variability in the higher range than in the lower. Despite mean IQ-scale scores of 100, modal scores were about 105. Even above modal level, males showed more variability than females. This is consistent with a model of the population distribution of general intelligence as a mixture of two essentially normal distributions, one reflecting normal variation in general intelligence and one reflecting normal variation in effects of genetic and environmental conditions involving mental retardation. 
Though present at the high end of the distribution, sex differences in variability did not appear to account for sex differences in high-level achievement.", "title": "" }, { "docid": "bf7203bb63cda371d78fa7337f2d7e2f", "text": "Want to get experience? Want to get any ideas to create new things in your life? Read mathematical structures of language now! By reading this book as soon as possible, you can renew the situation to get the inspirations. Yeah, this way will lead you to always think more and more. In this case, this book will be always right for you. When you can observe more about the book, you will know why you need this.", "title": "" }, { "docid": "dfa611e19a3827c66ea863041a3ef1e2", "text": "We study the problem of malleability of Bitcoin transactions. Our first two contributions can be summarized as follows: (i) we perform practical experiments on Bitcoin that show that it is very easy to maul Bitcoin transactions with high probability, and (ii) we analyze the behavior of the popular Bitcoin wallets in the situation when their transactions are mauled; we conclude that most of them are to some extent not able to handle this situation correctly. The contributions in points (i) and (ii) are experimental. We also address a more theoretical problem of protecting the Bitcoin distributed contracts against the “malleability” attacks. It is well-known that malleability can pose serious problems in some of those contracts. It concerns mostly the protocols which use a “refund” transaction to withdraw a financial deposit in case the other party interrupts the protocol. Our third contribution is as follows: (iii) we show a general method for dealing with the transaction malleability in Bitcoin contracts. In short: this is achieved by creating a malleability-resilient “refund” transaction which does not require any modification of the Bitcoin protocol.", "title": "" }, { "docid": "1235be9c8056b20ded217bc7474208e1", "text": "Pathological gambling (PG) is most likely associated with functional brain changes as well as neuropsychological and personality alterations. Recent research with the Iowa Gambling Task suggests decision-making impairments in PG. These deficits are usually attributed to disturbances in feedback processing and associated functional alterations of the orbitofrontal cortex. However, previous studies with other clinical populations found relations between executive (dorsolateral prefrontal) functions and decision-making using a task with explicit rules for gains and losses, the Game of Dice Task. In the present study, we assessed 25 male PG patients and 25 male healthy controls with the Game of Dice Task. PG patients showed pronounced deficits in the Game of Dice Task, and the frequency of risky decisions was correlated with executive functions and feedback processing. Therefore, risky decisions of PG patients might be influenced by both dorsolateral prefrontal and orbitofrontal cortex dysfunctions.", "title": "" }, { "docid": "06803b2748e6a16ecb3bb93efe60e9a7", "text": "Considerable buzz surrounds artificial intelligence, and, indeed, AI is all around us. As with any software-based technology, it is also prone to vulnerabilities. 
Here, the author examines how we determine whether AI is sufficiently reliable to do its job and how much we should trust its outcomes.", "title": "" }, { "docid": "0d2791ea015a251efd06de0468315194", "text": "We introduce a novel two-stage approach for the important cybersecurity problem of detecting the presence of a botnet and identifying the compromised nodes (the bots), ideally before the botnet becomes active. The first stage detects anomalies by leveraging large deviations of an empirical distribution. We propose two approaches to create the empirical distribution: 1) a flow-based approach estimating the histogram of quantized flows and 2) a graph-based approach estimating the degree distribution of node interaction graphs, encompassing both Erdős-Rényi graphs and scale-free graphs. The second stage detects the bots using ideas from social network community detection in a graph that captures correlations of interactions among nodes over time. Community detection is performed by maximizing a modularity measure in this graph. The modularity maximization problem is nonconvex. We propose a convex relaxation, an effective randomization algorithm, and establish sharp bounds on the suboptimality gap. We apply our method to real-world botnet traffic and compare its performance with other methods.", "title": "" }, { "docid": "8e3bf062119c6de9fa5670ce4b00764b", "text": "Heating red phosphorus in sealed ampoules in the presence of a Sn/SnI4 catalyst mixture has provided bulk black phosphorus at much lower pressures than those required for allotropic conversion by anvil cells. Herein we report the growth of ultra-long 1D red phosphorus nanowires (>1 mm) selectively onto a wafer substrate from red phosphorus powder and a thin film of red phosphorus in the presence of a Sn/SnI4 catalyst. Raman spectra and X-ray diffraction characterization suggested the formation of crystalline red phosphorus nanowires. FET devices constructed with the red phosphorus nanowires displayed a typical I-V curve similar to that of black phosphorus and a similar mobility reaching 300 cm(2) V(-1) s with an Ion/Ioff ratio approaching 10(2). A significant response to infrared light was observed from the FET device.", "title": "" }, { "docid": "1df103aef2a4a5685927615cfebbd1ea", "text": "While human subjects lift small objects using the precision grip between the tips of the fingers and thumb the ratio between the grip force and the load force (i.e. the vertical lifting force) is adapted to the friction between the object and the skin. The present report provides direct evidence that signals in tactile afferent units are utilized in this adaptation. Tactile afferent units were readily excited by small but distinct slips between the object and the skin revealed as vibrations in the object. Following such afferent slip responses the force ratio was upgraded to a higher, stable value which provided a safety margin to prevent further slips. The latency between the onset of a slip and the appearance of the ratio change (74 ± 9 ms) was about half the minimum latency for intended grip force changes triggered by cutaneous stimulation of the fingers. This indicated that the motor responses were automatically initiated. If the subjects were asked to very slowly separate their thumb and the opposing finger while the object was held in air, grip force reflexes originating from afferent slip responses appeared to counteract the voluntary command, but the maintained upgrading of the force ratio was suppressed. 
In experiments with weak electrical cutaneous stimulation delivered through the surfaces of the object it was established that tactile input alone could trigger the upgrading of the force ratio. Although, varying in responsiveness, each of the three types of tactile units which exhibit a pronounced dynamic sensitivity (FA I, FA II and SA I units) could reliably signal these slips. Similar but generally weaker afferent responses, sometimes followed by small force ratio changes, also occurred in the FA I and the SA I units in the absence of detectable vibrations events. In contrast to the responses associated with clear vibratory events, the weaker afferent responses were probably caused by localized frictional slips, i.e. slips limited to small fractions of the skin area in contact with the object. Indications were found that the early adjustment to a new frictional condition, which may appear soon (ca. 0.1–0.2 s) after the object is initially gripped, might depend on the vigorous responses in the FA I units during the initial phase of the lifts (see Westling and Johansson 1987). The role of the tactile input in the adaptation of the force coordination to the frictional condition is discussed.", "title": "" }, { "docid": "40369d066befb131bf48114534a79698", "text": "Spark has been increasingly adopted by industries in recent years for big data analysis by providing a fault tolerant, scalable and easy-to-use in memory abstraction. Moreover, the community has been actively developing a rich ecosystem around Spark, making it even more attractive. However, there is not yet a Spark specify benchmark existing in the literature to guide the development and cluster deployment of Spark to better fit resource demands of user applications. In this paper, we present SparkBench, a Spark specific benchmarking suite, which includes a comprehensive set of applications. SparkBench covers four main categories of applications, including machine learning, graph computation, SQL query and streaming applications. We also characterize the resource consumption, data flow and timing information of each application and evaluate the performance impact of a key configuration parameter to guide the design and optimization of Spark data analytic platform.", "title": "" }, { "docid": "40f3a647fcaac638373f51fe125c36bb", "text": "In this paper we presented a design of 4 bit attenuator with RF MEMS switches and distributed attenuation networks. The substrate of this attenuator is high resistance silicon and the TaN thin film is used as resistors. RF MEMS switches have excellent microwave properties to reduce the insertion loss of attenuator and increase the insulation. Distributed attenuation networks employed as fixed attenuators have the advantages of smaller size and better performance in comparison to conventional π or T-type fixed attenuators. Over DC-20GHz, the simulation results show the attenuation flatness of 1.52-1.65dB and the attenuation range of 15.35-17.02dB. The minimum attenuation is 0.44-1.96dB in the interesting frequency range. The size of the attenuator is 2152 × 7500μm2.", "title": "" }, { "docid": "9d7a67f2cd12a6fd033ad102fb9c526e", "text": "We begin by pretraining the source task model, fS , using the task loss on the labeled source data. Next, we perform pixel-level adaptation using our image space GAN losses together with semantic consistency and cycle consistency losses. 
This yields learned parameters for the image transformations, GS→T and GT→S, image discriminators, DS and DT, as well as an initial setting of the task model, fT, which is trained using pixel-transformed source images and the corresponding source pixel labels. Finally, we perform feature space adaptation in order to update the target semantic model, fT, to have features which are aligned between the source images mapped into target style and the real target images. During this phase, we learn the feature discriminator, Dfeat, and use this to guide the representation update to fT. In general, our method could also perform phases 2 and 3 simultaneously, but this would require more GPU memory than available at the time of these experiments.", "title": "" }, { "docid": "b24fc322e0fec700ec0e647c31cfd74d", "text": "Organometal trihalide perovskite solar cells offer the promise of a low-cost, easily manufacturable solar technology, compatible with large-scale low-temperature solution processing. Within 1 year of development, solar-to-electric power-conversion efficiencies have risen to over 15%, and further imminent improvements are expected. Here we show that this technology can be successfully made compatible with electron acceptor and donor materials generally used in organic photovoltaics. We demonstrate that a single thin film of the low-temperature solution-processed organometal trihalide perovskite absorber CH3NH3PbI3-xClx, sandwiched between organic contacts can exhibit devices with power-conversion efficiency of up to 10% on glass substrates and over 6% on flexible polymer substrates. This work represents an important step forward, as it removes most barriers to adoption of the perovskite technology by the organic photovoltaic community, and can thus utilize the extensive existing knowledge of hybrid interfaces for further device improvements and flexible processing platforms.", "title": "" }, { "docid": "eece7dab68d56d3d5f28a72e873a0a72", "text": "OBJECTIVES\nTo describe the effect of multidisciplinary care on survival in women treated for breast cancer.\n\n\nDESIGN\nRetrospective, comparative, non-randomised, interventional cohort study.\n\n\nSETTING\nNHS hospitals, health boards in the west of Scotland, UK.\n\n\nPARTICIPANTS\n14,358 patients diagnosed with symptomatic invasive breast cancer between 1990 and 2000, residing in health board areas in the west of Scotland. 13,722 (95.6%) patients were eligible (excluding 16 diagnoses of inflammatory cancers and 620 diagnoses of breast cancer at death).\n\n\nINTERVENTION\nIn 1995, multidisciplinary team working was introduced in hospitals throughout one health board area (Greater Glasgow; intervention area), but not in other health board areas in the west of Scotland (non-intervention area).\n\n\nMAIN OUTCOME MEASURES\nBreast cancer specific mortality and all cause mortality.\n\n\nRESULTS\nBefore the introduction of multidisciplinary care (analysed time period January 1990 to September 1995), breast cancer mortality was 11% higher in the intervention area than in the non-intervention area (hazard ratio adjusted for year of incidence, age at diagnosis, and deprivation, 1.11; 95% confidence interval 1.00 to 1.20). After multidisciplinary care was introduced (time period October 1995 to December 2000), breast cancer mortality was 18% lower in the intervention area than in the non-intervention area (0.82, 0.74 to 0.91). 
All-cause mortality did not differ significantly between populations in the earlier period, but was 11% lower in the intervention area than in the non-intervention area in the later period (0.89, 0.82 to 0.97). Interrupted time series analyses showed a significant improvement in breast cancer survival in the intervention area in 1996, compared with the expected survival in the same year had the pre-intervention trend continued (P=0.004). This improvement was maintained after the intervention was introduced.\n\n\nCONCLUSION\nIntroduction of multidisciplinary care was associated with improved survival and reduced variation in survival among hospitals. Further analysis of clinical audit data for multidisciplinary care could identify which aspects of care are most associated with survival benefits.", "title": "" }, { "docid": "bc4ce8c0dce6515d1432a6baecef4614", "text": "The lsemantica command, presented in this paper, implements Latent Semantic Analysis in Stata. Latent Semantic Analysis is a machine learning algorithm for word and text similarity comparison. Latent Semantic Analysis uses Truncated Singular Value Decomposition to derive the hidden semantic relationships between words and texts. lsemantica provides a simple command for Latent Semantic Analysis in Stata as well as complementary commands for text similarity comparison.", "title": "" }, { "docid": "4413b7c20191b443be6184fe927384c8", "text": "Falls and resulting physical-psychological consequences in the elderly are a major health hazard and a serious obstacle for independent living. The development of intelligent video surveillance systems is therefore important for providing safe and secure environments. To this end, this paper proposes a novel approach for human fall detection based on human shape variation. The combination of a best-fit approximated ellipse around the human body, projection histograms of the segmented silhouette, and temporal changes of head pose provides a useful cue for detecting different behaviors. Extracted feature vectors are finally fed to a multi-class support vector machine for precise classification of motions and determination of a fall event. Unlike existing fall detection systems that only deal with limited movement patterns, we considered a wide range of motions consisting of normal daily life activities, abnormal behaviors and also unusual events. The reliable recognition rate in the experimental results underlines the satisfactory performance of our system.", "title": "" } ]
scidocsrr
c451fd0cb6ff3f6f33922eb71fe4b875
The Post Adoption Switching Of Social Network Service: A Human Migratory Model
[ { "docid": "1c0efa706f999ee0129d21acbd0ef5ab", "text": "Ten years ago, we presented the DeLone and McLean Information Systems (IS) Success Model as a framework and model for measuring the complexdependent variable in IS research. In this paper, we discuss many of the important IS success research contributions of the last decade, focusing especially on research efforts that apply, validate, challenge, and propose enhancements to our original model. Based on our evaluation of those contributions, we propose minor refinements to the model and propose an updated DeLone and McLean IS Success Model. We discuss the utility of the updated model for measuring e-commerce system success. Finally, we make a series of recommendations regarding current and future measurement of IS success. 10 DELONE AND MCLEAN", "title": "" }, { "docid": "9948738a487ed899ec50ac292e1f9c6d", "text": "A Web survey of 1,715 college students was conducted to examine Facebook Groups users' gratifications and the relationship between users' gratifications and their political and civic participation offline. A factor analysis revealed four primary needs for participating in groups within Facebook: socializing, entertainment, self-status seeking, and information. These gratifications vary depending on user demographics such as gender, hometown, and year in school. The analysis of the relationship between users' needs and civic and political participation indicated that, as predicted, informational uses were more correlated to civic and political action than to recreational uses.", "title": "" }, { "docid": "013bf71ab18747afefa07cbe6ae6d477", "text": "Mobile commerce is becoming increasingly important in business. This trend is particularly evident in the service industry. To cope with this demand, various platforms have been proposed to provide effective mobile commerce solutions. Among these solutions, wireless application protocol (WAP) is one of the most widespread technical standards for mobile commerce. Following continuous technical evolution, WAP has come to include various new features. However, WAP services continue to struggle for market share. Hence, understanding WAP service adoption is increasingly important for enterprises interested in developing mobile commerce. This study aims to (1) identify the critical factors of WAP service adoption; (2) explore the relative importance of each factor for users who adopt WAP and those who do not; (3) examine the causal relationships among variables on WAP service adoption behavior. This study conducts an empirical test of WAP service adoption in Taiwan, based on theory of planned behavior (TPB) and innovation diffusion theory (IDT). The results help clarify the critical factors influences on WAP service adoption in the Greater China economic region. The Greater China economic region is a rapidly growing market. Many western telecommunication enterprises are strongly interested in providing wireless services in Shanghai, Singapore, Hong Kong and Taipei. Since these cities share a similar culture and the same language, the analytical results and conclusions of this study may be a good reference for global telecommunication enterprises to establish the developing strategy for their eastern branches. From the analysis conducted in this study, the critical factors for influences on WAP service adoption include connection speed, service cost, user satisfaction, personal innovativeness, ease of use, peer influence, and facilitating condition. 
Therefore, this study proposes that strategies for marketing WAP services in the Greater China economic region can pay increased attention to these factors. Notably, this study also provides some suggestions for subsequent researchers and practitioners seeking to understand WAP service adoption behavior.", "title": "" } ]
[ { "docid": "11f84f99de269ca5ca43fc6d761504b7", "text": "Effective use of distributed collaboration environments requires shared mental models that guide users in sensemaking and categorization. In Lotus Notes -based collaboration systems, such shared models are usually implemented as views and document types. TeamRoom, developed at Lotus Institute, implements in its design a theory of effective social process that creates a set of team-specific categories, which can then be used as a basis for knowledge sharing, collaboration, and team memory. This paper reports an exploratory study in collective concept formation in the TeamRoom environment. The study was run in an ecological setting, while the team members used the system for their everyday work. We apply theory developed by Lev Vygotsky, and use a modified version of an experiment on concept formation, devised by Lev Sakharov, and discussed in Vygotsky (1986). Vygotsky emphasized the role of language, cognitive artifacts, and historical and social sources in the development of thought processes. Within the Vygotskian framework it becomes clear that development of thinking does not end in adolescence. In teams of adult people, learning and knowledge creation are continuous processes. New concepts are created, shared, and developed into systems. The question, then, becomes how spontaneous concepts are collectively generated in teams, how they become integrated as systems, and how computer mediated collaboration environments affect these processes. d in ittle ons", "title": "" }, { "docid": "bf7bc12a4f5cbac481c8a0a4e92854b9", "text": "Recurrent neural networks (RNN), especially the ones requiring extremely long term memories, are difficult to training. Hence, they provide an ideal testbed for benchmarking the performance of optimization algorithms. This paper reports test results of a recently proposed preconditioned stochastic gradient descent (PSGD) algorithm on RNN training. We find that PSGD may outperform Hessian-free optimization which achieves the state-of-the-art performance on the target problems, although it is only slightly more complicated than stochastic gradient descent (SGD) and is user friendly, virtually a tuning free algorithm.", "title": "" }, { "docid": "4762cbac8a7e941f26bce8217cf29060", "text": "The 2-D maximum entropy method not only considers the distribution of the gray information, but also takes advantage of the spatial neighbor information with using the 2-D histogram of the image. As a global threshold method, it often gets ideal segmentation results even when the image s signal noise ratio (SNR) is low. However, its time-consuming computation is often an obstacle in real time application systems. In this paper, the image thresholding approach based on the index of entropy maximization of the 2-D grayscale histogram is proposed to deal with infrared image. The threshold vector (t, s), where t is a threshold for pixel intensity and s is another threshold for the local average intensity of pixels, is obtained through a new optimization algorithm, namely, the particle swarm optimization (PSO) algorithm. PSO algorithm is realized successfully in the process of solving the 2-D maximum entropy problem. The experiments of segmenting the infrared images are illustrated to show that the proposed method can get ideal segmentation result with less computation cost. 2004 Elsevier B.V. 
All rights reserved.", "title": "" }, { "docid": "721ff703dfafad6b1b330226c36ed641", "text": "In the Narrowband Internet-of-Things (NB-IoT) LTE systems, the device shall be able to blindly lock to a cell within 200-KHz bandwidth and with only one receive antenna. In addition, the device is required to setup a call at a signal-to-noise ratio (SNR) of −12.6 dB in the extended coverage mode. A new set of synchronization signals have been introduced to provide data-aided synchronization and cell search. In this letter, we present a procedure for NB-IoT cell search and initial synchronization subject to the new challenges given the new specifications. Simulation results show that this method not only provides the required performance at very low SNRs, but also can be quickly camped on a cell, if any.", "title": "" }, { "docid": "9d5de7a0330d8bba49eb8d73597473b9", "text": "Web crawlers are highly automated and seldom regulated manually. The diversity of crawler activities often leads to ethical problems such as spam and service attacks. In this research, quantitative models are proposed to measure the web crawler ethics based on their behaviors on web servers. We investigate and define rules to measure crawler ethics, referring to the extent to which web crawlers respect the regulations set forth in robots.txt configuration files. We propose a vector space model to represent crawler behavior and measure the ethics of web crawlers based on the behavior vectors. The results show that ethicality scores vary significantly among crawlers. Most commercial web crawlers' behaviors are ethical. However, many commercial crawlers still consistently violate or misinterpret certain robots.txt rules. We also measure the ethics of big search engine crawlers in terms of return on investment. The results show that Google has a higher score than other search engines for a US website but has a lower score than Baidu for Chinese websites.", "title": "" }, { "docid": "766c723d00ac15bf31332c8ab4b89b63", "text": "For those people without artistic talent, they can only draw rough or even awful doodles to express their ideas. We propose a doodle beautification system named Doodle Master, which can transfer a rough doodle to a plausible image and also keep the semantic concepts of the drawings. The Doodle Master applies the VAE/GAN model to decode and generate the beautified result from a constrained latent space. To achieve better performance for sketch data which is more like discrete distribution, a shared-weight method is proposed to improve the learnt features of the discriminator with the aid of the encoder. Furthermore, we design an interface for the user to draw with basic drawing tools and adjust the number of reconstruction times. 
The experiments show that the proposed Doodle Master system can successfully beautify the rough doodle or sketch in real-time.", "title": "" }, { "docid": "c337226d663e69ecde67ff6f35ba7654", "text": "In this paper, we presented a new model for cyber crime investigation procedure which is as follows: readiness phase, consulting with profiler, cyber crime classification and investigation priority decision, damaged cyber crime scene investigation, analysis by crime profiler, suspects tracking, injurer cyber crime scene investigation, suspect summon, cyber crime logical reconstruction, writing report.", "title": "" }, { "docid": "b5b4e637065ba7c0c18a821bef375aea", "text": "The new era of mobile health ushered in by the wide adoption of ubiquitous computing and mobile communications has brought opportunities for governments and companies to rethink their concept of healthcare. Simultaneously, the worldwide urbanization process represents a formidable challenge and attracts attention toward cities that are expected to gather higher populations and provide citizens with services in an efficient and human manner. These two trends have led to the appearance of mobile health and smart cities. In this article we introduce the new concept of smart health, which is the context-aware complement of mobile health within smart cities. We provide an overview of the main fields of knowledge that are involved in the process of building this new concept. Additionally, we discuss the main challenges and opportunities that s-Health would imply and provide a common ground for further research.", "title": "" }, { "docid": "bf44cc7e8e664f930edabf20ca06dd29", "text": "Nowadays, our living environment is rich in radio-frequency energy suitable for harvesting. This energy can be used for supplying low-power consumption devices. In this paper, we analyze a new type of a Koch-like antenna which was designed for energy harvesting specifically. The designed antenna covers two different frequency bands (GSM 900 and Wi-Fi). Functionality of the antenna is verified by simulations and measurements.", "title": "" }, { "docid": "14cb0e8fc4e8f82dc4e45d8562ca4bb2", "text": "Information security is one of the most important factors to be considered when secret information has to be communicated between two parties. Cryptography and steganography are the two techniques used for this purpose. Cryptography scrambles the information, but it reveals the existence of the information. Steganography hides the actual existence of the information so that anyone else other than the sender and the recipient cannot recognize the transmission. In steganography the secret information to be communicated is hidden in some other carrier in such a way that the secret information is invisible. In this paper an image steganography technique is proposed to hide audio signal in image in the transform domain using wavelet transform. The audio signal in any format (MP3 or WAV or any other type) is encrypted and carried by the image without revealing the existence to anybody. When the secret information is hidden in the carrier the result is the stego signal. In this work, the results show good quality stego signal and the stego signal is analyzed for different attacks. It is found that the technique is robust and it can withstand the attacks. The quality of the stego image is measured by Peak Signal to Noise Ratio (PSNR), Structural Similarity Index Metric (SSIM), Universal Image Quality Index (UIQI). 
The quality of extracted secret audio signal is measured by Signal to Noise Ratio (SNR), Squared Pearson Correlation Coefficient (SPCC). The results show good values for these metrics. © 2015 The Authors. Published by Elsevier B.V. Peer-review under responsibility of organizing committee of the Graph Algorithms, High Performance Implementations and Applications (ICGHIA2014).", "title": "" }, { "docid": "6097315ac2e4475e8afd8919d390babf", "text": "This paper presents an origami-inspired technique which allows the application of 2-D fabrication methods to build 3-D robotic systems. The ability to design robots as origami structures introduces a fast and low-cost fabrication method to modern, real-world robotic applications. We employ laser-machined origami patterns to build a new class of robotic systems for mobility and manipulation. Origami robots use only a flat sheet as the base structure for building complicated bodies. An arbitrarily complex folding pattern can be used to yield an array of functionalities, in the form of actuated hinges or active spring elements. For actuation, we use compact NiTi coil actuators placed on the body to move parts of the structure on-demand. We demonstrate, as a proof-of-concept case study, the end-to-end fabrication and assembly of a simple mobile robot that can undergo worm-like peristaltic locomotion.", "title": "" }, { "docid": "9d089af812c0fdd245a218362d88b62a", "text": "Interaction is increasingly a public affair, taking place in our theatres, galleries, museums, exhibitions and on the city streets. This raises a new design challenge for HCI - how should spectators experience a performer's interaction with a computer? We classify public interfaces (including examples from art, performance and exhibition design) according to the extent to which a performer's manipulations of an interface and their resulting effects are hidden, partially revealed, fully revealed or even amplified for spectators. Our taxonomy uncovers four broad design strategies: 'secretive,' where manipulations and effects are largely hidden; 'expressive,' where they tend to be revealed enabling the spectator to fully appreciate the performer's interaction; 'magical,' where effects are revealed but the manipulations that caused them are hidden; and finally 'suspenseful,' where manipulations are apparent but effects are only revealed as the spectator takes their turn.", "title": "" }, { "docid": "61615f5aefb0aa6de2dd1ab207a966d5", "text": "Wikipedia provides an enormous amount of background knowledge to reason about the semantic relatedness between two entities. We propose Wikipedia-based Distributional Semantics for Entity Relatedness (DiSER), which represents the semantics of an entity by its distribution in the high dimensional concept space derived from Wikipedia. DiSER measures the semantic relatedness between two entities by quantifying the distance between the corresponding high-dimensional vectors. DiSER builds the model by taking the annotated entities only, therefore it improves over existing approaches, which do not distinguish between an entity and its surface form. We evaluate the approach on a benchmark that contains the relative entity relatedness scores for 420 entity pairs. Our approach improves the accuracy by 12% on state of the art methods for computing entity relatedness. We also show an evaluation of DiSER in the Entity Disambiguation task on a dataset of 50 sentences with highly ambiguous entity mentions. 
It shows an improvement of 10% in precision over the best performing methods. In order to provide the resource that can be used to find out all the related entities for a given entity, a graph is constructed, where the nodes represent Wikipedia entities and the relatedness scores are reflected by the edges. Wikipedia contains more than 4.1 millions entities, which required efficient computation of the relatedness scores between the corresponding 17 trillions of entity-pairs.", "title": "" }, { "docid": "4d297680cd342f46a5a706c4969273b8", "text": "Theory on passwords has lagged practice, where large providers use back-end smarts to survive with imperfect technology.", "title": "" }, { "docid": "88a21d973ec80ee676695c95f6b20545", "text": "Three-dimensional models provide a volumetric representation of space which is important for a variety of robotic applications including flying robots and robots that are equipped with manipulators. In this paper, we present an open-source framework to generate volumetric 3D environment models. Our mapping approach is based on octrees and uses probabilistic occupancy estimation. It explicitly represents not only occupied space, but also free and unknown areas. Furthermore, we propose an octree map compression method that keeps the 3D models compact. Our framework is available as an open-source C++ library and has already been successfully applied in several robotics projects. We present a series of experimental results carried out with real robots and on publicly available real-world datasets. The results demonstrate that our approach is able to update the representation efficiently and models the data consistently while keeping the memory requirement at a minimum.", "title": "" }, { "docid": "30520912723d67f7d07881aa33cdf229", "text": "OBJECTIVE\nA study to examine the incidence and characteristics of concussions among Canadian university athletes during 1 full year of football and soccer participation.\n\n\nDESIGN\nRetrospective survey.\n\n\nPARTICIPANTS\nThree hundred eighty Canadian university football and 240 Canadian university soccer players reporting to 1999 fall training camp. Of these, 328 football and 201 soccer players returned a completed questionnaire.\n\n\nMAIN OUTCOME MEASURES\nBased on self-reported symptoms, calculations were made to determine the number of concussions experienced during the previous full year of football or soccer participation, the duration of symptoms, the time for return to play, and any associated risk factors for concussions.\n\n\nRESULTS\nOf all the athletes who returned completed questionnaires, 70.4% of the football players and 62.7% of the soccer players had experienced symptoms of a concussion during the previous year. Only 23.4% of the concussed football players and 19.8% of the concussed soccer players realized they had suffered a concussion. More than one concussion was experienced by 84.6% of the concussed football players and 81.7% of the concussed soccer players. Examining symptom duration, 27.6% of all concussed football players and 18.8% of all concussed soccer players experienced symptoms for at least 1 day or longer. Tight end and defensive lineman were the positions most commonly affected in football, while goalies were the players most commonly affected in soccer. Variables that increased the odds of suffering a concussion during the previous year for football players included a history of a traumatic loss of consciousness or a recognized concussion in the past. 
Variables that increased the odds of suffering a concussion during the previous year for soccer players included a past history of a recognized concussion while playing soccer and being female.\n\n\nCONCLUSIONS\nUniversity football and soccer players seem to be experiencing a significant number of concussions while participating in their respective sports. Variables that seem to increase the odds of suffering a concussion during the previous year for football and soccer players include a history of a recognized concussion. Despite being relatively common, symptoms of concussion may not be recognized by many players.", "title": "" }, { "docid": "28d8cad6fda1f1345b9905e71495e745", "text": "To provide COSMOS, a dynamic model based manipulator control system, with an improved dynamic model, a PUMA 560 arm was disassembled; the inertial properties of the individual links were measured; and an explicit model incorporating all of the non-zero measured parameters was derived. The explicit model of the PUMA arm has been obtained with a derivation procedure comprised of several heuristic rules for simplification. A simplified model, abbreviated from the full explicit model with a 1% significance criterion, can be evaluated with 305 calculations, one fifth the number required by the recursive Newton-Euler method. The procedure used to derive the model is laid out; the measured inertial parameters are presented, and the model is included in an appendix.", "title": "" }, { "docid": "ff6a487e49d1fed033ad082ad7cd0524", "text": "We present a novel technique for shadow removal based on an information theoretic approach to intrinsic image analysis. Our key observation is that any illumination change in the scene tends to increase the entropy of observed texture intensities. Similarly, the presence of texture in the scene increases the entropy of the illumination function. Consequently, we formulate the separation of an image into texture and illumination components as minimization of entropies of each component. We employ a non-parametric kernel-based quadratic entropy formulation, and present an efficient multi-scale iterative optimization algorithm for minimization of the resulting energy functional. Our technique may be employed either fully automatically, using a proposed learning based method for automatic initialization, or alternatively with small amount of user interaction. As we demonstrate, our method is particularly suitable for aerial images, which consist of either distinctive texture patterns, e.g. building facades, or soft shadows with large diffuse regions, e.g. cloud shadows.", "title": "" }, { "docid": "5063adc5020cacddb5a4c6fd192fc17e", "text": "In this paper, a novel 1-to-4 modified Wilkinson power divider operating over the frequency range of 3 GHz to 8 GHz is proposed. The design conception of the proposed divider is based on two different stages and is printed on FR4 (epoxy laminate material) with a thickness of 1.57 mm and εr = 4.3. The modified design of this power divider includes curved corners instead of sharp edges and some modification in the length of the matching stubs. In addition, equal power split at all ports, reasonable insertion loss, acceptable return loss below −10 dB, good impedance matching at all ports, and satisfactory isolation performance have been obtained over the mentioned frequency range. The design concept and optimization are made practicable through CST simulation software.", "title": "" } ]
scidocsrr
749466410f80db68ff91b3e2a31105c2
Subjectivity and sentiment analysis of Arabic: Trends and challenges
[ { "docid": "c757cc329886c1192b82f36c3bed8b7f", "text": "Though much research has been conducted on Subjectivity and Sentiment Analysis (SSA) during the last decade, little work has focused on Arabic. In this work, we focus on SSA for both Modern Standard Arabic (MSA) news articles and dialectal Arabic microblogs from Twitter. We showcase some of the challenges associated with SSA on microblogs. We adopted a random graph walk approach to extend the Arabic SSA lexicon using ArabicEnglish phrase tables, leading to improvements for SSA on Arabic microblogs. We used different features for both subjectivity and sentiment classification including stemming, part-of-speech tagging, as well as tweet specific features. Our classification features yield results that surpass Arabic SSA results in the literature.", "title": "" }, { "docid": "3553d1dc8272bf0366b2688e5107aa3f", "text": "The emergence of the Web 2.0 technology generated a massive amount of raw data by enabling Internet users to post their opinions, reviews, comments on the web. Processing this raw data to extract useful information can be a very challenging task. An example of important information that can be automatically extracted from the users' posts and comments is their opinions on different issues, events, services, products, etc. This problem of Sentiment Analysis (SA) has been studied well on the English language and two main approaches have been devised: corpus-based and lexicon-based. This paper addresses both approaches to SA for the Arabic language. Since there is a limited number of publically available Arabic dataset and Arabic lexicons for SA, this paper starts by building a manually annotated dataset and then takes the reader through the detailed steps of building the lexicon. Experiments are conducted throughout the different stages of this process to observe the improvements gained on the accuracy of the system and compare them to corpus-based approach.", "title": "" } ]
[ { "docid": "93dd889fe9be3209be31e77c7191ac17", "text": "The aim of this review is to provide greater insight and understanding regarding the scientific nature of cycling. Research findings are presented in a practical manner for their direct application to cycling. The two parts of this review provide information that is useful to athletes, coaches and exercise scientists in the prescription of training regimens, adoption of exercise protocols and creation of research designs. Here for the first time, we present rationale to dispute prevailing myths linked to erroneous concepts and terminology surrounding the sport of cycling. In some studies, a review of the cycling literature revealed incomplete characterisation of athletic performance, lack of appropriate controls and small subject numbers, thereby complicating the understanding of the cycling research. Moreover, a mixture of cycling testing equipment coupled with a multitude of exercise protocols stresses the reliability and validity of the findings. Our scrutiny of the literature revealed key cycling performance-determining variables and their training-induced metabolic responses. The review of training strategies provides guidelines that will assist in the design of aerobic and anaerobic training protocols. Paradoxically, while maximal oxygen uptake (V-O(2max)) is generally not considered a valid indicator of cycling performance when it is coupled with other markers of exercise performance (e.g. blood lactate, power output, metabolic thresholds and efficiency/economy), it is found to gain predictive credibility. The positive facets of lactate metabolism dispel the 'lactic acid myth'. Lactate is shown to lower hydrogen ion concentrations rather than raise them, thereby retarding acidosis. Every aspect of lactate production is shown to be advantageous to cycling performance. To minimise the effects of muscle fatigue, the efficacy of employing a combination of different high cycling cadences is evident. The subconscious fatigue avoidance mechanism 'teleoanticipation' system serves to set the tolerable upper limits of competitive effort in order to assure the athlete completion of the physical challenge. Physiological markers found to be predictive of cycling performance include: (i) power output at the lactate threshold (LT2); (ii) peak power output (W(peak)) indicating a power/weight ratio of > or =5.5 W/kg; (iii) the percentage of type I fibres in the vastus lateralis; (iv) maximal lactate steady-state, representing the highest exercise intensity at which blood lactate concentration remains stable; (v) W(peak) at LT2; and (vi) W(peak) during a maximal cycling test. Furthermore, the unique breathing pattern, characterised by a lack of tachypnoeic shift, found in professional cyclists may enhance the efficiency and metabolic cost of breathing. The training impulse is useful to characterise exercise intensity and load during training and competition. It serves to enable the cyclist or coach to evaluate the effects of training strategies and may well serve to predict the cyclist's performance. Findings indicate that peripheral adaptations in working muscles play a more important role for enhanced submaximal cycling capacity than central adaptations. Clearly, relatively brief but intense sprint training can enhance both glycolytic and oxidative enzyme activity, maximum short-term power output and V-O(2max). To that end, it is suggested to replace approximately 15% of normal training with one of the interval exercise protocols. 
Tapering, through reduction in duration of training sessions or the frequency of sessions per week while maintaining intensity, is extremely effective for improvement of cycling time-trial performance. Overuse and over-training disabilities common to the competitive cyclist, if untreated, can lead to delayed recovery.", "title": "" }, { "docid": "559637a4f8f5b99bb3210c5c7d03d2e0", "text": "Third-generation personal navigation assistants (PNAs) (i.e., those that provide a map, the user's current location, and directions) must be able to reconcile the user's location with the underlying map. This process is known as map matching. Most existing research has focused on map matching when both the user's location and the map are known with a high degree of accuracy. However, there are many situations in which this is unlikely to be the case. Hence, this paper considers map matching algorithms that can be used to reconcile inaccurate locational data with an inaccurate map/network. Ó 2000 Published by Elsevier Science Ltd.", "title": "" }, { "docid": "2752c235aea735a04b70272deb042ea6", "text": "Psychophysiological studies with music have not examined what exactly in the music might be responsible for the observed physiological phenomena. The authors explored the relationships between 11 structural features of 16 musical excerpts and both self-reports of felt pleasantness and arousal and different physiological measures (respiration, skin conductance, heart rate). Overall, the relationships between musical features and experienced emotions corresponded well with those known between musical structure and perceived emotions. This suggests that the internal structure of the music played a primary role in the induction of the emotions in comparison to extramusical factors. Mode, harmonic complexity, and rhythmic articulation best differentiated between negative and positive valence, whereas tempo, accentuation, and rhythmic articulation best discriminated high arousal from low arousal. Tempo, accentuation, and rhythmic articulation were the features that most strongly correlated with physiological measures. Music that induced faster breathing and higher minute ventilation, skin conductance, and heart rate was fast, accentuated, and staccato. This finding corroborates the contention that rhythmic aspects are the major determinants of physiological responses to music.", "title": "" }, { "docid": "0c7b5a51a0698f261d147b2aa77acc83", "text": "The extensive use of social media platforms, especially during disasters, creates unique opportunities for humanitarian organizations to gain situational awareness as disaster unfolds. In addition to textual content, people post overwhelming amounts of imagery content on social networks within minutes of a disaster hit. Studies point to the importance of this online imagery content for emergency response. Despite recent advances in computer vision research, making sense of the imagery content in real-time during disasters remains a challenging task. One of the important challenges is that a large proportion of images shared on social media is redundant or irrelevant, which requires robust filtering mechanisms. Another important challenge is that images acquired after major disasters do not share the same characteristics as those in large-scale image collections with clean annotations of well-defined object categories such as house, car, airplane, cat, dog, etc., used traditionally in computer vision research. 
To tackle these challenges, we present a social media image processing pipeline that combines human and machine intelligence to perform two important tasks: (i) capturing and filtering of social media imagery content (i.e., real-time image streaming, de-duplication, and relevancy filtering); and (ii) actionable information extraction (i.e., damage severity assessment) as a core situational awareness task during an on-going crisis event. Results obtained from extensive experiments on real-world crisis datasets demonstrate the significance of the proposed pipeline for optimal utilization of both human and machine computing resources.", "title": "" }, { "docid": "62d76b82614c64d022409081c71796a5", "text": "The statistical modeling of large multi-relational datasets has increasingly gained attention in recent years. Typical applications involve large knowledge bases like DBpedia, Freebase, YAGO and the recently introduced Google Knowledge Graph that contain millions of entities, hundreds and thousands of relations, and billions of relational tuples. Collective factorization methods have been shown to scale up to these large multi-relational datasets, in particular in form of tensor approaches that can exploit the highly scalable alternating least squares (ALS) algorithms for calculating the factors. In this paper we extend the recently proposed state-of-the-art RESCAL tensor factorization to consider relational type-constraints. Relational type-constraints explicitly define the logic of relations by excluding entities from the subject or object role. In addition we will show that in absence of prior knowledge about type-constraints, local closed-world assumptions can be approximated for each relation by ignoring unobserved subject or object entities in a relation. In our experiments on representative large datasets (Cora, DBpedia), that contain up to millions of entities and hundreds of type-constrained relations, we show that the proposed approach is scalable. It further significantly outperforms RESCAL without type-constraints in both, runtime and prediction quality.", "title": "" }, { "docid": "6fb0aac60ec74b5efca4eeda22be979d", "text": "Images captured in hazy or foggy weather conditions are seriously degraded by the scattering of atmospheric particles, which directly influences the performance of outdoor computer vision systems. In this paper, a fast algorithm for single image dehazing is proposed based on linear transformation by assuming that a linear relationship exists in the minimum channel between the hazy image and the haze-free image. First, the principle of linear transformation is analyzed. Accordingly, the method of estimating a medium transmission map is detailed and the weakening strategies are introduced to solve the problem of the brightest areas of distortion. To accurately estimate the atmospheric light, an additional channel method is proposed based on quad-tree subdivision. In this method, average grays and gradients in the region are employed as assessment criteria. Finally, the haze-free image is obtained using the atmospheric scattering model. Numerous experimental results show that this algorithm can clearly and naturally recover the image, especially at the edges of sudden changes in the depth of field. It can, thus, achieve a good effect for single image dehazing. Furthermore, the algorithmic time complexity is a linear function of the image size. 
This has obvious advantages in running time by guaranteeing a balance between the running speed and the processing effect.", "title": "" }, { "docid": "903b68096d2559f0e50c38387260b9c8", "text": "Vitamin C in humans must be ingested for survival. Vitamin C is an electron donor, and this property accounts for all its known functions. As an electron donor, vitamin C is a potent water-soluble antioxidant in humans. Antioxidant effects of vitamin C have been demonstrated in many experiments in vitro. Human diseases such as atherosclerosis and cancer might occur in part from oxidant damage to tissues. Oxidation of lipids, proteins and DNA results in specific oxidation products that can be measured in the laboratory. While these biomarkers of oxidation have been measured in humans, such assays have not yet been validated or standardized, and the relationship of oxidant markers to human disease conditions is not clear. Epidemiological studies show that diets high in fruits and vegetables are associated with lower risk of cardiovascular disease, stroke and cancer, and with increased longevity. Whether these protective effects are directly attributable to vitamin C is not known. Intervention studies with vitamin C have shown no change in markers of oxidation or clinical benefit. Dose concentration studies of vitamin C in healthy people showed a sigmoidal relationship between oral dose and plasma and tissue vitamin C concentrations. Hence, optimal dosing is critical to intervention studies using vitamin C. Ideally, future studies of antioxidant actions of vitamin C should target selected patient groups. These groups should be known to have increased oxidative damage as assessed by a reliable biomarker or should have high morbidity and mortality due to diseases thought to be caused or exacerbated by oxidant damage.", "title": "" }, { "docid": "cf121f496ae49eed2846b5be05d35d4d", "text": "Objective: This study provides evidence for the validity and reliability of the Rey Auditory Verbal Learning Test", "title": "" }, { "docid": "d9cdbff5533837858b1cd8334acd128d", "text": "A four-leaf steel spring used in the rear suspension system of light vehicles is analyzed using ANSYS V5.4 software. The finite element results showing stresses and deflections verified the existing analytical and experimental solutions. Using the results of the steel leaf spring, a composite one made from fiberglass with epoxy resin is designed and optimized using ANSYS. Main consideration is given to the optimization of the spring geometry. The objective was to obtain a spring with minimum weight that is capable of carrying given static external forces without failure. The design constraints were stresses (Tsai–Wu failure criterion) and displacements. The results showed that an optimum spring width decreases hyperbolically and the thickness increases linearly from the spring eyes towards the axle seat. Compared to the steel spring, the optimized composite spring has stresses that are much lower, the natural frequency is higher and the spring weight without eye units is nearly 80% lower. 2003 Elsevier Science Ltd. All rights reserved.", "title": "" }, { "docid": "deca482835114a5a0fd6dbdc62ae54d0", "text": "This paper presents an approach to design the transformer and the link inductor for the high-frequency link matrix converter. The proposed method aims to systematize the design process of the HF-link using analytic and software tools. The models for the characterization of the core and winding losses have been reviewed. 
Considerations about the practical implementation and construction of the magnetic devices are also provided. The software receives the inputs from the mathematical analysis and runs the optimization to find the best design. A 10 kW / 20 kHz transformer plus a link inductor are designed using this strategy achieving a combined efficiency of 99.32%.", "title": "" }, { "docid": "c2d926337d32cf88838546d19e6f9bde", "text": "This paper discusses the use of natural language or „conversational‟ agents in e-learning environments. We describe and contrast the various applications of conversational agent technology represented in the e-learning literature, including tutors, learning companions, language practice and systems to encourage reflection. We offer two more detailed examples of conversational agents, one which provides learning support, and the other support for self-assessment. Issues and challenges for developers of conversational agent systems for e-learning are identified and discussed.", "title": "" }, { "docid": "8b5ea4603ac53a837c3e81dfe953a706", "text": "Many teaching practices implicitly assume that conceptual knowledge can be abstracted from the situations in which it is learned and used. This article argues that this assumption inevitably limits the effectiveness of such practices. Drawing on recent research into cognition as it is manifest in everyday activity, the authors argue that knowledge is situated, being in part a product of the activity, context, and culture in which it is developed and used. They discuss how this view of knowledge affects our understanding of learning, and they note that conventional schooling too often ignores the influence of school culture on what is learned in school. As an alternative to conventional practices, they propose cognitive apprenticeship (Collins, Brown, Newman, in press), which honors the situated nature of knowledge. They examine two examples of mathematics instruction that exhibit certain key features of this approach to teaching. The breach between learning and use, which is captured by the folk categories \"know what\" and \"know how,\" may well be a product of the structure and practices of our education system. Many methods of didactic education assume a separation between knowing and doing, treating knowledge as an integral, self-sufficient substance, theoretically independent of the situations in which it is learned and used. The primary concern of schools often seems to be the transfer of this substance, which comprises abstract, decontextualized formal concepts. The activity and context in which learning takes place are thus regarded as merely ancillary to learning---pedagogically useful, of course, but fundamentally distinct and even neutral with respect to what is learned. Recent investigations of learning, however, challenge this separating of what is learned from how it is learned and used. The activity in which knowledge is developed and deployed, it is now argued, is not separable from or ancillary to learning and cognition. Nor is it neutral. Rather, it is an integral part of what is learned. Situations might be said to co-produce knowledge through activity. Learning and cognition, it is now possible to argue, are fundamentally situated. In this paper, we try to explain in a deliberately speculative way, why activity and situations are integral to cognition and learning, and how different ideas of what is appropriate learning activity produce very different results. 
We suggest that, by ignoring the situated nature of cognition, education defeats its own goal of providing useable, robust knowledge. And conversely, we argue that approaches such as cognitive apprenticeship (Collins, Brown, & Newman, in press) that embed learning in activity and make deliberate use of the social and physical context are more in line with the understanding of learning and cognition that is emerging from research. Situated Knowledge and Learning Miller and Gildea's (1987) work on vocabulary teaching has shown how the assumption that knowing and doing can be separated leads to a teaching method that ignores the way situations structure cognition. Their work has described how children are taught words from dictionary definitions and a few exemplary sentences, and they have compared this method with the way vocabulary is normally learned outside school. People generally learn words in the context of ordinary communication. This process is startlingly fast and successful. Miller and Gildea note that by listening, talking, and reading, the average 17-year-old has learned vocabulary at a rate of 5,000 words per year (13 per day) for over 16 years. By contrast, learning words from abstract definitions and sentences taken out of the context of normal use, the way vocabulary has often been taught, is slow and generally unsuccessful. There is barely enough classroom time to teach more than 100 to 200 words per year. Moreover, much of what is taught turns out to be almost useless in practice. They give the following examples of students' uses of vocabulary acquired this way: \"Me and my parents correlate, because without them I wouldn't be here.\" \"I was meticulous about falling off the cliff.\" \"Mrs. Morrow stimulated the soup.\" Given the method, such mistakes seem unavoidable. Teaching from dictionaries assumes that definitions and exemplary sentences are self-contained \"pieces\" of knowledge. But words and sentences are not islands, entire unto themselves. Language use would involve an unremitting confrontation with ambiguity, polysemy, nuance, metaphor, and so forth were these not resolved with the extralinguistic help that the context of an utterance provides (Nunberg, 1978). Prominent among the intricacies of language that depend on extralinguistic help are indexical words --words like I, here, now, next, tomorrow, afterwards, this. Indexical terms are those that \"index\" or, more plainly, point to a part of the situation in which communication is being conducted. They are not merely context-sensitive; they are completely context-dependent. Words like I or now, for instance, can only be interpreted in the context of their use. Surprisingly, all words can be seen as at least partially indexical (Barwise & Perry, 1983). Experienced readers implicitly understand that words are situated. They, therefore, ask for the rest of the sentence or the context before committing themselves to an interpretation of a word. And they go to dictionaries with situated examples of usage in mind. The situation as well as the dictionary supports the interpretation.
But the students who produced the sentences listed had no support from a normal communicative situation. In tasks like theirs, dictionary definitions are assumed to be self-sufficient. The extralinguistic props that would structure, constrain, and ultimately allow interpretation in normal communication are ignored. Learning from dictionaries, like any method that tries to teach abstract concepts independently of authentic situations, overlooks the way understanding is developed through continued, situated use. This development, which involves complex social negotiations, does not crystallize into a categorical definition. Because it is dependent on situations and negotiations, the meaning of a word cannot, in principle, be captured by a definition, even when the definition is supported by a couple of exemplary sentences. All knowledge is, we believe, like language. Its constituent parts index the world and so are inextricably a product of the activity and situations in which they are produced. A concept, for example, will continually evolve with each new occasion of use, because new situations, negotiations, and activities inevitably recast it in a new, more densely textured form. So a concept, like the meaning of a word, is always under construction. This would also appear to be true of apparently well-defined, abstract technical concepts. Even these are not wholly definable and defy categorical description; part of their meaning is always inherited from the context of use. Learning and tools. To explore the idea that concepts are both situated and progressively developed through activity, we should abandon any notion that they are abstract, self-contained entities. Instead, it may be more useful to consider conceptual knowledge as, in some ways, similar to a set of tools. Tools share several significant features with knowledge: They can only be fully understood through use, and using them entails both changing the user's view of the world and adopting the belief system of the culture in which they are used. First, if knowledge is thought of as tools, we can illustrate Whitehead's (1929) distinction between the mere acquisition of inert concepts and the development of useful, robust knowledge. It is quite possible to acquire a tool but to be unable to use it. Similarly, it is common for students to acquire algorithms, routines, and decontextualized definitions that they cannot use and that, therefore, lie inert. Unfortunately, this problem is not always apparent. Old-fashioned pocket knives, for example, have a device for removing stones from horses' hooves. People with this device may know its use and be able to talk wisely about horses, hooves, and stones. But they may never betray --or even recognize --that they would not begin to know how to use this implement on a horse. Similarly, students can often manipulate algorithms, routines, and definitions they have acquired with apparent competence and yet not reveal, to their teachers or themselves, that they would have no idea what to do if they came upon the domain equivalent of a limping horse. People who use tools actively rather than just acquire them, by contrast, build an increasingly rich implicit understanding of the world in which they use the tools and of the tools themselves. The understanding, both of the world and of the tool, continually changes as a result of their interaction. Learning and acting are interestingly indistinct, learning being a continuous, life-long process resulting from acting in situations. 
Learning how to use a tool involves far more than can be accounted for in any set of explicit rules. The occasions and conditions for use arise directly out of the context of activities of each community that uses the tool, framed by the way members of that community see the world. The community and its viewpoint, quite as much as the tool itself, determine how a tool is used. Thus, carpenters and cabinet makers use chisels differently. Because tools and the way they are used reflect the particular accumulated insights of communities, it is not ", "title": "" }, { "docid": "e28ee6e29f61652f752ef311ebb40eaa", "text": "The increasing prevalence of Distributed Denial of Service (DDoS) attacks on the Internet has led to the wide adoption of DDoS Protection Service (DPS), which is typically provided by Content Delivery Networks (CDNs) and is integrated with CDN's security extensions. The effectiveness of DPS mainly relies on hiding the IP address of an origin server and rerouting the traffic to the DPS provider's distributed infrastructure, where malicious traffic can be blocked. In this paper, we perform a measurement study on the usage dynamics of DPS customers and reveal a new vulnerability in DPS platforms, called residual resolution, by which a DPS provider may leak origin IP addresses when its customers terminate the service or switch to other platforms, resulting in the failure of protection from future DPS providers as adversaries are able to discover the origin IP addresses and launch the DDoS attack directly to the origin servers. We identify that two major DPS/CDN providers, Cloudflare and Incapsula, are vulnerable to such residual resolution exposure, and we then assess the magnitude of the problem in the wild. Finally, we discuss the root causes of residual resolution and the practical countermeasures to address this security vulnerability.", "title": "" }, { "docid": "40db41aa0289dbf45bef067f7d3e3748", "text": "Maximum reach envelopes for the 5th, 50th and 95th percentile reach lengths of males and females in seated and standing work positions were determined. The use of a computerized potentiometric measurement system permitted functional reach measurement in 15 min for each subject. The measurement system captured reach endpoints in a dynamic mode while the subjects were describing their maximum reach envelopes. An unbiased estimate of the true reach distances was made through a systematic computerized data averaging process. The maximum reach envelope for the standing position was significantly (p<0.05) larger than the corresponding measure in the seated position for both the males and females. The average reach length of the female was 13.5% smaller than that for the corresponding male. Potential applications of this research include designs of industrial workstations, equipment, tools and products.", "title": "" }, { "docid": "0e6ed8195ef4ebadf86d881770c78137", "text": "In mixed radio-frequency (RF) and digital designs, noise from high-speed digital circuits can interfere with RF receivers, resulting in RF interference issues such as receiver desensitization. In this paper, an effective methodology is proposed to estimate the RF interference received by an antenna due to near-field coupling, which is one of the common noise-coupling mechanisms, using decomposition method based on reciprocity. In other words, the noise-coupling problem is divided into two steps. 
In the first step, the coupling from the noise source to a Huygens surface that encloses the antenna is studied, with the actual antenna structure removed, and the induced tangential electromagnetic fields due to the noise source on this surface are obtained. In the second step, the antenna itself with the same Huygens surface is studied. The antenna is treated as a transmitting one and the induced tangential electromagnetic fields on the surface are obtained. Then, the reciprocity theory is used and the noise power coupled to the antenna port in the original problem is estimated based on the results obtained in the two steps. The proposed methodology is validated through comparisons with full-wave simulations. It fits well with engineering practice, and is particularly suitable for prelayout wireless system design and planning.", "title": "" }, { "docid": "88033862d9fac08702977f1232c91f3a", "text": "Topic modeling based on latent Dirichlet allocation (LDA) has been a framework of choice to deal with multimodal data, such as in image annotation tasks. Another popular approach to model the multimodal data is through deep neural networks, such as the deep Boltzmann machine (DBM). Recently, a new type of topic model called the Document Neural Autoregressive Distribution Estimator (DocNADE) was proposed and demonstrated state-of-the-art performance for text document modeling. In this work, we show how to successfully apply and extend this model to multimodal data, such as simultaneous image classification and annotation. First, we propose SupDocNADE, a supervised extension of DocNADE, that increases the discriminative power of the learned hidden topic features and show how to employ it to learn a joint representation from image visual words, annotation words and class label information. We test our model on the LabelMe and UIUC-Sports data sets and show that it compares favorably to other topic models. Second, we propose a deep extension of our model and provide an efficient way of training the deep model. Experimental results show that our deep model outperforms its shallow version and reaches state-of-the-art performance on the Multimedia Information Retrieval (MIR) Flickr data set.", "title": "" }, { "docid": "a280f710b0e41d844f1b9c76e7404694", "text": "Self-determination theory posits that the degree to which a prosocial act is volitional or autonomous predicts its effect on well-being and that psychological need satisfaction mediates this relation. Four studies tested the impact of autonomous and controlled motivation for helping others on well-being and explored effects on other outcomes of helping for both helpers and recipients. Study 1 used a diary method to assess daily relations between prosocial behaviors and helper well-being and tested mediating effects of basic psychological need satisfaction. Study 2 examined the effect of choice on motivation and consequences of autonomous versus controlled helping using an experimental design. Study 3 examined the consequences of autonomous versus controlled helping for both helpers and recipients in a dyadic task. Finally, Study 4 manipulated motivation to predict helper and recipient outcomes. Findings support the idea that autonomous motivation for helping yields benefits for both helper and recipient through greater need satisfaction. 
Limitations and implications are discussed.", "title": "" }, { "docid": "c2e53358f9d78071fc5204624cf9d6ad", "text": "This paper explores how the adoption of mobile and social computing technologies has impacted upon the way in which we coordinate social group-activities. We present a diary study of 36 individuals that provides an overview of how group coordination is currently performed as well as the challenges people face. Our findings highlight that people primarily use open-channel communication tools (e.g., text messaging, phone calls, email) to coordinate because the alternatives are seen as either disrupting or curbing to the natural conversational processes. Yet the use of open-channel tools often results in conversational overload and a significant disparity of work between coordinating individuals. This in turn often leads to a sense of frustration and confusion about coordination details. We discuss how the findings argue for a significant shift in our thinking about the design of coordination support systems.", "title": "" }, { "docid": "67f13c2b686593398320d8273d53852f", "text": "Drug-drug interactions (DDIs) may cause serious side-effects that draw great attention from both academia and industry. Since some DDIs are mediated by unexpected drug-human protein interactions, it is reasonable to analyze the chemical-protein interactome (CPI) profiles of the drugs to predict their DDIs. Here we introduce the DDI-CPI server, which can make real-time DDI predictions based only on molecular structure. When the user submits a molecule, the server will dock the user's molecule across 611 human proteins, generating a CPI profile that can be used as a feature vector for the pre-constructed prediction model. It can suggest potential DDIs between the user's molecule and our library of 2515 drug molecules. In cross-validation and independent validation, the server achieved an AUC greater than 0.85. Additionally, by investigating the CPI profiles of predicted DDIs, users can explore the PK/PD proteins that might be involved in a particular DDI. A 3D visualization of the drug-protein interaction will be provided as well. The DDI-CPI is freely accessible at http://cpi.bio-x.cn/ddi/.", "title": "" }, { "docid": "09f812cae6c8952d27ef86168906ece8", "text": "Genetic algorithms provide an alternative to traditional optimization techniques by using directed random searches to locate optimal solutions in complex landscapes. We introduce the art and science of genetic algorithms and survey current issues in GA theory and practice. We do not present a detailed study; instead, we offer a quick guide into the labyrinth of GA research. First, we draw the analogy between genetic algorithms and the search processes in nature. Then we describe the genetic algorithm that Holland introduced in 1975 and the workings of GAs. After a survey of techniques proposed as improvements to Holland's GA and of some radically different approaches, we survey the advances in GA theory related to modeling, dynamics, and deception.", "title": "" } ]
scidocsrr
034fd5f04c38a95b847b43254c370df3
Unsupervised Surgical Task Segmentation with Milestone Learning
[ { "docid": "bad378dceb9e4c060fa52acdf328d845", "text": "Autonomous robot execution of surgical sub-tasks has the potential to reduce surgeon fatigue and facilitate supervised tele-surgery. This paper considers the sub-task of surgical debridement: removing dead or damaged tissue fragments to allow the remaining healthy tissue to heal. We present an autonomous multilateral surgical debridement system using the Raven, an open-architecture surgical robot with two cable-driven 7 DOF arms. Our system combines stereo vision for 3D perception with trajopt, an optimization-based motion planner, and model predictive control (MPC). Laboratory experiments involving sensing, grasping, and removal of 120 fragments suggest that an autonomous surgical robot can achieve robustness comparable to human performance. Our robot system demonstrated the advantage of multilateral systems, as the autonomous execution was 1.5× faster with two arms than with one; however, it was two to three times slower than a human. Execution speed could be improved with better state estimation that would allow more travel between MPC steps and fewer MPC replanning cycles. The three primary contributions of this paper are: (1) introducing debridement as a sub-task of interest for surgical robotics, (2) demonstrating the first reliable autonomous robot performance of a surgical sub-task using the Raven, and (3) reporting experiments that highlight the importance of accurate state estimation for future research. Further information including code, photos, and video is available at: http://rll.berkeley.edu/raven.", "title": "" }, { "docid": "951f79f828d3375c7544129cdb575940", "text": "In this paper, we deal with imitation learning of arm movements in humanoid robots. Hidden Markov models (HMM) are used to generalize movements demonstrated to a robot multiple times. They are trained with the characteristic features (key points) of each demonstration. Using the same HMM, key points that are common to all demonstrations are identified; only those are considered when reproducing a movement. We also show how HMM can be used to detect temporal dependencies between both arms in dual-arm tasks. We created a model of the human upper body to simulate the reproduction of dual-arm movements and generate natural-looking joint configurations from tracked hand paths. Results are presented and discussed", "title": "" }, { "docid": "3ff06c4ecf9b8619150c29c9c9a940b9", "text": "It has recently been shown that only a small number of samples from a low-rank matrix are necessary to reconstruct the entire matrix. We bring this to bear on computer vision problems that utilize low-dimensional subspaces, demonstrating that subsampling can improve computation speed while still allowing for accurate subspace learning. We present GRASTA, Grassmannian Robust Adaptive Subspace Tracking Algorithm, an online algorithm for robust subspace estimation from randomly subsampled data. We consider the specific application of background and foreground separation in video, and we assess GRASTA on separation accuracy and computation time. In one benchmark video example [16], GRASTA achieves a separation rate of 46.3 frames per second, even when run in MATLAB on a personal laptop.", "title": "" }, { "docid": "ecd79e88962ca3db82eaf2ab94ecd5f4", "text": "Automatic surgical gesture segmentation and recognition can provide useful feedback for surgical training in robotic surgery. Most prior work in this field relies on the robot's kinematic data. 
Although recent work [1,2] shows that the robot's video data can be equally effective for surgical gesture recognition, the segmentation of the video into gestures is assumed to be known. In this paper, we propose a framework for joint segmentation and recognition of surgical gestures from kinematic and video data. Unlike prior work that relies on either frame-level kinematic cues, or segment-level kinematic or video cues, our approach exploits both cues by using a combined Markov/semi-Markov conditional random field (MsM-CRF) model. Our experiments show that the proposed model improves over a Markov or semi-Markov CRF when using video data alone, gives results that are comparable to state-of-the-art methods on kinematic data alone, and improves over state-of-the-art methods when combining kinematic and video data.", "title": "" } ]
[ { "docid": "fd9d857d7299bf37bba90bf1b5adf300", "text": "How should we assess the comparability of driving on a road and ‘‘driving’’ in a simulator? If similar patterns of behaviour are observed, with similar differences between individuals, then we can conclude that driving in the simulator will deliver representative results and the advantages of simulators (controlled environments, hazardous situations) can be appreciated. To evaluate a driving simulator here we compare hazard detection while driving on roads, while watching short film clips recorded from a vehicle moving through traffic, and while driving through a simulated city in a fully instrumented fixed-base simulator with a 90 degree forward view (plus mirrors) that is under the speed/direction control of the driver. In all three situations we find increased scanning by more experienced and especially professional drivers, and earlier eye fixations on hazardous objects for experienced drivers. This comparability encourages the use of simulators in drivers training and testing. ! 2011 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "c9f9673f3e46bb6fe17075fd212ef3ef", "text": "This paper presents a new version of Dropout called Split Dropout (sDropout) and rotational convolution techniques to improve CNNs’ performance on image classification. The widely used standard Dropout has advantage of preventing deep neural networks from overfitting by randomly dropping units during training. Our sDropout randomly splits the data into two subsets and keeps both rather than discards one subset. We also introduce two rotational convolution techniques, i.e. rotate-pooling convolution (RPC) and flip-rotate-pooling convolution (FRPC) to boost CNNs’ performance on the robustness for rotation transformation. These two techniques encode rotation invariance into the network without adding extra parameters. Experimental evaluations on ImageNet2012 classification task demonstrate that sDropout not only enhances the performance but also converges faster. Additionally, RPC and FRPC make CNNs more robust for rotation transformations. Overall, FRPC together with sDropout bring 1.18% (model of Zeiler and Fergus [24], 10-view, top-1) accuracy increase in ImageNet 2012 classification task compared to the original network.", "title": "" }, { "docid": "a6d26826ee93b3b5dec8282d0c632f8e", "text": "Superficial Acral Fibromyxoma is a rare tumor of soft tissues. It is a relatively new entity described in 2001 by Fetsch et al. It probably represents a fibrohistiocytic tumor with less than 170 described cases. We bring a new case of SAF on the 5th toe of the right foot, in a 43-year-old woman. After surgical excision with safety margins which included the nail apparatus, it has not recurred (22 months of follow up). We carried out a review of the location of all SAF published up to the present day.", "title": "" }, { "docid": "361a340945df32d535fcf92a7288f0fe", "text": "Digital soil mapping has been widely used as a cost-effective method for generating soil maps. However, current DSM data representation rarely incorporate contextual information of the landscape. DSM models are usually calibrated using point observations intersected with spatially corresponding point covariates. Here, we demonstrate the use of the convolutional neural network model that incorporates contextual information surrounding an observation to significantly improve the prediction accuracy over conventional DSM models. 
We describe a convolutional neural network (CNN) model that takes inputs as images of covariates and explores spatial contextual information by finding non-linear local spatial relationships of neighbouring pixels. Unique features of the proposed model include: input represented as a 3D stack of images, data augmentation to reduce overfitting, and simultaneously predicting multiple outputs. Using a soil mapping example in Chile, the CNN model was trained to simultaneously predict soil organic carbon at multiple depths across the country. The results showed the CNN model reduced the error by 30% compared with conventional techniques that only used point information of covariates. In the example of country-wide mapping at 100 m resolution, a neighbourhood size of 3 to 9 pixels is more effective than using a point location or larger neighbourhood sizes. In addition, the CNN model produces less prediction uncertainty and it is able to predict soil carbon at deeper soil layers more accurately. Because the CNN model takes covariates represented as images, it offers a simple and effective framework for future DSM models.", "title": "" }, { "docid": "bc8a6c63603c34587acd4a5b2c2a36e1", "text": "This paper presents the adoption of a new hash algorithm in digital signatures. A digital signature provides a technique to endorse the content of a message, ensuring that the message has not been altered throughout the communication process. Due to this, the receiver's confidence that the message was unchanged is increased. If the message is digitally signed, any changes in the message will invalidate the signature. The comparison of digital signatures based on the Rivest, Shamir and Adleman (RSA) algorithm is summarized. The findings reveal that previous algorithms used large file sizes. Finally, a new encoding and decoding dynamic hash algorithm is proposed for digital signatures. The proposed algorithm significantly reduces the file size (8 bytes) when transferring the message.", "title": "" }, { "docid": "72b67938df75b1668218e290dc2e1478", "text": "Forensic entomology, the use of insects and other arthropods in forensic investigations, is becoming increasingly important in such investigations. To ensure its optimal use by a diverse group of professionals including pathologists, entomologists and police officers, a common frame of guidelines and standards is essential. Therefore, the European Association for Forensic Entomology has developed a protocol document for best practice in forensic entomology, which includes an overview of equipment used for collection of entomological evidence and a detailed description of the methods applied. Together with the definitions of key terms and a short introduction to the most important methods for the estimation of the minimum postmortem interval, the present paper aims to encourage a high level of competency in the field of forensic entomology.", "title": "" }, { "docid": "914a9f6945aab20ece40cb0979126ad9", "text": "Large-scale cover song recognition involves calculating item-to-item similarities that can accommodate differences in timing and tempo, rendering simple Euclidean measures unsuitable. Expensive solutions such as dynamic time warping do not scale to millions of instances, making them inappropriate for commercial-scale applications.
In this work, we transform a beat-synchronous chroma matrix with a 2D Fourier transform and show that the resulting representation has properties that fit the cover song recognition task. We can also apply PCA to efficiently scale comparisons. We report the best results to date on the largest available dataset of around 18,000 cover songs amid one million tracks, giving a mean average precision of 3.0%.", "title": "" }, { "docid": "492f99ab4470578ce8ac207c1da726fe", "text": "In recent years, there has been an intense research effort to understand the cognitive processes and structures underlying expert behaviour. Work in different fields, including scientific domains, sports, games, and mnemonics, has shown that there are vast differences in perceptual abilities between experts and novices, and that these differences may underpin other cognitive differences in learning, memory, and problem solving. In this article, we evaluate the progress made in the last years through the eyes of an outstanding, albeit fictional, expert: Sherlock Holmes. We first use the Sherlock Holmes character to illustrate expert processes as described by current research and theories. In particular, the role of perception, as well as the nature and influence of expert knowledge, are all present in the description of Conan Doyle’s hero. In the second part of the article, we discuss a number of issues that current research on expertise has barely addressed. These gaps include, for example, several forms of reasoning, the influence of emotions on cognition, and the effect of age on experts’ knowledge and cognitive processes. Thus, although nearly 120 years old, Conan Doyle’s books show remarkable illustrations of expert behaviour, including the coverage of themes that have mostly been overlooked by current research.", "title": "" }, { "docid": "4be57bfa4e510cdf0e8ad833034d7fce", "text": "Dynamic data flow tracking (DFT) is a technique broadly used in a variety of security applications that, unfortunately, exhibits poor performance, preventing its adoption in production systems. We present ShadowReplica, a new and efficient approach for accelerating DFT and other shadow memory-based analyses, by decoupling analysis from execution and utilizing spare CPU cores to run them in parallel. Our approach enables us to run a heavyweight technique, like dynamic taint analysis (DTA), twice as fast, while concurrently consuming fewer CPU cycles than when applying it in-line. DFT is run in parallel by a second shadow thread that is spawned for each application thread, and the two communicate using a shared data structure. We avoid the problems suffered by previous approaches, by introducing an off-line application analysis phase that utilizes both static and dynamic analysis methodologies to generate optimized code for decoupling execution and implementing DFT, while it also minimizes the amount of information that needs to be communicated between the two threads. Furthermore, we use a lock-free ring buffer structure and an N-way buffering scheme to efficiently exchange data between threads and maintain high cache-hit rates on multi-core CPUs. Our evaluation shows that ShadowReplica is on average ~2.3× faster than in-line DFT (~2.75× slowdown over native execution) when running the SPEC CPU2006 benchmark, while similar speed ups were observed with command-line utilities and popular server software. 
Astoundingly, ShadowReplica also reduces the CPU cycles used up to 30%.", "title": "" }, { "docid": "6998297aeba2e02133a6d62aa94508be", "text": "License Plate Detection and Recognition System is an image processing technique used to identify a vehicle by its license plate. Here we propose an accurate and robust method of license plate detection and recognition from an image using contour analysis. The system is composed of two phases: the detection of the license plate, and the character recognition. The license plate detection is performed for obtaining the candidate region of the vehicle license plate and determined using the edge based text detection technique. In the recognition phase, the contour analysis is used to recognize the characters after segmenting each character. The performance of the proposed system has been tested on various images and provides better results.", "title": "" }, { "docid": "090f5cb05d2f9d6d2456b3eb02a3a663", "text": "The mesialization of molars in the lower jaw represents a particularly demanding scenario for the quality of orthodontic anchorage. The use of miniscrew implants has proven particularly effective; whereby, these orthodontic implants are either directly loaded (direct anchorage) or employed indirectly to stabilize a dental anchorage block (indirect anchorage). The objective of this study was to analyze the biomechanical differences between direct and indirect anchorage and their effects on the primary stability of the miniscrew implants. For this purpose, several computer-aided design/computer-aided manufacturing (CAD-CAM)-models were prepared from the CT data of a 21-year-old patient, and these were combined with virtually constructed models of brackets, arches, and miniscrew implants. Based on this, four finite element method (FEM) models were generated by three-dimensional meshing. Material properties, boundary conditions, and the quality of applied forces (direction and magnitude) were defined. After solving the FEM equations, strain values were recorded at predefined measuring points. The calculations made using the FEM models with direct and indirect anchorage were statistically evaluated. The loading of the compact bone in the proximity of the miniscrew was clearly greater with direct than it was with indirect anchorage. The more anchor teeth were integrated into the anchoring block with indirect anchorage, the smaller was the peri-implant loading of the bone. Indirect miniscrew anchorage is a reliable possibility to reduce the peri-implant loading of the bone and to reduce the risk of losing the miniscrew. The more teeth are integrated into the anchoring block, the higher is this protective effect. In clinical situations requiring major orthodontic forces, it is better to choose an indirect anchorage in order to minimize the risk of losing the miniscrew.", "title": "" }, { "docid": "aae97dd982300accb15c05f9aa9202cd", "text": "Personal robots and robot technology (RT)-based assistive devices are expected to play a major role in our elderly-dominated society, with an active participation to joint works and community life with humans, as partner and as friends for us. The authors think that the emotion expression of a robot is effective in joint activities of human and robot. In addition, we also think that bipedal walking is necessary to robots which are active in human living environment. But, there was no robot which has those functions. And, it is not clear what kinds of functions are effective actually. 
Therefore we developed a new bipedal walking robot which is capable of expressing emotions. In this paper, we present the design and the preliminary evaluation of the new head of the robot with only a small number of degrees of freedom for facial expression.", "title": "" }, { "docid": "67c8047fbb9e027f92910c4a4f93347a", "text": "Mastocytosis is a rare, heterogeneous disease of complex etiology, characterized by a marked increase in mast cell density in the skin, bone marrow, liver, spleen, gastrointestinal mucosa and lymph nodes. The most frequent site of organ involvement is the skin. Cutaneous lesions include urticaria pigmentosa, mastocytoma, diffuse and erythematous cutaneous mastocytosis, and telangiectasia macularis eruptiva perstans. Human mast cells originate from CD34 progenitors, under the influence of stem cell factor (SCF); a substantial number of patients exhibit activating mutations in c-kit, the receptor for SCF. Mast cells can synthesize a variety of cytokines that could affect the skeletal system, increasing perforating bone resorption and leading to osteoporosis. The coexistence of hematologic disorders, such as myeloproliferative or myelodysplastic syndromes, or of lymphoreticular malignancies, is common. Compared with radiographs, Tc-99m methylenediphosphonate (MDP) scintigraphy is better able to show the widespread skeletal involvement in patients with diffuse disease. T1-weighted MR imaging is a sensitive technique for detecting marrow abnormalities in patients with systemic mastocytosis, showing several different patterns of marrow involvement. We report the imaging findings of a 36-year-old male with well-documented urticaria pigmentosa. In order to evaluate mastocytic bone marrow involvement, 99mTc-MDP scintigraphy, T1-weighted spin echo and short tau inversion recovery MRI at 1.0 T were performed. Both scan findings were consistent with marrow hyperactivity. Thus, the combined use of bone scan and MRI may be useful in order to recognize marrow involvement in suspected systemic mastocytosis, perhaps avoiding bone biopsy.", "title": "" }, { "docid": "47a81a3dc982326877f6d8a15c6ae05b", "text": "Traditional rumor detection methods often rely on statistical analysis to manually select features to construct classifiers. Not only is message feature selection difficult, but the gap between the representation space where the shallow statistical features of information exist and the representation space where the highly abstract features, including semantics and emotion, exist is very big. Thus, the results of traditional classifiers based on shallow or middle-level features are not good. Due to this problem, a rumor detection method based on the Deep Bidirectional Gated Recurrent Unit (D-Bi-GRU) is presented. To capture the evolution of group response information of microblog events over time, we simultaneously consider the forward and backward sequences of the microblog flow of group response information along the timeline. The evolution representations in a deep latent space, including semantics and emotion, are learned by stacked multi-layer Bi-GRUs for rumor detection.
Experimental results on a real-world data set showed that rumor event detection which simultaneously considers the bidirectional sequences of group response information obtains better performance, and that stacked multi-layer Bi-GRUs can better detect rumor events in microblogs.", "title": "" }, { "docid": "2122697f764fbffc588f9a407105c5ba", "text": "Very rare cases of human T cell acute lymphoblastic leukemia (T-ALL) harbor chromosomal translocations that involve NOTCH1, a gene encoding a transmembrane receptor that regulates normal T cell development. Here, we report that more than 50% of human T-ALLs, including tumors from all major molecular oncogenic subtypes, have activating mutations that involve the extracellular heterodimerization domain and/or the C-terminal PEST domain of NOTCH1. These findings greatly expand the role of activated NOTCH1 in the molecular pathogenesis of human T-ALL and provide a strong rationale for targeted therapies that interfere with NOTCH signaling.", "title": "" }, { "docid": "be73344151ac52835ba9307e363f36d9", "text": "BACKGROUND AND OBJECTIVE\nSmoking is the largest preventable cause of death and diseases in the developed world, and advances in modern electronics and machine learning can help us deliver real-time intervention to smokers in novel ways. In this paper, we examine different machine learning approaches to use situational features associated with having or not having urges to smoke during a quit attempt in order to accurately classify high-urge states.\n\n\nMETHODS\nTo test our machine learning approaches, specifically, Bayes, discriminant analysis and decision tree learning methods, we used a dataset collected from over 300 participants who had initiated a quit attempt. The three classification approaches are evaluated observing sensitivity, specificity, accuracy and precision.\n\n\nRESULTS\nThe outcome of the analysis showed that algorithms based on feature selection make it possible to obtain high classification rates with only a few features selected from the entire dataset. The classification tree method outperformed the naive Bayes and discriminant analysis methods, with an accuracy of the classifications up to 86%. These numbers suggest that machine learning may be a suitable approach to deal with smoking cessation matters, and to predict smoking urges, outlining a potential use for mobile health applications.\n\n\nCONCLUSIONS\nIn conclusion, machine learning classifiers can help identify smoking situations, and the search for the best features and classifier parameters significantly improves the algorithms' performance. In addition, this study also supports the usefulness of new technologies in improving the effect of smoking cessation interventions, the management of time and patients by therapists, and thus the optimization of available health care resources. Future studies should focus on providing more adaptive and personalized support to people who really need it, in a minimum amount of time by developing novel expert systems capable of delivering real-time interventions.", "title": "" }, { "docid": "0a557bbd59817ceb5ae34699c72d79ee", "text": "In this paper, we propose a PTS-based approach to solve the high peak-to-average power ratio (PAPR) problem in filter bank multicarrier (FBMC) systems, taking into consideration the prototype filter and the overlap of the symbols in the time domain.
In this approach, we improve the performance of the traditional PTS approach by modifying the choice of the best weighting factors with consideration of the overlap between the present symbol and the past symbols. The simulation results show this approach performs better than the traditional PTS approach in reducing PAPR in FBMC systems.", "title": "" }, { "docid": "8c11b7c29b4f3f4a7fe98b432b97c2b4", "text": "Chromosome 8 is the largest autosome in which mosaic trisomy is compatible with life. Constitutional trisomy 8 (T8) is estimated to occur in approximately 0.1% of all recognized pregnancies. The estimated frequency of trisomy 8 mosaicism (T8M), also known as Warkany syndrome, is about 1/25,000 to 50,000 liveborns, and is found to be more prevalent in males than females, 5:1. T8M is known to demonstrate extreme clinical variability affecting multiple systems including central nervous, ocular, cardiac, gastrointestinal, genitourinary, and musculoskeletal. There appears to be little correlation between the level of mosaicism and the extent of the clinical phenotype. Additionally, the exact mechanism that causes the severity of phenotype in patients with T8M remains unknown. We report on a mildly dysmorphic male patient with partial low-level T8M due to a pseudoisodicentric chromosome 8 with normal 6.0 SNP microarray and high resolution chromosome analyses in lymphocytes. The aneuploidy was detected in fibroblasts and confirmed by FISH in lymphocytes. This report elaborates further the clinical variability seen in trisomy 8 mosaicism.", "title": "" }, { "docid": "2559eeb2a4f2f58f82d215de134f32be", "text": "We propose FCLT – a fully-correlational long-term tracker. The two main components of FCLT are a short-term tracker which localizes the target in each frame and a detector which re-detects the target when it is lost. Both the short-term tracker and the detector are based on correlation filters. The detector exploits properties of the recent constrained filter learning and is able to re-detect the target in the whole image efficiently. A failure detection mechanism based on correlation response quality is proposed. The FCLT is tested on recent short-term and long-term benchmarks. It achieves state-of-the-art results on the short-term benchmarks and it outperforms the current best-performing tracker on the long-term benchmark by over 18%.", "title": "" } ]
scidocsrr
cd036da5d6036ba4781cb1791e82f40e
Choice and ego-depletion: the moderating role of autonomy.
[ { "docid": "99f52d6a7412060231a0bfe1d5dcea0d", "text": "The concepts of self-regulation and autonomy are examined within an organizational framework. We begin by retracing the historical origins of the organizational viewpoint in early debates within the field of biology between vitalists and reductionists, from which the construct of self-regulation emerged. We then consider human autonomy as an evolved behavioral, developmental, and experiential phenomenon that operates at both neurobiological and psychological levels and requires very specific supports within higher order social organizations. We contrast autonomy or true self-regulation with controlling regulation (a nonautonomous form of intentional behavior) in phenomenological and functional terms, and we relate the forms of regulation to the developmental processes of intrinsic motivation and internalization. Subsequently, we describe how self-regulation versus control may be characterized by distinct neurobiological underpinnings, and we speculate about some of the adaptive advantages that may underlie the evolution of autonomy. Throughout, we argue that disturbances of autonomy, which have both biological and psychological etiologies, are central to many forms of psychopathology and social alienation.", "title": "" } ]
[ { "docid": "5e453defd762bb4ecfae5dcd13182b4a", "text": "We present a comprehensive lifetime prediction methodology for both intrinsic and extrinsic Time-Dependent Dielectric Breakdown (TDDB) failures to provide adequate Design-for-Reliability. For intrinsic failures, we propose applying the √E model and estimating the Weibull slope using dedicated single-via test structures. This effectively prevents lifetime underestimation, and thus relaxes design restrictions. For extrinsic failures, we propose applying the thinning model and Critical Area Analysis (CAA). In the thinning model, random defects reduce effective spaces between interconnects, causing TDDB failures. We can quantify the failure probabilities by using CAA for any design layouts of various LSI products.", "title": "" }, { "docid": "ee5b46719023b5dbae96997bbf9925b0", "text": "The teaching of reading in different languages should be informed by an effective evidence base. Although most children will eventually become competent, indeed skilled, readers of their languages, the pre-reading (e.g. phonological awareness) and language skills that they bring to school may differ in systematic ways for different language environments. A thorough understanding of potential differences is required if literacy teaching is to be optimized in different languages. Here we propose a theoretical framework based on a psycholinguistic grain size approach to guide the collection of evidence in different countries. We argue that the development of reading depends on children's phonological awareness in all languages studied to date. However, we propose that because languages vary in the consistency with which phonology is represented in orthography, there are developmental differences in the grain size of lexical representations, and accompanying differences in developmental reading strategies across orthographies.", "title": "" }, { "docid": "e2d25382acd23c9431ccd3905d8bf13a", "text": "Temporal segmentation of human motion into plausible motion primitives is central to understanding and building computational models of human motion. Several issues contribute to the challenge of discovering motion primitives: the exponential nature of all possible movement combinations, the variability in the temporal scale of human actions, and the complexity of representing articulated motion. We pose the problem of learning motion primitives as one of temporal clustering, and derive an unsupervised hierarchical bottom-up framework called hierarchical aligned cluster analysis (HACA). HACA finds a partition of a given multidimensional time series into m disjoint segments such that each segment belongs to one of k clusters. HACA combines kernel k-means with the generalized dynamic time alignment kernel to cluster time series data. Moreover, it provides a natural framework to find a low-dimensional embedding for time series. HACA is efficiently optimized with a coordinate descent strategy and dynamic programming. Experimental results on motion capture and video data demonstrate the effectiveness of HACA for segmenting complex motions and as a visualization tool. We also compare the performance of HACA to state-of-the-art algorithms for temporal clustering on data of a honey bee dance. The HACA code is available online.", "title": "" }, { "docid": "8b4b8c7bff6a6351edbae640a28bbed4", "text": "Hardware Trojans recently emerged as a serious issue for computer systems, especially for those used in critical applications such as medical or military. 
Trojans proposed so far can affect the reliability of a device in various ways. Proposed effects range from the leakage of secret information to the complete malfunctioning of the device. A crucial point for securing the overall operation of a device is to guarantee the absence of hardware Trojans. In this paper, we survey several techniques for detecting malicious modifications of circuits introduced at different phases of the design flow. We also highlight their capabilities and limitations in thwarting hardware Trojans.", "title": "" }, { "docid": "7dda8adb207e69ccbc52ce0497d3f5d4", "text": "Statistics from security firms, research institutions and government organizations show that the number of data-leak instances has grown rapidly in recent years. Among various data-leak cases, human mistakes are one of the main causes of data loss. There exist solutions that detect inadvertent sensitive data leaks caused by human mistakes and provide alerts for organizations. A common approach is to screen content in storage and transmission for exposed sensitive information. Such an approach usually requires the detection operation to be conducted in secrecy. However, this secrecy requirement is challenging to satisfy in practice, as detection servers may be compromised or outsourced. In this paper, we present a privacy-preserving data-leak detection (DLD) solution to solve the issue where a special set of sensitive data digests is used in detection. The advantage of our method is that it enables the data owner to safely delegate the detection operation to a semihonest provider without revealing the sensitive data to the provider. We describe how Internet service providers can offer their customers DLD as an add-on service with strong privacy guarantees. The evaluation results show that our method can support accurate detection with a very small number of false alarms under various data-leak scenarios.", "title": "" }, { "docid": "0b10bd76d0d78e609c6397b60257a2ed", "text": "The persistent increase in the world's population is demanding a greater supply of food. Hence there is a significant need for advances in cultivation to meet future food needs. It is important to know moisture levels in soil to maximize the output, but most farmers cannot afford high-cost devices to measure soil moisture. Our research work in this paper focuses on an accurate, home-made, low-cost moisture sensor. We present a method to manufacture a soil moisture sensor that estimates the moisture content in soil, thereby providing information about the water supply required for good cultivation. This sensor was tested with several samples of soil and is able to achieve considerable accuracy. Measuring soil moisture is an effective way to determine the condition of the soil and obtain information about the quantity of water that needs to be supplied for cultivation. Two separate methods are illustrated in this paper to determine soil moisture over an area and along the depth.", "title": "" }, { "docid": "e024246deed46b3166a466d2d5ee3214", "text": "INTRODUCTION\nThis study reports on the development of a new measure of hostile social-cognitive biases for use in paranoia research, the Ambiguous Intentions Hostility Questionnaire (AIHQ). The AIHQ is comprised of a variety of negative situations that differ in terms of intentionality.
Items were developed to reflect causes that were ambiguous, intentional, and accidental in nature.\n\n\nMETHODS\nParticipants were 322 college students who completed the AIHQ along with measures of paranoia, hostility, attributional style, and psychosis proneness. The reliability and validity of the AIHQ was evaluated using both correlational and multiple regression methods.\n\n\nRESULTS\nThe AIHQ had good levels of reliability (internal consistency and interrater reliability). The AIHQ was positively correlated with paranoia and hostility and was not correlated with measures of psychosis proneness, which supported the convergent and discriminant validity of the scale. In addition, the AIHQ predicted incremental variance in paranoia scores as compared to the attributional, hostility, and psychosis proneness measures. Ambiguous items showed the most consistent relationships with paranoia.\n\n\nCONCLUSIONS\nThe AIHQ appears to be a reliable and valid measure of hostile social cognitive biases in paranoia. Recommendations for using the AIHQ in the study of paranoia are discussed.", "title": "" }, { "docid": "57d1648391cac4ccfefd85aacef6b5ba", "text": "Competition in the wireless telecommunications industry is fierce. To maintain profitability, wireless carriers must control churn, which is the loss of subscribers who switch from one carrier to another.We explore techniques from statistical machine learning to predict churn and, based on these predictions, to determine what incentives should be offered to subscribers to improve retention and maximize profitability to the carrier. The techniques include logit regression, decision trees, neural networks, and boosting. Our experiments are based on a database of nearly 47,000 U.S. domestic subscribers and includes information about their usage, billing, credit, application, and complaint history. Our experiments show that under a wide variety of assumptions concerning the cost of intervention and the retention rate resulting from intervention, using predictive techniques to identify potential churners and offering incentives can yield significant savings to a carrier. We also show the importance of a data representation crafted by domain experts. Finally, we report on a real-world test of the techniques that validate our simulation experiments.", "title": "" }, { "docid": "94c9eec9aa4f36bf6b2d83c3cc8dbb12", "text": "Many real world security problems can be modelled as finite zero-sum games with structured sequential strategies and limited interactions between the players. An abstract class of games unifying these models are the normal-form games with sequential strategies (NFGSS). We show that all games from this class can be modelled as well-formed imperfect-recall extensiveform games and consequently can be solved by counterfactual regret minimization. We propose an adaptation of the CFR algorithm for NFGSS and compare its performance to the standard methods based on linear programming and incremental game generation. We validate our approach on two security-inspired domains. We show that with a negligible loss in precision, CFR can compute a Nash equilibrium with five times less computation than its competitors. Game theory has been recently used to model many real world security problems, such as protecting airports (Pita et al. 2008) or airplanes (Tsai et al. 2009) from terrorist attacks, preventing fare evaders form misusing public transport (Yin et al. 2012), preventing attacks in computer networks (Durkota et al. 
2015), or protecting wildlife from poachers (Fang, Stone, and Tambe 2015). Many of these security problems are sequential in nature. Rather than a single monolithic action, the players' strategies are formed by sequences of smaller individual decisions. For example, the ticket inspectors make a sequence of decisions about where to check tickets and which train to take; a network administrator protects the network against a sequence of actions an attacker uses to penetrate deeper into the network. Sequential decision making in games has been extensively studied from various perspectives. Recent years have brought significant progress in solving massive imperfect-information extensive-form games with a focus on the game of poker. Counterfactual regret minimization (Zinkevich et al. 2008) is the family of algorithms that has facilitated much of this progress, with a recent incarnation (Tammelin et al. 2015) essentially solving for the first time a variant of poker commonly played by people (Bowling et al. 2015). However, there has not been any transfer of these results to research on real world security problems. We focus on an abstract class of sequential games that can model many sequential security games, such as games taking place in physical space that can be discretized as a graph. This class of games is called normal-form games with sequential strategies (NFGSS) (Bosansky et al. 2015) and it includes, for example, existing game theoretic models of ticket inspection (Jiang et al. 2013), border patrolling (Bosansky et al. 2015), and securing road networks (Jain et al. 2011). In this work we formally prove that any NFGSS can be modelled as a slightly generalized chance-relaxed skew well-formed imperfect-recall game (CRSWF) (Lanctot et al. 2012; Kroer and Sandholm 2014), a subclass of extensive-form games with imperfect recall in which counterfactual regret minimization is guaranteed to converge to the optimal strategy. We then show how to adapt the recent variant of the algorithm, CFR, directly to NFGSS and present experimental validation on two distinct domains modelling search games and ticket inspection. We show that CFR is applicable and efficient in domains with imperfect recall that are substantially different from poker. Moreover, if we are willing to sacrifice a negligible degree of approximation, CFR can find a solution substantially faster than methods traditionally used in research on security games, such as formulating the game as a linear program (LP) and incrementally building the game model by double oracle methods.", "title": "" }, { "docid": "ca807d3bed994a8e7492898e6bfe6dd2", "text": "This paper proposes a state-of-charge (SOC) and remaining-charge estimation algorithm for each cell in series-connected lithium-ion batteries. SOC and remaining-charge information are indicators for diagnosing cell-to-cell variation; thus, the proposed algorithm can be applied to SOC- or charge-based balancing in a cell balancing controller. Compared to voltage-based balancing, SOC and remaining-charge information improve the performance of the balancing circuit but increase computational complexity, which is a stumbling block in implementation. In this work, a simple current sensor-less SOC estimation algorithm with an estimated current equalizer is used to achieve the aforementioned objective.
To check the characteristics and validate the feasibility of the proposed method, a constant current discharging/charging profile is applied to a series-connected battery pack (twelve 2.6Ah Li-ion batteries). The experimental results show its applicability to SOC- and remaining charge-based balancing controller with high estimation accuracy.", "title": "" }, { "docid": "74949417ff2ba47f153e05aac587e0dc", "text": "This review examines the descriptive epidemiology, and risk and protective factors for youth suicide and suicidal behavior. A model of youth suicidal behavior is articulated, whereby suicidal behavior ensues as a result of an interaction of socio-cultural, developmental, psychiatric, psychological, and family-environmental factors. On the basis of this review, clinical and public health approaches to the reduction in youth suicide and recommendations for further research will be discussed.", "title": "" }, { "docid": "b9ae6a5e5a0626db08a59d39220e9749", "text": "The paper describes the architecture of SCIT supercomputer system of cluster type and the base architecture features used during this research project. This supercomputer system is put into research operation in Glushkov Institute of Cybernetics NAS of Ukraine from the early 2006 year. The paper may be useful for those scientists and engineers that are practically engaged in a cluster supercomputer systems design, integration and services.", "title": "" }, { "docid": "a0fb601da8e6b79d4a876730cfee4271", "text": "Social media platforms provide an inexpensive communication medium that allows anyone to publish content and anyone interested in the content can obtain it. However, this same potential of social media provide space for discourses that are harmful to certain groups of people. Examples of these discourses include bullying, offensive content, and hate speech. Out of these discourses hate speech is rapidly recognized as a serious problem by authorities of many countries. In this paper, we provide the first of a kind systematic large-scale measurement and analysis study of explicit expressions of hate speech in online social media. We aim to understand the abundance of hate speech in online social media, the most common hate expressions, the effect of anonymity on hate speech, the sensitivity of hate speech and the most hated groups across regions. In order to achieve our objectives, we gather traces from two social media systems: Whisper and Twitter. We then develop and validate a methodology to identify hate speech on both of these systems. Our results identify hate speech forms and unveil a set of important patterns, providing not only a broader understanding of online hate speech, but also offering directions for detection and prevention approaches.", "title": "" }, { "docid": "78007b3276e795d76b692b40c4808c51", "text": "The construct of trait emotional intelligence (trait EI or trait emotional self-efficacy) provides a comprehensive operationalization of emotion-related self-perceptions and dispositions. In the first part of the present study (N=274, 92 males), we performed two joint factor analyses to determine the location of trait EI in Eysenckian and Big Five factor space. The results showed that trait EI is a compound personality construct located at the lower levels of the two taxonomies. 
In the second part of the study, we performed six two-step hierarchical regressions to investigate the incremental validity of trait EI in predicting, over and above the Giant Three and Big Five personality dimensions, six distinct criteria (life satisfaction, rumination, two adaptive and two maladaptive coping styles). Trait EI incrementally predicted four criteria over the Giant Three and five criteria over the Big Five. The discussion addresses common questions about the operationalization of emotional intelligence as a personality trait.", "title": "" }, { "docid": "2b4d85ad7ec9bbb3b2b964d1552b3006", "text": "The transmission of pain from peripheral tissues through the spinal cord to the higher centres of the brain is clearly not a passive simple process using exclusive pathways. Rather, circuitry within the spinal cord has the potential to alter, dramatically, the relation between the stimulus and the response to pain in an individual. Thus an interplay between spinal neuronal systems, both excitatory and inhibitory, will determine the messages delivered to higher levels of the central nervous system. The incoming messages may be attenuated or enhanced, changes which may be determined by the particular circumstances. The latter state, termed central hypersensitivity [61], whereby low levels of afferent activity are amplified by spinal pharmacological mechanisms has attracted much attention [13, 15]. However, additionally, inhibitory controls are subject to alteration so that opioid sensitivity in different pain states is not fixed [14]. This plasticity, the capacity for transmission in nociceptive systems to change, can be induced over very short time courses. Recent research on the pharmacology of nociception has started to shed some well-needed light on this rapid plasticity which could have profound consequences for the pharmacological treatment of pain [8, 13, 15, 23, 24, 35, 36, 41, 62]. The pharmacology of the sensory neurones in the dorsal horn of the spinal cord is complex, so much so that most of the candidate neurotransmitters and their receptors found in the CNS are also found here [4, 32]. The transmitters are derived from either the afferent fibres, intrinsic neurones or descending fibres. The majority of the transmitters and receptors are concentrated in the substantia gelatinosa, one of the densest neuronal areas in the CNS and crucial for the reception and modulation of nociceptive messages transmitted via the peripheral fibres [4]. Nociceptive C-fibres terminate in the outer lamina 1 and the underlying substantia gelatinosa, whereas the large tactile fibres terminate in deeper laminae. However, in addition to the lamina 1 cells which send long ascending axons to the brain, deep dorsal horn cells also give rise to ascending axons and respond to C-fibre stimulation. In the case of these deep cells the C-fibre input may be relayed via", "title": "" }, { "docid": "ddfd1bc1ca748bee286df92f8850286c", "text": "The rapid growth of Location-based Social Networks (LBSNs) provides a great opportunity to satisfy the strong demand for personalized Point-of-Interest (POI) recommendation services. However, with the tremendous increase of users and POIs, POI recommender systems still face several challenging problems: (1) the hardness of modeling complex user-POI interactions from sparse implicit feedback; (2) the difficulty of incorporating the geographical context information. 
To cope with these challenges, we propose a novel autoencoder-based model to learn the complex user-POI relations, namely SAE-NAD, which consists of a self-attentive encoder (SAE) and a neighbor-aware decoder (NAD). In particular, unlike previous works equally treat users' checked-in POIs, our self-attentive encoder adaptively differentiates the user preference degrees in multiple aspects, by adopting a multi-dimensional attention mechanism. To incorporate the geographical context information, we propose a neighbor-aware decoder to make users' reachability higher on the similar and nearby neighbors of checked-in POIs, which is achieved by the inner product of POI embeddings together with the radial basis function (RBF) kernel. To evaluate the proposed model, we conduct extensive experiments on three real-world datasets with many state-of-the-art methods and evaluation metrics. The experimental results demonstrate the effectiveness of our model.", "title": "" }, { "docid": "1e8acf321f7ff3a1a496e4820364e2a8", "text": "The liver is a central regulator of metabolism, and liver failure thus constitutes a major health burden. Understanding how this complex organ develops during embryogenesis will yield insights into how liver regeneration can be promoted and how functional liver replacement tissue can be engineered. Recent studies of animal models have identified key signaling pathways and complex tissue interactions that progressively generate liver progenitor cells, differentiated lineages and functional tissues. In addition, progress in understanding how these cells interact, and how transcriptional and signaling programs precisely coordinate liver development, has begun to elucidate the molecular mechanisms underlying this complexity. Here, we review the lineage relationships, signaling pathways and transcriptional programs that orchestrate hepatogenesis.", "title": "" }, { "docid": "065fc50e811af9a7080486eaf852ae3f", "text": "While deep convolutional neural networks have shown a remarkable success in image classification, the problems of inter-class similarities, intra-class variances, the effective combination of multi-modal data, and the spatial variability in images of objects remain to be major challenges. To address these problems, this paper proposes a novel framework to learn a discriminative and spatially invariant classification model for object and indoor scene recognition using multi-modal RGB-D imagery. 
This is achieved through three postulates: 1) spatial invariance - this is achieved by combining a spatial transformer network with a deep convolutional neural network to learn features which are invariant to spatial translations, rotations, and scale changes, 2) high discriminative capability - this is achieved by introducing Fisher encoding within the CNN architecture to learn features which have small inter-class similarities and large intra-class compactness, and 3) multi-modal hierarchical fusion - this is achieved through the regularization of semantic segmentation to a multi-modal CNN architecture, where class probabilities are estimated at different hierarchical levels (i.e., image- and pixel-levels), and fused into a Conditional Random Field (CRF)-based inference hypothesis, the optimization of which produces consistent class labels in RGB-D images. Extensive experimental evaluations on RGB-D object and scene datasets, and live video streams (acquired from Kinect) show that our framework produces superior object and scene classification results compared to the state-of-the-art methods.", "title": "" }, { "docid": "4a817638751fdfe46dfccc43eea76cbd", "text": "In this article we present a classification scheme for quantum computing technologies that is based on the characteristics most relevant to computer systems architecture. The engineering trade-offs of execution speed, decoherence of the quantum states, and size of systems are described. Concurrency, storage capacity, and interconnection network topology influence algorithmic efficiency, while quantum error correction and necessary quantum state measurement are the ultimate drivers of logical clock speed. We discuss several proposed technologies. Finally, we use our taxonomy to explore architectural implications for common arithmetic circuits, examine the implementation of quantum error correction, and discuss cluster-state quantum computation.", "title": "" } ]
scidocsrr
79a8b4f2ab81b9bd36b436e347f80ed7
Consumer perception of interface quality, security, and loyalty in electronic commerce
[ { "docid": "b44600830a6aacd0a1b7ec199cba5859", "text": "Existing e-service quality scales mainly focus on goal-oriented e-shopping behavior excluding hedonic quality aspects. As a consequence, these scales do not fully cover all aspects of consumer's quality evaluation. In order to integrate both utilitarian and hedonic e-service quality elements, we apply a transaction process model to electronic service encounters. Based on this general framework capturing all stages of the electronic service delivery process, we develop a transaction process-based scale for measuring service quality (eTransQual). After conducting exploratory and confirmatory factor analysis, we identify five discriminant quality dimensions: functionality/design, enjoyment, process, reliability and responsiveness. All extracted dimensions of eTransQual show a significant positive impact on important outcome variables like perceived value and customer satisfaction. Moreover, enjoyment is a dominant factor in influencing both relationship duration and repurchase intention as major drivers of customer lifetime value. As a result, we present conceptual and empirical evidence for the need to integrate both utilitarian and hedonic e-service quality elements into one measurement scale. © 2006 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "3da6fadaf2363545dfd0cea87fe2b5da", "text": "It is a marketplace reality that marketing managers sometimes inflict switching costs on their customers, to inhibit them from defecting to new suppliers. In a competitive setting, such as the Internet market, where competition may be only one click away, has the potential of switching costs as an exit barrier and a binding ingredient of customer loyalty become altered? To address that issue, this article examines the moderating effects of switching costs on customer loyalty through both satisfaction and perceived-value measures. The results, evoked from a Web-based survey of online service users, indicate that companies that strive for customer loyalty should focus primarily on satisfaction and perceived value. The moderating effects of switching costs on the association of customer loyalty and customer satisfaction and perceived value are significant only when the level of customer satisfaction or perceived value is above average. In light of the major findings, the article sets forth strategic implications for customer loyalty in the setting of electronic commerce. © 2004 Wiley Periodicals, Inc. In the consumer marketing community, customer loyalty has long been regarded as an important goal (Reichheld & Schefter, 2000). Both marketing academics and professionals have attempted to uncover the most prominent antecedents of customer loyalty. Numerous studies have Psychology & Marketing, Vol. 21(10):799–822 (October 2004) Published online in Wiley InterScience (www.interscience.wiley.com) © 2004 Wiley Periodicals, Inc. DOI: 10.1002/mar.20030", "title": "" }, { "docid": "5542f4693a4251edcf995e7608fbda56", "text": "This paper investigates the antecedents and consequences of customer loyalty in an online business-to-consumer (B2C) context. We identify eight factors (the 8Cs—customization, contact interactivity, care, community, convenience, cultivation, choice, and character) that potentially impact e-loyalty and develop scales to measure these factors. Data collected from 1,211 online customers demonstrate that all these factors, except convenience, impact e-loyalty. 
The data also reveal that e-loyalty has an impact on two customer-related outcomes: word-of-mouth promotion and willingness to pay more. © 2002 by New York University. All rights reserved.", "title": "" } ]
[ { "docid": "18e2a2c5c213ae1e0e73f0fca3243d55", "text": "In the past 20 years we have learned a great deal about GABAA receptor (GABAAR) subtypes, and which behaviors are regulated or which drug effects are mediated by each subtype. However, the question of where GABAARs involved in specific drug effects and behaviors are located in the brain remains largely unanswered. We review here recent studies taking a circuit pharmacology approach to investigate the functions of GABAAR subtypes in specific brain circuits controlling fear, anxiety, learning, memory, reward, addiction, and stress-related behaviors. The findings of these studies highlight the complexity of brain inhibitory systems and the importance of taking a subtype-, circuit-, and neuronal population-specific approach to develop future therapeutic strategies using cell type-specific drug delivery.", "title": "" }, { "docid": "80a5eaec904b8412cebfe17e392e448a", "text": "Distributional semantic models learn vector representations of words through the contexts they occur in. Although the choice of context (which often takes the form of a sliding window) has a direct influence on the resulting embeddings, the exact role of this model component is still not fully understood. This paper presents a systematic analysis of context windows based on a set of four distinct hyperparameters. We train continuous SkipGram models on two English-language corpora for various combinations of these hyper-parameters, and evaluate them on both lexical similarity and analogy tasks. Notable experimental results are the positive impact of cross-sentential contexts and the surprisingly good performance of right-context windows.", "title": "" }, { "docid": "98d3dddfca32c442f6b7c0a6da57e690", "text": "Learning an interpretable factorised representation of the independent data generative factors of the world without supervision is an important precursor for the development of artificial intelligence that is able to learn and reason in the same way that humans do. We introduce β-VAE, a new state-of-the-art framework for automated discovery of interpretable factorised latent representations from raw image data in a completely unsupervised manner. Our approach is a modification of the variational autoencoder (VAE) framework. We introduce an adjustable hyperparameter β that balances latent channel capacity and independence constraints with reconstruction accuracy. We demonstrate that β-VAE with appropriately tuned β > 1 qualitatively outperforms VAE (β = 1), as well as state of the art unsupervised (InfoGAN) and semi-supervised (DC-IGN) approaches to disentangled factor learning on a variety of datasets (celebA, faces and chairs). Furthermore, we devise a protocol to quantitatively compare the degree of disentanglement learnt by different models, and show that our approach also significantly outperforms all baselines quantitatively. Unlike InfoGAN, β-VAE is stable to train, makes few assumptions about the data and relies on tuning a single hyperparameter β, which can be directly optimised through a hyperparameter search using weakly labelled data or through heuristic visual inspection for purely unsupervised data.", "title": "" }, { "docid": "8dba7b19c15cbb04965ac483b7660ec9", "text": "Deep Belief Networks (DBN) have been successful in classification especially image recognition tasks. However, the performance of a DBN is often highly dependent on settings in particular the combination of runtime parameter values. 
In this work, we propose a hyper-heuristic based framework which can optimise DBNs independently of the problem domain. This is the first time a hyper-heuristic has been applied in this domain. The framework iteratively selects suitable heuristics from a heuristic set and applies them to tune the DBN to better fit the current search space. Under this framework the setting of DBN learning is adaptive. Three well-known image reconstruction benchmark sets were used for evaluating the performance of this new approach. Our experimental results show this hyper-heuristic approach can achieve high accuracy under different scenarios on diverse image sets. In addition, state-of-the-art meta-heuristic methods for tuning DBNs were introduced for comparison. The results illustrate that our hyper-heuristic approach can obtain better performance on almost all test cases.", "title": "" }, { "docid": "5b630705e4f90e1e845ff81df079cf13", "text": "Feature selection and extraction are frequently used solutions to overcome the curse of dimensionality in text classification problems. We introduce an extraction method that summarizes the features of the document samples, where the new features aggregate information about how much evidence there is in a document, for each class. We project the high dimensional features of documents onto a new feature space having dimensions equal to the number of classes in order to form the abstract features. We test our method on 7 different text classification algorithms, with different classifier design approaches. We examine performances of the classifiers applied on standard text categorization test collections and show the enhancements achieved by applying our extraction method. We compare the classification performance results of our method with popular and well-known feature selection and feature extraction schemes. Results show that our summarizing abstract feature extraction method encouragingly enhances classification performances on most of the classifiers when compared with other methods.
", "title": "" }, { "docid": "28d75588fdb4ff45929da124b001e8cc", "text": "We present a novel training framework for neural sequence models, particularly for grounded dialog generation. The standard training paradigm for these models is maximum likelihood estimation (MLE), or minimizing the cross-entropy of the human responses. Across a variety of domains, a recurring problem with MLE trained generative neural dialog models (G) is that they tend to produce ‘safe’ and generic responses (‘I don’t know’, ‘I can’t tell’). In contrast, discriminative dialog models (D) that are trained to rank a list of candidate human responses outperform their generative counterparts; in terms of automatic metrics, diversity, and informativeness of the responses. However, D is not useful in practice since it can not be deployed to have real conversations with users. Our work aims to achieve the best of both worlds – the practical usefulness of G and the strong performance of D – via knowledge transfer from D to G. Our primary contribution is an end-to-end trainable generative visual dialog model, where G receives gradients from D as a perceptual (not adversarial) loss of the sequence sampled from G. We leverage the recently proposed Gumbel-Softmax (GS) approximation to the discrete distribution – specifically, a RNN augmented with a sequence of GS samplers, coupled with the straight-through gradient estimator to enable end-to-end differentiability. We also introduce a stronger encoder for visual dialog, and employ a self-attention mechanism for answer encoding along with a metric learning loss to aid D in better capturing semantic similarities in answer responses. Overall, our proposed model outperforms state-of-the-art on the VisDial dataset by a significant margin (2.67% on recall@10). The source code can be downloaded from https://github.com/jiasenlu/visDial.pytorch", "title": "" }, { "docid": "1cdd599b49d9122077a480a75391aae8", "text": "Two aspects of children's early gender development-the spontaneous production of gender labels and gender-typed play-were examined longitudinally in a sample of 82 children. Survival analysis, a statistical technique well suited to questions involving developmental transitions, was used to investigate the timing of the onset of children's gender labeling as based on mothers' biweekly telephone interviews regarding their children's language from 9 through 21 months. Videotapes of children's play both alone and with mother during home visits at 17 and 21 months were independently analyzed for play with gender-stereotyped and gender-neutral toys. Finally, the relation between gender labeling and gender-typed play was examined. Children transitioned to using gender labels at approximately 19 months, on average. Although girls and boys showed similar patterns in the development of gender labeling, girls began labeling significantly earlier than boys. Modest sex differences in play were present at 17 months and increased at 21 months.
Gender labeling predicted increases in gender-typed play, suggesting that knowledge of gender categories might influence gender typing before the age of 2.", "title": "" }, { "docid": "0506a7f5dddf874487c90025dff0bc7d", "text": "This paper presents a low-power decision-feedback equalizer (DFE) receiver front-end and a two-step minimum bit-error-rate (BER) adaptation algorithm. A high energy efficiency of 0.46 mW/Gbps is made possible by the combination of a direct-feedback finite-impulse-response (FIR) DFE, an infinite-impulse-response (IIR) DFE, and a clock-and-data recovery (CDR) circuit with adjustable timing offsets. Based on this architecture, the power-hungry stages used in prior DFE receivers such as the continuous-time linear equalizer (CTLE), the current-mode summing circuit for a multitap DFE, and the fast selection logic for a loop-unrolling DFE can all be removed. A two-step adaptation algorithm that finds the equalizer coefficients minimizing the BER is described. First, an extra data sampler with adjustable voltage and timing offsets measures the single-bit response (SBR) of the channel and coarsely tunes the initial coefficient values in the foreground. Next, the same circuit measures the eye-opening and bit-error rates and fine tunes the coefficients in background using a stochastic hill-climbing algorithm. A prototype DFE receiver fabricated in a 65-nm LP/RF CMOS dissipates 2.3 mW and demonstrates measured eye-opening values of 174 mV pp and 0.66 UIpp while operating at 5 Gb/s with a -15-dB loss channel.", "title": "" }, { "docid": "34992b86a8ac88c5f5bbca770954ae61", "text": "Entity search over text corpora is not geared for relationship queries where answers are tuples of related entities and where a query often requires joining cues from multiple documents. With large knowledge graphs, structured querying on their relational facts is an alternative, but often suffers from poor recall because of mismatches between user queries and the knowledge graph or because of weakly populated relations.\n This paper presents the TriniT search engine for querying and ranking on extended knowledge graphs that combine relational facts with textual web contents. Our query language is designed on the paradigm of SPO triple patterns, but is more expressive, supporting textual phrases for each of the SPO arguments. We present a model for automatic query relaxation to compensate for mismatches between the data and a user's query. Query answers -- tuples of entities -- are ranked by a statistical language model. We present experiments with different benchmarks, including complex relationship queries, over a combination of the Yago knowledge graph and the entity-annotated ClueWeb'09 corpus.", "title": "" }, { "docid": "fbbc7080f9c235c3f696f6fb78714771", "text": "Powered exoskeletons can provide motion enhancement for both healthy and physically challenged people. Compared with lower limb exoskeletons, upper limb exoskeletons are required to have multiple degrees-of-freedom and can still produce sufficient augmentation force. Designs using serial mechanisms usually result in complicated and bulky exoskeletons that prevent themselves from being wearable. This paper presents a new exoskeleton aimed to achieve compactness and wearability. We consider a shoulder exoskeleton that consists of two spherical mechanisms with two slider crank mechanisms. The actuators can be made stationary and attached side-by-side, close to a human body. 
Thus better inertia properties can be obtained while maintaining lightweight. The dimensions of the exoskeleton are synthesized to achieve maximum output force. Through illustrations of a prototype, the exoskeleton is shown to be wearable and can provide adequate motion enhancement of a human's upper limb.", "title": "" }, { "docid": "79685eeb67edbb3fbb6e6340fac420c3", "text": "Fatma Özcan IBM Almaden Research Center San Jose, CA [email protected] Nesime Tatbul Intel Labs and MIT Cambridge, MA [email protected] Daniel J. Abadi Yale University New Haven, CT [email protected] Marcel Kornacker Cloudera San Francisco, CA [email protected] C Mohan IBM Almaden Research Center San Jose, CA [email protected] Karthik Ramasamy Twitter, Inc. San Francisco, CA [email protected] Janet Wiener Facebook, Inc. Menlo Park, CA [email protected]", "title": "" }, { "docid": "e42805b57fa2f8f95d03fea8af2e8560", "text": "Models are used in a variety of fields, including land change science, to better understand the dynamics of systems, to develop hypotheses that can be tested empirically, and to make predictions and/or evaluate scenarios for use in assessment activities. Modeling is an important component of each of the three foci outlined in the science plan of the Land-use and -cover change (LUCC) project (Turner et al. 1995) of the International Geosphere-Biosphere Program (IGBP) and the International Human Dimensions Program (IHDP). In Focus 1, on comparative land-use dynamics, models are used to help improve our understanding of the dynamics of land-use that arise from human decision-making at all levels, households to nations. These models are supported by surveys and interviews of decision makers. Focus 2 emphasizes development of empirical diagnostic models based on aerial and satellite observations of spatial and temporal land-cover dynamics. Finally, Focus 3 focuses specifically on the development of models of land-use and -cover change (LUCC) that can be used for prediction and scenario generation in the context of integrative assessments of global change.", "title": "" }, { "docid": "a525ba232412bcab7885c54ae7932fa3", "text": "Deep recurrent neural networks have been successfully applied to knowledge tracing, namely, deep knowledge tracing (DKT), which aims to automatically trace students’ knowledge states by mining their exercise performance data. Two main issues exist in the current DKT models: First, the complexity of the DKT models increases the tension of psychological interpretation. Second, the input of existing DKT models is only the exercise tags representing via one-hot encoding. The correlation between the hidden knowledge components and students’ responses to the exercises heavily relies on training the DKT models. The existing rich and informative features are excluded in the training, which may yield sub-optimal performance. To utilize the information embedded in these features, researchers have proposed a manual method to pre-process the features, i.e., discretizing them based on the inner characteristics of individual features. However, the proposed method requires many feature engineering efforts and is infeasible when the selected features are huge. To tackle the above issues, we design an automatic system to embed the heterogeneous features implicitly and effectively into the original DKT model. 
More specifically, we apply tree-based classifiers to predict whether the student can correctly answer the exercise given the heterogeneous features, an effective way to capture how the student deviates from others in the exercise. The predicted response and the true response are then encoded into a 4-bit one-hot encoding and concatenated with the original one-hot encoding features on the exercise tags to train a long short-term memory (LSTM) model, which can output the probability that a student will answer the exercise correctly on the corresponding exercise. We conduct a thorough evaluation on two educational datasets and demonstrate the merits and observations of our proposal.", "title": "" }, { "docid": "1fd87c65968630b6388985a41b7890ce", "text": "Cyber Defense Exercises have received much attention in recent years, and are increasingly becoming the cornerstone for ensuring readiness in this new domain. Crossed Swords is an exercise directed at training Red Team members for responsive cyber defense. However, prior iterations have revealed the need for automated and transparent real-time feedback systems to help participants improve their techniques and understand technical challenges. Feedback was too slow and players did not understand the visibility of their actions. We developed a novel and modular open-source framework to address this problem, dubbed Frankenstack. We used this framework during Crossed Swords 2017 execution and evaluated its effectiveness by interviewing participants and conducting an online survey. Due to the novelty of Red Team-centric exercises, very little academic research exists on providing real-time feedback during such exercises. Thus, this paper serves as a first foray into a novel research field.", "title": "" }, { "docid": "783e003838f327c9cabe128b965dfe4d", "text": "To assess original research addressing the effect of the application of compression clothing on sport performance and recovery after exercise, a computer-based literature research was performed in July 2011 using the electronic databases PubMed, MEDLINE, SPORTDiscus, and Web of Science. Studies examining the effect of compression clothing on endurance, strength and power, motor control, and physiological, psychological, and biomechanical parameters during or after exercise were included, and means and measures of variability of the outcome measures were recorded to estimate the effect size (Hedges g) and associated 95% confidence intervals for comparisons of experimental (compression) and control trials (noncompression). The characteristics of the compression clothing, participants, and study design were also extracted. The original research from peer-reviewed journals was examined using the Physiotherapy Evidence Database (PEDro) Scale. Results indicated small effect sizes for the application of compression clothing during exercise for short-duration sprints (10-60 m), vertical-jump height, extending time to exhaustion (such as running at VO2max or during incremental tests), and time-trial performance (3-60 min). When compression clothing was applied for recovery purposes after exercise, small to moderate effect sizes were observed in recovery of maximal strength and power, especially vertical-jump exercise; reductions in muscle swelling and perceived muscle pain; blood lactate removal; and increases in body temperature. 
These results suggest that the application of compression clothing may assist athletic performance and recovery in given situations with consideration of the effects magnitude and practical relevance.", "title": "" }, { "docid": "b466803c9a9be5d38171ece8d207365e", "text": "A large number of saliency models, each based on a different hypothesis, have been proposed over the past 20 years. In practice, while subscribing to one hypothesis or computational principle makes a model that performs well on some types of images, it hinders the general performance of a model on arbitrary images and large-scale data sets. One natural approach to improve overall saliency detection accuracy would then be fusing different types of models. In this paper, inspired by the success of late-fusion strategies in semantic analysis and multi-modal biometrics, we propose to fuse the state-of-the-art saliency models at the score level in a para-boosting learning fashion. First, saliency maps generated by several models are used as confidence scores. Then, these scores are fed into our para-boosting learner (i.e., support vector machine, adaptive boosting, or probability density estimator) to generate the final saliency map. In order to explore the strength of para-boosting learners, traditional transformation-based fusion strategies, such as Sum, Min, and Max, are also explored and compared in this paper. To further reduce the computation cost of fusing too many models, only a few of them are considered in the next step. Experimental results show that score-level fusion outperforms each individual model and can further reduce the performance gap between the current models and the human inter-observer model.", "title": "" }, { "docid": "bc6cbf7da118c01d74914d58a71157ac", "text": "Currently, there are increasing interests in text-to-speech (TTS) synthesis to use sequence-to-sequence models with attention. These models are end-to-end meaning that they learn both co-articulation and duration properties directly from text and speech. Since these models are entirely data-driven, they need large amounts of data to generate synthetic speech with good quality. However, in challenging speaking styles, such as Lombard speech, it is difficult to record sufficiently large speech corpora. Therefore, in this study we propose a transfer learning method to adapt a sequence-to-sequence based TTS system of normal speaking style to Lombard style. Moreover, we experiment with a WaveNet vocoder in synthesis of Lombard speech. We conducted subjective evaluations to assess the performance of the adapted TTS systems. The subjective evaluation results indicated that an adaptation system with the WaveNet vocoder clearly outperformed the conventional deep neural network based TTS system in synthesis of Lombard speech.", "title": "" }, { "docid": "05dc76d17fea57d22de982f9590e386b", "text": "Hierarchical multi-label classification assigns a document to multiple hierarchical classes. In this paper we focus on hierarchical multi-label classification of social text streams. Concept drift, complicated relations among classes, and the limited length of documents in social text streams make this a challenging problem. Our approach includes three core ingredients: short document expansion, time-aware topic tracking, and chunk-based structural learning. We extend each short document in social text streams to a more comprehensive representation via state-of-the-art entity linking and sentence ranking strategies. 
From documents extended in this manner, we infer dynamic probabilistic distributions over topics by dividing topics into dynamic \"global\" topics and \"local\" topics. For the third and final phase we propose a chunk-based structural optimization strategy to classify each document into multiple classes. Extensive experiments conducted on a large real-world dataset show the effectiveness of our proposed method for hierarchical multi-label classification of social text streams.", "title": "" }, { "docid": "e33dd9c497488747f93cfcc1aa6fee36", "text": "The phrase Internet of Things (IoT) heralds a vision of the future Internet where connecting physical things, from banknotes to bicycles, through a network will let them take an active part in the Internet, exchanging information about themselves and their surroundings. This will give immediate access to information about the physical world and the objects in it leading to innovative services and increase in efficiency and productivity. This paper studies the state-of-the-art of IoT and presents the key technological drivers, potential applications, challenges and future research areas in the domain of IoT. IoT definitions from different perspective in academic and industry communities are also discussed and compared. Finally some major issues of future research in IoT are identified and discussed briefly.", "title": "" }, { "docid": "2f7b1f2422526d99e75dce7d38665774", "text": "Conventional Open Information Extraction (Open IE) systems are usually built on hand-crafted patterns from other NLP tools such as syntactic parsing, yet they face problems of error propagation. In this paper, we propose a neural Open IE approach with an encoder-decoder framework. Distinct from existing methods, the neural Open IE approach learns highly confident arguments and relation tuples bootstrapped from a state-of-the-art Open IE system. An empirical study on a large benchmark dataset shows that the neural Open IE system significantly outperforms several baselines, while maintaining comparable computational efficiency.", "title": "" } ]
scidocsrr
9b26ccdaafcfd71b7bad0623378094f7
Pendulum-balanced autonomous unicycle: Conceptual design and dynamics model
[ { "docid": "730d5e6577936ef3b513d0a7f4fa3641", "text": "In this research, a computer simulation for implementing an attitude controller of a wheeled inverted pendulum is carried out. The wheeled inverted pendulum is a kind of inverted pendulum that has two equilibrium points. In order to keep the naturally unstable equilibrium point, the wheels must be controlled persistently. Dynamic equations of the wheeled inverted pendulum are derived considering a tilted road as one of various road conditions. A linear quadratic regulator is adopted for the attitude controller since it is easy to obtain the full state variables from the sensors for that control scheme, and based on the controllable condition of the destination as well. Various computer simulations show that the LQR controller performs well not only on a flat road but also on a tilted road.", "title": "" }, { "docid": "54120754dc82632e6642cbd08401d2dc", "text": "In this paper we study the dynamic modeling of a unicycle robot composed of a wheel, a frame and a disk. The unicycle can reach longitudinal stability through appropriate control of the wheel and lateral stability by adjusting the torque imposed by the disk. The dynamic model of the unicycle robot is derived by the Euler-Lagrange method. The stability and controllability of the system are analyzed according to the mathematical model. Independent simulations using MATLAB and ODE methods are then presented, respectively. Through the simulation, we confirm the validity of the two obtained models of the unicycle robot system and provide two experimental platforms for the design of the balance controller.", "title": "" } ]
[ { "docid": "448d70d9f5f8e5fcb8d04d355a02c8f9", "text": "Structural health monitoring (SHM) using wireless sensor networks (WSNs) has gained research interest due to its ability to reduce the costs associated with the installation and maintenance of SHM systems. SHM systems have been used to monitor critical infrastructure such as bridges, high-rise buildings, and stadiums and has the potential to improve structure lifespan and improve public safety. The high data collection rate of WSNs for SHM pose unique network design challenges. This paper presents a comprehensive survey of SHM using WSNs outlining the algorithms used in damage detection and localization, outlining network design challenges, and future research directions. Solutions to network design problems such as scalability, time synchronization, sensor placement, and data processing are compared and discussed. This survey also provides an overview of testbeds and real-world deployments of WSNs for SH.", "title": "" }, { "docid": "52c7469ba9164280a9de841537e530d7", "text": "Monitoring the “physics” of control systems to detect attacks is a growing area of research. In its basic form a security monitor creates time-series models of sensor readings for an industrial control system and identifies anomalies in these measurements in order to identify potentially false control commands or false sensor readings. In this paper, we review previous work based on a unified taxonomy that allows us to identify limitations, unexplored challenges, and new solutions. In particular, we propose a new adversary model and a way to compare previous work with a new evaluation metric based on the trade-off between false alarms and the negative impact of undetected attacks. We also show the advantages and disadvantages of three experimental scenarios to test the performance of attacks and defenses: real-world network data captured from a large-scale operational facility, a fully-functional testbed that can be used operationally for water treatment, and a simulation of frequency control in the power grid.", "title": "" }, { "docid": "28c5fada2aab828af16ee5d7bffb4093", "text": "Based on the notion of accumulators, we propose a new cryptog raphic scheme called universal accumulators. This scheme enables one to commit to a set of values using a short accumulator and to efficiently com pute a membership witness of any value that has been accumulated. Unlike tradi tional accumulators, this scheme also enables one to efficiently compute a nonmemb ership witness of any value that has not been accumulated. We give a construc tion for universal accumulators and prove its security based on the strong RSA a ssumption. We further present a construction for dynamic universal accumula tors; this construction allows one to dynamically add and delete inputs with constan t computational cost. Our construction directly builds upon Camenisch and L ysyanskaya’s dynamic accumulator scheme. Universal accumulators can be se en as an extension to dynamic accumulators with support of nonmembership witn ess. We also give an efficient zero-knowledge proof protocol for proving that a committed value is not in the accumulator. Our dynamic universal accumulator c onstruction enables efficient membership revocation in an anonymous fashion.", "title": "" }, { "docid": "148d0709c58111c2f703f68d348c09af", "text": "There has been tremendous growth in the use of mobile devices over the last few years. 
This growth has fueled the development of millions of software applications for these mobile devices often called as 'apps'. Current estimates indicate that there are hundreds of thousands of mobile app developers. As a result, in recent years, there has been an increasing amount of software engineering research conducted on mobile apps to help such mobile app developers. In this paper, we discuss current and future research trends within the framework of the various stages in the software development life-cycle: requirements (including non-functional), design and development, testing, and maintenance. While there are several non-functional requirements, we focus on the topics of energy and security in our paper, since mobile apps are not necessarily built by large companies that can afford to get experts for solving these two topics. For the same reason we also discuss the monetizing aspects of a mobile app at the end of the paper. For each topic of interest, we first present the recent advances done in these stages and then we present the challenges present in current work, followed by the future opportunities and the risks present in pursuing such research.", "title": "" }, { "docid": "f0cabaa5dedadd65313af78c42a2df35", "text": "In this paper, a quadrifilar spiral antenna (QSA) with an integrated module for UHF radio frequency identification (RFID) reader is presented. The proposed QSA consists of four spiral antennas with short stubs and a microstrip feed network. Also, the shielded module is integrated on the center of the ground inside the proposed QSA. In order to match the proposed QSA with the integrated module, we adopt a short stub connected from each spiral antenna to ground. Experimental result shows that the QSA of size 80 × 80 × 11.2 mm3 with the integrated module (40 × 40 × 3 mm3) has a peak gain of 3.5 dBic, an axial ratio under 2.5 dB and a 3-dB beamwidth of about 130o.", "title": "" }, { "docid": "0ccfe04a4426e07dcbd0260d9af3a578", "text": "We present an efficient algorithm to perform approximate offsetting operations on geometric models using GPUs. Our approach approximates the boundary of an object with point samples and computes the offset by merging the balls centered at these points. The underlying approach uses Layered Depth Images (LDI) to organize the samples into structured points and performs parallel computations using multiple cores. We use spatial hashing to accelerate intersection queries and balance the workload among various cores. Furthermore, the problem of offsetting with a large distance is decomposed into successive offsetting using smaller distances. We derive bounds on the accuracy of offset computation as a function of the sampling rate of LDI and offset distance. In practice, our GPU-based algorithm can accurately compute offsets of models represented using hundreds of thousands of points in a few seconds on GeForce GTX 580 GPU. We observe more than 100 times speedup over prior serial CPU-based approximate offset computation algorithms.", "title": "" }, { "docid": "e72f8ad61a7927fee8b0a32152b0aa4b", "text": "Geolocation prediction is vital to geospatial applications like localised search and local event detection. Predominately, social media geolocation models are based on full text data, including common words with no geospatial dimension (e.g. today) and noisy strings (tmrw), potentially hampering prediction and leading to slower/more memory-intensive models. 
In this paper, we focus on finding location indicative words (LIWs) via feature selection, and establishing whether the reduced feature set boosts geolocation accuracy. Our results show that an information gain ratiobased approach surpasses other methods at LIW selection, outperforming state-of-the-art geolocation prediction methods by 10.6% in accuracy and reducing the mean and median of prediction error distance by 45km and 209km, respectively, on a public dataset. We further formulate notions of prediction confidence, and demonstrate that performance is even higher in cases where our model is more confident, striking a trade-off between accuracy and coverage. Finally, the identified LIWs reveal regional language differences, which could be potentially useful for lexicographers.", "title": "" }, { "docid": "d3682d2a9e11f80a51c53659c9b6623d", "text": "Despite the considerable clinical impact of congenital human cytomegalovirus (HCMV) infection, the mechanisms of maternal–fetal transmission and the resultant placental and fetal damage are largely unknown. Here, we discuss animal models for the evaluation of CMV vaccines and virus-induced pathology and particularly explore surrogate human models for HCMV transmission and pathogenesis in the maternal–fetal interface. Studies in floating and anchoring placental villi and more recently, ex vivo modeling of HCMV infection in integral human decidual tissues, provide unique insights into patterns of viral tropism, spread, and injury, defining the outcome of congenital infection, and the effect of potential antiviral interventions.", "title": "" }, { "docid": "5fba6770fef320c6e7dee2c848a0a503", "text": "Person re-identification (Re-ID) aims at recognizing the same person from images taken across different cameras. To address this task, one typically requires a large amount labeled data for training an effective Re-ID model, which might not be practical for real-world applications. To alleviate this limitation, we choose to exploit a sufficient amount of pre-existing labeled data from a different (auxiliary) dataset. By jointly considering such an auxiliary dataset and the dataset of interest (but without label information), our proposed adaptation and re-identification network (ARN) performs unsupervised domain adaptation, which leverages information across datasets and derives domain-invariant features for Re-ID purposes. In our experiments, we verify that our network performs favorably against state-of-the-art unsupervised Re-ID approaches, and even outperforms a number of baseline Re-ID methods which require fully supervised data for training.", "title": "" }, { "docid": "9dceccb7b171927a5cba5a16fd9d76c6", "text": "This paper involved developing two (Type I and Type II) equal-split Wilkinson power dividers (WPDs). The Type I divider can use two short uniform-impedance transmission lines, one resistor, one capacitor, and two quarter-wavelength (λ/4) transformers in its circuit. Compared with the conventional equal-split WPD, the proposed Type I divider can relax the two λ/4 transformers and the output ports layout restrictions of the conventional WPD. To eliminate the number of impedance transformers, the proposed Type II divider requires only one impedance transformer attaining the optimal matching design and a compact size. A compact four-way equal-split WPD based on the proposed Type I and Type II dividers was also developed, facilitating a simple layout, and reducing the circuit size. 
Regarding the divider, to obtain favorable selectivity and isolation performance levels, two Butterworth filter transformers were integrated in the proposed Type I divider to perform filter response and power split functions. Finally, a single Butterworth filter transformer was integrated in the proposed Type II divider to demonstrate a compact filtering WPD.", "title": "" }, { "docid": "39e30b2303342235780c7fff68cdc0aa", "text": "The impact factor is only one of three standardized measures created by the Institute of Scientific Information (ISI), which can be used to measure the way a journal receives citations to its articles over time. The build-up of citations tends to follow a curve like that of Figure 1. Citations to articles published in a given year rise sharply to a peak between two and six years after publication. From this peak citations decline exponentially. The citation curve of any journal can be described by the relative size of the curve (in terms of area under the line), the extent to which the peak of the curve is close to the origin and the rate of decline of the curve. These characteristics form the basis of the ISI indicators impact factor, immediacy index and cited half-life . The impact factor is a measure of the relative size of the citation curve in years 2 and 3. It is calculated by dividing the number of current citations a journal receives to articles published in the two previous years by the number of articles published in those same years. So, for example, the 1999 impact factor is the citations in 1999 to articles published in 1997 and 1998 divided by the number of articles published in 1997 and 1998. The number that results can be thought of as the average number of citations the average article receives per annum in the two years after the publication year. The immediacy index gives a measure of the skewness of the curve, that is, the extent to which the peak of the curve lies near the origin of the graph. It is calculated by dividing the citations a journal receives in the current year by the number of articles it publishes in that year, i.e., the 1999 immediacy index is the average number of citations in 1999 to articles published in 1999. The number that results can be thought of as the initial gradient of the citation curve, a measure of how quickly items in that journal get cited upon publication. The cited half-life is a measure of the rate of decline of the citation curve. It is the number of years that the number of current citations takes to decline to 50% of its initial value; the cited half-life is 6 years in the example given in (Figure 1). It is a measure of how long articles in a journal continue to be cited after publication.", "title": "" }, { "docid": "200ee6830f8b8f54ecb1c808c6712337", "text": "DC power distribution systems for building application are gaining interest both in academic and industrial world, due to potential benefits in terms of energy efficiency and capital savings. These benefits are more evident were the end-use loads are natively DC (e.g., computers, solid-state lighting or variable speed drives for electric motors), like in data centers and commercial buildings, but also in houses. When considering the presence of onsite renewable generation, e.g. PV or micro-wind generators, storage systems and electric vehicles, DC-based building microgrids can bring additional benefits, allowing direct coupling of DC loads and DC Distributed energy Resources (DERs). 
A number of demonstrating installations have been built and operated around the world, and an effort is being made both in USA and Europe to study different aspects involved in the implementation of a DC distribution system (e.g. safety, protection, control) and to develop standards for DC building application. This paper discusses on the planning of an experimental DC microgrid with power hardware in the loop features at the University of Naples Federico II, Dept. of Electr. Engineering and Inf. Technologies. The microgrid consists of a 3-wire DC bus, with positive, negative and neutral poles, with a voltage range of +/-0÷400 V. The system integrates a number of DERs, like PV, Wind and Fuel Cell generators, battery and super capacitor based storage systems, EV chargers, standard loads and smart loads. It will include also a power-hardware-in-the-loop platform with the aim to enable the real time emulation of single components or parts of the microgrid, or of systems and sub-systems interacting with the microgrid, thus realizing a virtual extension of the scale of the system. Technical features and specifications of the power amplifier to be used as power interface of the PHIL platform will be discussed in detail.", "title": "" }, { "docid": "92137a6f5fa3c5059bdb08db2fb5c39d", "text": "Motivated by our ongoing efforts in the development of Refraction 2, a puzzle game targeting mathematics education, we realized that the quality of a puzzle is critically sensitive to the presence of alternative solutions with undesirable properties. Where, in our game, we seek a way to automatically synthesize puzzles that can only be solved if the player demonstrates specific concepts, concern for the possibility of undesirable play touches other interactive design domains. To frame this problem (and our solution to it) in a general context, we formalize the problem of generating solvable puzzles that admit no undesirable solutions as an NPcomplete search problem. By making two design-oriented extensions to answer set programming (a technology that has been recently applied to constrained game content generation problems) we offer a general way to declaratively pose and automatically solve the high-complexity problems coming from this formulation. Applying this technique to Refraction, we demonstrate a qualitative leap in the kind of puzzles we can reliably generate. This work opens up new possibilities for quality-focused content generators that guarantee properties over their entire combinatorial space of play.", "title": "" }, { "docid": "584d2858178e4e33855103a71d7fdce4", "text": "This paper presents 5G mm-wave phased-array antenna for 3D-hybrid beamforming. This uses MFC to steer beam for the elevation, and uses butler matrix network for the azimuth. In case of butler matrix network, this, using 180° ring hybrid coupler switch network, is proposed to get additional beam pattern and improved SRR in comparison with conventional structure. Also, it can be selected 15 of the azimuth beam pattern. When using the chip of proposed structure, it is possible to get variable kind of beam-forming over 1000. In addition, it is suitable 5G system or a satellite communication system that requires a beamforming.", "title": "" }, { "docid": "292d7fbc9352dc1d2a84364d66dda308", "text": "The ultrastructure of somatic cells present in gonadal tubules in male oyster Crassostrea gigas was investigated. 
These cells, named Intragonadal Somatic Cells (ISCs) have a great role in the organization of the germinal epithelium in the gonad. Immunological detection of α-tubulin tyrosine illustrates their association in columns from the basis to the lumen of the tubule, stabilized by numerous adhesive junctions. This somatic intragonadal organization delimited some different groups of germ cells along the tubule walls. In early stages of gonad development, numerous phagolysosomes were observed in the cytoplasm of ISCs indicating that these cells have in this species an essential role in the removal of waste sperm in the tubules. Variations of lipids droplets content in the cytoplasm of ISCs were also noticed along the spermatogenesis course. ISCs also present some mitochondria with tubullo-lamellar cristae.", "title": "" }, { "docid": "5c31ed81a9c8d6463ce93890e38ad7b5", "text": "IBM Watson is a cognitive computing system capable of question answering in natural languages. It is believed that IBM Watson can understand large corpora and answer relevant questions more effectively than any other question-answering system currently available. To unleash the full power of Watson, however, we need to train its instance with a large number of wellprepared question-answer pairs. Obviously, manually generating such pairs in a large quantity is prohibitively time consuming and significantly limits the efficiency of Watson’s training. Recently, a large-scale dataset of over 30 million question-answer pairs was reported. Under the assumption that using such an automatically generated dataset could relieve the burden of manual question-answer generation, we tried to use this dataset to train an instance of Watson and checked the training efficiency and accuracy. According to our experiments, using this auto-generated dataset was effective for training Watson, complementing manually crafted question-answer pairs. To the best of the authors’ knowledge, this work is the first attempt to use a largescale dataset of automatically generated questionanswer pairs for training IBM Watson. We anticipate that the insights and lessons obtained from our experiments will be useful for researchers who want to expedite Watson training leveraged by automatically generated question-answer pairs.", "title": "" }, { "docid": "428069c804c035e028e9047d6c1f70f7", "text": "We present a co-designed scheduling framework and platform architecture that together support compositional scheduling of real-time systems. The architecture is built on the Xen virtualization platform, and relies on compositional scheduling theory that uses periodic resource models as component interfaces. We implement resource models as periodic servers and consider enhancements to periodic server design that significantly improve response times of tasks and resource utilization in the system while preserving theoretical schedulability results. We present an extensive evaluation of our implementation using workloads from an avionics case study as well as synthetic ones.", "title": "" }, { "docid": "ec9f793761ebd5199c6a2cc8c8215ac4", "text": "A dual-frequency compact printed antenna for Wi-Fi (IEEE 802.11x at 2.45 and 5.5 GHz) applications is presented. The design is successfully optimized using a finite-difference time-domain (FDTD)-algorithm-based procedure. 
Some prototypes have been fabricated and measured, displaying a very good performance.", "title": "" }, { "docid": "d62ab0d9f243aebea62d782ec4163c69", "text": "Recommender Systems (RS) serve online customers in identifying those items from a variety of choices that best match their needs and preferences. In this context explanations summarize the reasons why a specific item is proposed and strongly increase the users' trust in the system's results. In this paper we propose a framework for generating knowledgeable explanations that exploits domain knowledge to transparently argue why a recommended item matches the user's preferences. Furthermore, results of an online experiment on a real-world platform show that users' perception of the usability of a recommender system is positively influenced by knowledgeable explanations and that consequently users' experience in interacting with the system, their intention to use it repeatedly as well as their commitment to recommend it to others are increased.", "title": "" }, { "docid": "cd9632f63fc5e3acf0ebb1039048f671", "text": "The authors completed an 8-week practice placement at Thrive’s garden project in Battersea Park, London, as part of their occupational therapy degree programme. Thrive is a UK charity using social and therapeutic horticulture (STH) to enable disabled people to make positive changes to their own lives (Thrive 2008). STH is an emerging therapeutic movement, using horticulture-related activities to promote the health and wellbeing of disabled and vulnerable people (Sempik et al 2005, Fieldhouse and Sempik 2007). Within Battersea Park, Thrive has a main garden with available indoor facilities and two satellite gardens. All these gardens are publicly accessible. Thrive Battersea’s service users include people with learning disabilities, mental health challenges and physical disabilities. Thrive’s group facilitators (referred to as therapists) lead regular gardening groups, aiming to enable individual performance within the group and being mindful of health conditions and circumstances. The groups have three types of participant: Thrive’s therapists, service users (known as gardeners) and volunteers. The volunteers help Thrive’s therapists and gardeners to perform STH activities. The gardening groups comprise participants from various age groups and abilities. Thrive Battersea provides ongoing contact between the gardeners, volunteers and therapists. Integrating service users and non-service users is a method of tackling negative attitudes to disability and also promoting social inclusion (Sayce 2000). Thrive Battersea is an example of a ‘role-emerging’ practice placement, which is based outside either local authorities or the National Health Service (NHS) and does not have an on-site occupational therapist (College of Occupational Therapists 2006). The connection of occupational therapy theory to practice is essential on any placement (Alsop 2006). The roleemerging nature of this placement placed additional reflective onus on the authors to identify the links between theory and practice. The authors observed how Thrive’s gardeners connected to the spaces they worked and to the people they worked with. A sense of individual Gardening and belonging: reflections on how social and therapeutic horticulture may facilitate health, wellbeing and inclusion", "title": "" } ]
scidocsrr
d87f406d133744ede250d3eb2a722164
On the Impact of Touch ID on iPhone Passcodes
[ { "docid": "20563a2f75e074fe2a62a5681167bc01", "text": "The introduction of a new generation of attractive touch screen-based devices raises many basic usability questions whose answers may influence future design and market direction. With a set of current mobile devices, we conducted three experiments focusing on one of the most basic interaction actions on touch screens: the operation of soft buttons. Issues investigated in this set of experiments include: a comparison of soft button and hard button performance; the impact of audio and vibro-tactile feedback; the impact of different types of touch sensors on use, behavior, and performance; a quantitative comparison of finger and stylus operation; and an assessment of the impact of soft button sizes below the traditional 22 mm recommendation as well as below finger width.", "title": "" }, { "docid": "b56b90d98b4b1b136e283111e9acf732", "text": "Mobile phones are widely used nowadays and over the last years have developed from simple phones into small computers with an increasing number of features. This results in a wide variety of data stored on the devices, which could pose a high security risk in case of unauthorized access. A comprehensive user survey was conducted to gather information about what data is really stored on the mobile devices, how it is currently protected, and whether biometric authentication methods could improve the current state. This paper presents the results from about 550 users of mobile devices. The analysis revealed a very low security level of the devices. This is partly due to a low security awareness of their owners and partly due to the low acceptance of the offered authentication method based on PIN. Further results, such as the experiences with mobile thefts and the willingness to use biometric authentication methods as an alternative to PIN authentication, are also presented.", "title": "" } ]
[ { "docid": "30bad49dc45651010b49e78951827f6a", "text": "In this paper we present a case study of frequent surges of unusually high rail-to-earth potential values at Taipei Rapid Transit System. The rail potential values observed and the resulting stray current flow associated with the diode-ground DC traction system during operation are contradictory to the moderate values on which the grounding of the DC traction system design was based. Thus we conducted both theoretical study and field measurements to obtain better understanding of the phenomenon, and to develop a more accurate algorithm for computing the rail-to-earth potential of the diode-ground DC traction systems.", "title": "" }, { "docid": "733e3b25a53a7dc537df94a4cb5e473f", "text": "Brain activity associated with attention sustained on the task of safe driving has received considerable attention recently in many neurophysiological studies. Those investigations have also accurately estimated shifts in drivers' levels of arousal, fatigue, and vigilance, as evidenced by variations in their task performance, by evaluating electroencephalographic (EEG) changes. However, monitoring the neurophysiological activities of automobile drivers poses a major measurement challenge when using a laboratory-oriented biosensor technology. This work presents a novel dry EEG sensor based mobile wireless EEG system (referred to herein as Mindo) to monitor in real time a driver's vigilance status in order to link the fluctuation of driving performance with changes in brain activities. The proposed Mindo system incorporates the use of a wireless and wearable EEG device to record EEG signals from hairy regions of the driver conveniently. Additionally, the proposed system can process EEG recordings and translate them into the vigilance level. The study compares the system performance between different regression models. Moreover, the proposed system is implemented using JAVA programming language as a mobile application for online analysis. A case study involving 15 study participants assigned a 90 min sustained-attention driving task in an immersive virtual driving environment demonstrates the reliability of the proposed system. Consistent with previous studies, power spectral analysis results confirm that the EEG activities correlate well with the variations in vigilance. Furthermore, the proposed system demonstrated the feasibility of predicting the driver's vigilance in real time.", "title": "" }, { "docid": "fc6e5b83900d87fd5d6eec6d84d47939", "text": "In this letter, we propose a low complexity linear precoding scheme for downlink multiuser MIMO precoding systems where there is no limit on the number of multiple antennas employed at both the base station and the users. In the proposed algorithm, we can achieve the precoder in two steps. In the first step, we balance the multiuser interference (MUI) and noise by carrying out a novel channel extension approach. In the second step, we further optimize the system performance assuming parallel SU MIMO channels. Simulation results show that the proposed algorithm can achieve elaborate performance while offering lower computational complexity.", "title": "" }, { "docid": "c8b36dd0f892c750f17bc714d177f3d1", "text": "A scheme for controlling parallel connected inverters in a stand-alone AC supply system is presented. A key feature of this scheme is that it uses only those variables which can be measured locally at the inverter, and does not need communication of control signals between the inverters. 
This feature is important in high reliability uninterruptible power supply (UPS) systems, and in large DC power sources connected to an AC distribution system. Real and reactive power sharing between inverters can be achieved by controlling two independent quantities at the inverter: the power angle and the fundamental inverter voltage magnitude.<<ETX>>", "title": "" }, { "docid": "ad6bb165620dafb7dcadaca91c9de6b0", "text": "This study was conducted to analyze the short-term effects of violent electronic games, played with or without a virtual reality (VR) device, on the instigation of aggressive behavior. Physiological arousal (heart rate (HR)), priming of aggressive thoughts, and state hostility were also measured to test their possible mediation on the relationship between playing the violent game (VG) and aggression. The participants--148 undergraduate students--were randomly assigned to four treatment conditions: two groups played a violent computer game (Unreal Tournament), and the other two a non-violent game (Motocross Madness), half with a VR device and the remaining participants on the computer screen. In order to assess the game effects the following instruments were used: a BIOPAC System MP100 to measure HR, an Emotional Stroop task to analyze the priming of aggressive and fear thoughts, a self-report State Hostility Scale to measure hostility, and a competitive reaction-time task to assess aggressive behavior. The main results indicated that the violent computer game had effects on state hostility and aggression. Although no significant mediation effect could be detected, regression analyses showed an indirect effect of state hostility between playing a VG and aggression.", "title": "" }, { "docid": "85e6c9bc6f86560e45276df947db48aa", "text": "Deep reinforcement learning (RL) has achieved many recent successes, yet experiment turn-around time remains a key bottleneck in research and in practice. We investigate how to optimize existing deep RL algorithms for modern computers, specifically for a combination of CPUs and GPUs. We confirm that both policy gradient and Q-value learning algorithms can be adapted to learn using many parallel simulator instances. We further find it possible to train using batch sizes considerably larger than are standard, without negatively affecting sample complexity or final performance. We leverage these facts to build a unified framework for parallelization that dramatically hastens experiments in both classes of algorithm. All neural network computations use GPUs, accelerating both data collection and training. Our results include using an entire DGX-1 to learn successful strategies in Atari games in mere minutes, using both synchronous and asynchronous algorithms.", "title": "" }, { "docid": "964437f82fc71cd9b3de4d2b70301f85", "text": "We describe WordSeer, a tool whose goal is to help scholars and analysts discover patterns and formulate and test hypotheses about the contents of text collections, midway between what humanities scholars call a traditional \"close read'' and the new \"distant read\" or \"culturomics\" approach. To this end, WordSeer allows for highly flexible \"slicing and dicing\" (hence \"sliding\") across a text collection. The tool allows users to view text from different angles by selecting subsets of data, viewing those as visualizations, moving laterally to view other subsets of data, slicing into another view, expanding the viewed data by relaxing constraints, and so on. 
We illustrate the text sliding capabilities of the tool with examples from a case study in the field of humanities and social sciences -- an analysis of how U.S. perceptions of China and Japan changed over the last 30 years.", "title": "" }, { "docid": "cebac1ab25aac9dab853be592cfaa214", "text": "Enterprise Architecture (EA) is an area within Information Management that deals with the alignment of IT and business in an organization. It is very recent and new discipline emerged in the new millennium as a result of the lack of comprehensive architecture that can describe the relationships among elements of the enterprise encompassing People, Processes, Business and Technology. The main objective of this study is to assess the level of implementation of EA in the designated organization. This study focuses on the four architecture domains listed in The Open Group Architecture Framework (TOGAF) namely: (1)Business Architecture; (2)Data Architecture; (3)Application Architecture; and (4)Technology Architecture. The outcome of this study is a set of guideline of an EA which should help the organization in aligning its business and IT strategy. This study should also benefit those who want to understand more on TOGAF and the implementation of EA.", "title": "" }, { "docid": "5d379223a7204a4074638f0d135ec59a", "text": "Photovoltaic (PV) is one of the most promising renewable energy sources. To ensure secure operation and economic integration of PV in smart grids, accurate forecasting of PV power is an important issue. In this paper, we propose the use of long short-term memory recurrent neural network (LSTM-RNN) to accurately forecast the output power of PV systems. The LSTM networks can model the temporal changes in PV output power because of their recurrent architecture and memory units. The proposed method is evaluated using hourly datasets of different sites for a year. We compare the proposed method with three PV forecasting methods. The use of LSTM offers a further reduction in the forecasting error compared with the other methods. The proposed forecasting method can be a helpful tool for planning and controlling smart grids.", "title": "" }, { "docid": "615dbb03f31acfce971a383fa54d7d12", "text": "Objectives\nTo introduce blockchain technologies, including their benefits, pitfalls, and the latest applications, to the biomedical and health care domains.\n\n\nTarget Audience\nBiomedical and health care informatics researchers who would like to learn about blockchain technologies and their applications in the biomedical/health care domains.\n\n\nScope\nThe covered topics include: (1) introduction to the famous Bitcoin crypto-currency and the underlying blockchain technology; (2) features of blockchain; (3) review of alternative blockchain technologies; (4) emerging nonfinancial distributed ledger technologies and applications; (5) benefits of blockchain for biomedical/health care applications when compared to traditional distributed databases; (6) overview of the latest biomedical/health care applications of blockchain technologies; and (7) discussion of the potential challenges and proposed solutions of adopting blockchain technologies in biomedical/health care domains.", "title": "" }, { "docid": "7208a2b257c7ba7122fd2e278dd1bf4a", "text": "Abstract—This paper shows in detail the mathematical model of direct and inverse kinematics for a robot manipulator (welding type) with four degrees of freedom. 
Using the D-H parameters, screw theory, numerical, geometric and interpolation methods, the theoretical and practical values of the position of robot were determined using an optimized algorithm for inverse kinematics obtaining the values of the particular joints in order to determine the virtual paths in a relatively short time.", "title": "" }, { "docid": "709c06739d20fe0a5ba079b21e5ad86d", "text": "Bug triaging refers to the process of assigning a bug to the most appropriate developer to fix. It becomes more and more difficult and complicated as the size of software and the number of developers increase. In this paper, we propose a new framework for bug triaging, which maps the words in the bug reports (i.e., the term space) to their corresponding topics (i.e., the topic space). We propose a specialized topic modeling algorithm named <italic> multi-feature topic model (MTM)</italic> which extends Latent Dirichlet Allocation (LDA) for bug triaging. <italic>MTM </italic> considers product and component information of bug reports to map the term space to the topic space. Finally, we propose an incremental learning method named <italic>TopicMiner</italic> which considers the topic distribution of a new bug report to assign an appropriate fixer based on the affinity of the fixer to the topics. We pair <italic> TopicMiner</italic> with MTM (<italic>TopicMiner<inline-formula><tex-math notation=\"LaTeX\">$^{MTM}$</tex-math> <alternatives><inline-graphic xlink:href=\"xia-ieq1-2576454.gif\"/></alternatives></inline-formula></italic>). We have evaluated our solution on 5 large bug report datasets including GCC, OpenOffice, Mozilla, Netbeans, and Eclipse containing a total of 227,278 bug reports. We show that <italic>TopicMiner<inline-formula><tex-math notation=\"LaTeX\"> $^{MTM}$</tex-math><alternatives><inline-graphic xlink:href=\"xia-ieq2-2576454.gif\"/></alternatives></inline-formula> </italic> can achieve top-1 and top-5 prediction accuracies of 0.4831-0.6868, and 0.7686-0.9084, respectively. We also compare <italic>TopicMiner<inline-formula><tex-math notation=\"LaTeX\">$^{MTM}$</tex-math><alternatives> <inline-graphic xlink:href=\"xia-ieq3-2576454.gif\"/></alternatives></inline-formula></italic> with Bugzie, LDA-KL, SVM-LDA, LDA-Activity, and Yang et al.'s approach. The results show that <italic>TopicMiner<inline-formula> <tex-math notation=\"LaTeX\">$^{MTM}$</tex-math><alternatives><inline-graphic xlink:href=\"xia-ieq4-2576454.gif\"/> </alternatives></inline-formula></italic> on average improves top-1 and top-5 prediction accuracies of Bugzie by 128.48 and 53.22 percent, LDA-KL by 262.91 and 105.97 percent, SVM-LDA by 205.89 and 110.48 percent, LDA-Activity by 377.60 and 176.32 percent, and Yang et al.'s approach by 59.88 and 13.70 percent, respectively.", "title": "" }, { "docid": "b94e461c6ac7883b9cf7123e58d04ae0", "text": "a r t i c l e i n f o We introduce the term \" enclothed cognition \" to describe the systematic influence that clothes have on the wearer's psychological processes. We offer a potentially unifying framework to integrate past findings and capture the diverse impact that clothes can have on the wearer by proposing that enclothed cognition involves the co-occurrence of two independent factors—the symbolic meaning of the clothes and the physical experience of wearing them. As a first test of our enclothed cognition perspective, the current research explored the effects of wearing a lab coat. 
A pretest found that a lab coat is generally associated with attentiveness and carefulness. We therefore predicted that wearing a lab coat would increase performance on attention-related tasks. In Experiment 1, physically wearing a lab coat increased selective attention compared to not wearing a lab coat. In Experiments 2 and 3, wearing a lab coat described as a doctor's coat increased sustained attention compared to wearing a lab coat described as a painter's coat, and compared to simply seeing or even identifying with a lab coat described as a doctor's coat. Thus, the current research suggests a basic principle of enclothed cognition—it depends on both the symbolic meaning and the physical experience of wearing the clothes. \"What a strange power there is in clothing.\" ~Isaac Bashevis Singer Nobel Prize winning author Isaac Bashevis Singer asserts that the clothes we wear hold considerable power and sway. In line with this assertion, bestselling books such as Dress for Success by John T. Molloy and TV shows like TLC's What Not to Wear emphasize the power that clothes can have over others by creating favorable impressions. Indeed, a host of research has documented the effects that people's clothes have on the perceptions and reactions of others. High school students' clothing styles influence perceptions of academic prowess among peers and teachers (Behling & Williams, 1991). Teaching assistants who wear formal clothes are perceived as more intelligent, but as less interesting than teaching assistants who wear less formal clothes (Morris, Gorham, Cohen, & Huffman, 1996). When women dress in a masculine fashion during a recruitment interview, they are more likely to be hired (Forsythe, 1990), and when they dress sexily in prestigious jobs, they are perceived as less competent (Glick, Larsen, Johnson, & Branstiter, 2005). Clients are more likely to return to formally dressed therapists …", "title": "" }, { "docid": "277e738fde3fea142ff93497d0065b10", "text": "To construct a diversified search test collection, a set of possible subtopics (or intents) needs to be determined for each topic, in one way or another, and per-intent relevance assessments need to be obtained. In the TREC Web Track Diversity Task, subtopics are manually developed at NIST, based on results of automatic click log analysis; in the NTCIR INTENT Task, intents are determined by manually clustering 'subtopics strings' returned by participating systems. In this study, we address the following research question: Does the choice of intents for a test collection affect relative performances of diversified search systems? To this end, we use the TREC 2012 Web Track Diversity Task data and the NTCIR-10 INTENT-2 Task data, which share a set of 50 topics but have different intent sets. Our initial results suggest that the choice of intents may affect relative performances, and that this choice may be far more important than how many intents are selected for each topic", "title": "" }, { "docid": "003d004f57d613ff78bf39a35e788bf9", "text": "Breast cancer is one of the most common cancer in women worldwide. It is typically diagnosed via histopathological microscopy imaging, for which image analysis can aid physicians for more effective diagnosis. Given a large variability in tissue appearance, to better capture discriminative traits, images can be acquired at different optical magnifications.
In this paper, we propose an approach which utilizes joint colour-texture features and a classifier ensemble for classifying breast histopathology images. While we demonstrate the effectiveness of the proposed framework, an important objective of this work is to study the image classification across different optical magnification levels. We provide interesting experimental results and related discussions, demonstrating a visible classification invariance with cross-magnification training-testing. Along with magnification-specific model, we also evaluate the magnification independent model, and compare the two to gain some insights.", "title": "" }, { "docid": "0bcc5beb8bada39446c1dd32d0a65dec", "text": "Clustering is a powerful tool in data analysis, but it is often difficult to find a grouping that aligns with a user’s needs. To address this, several methods incorporate constraints obtained from users into clustering algorithms, but unfortunately do not apply to hierarchical clustering. We design an interactive Bayesian algorithm that incorporates user interaction into hierarchical clustering while still utilizing the geometry of the data by sampling a constrained posterior distribution over hierarchies. We also suggest several ways to intelligently query a user. The algorithm, along with the querying schemes, shows promising results on real data.", "title": "" }, { "docid": "3fd9fd52be3153fe84f2ea6319665711", "text": "The theories of supermodular optimization and games provide a framework for the analysis of systems marked by complementarity. We summarize the principal results of these theories and indicate their usefulness by applying them to study the shift to 'modern manufacturing'. We also use them to analyze the characteristic features of the Lincoln Electric Company's strategy and structure.", "title": "" }, { "docid": "a45c93e89cc3df3ebec59eb0c81192ec", "text": "We study a variant of the capacitated vehicle routing problem where the cost over each arc is defined as the product of the arc length and the weight of the vehicle when it traverses that arc. We propose two new mixed integer linear programming formulations for the problem: an arc-load formulation and a set partitioning formulation based on q-routes with additional constraints. A family of cycle elimination constraints are derived for the arc-load formulation. We then compare the linear programming (LP) relaxations of these formulations with the twoindex one-commodity flow formulation proposed in the literature. In particular, we show that the arc-load formulation with the new cycle elimination constraints gives the same LP bound as the set partitioning formulation based on 2-cycle-free q-routes, which is stronger than the LP bound given by the two-index one-commodity flow formulation. We propose a branchand-cut algorithm for the arc-load formulation, and a branch-cut-and-price algorithm for the set partitioning formulation strengthened by additional constraints. Computational results on instances from the literature demonstrate that a significant improvement can be achieved by the branch-cut-and-price algorithm over other methods.", "title": "" }, { "docid": "97968acf486f3f4bcdbccdfcd116dabb", "text": "Disruption of electric power operations can be catastrophic on national security and the economy. Due to the complexity of widely dispersed assets and the interdependences among computer, communication, and power infrastructures, the requirement to meet security and quality compliance on operations is a challenging issue. 
In recent years, the North American Electric Reliability Corporation (NERC) established a cybersecurity standard that requires utilities' compliance on cybersecurity of control systems. This standard identifies several cyber-related vulnerabilities that exist in control systems and recommends several remedial actions (e.g., best practices). In this paper, a comprehensive survey on cybersecurity of critical infrastructures is reported. A supervisory control and data acquisition security framework with the following four major components is proposed: (1) real-time monitoring; (2) anomaly detection; (3) impact analysis; and (4) mitigation strategies. In addition, an attack-tree-based methodology for impact analysis is developed. The attack-tree formulation based on power system control networks is used to evaluate system-, scenario -, and leaf-level vulnerabilities by identifying the system's adversary objectives. The leaf vulnerability is fundamental to the methodology that involves port auditing or password strength evaluation. The measure of vulnerabilities in the power system control framework is determined based on existing cybersecurity conditions, and then, the vulnerability indices are evaluated.", "title": "" }, { "docid": "60da71841669948e0a57ba4673693791", "text": "AIMS\nStiffening of the large arteries is a common feature of aging and is exacerbated by a number of disorders such as hypertension, diabetes, and renal disease. Arterial stiffening is recognized as an important and independent risk factor for cardiovascular events. This article will provide a comprehensive review of the recent advance on assessment of arterial stiffness as a translational medicine biomarker for cardiovascular risk.\n\n\nDISCUSSIONS\nThe key topics related to the mechanisms of arterial stiffness, the methodologies commonly used to measure arterial stiffness, and the potential therapeutic strategies are discussed. A number of factors are associated with arterial stiffness and may even contribute to it, including endothelial dysfunction, altered vascular smooth muscle cell (SMC) function, vascular inflammation, and genetic determinants, which overlap in a large degree with atherosclerosis. Arterial stiffness is represented by biomarkers that can be measured noninvasively in large populations. The most commonly used methodologies include pulse wave velocity (PWV), relating change in vessel diameter (or area) to distending pressure, arterial pulse waveform analysis, and ambulatory arterial stiffness index (AASI). The advantages and limitations of these key methodologies for monitoring arterial stiffness are reviewed in this article. In addition, the potential utility of arterial stiffness as a translational medicine surrogate biomarker for evaluation of new potentially vascular protective drugs is evaluated.\n\n\nCONCLUSIONS\nAssessment of arterial stiffness is a sensitive and useful biomarker of cardiovascular risk because of its underlying pathophysiological mechanisms. PWV is an emerging biomarker useful for reflecting risk stratification of patients and for assessing pharmacodynamic effects and efficacy in clinical studies.", "title": "" } ]
scidocsrr
443a1cc4b0621c7fda63dc8820264f9b
What's in a Like? Attitudes and behaviors around receiving Likes on Facebook
[ { "docid": "821cefef9933d6a02ec4b9098f157062", "text": "Scientists debate whether people grow closer to their friends through social networking sites like Facebook, whether those sites displace more meaningful interaction, or whether they simply reflect existing ties. Combining server log analysis and longitudinal surveys of 3,649 Facebook users reporting on relationships with 26,134 friends, we find that communication on the site is associated with changes in reported relationship closeness, over and above effects attributable to their face-to-face, phone, and email contact. Tie strength increases with both one-on-one communication, such as posts, comments, and messages, and through reading friends' broadcasted content, such as status updates and photos. The effect is greater for composed pieces, such as comments, posts, and messages than for 'one-click' actions such as 'likes.' Facebook has a greater impact on non-family relationships and ties who do not frequently communicate via other channels.", "title": "" }, { "docid": "bb81541f9c87b51858ee76897e2a964e", "text": "Five studies tested hypotheses derived from the sociometer model of self-esteem according to which the self-esteem system monitors others' reactions and alerts the individual to the possibility of social exclusion. Study 1 showed that the effects of events on participants' state self-esteem paralleled their assumptions about whether such events would lead others to accept or reject them. In Study 2, participants' ratings of how included they felt in a real social situation correlated highly with their self-esteem feelings. In Studies 3 and 4, social exclusion caused decreases in self-esteem when respondents were excluded from a group for personal reasons, but not when exclusion was random, but this effect was not mediated by self-presentation. Study 5 showed that trait self-esteem correlated highly with the degree to which respondents generally felt included versus excluded by other people. Overall, results provided converging evidence for the sociometer model.", "title": "" }, { "docid": "d34d8dd7ba59741bb5e28bba3e870ac4", "text": "Among those who have recently lost a job, social networks in general and online ones in particular may be useful to cope with stress and find new employment. This study focuses on the psychological and practical consequences of Facebook use following job loss. By pairing longitudinal surveys of Facebook users with logs of their online behavior, we examine how communication with different kinds of ties predicts improvements in stress, social support, bridging social capital, and whether they find new jobs. Losing a job is associated with increases in stress, while talking with strong ties is generally associated with improvements in stress and social support. Weak ties do not provide these benefits. Bridging social capital comes from both strong and weak ties. Surprisingly, individuals who have lost a job feel greater stress after talking with strong ties. Contrary to the \"strength of weak ties\" hypothesis, communication with strong ties is more predictive of finding employment within three months.", "title": "" } ]
[ { "docid": "66d45a44eaa7596a35f9afc4424362ec", "text": "Agile methodologies are gaining popularity quickly, receiving increasing support from the software development community. Current requirements engineering practices have addressed traceability approaches for well defined phase-driven development models. Echo is a tool-based approach that provides for the implicit recording and management of relationships between conversations about requirements, specifications, and subsequent design decisions. By providing a means to capture requirements in an informal manner and later restructure the information to suit formal requirements specifications, Echo aims to solve the problems of applying traditional requirements engineering practices to agile development methods making available the demonstrated benefits of requirements traceability – a key enabler for large-scale change management.", "title": "" }, { "docid": "5fd10b2277918255133f2e37a55e1103", "text": "Cross-modal retrieval has become a highlighted research topic for retrieval across multimedia data such as image and text. A two-stage learning framework is widely adopted by most existing methods based on deep neural network (DNN): The first learning stage is to generate separate representation for each modality and the second learning stage is to get the cross-modal common representation. However the existing methods have three limitations: 1) In the first learning stage they only model intramodality correlation but ignore intermodality correlation with rich complementary context. 2) In the second learning stage they only adopt shallow networks with single-loss regularization but ignore the intrinsic relevance of intramodality and intermodality correlation. 3) Only original instances are considered while the complementary fine-grained clues provided by their patches are ignored. For addressing the above problems this paper proposes a cross-modal correlation learning (CCL) approach with multigrained fusion by hierarchical network and the contributions are as follows: 1) In the first learning stage CCL exploits multilevel association with joint optimization to preserve the complementary context from intramodality and intermodality correlation simultaneously. 2) In the second learning stage a multitask learning strategy is designed to adaptively balance the intramodality semantic category constraints and intermodality pairwise similarity constraints. 3) CCL adopts multigrained modeling which fuses the coarse-grained instances and fine-grained patches to make cross-modal correlation more precise. Comparing with 13 state-of-the-art methods on 6 widely-used cross-modal datasets the experimental results show our CCL approach achieves the best performance.", "title": "" }, { "docid": "a0e14f5c359de4aa8e7640cf4ff5effa", "text": "In speech translation, we are faced with the problem of how to couple the speech recognition process and the translation process. Starting from the Bayes decision rule for speech translation, we analyze how the interaction between the recognition process and the translation process can be modelled. In the light of this decision rule, we discuss the already existing approaches to speech translation. None of the existing approaches seems to have addressed this direct interaction. 
We suggest two new methods, the local averaging approximation and the monotone alignments.", "title": "" }, { "docid": "4933f3f3007dab687fc852e9c2b1ab0a", "text": "This paper presents a topology for bidirectional solid-state transformers with a minimal device count. The topology, referenced as dynamic-current or Dyna-C, has two current-source inverter stages with a high-frequency galvanic isolation, requiring 12 switches for four-quadrant three-phase ac/ac power conversion. The topology has voltage step-up/down capability, and the input and output can have arbitrary power factors and frequencies. Further, the Dyna-C can be configured as isolated power converters for single- or multiterminal dc, and single- or multiphase ac systems. The modular nature of the Dyna-C lends itself to be connected in series and/or parallel for high-voltage high-power applications. The proposed converter topology can find a broad range of applications such as isolated battery chargers, uninterruptible power supplies, renewable energy integration, smart grid, and power conversion for space-critical applications including aviation, locomotives, and ships. This paper outlines various configurations of the Dyna-C, as well as the relative operation and controls. The converter functionality is validated through simulations and experimental measurements of a 50-kVA prototype.", "title": "" }, { "docid": "9572809d8416cc7b78683e3686e83740", "text": "Lower-limb amputees have identified comfort and mobility as the two most important characteristics of a prosthesis. While these in turn depend on a multitude of factors, they are strongly influenced by the biomechanical performance of the prosthesis and the loading it imparts to the residual limb. Recent years have seen improvements in several prosthetic components that are designed to improve patient comfort and mobility. In this paper, we discuss two of these: VSAP and prosthetic foot-ankle systems; specifically, their mechanical properties and impact on amputee gait are presented.", "title": "" }, { "docid": "e4a63070a6cc367454182dbc8c564188", "text": "In this paper, we summarize hash functions and cellular automata based architectures, and discuss some pros and cons. We introduce the background knowledge of hash functions. The properties and theory of cellular automata are also presented with typical works. We show that cellular automata based schemes are very useful to design hash functions with a low hardware complexity because of its logical operation attributes and parallel properties.", "title": "" }, { "docid": "4f3fe8ea0487690b4a8f61b488e96d53", "text": "Multiple instance learning (MIL) is a variation of supervised learning where a single class label is assigned to a bag of instances. In this paper, we state the MIL problem as learning the Bernoulli distribution of the bag label where the bag label probability is fully parameterized by neural networks. Furthermore, we propose a neural network-based permutation-invariant aggregation operator that corresponds to the attention mechanism. Notably, an application of the proposed attention-based operator provides insight into the contribution of each instance to the bag label. 
We show empirically that our approach achieves comparable performance to the best MIL methods on benchmark MIL datasets and it outperforms other methods on a MNIST-based MIL dataset and two real-life histopathology datasets without sacrificing interpretability.", "title": "" }, { "docid": "6ef52ad99498d944e9479252d22be9c8", "text": "The problem of detecting rectangular structures in images arises in many applications, from building extraction in aerial images to particle detection in cryo-electron microscopy. This paper proposes a new technique for rectangle detection using a windowed Hough transform. Every pixel of the image is scanned, and a sliding window is used to compute the Hough transform of small regions of the image. Peaks of the Hough image (which correspond to line segments) are then extracted, and a rectangle is detected when four extracted peaks satisfy certain geometric conditions. Experimental results indicate that the proposed technique produced promising results for both synthetic and natural images.", "title": "" }, { "docid": "3cb0e324a5eb310c386c6801b0bcf2d9", "text": "BACKGROUND\nThe use of positive psychological interventions may be considered as a complementary strategy in mental health promotion and treatment. The present article constitutes a meta-analytical study of the effectiveness of positive psychology interventions for the general public and for individuals with specific psychosocial problems.\n\n\nMETHODS\nWe conducted a systematic literature search using PubMed, PsychInfo, the Cochrane register, and manual searches. Forty articles, describing 39 studies, totaling 6,139 participants, met the criteria for inclusion. The outcome measures used were subjective well-being, psychological well-being and depression. Positive psychology interventions included self-help interventions, group training and individual therapy.\n\n\nRESULTS\nThe standardized mean difference was 0.34 for subjective well-being, 0.20 for psychological well-being and 0.23 for depression indicating small effects for positive psychology interventions. At follow-up from three to six months, effect sizes are small, but still significant for subjective well-being and psychological well-being, indicating that effects are fairly sustainable. Heterogeneity was rather high, due to the wide diversity of the studies included. Several variables moderated the impact on depression: Interventions were more effective if they were of longer duration, if recruitment was conducted via referral or hospital, if interventions were delivered to people with certain psychosocial problems and on an individual basis, and if the study design was of low quality. Moreover, indications for publication bias were found, and the quality of the studies varied considerably.\n\n\nCONCLUSIONS\nThe results of this meta-analysis show that positive psychology interventions can be effective in the enhancement of subjective well-being and psychological well-being, as well as in helping to reduce depressive symptoms. Additional high-quality peer-reviewed studies in diverse (clinical) populations are needed to strengthen the evidence-base for positive psychology interventions.", "title": "" }, { "docid": "c66933af0fef1bcd1c3df364e4e8bb77", "text": "This study has its roots in a clinical application project, focusing on the development of a teaching-learning model enabling participants to understand compassion. During that project four clinical nursing teachers met for a total of 12 hours of experiential and reflective work. 
This study aimed at exploring participants' understanding of self-compassion as a source to compassionate care. It was carried out as a phenomenological and hermeneutic interpretation of participants' written and oral reflections on the topic. Data were interpreted in the light of Watson's Theory of Human Caring. Five themes were identified: Being there, with self and others; respect for human vulnerability; being nonjudgmental; giving voice to things needed to be said and heard; and being able to accept the gift of compassion from others. A main metaphorical theme, 'the Butterfly effect of Caring', was identified, addressing interdependency and the ethics of the face and hand when caring for Other - the ethical stance where the Other's vulnerable face elicits a call for compassionate actions. The findings reveal that the development of a compassionate self and the ability to be sensitive, nonjudgmental and respectful towards oneself contributes to a compassionate approach towards others. It is concluded that compassionate care is not only something the caregiver does, nor is compassion reduced to a way of being with another person or a feeling. Rather, it is a way of becoming and belonging together with another person where both are mutually engaged and where the caregiver compassionately is able to acknowledge both self and Other's vulnerability and dignity.", "title": "" }, { "docid": "5acf0ddd47893967e21386d99316a2a9", "text": "The Lucy-Richardson algorithm is a very well-known method for non-blind image deconvolution. It can also deal with space-variant problems, but it is seldom used in these cases because of its iterative nature and complexity of realization. In this paper we show that exploiting the sparse structure of the deconvolution matrix, and utilizing a specifically devised architecture, the restoration can be performed almost in real-time on VGA-size images.", "title": "" }, { "docid": "77c2843058856b8d7a582d3b0349b856", "text": "In this paper, an S-band dual circular polarized (CP) spherical conformal phased array antenna (SPAA) is designed. It has the ability to scan a beam within the hemisphere coverage. There are 23 elements uniformly arranged on the hemispherical dome. The design process of the SPAA is presented in detail. Three different kinds of antenna elements are compared. The gain of the SPAA is more than 13 dBi and the gain flatness is less than 1 dB within the scanning range. The measured result is consistent well with the simulated one.", "title": "" }, { "docid": "ba65c99adc34e05cf0cd1b5618a21826", "text": "We investigate a family of bugs in blockchain-based smart contracts, which we call event-ordering (or EO) bugs. These bugs are intimately related to the dynamic ordering of contract events, i.e., calls of its functions on the blockchain, and enable potential exploits of millions of USD worth of Ether. Known examples of such bugs and prior techniques to detect them have been restricted to a small number of event orderings, typicall 1 or 2. Our work provides a new formulation of this general class of EO bugs as finding concurrency properties arising in long permutations of such events. The technical challenge in detecting our formulation of EO bugs is the inherent combinatorial blowup in path and state space analysis, even for simple contracts. 
We propose the first use of partial-order reduction techniques, using happen-before relations extracted automatically for contracts, along with several other optimizations built on a dynamic symbolic execution technique. We build an automatic tool called ETHRACER that requires no hints from users and runs directly on Ethereum bytecode. It flags 7-11% of over ten thousand contracts analyzed in roughly 18.5 minutes per contract, providing compact event traces that human analysts can run as witnesses. These witnesses are so compact that confirmations require only a few minutes of human effort. Half of the flagged contracts have subtle EO bugs, including in ERC-20 contracts that carry hundreds of millions of dollars worth of Ether. Thus, ETHRACER is effective at detecting a subtle yet dangerous class of bugs which existing tools miss.", "title": "" }, { "docid": "4cc71db87682a96ddee09e49a861142f", "text": "BACKGROUND\nReadiness is an integral and preliminary step in the successful implementation of telehealth services into existing health systems within rural communities.\n\n\nMETHODS AND MATERIALS\nThis paper details and critiques published international peer-reviewed studies that have focused on assessing telehealth readiness for rural and remote health. Background specific to readiness and change theories is provided, followed by a critique of identified telehealth readiness models, including a commentary on their readiness assessment tools.\n\n\nRESULTS\nFour current readiness models resulted from the search process. The four models varied across settings, such as rural outpatient practices, hospice programs, rural communities, as well as government agencies, national associations, and organizations. All models provided frameworks for readiness tools. Two specifically provided a mechanism by which communities could be categorized by their level of telehealth readiness.\n\n\nDISCUSSION\nCommon themes across models included: an appreciation of practice context, strong leadership, and a perceived need to improve practice. Broad dissemination of these telehealth readiness models and tools is necessary to promote awareness and assessment of readiness. This will significantly aid organizations to facilitate the implementation of telehealth.", "title": "" }, { "docid": "402f790d1b2bf76d6129cd08d995fade", "text": "After briefly summarizing the mechanical design of the two joint prototypes for the new DLR variable compliance arm, the paper exemplifies the dynamic modelling of one of the prototypes and proposes a generic variable stiffness joint model for nonlinear control design. Based on this model, the design of a simple, gain scheduled state feedback controller for active vibration damping of the mechanically very weakly damped joint is presented. Moreover, the computation of the motor reference values out of the desired stiffness and position is addressed. Finally, simulation and experimental results validate the proposed methods.", "title": "" }, { "docid": "480c8d16f3e58742f0164f8c10a206dd", "text": "Dyna is an architecture for reinforcement learning agents that interleaves planning, acting, and learning in an online setting. This architecture aims to make fuller use of limited experience to achieve better performance with fewer environmental interactions. Dyna has been well studied in problems with a tabular representation of states, and has also been extended to some settings with larger state spaces that require function approximation. 
However, little work has studied Dyna in environments with high-dimensional state spaces like images. In Dyna, the environment model is typically used to generate one-step transitions from selected start states. We applied one-step Dyna to several games from the Arcade Learning Environment and found that the model-based updates offered surprisingly little benefit, even with a perfect model. However, when the model was used to generate longer trajectories of simulated experience, performance improved dramatically. This observation also holds when using a model that is learned from experience; even though the learned model is flawed, it can still be used to accelerate learning.", "title": "" }, { "docid": "e648b97ead434fa9daadaec7fa850fac", "text": "Internet of Things (IoT) is now in its initial stage but very soon, it is going to influence almost every day-to-day items we use. The more it will be included in our lifestyle, more will be the threat of it being misused. There is an urgent need to make IoT devices secure from getting cracked. Very soon IoT is going to expand the area for the cyber-attacks on homes and businesses by transforming objects that were used to be offline into online systems. Existing security technologies are just not enough to deal with this problem. Blockchain has emerged as the possible solution for creating more secure IoT systems in the time to come. In this paper, first an overview of the blockchain technology and its implementation has been explained; then we have discussed the infrastructure of IoT which is based on Blockchain network and at last a model has been provided for the security of internet of things using blockchain.", "title": "" }, { "docid": "621840a3c2637841b9da1e74c99e98f1", "text": "Topic modeling is a type of statistical model for discovering the latent “topics” that occur in a collection of documents through machine learning. Currently, latent Dirichlet allocation (LDA) is a popular and common modeling approach. In this paper, we investigate methods, including LDA and its extensions, for separating a set of scientific publications into several clusters. To evaluate the results, we generate a collection of documents that contain academic papers from several different fields and see whether papers in the same field will be clustered together. We explore potential scientometric applications of such text analysis capabilities.", "title": "" }, { "docid": "dd32d5b0b53c855081c23595052f10d8", "text": "Yaumatei Dermatology Clinic, 12/F Yau Ma Tei Specialist Clinic Extension, 143 Battery Street, Yaumatei, Kowloon, Hong Kong A 31-year-old Chinese male suffered from recalcitrant hidradenitis suppurativa for seven years causing disfiguring scars over the face and intertriginous areas, particularly the axillae and groins. Multiple medical treatments and surgical operation were tried but in vain. Infliximab infusion led to significant improvement. To our best knowledge, this is the first Chinese patient with hidradenitis suppurativa treated with infliximab in Hong Kong.", "title": "" }, { "docid": "d597b9229a3f9a9c680d25180a4b6308", "text": "Mental health problems are highly prevalent and increasing in frequency and severity among the college student population. The upsurge in mobile and wearable wireless technologies capable of intense, longitudinal tracking of individuals, provide enormously valuable opportunities in mental health research to examine temporal patterns and dynamic interactions of key variables. 
In this paper, we present an integrative framework for social anxiety and depression (SAD) monitoring, two of the most common disorders in the college student population. We have developed a smartphone application and the supporting infrastructure to collect both passive sensor data and active event-driven data. This supports intense, longitudinal, dynamic tracking of anxious and depressed college students to evaluate how their emotions and social behaviors change in the college campus environment. The data will provide critical information about how student mental health problems are maintained and, ultimately, how student patterns on campus shift following treatment.", "title": "" } ]
scidocsrr
c1ccf0ab2cc8b0ab4b6b4e749da4841e
Learning and Evaluating Musical Features with Deep Autoencoders
[ { "docid": "cff671af6a7a170fac2daf6acd9d1e3e", "text": "We show how to learn a deep graphical model of the word-count vectors obtained from a large set of documents. The values of the latent variables in the deepest layer are easy to infer and give a much better representation of each document than Latent Semantic Analysis. When the deepest layer is forced to use a small number of binary variables (e.g. 32), the graphical model performs “semantic hashing”: Documents are mapped to memory addresses in such a way that semantically similar documents are located at nearby addresses. Documents similar to a query document can then be found by simply accessing all the addresses that differ by only a few bits from the address of the query document. This way of extending the efficiency of hash-coding to approximate matching is much faster than locality sensitive hashing, which is the fastest current method. By using semantic hashing to filter the documents given to TF-IDF, we achieve higher accuracy than applying TF-IDF to the entire document set.", "title": "" } ]
[ { "docid": "1d14030535d03f5ce7a593920e4af352", "text": "We show how machine learning and inference can be harnessed to leverage the complementary strengths of humans and computational agents to solve crowdsourcing tasks. We construct a set of Bayesian predictive models from data and describe how the models operate within an overall crowdsourcing architecture that combines the efforts of people and machine vision on the task of classifying celestial bodies defined within a citizens’ science project named Galaxy Zoo. We show how learned probabilistic models can be used to fuse human and machine contributions and to predict the behaviors of workers. We employ multiple inferences in concert to guide decisions on hiring and routing workers to tasks so as to maximize the efficiency of large-scale crowdsourcing processes based on expected utility.", "title": "" }, { "docid": "be66c05a023ea123a6f32614d2a8af93", "text": "During the past three decades, the issue of processing spectral phase has been largely neglected in speech applications. There is no doubt that the interest of speech processing community towards the use of phase information in a big spectrum of speech technologies, from automatic speech and speaker recognition to speech synthesis, from speech enhancement and source separation to speech coding, is constantly increasing. In this paper, we elaborate on why phase was believed to be unimportant in each application. We provide an overview of advancements in phase-aware signal processing with applications to speech, showing that considering phase-aware speech processing can be beneficial in many cases, while it can complement the possible solutions that magnitude-only methods suggest. Our goal is to show that phase-aware signal processing is an important emerging field with high potential in the current speech communication applications. The paper provides an extended and up-to-date bibliography on the topic of phase aware speech processing aiming at providing the necessary background to the interested readers for following the recent advancements in the area. Our review expands the step initiated by our organized special session and exemplifies the usefulness of spectral phase information in a wide range of speech processing applications. Finally, the overview will provide some future work directions.", "title": "" }, { "docid": "db53ffe2196586d570ad636decbf67de", "text": "We present PredRNN++, a recurrent network for spatiotemporal predictive learning. In pursuit of a great modeling capability for short-term video dynamics, we make our network deeper in time by leveraging a new recurrent structure named Causal LSTM with cascaded dual memories. To alleviate the gradient propagation difficulties in deep predictive models, we propose a Gradient Highway Unit, which provides alternative quick routes for the gradient flows from outputs back to long-range previous inputs. The gradient highway units work seamlessly with the causal LSTMs, enabling our model to capture the short-term and the long-term video dependencies adaptively. Our model achieves state-of-the-art prediction results on both synthetic and real video datasets, showing its power in modeling entangled motions.", "title": "" }, { "docid": "e882a33ff28c37b379c22d73e16147b3", "text": "Combining ant colony optimization (ACO) and multiobjective evolutionary algorithm based on decomposition (MOEA/D), this paper proposes a multiobjective evolutionary algorithm, MOEA/D-ACO. 
Following other MOEA/D-like algorithms, MOEA/D-ACO decomposes a multiobjective optimization problem into a number of single objective optimization problems. Each ant (i.e. agent) is responsible for solving one subproblem. All the ants are divided into a few groups and each ant has several neighboring ants. An ant group maintains a pheromone matrix and an individual ant has a heuristic information matrix. During the search, each ant also records the best solution found so far for its subproblem. To construct a new solution, an ant combines information from its group’s pheromone matrix, its own heuristic information matrix and its current solution. An ant checks the new solutions constructed by itself and its neighbors, and updates its current solution if it has found a better one in terms of its own objective. Extensive experiments have been conducted in this paper to study and compare MOEA/D-ACO with other algorithms on two set of test problems. On the multiobjective 0-1 knapsack problem, MOEA/D-ACO outperforms MOEA/D-GA on all the nine test instances. We also demonstrate that the heuristic information matrices in MOEA/D-ACO are crucial to the good performance of MOEA/D-ACO for the knapsack problem. On the biobjective traveling salesman problem, MOEA/D-ACO performs much better than BicriterionAnt on all the 12 test instances. We also evaluate the effects of grouping, neighborhood and the location information of current solutions on the performance of MOEA/D-ACO. The work in this paper shows that reactive search optimization scheme, i.e., the “learning while optimizing” principle, is effective in improving multiobjective optimization algorithms.", "title": "" }, { "docid": "a3b919ee9780c92668c0963f23983f82", "text": "A terrified woman called police because her ex-boyfriend was breaking into her home. Upon arrival, police heard screams coming from the basement. They stopped halfway down the stairs and found the ex-boyfriend pointing a rifle at the floor. Officers observed a strange look on the subject’s face as he slowly raised the rifle in their direction. Both officers fired their weapons, killing the suspect. The rifle was not loaded.", "title": "" }, { "docid": "4120db07953e7577ba6be77eef6ebca9", "text": "Previous works indicated that pairwise methods are stateofthe-art approaches to fit users’ taste from implicit feedback. In this paper, we argue that constructing item pairwise samples for a fixed user is insufficient, because taste differences between two users with respect to a same item can not be explicitly distinguished. Moreover, the rank position of positive items are not used as a metric to measure the learning magnitude in the next step. Therefore, we firstly define a confidence function to dynamically control the learning step-size for updating model parameters. Sequently, we introduce a generic way to construct mutual pairwise loss from both users’ and items’ perspective. Instead of useroriented pairwise sampling strategy alone, we incorporate item pairwise samples into a popular pairwise learning framework, bayesian personalized ranking (BPR), and propose mutual bayesian personalized ranking (MBPR) method. In addition, a rank-aware adaptively sampling strategy is proposed to come up with the final approach, called RankMBPR. 
Empirical studies are carried out on four real-world datasets, and experimental results in several metrics demonstrate the efficiency and effectiveness of our proposed method, comparing with other baseline algorithms.", "title": "" }, { "docid": "5ac0e1b30f3aeeb4e1f7ddae656f7dd5", "text": "The present paper describes an implementation of fast running motions involving a humanoid robot. Two important technologies are described: a motion generation and a balance control. The motion generation is a unified way to design both walking and running and can generate the trajectory with the vertical conditions of the Center Of Mass (COM) in short calculation time. The balance control enables a robot to maintain balance by changing the positions of the contact foot dynamically when the robot is disturbed. This control consists of 1) compliance control without force sensors, in which the joints are made compliant by feed-forward torques and adjustment of gains of position control, and 2) feedback control, which uses the measured orientation of the robot's torso used in the motion generation as an initial condition to decide the foot positions. Finally, a human-sized humanoid robot that can run forward at 7.0 [km/h] is presented.", "title": "" }, { "docid": "32c17e821ba1311be2b18d0303b2d1a3", "text": "We consider the problem of improving the efficiency of random ized Fourier feature maps to accelerate training and testing speed of kernel methods on large dat asets. These approximate feature maps arise as Monte Carlo approximations to integral representations of shift-invariant kernel functions (e.g., Gaussian kernel). In this paper, we propose to use Quasi-Monte Carlo(QMC) approximations instead, where the relevant integrands are evaluated on a low-discrepancy sequence of points as opposed to random point sets as in the Monte Carlo approach. We derive a new disc repancy measure called box discrepancy based on theoretical characterizations of the integration error with respect to a given sequence. We then propose to learn QMC sequences adapted to our setting based o n explicit box discrepancy minimization. Our theoretical analyses are complemented with empirical r esults that demonstrate the effectiveness of classical and adaptive QMC techniques for this problem.", "title": "" }, { "docid": "c7d71b7bb07f62f4b47d87c9c4bae9b3", "text": "Smart contracts are full-fledged programs that run on blockchains (e.g., Ethereum, one of the most popular blockchains). In Ethereum, gas (in Ether, a cryptographic currency like Bitcoin) is the execution fee compensating the computing resources of miners for running smart contracts. However, we find that under-optimized smart contracts cost more gas than necessary, and therefore the creators or users will be overcharged. In this work, we conduct the first investigation on Solidity, the recommended compiler, and reveal that it fails to optimize gas-costly programming patterns. In particular, we identify 7 gas-costly patterns and group them to 2 categories. Then, we propose and develop GASPER, a new tool for automatically locating gas-costly patterns by analyzing smart contracts' bytecodes. The preliminary results on discovering 3 representative patterns from 4,240 real smart contracts show that 93.5%, 90.1% and 80% contracts suffer from these 3 patterns, respectively.", "title": "" }, { "docid": "6acc820f32c74ff30730faca2eff9f8f", "text": "The conventional Vivaldi antenna is known for its ultrawideband characteristic, but low directivity. 
In order to improve the directivity, a double-slot structure is proposed to design a new Vivaldi antenna. The two slots are excited in uniform amplitude and phase by using a T-junction power divider. The double-slot structure can generate plane-like waves in the E-plane of the antenna. As a result, directivity of the double-slot Vivaldi antenna is significantly improved by comparison to a conventional Vivaldi antenna of the same size. The measured results show that impedance bandwidth of the double-slot Vivaldi antenna is from 2.5 to 15 GHz. Gain and directivity of the proposed antenna is considerably improved at the frequencies above 6 GHz. Furthermore, the main beam splitting at high frequencies of the conventional Vivaldi antenna on thick dielectric substrates is eliminated by the double-slot structure.", "title": "" }, { "docid": "50c78e339e472f1b1814687f7d0ec8c6", "text": "Frontonasal dysplasia (FND) refers to a class of midline facial malformations caused by abnormal development of the facial primordia. The term encompasses a spectrum of severities but characteristic features include combinations of ocular hypertelorism, malformations of the nose and forehead and clefting of the facial midline. Several recent studies have drawn attention to the importance of Alx homeobox transcription factors during craniofacial development. Most notably, loss of Alx1 has devastating consequences resulting in severe orofacial clefting and extreme microphthalmia. In contrast, mutations of Alx3 or Alx4 cause milder forms of FND. Whilst Alx1, Alx3 and Alx4 are all known to be expressed in the facial mesenchyme of vertebrate embryos, little is known about the function of these proteins during development. Here, we report the establishment of a zebrafish model of Alx-related FND. Morpholino knock-down of zebrafish alx1 expression causes a profound craniofacial phenotype including loss of the facial cartilages and defective ocular development. We demonstrate for the first time that Alx1 plays a crucial role in regulating the migration of cranial neural crest (CNC) cells into the frontonasal primordia. Abnormal neural crest migration is coincident with aberrant expression of foxd3 and sox10, two genes previously suggested to play key roles during neural crest development, including migration, differentiation and the maintenance of progenitor cells. This novel function is specific to Alx1, and likely explains the marked clinical severity of Alx1 mutation within the spectrum of Alx-related FND.", "title": "" }, { "docid": "f90eebfcf87285efe711968c85f04d1b", "text": "Fouling is generally defined as the accumulation and formation of unwanted materials on the surfaces of processing equipment, which can seriously deteriorate the capacity of the surface to transfer heat under the temperature difference conditions for which it was designed. Fouling of heat transfer surfaces is one of the most important problems in heat transfer equipment. Fouling is an extremely complex phenomenon. Fundamentally, fouling may be characterized as a combined, unsteady state, momentum, mass and heat transfer problem with chemical, solubility, corrosion and biological processes may also taking place. It has been described as the major unresolved problem in heat transfer1. According to many [1-3], fouling can occur on any fluid-solid surface and have other adverse effects besides reduction of heat transfer. 
It has been recognized as a nearly universal problem in design and operation, and it affects the operation of equipment in two ways: Firstly, the fouling layer has a low thermal conductivity. This increases the resistance to heat transfer and reduces the effectiveness of heat exchangers. Secondly, as deposition occurs, the cross sectional area is reduced, which causes an increase in pressure drop across the apparatus. In industry, fouling of heat transfer surfaces has always been a recognized phenomenon, although poorly understood. Fouling of heat transfer surfaces occurs in most chemical and process industries, including oil refineries, pulp and paper manufacturing, polymer and fiber production, desalination, food processing, dairy industries, power generation and energy recovery. By many, fouling is considered the single most unknown factor in the design of heat exchangers. This situation exists despite the wealth of operating experience accumulated over the years and accumulation of the fouling literature. This lake of understanding almost reflects the complex nature of the phenomena by which fouling occurs in industrial equipment. The wide range of the process streams and operating conditions present in industry tends to make most fouling situations unique, thus rendering a general analysis of the problem difficult. In general, the ability to transfer heat efficiently remains a central feature of many industrial processes. As a consequence much attention has been paid to improving the understanding of heat transfer mechanisms and the development of suitable correlations and techniques that may be applied to the design of heat exchangers. On the other hand relatively little consideration has been given to the problem of surface fouling in heat exchangers. The", "title": "" }, { "docid": "70d8345da0193a048d3dff702834c075", "text": "Recurrent neural networks with various types of hidden units have been used to solve a diverse range of problems involving sequence data. Two of the most recent proposals, gated recurrent units (GRU) and minimal gated units (MGU), have shown comparable promising results on example public datasets. In this paper, we introduce three model variants of the minimal gated unit which further simplify that design by reducing the number of parameters in the forget-gate dynamic equation. These three model variants, referred to simply as MGU1, MGU2, and MGU3, were tested on sequences generated from the MNIST dataset and the real sequences from the Reuters Newswire Topics (RNT) dataset. Here, we report on the RNT results. The new models have shown similar accuracy to the MGU model while using fewer parameters and thus lower training expense. One model variant, namely MGU2, performed better than MGU on the datasets considered, and thus may be used as an alternate to MGU or GRU in recurrent neural networks.", "title": "" }, { "docid": "2126c47fe320af2d908ec01a426419ce", "text": "Stretching has long been used in many physical activities to increase range of motion (ROM) around a joint. Stretching also has other acute effects on the neuromuscular system. For instance, significant reductions in maximal voluntary strength, muscle power or evoked contractile properties have been recorded immediately after a single bout of static stretching, raising interest in other stretching modalities. Thus, the effects of dynamic stretching on subsequent muscular performance have been questioned. 
This review aimed to investigate performance and physiological alterations following dynamic stretching. There is a substantial amount of evidence pointing out the positive effects on ROM and subsequent performance (force, power, sprint and jump). The larger ROM would be mainly attributable to reduced stiffness of the muscle-tendon unit, while the improved muscular performance to temperature and potentiation-related mechanisms caused by the voluntary contraction associated with dynamic stretching. Therefore, if the goal of a warm-up is to increase joint ROM and to enhance muscle force and/or power, dynamic stretching seems to be a suitable alternative to static stretching. Nevertheless, numerous studies reporting no alteration or even performance impairment have highlighted possible mitigating factors (such as stretch duration, amplitude or velocity). Accordingly, ballistic stretching, a form of dynamic stretching with greater velocities, would be less beneficial than controlled dynamic stretching. Notwithstanding, the literature shows that inconsistent description of stretch procedures has been an important deterrent to reaching a clear consensus. In this review, we highlight the need for future studies reporting homogeneous, clearly described stretching protocols, and propose a clarified stretching terminology and methodology.", "title": "" }, { "docid": "3fcbff9e9dea1300edc5de7a764d7ae9", "text": "Optimization Involving Expensive Black-Box Objective and Constraint Functions Rommel G. Regis Mathematics Department, Saint Joseph’s University, Philadelphia, PA 19131, USA, [email protected] August 23, 2010 Abstract. This paper presents a new algorithm for derivative-free optimization of expensive black-box objective functions subject to expensive black-box inequality constraints. The proposed algorithm, called ConstrLMSRBF, uses radial basis function (RBF) surrogate models and is an extension of the Local Metric Stochastic RBF (LMSRBF) algorithm by Regis and Shoemaker (2007a) that can handle black-box inequality constraints. Previous algorithms for the optimization of expensive functions using surrogate models have mostly dealt with bound constrained problems where only the objective function is expensive, and so, the surrogate models are used to approximate the objective function only. In contrast, ConstrLMSRBF builds RBF surrogate models for the objective function and also for all the constraint functions in each iteration, and uses these RBF models to guide the selection of the next point where the objective and constraint functions will be evaluated. Computational results indicate that ConstrLMSRBF is better than alternative methods on 9 out of 14 test problems and on the MOPTA08 problem from the automotive industry (Jones 2008). The MOPTA08 problem has 124 decision variables and 68 inequality constraints and is considered a large-scale problem in the area of expensive black-box optimization. The alternative methods include a Mesh Adaptive Direct Search (MADS) algorithm (Abramson and Audet 2006, Audet and Dennis 2006) that uses a kriging-based surrogate model, the Multistart LMSRBF algorithm by Regis and Shoemaker (2007a) modified to handle black-box constraints via a penalty approach, a genetic algorithm, a pattern search algorithm, a sequential quadratic programming algorithm, and COBYLA (Powell 1994), which is a derivative-free trust-region algorithm. 
Based on the results of this study, the results in Jones (2008) and other approaches presented at the ISMP 2009 conference, ConstrLMSRBF appears to be among the best, if not the best, known algorithm for the MOPTA08 problem in the sense of providing the most improvement from an initial feasible solution within a very limited number of objective and constraint function evaluations.", "title": "" }, { "docid": "b999fe9bd7147ef9c555131d106ea43e", "text": "This paper presents the DeepCD framework which learns a pair of complementary descriptors jointly for image patch representation by employing deep learning techniques. It can be achieved by taking any descriptor learning architecture for learning a leading descriptor and augmenting the architecture with an additional network stream for learning a complementary descriptor. To enforce the complementary property, a new network layer, called data-dependent modulation (DDM) layer, is introduced for adaptively learning the augmented network stream with the emphasis on the training data that are not well handled by the leading stream. By optimizing the proposed joint loss function with late fusion, the obtained descriptors are complementary to each other and their fusion improves performance. Experiments on several problems and datasets show that the proposed method1 is simple yet effective, outperforming state-of-the-art methods.", "title": "" }, { "docid": "8fdfebc612ff46103281fcdd7c9d28c8", "text": "We develop a shortest augmenting path algorithm for the linear assignment problem. It contains new initialization routines and a special implementation of Dijkstra's shortest path method. For both dense and sparse problems computational experiments show this algorithm to be uniformly faster than the best algorithms from the literature. A Pascal implementation is presented. Wir entwickeln einen Algorithmus mit kürzesten alternierenden Wegen für das lineare Zuordnungsproblem. Er enthält neue Routinen für die Anfangswerte und eine spezielle Implementierung der Kürzesten-Wege-Methode von Dijkstra. Sowohl für dichte als auch für dünne Probleme zeigen Testläufe, daß unser Algorithmus gleichmäßig schneller als die besten Algorithmen aus der Literatur ist. Eine Implementierung in Pascal wird angegeben.", "title": "" }, { "docid": "acd4de9f6324cc9d3fd9560094c71542", "text": "Similarity search is one of the fundamental problems for large scale multimedia applications. Hashing techniques, as one popular strategy, have been intensively investigated owing to the speed and memory efficiency. Recent research has shown that leveraging supervised information can lead to high quality hashing. However, most existing supervised methods learn hashing function by treating each training example equally while ignoring the different semantic degree related to the label, i.e. semantic confidence, of different examples. In this paper, we propose a novel semi-supervised hashing framework by leveraging semantic confidence. Specifically, a confidence factor is first assigned to each example by neighbor voting and click count in the scenarios with label and click-through data, respectively. Then, the factor is incorporated into the pairwise and triplet relationship learning for hashing. 
Furthermore, the two learnt relationships are seamlessly encoded into semi-supervised hashing methods with pairwise and listwise supervision respectively, which are formulated as minimizing empirical error on the labeled data while maximizing the variance of hash bits or minimizing quantization loss over both the labeled and unlabeled data. In addition, the kernelized variant of semi-supervised hashing is also presented. We have conducted experiments on both CIFAR-10 (with label) and Clickture (with click data) image benchmarks (up to one million image examples), demonstrating that our approaches outperform the state-of-the-art hashing techniques.", "title": "" }, { "docid": "674da28b87322e7dfc7aad135d44ae55", "text": "As the technology migrates into the deep submicron manufacturing(DSM) era, the critical dimension of the circuits is getting smaller than the lithographic wavelength. The unavoidable light diffraction phenomena in the sub-wavelength technologies have become one of the major factors in the yield rate. Optical proximity correction (OPC) is one of the methods adopted to compensate for the light diffraction effect as a post layout process.However, the process is time-consuming and the results are still limited by the original layout quality. In this paper, we propose a maze routing method that considers the optical effect in the routing algorithm. By utilizing the symmetrical property of the optical system, the light diffraction is efficiently calculated and stored in tables. The costs that guide the router to minimize the optical interferences are obtained from these look-up tables. The problem is first formulated as a constrained maze routing problem, then it is shown to be a multiple constrained shortest path problem. Based on the Lagrangian relaxation method, an effective algorithm is designed to solve the problem.", "title": "" }, { "docid": "d80d52806cbbdd6148e3db094eabeed7", "text": "We decided to test a surprisingly simple hypothesis; namely, that the relationship between an image of a scene and the chromaticity of scene illumination could be learned by a neural network. The thought was that if this relationship could be extracted by a neural network, then the trained network would be able to determine a scene's illumination from its image, which would then allow correction of the image colors to those relative to a standard illuminant, thereby providing color constancy. Using a database of surface reflectances and illuminants, along with the spectral sensitivity functions of our camera, we generated thousands of images of randomly selected illuminants lighting `scenes' of 1 to 60 randomly selected reflectances. During the learning phase the network is provided the image data along with the chromaticity of its illuminant. After training, the network outputs (very quickly) the chromaticity of the illumination given only the image data. We obtained surprisingly good estimates of he ambient illumination lighting from the network even when applied to scenes in our lab that were completely unrelated to the training data.", "title": "" } ]
scidocsrr
db3fa632649ce3300d1397b4b7f5efdc
An Analysis on Time- and Session-aware Diversification in Recommender Systems
[ { "docid": "13b887760a87bc1db53b16eb4fba2a01", "text": "Customer preferences for products are drifting over time. Product perception and popularity are constantly changing as new selection emerges. Similarly, customer inclinations are evolving, leading them to ever redefine their taste. Thus, modeling temporal dynamics should be a key when designing recommender systems or general customer preference models. However, this raises unique challenges. Within the eco-system intersecting multiple products and customers, many different characteristics are shifting simultaneously, while many of them influence each other and often those shifts are delicate and associated with a few data instances. This distinguishes the problem from concept drift explorations, where mostly a single concept is tracked. Classical time-window or instance-decay approaches cannot work, as they lose too much signal when discarding data instances. A more sensitive approach is required, which can make better distinctions between transient effects and long term patterns. The paradigm we offer is creating a model tracking the time changing behavior throughout the life span of the data. This allows us to exploit the relevant components of all data instances, while discarding only what is modeled as being irrelevant. Accordingly, we revamp two leading collaborative filtering recommendation approaches. Evaluation is made on a large movie rating dataset by Netflix. Results are encouraging and better than those previously reported on this dataset.", "title": "" }, { "docid": "8c07982729ca439c8e346cbe018a7198", "text": "The need for diversification manifests in various recommendation use cases. In this work, we propose a novel approach to diversifying a list of recommended items, which maximizes the utility of the items subject to the increase in their diversity. From a technical perspective, the problem can be viewed as maximization of a modular function on the polytope of a submodular function, which can be solved optimally by a greedy method. We evaluate our approach in an offline analysis, which incorporates a number of baselines and metrics, and in two online user studies. In all the experiments, our method outperforms the baseline methods.", "title": "" }, { "docid": "841a5ecba126006e1deb962473662788", "text": "In the past decade large scale recommendation datasets were published and extensively studied. In this work we describe a detailed analysis of a sparse, large scale dataset, specifically designed to push the envelope of recommender system models. The Yahoo! Music dataset consists of more than a million users, 600 thousand musical items and more than 250 million ratings, collected over a decade. It is characterized by three unique features: First, rated items are multi-typed, including tracks, albums, artists and genres; Second, items are arranged within a four level taxonomy, proving itself effective in coping with a severe sparsity problem that originates from the unusually large number of items (compared to, e.g., movie ratings datasets). Finally, fine resolution timestamps associated with the ratings enable a comprehensive temporal and session analysis. We further present a matrix factorization model exploiting the special characteristics of this dataset. In particular, the model incorporates a rich bias model with terms that capture information from the taxonomy of items and different temporal dynamics of music ratings. 
To gain additional insights into its properties, we organized the KddCup-2011 competition about this dataset. As the competition drew thousands of participants, we expect the dataset to attract considerable research activity in the future.", "title": "" }, { "docid": "539a25209bf65c8b26cebccf3e083cd0", "text": "We study the problem of web search result diversification in the case where intent based relevance scores are available. A diversified search result will hopefully satisfy the information need of users who may have different intents. In this context, we first analyze the properties of an intent-based metric, ERR-IA, to measure relevance and diversity altogether. We argue that this is a better metric than some previously proposed intent aware metrics and show that it has a better correlation with abandonment rate. We then propose an algorithm to rerank web search results based on optimizing an objective function corresponding to this metric and evaluate it on shopping related queries.", "title": "" } ]
[ { "docid": "f69723ed73c7edd9856883bbb086ed0c", "text": "An algorithm for license plate recognition (LPR) applied to the intelligent transportation system is proposed on the basis of a novel shadow removal technique and character recognition algorithms. This paper has two major contributions. One contribution is a new binary method, i.e., the shadow removal method, which is based on the improved Bernsen algorithm combined with the Gaussian filter. Our second contribution is a character recognition algorithm known as support vector machine (SVM) integration. In SVM integration, character features are extracted from the elastic mesh, and the entire address character string is taken as the object of study, as opposed to a single character. This paper also presents improved techniques for image tilt correction and image gray enhancement. Our algorithm is robust to the variance of illumination, view angle, position, size, and color of the license plates when working in a complex environment. The algorithm was tested with 9026 images, such as natural-scene vehicle images using different backgrounds and ambient illumination particularly for low-resolution images. The license plates were properly located and segmented as 97.16% and 98.34%, respectively. The optical character recognition system is the SVM integration with different character features, whose performance for numerals, Kana, and address recognition reached 99.5%, 98.6%, and 97.8%, respectively. Combining the preceding tests, the overall performance of success for the license plate achieves 93.54% when the system is used for LPR in various complex conditions.", "title": "" }, { "docid": "a6a98545230e6dd5c87948f5b000a076", "text": "The Traveling Salesman Problem (TSP) is one of the standard test problems used in performance analysis of discrete optimization algorithms. The Ant Colony Optimization (ACO) algorithm appears among heuristic algorithms used for solving discrete optimization problems. In this study, a new hybrid method is proposed to optimize parameters that affect performance of the ACO algorithm using Particle Swarm Optimization (PSO). In addition, 3-Opt heuristic method is added to proposed method in order to improve local solutions. The PSO algorithm is used for detecting optimum values of parameters ̨ and ˇ which are used for city selection operations in the ACO algorithm and determines significance of inter-city pheromone and distances. The 3-Opt algorithm is used for the purpose of improving city selection operations, which could not be improved due to falling in local minimums by the ACO algorithm. The performance of proposed hybrid method is investigated on ten different benchmark problems taken from literature and it is compared to the performance of some well-known algorithms. Experimental results show that the performance of proposed method by using fewer ants than the number of cities for the TSPs is better than the performance of compared methods in most cases in terms of solution quality and robustness. © 2015 Elsevier B.V. 
All rights reserved.", "title": "" }, { "docid": "75fb9b4adf41c0a93f72084cc3a7444a", "text": "OBJECTIVE\nIn this study, we tested an expanded model of Kanter's structural empowerment, which specified the relationships among structural and psychological empowerment, job strain, and work satisfaction.\n\n\nBACKGROUND\nStrategies proposed in Kanter's empowerment theory have the potential to reduce job strain and improve employee work satisfaction and performance in current restructured healthcare settings. The addition to the model of psychological empowerment as an outcome of structural empowerment provides an understanding of the intervening mechanisms between structural work conditions and important organizational outcomes.\n\n\nMETHODS\nA predictive, nonexperimental design was used to test the model in a random sample of 404 Canadian staff nurses. The Conditions of Work Effectiveness Questionnaire, the Psychological Empowerment Questionnaire, the Job Content Questionnaire, and the Global Satisfaction Scale were used to measure the major study variables.\n\n\nRESULTS\nStructural equation modelling analyses revealed a good fit of the hypothesized model to the data based on various fit indices (χ² = 1140, df = 545, χ²/df ratio = 2.09, CFI = 0.986, RMSEA = 0.050). The amount of variance accounted for in the model was 58%. Staff nurses felt that structural empowerment in their workplace resulted in higher levels of psychological empowerment. These heightened feelings of psychological empowerment in turn strongly influenced job strain and work satisfaction. However, job strain did not have a direct effect on work satisfaction.\n\n\nCONCLUSIONS\nThese results provide initial support for an expanded model of organizational empowerment and offer a broader understanding of the empowerment process.", "title": "" }, { "docid": "f3e5941be4543d5900d56c1a7d93d0ea", "text": "These working notes summarize the different approaches we have explored in order to classify a corpus of tweets related to the 2015 Spanish General Election (COSET 2017 task from IberEval 2017). Two approaches were tested during the COSET 2017 evaluations: Neural Networks with Sentence Embeddings (based on TensorFlow) and N-gram Language Models (based on SRILM). Our results with these approaches were modest: both ranked above the “Most frequent baseline”, but below the “Bag-of-words + SVM” baseline. A third approach was tried after the COSET 2017 evaluation phase was over: Advanced Linear Models (based on fastText). Results measured over the COSET 2017 Dev and Test show that this approach is well above the “TF-IDF+RF” baseline.", "title": "" }, { "docid": "425c96a3ed2d88bbc9324101626c992d", "text": "Nonlocal image representation or group sparsity has attracted considerable interest in various low-level vision tasks and has led to several state-of-the-art image denoising techniques, such as BM3D, learned simultaneous sparse coding. In the past, convex optimization with sparsity-promoting convex regularization was usually regarded as a standard scheme for estimating sparse signals in noise. However, using convex regularization cannot still obtain the correct sparsity solution under some practical problems including image inverse problems. In this letter, we propose a nonconvex weighted $\ell_p$ minimization based group sparse representation framework for image denoising.
To make the proposed scheme tractable and robust, the generalized soft-thresholding algorithm is adopted to solve the nonconvex $\ell_p$ minimization problem. In addition, to improve the accuracy of the nonlocal similar patch selection, an adaptive patch search scheme is proposed. Experimental results demonstrate that the proposed approach not only outperforms many state-of-the-art denoising methods such as BM3D and weighted nuclear norm minimization, but also results in a competitive speed.", "title": "" }, { "docid": "4dfb5d8dfb09f510427aa6400b1f330f", "text": "In this paper, a permanent magnet synchronous motor for ship propulsion is designed. The appropriate number of poles and slots are selected and the cogging torque is minimized in order to reduce noise and vibrations. To perform high efficiency and reliability, the inverter system consists of multiple modules and the stator coil has multi phases and groups. Because of the modular structure, the motor can be operated with some damaged inverters. In order to maintain high efficiency at low speed operation, same phase coils of different group are connected in series and excited by the half number of inverters than at high speed operation. A MW-class motor is designed and the performances with the proposed inverter control method are calculated.", "title": "" }, { "docid": "be447131554900aaba025be449944613", "text": "Attackers increasingly take advantage of innocent users who tend to casually open email messages assumed to be benign, carrying malicious documents. Recent targeted attacks aimed at organizations utilize the new Microsoft Word documents (*.docx). Anti-virus software fails to detect new unknown malicious files, including malicious docx files. In this paper, we present ALDOCX, a framework aimed at accurate detection of new unknown malicious docx files that also efficiently enhances the framework’s detection capabilities over time. Detection relies upon our new structural feature extraction methodology (SFEM), which is performed statically using meta-features extracted from docx files. Using machine-learning algorithms with SFEM, we created a detection model that successfully detects new unknown malicious docx files. In addition, because it is crucial to maintain the detection model’s updatability and incorporate new malicious files created daily, ALDOCX integrates our active-learning (AL) methods, which are designed to efficiently assist anti-virus vendors by better focusing their experts’ analytical efforts and enhance detection capability. ALDOCX identifies and acquires new docx files that are most likely malicious, as well as informative benign files. These files are used for enhancing the knowledge stores of both the detection model and the anti-virus software. The evaluation results show that by using ALDOCX and SFEM, we achieved a high detection rate of malicious docx files (94.44% TPR) compared with the anti-virus software (85.9% TPR)—with very low FPR rates (0.19%). ALDOCX’s AL methods used only 14% of the labeled docx files, which led to a reduction of 95.5% in security experts’ labeling efforts compared with the passive learning and the support vector machine (SVM)-Margin (existing active-learning method).
Our AL methods also showed a significant improvement of 91% in number of unknown docx malware acquired, compared with the passive learning and the SVM-Margin, thus providing an improved updating solution for the detection model, as well as the anti-virus software widely used within organizations.", "title": "" }, { "docid": "19b8acf4e5c68842a02e3250c346d09b", "text": "A dual-band dual-polarized microstrip antenna array for an advanced multi-function radio function concept (AMRFC) radar application operating at S and X-bands is proposed. Two stacked planar arrays with three different thin substrates (RT/Duroid 5880 substrates with εr=2.2 and three different thicknesses of 0.253 mm, 0.508 mm and 0.762 mm) are integrated to provide simultaneous operation at S band (3~3.3 GHz) and X band (9~11 GHz). To allow similar scan ranges for both bands, the S-band elements are selected as perforated patches to enable the placement of the X-band elements within them. Square patches are used as the radiating elements for the X-band. Good agreement exists between the simulated and the measured results. The measured impedance bandwidth (VSWR≤2) of the prototype array reaches 9.5 % and 25 % for the Sand X-bands, respectively. The measured isolation between the two orthogonal polarizations for both bands is better than 15 dB. The measured cross-polarization level is ≤—21 dB for the S-band and ≤—20 dB for the X-band.", "title": "" }, { "docid": "ada7b43edc18b321c57a978d7a3859ae", "text": "We present AutoExtend, a system that combines word embeddings with semantic resources by learning embeddings for non-word objects like synsets and entities and learning word embeddings that incorporate the semantic information from the resource. The method is based on encoding and decoding the word embeddings and is flexible in that it can take any word embeddings as input and does not need an additional training corpus. The obtained embeddings live in the same vector space as the input word embeddings. A sparse tensor formalization guarantees efficiency and parallelizability. We use WordNet, GermaNet, and Freebase as semantic resources. AutoExtend achieves state-of-the-art performance on Word-in-Context Similarity and Word Sense Disambiguation tasks.", "title": "" }, { "docid": "ffdee20af63d50f39f9cc5077a14dc87", "text": "Recent advancement in remote sensing facilitates collection of hyperspectral images (HSIs) in hundreds of bands which provides a potential platform to detect and identify the unique trends in land and atmospheric datasets with high accuracy. But along with the detailed information, HSIs also pose several processing problems such as1) increase in computational complexity due to high dimensionality. So dimension reduction without losing information is one of the major concerns in this area and 2) limited availability of labeled training sets causes the ill posed problem which is needed to be addressed by the classification algorithms. Initially classification techniques of HSIs were based on spectral information only. Gradually researchers started utilizing both spectral and spatial information to increase classification accuracy. Also the classification algorithms have evolved from supervised to semi supervised mode. This paper presents a survey about the techniques available in the field of HSI processing to provide a seminal view of how the field of HSI analysis has evolved over the last few decades and also provides a snapshot of the state of the art techniques used in this area. 
General Terms Classification algorithms, image processing, supervised, semi supervised techniques.", "title": "" }, { "docid": "38a1ed4d7147a48758c1a03c5c136457", "text": "The Penrose inequality gives a lower bound for the total mass of a spacetime in terms of the area of suitable surfaces that represent black holes. Its validity is supported by the cosmic censorship conjecture and therefore its proof (or disproof) is an important problem in relation with gravitational collapse. The Penrose inequality is a very challenging problem in mathematical relativity and it has received continuous attention since its formulation by Penrose in the early seventies. Important breakthroughs have been made in the last decade or so, with the complete resolution of the so-called Riemannian Penrose inequality and a very interesting proposal to address the general case by Bray and Khuri. In this paper, the most important results on this field will be discussed and the main ideas behind their proofs will be summarized, with the aim of presenting what is the status of our present knowledge in this topic.", "title": "" }, { "docid": "ebea79abc60a5d55d0397d21f54cc85e", "text": "The increasing availability of large-scale location traces creates unprecedent opportunities to change the paradigm for knowledge discovery in transportation systems. A particularly promising area is to extract useful business intelligence, which can be used as guidance for reducing inefficiencies in energy consumption of transportation sectors, improving customer experiences, and increasing business performances. However, extracting business intelligence from location traces is not a trivial task. Conventional data analytic tools are usually not customized for handling large, complex, dynamic, and distributed nature of location traces. To that end, we develop a taxi business intelligence system to explore the massive taxi location traces from different business perspectives with various data mining functions. Since we implement the system using the real-world taxi GPS data, this demonstration will help taxi companies to improve their business performances by understanding the behaviors of both drivers and customers. In addition, several identified technical challenges also motivate data mining people to develop more sophisticate techniques in the future.", "title": "" }, { "docid": "3d1fa2e999a2cc54b3c1ec98d121e9fb", "text": "Model-based design is a powerful design technique for cyber-physical systems, but too often literature assumes knowledge of a methodology without reference to an explicit design process, instead focusing on isolated steps such as simulation, software synthesis, or verification. We combine these steps into an explicit and holistic methodology for model-based design of cyber-physical systems from abstraction to architecture, and from concept to realization. We decompose model-based design into ten fundamental steps, describe and evaluate an iterative design methodology, and evaluate this methodology in the development of a cyber-physical system.", "title": "" }, { "docid": "46ea713c4206d57144350a7871433392", "text": "In this paper, we use a blog corpus to demonstrate that we can often identify the author of an anonymous text even where there are many thousands of candidate authors. 
Our approach combines standard information retrieval methods with a text categorization meta-learning scheme that determines when to even venture a guess.", "title": "" }, { "docid": "253b2696bb52f43528f02e85d1070e96", "text": "Prosocial behavior consists of behaviors regarded as beneficial to others, including helping, sharing, comforting, guiding, rescuing, and defending others. Although women and men are similar in engaging in extensive prosocial behavior, they are different in their emphasis on particular classes of these behaviors. The specialty of women is prosocial behaviors that are more communal and relational, and that of men is behaviors that are more agentic and collectively oriented as well as strength intensive. These sex differences, which appear in research in various settings, match widely shared gender role beliefs. The origins of these beliefs lie in the division of labor, which reflects a biosocial interaction between male and female physical attributes and the social structure. The effects of gender roles on behavior are mediated by hormonal processes, social expectations, and individual dispositions.", "title": "" }, { "docid": "abed12088956b9b695a0d5a158dc1f71", "text": "Neural encoding of pitch in the auditory brainstem is known to be shaped by long-term experience with language or music, implying that early sensory processing is subject to experience-dependent neural plasticity. In language, pitch patterns consist of sequences of continuous, curvilinear contours; in music, pitch patterns consist of relatively discrete, stair-stepped sequences of notes. The primary aim was to determine the influence of domain-specific experience (language vs. music) on the encoding of pitch in the brainstem. Frequency-following responses were recorded from the brainstem in native Chinese, English amateur musicians, and English nonmusicians in response to iterated rippled noise homologues of a musical pitch interval (major third; M3) and a lexical tone (Mandarin tone 2; T2) from the music and language domains, respectively. Pitch-tracking accuracy (whole contour) and pitch strength (50 msec sections) were computed from the brainstem responses using autocorrelation algorithms. Pitch-tracking accuracy was higher in the Chinese and musicians than in the nonmusicians across domains. Pitch strength was more robust across sections in musicians than in nonmusicians regardless of domain. In contrast, the Chinese showed larger pitch strength, relative to nonmusicians, only in those sections of T2 with rapid changes in pitch. Interestingly, musicians exhibited greater pitch strength than the Chinese in one section of M3, corresponding to the onset of the second musical note, and two sections within T2, corresponding to a note along the diatonic musical scale. We infer that experience-dependent plasticity of brainstem responses is shaped by the relative saliency of acoustic dimensions underlying the pitch patterns associated with a particular domain.", "title": "" }, { "docid": "7d0fb12fce0ef052684a8664a3f5c543", "text": "In this paper, we consider a finite-horizon Markov decision process (MDP) for which the objective at each stage is to minimize a quantile-based risk measure (QBRM) of the sequence of future costs; we call the overall objective a dynamic quantile-based risk measure (DQBRM). 
In particular, we consider optimizing dynamic risk measures where the one-step risk measures are QBRMs, a class of risk measures that includes the popular value at risk (VaR) and the conditional value at risk (CVaR). Although there is considerable theoretical development of risk-averse MDPs in the literature, the computational challenges have not been explored as thoroughly. We propose datadriven and simulation-based approximate dynamic programming (ADP) algorithms to solve the risk-averse sequential decision problem. We address the issue of inefficient sampling for risk applications in simulated settings and present a procedure, based on importance sampling, to direct samples toward the “risky region” as the ADP algorithm progresses. Finally, we show numerical results of our algorithms in the context of an application involving risk-averse bidding for energy storage.", "title": "" }, { "docid": "3d0b507f18dca7e2710eab5fdaa9a20b", "text": "This paper is designed to illustrate and consider the relations between three types of metarepresentational ability used in verbal comprehension: the ability to metarepresent attributed thoughts, the ability to metarepresent attributed utterances, and the ability to metarepresent abstract, non-attributed representations (e.g. sentence types, utterance types, propositions). Aspects of these abilities have been separ at ly considered in the literatures on “theory of mind”, Gricean pragmatics and quotation. The aim of this paper is to show how the results of these separate strands of research might be integrated with an empirically plausible pragmatic theory.", "title": "" }, { "docid": "6f845762227f11525173d6d0869f6499", "text": "We argue that the estimation of mutual information between high dimensional continuous random variables can be achieved by gradient descent over neural networks. We present a Mutual Information Neural Estimator (MINE) that is linearly scalable in dimensionality as well as in sample size, trainable through back-prop, and strongly consistent. We present a handful of applications on which MINE can be used to minimize or maximize mutual information. We apply MINE to improve adversarially trained generative models. We also use MINE to implement the Information Bottleneck, applying it to supervised classification; our results demonstrate substantial improvement in flexibility and performance in these settings.", "title": "" }, { "docid": "f37d9a57fd9100323c70876cf7a1d7ad", "text": "Neural networks encounter serious catastrophic forgetting when information is learned sequentially, which is unacceptable for both a model of human memory and practical engineering applications. In this study, we propose a novel biologically inspired dual-network memory model that can significantly reduce catastrophic forgetting. The proposed model consists of two distinct neural networks: hippocampal and neocortical networks. Information is first stored in the hippocampal network, and thereafter, it is transferred to the neocortical network. In the hippocampal network, chaotic behavior of neurons in the CA3 region of the hippocampus and neuronal turnover in the dentate gyrus region are introduced. Chaotic recall by CA3 enables retrieval of stored information in the hippocampal network. Thereafter, information retrieved from the hippocampal network is interleaved with previously stored information and consolidated by using pseudopatterns in the neocortical network. 
The computer simulation results show the effectiveness of the proposed dual-network memory model. © 2014 Elsevier B.V. All rights reserved.", "title": "" } ]
scidocsrr
3565944ef240d2406c5c6fc3079a2caf
BA-Net: Dense Bundle Adjustment Network
[ { "docid": "cd73d3acb274d179b52ec6930f6f26bd", "text": "We present the design and implementation of new inexact Newton type Bundle Adjustment algorithms that exploit hardware parallelism for efficiently solving large scale 3D scene reconstruction problems. We explore the use of multicore CPU as well as multicore GPUs for this purpose. We show that overcoming the severe memory and bandwidth limitations of current generation GPUs not only leads to more space efficient algorithms, but also to surprising savings in runtime. Our CPU based system is up to ten times and our GPU based system is up to thirty times faster than the current state of the art methods [1], while maintaining comparable convergence behavior. The code and additional results are available at http://grail.cs. washington.edu/projects/mcba.", "title": "" }, { "docid": "92cc028267bc3f8d44d11035a8212948", "text": "The limitations of current state-of-the-art methods for single-view depth estimation and semantic segmentations are closely tied to the property of perspective geometry, that the perceived size of the objects scales inversely with the distance. In this paper, we show that we can use this property to reduce the learning of a pixel-wise depth classifier to a much simpler classifier predicting only the likelihood of a pixel being at an arbitrarily fixed canonical depth. The likelihoods for any other depths can be obtained by applying the same classifier after appropriate image manipulations. Such transformation of the problem to the canonical depth removes the training data bias towards certain depths and the effect of perspective. The approach can be straight-forwardly generalized to multiple semantic classes, improving both depth estimation and semantic segmentation performance by directly targeting the weaknesses of independent approaches. Conditioning the semantic label on the depth provides a way to align the data to their physical scale, allowing to learn a more discriminative classifier. Conditioning depth on the semantic class helps the classifier to distinguish between ambiguities of the otherwise ill-posed problem. We tested our algorithm on the KITTI road scene dataset and NYU2 indoor dataset and obtained obtained results that significantly outperform current state-of-the-art in both single-view depth and semantic segmentation domain.", "title": "" }, { "docid": "45d6863e54b343d7a081e79c84b81e65", "text": "In order to obtain optimal 3D structure and viewing parameter estimates, bundle adjustment is often used as the last step of feature-based structure and motion estimation algorithms. Bundle adjustment involves the formulation of a large scale, yet sparse minimization problem, which is traditionally solved using a sparse variant of the Levenberg-Marquardt optimization algorithm that avoids storing and operating on zero entries. This paper argues that considerable computational benefits can be gained by substituting the sparse Levenberg-Marquardt algorithm in the implementation of bundle adjustment with a sparse variant of Powell's dog leg non-linear least squares technique. Detailed comparative experimental results provide strong evidence supporting this claim", "title": "" } ]
[ { "docid": "56642ffad112346186a5c3f12133e59b", "text": "The Skills for Inclusive Growth (S4IG) program is an initiative of the Australian Government’s aid program and implemented with the Sri Lankan Ministry of Skills Development and Vocational Training, Tourism Authorities, Provincial and District Level Government, Industry and Community Organisations. The Program will demonstrate how an integrated approach to skills development can support inclusive economic growth opportunities along the tourism value chain in the four districts of Trincomalee, Ampara, Batticaloa (Eastern Province) and Polonnaruwa (North Central Province). In doing this the S4IG supports sustainable job creation and increased incomes and business growth for the marginalised and the disadvantaged, particularly women and people with disabilities.", "title": "" }, { "docid": "b915033fd3f8fdea3fc7bf9e3f95146d", "text": "Software traceability is a required element in the development and certification of safety-critical software systems. However, trace links, which are created at significant cost and effort, are often underutilized in practice due primarily to the fact that project stakeholders often lack the skills needed to formulate complex trace queries. To mitigate this problem, we present a solution which transforms spoken or written natural language queries into structured query language (SQL). TiQi includes a general database query mechanism and a domain-specific model populated with trace query concepts, project-specific terminology, token disambiguators, and query transformation rules. We report results from four different experiments exploring user preferences for natural language queries, accuracy of the generated trace queries, efficacy of the underlying disambiguators, and stability of the trace query concepts. Experiments are conducted against two different datasets and show that users have a preference for written NL queries. Queries were transformed at accuracy rates ranging from 47 to 93 %.", "title": "" }, { "docid": "3b2607bda35e535c2c4410e4c2b21a4f", "text": "There has been recent interest in designing systems that use the tongue as an input interface. Prior work however either require surgical procedures or in-mouth sensor placements. In this paper, we introduce TongueSee, a non-intrusive tongue machine interface that can recognize a rich set of tongue gestures using electromyography (EMG) signals from the surface of the skin. We demonstrate the feasibility and robustness of TongueSee with experimental studies to classify six tongue gestures across eight participants. TongueSee achieves a classification accuracy of 94.17% and a false positive probability of 0.000358 per second using three-protrusion preamble design.", "title": "" }, { "docid": "1f52dc0ee257b56b24c49b9520cf38da", "text": "We extend approaches for skinning characters to the general setting of skinning deformable mesh animations. We provide an automatic algorithm for generating progressive skinning approximations, that is particularly efficient for pseudo-articulated motions. Our contributions include the use of nonparametric mean shift clustering of high-dimensional mesh rotation sequences to automatically identify statistically relevant bones, and robust least squares methods to determine bone transformations, bone-vertex influence sets, and vertex weight values. We use a low-rank data reduction model defined in the undeformed mesh configuration to provide progressive convergence with a fixed number of bones. 
We show that the resulting skinned animations enable efficient hardware rendering, rest pose editing, and deformable collision detection. Finally, we present numerous examples where skins were automatically generated using a single set of parameter values.", "title": "" }, { "docid": "f14eeb6dff3f865bc65427210dd49aae", "text": "Although the most intensively studied mammalian olfactory system is that of the mouse, in which olfactory chemical cues of one kind or another are detected in four different nasal areas [the main olfactory epithelium (MOE), the septal organ (SO), Grüneberg's ganglion, and the sensory epithelium of the vomeronasal organ (VNO)], the extraordinarily sensitive olfactory system of the dog is also an important model that is increasingly used, for example in genomic studies of species evolution. Here we describe the topography and extent of the main olfactory and vomeronasal sensory epithelia of the dog, and we report finding no structures equivalent to the Grüneberg ganglion and SO of the mouse. Since we examined adults, newborns, and fetuses we conclude that these latter structures are absent in dogs, possibly as the result of regression or involution. The absence of a vomeronasal component based on VR2 receptors suggests that the VNO may be undergoing a similar involutionary process.", "title": "" }, { "docid": "c7a73ab57087752d50d79d38a84c0775", "text": "In this paper, we address the problem of model-free online object tracking based on color representations. According to the findings of recent benchmark evaluations, such trackers often tend to drift towards regions which exhibit a similar appearance compared to the object of interest. To overcome this limitation, we propose an efficient discriminative object model which allows us to identify potentially distracting regions in advance. Furthermore, we exploit this knowledge to adapt the object representation beforehand so that distractors are suppressed and the risk of drifting is significantly reduced. We evaluate our approach on recent online tracking benchmark datasets demonstrating state-of-the-art results. In particular, our approach performs favorably both in terms of accuracy and robustness compared to recent tracking algorithms. Moreover, the proposed approach allows for an efficient implementation to enable online object tracking in real-time.", "title": "" }, { "docid": "8a21ff7f3e4d73233208d5faa70eb7ce", "text": "Achieving robustness and energy efficiency in nanoscale CMOS process technologies is made challenging due to the presence of process, temperature, and voltage variations. Traditional fault-tolerance techniques such as N-modular redundancy (NMR) employ deterministic error detection and correction, e.g., majority voter, and tend to be power hungry. This paper proposes soft NMR that nontrivially extends NMR by consciously exploiting error statistics caused by nanoscale artifacts in order to design robust and energy-efficient systems. In contrast to conventional NMR, soft NMR employs Bayesian detection techniques in the voter. Soft voter algorithms are obtained through optimization of appropriate application aware cost functions. Analysis indicates that, on average, soft NMR outperforms conventional NMR. Furthermore, unlike NMR, in many cases, soft NMR is able to generate a correct output even when all N replicas are in error. This increase in robustness is then traded-off through voltage scaling to achieve energy efficiency. 
The design of a discrete cosine transform (DCT) image coder is employed to demonstrate the benefits of the proposed technique. Simulations in a commercial 45 nm, 1.2 V, CMOS process show that soft NMR provides up to 10× improvement in robustness, and 35 percent power savings over conventional NMR.", "title": "" }, { "docid": "641a51f9a5af9fc9dba4be3d12829fd5", "text": "In this paper, we present a novel SpaTial Attention Residue Network (STAR-Net) for recognising scene texts. The overall architecture of our STAR-Net is illustrated in fig. 1. Our STARNet emphasises the importance of representative image-based feature extraction from text regions by the spatial attention mechanism and the residue learning strategy. It is by far the deepest neural network proposed for scene text recognition.", "title": "" }, { "docid": "f7b5312646e5a847a47c460619184d92", "text": "Introduction to the Values Theory When we think of our values, we think of what is important to us in our lives (e.g., security, independence, wisdom, success, kindness, pleasure). Each of us holds numerous values with varying degrees of importance. A particular value may be very important to one person, but unimportant to another. Consensus regarding the most useful way to conceptualize basic values has emerged gradually since the 1950’s. We can summarize the main features of the conception of basic values implicit in the writings of many theorists and researchers as follows:", "title": "" }, { "docid": "d302bfb7c2b95def93525050016ac07c", "text": "Face recognition remains a challenge today as recognition performance is strongly affected by variability such as illumination, expressions and poses. In this work we apply Convolutional Neural Networks (CNNs) on the challenging task of both 2D and 3D face recognition. We constructed two CNN models, namely CNN-1 (two convolutional layers) and CNN-2 (one convolutional layer) for testing on 2D and 3D dataset. A comprehensive parametric study of two CNN models on face recognition is represented in which different combinations of activation function, learning rate and filter size are investigated. We find that CNN-2 has a better accuracy performance on both 2D and 3D face recognition. Our experimental results show that an accuracy of 85.15% was accomplished using CNN-2 on depth images with FRGCv2.0 dataset (4950 images with 557 objectives). An accuracy of 95% was achieved using CNN-2 on 2D raw image with the AT&T dataset (400 images with 40 objectives). The results indicate that the proposed CNN model is capable to handle complex information from facial images in different dimensions. These results provide valuable insights into further application of CNN on 3D face recognition.", "title": "" }, { "docid": "5706118011df482fdd1e3690c638e963", "text": "This paper proposes a novel approach for segmenting primary video objects by using Complementary Convolutional Neural Networks (CCNN) and neighborhood reversible flow. The proposed approach first pre-trains CCNN on massive images with manually annotated salient objects in an end-to-end manner, and the trained CCNN has two separate branches that simultaneously handle two complementary tasks, i.e., foregroundness and backgroundness estimation. By applying CCNN on each video frame, the spatial foregroundness and backgroundness maps can be initialized, which are then propagated between various frames so as to segment primary video objects and suppress distractors. 
To enforce efficient temporal propagation, we divide each frame into superpixels and construct neighborhood reversible flow that reflects the most reliable temporal correspondences between superpixels in far-away frames. Within such flow, the initialized foregroundness and backgroundness can be efficiently and accurately propagated along the temporal axis so that primary video objects gradually pop-out and distractors are well suppressed. Extensive experimental results on three video datasets show that the proposed approach achieves impressive performance in comparisons with 18 state-of-the-art models.", "title": "" }, { "docid": "6b25852df72c26b1467d4c51213ca122", "text": "This paper presents a study of spectral clustering-based approaches to acoustic segment modeling (ASM). ASM aims at finding the underlying phoneme-like speech units and building the corresponding acoustic models in the unsupervised setting, where no prior linguistic knowledge and manual transcriptions are available. A typical ASM process involves three stages, namely initial segmentation, segment labeling, and iterative modeling. This work focuses on the improvement of segment labeling. Specifically, we use posterior features as the segment representations, and apply spectral clustering algorithms on the posterior representations. We propose a Gaussian component clustering (GCC) approach and a segment clustering (SC) approach. GCC applies spectral clustering on a set of Gaussian components, and SC applies spectral clustering on a large number of speech segments. Moreover, to exploit the complementary information of different posterior representations, a multiview segment clustering (MSC) approach is proposed. MSC simultaneously utilizes multiple posterior representations to cluster speech segments. To address the computational problem of spectral clustering in dealing with large numbers of speech segments, we use inner product similarity graph and make reformulations to avoid the explicit computation of the affinity matrix and Laplacian matrix. We carried out two sets of experiments for evaluation. First, we evaluated the ASM accuracy on the OGI-MTS dataset, and it was shown that our approach could yield 18.7% relative purity improvement and 15.1% relative NMI improvement compared with the baseline approach. Second, we examined the performances of our approaches in the real application of zero-resource query-by-example spoken term detection on SWS2012 dataset, and it was shown that our approaches could provide consistent improvement on four different testing scenarios with three evaluation metrics.", "title": "" }, { "docid": "9d7623afe7b3ef98f81e1de0f2f2806d", "text": "The fashion industry faces the increasing complexity of its activities such as the globalization of the market, the proliferation of information, the reduced time to market, the increasing distance between industrial partners and pressures related to costs. Digital prototype in the textile and clothing industry enables technologies in the process of product development where various operators are involved in the different stages, with various skills and competencies, and different necessity of formalizing and defining in a deterministic way the result of their activities. 
Taking into account the recent trends in the industry, the product development cycle and the use of new digital technologies cannot be restricted in the “typical cycle” but additional tools and skills are required to be integrated taking into account these developments [1].", "title": "" }, { "docid": "19c24a77726f9095e53ae792556c2a30", "text": "and Applied Analysis 3 The addition and scalar multiplication of fuzzy number in E are defined as follows: (1) ?̃? ⊕ Ṽ = (?̃? + Ṽ, ?̃? + Ṽ) ,", "title": "" }, { "docid": "113cf34bf2a86a8f1a041cfd366c00b7", "text": "People perceive and conceive of activity in terms of discrete events. Here the authors propose a theory according to which the perception of boundaries between events arises from ongoing perceptual processing and regulates attention and memory. Perceptual systems continuously make predictions about what will happen next. When transient errors in predictions arise, an event boundary is perceived. According to the theory, the perception of events depends on both sensory cues and knowledge structures that represent previously learned information about event parts and inferences about actors' goals and plans. Neurological and neurophysiological data suggest that representations of events may be implemented by structures in the lateral prefrontal cortex and that perceptual prediction error is calculated and evaluated by a processing pathway, including the anterior cingulate cortex and subcortical neuromodulatory systems.", "title": "" }, { "docid": "cd4e2e3af17cd84d4ede35807e71e783", "text": "A proposal for saliency computation within the visual cortex is put forth based on the premise that localized saliency computation serves to maximize information sampled from one's environment. The model is built entirely on computational constraints but nevertheless results in an architecture with cells and connectivity reminiscent of that appearing in the visual cortex. It is demonstrated that a variety of visual search behaviors appear as emergent properties of the model and therefore basic principles of coding and information transmission. Experimental results demonstrate greater efficacy in predicting fixation patterns across two different data sets as compared with competing models.", "title": "" }, { "docid": "30a0b6c800056408b32e9ed013565ae0", "text": "This case report presents the successful use of palatal mini-implants for rapid maxillary expansion and mandibular distalization in a skeletal Class III malocclusion. The patient was a 13-year-old girl with the chief complaint of facial asymmetry and a protruded chin. Camouflage orthodontic treatment was chosen, acknowledging the possibility of need for orthognathic surgery after completion of her growth. A bone-borne rapid expander (BBRME) was used to correct the transverse discrepancy and was then used as indirect anchorage for distalization of the lower dentition with Class III elastics. As a result, a Class I occlusion with favorable inclination of the upper teeth was achieved without any adverse effects. The total treatment period was 25 months. Therefore, BBRME can be considered an alternative treatment in skeletal Class III malocclusion.", "title": "" }, { "docid": "ea29b3421c36178680ae63c16b9cecad", "text": "Traffic engineering under OSPF routes along the shortest paths, which may cause network congestion. Software Defined Networking (SDN) is an emerging network architecture which exerts a separation between the control plane and the data plane. 
The SDN controller can centrally control the network state through modifying the flow tables maintained by routers. Network operators can flexibly split arbitrary flows to outgoing links through the deployment of the SDN. However, SDN has its own challenges of full deployment, which makes the full deployment of SDN difficult in the short term. In this paper, we explore the traffic engineering in a SDN/OSPF hybrid network. In our scenario, the OSPF weights and flow splitting ratio of the SDN nodes can both be changed. The controller can arbitrarily split the flows coming into the SDN nodes. The regular nodes still run OSPF. Our contribution is that we propose a novel algorithm called SOTE that can obtain a lower maximum link utilization. We reap a greater benefit compared with the results of the OSPF network and the SDN/OSPF hybrid network with fixed weight setting. We also find that when only 30% of the SDN nodes are deployed, we can obtain a near optimal performance.", "title": "" }, { "docid": "473968c14db4b189af126936fd5486ca", "text": "Disclaimer/Complaints regulations If you believe that digital publication of certain material infringes any of your rights or (privacy) interests, please let the Library know, stating your reasons. In case of a legitimate complaint, the Library will make the material inaccessible and/or remove it from the website. Please Ask the Library: http://uba.uva.nl/en/contact, or a letter to: Library of the University of Amsterdam, Secretariat, Singel 425, 1012 WP Amsterdam, The Netherlands. You will be contacted as soon as possible.", "title": "" }, { "docid": "44cad643330467a07beb81ce22d86371", "text": "Distributed ledger technologies are rising in popularity, mainly for the host of financial applications they potentially enable, through smart contracts. Several implementations of distributed ledgers have been proposed, and different languages for the development of smart contracts have been suggested. A great deal of attention is given to the practice of development, i.e. programming, of smart contracts. In this position paper, we argue that more attention should be given to the “traditional developers” of contracts, namely the lawyers, and we propose a list of requirements for a human and machine-readable contract authoring language, friendly to lawyers, serving as a common (and a specification) language, for programmers, and the parties to a contract.", "title": "" } ]
scidocsrr
910063110cc07ecad68ee0586ad2a2c4
High-inductive short-circuit Type IV in multi-level converter protection schemes
[ { "docid": "5cc929181c4a8ab7538b7bfc68015cf9", "text": "The IGBT can run into different short-circuit types (SC I, SC II, SC III). Especially in SC II and III, an interaction between the gate drive unit and the IGBT takes place. A self-turn-off mechanism after short-circuit turn on can occur. Parasitic elements in the connection between the IGBT and the gate unit as well as asymmetrical wiring of devices connected in parallel are of effect to the short-circuit capability. In high-voltage IGBTs, filament formation can occur at short-circuit condition. Destructive measurements with its failure patterns and short-circuit protection methods are shown.", "title": "" } ]
[ { "docid": "5369b1f53fe492e07eaafe8979fc6a31", "text": "MOTIVATION\nDNA microarray experiments generating thousands of gene expression measurements are being used to gather information from tissue and cell samples regarding gene expression differences that will be useful in diagnosing disease. We have developed a new method to analyse this kind of data using support vector machines (SVMs). This analysis consists of both classification of the tissue samples, and an exploration of the data for mis-labeled or questionable tissue results.\n\n\nRESULTS\nWe demonstrate the method in detail on samples consisting of ovarian cancer tissues, normal ovarian tissues, and other normal tissues. The dataset consists of expression experiment results for 97,802 cDNAs for each tissue. As a result of computational analysis, a tissue sample is discovered and confirmed to be wrongly labeled. Upon correction of this mistake and the removal of an outlier, perfect classification of tissues is achieved, but not with high confidence. We identify and analyse a subset of genes from the ovarian dataset whose expression is highly differentiated between the types of tissues. To show robustness of the SVM method, two previously published datasets from other types of tissues or cells are analysed. The results are comparable to those previously obtained. We show that other machine learning methods also perform comparably to the SVM on many of those datasets.\n\n\nAVAILABILITY\nThe SVM software is available at http://www.cs.columbia.edu/~bgrundy/svm.", "title": "" }, { "docid": "46579940eac63ef355f8e79ef4358306", "text": "In this paper socially intelligent agents (SIA) are understood as agents which do not only from an observer point of view behave socially but which are able to recognize and identify other agents and establish and maintain relationships to other agents. The process of building socially intelligent agents is influenced by what the human as the designer considers `social', and conversely agent tools which are behaving socially can influence human conceptions of sociality. A Cognitive Technology (CT) approach towards designing SIA affords us an opportunity to study the process of 1) how new forms of interactions and functionalities and use of technology can emerge at the human-tool interface, 2) how social agents can constrain their cognitive and social potential, and 3) how social agent technology and human (social) cognition can co-evolve and co-adapt and result in new forms of sociality. Agent-human interaction requires a cognitive fit between SIA technology and the human-in-the-loop as designer of, user of, and participant in social interactions. Aspects of human social psychology, e.g. story-telling, empathy, embodiment, historical and ecological grounding can contribute to a believable and cognitively well-balanced design of SIA technology, in order to further the relationship between humans and agent tools. It is hoped that approaches to believability based on these concepts can avoid the `shallowness' that merely takes advantage of the anthropomorphizing tendency in humans. This approach is put into the general framework of Embodied Artificial Life (EAL) research.
The paper concludes with a terminology and list of guidelines useful for SIA design.", "title": "" }, { "docid": "fa1440ce586681326b18807e41e5465a", "text": "Governments and businesses increasingly rely on data analytics and machine learning (ML) for improving their competitive edge in areas such as consumer satisfaction, threat intelligence, decision making, and product efficiency. However, by cleverly corrupting a subset of data used as input to a target’s ML algorithms, an adversary can perturb outcomes and compromise the effectiveness of ML technology. While prior work in the field of adversarial machine learning has studied the impact of input manipulation on correct ML algorithms, we consider the exploitation of bugs in ML implementations. In this paper, we characterize the attack surface of ML programs, and we show that malicious inputs exploiting implementation bugs enable strictly more powerful attacks than the classic adversarial machine learning techniques. We propose a semi-automated technique, called steered fuzzing, for exploring this attack surface and for discovering exploitable bugs in machine learning programs, in order to demonstrate the magnitude of this threat. As a result of our work, we responsibly disclosed five vulnerabilities, established three new CVE-IDs, and illuminated a common insecure practice across many machine learning systems. Finally, we outline several research directions for further understanding and mitigating this threat.", "title": "" }, { "docid": "102bec350390b46415ae07128cb4e77f", "text": "We capitalize on large amounts of unlabeled video in order to learn a model of scene dynamics for both video recognition tasks (e.g. action classification) and video generation tasks (e.g. future prediction). We propose a generative adversarial network for video with a spatio-temporal convolutional architecture that untangles the scene’s foreground from the background. Experiments suggest this model can generate tiny videos up to a second at full frame rate better than simple baselines, and we show its utility at predicting plausible futures of static images. Moreover, experiments and visualizations show the model internally learns useful features for recognizing actions with minimal supervision, suggesting scene dynamics are a promising signal for representation learning. We believe generative video models can impact many applications in video understanding and simulation.", "title": "" }, { "docid": "ac3f7a9557988101fb9e2eea0c1aa652", "text": "Against the background of increasing awareness and appreciation of issues such as global warming and the impact of mankind's activities such as agriculture on the global environment, this paper updates previous assessments of some key environmental impacts that crop biotechnology has had on global agriculture. It focuses on the environmental impacts associated with changes in pesticide use and greenhouse gas emissions arising from the use of GM crops. The adoption of the technology has reduced pesticide spraying by 503 million kg (-8.8%) and, as a result, decreased the environmental impact associated with herbicide and insecticide use on these crops (as measured by the indicator the Environmental Impact Quotient [EIQ]) by 18.7%. 
The technology has also facilitated a significant reduction in the release of greenhouse gas emissions from this cropping area, which, in 2012, was equivalent to removing 11.88 million cars from the roads.", "title": "" }, { "docid": "cba5c85ee9a9c4f97f99c1fcb35d0623", "text": "Virtualized Cloud platforms have become increasingly common and the number of online services hosted on these platforms is also increasing rapidly. A key problem faced by providers in managing these services is detecting the performance anomalies and adjusting resources accordingly. As online services generate a very large amount of monitored data in the form of time series, it becomes very difficult to process this complex data by traditional approaches. In this work, we present a novel distributed parallel approach for performance anomaly detection. We build upon Holt-Winters forecasting for automatic aberrant behavior detection in time series. First, we extend the technique to work with MapReduce paradigm. Next, we correlate the anomalous metrics with the target Service Level Objective (SLO) in order to locate the suspicious metrics. We implemented and evaluated our approach on a production Cloud encompassing IaaS and PaaS service models. Experimental results confirm that our approach is efficient and effective in capturing the metrics causing performance anomalies in large time series datasets.", "title": "" }, { "docid": "c702c4dbde96a024fac6fe4cbb052ce9", "text": "Vehicular communications, referring to information exchange among vehicles, infrastructures, etc., have attracted a lot of attention recently due to great potential to support intelligent transportation, various safety applications, and on-road infotainment. In this paper, we provide a comprehensive overview of a recent research on enabling efficient and reliable vehicular communications from the network layer perspective. First, we introduce general applications and unique characteristics of vehicular communication networks and the corresponding classifications. Based on different driving patterns, we categorize vehicular networks into manual driving vehicular networks and automated driving vehicular networks, and then discuss the available communication techniques, network structures, routing protocols, and handoff strategies applied in these vehicular networks. Finally, we identify the challenges confronted by the current vehicular networks and present the corresponding research opportunities.", "title": "" }, { "docid": "452c9eb3b5d411b1f32d6cf6a230b3e2", "text": "The core vector machine (CVM) is a recent approach for scaling up kernel methods based on the notion of minimum enclosing ball (MEB). Though conceptually simple, an efficient implementation still requires a sophisticated numerical solver. In this paper, we introduce the enclosing ball (EB) problem where the ball's radius is fixed and thus does not have to be minimized. We develop efficient (1 + e)-approximation algorithms that are simple to implement and do not require any numerical solver. For the Gaussian kernel in particular, a suitable choice of this (fixed) radius is easy to determine, and the center obtained from the (1 + e)-approximation of this EB problem is close to the center of the corresponding MEB. 
Experimental results show that the proposed algorithm has accuracies comparable to the other large-scale SVM implementations, but can handle very large data sets and is even faster than the CVM in general.", "title": "" }, { "docid": "becadf8b9d86457d9691e580b17366b5", "text": "Failure of granular media under natural and laboratory loading conditions involves a variety of micromechanical processes producing several geometrically, kinematically, and texturally distinct types of structures. This paper provides a geological framework for failure processes as well as a mathematical model to analyze these processes. Of particular interest is the formation of tabular deformation bands in granular rocks, which could exhibit distinct localized deformation features including simple shearing, pure compaction/dilation, and various possible combinations thereof. The analysis is carried out using classical bifurcation theory combined with non-linear continuum mechanics and theoretical/computational plasticity. For granular media, yielding and plastic flow are known to be influenced by all three stress invariants, and thus we formulate a family of three-invariant plasticity models with a compression cap to capture the entire spectrum of yielding of geomaterials. We then utilize a return mapping algorithm in principal stress directions to integrate the stresses over discrete load increments, allowing the solution to find the critical bifurcation point for a given loading path. The formulation covers both the infinitesimal and finite deformation regimes, and comparisons are made of the localization criteria in the two regimes. In the accompanying paper, we demonstrate with numerical examples the role that the constitutive model and finite deformation effects play on the prediction of the onset of deformation bands in geomaterials. © 2004 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "911d69eddd085d7642cdefbc658c821f", "text": "The paper proposes an 8-bit AMOLED driver IC with a polynomial interpolation DAC. This architecture maintains high-accuracy AMOLED panels with 8-bit compensated gamma correction and supports a low-complexity configuration, which results in additional occupied die area. The proposed driver consists of a 6-bit gamma correction resistor-string DAC and a 2-bit polynomial interpolation current-modulation sub-DAC. The two-stage DAC leads to a compact die size compared with a conventional 8-bit resistor-string DAC, and the polynomial interpolation method provides more accurate grey level voltages than the linear one. The AMOLED driver was realized in a 0.35-μm CMOS process with DNL and INL of 0.43 LSB and 0.43 LSB.", "title": "" }, { "docid": "43398874a34c7346f41ca7a18261e878", "text": "This article investigates transitions at the level of societal functions (e.g., transport, communication, housing). Societal functions are fulfilled by sociotechnical systems, which consist of a cluster of aligned elements, e.g., artifacts, knowledge, markets, regulation, cultural meaning, infrastructure, maintenance networks and supply networks. Transitions are conceptualised as system innovations, i.e., a change from one sociotechnical system to another. The article describes a co-evolutionary multi-level perspective to understand how system innovations come about through the interplay between technology and society. 
The article makes a new step as it further refines the multi-level perspective by distinguishing characteristic patterns: (a) two transition routes, (b) fit–stretch pattern, and (c) patterns in breakthrough. © 2005 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "38b8ce180e17f9a20189feeeb3d4410f", "text": "In this paper, we present a Stochastic Scene Grammar (SSG) for parsing 2D indoor images into 3D scene layouts. Our grammar model integrates object functionality, 3D object geometry, and their 2D image appearance in a Function-Geometry-Appearance (FGA) hierarchy. In contrast to the prevailing approach in the literature which recognizes scenes and detects objects through appearance-based classification using machine learning techniques, our method takes a different perspective to scene understanding and recognizes objects and scenes by reasoning their functionality. Functionality is an essential property which often defines the categories of objects and scenes, and decides the design of geometry and scene layout. For example, a sofa is for people to sit comfortably, and a kitchen is a space for people to prepare food with various objects. Our SSG formulates object functionality and contextual relations between objects and imagined human poses in a joint probability distribution in the FGA hierarchy. The latter includes both functional concepts (the scene category, functional groups, functional objects, functional parts) and geometric entities (3D/2D/1D shape primitives). The decomposition of the grammar is terminated on the bottom-up detected lines and regions. We use a Markov chain Monte Carlo (MCMC) algorithm to optimize the Bayesian a posteriori probability and the output parse tree includes a 3D description of the 2D image in the FGA hierarchy. Experimental results on two challenging indoor datasets demonstrate that the proposed approach not only significantly widens the scope of indoor scene parsing from traditional scene segmentation, labeling, and 3D reconstruction to functional object recognition, but also yields improved overall performance.", "title": "" }, { "docid": "9cd82478c45179f354ab591bff44d59b", "text": "License Plate Recognition (LPR) is a well known image processing technology. An LPR system consists of four steps: capturing the image from a digital camera, pre-processing, character segmentation and character recognition. License plates are available in various styles and colors in various countries. Every country has its own license plate format. So each country develops the LPR system appropriate for the vehicle license plate format. Difficulties that the LPR systems face are the environmental and non-uniform outdoor illumination conditions. Therefore, most of the systems work under restricted environmental conditions like fixed illumination, limited vehicle speed, designated routes, and stationary backgrounds. Each LPR system uses a different combination of algorithms. From the papers being surveyed, it is realized that a good success rate of 93.7% is obtained by the combination of fuzzy logic for license plate detection and Self Organizing (SO) neural network for character recognition. 
Comparisons of different LPR systems are discussed in this paper.", "title": "" }, { "docid": "4d0f926c0b097f7b253db787e0c76b5c", "text": "The processing and interpretation of pain signals is a complex process that entails excitation of peripheral nerves, local interactions within the spinal dorsal horn, and the activation of ascending and descending circuits that comprise a loop from the spinal cord to supraspinal structures and finally exciting nociceptive inputs at the spinal level. Although the \"circuits\" described here appear to be part of normal pain processing, the system demonstrates a remarkable ability to undergo neuroplastic transformations when nociceptive inputs are extended over time, and such adaptations function as a pronociceptive positive feedback loop. Manipulations directed to disrupt any of the nodes of this pain facilitatory loop may effectively disrupt the maintenance of the sensitized pain state and diminish or abolish neuropathic pain. Understanding the ascending and descending pain facilitatory circuits may provide for the design of rational therapies that do not interfere with normal sensory processing.", "title": "" }, { "docid": "e3a766bad255bc3f4ad095cece45c637", "text": "We introduce a new task called Multimodal Named Entity Recognition (MNER) for noisy user-generated data such as tweets or Snapchat captions, which comprise short text with accompanying images. These social media posts often come in inconsistent or incomplete syntax and lexical notations with very limited surrounding textual contexts, bringing significant challenges for NER. To this end, we create a new dataset for MNER called SnapCaptions (Snapchat image-caption pairs submitted to public and crowd-sourced stories with fully annotated named entities). We then build upon the state-of-the-art Bi-LSTM word/character based NER models with 1) a deep image network which incorporates relevant visual context to augment textual information, and 2) a generic modality-attention module which learns to attenuate irrelevant modalities while amplifying the most informative ones to extract contexts from, adaptive to each sample and token. The proposed MNER model with modality attention significantly outperforms the state-of-the-art text-only NER models by successfully leveraging provided visual contexts, opening up potential applications of MNER on myriads of social media platforms.", "title": "" }, { "docid": "e322a4f6d36ccc561b6b793ef85db9c2", "text": "Abdominal bracing is often adopted in fitness and sports conditioning programs. However, there is little information on how muscular activities during the task differ among the muscle groups located in the trunk and from those during other trunk exercises. The present study aimed to quantify muscular activity levels during abdominal bracing with respect to muscle- and exercise-related differences. Ten healthy young adult men performed five static (abdominal bracing, abdominal hollowing, prone, side, and supine plank) and five dynamic (V- sits, curl-ups, sit-ups, and back extensions on the floor and on a bench) exercises. Surface electromyogram (EMG) activities of the rectus abdominis (RA), external oblique (EO), internal oblique (IO), and erector spinae (ES) muscles were recorded in each of the exercises. The EMG data were normalized to those obtained during maximal voluntary contraction of each muscle (% EMGmax). The % EMGmax value during abdominal bracing was significantly higher in IO (60%) than in the other muscles (RA: 18%, EO: 27%, ES: 19%). 
The % EMGmax values for RA, EO, and ES were significantly lower in the abdominal bracing than in some of the other exercises such as V-sits and sit-ups for RA and EO and back extensions for ES muscle. However, the % EMGmax value for IO during the abdominal bracing was significantly higher than those in most of the other exercises including dynamic ones such as curl-ups and sit-ups. These results suggest that abdominal bracing is one of the most effective techniques for inducing a higher activation in deep abdominal muscles, such as IO muscle, even compared to dynamic exercises involving trunk flexion/extension movements. Key Points: Trunk muscle activities during abdominal bracing were examined with regard to muscle- and exercise-related differences. Abdominal bracing preferentially activates internal oblique muscles even compared to dynamic exercises involving trunk flexion/extension movements. Abdominal bracing should be included in exercise programs when the goal is to improve spine stability.", "title": "" }, { "docid": "107cad2d86935768e9401495d2241b20", "text": "A method is presented for using an extended Kalman filter with state noise compensation to estimate the trajectory, orientation, and slip variables for a small-scale robotic tracked vehicle. The principal goal of the method is to enable terrain property estimation. The methodology requires kinematic and dynamic models for skid-steering, as well as tractive force models parameterized by key soil parameters. Simulation studies initially used to verify the model basis are described, and results presented from application of the estimation method to both simulated and experimental study of a 60-kg robotic tracked vehicle. Preliminary results show the method can effectively estimate vehicle trajectory relying only on the model-based estimation and onboard sensor information. Estimates of slip on the left and right track as well as slip angle are essential for ongoing work in vehicle-based soil parameter estimation. The favorable comparison against motion capture data suggests this approach will be useful for laboratory and field-based application.", "title": "" }, { "docid": "908716e7683bdc78283600f63bd3a1b0", "text": "The need for a simply applied quantitative assessment of handedness is discussed and some previous forms reviewed. An inventory of 20 items with a set of instructions and response- and computational-conventions is proposed and the results obtained from a young adult population numbering some 1100 individuals are reported. The separate items are examined from the point of view of sex, cultural and socio-economic factors which might appertain to them and also of their inter-relationship to each other and to the measure computed from them all. Criteria derived from these considerations are then applied to eliminate 10 of the original 20 items and the results recomputed to provide frequency-distribution and cumulative frequency functions and a revised item-analysis. The difference of incidence of handedness between the sexes is discussed.", "title": "" }, { "docid": "570eca9884edb7e4a03ed95763be20aa", "text": "Gene expression is a fundamentally stochastic process, with randomness in transcription and translation leading to cell-to-cell variations in mRNA and protein levels. This variation appears in organisms ranging from microbes to metazoans, and its characteristics depend both on the biophysical parameters governing gene expression and on gene network structure. 
Stochastic gene expression has important consequences for cellular function, being beneficial in some contexts and harmful in others. These situations include the stress response, metabolism, development, the cell cycle, circadian rhythms, and aging.", "title": "" }, { "docid": "9e3a7af7b8773f43ba32d30f3610af40", "text": "Several attempts to enhance statistical parametric speech synthesis have contemplated deep-learning-based postfilters, which learn to perform a mapping of the synthetic speech parameters to the natural ones, reducing the gap between them. In this paper, we introduce a new pre-training approach for neural networks, applied in LSTM-based postfilters for speech synthesis, with the objective of enhancing the quality of the synthesized speech in a more efficient manner. Our approach begins with an auto-regressive training of one LSTM network, which is used as an initialization for postfilters based on a denoising autoencoder architecture. We show the advantages of this initialization on a set of multi-stream postfilters, which encompass a collection of denoising autoencoders for the set of MFCC and fundamental frequency parameters of the artificial voice. Results show that the initialization succeeds in lowering the training time of the LSTM networks and achieves better results in enhancing the statistical parametric speech in most cases, when compared to the common random-initialized approach of the networks.", "title": "" } ]
scidocsrr
9299a8cf1708072bc8f5a59f35361a16
Measuring thin-client performance using slow-motion benchmarking
[ { "docid": "014f1369be6a57fb9f6e2f642b3a4926", "text": "VNC is platform-independent – a VNC viewer on one operating system may connect to a VNC server on the same or any other operating system. There are clients and servers for many GUIbased operating systems and for Java. Multiple clients may connect to a VNC server at the same time. Popular uses for this technology include remote technical support and accessing files on one's work computer from one's home computer, or vice versa.", "title": "" } ]
[ { "docid": "a48ada0e9d835f26a484d90c62ffc4cf", "text": "Plastics have become an important part of modern life and are used in different sectors of applications like packaging, building materials, consumer products and much more. Each year about 100 million tons of plastics are produced worldwide. Demand for plastics in India reached about 4.3 million tons in the year 2001-02 and would increase to about 8 million tons in the year 2006-07. Degradation is defined as reduction in the molecular weight of the polymer. The Degradation types are (a).Chain end degradation/de-polymerization (b).Random degradation/reverse of the poly condensation process. Biodegradation is defined as reduction in the molecular weight by naturally occurring microorganisms such as bacteria, fungi, and actinomycetes. That is involved in the degradation of both natural and synthetic plastics. Examples of Standard Testing for Polymer Biodegradability in Various Environments. ASTM D5338: Standard Test Method for Determining the Aerobic Biodegradation of Plastic Materials under Controlled Composting Conditions, ASTM D5210: Standard Test Method for Determining the Anaerobic Biodegradation of Plastic Materials in the Presence of Municipal Sewage Sludge, ASTM D5526: Standard Test Method for Determining Anaerobic Biodegradation of Plastic Materials under Accelerated Landfill Conditions, ASTM D5437: Standard Practice for Weathering of Plastics under Marine Floating Exposure. Plastics are biodegraded, (1).In wild nature by aerobic conditions CO2, water are produced,(2).In sediments & landfills by anaerobic conditions CO2, water, methane are produced, (3).In composts and soil by partial aerobic & anaerobic conditions. This review looks at the technological advancement made in the development of more easily biodegradable plastics and the biodegradation of conventional plastics by microorganisms. Additives, such as pro-oxidants and starch, are applied in synthetic materials to modify and make plastics biodegradable. Reviewing published and ongoing studies on plastic biodegradation, this paper attempts to make conclusions on potentially viable methods to reduce impacts of plastic waste on the", "title": "" }, { "docid": "6c11b5d9ec8a89f843b08fe998de194c", "text": "As large-scale multimodal data are ubiquitous in many real-world applications, learning multimodal representations for efficient retrieval is a fundamental problem. Most existing methods adopt shallow structures to perform multimodal representation learning. Due to a limitation of learning ability of shallow structures, they fail to capture the correlation of multiple modalities. Recently, multimodal deep learning was proposed and had proven its superiority in representing multimodal data due to its high nonlinearity. However, in order to learn compact and accurate representations, how to reduce the redundant information lying in the multimodal representations and incorporate different complexities of different modalities in the deep models is still an open problem. In order to address the aforementioned problem, in this paper we propose a hashing-based orthogonal deep model to learn accurate and compact multimodal representations. The method can better capture the intra-modality and inter-modality correlations to learn accurate representations. 
Meanwhile, in order to make the representations compact, the hashing-based model can generate compact hash codes and the proposed orthogonal structure can reduce the redundant information lying in the codes by imposing orthogonal regularizer on the weighting matrices. We also theoretically prove that, in this case, the learned codes are guaranteed to be approximately orthogonal. Moreover, considering the different characteristics of different modalities, effective representations can be attained with different number of layers for different modalities. Comprehensive experiments on three real-world datasets demonstrate a substantial gain of our method on retrieval tasks compared with existing algorithms.", "title": "" }, { "docid": "ce08ae4dd55bb290900f49010e219513", "text": "BACKGROUND\nCurrent antipsychotics have only a limited effect on 2 core aspects of schizophrenia: negative symptoms and cognitive deficits. Minocycline is a second-generation tetracycline that has a beneficial effect in various neurologic disorders. Recent findings in animal models and human case reports suggest its potential for the treatment of schizophrenia. These findings may be linked to the effect of minocycline on the glutamatergic system, through inhibition of nitric oxide synthase and blocking of nitric oxide-induced neurotoxicity. Other proposed mechanisms of action include effects of minocycline on the dopaminergic system and its inhibition of microglial activation.\n\n\nOBJECTIVE\nTo examine the efficacy of minocycline as an add-on treatment for alleviating negative and cognitive symptoms in early-phase schizophrenia.\n\n\nMETHOD\nA longitudinal double-blind, randomized, placebo-controlled design was used, and patients were followed for 6 months from August 2003 to March 2007. Seventy early-phase schizophrenia patients (according to DSM-IV) were recruited and 54 were randomly allocated in a 2:1 ratio to minocycline 200 mg/d. All patients had been initiated on treatment with an atypical antipsychotic < or = 14 days prior to study entry (risperidone, olanzapine, quetiapine, or clozapine; 200-600 mg/d chlorpromazine-equivalent doses). Clinical, cognitive, and functional assessments were conducted, with the Scale for the Assessment of Negative Symptoms (SANS) as the primary outcome measure.\n\n\nRESULTS\nMinocycline was well tolerated, with few adverse events. It showed a beneficial effect on negative symptoms and general outcome (evident in SANS, Clinical Global Impressions scale). A similar pattern was found for cognitive functioning, mainly in executive functions (working memory, cognitive shifting, and cognitive planning).\n\n\nCONCLUSIONS\nMinocycline treatment was associated with improvement in negative symptoms and executive functioning, both related to frontal-lobe activity. Overall, the findings support the beneficial effect of minocycline add-on therapy in early-phase schizophrenia.\n\n\nTRIAL REGISTRATION\nclinicaltrials.gov Identifier: NCT00733057.", "title": "" }, { "docid": "7c1af982b6ac6aa6df4549bd16c1964c", "text": "This paper deals with the problem of estimating the position of emitters using only direction of arrival information. We propose an improvement of newly developed algorithm for position finding of a stationary emitter called sensitivity analysis. The proposed method uses Taylor series expansion iteratively to enhance the estimation of the emitter location and reduce position finding error. 
Simulation results show that our proposed method greatly improves the accuracy of position finding with respect to the sensitivity analysis method.", "title": "" }, { "docid": "f334f49a1e21e3278c25ca0d63b2ef8a", "text": "We show that if {f_n} is a sequence of uniformly L^p-bounded functions on a measure space, and if f_n → f pointwise a.e., then lim_{n→∞} (‖f_n‖_p^p - ‖f_n - f‖_p^p) = ‖f‖_p^p for all 0 < p < ∞. This result is also generalized in Theorem 2 to some functionals other than the L^p norm, namely ∫ |j(f_n) - j(f_n - f) - j(f)| → 0 for suitable j: C → C and a suitable sequence {f_n}. A brief discussion is given of the usefulness of this result in variational problems.", "title": "" }, { "docid": "b3bd600c56bf65171cfc6c2d62cfb665", "text": "GaN is now providing solid-state power amplifiers of higher efficiency, bandwidth, and power density than could be achieved only a few years ago. Novel circuit topologies combined with GaN's high-voltage capabilities and linearization are allowing GaN high-power amplifiers to simultaneously achieve both linearity and record high efficiency. GaN high-power amplifiers have been produced with more than 100 W of power over multioctave bandwidths and with PAEs of more than 60%. Narrower-band high-power amplifiers have been produced with PAEs of more than 90%.", "title": "" }, { "docid": "0811f0768e8112b40bbcd38625db2526", "text": "The Alfred Mann Foundation is completing development of a coordinated network of BION® microstimulator/sensor (hereinafter implant) that has broad stimulating, sensing and communication capabilities. The network consists of a master control unit (MCU) in communication with a group of BION implants. Each implant is powered by a custom lithium-ion rechargeable 10 mW-hr battery. The charging, discharging, safety, stimulating, sensing, and communication circuits are designed to be highly efficient to minimize energy use and maximize battery life and time between charges. The stimulator can be programmed to deliver pulses at any value in the following ranges: 5 μA to 20 mA in 3.3% constant current steps, 7 μs to 2000 μs in 7 μs pulse width steps, and 1 to 4000 Hz in frequency. The preamp voltage sensor covers the range 10 μV to 1.0 V with bandpass filtering and several forms of data analysis. The implant also contains sensors that can read out pressure, temperature, DC magnetic field, and distance (via a low frequency magnetic field) up to 20 cm between any two BION implants. The MCU contains a microprocessor, user interface, two-way communication system, and a rechargeable battery. The MCU can command and interrogate in excess of 800 BION implants every 10 ms, i.e., 100 times a second.", "title": "" }, { "docid": "8e53336bb4d216d78a6ab79faacb48fc", "text": "Pattern glare is characterised by symptoms of visual perceptual distortions and visual stress on viewing striped patterns. People with migraine or Meares-Irlen syndrome (visual stress) are especially prone to pattern glare. The literature on pattern glare is reviewed, and the goal of this study was to develop clinical norms for the Wilkins and Evans Pattern Glare Test. This comprises three test plates of square wave patterns of spatial frequency 0.5, 3 and 12 cycles per degree (cpd). Patients are shown the 0.5 cpd grating and the number of distortions that are reported in response to a list of questions is recorded. This is repeated for the other patterns. 
People who are prone to pattern glare experience visual perceptual distortions on viewing the 3 cpd grating, and pattern glare can be quantified as either the sum of distortions reported with the 3 cpd pattern or as the difference between the number of distortions with the 3 and 12 cpd gratings, the '3-12 cpd difference'. In study 1, 100 patients consulting an optometrist performed the Pattern Glare Test and the 95th percentile of responses was calculated as the limit of the normal range. The normal range for the number of distortions was found to be <4 on the 3 cpd grating and <2 for the 3-12 cpd difference. Pattern glare was similar in both genders but decreased with age. In study 2, 30 additional participants were given the test in the reverse of the usual testing order and were compared with a sub-group from study 1, matched for age and gender. Participants experienced more distortions with the 12 cpd grating if it was presented after the 3 cpd grating. However, the order did not influence the two key measures of pattern glare. In study 3, 30 further participants who reported a medical diagnosis of migraine were compared with a sub-group of the participants in study 1 who did not report migraine or frequent headaches, matched for age and gender. The migraine group reported more symptoms on viewing all gratings, particularly the 3 cpd grating. The only variable to be significantly different between the groups was the 3-12 cpd difference. In conclusion, people have an abnormal degree of pattern glare if they have a Pattern Glare Test score of >3 on the 3 cpd grating or a score of >1 on the 3-12 cpd difference. The literature suggests that these people are likely to have visual stress in everyday life and may therefore benefit from interventions designed to alleviate visual stress, such as precision tinted lenses.", "title": "" }, { "docid": "8eb6c74d678235a6fd4df755a133115e", "text": "We have demonstrated a 70-nm n-channel tunneling field-effect transistor (TFET) which has a subthreshold swing (SS) of 52.8 mV/dec at room temperature. It is the first experimental result that shows a sub-60-mV/dec SS in the silicon-based TFETs. Based on simulation results, the gate oxide and silicon-on-insulator layer thicknesses were scaled down to 2 and 70 nm, respectively. However, the ON/ OFF current ratio of the TFET was still lower than that of the MOSFET. In order to increase the on current further, the following approaches can be considered: reduction of effective gate oxide thickness, increase in the steepness of the gradient of the source to channel doping profile, and utilization of a lower bandgap channel material", "title": "" }, { "docid": "787979d6c1786f110ff7a47f09b82907", "text": "Imbalance settlement markets are managed by the system operators and provide a mechanism for settling the inevitable discrepancies between contractual agreements and physical delivery. In European power markets, settlements schemes are mainly based on heuristic penalties. These arrangements have disadvantages: First, they do not provide transparency about the cost of the reserve capacity that the system operator may have obtained ahead of time, nor about the cost of the balancing energy that is actually deployed. Second, they can be gamed if market participants use the imbalance settlement as an opportunity for market arbitrage, for example if market participants use balancing energy to avoid higher costs through regular trade on illiquid energy markets. 
Third, current practice hinders the market-based integration of renewable energy and the provision of financial incentives for demand response through rigid penalty rules. In this paper we try to remedy these disadvantages by proposing an imbalance settlement procedure with an incentive compatible cost allocation scheme for reserve capacity and deployed energy. Incentive compatible means that market participants voluntarily and truthfully state their valuation of ancillary services. We show that this approach guarantees revenue sufficiency for the system operator and provides financial incentives for balance responsible parties to keep imbalances close to zero.", "title": "" }, { "docid": "5093e3d152d053a9f3322b34096d3e4e", "text": "To create conversational systems working in actual situations, it is crucial to assume that they interact with multiple agents. In this work, we tackle addressee and response selection for multi-party conversation, in which systems are expected to select whom they address as well as what they say. The key challenge of this task is to jointly model who is talking about what in a previous context. For the joint modeling, we propose two modeling frameworks: 1) static modeling and 2) dynamic modeling. To show benchmark results of our frameworks, we created a multi-party conversation corpus. Our experiments on the dataset show that the recurrent neural network based models of our frameworks robustly predict addressees and responses in conversations with a large number of agents.", "title": "" }, { "docid": "e4ed62511669cb333b1ab97d095fda46", "text": "This paper reports a four-element array tag antenna close to a human body for UHF Radio frequency identification (RFID) applications. The four-element array is based on PIFA grounded by vias, which can enhance the directive gain. The array antenna is fed by a four-port microstrip-line power divider. The input impedance of the power divider is designed to match with that of a Monza® 4 microchip. The parametric analysis of conjugate matching was performed and prototypes were fabricated to verify the simulated results. Experimental tests show that the maximum reading range achieved by an RFID tag equipped with the array antenna achieves about 3.9 m when the tag was mounted on a human body.", "title": "" }, { "docid": "8eca353064d3b510b32c486e5f26c264", "text": "Theoretical control algorithms are developed and an experimental system is described for 6-dof kinesthetic force/moment feedback to a human operator from a remote system. The remote system is a common six-axis slave manipulator with a force/torque sensor, while the haptic interface is a unique, cable-driven, seven-axis, force/moment-reflecting exoskeleton. The exoskeleton is used for input when motion commands are sent to the robot and for output when force/moment wrenches of contact are reflected to the human operator. This system exists at Wright-Patterson AFB. The same techniques are applicable to a virtual environment with physics models and general haptic interfaces.", "title": "" }, { "docid": "8106e11ecb11ffc131a36917a60dce33", "text": "Augmented Reality, Architecture and Ubiquity: Technologies, Theories and Frontiers", "title": "" }, { "docid": "dcd919590e0b6b52ea3a6be7378d5d25", "text": "This work, concerning paraphrase identification task, on one hand contributes to expanding deep learning embeddings to include continuous and discontinuous linguistic phrases. 
On the other hand, it comes up with a new scheme, TF-KLD-KNN, to learn the discriminative weights of words and phrases specific to the paraphrase task, so that a weighted sum of embeddings can represent sentences more effectively. Based on these two innovations we get competitive state-of-the-art performance on paraphrase identification.", "title": "" }, { "docid": "17a475b655134aafde0f49db06bec127", "text": "Estimating the number of persons in a public place provides useful information for video-based surveillance and monitoring applications. In the case of oblique camera setup, counting is either achieved by detecting individuals or by statistically relating values of simple image features (e.g. amount of moving pixels, edge density, etc.) to the number of people. While the methods of the first category exhibit poor accuracy in cases of occlusions, the second category of methods is sensitive to perspective distortions, and requires people to move in order to be counted. In this paper we investigate the possibilities of developing a robust statistical method for people counting. To maximize its applicability scope, we choose, in contrast to the majority of existing methods from this category, not to require prior learning of categories corresponding to different numbers of people. Second, we search for a suitable way of correcting the perspective distortion. Finally, we link the estimation to a confidence value that takes into account the known factors that influence the result. The confidence is then used to refine final results.", "title": "" }, { "docid": "e327e992a6973a91d84573390920c48f", "text": "The research regarding Web information extraction focuses on learning rules to extract some selected information from Web documents. Many proposals are ad hoc and cannot benefit from the advances in machine learning; furthermore, they are likely to fade away as the Web evolves, and their intrinsic assumptions are not satisfied. Some authors have explored transforming Web documents into relational data and then using techniques that got inspiration from inductive logic programming. In theory, such proposals should be easier to adapt as the Web evolves because they build on catalogues of features that can be adapted without changing the proposals themselves. Unfortunately, they are difficult to scale as the number of documents or features increases. In the general field of machine learning, there are propositio-relational proposals that attempt to provide effective and efficient means to learn from relational data using propositional techniques, but they have seldom been explored regarding Web information extraction. In this article, we present a new proposal called Roller: it relies on a search procedure that uses a dynamic flattening technique to explore the context of the nodes that provide the information to be extracted; it is configured with an open catalogue of features, so that it can adapt to the evolution of the Web; it also requires a base learner and a rule scorer, which helps it benefit from the continuous advances in machine learning. 
Our experiments confirm that it outperforms other state-of-the-art proposals in terms of effectiveness and that it is very competitive in terms of efficiency; we have also confirmed that our conclusions are solid from a statistical point of view.", "title": "" }, { "docid": "65e3890edd57a0a6de65b4e38f3cea1c", "text": "This article presents novel results concerning the recovery of signals from undersampled data in the common situation where such signals are not sparse in an orthonormal basis or incoherent dictionary, but in a truly redundant dictionary. This work thus bridges a gap in the literature and shows not only that compressed sensing is viable in this context, but also that accurate recovery is possible via an ℓ1-analysis optimization problem. We introduce a condition on the measurement/sensing matrix, which is a natural generalization of the now well-known restricted isometry property, and which guarantees accurate recovery of signals that are nearly sparse in (possibly) highly overcomplete and coherent dictionaries. This condition imposes no incoherence restriction on the dictionary and our results may be the first of this kind. We discuss practical examples and the implications of our results on those applications, and complement our study by demonstrating the potential of ℓ1-analysis for such problems.", "title": "" }, { "docid": "f840350d14a99f3da40729cfe6d56ef5", "text": "This paper presents a sub-radix-2 redundant architecture to improve the performance of switched-capacitor successive-approximation-register (SAR) analog-to-digital converters (ADCs). The redundancy not only guarantees digitally correctable static nonlinearities of the converter, it also offers means to combat dynamic errors in the conversion process, and thus, accelerating the speed of the SAR architecture. A perturbation-based digital calibration technique is also described that closely couples with the architecture choice to accomplish simultaneous identification of multiple capacitor mismatch errors of the ADC, enabling the downsizing of all sampling capacitors to save power and silicon area. A 12-bit prototype measured a Nyquist 70.1-dB signal-to-noise-plus-distortion ratio (SNDR) and a Nyquist 90.3-dB spurious free dynamic range (SFDR) at 22.5 MS/s, while dissipating 3.0-mW power from a 1.2-V supply and occupying 0.06-mm2 silicon area in a 0.13-μm CMOS process. The figure of merit (FoM) of this ADC is 51.3 fJ/step measured at 22.5 MS/s and 36.7 fJ/step at 45 MS/s.", "title": "" } ]
scidocsrr
2a8be2c15aa2ccd0c22908c8e305952e
Whoo.ly: facilitating information seeking for hyperlocal communities using social media
[ { "docid": "3a4da0cf9f4fdcc1356d25ea1ca38ca4", "text": "Almost all of the existing work on Named Entity Recognition (NER) consists of the following pipeline stages – part-of-speech tagging, segmentation, and named entity type classification. The requirement of hand-labeled training data on these stages makes it very expensive to extend to different domains and entity classes. Even with a large amount of hand-labeled data, existing techniques for NER on informal text, such as social media, perform poorly due to a lack of reliable capitalization, irregular sentence structure and a wide range of vocabulary. In this paper, we address the lack of hand-labeled training data by taking advantage of weak super vision signals. We present our approach in two parts. First, we propose a novel generative model that combines the ideas from Hidden Markov Model (HMM) and n-gram language models into what we call an N-gram Language Markov Model (NLMM). Second, we utilize large-scale weak supervision signals from sources such as Wikipedia titles and the corresponding click counts to estimate parameters in NLMM. Our model is simple and can be implemented without the use of Expectation Maximization or other expensive iterative training techniques. Even with this simple model, our approach to NER on informal text outperforms existing systems trained on formal English and matches state-of-the-art NER systems trained on hand-labeled Twitter messages. Because our model does not require hand-labeled data, we can adapt our system to other domains and named entity classes very easily. We demonstrate the flexibility of our approach by successfully applying it to the different domain of extracting food dishes from restaurant reviews with very little extra work.", "title": "" }, { "docid": "81387b0f93b68e8bd6a56a4fd81477e9", "text": "We analyze microblog posts generated during two recent, concurrent emergency events in North America via Twitter, a popular microblogging service. We focus on communications broadcast by people who were \"on the ground\" during the Oklahoma Grassfires of April 2009 and the Red River Floods that occurred in March and April 2009, and identify information that may contribute to enhancing situational awareness (SA). This work aims to inform next steps for extracting useful, relevant information during emergencies using information extraction (IE) techniques.", "title": "" } ]
[ { "docid": "7e848e98909c69378f624ce7db31dbfa", "text": "Phenotypically identical cells can dramatically vary with respect to behavior during their lifespan and this variation is reflected in their molecular composition such as the transcriptomic landscape. Single-cell transcriptomics using next-generation transcript sequencing (RNA-seq) is now emerging as a powerful tool to profile cell-to-cell variability on a genomic scale. Its application has already greatly impacted our conceptual understanding of diverse biological processes with broad implications for both basic and clinical research. Different single-cell RNA-seq protocols have been introduced and are reviewed here-each one with its own strengths and current limitations. We further provide an overview of the biological questions single-cell RNA-seq has been used to address, the major findings obtained from such studies, and current challenges and expected future developments in this booming field.", "title": "" }, { "docid": "3d8fb085a0470b2c06336642436e9523", "text": "The recent changes in climate have increased the importance of environmental monitoring, making it a topical and highly active research area. This field is based on remote sensing and on wireless sensor networks for gathering data about the environment. Recent advancements, such as the vision of the Internet of Things (IoT), the cloud computing model, and cyber-physical systems, provide support for the transmission and management of huge amounts of data regarding the trends observed in environmental parameters. In this context, the current work presents three different IoT-based wireless sensors for environmental and ambient monitoring: one employing User Datagram Protocol (UDP)-based Wi-Fi communication, one communicating through Wi-Fi and Hypertext Transfer Protocol (HTTP), and a third one using Bluetooth Smart. All of the presented systems provide the possibility of recording data at remote locations and of visualizing them from every device with an Internet connection, enabling the monitoring of geographically large areas. The development details of these systems are described, along with the major differences and similarities between them. The feasibility of the three developed systems for implementing monitoring applications, taking into account their energy autonomy, ease of use, solution complexity, and Internet connectivity facility, was analyzed, and revealed that they make good candidates for IoT-based solutions.", "title": "" }, { "docid": "01ccb35abf3eed71191dc8638e58f257", "text": "In this paper we describe several fault attacks on the Advanced Encryption Standard (AES). First, using optical fault induction attacks as recently publicly presented by Skorobogatov and Anderson [SA], we present an implementation independent fault attack on AES. This attack is able to determine the complete 128-bit secret key of a sealed tamper-proof smartcard by generating 128 faulty cipher texts. Second, we present several implementationdependent fault attacks on AES. These attacks rely on the observation that due to the AES's known timing analysis vulnerability (as pointed out by Koeune and Quisquater [KQ]), any implementation of the AES must ensure a data independent timing behavior for the so called AES's xtime operation. We present fault attacks on AES based on various timing analysis resistant implementations of the xtime-operation. 
Our strongest attack in this direction uses a very liberal fault model and requires only 256 faulty encryptions to determine a 128-bit key.", "title": "" }, { "docid": "13da78e7868baf04fce64ff02690b0f0", "text": "Industrial IoT (IIoT) refers to the application of IoT in industrial management to improve the overall operational efficiency. With IIoT that accelerates the industrial automation process by enrolling thousands of IoT devices, strong security foundations are to be deployed befitting the distributed connectivity and constrained functionalities of the IoT devices. Recent years witnessed severe attacks exploiting the vulnerabilities in the devices of IIoT networks. Moreover, attackers can use the relations among the vulnerabilities to penetrate deep into the network. This paper addresses the security issues in IIoT network because of the vulnerabilities existing in its devices. As graphs are efficient in representing relations among entities, we propose a graphical model representing the vulnerability relations in the IIoT network. This helps to formulate the security issues in the network as graph-theoretic problems. The proposed model acts as a security framework for the risk assessment of the network. Furthermore, we propose a set of risk mitigation strategies to improve the overall security of the network. The strategies include detection and removal of the attack paths with high risk and low hop-length. We also discuss a method to identify the strongly connected vulnerabilities referred to as hot-spots. A use-case is discussed and various security parameters are evaluated. The simulation results with graphs of different sizes and structures are presented for the performance evaluation of the proposed techniques against the changing dynamics of the IIoT networks.", "title": "" }, { "docid": "8709706ffafdadfc2fb9210794dfa782", "text": "The increasing availability and affordability of wireless building and home automation networks has increased interest in residential and commercial building energy management. This interest has been coupled with an increased awareness of the environmental impact of energy generation and usage. Residential appliances and equipment account for 30% of all energy consumption in OECD countries and indirectly contribute to 12% of energy generation related carbon dioxide (CO2) emissions (International Energy Agency, 2003). The International Energy Agency also predicts that electricity usage for residential appliances would grow by 12% between 2000 and 2010, eventually reaching 25% by 2020. These figures highlight the importance of managing energy use in order to improve stewardship of the environment. They also hint at the potential gains that are available through smart consumption strategies targeted at residential and commercial buildings. The challenge is how to achieve this objective without negatively impacting people's standard of living or their productivity. The three primary purposes of building energy management are the reduction/management of building energy use; the reduction of electricity bills while increasing occupant comfort and productivity; and the improvement of environmental stewardship without adversely affecting standards of living. Building energy management systems provide a centralized platform for managing building energy usage. They detect and eliminate waste, and enable the efficient use of electricity resources. 
The use of widely dispersed sensors enables the monitoring of ambient temperature, lighting, room occupancy and other inputs required for efficient management of climate control (heating, ventilation and air conditioning), security and lighting systems. Lighting and HVAC account for 50% of commercial and 40% of residential building electricity expenditure respectively, indicating that efficiency improvements in these two areas can significantly reduce energy expenditure. These savings can be made through two avenues: the first is through the use of energy-efficient lighting and HVAC systems; and the second is through the deployment of energy management systems which utilize real time price information to schedule loads to minimize energy bills. The latter scheme requires an intelligent power grid or smart grid which can provide bidirectional data flows between customers and utility companies. The smart grid is characterized by the incorporation of intelligence and bidirectional flows of information and electricity throughout the power grid. These enhancements promise to revolutionize the grid by enabling customers to not only consume but also supply power.", "title": "" }, { "docid": "d34cc5c09e882c167b3ff273f5c52159", "text": "Competitive pressures are forcing organizations to be flexible. Being responsive to changing environmental conditions is an important factor in determining corporate performance. Earlier research, focusing primarily on IT infrastructure, has shown that organizational flexibility is closely related to IT infrastructure flexibility. Using real-world cases, this paper explores flexibility in the broader context of the IS function. An empirically derived framework for better understanding and managing IS flexibility is developed using grounded theory and content analysis. A process model for managing flexibility is presented; it includes steps for understanding contextual factors, recognizing reasons why flexibility is important, evaluating what needs to be flexible, identifying flexibility categories and stakeholders, diagnosing types of flexibility needed, understanding synergies and tradeoffs between them, and prescribing strategies for proactively managing IS flexibility. Three major flexibility categories, flexibility in IS operations, flexibility in IS systems & services development and deployment, and flexibility in IS management, containing 10 IS flexibility types are identified and described. European Journal of Information Systems (2014) 23, 151–184. doi:10.1057/ejis.2012.53; published online 8 January 2013", "title": "" }, { "docid": "f0bbe4e6d61a808588153c6b5fc843aa", "text": "The development of Information and Communications Technologies (ICT) has affected various fields including the automotive industry. Therefore, vehicle network protocols such as Controller Area Network (CAN), Local Interconnect Network (LIN), and FlexRay have been introduced. Although CAN is the most widely used vehicle network protocol, its security issue is not properly addressed. In this paper, we propose a security gateway, an improved version of existing CAN gateways, to protect CAN from spoofing and DoS attacks. We analyze the sequence of messages based on the driver’s behavior to resist against spoofing attacks and utilize a temporary ID and the SipHash algorithm to resist against DoS attacks. 
For the verification of our proposed method, OMNeT++ is used. The suggested method shows high detection rate and low increase of traffic. Also, analysis of frame drop rate during DoS attack shows that our suggested method can defend DoS attack.", "title": "" }, { "docid": "0024e332c0ce1adee2d29a0d2b4b6408", "text": "Vehicles equipped with intelligent systems designed to prevent accidents, such as collision warning systems (CWSs) or lane-keeping assistance (LKA), are now on the market. The next step in reducing road accidents is to coordinate such vehicles in advance not only to avoid collisions but to improve traffic flow as well. To this end, vehicle-to-infrastructure (V2I) communications are essential to properly manage traffic situations. This paper describes the AUTOPIA approach toward an intelligent traffic management system based on V2I communications. A fuzzy-based control algorithm that takes into account each vehicle's safe and comfortable distance and speed adjustment for collision avoidance and better traffic flow has been developed. The proposed solution was validated by an IEEE-802.11p-based communications study. The entire system showed good performance in testing in real-world scenarios, first by computer simulation and then with real vehicles.", "title": "" }, { "docid": "67fe4b931c2495c6833da493707e58d1", "text": "Alan N. Steinberg Technical Director, Data Fusion ERIM International, Inc. 1101 Wilson Blvd Arlington, VA 22209 (703)528-5250 x4109 [email protected] Christopher L. Bowman Data Fusion and Neural Networks 1643 Hemlock Way Broomfield, CO 80020 (303)469-9828 [email protected] Franklin E. White Director, Program Development SPAWAR Systems Center San Diego, CA 92152 Chair, Data Fusion Group (619) 553-4036 [email protected]", "title": "" }, { "docid": "f7f5a0bedb0cae6f2d9fda528dfffcb9", "text": "This paper focuses on the recognition of Activities of Daily Living (ADL) applying pattern recognition techniques to the data acquired by the accelerometer available in the mobile devices. The recognition of ADL is composed by several stages, including data acquisition, data processing, and artificial intelligence methods. The artificial intelligence methods used are related to pattern recognition, and this study focuses on the use of Artificial Neural Networks (ANN). The data processing includes data cleaning, and the feature extraction techniques to define the inputs for the ANN. Due to the low processing power and memory of the mobile devices, they should be mainly used to acquire the data, applying an ANN previously trained for the identification of the ADL. The main purpose of this paper is to present a new method implemented with ANN for the identification of a defined set of ADL with a reliable accuracy. This paper also presents a comparison of different types of ANN in order to choose the type for the implementation of the final method. Results of this research probes that the best accuracies are achieved with Deep Learning techniques with an accuracy higher than 80%.", "title": "" }, { "docid": "66876eb3710afda075b62b915a2e6032", "text": "In this paper we analyze the CS Principles project, a proposed Advanced Placement course, by focusing on the second pilot that took place in 2011-2012. In a previous publication the first pilot of the course was explained, but not in a context related to relevant educational research and philosophy. In this paper we analyze the content and the pedagogical approaches used in the second pilot of the project. 
We include information about the third pilot being conducted in 2012-2013 and the portfolio exam that is part of that pilot. Both the second and third pilots provide evidence that the CS Principles course is succeeding in changing how computer science is taught and to whom it is taught.", "title": "" }, { "docid": "b08023089abd684d26fabefb038cc9fa", "text": "IMSI catching is a problem on all generations of mobile telecommunication networks, i.e., 2G (GSM, GPRS), 3G (HDSPA, EDGE, UMTS) and 4G (LTE, LTE+). Currently, the SIM card of a mobile phone has to reveal its identity over an insecure plaintext transmission, before encryption is enabled. This identifier (the IMSI) can be intercepted by adversaries that mount a passive or active attack. Such identity exposure attacks are commonly referred to as 'IMSI catching'. Since the IMSI is uniquely identifying, unauthorized exposure can lead to various location privacy attacks. We propose a solution, which essentially replaces the IMSIs with changing pseudonyms that are only identifiable by the home network of the SIM's own network provider. Consequently, these pseudonyms are unlinkable by intermediate network providers and malicious adversaries, and therefore mitigate both passive and active attacks, which we also formally verified using ProVerif. Our solution is compatible with the current specifications of the mobile standards and therefore requires no change in the infrastructure or any of the already massively deployed network equipment. The proposed method only requires limited changes to the SIM and the authentication server, both of which are under control of the user's network provider. Therefore, any individual (virtual) provider that distributes SIM cards and controls its own authentication server can deploy a more privacy friendly mobile network that is resilient against IMSI catching attacks.", "title": "" }, { "docid": "700d3e2cb64624df33ef411215d073ab", "text": "A novel type of learning machine called support vector machine (SVM) has been receiving increasing interest in areas ranging from its original application in pattern recognition to other applications such as regression estimation due to its remarkable generalization performance. This paper deals with the application of SVM in financial time series forecasting. The feasibility of applying SVM in financial forecasting is first examined by comparing it with the multilayer back-propagation (BP) neural network and the regularized radial basis function (RBF) neural network. The variability in performance of SVM with respect to the free parameters is investigated experimentally. Adaptive parameters are then proposed by incorporating the nonstationarity of financial time series into SVM. Five real futures contracts collated from the Chicago Mercantile Market are used as the data sets. The simulation shows that among the three methods, SVM outperforms the BP neural network in financial forecasting, and there are comparable generalization performance between SVM and the regularized RBF neural network. Furthermore, the free parameters of SVM have a great effect on the generalization performance. 
SVM with adaptive parameters can both achieve higher generalization performance and use fewer support vectors than the standard SVM in financial forecasting.", "title": "" }, { "docid": "70c33dda7076e182ab2440e1f37186f7", "text": "A loss of subchannel orthogonality due to timevariant multipath channels in orthogonal frequency-division multiplexing (OFDM) systems leads to interchannel interference (ICI) which increases the error floor in proportion to the Doppler frequency. In this paper, a simple frequency-domain equalization technique which can compensate for the effect of ICI in a multipath fading channel is proposed. In this technique, the equalization of the received OFDM signal is achieved by using the assumption that the channel impulse response (CIR) varies in a linear fashion during a block period and by compensating for the ICI terms that significantly affect the bit-error rate (BER) performance.", "title": "" }, { "docid": "f1f574734a9a3ba579067e3ef7ce9649", "text": "This paper presents an integrated control approach for autonomous driving comprising a corridor path planner that determines constraints on vehicle position, and a linear time-varying model predictive controller combining path planning and tracking in a road-aligned coordinate frame. The capabilities of the approach are illustrated in obstacle-free curved road-profile tracking, in an application coupling adaptive cruise control (ACC) with obstacle avoidance (OA), and in a typical driving maneuver on highways. The vehicle is modeled as a nonlinear dynamic bicycle model with throttle, brake pedal position, and steering angle as control inputs. Proximity measurements are assumed to be available within a given range field surrounding the vehicle. The proposed general feedback control architecture includes an estimator design for fusion of database information (maps), exteroceptive as well as proprioceptive measurements, a geometric corridor planner based on graph theory for the avoidance of multiple, potentially dynamically moving objects, and a spatial-based predictive controller. Switching rules for transitioning between four different driving modes, i.e., ACC, OA, obstacle-free road tracking (RT), and controlled braking (Brake), are discussed. The proposed method is evaluated on test cases, including curved and highway two-lane road tracks with static as well as moving obstacles.", "title": "" }, { "docid": "cfa58ab168beb2d52fe6c2c47488e93a", "text": "In this paper we present our approach to automatically identify the subjectivity, polarity and irony of Italian Tweets. Our system which reaches and outperforms the state of the art in Italian is well adapted for different domains since it uses abstract word features instead of bag of words. We also present experiments carried out to study how Italian Sentiment Analysis systems react to domain changes. We show that bag of words approaches commonly used in Sentiment Analysis do not adapt well to domain changes.", "title": "" }, { "docid": "22654d2ed4c921c7bceb22ce9f9dc892", "text": "xv", "title": "" }, { "docid": "8ddfa95b1300959ab5e84a0b66dac593", "text": "Do you need the book of Network Science and Cybersecurity pdf with ISBN of 9781461475965? You will be glad to know that right now Network Science and Cybersecurity pdf is available on our book collections. This Network Science and Cybersecurity comes PDF and EPUB document format. If you want to get Network Science and Cybersecurity pdf eBook copy, you can download the book copy here. 
We think Network Science and Cybersecurity has an excellent writing style that makes it easy to comprehend.", "title": "" }, { "docid": "181a3d68fd5b5afc3527393fc3b276f9", "text": "Updating inference in response to new evidence is a fundamental challenge in artificial intelligence. Many real problems require large probabilistic graphical models, containing possibly millions of interdependent variables. For such large models, jointly updating the most likely (i.e., MAP) configuration of the variables each time new evidence is encountered can be infeasible, even if inference is tractable. In this paper, we introduce budgeted online collective inference, in which the MAP configuration of a graphical model is updated efficiently by revising the assignments to a subset of the variables while holding others fixed. The goal is to selectively update certain variables without sacrificing quality with respect to full inference. To formalize the consequences of partially updating inference, we introduce the concept of inference regret. We derive inference regret bounds for a class of graphical models with strongly-convex free energies. These theoretical insights, combined with a thorough analysis of the optimization solver, motivate new approximate methods for efficiently updating the variable assignments under a budget constraint. In experiments, we demonstrate that our algorithms can reduce inference time by 65% with accuracy comparable to full inference.", "title": "" }, { "docid": "0683dbfa548d90b1fcbd3d793d194e6c", "text": "Ayurvedic medicine is an ancient Indian form of healing. It is gaining popularity as part of the growing interest in New Age spirituality and in complementary and alternative medicine (CAM). There is no cure for asthma in conventional medical science. Ayurvedic medicines can be a potential and effective alternative for the treatment of bronchial asthma. Ayurvedic medicines are used for the treatment of diseases globally. The present study was a review of the management of Tamaka-Shwasa based on Ayurvedic drugs, including respiratory tonics and naturally occurring bronchodilators and immune-modulators. The study concluded that a systematic combination of herbal and allopathic medicines is required for the management of asthma.", "title": "" } ]
scidocsrr
80c4f4c108fd6c075a1d8e50ee7b0fb8
Software-Defined and Virtualized Future Mobile and Wireless Networks: A Survey
[ { "docid": "83355e7d2db67e42ec86f81909cfe8c1", "text": "everal protocols for routing and forwarding in Wireless Mesh Networks (WMN) have been proposed, such as AODV, OLSR or B.A.T.M.A.N. However, providing support for e.g. flow-based routing where flows of one source take different paths through the network is hard to implement in a unified way using traditional routing protocols. OpenFlow is an emerging technology which makes network elements such as routers or switches programmable via a standardized interface. By using virtualization and flow-based routing, OpenFlow enables a rapid deployment of novel packet forwarding and routing algorithms, focusing on fixed networks. We propose an architecture that integrates OpenFlow with WMNs and provides such flow-based routing and forwarding capabilities. To demonstrate the feasibility of our OpenFlow based approach, we have implemented a simple solution to solve the problem of client mobility in a WMN which handles the fast migration of client addresses (e.g. IP addresses) between Mesh Access Points and the interaction with re-routing without the need for tunneling. Measurements from a real mesh testbed (KAUMesh) demonstrate the feasibility of our approach based on the evaluation of forwarding performance, control traffic and rule activation time.", "title": "" }, { "docid": "4d66a85651a78bfd4f7aba290c21f9a7", "text": "Mobile carrier networks follow an architecture where network elements and their interfaces are defined in detail through standardization, but provide limited ways to develop new network features once deployed. In recent years we have witnessed rapid growth in over-the-top mobile applications and a 10-fold increase in subscriber traffic while ground-breaking network innovation took a back seat. We argue that carrier networks can benefit from advances in computer science and pertinent technology trends by incorporating a new way of thinking in their current toolbox. This article introduces a blueprint for implementing current as well as future network architectures based on a software-defined networking approach. Our architecture enables operators to capitalize on a flow-based forwarding model and fosters a rich environment for innovation inside the mobile network. In this article, we validate this concept in our wireless network research laboratory, demonstrate the programmability and flexibility of the architecture, and provide implementation and experimentation details.", "title": "" } ]
[ { "docid": "5c3358aa3d9a931ba7c9186b1f5a2362", "text": "Compared with word-level and sentence-level convolutional neural networks (ConvNets), the character-level ConvNets has a better applicability for misspellings and typos input. Due to this, recent researches for text classification mainly focus on character-level ConvNets. However, while the majority of these researches employ English corpus for the character-level text classification, few researches have been done using Chinese corpus. This research hopes to bridge this gap, exploring character-level ConvNets for Chinese corpus test classification. We have constructed a large-scale Chinese dataset, and the result shows that character-level ConvNets works better on Chinese character dataset than its corresponding pinyin format dataset, which is the general solution in previous researches. This is the first time that character-level ConvNets has been applied to Chinese character dataset for text classification problem.", "title": "" }, { "docid": "8147143579de86a5eeb668037c2b8c5d", "text": "In this paper we propose that the conventional dichotomy between exemplar-based and prototype-based models of concept learning is helpfully viewed as an instance of what is known in the statistical learning literature as the bias/variance tradeoff. The bias/variance tradeoff can be thought of as a sliding scale that modulates how closely any learning procedure adheres to its training data. At one end of the scale (high variance), models can entertain very complex hypotheses, allowing them to fit a wide variety of data very closely--but as a result can generalize poorly, a phenomenon called overfitting. At the other end of the scale (high bias), models make relatively simple and inflexible assumptions, and as a result may fit the data poorly, called underfitting. Exemplar and prototype models of category formation are at opposite ends of this scale: prototype models are highly biased, in that they assume a simple, standard conceptual form (the prototype), while exemplar models have very little bias but high variance, allowing them to fit virtually any combination of training data. We investigated human learners' position on this spectrum by confronting them with category structures at variable levels of intrinsic complexity, ranging from simple prototype-like categories to much more complex multimodal ones. The results show that human learners adopt an intermediate point on the bias/variance continuum, inconsistent with either of the poles occupied by most conventional approaches. We present a simple model that adjusts (regularizes) the complexity of its hypotheses in order to suit the training data, which fits the experimental data better than representative exemplar and prototype models.", "title": "" }, { "docid": "f63da8e7659e711bcb7a148ea12a11f2", "text": "We have presented two CCA-based approaches for data fusion and group analysis of biomedical imaging data and demonstrated their utility on fMRI, sMRI, and EEG data. The results show that CCA and M-CCA are powerful tools that naturally allow the analysis of multiple data sets. The data fusion and group analysis methods presented are completely data driven, and use simple linear mixing models to decompose the data into their latent components. Since CCA and M-CCA are based on second-order statistics they provide a relatively lessstrained solution as compared to methods based on higherorder statistics such as ICA. 
While this can be advantageous, the flexibility also tends to lead to solutions that are less sparse than those obtained using assumptions of non-Gaussianity-in particular superGaussianity-at times making the results more difficult to interpret. Thus, it is important to note that both approaches provide complementary perspectives, and hence it is beneficial to study the data using different analysis techniques.", "title": "" }, { "docid": "1a9670cc170343073fba2a5820619120", "text": "Occlusions present a great challenge for pedestrian detection in practical applications. In this paper, we propose a novel approach to simultaneous pedestrian detection and occlusion estimation by regressing two bounding boxes to localize the full body as well as the visible part of a pedestrian respectively. For this purpose, we learn a deep convolutional neural network (CNN) consisting of two branches, one for full body estimation and the other for visible part estimation. The two branches are treated differently during training such that they are learned to produce complementary outputs which can be further fused to improve detection performance. The full body estimation branch is trained to regress full body regions for positive pedestrian proposals, while the visible part estimation branch is trained to regress visible part regions for both positive and negative pedestrian proposals. The visible part region of a negative pedestrian proposal is forced to shrink to its center. In addition, we introduce a new criterion for selecting positive training examples, which contributes largely to heavily occluded pedestrian detection. We validate the effectiveness of the proposed bi-box regression approach on the Caltech and CityPersons datasets. Experimental results show that our approach achieves promising performance for detecting both non-occluded and occluded pedestrians, especially heavily occluded ones.", "title": "" }, { "docid": "209203c297898a2251cfd62bdfc37296", "text": "Evolutionary computation uses computational models of evolutionary processes as key elements in the design and implementation of computerbased problem solving systems. In this paper we provide an overview of evolutionary computation, and describe several evolutionary algorithms that are currently of interest. Important similarities and differences are noted, which lead to a discussion of important issues that need to be resolved, and items for future research.", "title": "" }, { "docid": "aecaa8c028c4d1098d44d755344ad2fc", "text": "It is known that training deep neural networks, in particular, deep convolutional networks, with aggressively reduced numerical precision is challenging. The stochastic gradient descent algorithm becomes unstable in the presence of noisy gradient updates resulting from arithmetic with limited numeric precision. One of the wellaccepted solutions facilitating the training of low precision fixed point networks is stochastic rounding. However, to the best of our knowledge, the source of the instability in training neural networks with noisy gradient updates has not been well investigated. This work is an attempt to draw a theoretical connection between low numerical precision and training algorithm stability. 
In doing so, we will also propose and verify through experiments methods that are able to improve the training performance of deep convolutional networks in fixed point.", "title": "" }, { "docid": "c45b962006b2bb13ab57fe5d643e2ca6", "text": "Physical activity has a positive impact on people's well-being, and it may also decrease the occurrence of chronic diseases. Activity recognition with wearable sensors can provide feedback to the user about his/her lifestyle regarding physical activity and sports, and thus, promote a more active lifestyle. So far, activity recognition has mostly been studied in supervised laboratory settings. The aim of this study was to examine how well the daily activities and sports performed by the subjects in unsupervised settings can be recognized compared to supervised settings. The activities were recognized by using a hybrid classifier combining a tree structure containing a priori knowledge and artificial neural networks, and also by using three reference classifiers. Activity data were collected for 68 h from 12 subjects, out of which the activity was supervised for 21 h and unsupervised for 47 h. Activities were recognized based on signal features from 3-D accelerometers on hip and wrist and GPS information. The activities included lying down, sitting and standing, walking, running, cycling with an exercise bike, rowing with a rowing machine, playing football, Nordic walking, and cycling with a regular bike. The total accuracy of the activity recognition using both supervised and unsupervised data was 89% that was only 1% unit lower than the accuracy of activity recognition using only supervised data. However, the accuracy decreased by 17% unit when only supervised data were used for training and only unsupervised data for validation, which emphasizes the need for out-of-laboratory data in the development of activity-recognition systems. The results support a vision of recognizing a wider spectrum, and more complex activities in real life settings.", "title": "" }, { "docid": "c330e97f4c7c3478670e55991ac2293c", "text": "The MoveLab was an educational research intervention centering on a community of African American and Hispanic girls as they began to transform their self-concept in relation to computing and dance while creating technology enhanced dance performances. Students within underrepresented populations in computing often do not perceive the identity of a computer scientist as aligning with their interests or value system, leading to rejection of opportunities to participate within the discipline. To engage diverse populations in computing, we need to better understand how to support students in navigating conflicts between identities with computing and their personal interest and values. Using the construct of self-concept, we observed students in the workshop creating both congruence and dissension between their self-concept and computing. We found that creating multiple roles for participation, fostering a socially supportive community, and integrating student values within the curriculum led to students forming congruence between their self-concept and the disciplines of computing and dance.", "title": "" }, { "docid": "f7792dbc29356711c2170d5140030142", "text": "A C-Ku band GaN monolithic microwave integrated circuit (MMIC) transmitter/receiver (T/R) frontend module with a novel RF interface structure has been successfully developed by using multilayer ceramics technology. 
This interface improves the insertion loss with wideband characteristics operating up to 40 GHz. The module contains a GaN power amplifier (PA) with output power higher than 10 W over 6–18 GHz and a GaN low-noise amplifier (LNA) with a gain of 15.9 dB over 3.2–20.4 GHz and noise figure (NF) of 2.3–3.7 dB over 4–18 GHz. A fabricated T/R module occupying only 12 × 30 mm2 delivers an output power of 10 W up to the Ku-band. To our knowledge, this is the first demonstration of a C-Ku band T/R frontend module using GaN MMICs with wide bandwidth, 10W output power, and small size operating up to the Ku-band.", "title": "" }, { "docid": "01c6476bfa806af6c35898199ad9c169", "text": "This paper presents nonlinear tracking control systems for a quadrotor unmanned aerial vehicle under the influence of uncertainties. Assuming that there exist unstructured disturbances in the translational dynamics and the attitude dynamics, a geometric nonlinear adaptive controller is developed directly on the special Euclidean group. In particular, a new form of an adaptive control term is proposed to guarantee stability while compensating the effects of uncertainties in quadrotor dynamics. A rigorous mathematical stability proof is given. The desirable features are illustrated by numerical example and experimental results of aggressive maneuvers.", "title": "" }, { "docid": "262c11ab9f78e5b3f43a31ad22cf23c5", "text": "Responding to threats in the environment is crucial for survival. Certain types of threat produce defensive responses without necessitating previous experience and are considered innate, whereas other threats are learned by experiencing aversive consequences. Two important innate threats are whether an encountered stimulus is a member of the same species (social threat) and whether a stimulus suddenly appears proximal to the body (proximal threat). These threats are manifested early in human development and robustly elicit defensive responses. Learned threat, on the other hand, enables adaptation to threats in the environment throughout the life span. A well-studied form of learned threat is fear conditioning, during which a neutral stimulus acquires the ability to eliciting defensive responses through pairings with an aversive stimulus. If innate threats can facilitate fear conditioning, and whether different types of innate threats can enhance each other, is largely unknown. We developed an immersive virtual reality paradigm to test how innate social and proximal threats are related to each other and how they influence conditioned fear. Skin conductance responses were used to index the autonomic component of the defensive response. We found that social threat modulates proximal threat, but that neither proximal nor social threat modulates conditioned fear. Our results suggest that distinct processes regulate autonomic activity in response to proximal and social threat on the one hand, and conditioned fear on the other.", "title": "" }, { "docid": "2f8f1f2db01eeb9a47591e77bb1c835a", "text": "We present an input method which enables complex hands-free interaction through 3d handwriting recognition. Users can write text in the air as if they were using an imaginary blackboard. Motion sensing is done wirelessly by accelerometers and gyroscopes which are attached to the back of the hand. We propose a two-stage approach for spotting and recognition of handwriting gestures. The spotting stage uses a Support Vector Machine to identify data segments which contain handwriting. 
The recognition stage uses Hidden Markov Models (HMM) to generate the text representation from the motion sensor data. Individual characters are modeled by HMMs and concatenated to word models. Our system can continuously recognize arbitrary sentences, based on a freely definable vocabulary with over 8000 words. A statistical language model is used to enhance recognition performance and restrict the search space. We report the results from a nine-user experiment on sentence recognition for person dependent and person independent setups on 3d-space handwriting data. For the person independent setup, a word error rate of 11% is achieved, for the person dependent setup 3% are achieved. We evaluate the spotting algorithm in a second experiment on a realistic dataset including everyday activities and achieve a sample based recall of 99% and a precision of 25%. We show that additional filtering in the recognition stage can detect up to 99% of the false positive segments.", "title": "" }, { "docid": "ec0d1addabab76d9c2bd044f0bfe3153", "text": "Much of scientific progress stems from previously published findings, but searching through the vast sea of scientific publications is difficult. We often rely on metrics of scholarly authority to find the prominent authors but these authority indices do not differentiate authority based on research topics. We present Latent Topical-Authority Indexing (LTAI) for jointly modeling the topics, citations, and topical authority in a corpus of academic papers. Compared to previous models, LTAI differs in two main aspects. First, it explicitly models the generative process of the citations, rather than treating the citations as given. Second, it models each author’s influence on citations of a paper based on the topics of the cited papers, as well as the citing papers. We fit LTAI into four academic corpora: CORA, Arxiv Physics, PNAS, and Citeseer. We compare the performance of LTAI against various baselines, starting with the latent Dirichlet allocation, to the more advanced models including author-link topic model and dynamic author citation topic model. The results show that LTAI achieves improved accuracy over other similar models when predicting words, citations and authors of publications.", "title": "" }, { "docid": "76c7b343d2f03b64146a0d6ed2d60668", "text": "Three important stages within automated 3D object reconstruction via multi-image convergent photogrammetry are image pre-processing, interest point detection for feature-based matching and triangular mesh generation. This paper investigates approaches to each of these. The Wallis filter is initially examined as a candidate image pre-processor to enhance the performance of the FAST interest point operator. The FAST algorithm is then evaluated as a potential means to enhance the speed, robustness and accuracy of interest point detection for subsequent feature-based matching. Finally, the Poisson Surface Reconstruction algorithm for wireframe mesh generation of objects with potentially complex 3D surface geometry is evaluated. The outcomes of the investigation indicate that the Wallis filter, FAST interest operator and Poisson Surface Reconstruction algorithms present distinct benefits in the context of automated image-based object reconstruction. 
The reported investigation has advanced the development of an automatic procedure for high-accuracy point cloud generation in multi-image networks, where robust orientation and 3D point determination has enabled surface measurement and visualization to be implemented within a single software system.", "title": "" }, { "docid": "f8ba12d3fd6ebf65429a2ce5f5143dbd", "text": "The contour-guided color palette (CCP) is proposed for robust image segmentation. It efficiently integrates contour and color cues of an image. To find representative colors of an image, color samples along long contours between regions, similar in spirit to machine learning methodology that focus on samples near decision boundaries, are collected followed by the mean-shift (MS) algorithm in the sampled color space to achieve an image-dependent color palette. This color palette provides a preliminary segmentation in the spatial domain, which is further fine-tuned by post-processing techniques such as leakage avoidance, fake boundary removal, and small region mergence. Segmentation performances of CCP and MS are compared and analyzed. While CCP offers an acceptable standalone segmentation result, it can be further integrated into the framework of layered spectral segmentation to produce a more robust segmentation. The superior performance of CCP-based segmentation algorithm is demonstrated by experiments on the Berkeley Segmentation Dataset.", "title": "" }, { "docid": "a21f04b6c8af0b38b3b41f79f2661fa6", "text": "While Enterprise Architecture Management is an established and widely discussed field of interest in the context of information systems research, we identify a lack of work regarding quality assessment of enterprise architecture models in general and frameworks or methods on that account in particular. By analyzing related work by dint of a literature review in a design science research setting, we provide twofold contributions. We (i) suggest an Enterprise Architecture Model Quality Framework (EAQF) and (ii) apply it to a real world scenario. Keywords—Enterprise Architecture, model quality, quality framework, EA modeling.", "title": "" }, { "docid": "34ba1323c4975a566f53e2873231e6ad", "text": "This paper describes the motivation, the realization, and the experience of incorporating simulation and hardware implementation into teaching computer organization and architecture to computer science students. It demonstrates that learning by doing has helped students to truly understand how a computer is constructed and how it really works in practice. Correlated with textbook material, a set of simulation and implementation projects were created on the basis of the work that students had done in previous homework and laboratory activities. Students can thus use these designs as building blocks for completing more complex projects at a later time. The projects cover a wide range of topics from simple adders up to ALU's and CPU's. These processors operate in a virtual manner on certain short assembly-language programs. Specifically, this paper shares the experience of using simulation tools (Alterareg Quartus II) and reconfigurable hardware prototyping platforms (Alterareg UP2 development boards)", "title": "" }, { "docid": "8e1befc4318a2dd32d59acac49e2374c", "text": "The use of Social Network Sites (SNS) is increasing nowadays especially by the younger generations. The availability of SNS allows users to express their interests, feelings and share daily routine. 
Many researchers prove that using user-generated content (UGC) in a correct way may help determine people's mental health levels. Mining the UGC could help to predict the mental health levels and depression. Depression is a serious medical illness, which interferes most with the ability to work, study, eat, sleep and having fun. However, from the user profile in SNS, we can collect all the information that relates to person's mood, and negativism. In this research, our aim is to investigate how SNS user's posts can help classify users according to mental health levels. We propose a system that uses SNS as a source of data and screening tool to classify the user using artificial intelligence according to the UGC on SNS. We created a model that classify the UGC using two different classifiers: Support Vector Machine (SVM), and Naïve Bayes.", "title": "" }, { "docid": "601488a8e576d465a0bddd65a937c5c8", "text": "Human activity recognition is an area of growing interest facilitated by the current revolution in body-worn sensors. Activity recognition allows applications to construct activity profiles for each subject which could be used effectively for healthcare and safety applications. Automated human activity recognition systems face several challenges such as number of sensors, sensor precision, gait style differences, and others. This work proposes a machine learning system to automatically recognise human activities based on a single body-worn accelerometer. The in-house collected dataset contains 3D acceleration of 50 subjects performing 10 different activities. The dataset was produced to ensure robustness and prevent subject-biased results. The feature vector is derived from simple statistical features. The proposed method benefits from RGB-to-YIQ colour space transform as kernel to transform the feature vector into more discriminable features. The classification technique is based on an adaptive boosting ensemble classifier. The proposed system shows consistent classification performance up to 95% accuracy among the 50 subjects.", "title": "" }, { "docid": "6c3f320eda59626bedb2aad4e527c196", "text": "Though research on the Semantic Web has progressed at a steady pace, its promise has yet to be realized. One major difficulty is that, by its very nature, the Semantic Web is a large, uncensored system to which anyone may contribute. This raises the question of how much credence to give each source. We cannot expect each user to know the trustworthiness of each source, nor would we want to assign top-down or global credibility values due to the subjective nature of trust. We tackle this problem by employing a web of trust, in which each user provides personal trust values for a small number of other users. We compose these trusts to compute the trust a user should place in any other user in the network. A user is not assigned a single trust rank. Instead, different users may have different trust values for the same user. We define properties for combination functions which merge such trusts, and define a class of functions for which merging may be done locally while maintaining these properties. We give examples of specific functions and apply them to data from Epinions and our BibServ bibliography server. Experiments confirm that the methods are robust to noise, and do not put unreasonable expectations on users. We hope that these methods will help move the Semantic Web closer to fulfilling its promise.", "title": "" } ]
scidocsrr
5c834f5f0c836067419cae60d9fbdede
Stance Classification in Rumours as a Sequential Task Exploiting the Tree Structure of Social Media Conversations
[ { "docid": "4ac3c3fb712a1121e0990078010fe4b0", "text": "1.1 Introduction Relational data has two characteristics: first, statistical dependencies exist between the entities we wish to model, and second, each entity often has a rich set of features that can aid classification. For example, when classifying Web documents, the page's text provides much information about the class label, but hyperlinks define a relationship between pages that can improve classification [Taskar et al., 2002]. Graphical models are a natural formalism for exploiting the dependence structure among entities. Traditionally, graphical models have been used to represent the joint probability distribution p(y, x), where the variables y represent the attributes of the entities that we wish to predict, and the input variables x represent our observed knowledge about the entities. But modeling the joint distribution can lead to difficulties when using the rich local features that can occur in relational data, because it requires modeling the distribution p(x), which can include complex dependencies. Modeling these dependencies among inputs can lead to intractable models, but ignoring them can lead to reduced performance. A solution to this problem is to directly model the conditional distribution p(y|x), which is sufficient for classification. This is the approach taken by conditional random fields [Lafferty et al., 2001]. A conditional random field is simply a conditional distribution p(y|x) with an associated graphical structure. Because the model is", "title": "" }, { "docid": "e4dd72a52d4961f8d4d8ee9b5b40d821", "text": "Social media users spend several hours a day to read, post and search for news on microblogging platforms. Social media is becoming a key means for discovering news. However, verifying the trustworthiness of this information is becoming even more challenging. In this study, we attempt to address the problem of rumor detection and belief investigation on Twitter. Our definition of rumor is an unverifiable statement, which spreads misinformation or disinformation. We adopt a supervised rumors classification task using the standard dataset. By employing the Tweet Latent Vector (TLV) feature, which creates a 100-d vector representative of each tweet, we increased the rumor retrieval task precision up to 0.972. We also introduce the belief score and study the belief change among the rumor posters between 2010 and 2016.", "title": "" }, { "docid": "7641f8f3ed2afd0c16665b44c1216e79", "text": "In this article we explore the behavior of Twitter users under an emergency situation. In particular, we analyze the activity related to the 2010 earthquake in Chile and characterize Twitter in the hours and days following this disaster. Furthermore, we perform a preliminary study of certain social phenomenons, such as the dissemination of false rumors and confirmed news. We analyze how this information propagated through the Twitter network, with the purpose of assessing the reliability of Twitter as an information source under extreme circumstances. Our analysis shows that the propagation of tweets that correspond to rumors differs from tweets that spread news because rumors tend to be questioned more than news by the Twitter community. 
This result shows that it is possible to detect rumors by using aggregate analysis on tweets.", "title": "" }, { "docid": "f2478e4b1156e112f84adbc24a649d04", "text": "Community Question Answering (cQA) provides new and interesting research directions to the traditional Question Answering (QA) field, e.g., the exploitation of the interaction between users and the structure of related posts. In this context, we organized SemEval-2015 Task 3 on Answer Selection in cQA, which included two subtasks: (a) classifying answers as good, bad, or potentially relevant with respect to the question, and (b) answering a YES/NO question with yes, no, or unsure, based on the list of all answers. We set subtask A for Arabic and English on two relatively different cQA domains, i.e., the Qatar Living website for English, and a Quran-related website for Arabic. We used crowdsourcing on Amazon Mechanical Turk to label a large English training dataset, which we released to the research community. Thirteen teams participated in the challenge with a total of 61 submissions: 24 primary and 37 contrastive. The best systems achieved an official score (macro-averaged F1) of 57.19 and 63.7 for the English subtasks A and B, and 78.55 for the Arabic subtask A.", "title": "" } ]
[ { "docid": "bdadf0088654060b3f1c749ead0eea6e", "text": "This article gives an introduction and overview of the field of pervasive gaming, an emerging genre in which traditional, real-world games are augmented with computing functionality, or, depending on the perspective, purely virtual computer entertainment is brought back to the real world.The field of pervasive games is diverse in the approaches and technologies used to create new and exciting gaming experiences that profit by the blend of real and virtual game elements. We explicitly look at the pervasive gaming sub-genres of smart toys, affective games, tabletop games, location-aware games, and augmented reality games, and discuss them in terms of their benefits and critical issues, as well as the relevant technology base.", "title": "" }, { "docid": "9bdee31e49213cd33d157b61ea788230", "text": "Situational understanding (SU) requires a combination of insight — the ability to accurately perceive an existing situation — and foresight — the ability to anticipate how an existing situation may develop in the future. SU involves information fusion as well as model representation and inference. Commonly, heterogenous data sources must be exploited in the fusion process: often including both hard and soft data products. In a coalition context, data and processing resources will also be distributed and subjected to restrictions on information sharing. It will often be necessary for a human to be in the loop in SU processes, to provide key input and guidance, and to interpret outputs in a way that necessitates a degree of transparency in the processing: systems cannot be “black boxes”. In this paper, we characterize the Coalition Situational Understanding (CSU) problem in terms of fusion, temporal, distributed, and human requirements. There is currently significant interest in deep learning (DL) approaches for processing both hard and soft data. We analyze the state-of-the-art in DL in relation to these requirements for CSU, and identify areas where there is currently considerable promise, and key gaps.", "title": "" }, { "docid": "9592fc0ec54a5216562478414dc68eb4", "text": "We consider the problem of finding the best arm in a stochastic multi-armed bandit game. The regret of a forecaster is here defined by the gap between the mean reward of the optimal arm and the mean reward of the ultimately chosen arm. We propose a highly exploring UCB policy and a new algorithm based on successive rejects. We show that these algorithms are essentially optimal since their regret decreases exponentially at a rate which is, up to a logarithmic factor, the best possible. However, while the UCB policy needs the tuning of a parameter depending on the unobservable hardness of the task, the successive rejects policy benefits from being parameter-free, and also independent of the scaling of the rewards. As a by-product of our analysis, we show that identifying the best arm (when it is unique) requires a number of samples of order (up to a log(K) factor) ∑ i 1/∆ 2 i , where the sum is on the suboptimal arms and ∆i represents the difference between the mean reward of the best arm and the one of arm i. This generalizes the well-known fact that one needs of order of 1/∆ samples to differentiate the means of two distributions with gap ∆.", "title": "" }, { "docid": "1ca692464d5d7f4e61647bf728941519", "text": "During natural vision, the entire visual field is stimulated by images rich in spatiotemporal structure. 
Although many visual system studies restrict stimuli to the classical receptive field (CRF), it is known that costimulation of the CRF and the surrounding nonclassical receptive field (nCRF) increases neuronal response sparseness. The cellular and network mechanisms underlying increased response sparseness remain largely unexplored. Here we show that combined CRF + nCRF stimulation increases the sparseness, reliability, and precision of spiking and membrane potential responses in classical regular spiking (RS(C)) pyramidal neurons of cat primary visual cortex. Conversely, fast-spiking interneurons exhibit increased activity and decreased selectivity during CRF + nCRF stimulation. The increased sparseness and reliability of RS(C) neuron spiking are associated with increased inhibitory barrages and narrower visually evoked synaptic potentials. Our experimental observations were replicated with a simple computational model, suggesting that network interactions among neuronal subtypes ultimately sharpen recurrent excitation, producing specific and reliable visual responses.", "title": "" }, { "docid": "83c0e0c81a809314e93471e9bcd6aabe", "text": "A rail-to-rail amplifier with an offset cancellation, which is suitable for high color depth and high-resolution liquid crystal display (LCD) drivers, is proposed. The amplifier incorporates dual complementary differential pairs, which are classified as main and auxiliary transconductance amplifiers, to obtain a full input voltage swing and an offset canceling capability. Both offset voltage and injection-induced error, due to the device mismatch and charge injection, respectively, are greatly reduced. The offset cancellation and charge conservation, which is used to reduce the dynamic power consumption, are operated during the same time slot so that the driving period does not need to increase. An experimental prototype amplifier is implemented with 0.35 µm CMOS technology. The circuit draws 7.5 µA static current and exhibits a settling time of 3 µs, for a voltage swing of 5 V under a 3.4 kΩ resistance, and a 140 pF capacitance load with a power supply of 5 V. The offset voltage of the amplifier with offset cancellation is 0.48 mV.", "title": "" }, { "docid": "f1773b7fcd2ab70273f096b6da77b7a4", "text": "The senses we call upon when interacting with technology are restricted. We mostly rely on vision and hearing, and increasingly touch, but taste and smell remain largely unused. Although our knowledge about sensory systems and devices has grown rapidly over the past few decades, there is still an unmet challenge in understanding people's multisensory experiences in HCI. The goal is that by understanding the ways in which our senses process information and how they relate to one another, it will be possible to create richer experiences for human-technology interactions. To meet this challenge, we need specific actions within the HCI community. First, we must determine which tactile, gustatory, and olfactory experiences we can design for, and how to meaningfully stimulate them when people interact with technology. Second, we need to build on previous frameworks for multisensory design while also creating new ones. Third, we need to design interfaces that allow the stimulation of unexplored sensory inputs (e.g., digital smell), as well as interfaces that take into account the relationships between the senses (e.g., integration of taste and smell into flavor). 
Finally, it is vital to understand what limitations come into play when users need to monitor information from more than one sense simultaneously. Though much development is needed, in recent years we have witnessed progress in multisensory experiences involving touch. It is key for HCI to leverage the full range of tactile sensations (vibrations, pressure, force, balance, heat, coolness/wetness, electric shocks, pain and itch, etc.), taking into account the active and passive modes of touch and its integration with the other senses. This will undoubtedly provide new tools for interactive experience design, and will help to uncover the fine granularity of sensory stimulation and emotional responses.", "title": "" }, { "docid": "d00691959822087a1bddc3b411d27239", "text": "We consider the lattice Boltzmann method for immiscible multiphase flow simulations. Classical lattice Boltzmann methods for this problem, e.g. the colour gradient method or the free energy approach, can only be applied when density and viscosity ratios are small. Moreover, they use additional fields defined on the whole domain to describe the different phases and model phase separation by special interactions at each node. In contrast, our approach simulates the flow using a single field and separates the fluid phases by a free moving interface. The scheme is based on the lattice Boltzmann method and uses the level set method to compute the evolution of the interface. To couple the fluid phases, we develop new boundary conditions which realise the macroscopic jump conditions at the interface and incorporate surface tension in the lattice Boltzmann framework. Various simulations are presented to validate the numerical scheme, e.g. two-phase channel flows, the Young-Laplace law for a bubble and viscous fingering in a Hele-Shaw cell. The results show that the method is feasible over a wide range of density and viscosity differences.", "title": "" }, { "docid": "6e00567c5c33d899af9b5a67e37711a3", "text": "The adoption of cloud computing facilities and programming models differs vastly between different application domains. Scalable web applications, low-latency mobile backends and on-demand provisioned databases are typical cases for which cloud services on the platform or infrastructure level exist and are convincing when considering technical and economical arguments. Applications with specific processing demands, including high-performance computing, high-throughput computing and certain flavours of scientific computing, have historically required special configurations such as computeor memory-optimised virtual machine instances. With the rise of function-level compute instances through Function-as-a-Service (FaaS) models, the fitness of generic configurations needs to be re-evaluated for these applications. We analyse several demanding computing tasks with regards to how FaaS models compare against conventional monolithic algorithm execution. Beside the comparison, we contribute a refined FaaSification process for legacy software and provide a roadmap for future work. 1 Research Direction The ability to turn programmed functions or methods into ready-to-use cloud services is leading to a seemingly serverless development and deployment experience for application software engineers [1]. Without the necessity to allocate resources beforehand, prototyping new features and workflows becomes faster and more convenient to application service providers. 
These advantages have given boost to an industry trend consequently called Serverless Computing. The more precise, almost overlapping term in accordance with Everything-asa-Service (XaaS) cloud computing taxonomies is Function-as-a-Service (FaaS) [4]. In the FaaS layer, functions, either on the programming language level or as abstract concept around binary implementations, are executed synchronously or asynchronously through multi-protocol triggers. Function instances are provisioned on demand through coldstart or warmstart of the implementation in conjunction with an associated configuration in few milliseconds, elastically scaled as needed, and charged per invocation and per product of period of time and resource usage, leading to an almost perfect pay-as-you-go utility pricing model [11]. FaaS is gaining traction primarily in three areas. First, in Internet-of-Things applications where connected devices emit data sporadically. Second, for web applications with light-weight backend tasks. Third, as glue code between other cloud computing services. In contrast to the industrial popularity, no work is known to us which explores its potential for scientific and high-performance computing applications with more demanding execution requirements. From a cloud economics and strategy perspective, FaaS is a refinement of the platform layer (PaaS) with particular tools and interfaces. Yet from a software engineering and deployment perspective, functions are complementing other artefact types which are deployed into PaaS or underlying IaaS environments. Fig. 1 explains this positioning within the layered IaaS, PaaS and SaaS service classes, where the FaaS runtime itself is subsumed under runtime stacks. Performing experimental or computational science research with FaaS implies that the two roles shown, end user and application engineer, are adopted by a single researcher or a team of researchers, which is the setting for our research. Fig. 1. Positioning of FaaS in cloud application development The necessity to conduct research on FaaS for further application domains stems from the unique execution characteristics. Service instances are heuristically stateless, ephemeral, and furthermore limited in resource allotment and execution time. They are moreover isolated from each other and from the function management and control plane. In public commercial offerings, they are billed in subsecond intervals and terminated after few minutes, but as with any cloud application, private deployments are also possible. Hence, there is a trade-off between advantages and drawbacks which requires further analysis. For example, existing parallelisation frameworks cannot easily be used at runtime as function instances can only, in limited ways, invoke other functions without the ability to configure their settings. Instead, any such parallelisation needs to be performed before deployment with language-specific tools such as Pydron for Python [10] or Calvert’s compiler for Java [3]. For resourceand time-demanding applications, no special-purpose FaaS instances are offered by commercial cloud providers. This is a surprising observation given the multitude of options in other cloud compute services beyond general-purpose offerings, especially on the infrastructure level (IaaS). These include instance types optimised for data processing (with latest-generation processors and programmable GPUs), for memory allocation, and for non-volatile storage (with SSDs). 
Amazon Web Services (AWS) alone offers 57 different instance types. Our work is therefore concerned with the assessment of how current generic one-size-fits-all FaaS offerings handle scientific computing workloads, whether the proliferation of specialised FaaS instance types can be expected and how they would differ from commonly offered IaaS instance types. In this paper, we contribute specifically (i) a refined view on how software can be made fitting into special-purpose FaaS contexts with a high degree of automation through a process named FaaSification, and (ii) concepts and tools to execute such functions in constrained environments. In the remainder of the paper, we first present background information about FaaS runtimes, including our own prototypes which allow for provider-independent evaluations. Subsequently, we present four domain-specific scientific experiments conducted using FaaS to gain broad knowledge about resource requirements beyond general-purpose instances. We summarise the findings and reason about the implications for future scientific computing infrastructures. 2 Background on Function-as-a-Service 2.1 Programming Models and Runtimes. The characteristics of function execution depend primarily on the FaaS runtime in use. There are broadly three categories of runtimes: 1. Proprietary commercial services, such as AWS Lambda, Google Cloud Functions, Azure Functions and Oracle Functions. 2. Open source alternatives with almost matching interfaces and functionality, such as Docker-LambCI, Effe, Google Cloud Functions Emulator and OpenLambda [6], some of which focus on local testing rather than operation. 3. Distinct open source implementations with unique designs, such as Apache OpenWhisk, Kubeless, IronFunctions and Fission, some of which are also available as commercial services, for instance IBM Bluemix OpenWhisk [5]. The uniqueness is a consequence of the integration with other cloud stacks (Kubernetes, OpenStack), the availability of web and command-line interfaces, the set of triggers and the level of isolation in multi-tenant operation scenarios, which is often achieved through containers. In addition, due to the often non-trivial configuration of these services, a number of mostly service-specific abstraction frameworks have become popular among developers, such as PyWren, Chalice, Zappa, Apex and the Serverless Framework [8]. The frameworks and runtimes differ in their support for programming languages, but also in the function signatures, parameters and return values. Hence, a comparison of the entire set of offerings requires a baseline. The research in this paper is congruously conducted with the mentioned commercial FaaS providers as well as with our open-source FaaS tool Snafu which allows for managing, executing and testing functions across provider-specific interfaces [14]. The service ecosystem relationship between Snafu and the commercial FaaS providers is shown in Fig. 2. Snafu is able to import services from three providers (AWS Lambda, IBM Bluemix OpenWhisk, Google Cloud Functions) and furthermore offers a compatible control plane to all three of them in its current implementation version. At its core, it contains a modular runtime environment with prototypical maturity for functions implemented in JavaScript, Java, Python and C. Most importantly, it enables repeatable research as it can be deployed as a container, in a virtual machine or on a bare metal workstation. 
Notably absent from the categories above are FaaS offerings in e-science infrastructures and research clouds, despite the programming model resembling widely used job submission systems. We expect our practical research contributions to overcome this restriction in a vendor-independent manner. Snafu, for instance, is already available as an alpha-version launch profile in the CloudLab testbed federated across several U.S. installations with a total capacity of almost 15000 cores [12], as well as in EGI’s federated cloud across Europe. Fig. 2. Snafu and its ecosystem and tooling. Using Snafu, it is possible to adhere to the diverse programming conventions and execution conditions at commercial services while at the same time controlling and lifting the execution restrictions as necessary. In particular, it is possible to define memory-optimised, storage-optimised and compute-optimised execution profiles which serve to conduct the anticipated research on generic (general-purpose) versus specialised (special-purpose) cloud offerings for scientific computing. Snafu can execute in single process mode as well as in a load-balancing setup where each request is forwarded by the master instance to a slave instance which in turn executes the function natively, through a language-specific interpreter or through a container. Table 1 summarises the features of selected FaaS runtimes. Table 1. FaaS runtimes and their features Runtime Languages Programming model Import/Export AWS Lambda JavaScript, Python, Java, C# Lambda – Google Cloud Functions JavaScript", "title": "" }, { "docid": "088cb7992c1d7910151b1008a70e5cd1", "text": "Cable-actuated parallel manipulators (CPMs) rely on cables instead of rigid links to manipulate the moving platform in the taskspace. Upper and lower bounds imposed on the cable tensions limit the force capability in CPMs and render certain forces infeasible at the end effector. This paper presents a geometrical analysis of the problems to 1) determine whether a CPM is capable of balancing a given wrench within the cable tension limits (feasibility check); 2) minimize the 2-norm of the cable tensions that balance feasible wrenches; and 3) check for the existence of an all-positive nullspace vector, which is a necessary condition to have a wrench-closure configuration in CPMs. The unified approach used in this analysis is systematic and geometrically intuitive that is based on the formulation of the static force equilibrium problem as an intersection between two convex sets and the application of Dykstra's alternating projection algorithm to find the projection of a point onto that intersection. In the case of infeasible wrenches, the algorithm can determine whether the infeasibility is because of the cable tension limits or the non-wrench-closure configuration. For the former case, a method was developed by which this algorithm can be used to extend the cable tension limits to balance infeasible wrenches. In addition, the performance of the algorithm is explained in the case of incompletely restrained cable-driven manipulators and the case of manipulators at singular poses. This paper also discusses the algorithm convergence and termination rule. 
This geometrical and systematic approach is intended for use as a convenient tool for cable tension analysis during design.", "title": "" }, { "docid": "0472c8c606024aaf2700dee3ad020c07", "text": "Any discussion on exchange rate movements and forecasting should include explanatory variables from both the current account and the capital account of the balance of payments. In this paper, we include such factors to forecast the value of the Indian rupee vis a vis the US Dollar. Further, factors reflecting political instability and lack of mechanism for enforcement of contracts that can affect both direct foreign investment and also portfolio investment, have been incorporated. The explanatory variables chosen are the 3 month Rupee Dollar futures exchange rate (FX4), NIFTY returns (NIFTYR), Dow Jones Industrial Average returns (DJIAR), Hang Seng returns (HSR), DAX returns (DR), crude oil price (COP), CBOE VIX (CV) and India VIX (IV). To forecast the exchange rate, we have used two different classes of frameworks namely, Artificial Neural Network (ANN) based models and Time Series Econometric models. Multilayer Feed Forward Neural Network (MLFFNN) and Nonlinear Autoregressive models with Exogenous Input (NARX) Neural Network are the approaches that we have used as ANN models. Generalized Autoregressive Conditional Heteroskedastic (GARCH) and Exponential Generalized Autoregressive Conditional Heteroskedastic (EGARCH) techniques are the ones that we have used as Time Series Econometric methods. Within our framework, our results indicate that, although the two different approaches are quite efficient in forecasting the exchange rate, MLFNN and NARX are the most efficient. Journal of Insurance and Financial Management ARTICLE INFO JEL Classification: C22 C45 C63 F31 F47", "title": "" }, { "docid": "d6b87f5b6627f1a1ac5cc951c7fe0f28", "text": "Despite a strong nonlinear behavior and a complex design, the interior permanent-magnet (IPM) machine is proposed as a good candidate among the PM machines owing to its interesting peculiarities, i.e., higher torque in flux-weakening operation, higher fault tolerance, and ability to adopt low-cost PMs. A second trend in designing PM machines concerns the adoption of fractional-slot (FS) nonoverlapped coil windings, which reduce the end winding length and consequently the Joule losses and the cost. Therefore, the adoption of an IPM machine with an FS winding aims to combine both advantages: high torque and efficiency in a wide operating region. However, the combination of an anisotropic rotor and an FS winding stator causes some problems. The interaction between the magnetomotive force harmonics due to the stator current and the rotor anisotropy causes a very high torque ripple. This paper illustrates a procedure in designing an IPM motor with the FS winding exhibiting a low torque ripple. The design strategy is based on two consecutive steps: at first, the winding is optimized by taking a multilayer structure, and then, the rotor geometry is optimized by adopting a nonsymmetric structure. As an example, a 12-slot 10-pole IPM machine is considered, achieving a torque ripple lower than 1.5% at full load.", "title": "" }, { "docid": "66c57a94a5531b36199bd52521a56ccb", "text": "This project describes design and experimental analysis of composite leaf spring made of glass fiber reinforced polymer. The objective is to compare the load carrying capacity, stiffness and weight savings of composite leaf spring with that of steel leaf spring. 
The design constraints are stresses and deflections. The dimensions of an existing conventional steel leaf spring of a light commercial vehicle are taken. Same dimensions of conventional leaf spring are used to fabricate a composite multi leaf spring using E-Glass/Epoxy unidirectional laminates. Static analysis of 2-D model of conventional leaf spring is also performed using ANSYS 10 and compared with experimental results. Finite element analysis with full load on 3-D model of composite multi leaf spring is done using ANSYS 10 and the analytical results are compared with experimental results. Compared to steel spring, the composite leaf spring is found to have 67.35% lesser stress, 64.95% higher stiffness and 126.98% higher natural frequency than that of existing steel leaf spring. A weight reduction of 76.4% is achieved by using optimized composite leaf spring.", "title": "" }, { "docid": "b07f858d08f40f61f3ed418674948f12", "text": "Nowadays, due to the great distance between design and implementation worlds, different skills are necessary to create a game system. To solve this problem, a lot of strategies for game development, trying to increase the abstraction level necessary for the game production, were proposed. In this way, a lot of game engines, game frameworks and others, in most cases without any compatibility or reuse criteria between them, were developed. This paper presents a new generative programming approach, able to increase the production of a digital game by the integration of different game development artifacts, following a system family strategy focused on variable and common aspects of a computer game. As result, high level abstractions of games, based on a common language, can be used to configure metaprogramming transformations during the game production, providing a great compatibility level between game domain and game implementation artifacts.", "title": "" }, { "docid": "d63946a096b9e8a99be6d5ddfe4097da", "text": "While the first open comparative challenges in the field of paralinguistics targeted more ‘conventional’ phenomena such as emotion, age, and gender, there still exists a multiplicity of not yet covered, but highly relevant speaker states and traits. The INTERSPEECH 2011 Speaker State Challenge thus addresses two new sub-challenges to overcome the usually low compatibility of results: In the Intoxication Sub-Challenge, alcoholisation of speakers has to be determined in two classes; in the Sleepiness Sub-Challenge, another two-class classification task has to be solved. This paper introduces the conditions, the Challenge corpora “Alcohol Language Corpus” and “Sleepy Language Corpus”, and a standard feature set that may be used. Further, baseline results are given.", "title": "" }, { "docid": "0b6ce2e4f3ef7f747f38068adef3da54", "text": "Network throughput can be increased by allowing multipath, adaptive routing. Adaptive routing allows more freedom in the paths taken by messages, spreading load over physical channels more evenly. The flexibility of adaptive routing introduces new possibilities of deadlock. Previous deadlock avoidance schemes in k-ary n-cubes require an exponential number of virtual channels, independent of network size and dimension. Planar adaptive routing algorithms reduce the complexity of deadlock prevention by reducing the number of choices at each routing step. In the fault-free case, planar-adaptive networks are guaranteed to be deadlock-free. 
In the presence of network faults, the planar-adaptive router can be extended with misrouting to produce a working network which remains provably deadlock free and is provably livelock free. In addition, planar adaptive networks can simultaneously support both in-order and adaptive, out-of-order packet delivery.\nPlanar-adaptive routing is of practical significance. It provides the simplest known support for deadlock-free adaptive routing in k-ary n-cubes of more than two dimensions (with k > 2). Restricting adaptivity reduces the hardware complexity, improving router speed or allowing additional performance-enhancing network features. The structure of planar-adaptive routers is amenable to efficient implementation.", "title": "" }, { "docid": "e858a3bda1ac2568afa328cd4352c804", "text": "Bilingual advantages in executive control tasks are well documented, but it is not yet clear what degree or type of bilingualism leads to these advantages. To investigate this issue, we compared the performance of two bilingual groups and monolingual speakers in task-switching and language-switching paradigms. Spanish-English bilinguals, who reported switching between languages frequently in daily life, exhibited smaller task-switching costs than monolinguals after controlling for between-group differences in speed and parent education level. By contrast, Mandarin-English bilinguals, who reported switching languages less frequently than Spanish-English bilinguals, did not exhibit a task-switching advantage relative to monolinguals. Comparing the two bilingual groups in language-switching, Spanish-English bilinguals exhibited smaller costs than Mandarin-English bilinguals, even after matching for fluency in the non-dominant language. These results demonstrate an explicit link between language-switching and bilingual advantages in task-switching, while also illustrating some limitations on bilingual advantages.", "title": "" }, { "docid": "8109594325601247cdb253dbb76b9592", "text": "Disturbance compensation is one of the major problems in control system design. Due to external disturbance or model uncertainty that can be treated as disturbance, all control systems are subject to disturbances. When it comes to networked control systems, not only disturbances but also time delay is inevitable where controllers are remotely connected to plants through communication network. Hence, simultaneous compensation for disturbance and time delay is important. Prior work includes a various combinations of smith predictor, internal model control, and disturbance observer tailored to simultaneous compensation of both time delay and disturbance. In particular, simplified internal model control simultaneously compensates for time delay and disturbances. But simplified internal model control is not applicable to the plants that have two poles at the origin. We propose a modified simplified internal model control augmented with disturbance observer which simultaneously compensates time delay and disturbances for the plants with two poles at the origin. Simulation results are provided.", "title": "" }, { "docid": "81126b57a29b4c9aee46ecb04c7f43ca", "text": "Within the field of bibliometrics, there is sustained interest in how nations “compete” in terms of academic disciplines, and what determinants explain why countries may have a specific advantage in one discipline over another. 
However, this literature has not, to date, presented a comprehensive structured model that could be used in the interpretation of a country’s research profile and academic output. In this paper, we use frameworks from international business and economics to present such a model. Our study makes four major contributions. First, we include a very wide range of countries and disciplines, explicitly including the Social Sciences, which unfortunately are excluded in most bibliometrics studies. Second, we apply theories of revealed comparative advantage and the competitive advantage of nations to academic disciplines. Third, we cluster our 34 countries into five different groups that have distinct combinations of revealed comparative advantage in five major disciplines. Finally, based on our empirical work and prior literature, we present an academic diamond that details factors likely to explain a country’s research profile and competitiveness in certain disciplines.", "title": "" }, { "docid": "5aee510b62d8792a38044fc8c68a57e4", "text": "In this paper we present a novel method for jointly extracting beats and downbeats from audio signals. A recurrent neural network operating directly on magnitude spectrograms is used to model the metrical structure of the audio signals at multiple levels and provides an output feature that clearly distinguishes between beats and downbeats. A dynamic Bayesian network is then used to model bars of variable length and align the predicted beat and downbeat positions to the global best solution. We find that the proposed model achieves state-of-the-art performance on a wide range of different musical genres and styles.", "title": "" }, { "docid": "87a11f6097cb853b7c98e17cdf97801e", "text": "Recent work has shown that recurrent neural networks (RNNs) can implicitly capture and exploit hierarchical information when trained to solve common natural language processing tasks (Blevins et al., 2018) such as language modeling (Linzen et al., 2016; Gulordava et al., 2018) and neural machine translation (Shi et al., 2016). In contrast, the ability to model structured data with non-recurrent neural networks has received little attention despite their success in many NLP tasks (Gehring et al., 2017; Vaswani et al., 2017). In this work, we compare the two architectures—recurrent versus non-recurrent—with respect to their ability to model hierarchical structure and find that recurrency is indeed important for this purpose. The code and data used in our experiments is available at https://github.com/", "title": "" } ]
scidocsrr
5004442e422d51a134d3efc6492c3189
Security in Automotive Networks: Lightweight Authentication and Authorization
[ { "docid": "3f8e4ddfe56737508ec2222d110291fc", "text": "We present a new verification algorithm for security protocols that allows for unbounded verification, falsification, and complete characterization. The algorithm provides a number of novel features, including: (1) Guaranteed termination, after which the result is either unbounded correctness, falsification, or bounded correctness. (2) Efficient generation of a finite representation of an infinite set of traces in terms of patterns, also known as a complete characterization. (3) State-of-the-art performance, which has made new types of protocol analysis feasible, such as multi-protocol analysis.", "title": "" } ]
[ { "docid": "e28feb56ebc33a54d13452a2ea3a49f7", "text": "Ping Yan, Hsinchun Chen, and Daniel Zeng Department of Management Information Systems University of Arizona, Tucson, Arizona [email protected]; {hchen, zeng}@eller.arizona.edu", "title": "" }, { "docid": "470ecc2bc4299d913125d307c20dd48d", "text": "The task of end-to-end relation extraction consists of two sub-tasks: i) identifying entity mentions along with their types and ii) recognizing semantic relations among the entity mention pairs. It has been shown that for better performance, it is necessary to address these two sub-tasks jointly [22,13]. We propose an approach for simultaneous extraction of entity mentions and relations in a sentence, by using inference in Markov Logic Networks (MLN) [21]. We learn three different classifiers : i) local entity classifier, ii) local relation classifier and iii) “pipeline” relation classifier which uses predictions of the local entity classifier. Predictions of these classifiers may be inconsistent with each other. We represent these predictions along with some domain knowledge using weighted first-order logic rules in an MLN and perform joint inference over the MLN to obtain a global output with minimum inconsistencies. Experiments on the ACE (Automatic Content Extraction) 2004 dataset demonstrate that our approach of joint extraction using MLNs outperforms the baselines of individual classifiers. Our end-to-end relation extraction performance is better than 2 out of 3 previous results reported on the ACE 2004 dataset.", "title": "" }, { "docid": "0f4d91623a7b9893d24c9dc9354f3dce", "text": "We derive experimentally based estimates of the energy used by neural mechanisms to code known quantities of information. Biophysical measurements from cells in the blowfly retina yield estimates of the ATP required to generate graded (analog) electrical signals that transmit known amounts of information. Energy consumption is several orders of magnitude greater than the thermodynamic minimum. It costs 104 ATP molecules to transmit a bit at a chemical synapse, and 106 - 107 ATP for graded signals in an interneuron or a photoreceptor, or for spike coding. Therefore, in noise-limited signaling systems, a weak pathway of low capacity transmits information more economically, which promotes the distribution of information among multiple pathways.", "title": "" }, { "docid": "1597874bef5c515e038584b3bf72f148", "text": "This paper presents an overview of Text Summarization. Text Summarization is a challenging problem these days. Due to the great amount of information we are provided with and thanks to the development of Internet technologies, needs of producing summaries have become more and more widespread. Summarization is a very interesting and useful task that gives support to many other tasks as well as it takes advantage of the techniques developed for related Natural Language Processing tasks. The paper we present here may help us to have an idea of what Text Summarization is and how it can be useful for.", "title": "" }, { "docid": "9237b82f1d127ab59a1a5e8f9fa7f86c", "text": "Purpose: Enterprise social media platforms provide new ways of sharing knowledge and communicating within organizations to benefit from the social capital and valuable knowledge that employees have. Drawing on social dilemma and self‐determination theory, the aim of the study is to understand what factors drive employees’ participation and what factors hamper their participation in enterprise social media. 
Methodology: Based on a literature review, a unified research model is derived integrating demographic, individual, organizational and technological factors that influence the motivation of employees to share knowledge. The model is tested using statistical methods on a sample of 114 respondents in Denmark. Qualitative data is used to elaborate and explain quantitative results‘ findings. Practical implications: The proposed knowledge sharing framework helps to understand what factors impact engagement on social media. Furthermore the article suggests different types of interventions to overcome the social dilemma of knowledge sharing. Findings: Our findings pinpoint towards the general drivers and barriers to knowledge sharing within organizations. The significant drivers are: enjoy helping others, monetary rewards, management support, change of knowledge sharing behavior and recognition. The significant identified barriers to knowledge sharing are: change of behavior, lack of trust and lack of time. Originality: The study contributes to an understanding of factors leading to the success or failure of enterprise social media drawing on self‐determination and social dilemma theory.", "title": "" }, { "docid": "e27575b8d7a7455f1a8f941adb306a04", "text": "Seung-Joon Yi GRASP Lab, University of Pennsylvania, Philadelphia, Pennsylvania 19104 e-mail: [email protected] Stephen G. McGill GRASP Lab, University of Pennsylvania, Philadelphia, Pennsylvania 19104 e-mail: [email protected] Larry Vadakedathu GRASP Lab, University of Pennsylvania, Philadelphia, Pennsylvania 19104 e-mail: [email protected] Qin He GRASP Lab, University of Pennsylvania, Philadelphia, Pennsylvania 19104 e-mail: [email protected] Inyong Ha Robotis, Seoul, Korea e-mail: [email protected] Jeakweon Han Robotis, Seoul, Korea e-mail: [email protected] Hyunjong Song Robotis, Seoul, Korea e-mail: [email protected] Michael Rouleau RoMeLa, Virginia Tech, Blacksburg, Virginia 24061 e-mail: [email protected] Byoung-Tak Zhang BI Lab, Seoul National University, Seoul, Korea e-mail: [email protected] Dennis Hong RoMeLa, University of California, Los Angeles, Los Angeles, California 90095 e-mail: [email protected] Mark Yim GRASP Lab, University of Pennsylvania, Philadelphia, Pennsylvania 19104 e-mail: [email protected] Daniel D. Lee GRASP Lab, University of Pennsylvania, Philadelphia, Pennsylvania 19104 e-mail: [email protected]", "title": "" }, { "docid": "2e99e535f2605e88571407142e4927ee", "text": "Stability is a common tool to verify the validity of sample based algorithms. In clustering it is widely used to tune the parameters of the algorithm, such as the number k of clusters. In spite of the popularity of stability in practical applications, there has been very little theoretical analysis of this notion. In this paper we provide a formal definition of stability and analyze some of its basic properties. Quite surprisingly, the conclusion of our analysis is that for large sample size, stability is fully determined by the behavior of the objective function which the clustering algorithm is aiming to minimize. If the objective function has a unique global minimizer, the algorithm is stable, otherwise it is unstable. In particular we conclude that stability is not a well-suited tool to determine the number of clusters it is determined by the symmetries of the data which may be unrelated to clustering parameters. 
We prove our results for center-based clusterings and for spectral clustering, and support our conclusions by many examples in which the behavior of stability is counter-intuitive.", "title": "" }, { "docid": "717ea3390ffe3f3132d4e2230e645ee5", "text": "Much of what is known about physiological systems has been learned using linear system theory. However, many biomedical signals are apparently random or aperiodic in time. Traditionally, the randomness in biological signals has been ascribed to noise or interactions between very large numbers of constituent components. One of the most important mathematical discoveries of the past few decades is that random behavior can arise in deterministic nonlinear systems with just a few degrees of freedom. This discovery gives new hope to providing simple mathematical models for analyzing, and ultimately controlling, physiological systems. The purpose of this chapter is to provide a brief pedagogic survey of the main techniques used in nonlinear time series analysis and to provide a MATLAB tool box for their implementation. Mathematical reviews of techniques in nonlinear modeling and forecasting can be found in Refs. 1-5. Biomedical signals that have been analyzed using these techniques include heart rate [6-8], nerve activity [9], renal flow [10], arterial pressure [11], electroencephalogram [12], and respiratory waveforms [13]. Section 2 provides a brief overview of dynamical systems theory including phase space portraits, Poincare surfaces of section, attractors, chaos, Lyapunov exponents, and fractal dimensions. The forced Duffing-Van der Pol oscillator (a ubiquitous model in engineering problems) is investigated as an illustrative example. Section 3 outlines the theoretical tools for time series analysis using dynamical systems theory. Reliability checks based on forecasting and surrogate data are also described. The time series methods are illustrated using data from the time evolution of one of the dynamical variables of the forced Duffing-Van der Pol oscillator. Section 4 concludes with a discussion of possible future directions for applications of nonlinear time series analysis in biomedical processes.", "title": "" }, { "docid": "f554af0d260de70f6efbc8fe8d64a357", "text": "Hypocretin deficiency causes narcolepsy and may affect neuroendocrine systems and body composition. Additionally, growth hormone (GH) alterations my influence weight in narcolepsy. Symptoms can be treated effectively with sodium oxybate (SXB; γ-hydroxybutyrate) in many patients. This study compared growth hormone secretion in patients and matched controls and established the effect of SXB administration on GH and sleep in both groups. Eight male hypocretin-deficient patients with narcolepsy and cataplexy and eight controls matched for sex, age, BMI, waist-to-hip ratio, and fat percentage were enrolled. Blood was sampled before and on the 5th day of SXB administration. SXB was taken two times 3 g/night for 5 consecutive nights. Both groups underwent 24-h blood sampling at 10-min intervals for measurement of GH concentrations. The GH concentration time series were analyzed with AutoDecon and approximate entropy (ApEn). Basal and pulsatile GH secretion, pulse regularity, and frequency, as well as ApEn values, were similar in patients and controls. Administration of SXB caused a significant increase in total 24-h GH secretion rate in narcolepsy patients, but not in controls. 
After SXB, slow-wave sleep (SWS) and, importantly, the cross-correlation between GH levels and SWS more than doubled in both groups. In conclusion, SXB leads to a consistent increase in nocturnal GH secretion and strengthens the temporal relation between GH secretion and SWS. These data suggest that SXB may alter somatotropic tone in addition to its consolidating effect on nighttime sleep in narcolepsy. This could explain the suggested nonsleep effects of SXB, including body weight reduction.", "title": "" }, { "docid": "690a2b067af8810d5da7d3389b7b4d78", "text": "Verifying the robustness property of a general Rectified Linear Unit (ReLU) network is an NP-complete problem. Although finding the exact minimum adversarial distortion is hard, giving a certified lower bound of the minimum distortion is possible. Current available methods of computing such a bound are either time-consuming or deliver low quality bounds that are too loose to be useful. In this paper, we exploit the special structure of ReLU networks and provide two computationally efficient algorithms (Fast-Lin, Fast-Lip) that are able to certify non-trivial lower bounds of minimum adversarial distortions. Experiments show that (1) our methods deliver bounds close to (the gap is 2-3X) exact minimum distortions found by Reluplex in small networks while our algorithms are more than 10,000 times faster; (2) our methods deliver similar quality of bounds (the gap is within 35% and usually around 10%; sometimes our bounds are even better) for larger networks compared to the methods based on solving linear programming problems but our algorithms are 33-14,000 times faster; (3) our method is capable of solving large MNIST and CIFAR networks up to 7 layers with more than 10,000 neurons within tens of seconds on a single CPU core. In addition, we show that there is no polynomial time algorithm that can approximately find the minimum ℓ1 adversarial distortion of a ReLU network with a 0.99 ln n approximation ratio unless NP=P, where n is the number of neurons in the network. Equal contribution Massachusetts Institute of Technology, Cambridge, MA UC Davis, Davis, CA Harvard University, Cambridge, MA UT Austin, Austin, TX. Source code is available at https://github.com/huanzhang12/CertifiedReLURobustness. Correspondence to: Tsui-Wei Weng <[email protected]>, Huan Zhang <[email protected]>. Proceedings of the 35th International Conference on Machine Learning, Stockholm, Sweden, PMLR 80, 2018. Copyright 2018 by the author(s).", "title": "" }, { "docid": "4e23bf1c89373abaf5dc096f76c893f3", "text": "Clock and data recovery (CDR) circuit plays a vital role for wired serial link communication in multi mode based system on chip (SOC). In wire linked communication systems, when data flows without any accompanying clock over a single wire, the receiver of the system is required to recover this data synchronously without losing the information. Therefore there exists a need for CDR circuits in the receiver of the system for recovering the clock or timing information from these data. The existing Octa-rate CDR circuit is not compatible to real time data, such a data is unpredictable, non periodic and has different arrival times and phase widths. Thus the proposed PRN based Octa-rate Clock and Data Recovery circuit is made compatible to real time data by introducing a Random Sequence Generator. 
The proposed PRN based Octa-rate Clock and Data Recovery circuit consists of PRN Sequence Generator, 16-Phase Generator, Early Late Phase Detector and Delay Line Controller. The FSM based Delay Line Controller controls the delay length and introduces the required delay in the input data. The PRN based Octa-rate CDR circuit has been realized using Xilinx ISE 13.2 and implemented on Vertex-5 FPGA target device for real time verification. The delay between the input and the generation of output is measured and analyzed using Logic Analyzer AGILENT 1962 A.", "title": "" }, { "docid": "feeeb7bd9ed07917048cfd6bf0c3c6c7", "text": "Deep image translation methods have recently shown excellent results, outputting high-quality images covering multiple modes of the data distribution. There has also been increased interest in disentangling the internal representations learned by deep methods to further improve their performance and achieve a finer control. In this paper, we bridge these two objectives and introduce the concept of crossdomain disentanglement. We aim to separate the internal representation into three parts. The shared part contains information for both domains. The exclusive parts, on the other hand, contain only factors of variation that are particular to each domain. We achieve this through bidirectional image translation based on Generative Adversarial Networks and cross-domain autoencoders, a novel network component. Our model offers multiple advantages. We can output diverse samples covering multiple modes of the distributions of both domains, perform domainspecific image transfer and interpolation, and cross-domain retrieval without the need of labeled data, only paired images. We compare our model to the state-ofthe-art in multi-modal image translation and achieve better results for translation on challenging datasets as well as for cross-domain retrieval on realistic datasets.", "title": "" }, { "docid": "b04ae75e4f444b97976962a397ac413c", "text": "In this paper the new topology DC/DC Boost power converter-inverter-DC motor that allows bidirectional rotation of the motor shaft is presented. In this direction, the system mathematical model is developed considering its different operation modes. Afterwards, the model validation is performed via numerical simulations by using Matlab-Simulink.", "title": "" }, { "docid": "0b1310ac9630fa4a1c90dcf90d4ae327", "text": "The Mirai Distributed Denial-of-Service (DDoS) attack exploited security vulnerabilities of Internet-of-Things (IoT) devices and thereby clearly signaled that attackers have IoT on their radar. Securing IoT is therefore imperative, but in order to do so it is crucial to understand the strategies of such attackers. For that purpose, in this paper, a novel IoT honeypot called ThingPot is proposed and deployed. Honeypot technology mimics devices that might be exploited by attackers and logs their behavior to detect and analyze the used attack vectors. ThingPot is the first of its kind, since it focuses not only on the IoT application protocols themselves, but on the whole IoT platform. A Proof-of-Concept is implemented with XMPP and a REST API, to mimic a Philips Hue smart lighting system. ThingPot has been deployed for 1.5 months and through the captured data we have found five types of attacks and attack vectors against smart devices. 
The ThingPot source code is made available as open source.", "title": "" }, { "docid": "a01965406575363328f4dae4241a05b7", "text": "IT governance is one of these concepts that suddenly emerged and became an important issue in the information technology area. Some organisations started with the implementation of IT governance in order to achieve a better alignment between business and IT. This paper interprets important existing theories, models and practices in the IT governance domain and derives research questions from it. Next, multiple research strategies are triangulated in order to understand how organisations are implementing IT governance in practice and to analyse the relationship between these implementations and business/IT alignment. Major finding is that organisations with more mature IT governance practices likely obtain a higher degree of business/IT alignment maturity.", "title": "" }, { "docid": "322d23354a9bf45146e4cb7c733bf2ec", "text": "In this chapter we consider the problem of automatic facial expression analysis. Our take on this is that the field has reached a point where it needs to move away from considering experiments and applications under in-the-lab conditions, and move towards so-called in-the-wild scenarios. We assume throughout this chapter that the aim is to develop technology that can be deployed in practical applications under unconstrained conditions. While some first efforts in this direction have been reported very recently, it is still unclear what the right path to achieving accurate, informative, robust, and real-time facial expression analysis will be. To illuminate the journey ahead, we first provide in Sec. 1 an overview of the existing theories and specific problem formulations considered within the computer vision community. Then we describe in Sec. 2 the standard algorithmic pipeline which is common to most facial expression analysis algorithms. We include suggestions as to which of the current algorithms and approaches are most suited to the scenario considered. In section 3 we describe our view of the remaining challenges, and the current opportunities within the field. This chapter is thus not intended as a review of different approaches, but rather a selection of what we believe are the most suitable state-of-the-art algorithms, and a selection of exemplars chosen to characterise a specific approach. We review in section 4 some of the exciting opportunities for the application of automatic facial expression analysis to everyday practical problems and current commercial applications being exploited. Section 5 ends the chapter by summarising the major conclusions drawn. Brais Martinez School of Computer Science, Jubilee Campus, Wollaton Road, Nottingham, NG8 1BB e-mail: [email protected] Michel F. Valstar School of Computer Science, Jubilee Campus, Wollaton Road, Nottingham, NG8 1BB e-mail: [email protected]", "title": "" }, { "docid": "a3308e4df796a74112b70c3244bd4d34", "text": "Creative insight occurs with an “Aha!” experience when solving a difficult problem. Here, we investigated large-scale networks associated with insight problem solving. We recruited 232 healthy participants aged 21–69 years old. Participants completed a magnetic resonance imaging study (MRI; structural imaging and a 10 min resting-state functional MRI) and an insight test battery (ITB) consisting of written questionnaires (matchstick arithmetic task, remote associates test, and insight problem solving task). 
To identify the resting-state functional connectivity (RSFC) associated with individual creative insight, we conducted an exploratory voxel-based morphometry (VBM)-constrained RSFC analysis. We identified positive correlations between ITB score and grey matter volume (GMV) in the right insula and middle cingulate cortex/precuneus, and a negative correlation between ITB score and GMV in the left cerebellum crus 1 and right supplementary motor area. We applied seed-based RSFC analysis to whole brain voxels using the seeds obtained from the VBM and identified insight-positive/negative connections, i.e. a positive/negative correlation between the ITB score and individual RSFCs between two brain regions. Insight-specific connections included motor-related regions whereas creative-common connections included a default mode network. Our results indicate that creative insight requires a coupling of multiple networks, such as the default mode, semantic and cerebral-cerebellum networks.", "title": "" }, { "docid": "a496f2683f49573132e5b57f7e3accf0", "text": "Automatically generated databases of English paraphrases have the drawback that they return a single list of paraphrases for an input word or phrase. This means that all senses of polysemous words are grouped together, unlike WordNet which partitions different senses into separate synsets. We present a new method for clustering paraphrases by word sense, and apply it to the Paraphrase Database (PPDB). We investigate the performance of hierarchical and spectral clustering algorithms, and systematically explore different ways of defining the similarity matrix that they use as input. Our method produces sense clusters that are qualitatively and quantitatively good, and that represent a substantial improvement to the PPDB resource.", "title": "" }, { "docid": "2b8296f8760e826046cd039c58026f83", "text": "This study provided a descriptive and quantitative comparative analysis of data from an assessment protocol for adolescents referred clinically for gender identity disorder (n = 192; 105 boys, 87 girls) or transvestic fetishism (n = 137, all boys). The protocol included information on demographics, behavior problems, and psychosexual measures. Gender identity disorder and transvestic fetishism youth had high rates of general behavior problems and poor peer relations. On the psychosexual measures, gender identity disorder patients had considerably greater cross-gender behavior and gender dysphoria than did transvestic fetishism youth and other control youth. Male gender identity disorder patients classified as having a nonhomosexual sexual orientation (in relation to birth sex) reported more indicators of transvestic fetishism than did male gender identity disorder patients classified as having a homosexual sexual orientation (in relation to birth sex). The percentage of transvestic fetishism youth and male gender identity disorder patients with a nonhomosexual sexual orientation self-reported similar degrees of behaviors pertaining to transvestic fetishism. Last, male and female gender identity disorder patients with a homosexual sexual orientation had more recalled cross-gender behavior during childhood and more concurrent cross-gender behavior and gender dysphoria than did patients with a nonhomosexual sexual orientation. The authors discuss the clinical utility of their assessment protocol.", "title": "" } ]
scidocsrr
3c0dd8a974108c66a96b721e34450223
Cartesian impedance control of redundant manipulators for human-robot co-manipulation
[ { "docid": "56316a77e260d8122c4812d684f4d223", "text": "Manipulation fundamentally requires a manipulator to be mechanically coupled to the object being manipulated. A consideration of the physical constraints imposed by dynamic interaction shows that control of a vector quantity such as position or force is inadequate and that control of the manipulator impedance is also necessary. Techniques for control of manipulator behaviour are presented which result in a unified approach to kinematically constrained motion, dynamic interaction, target acquisition and obstacle avoidance.", "title": "" }, { "docid": "aab75b349485fe8a626b9d6dad286b0f", "text": "Impedance and Admittance Control are two distinct implementations of the same control goal. It is well known that their stability and performance properties are complementary. In this paper, we present a hybrid system approach, which incorporates Impedance and Admittance Control as two extreme cases of one family of controllers. This approach allows to continuously switch and interpolate between Impedance and Admittance Control. We compare the basic stability and performance properties of the resulting controllers by means of an extensive case study of a one-dimensional system and present an experimental evaluation using the KUKA-DLR-lightweight arm.", "title": "" } ]
[ { "docid": "93c9751cda2db3aa44e732abdf4bc82e", "text": "The current study was motivated by a need for a self-report questionnaire that assesses a broad range of subthreshold autism traits, is brief and easily administered, and is relevant to the general population. An initial item pool was administered to 1,709 students. Structural validity analysis resulted in a 24-item questionnaire termed the Subthreshold Autism Trait Questionnaire (SATQ; Cronbach's alpha coefficient = .73, test-retest reliability = .79). An exploratory factor analysis suggested 5 factors. Confirmatory factor analysis indicated the 5 factor solution was an adequate fit and outperformed two other models. The SATQ successfully differentiated between an ASD and student group and demonstrated convergent validity with other ASD measures. Thus, the current study introduces and provides initial psychometric support for the SATQ.", "title": "" }, { "docid": "7deac3cbb3a30914412db45f69fb27f1", "text": "This paper presents the design, numerical analysis and measurements of a planar bypass balun that provides 1:4 impedance transformations between the unbalanced microstrip (MS) and balanced coplanar strip line (CPS). This type of balun is suitable for operation with small antennas fed with balanced a (parallel wire) transmission line, i.e. wire, planar dipoles and loop antennas. The balun has been applied to textile CPS-fed loop antennas, designed for operations below 1GHz. The performance of a loop antenna with the balun is described, as well as an idea of incorporating rigid circuits with flexible textile structures.", "title": "" }, { "docid": "d049a1779a8660f689f1da5daada69dc", "text": "Due to their complex nature, it is hard to characterize the ways in which machine learning models can misbehave or be exploited when deployed. Recent work on adversarial examples, i.e. inputs with minor perturbations that result in substantially different model predictions, is helpful in evaluating the robustness of these models by exposing the adversarial scenarios where they fail. However, these malicious perturbations are often unnatural, not semantically meaningful, and not applicable to complicated domains such as language. In this paper, we propose a framework to generate natural and legible adversarial examples that lie on the data manifold, by searching in semantic space of dense and continuous data representation, utilizing the recent advances in generative adversarial networks. We present generated adversaries to demonstrate the potential of the proposed approach for black-box classifiers for a wide range of applications such as image classification, textual entailment, and machine translation. We include experiments to show that the generated adversaries are natural, legible to humans, and useful in evaluating and analyzing black-box classifiers.", "title": "" }, { "docid": "4313c87376e6ea9fac7dc32f359c2ae9", "text": "Game engines are specialized middleware which facilitate rapid game development. Until now they have been highly optimized to extract maximum performance from single processor hardware. In the last couple of years improvements in single processor hardware have approached physical limits and performance gains have slowed to become incremental. As a consequence, improvements in game engine performance have also become incremental. Currently, hardware manufacturers are shifting to dual and multi-core processor architectures, and the latest game consoles also feature multiple processors. 
This presents a challenge to game engine developers because of the unfamiliarity and complexity of concurrent programming. The next generation of game engines must address the issues of concurrency if they are to take advantage of the new hardware. This paper discusses the issues, approaches, and tradeoffs that need to be considered in the design of a multi-threaded game engine.", "title": "" }, { "docid": "309dee96492cf45ed2887701b27ad3ee", "text": "The objective of a systematic review is to obtain empirical evidence about the topic under review and to allow moving forward the body of knowledge of a discipline. Therefore, systematic reviewing is a tool we can apply in Software Engineering to develop well founded guidelines with the final goal of improving the quality of the software systems. However, we still do not have as much experience in performing systematic reviews as in other disciplines like medicine, and therefore we need detailed guidance. This paper presents a proposal of a improved process to perform systematic reviews in software engineering. This process is the result of the tasks carried out in a first review and a subsequent update concerning the effectiveness of elicitation techniques.", "title": "" }, { "docid": "1d8a8f6f95a729a44486f89ffb07b63a", "text": "MicroRNAs are short, noncoding RNA transcripts that post-transcriptionally regulate gene expression. Several hundred microRNA genes have been identified in Caenorhabditis elegans, Drosophila, plants and mammals. MicroRNAs have been linked to developmental processes in C. elegans, plants and humans and to cell growth and apoptosis in Drosophila. A major impediment in the study of microRNA function is the lack of quantitative expression profiling methods. To close this technological gap, we have designed dual-channel microarrays that monitor expression levels of 124 mammalian microRNAs. Using these tools, we observed distinct patterns of expression among adult mouse tissues and embryonic stem cells. Expression profiles of staged embryos demonstrate temporal regulation of a large class of microRNAs, including members of the let-7 family. This microarray technology enables comprehensive investigation of microRNA expression, and furthers our understanding of this class of recently discovered noncoding RNAs.", "title": "" }, { "docid": "1904d8b3c45bc24acdc0294d84d66c79", "text": "The propagation of unreliable information is on the rise in many places around the world. This expansion is facilitated by the rapid spread of information and anonymity granted by the Internet. The spread of unreliable information is a well-studied issue and it is associated with negative social impacts. In a previous work, we have identified significant differences in the structure of news articles from reliable and unreliable sources in the US media. Our goal in this work was to explore such differences in the Brazilian media. We found significant features in two data sets: one with Brazilian news in Portuguese and another one with US news in English. Our results show that features related to the writing style were prominent in both data sets and, despite the language difference, some features have a universal behavior, being significant to both US and Brazilian news articles. 
Finally, we combined both data sets and used the universal features to build a machine learning classifier to predict the source type of a news article as reliable or unreliable.", "title": "" }, { "docid": "ff8f909eb2212a032781c795ee483954", "text": "We investigate the market for news under two assumptions: that readers hold beliefs which they like to see confirmed, and that newspapers can slant stories toward these beliefs. We show that, on the topics where readers share common beliefs, one should not expect accuracy even from competitive media: competition results in lower prices, but common slanting toward reader biases. On topics where reader beliefs diverge (such as politically divisive issues), however, newspapers segment the market and slant toward extreme positions. Yet in the aggregate, a reader with access to all news sources could get an unbiased perspective. Generally speaking, reader heterogeneity is more important for accuracy in media than competition per se. (JEL D23, L82)", "title": "" }, { "docid": "0f208f41314384a1c34d32224e790664", "text": "BACKGROUND\nThe Rey 15-Item Memory Test (RMT) is frequently used to detect malingering. Many objections to the test have been raised. Nevertheless, the test is still widely used.\n\n\nOBJECTIVE\nTo provide a meta-analysis of the available studies using the RMT and provide an overall assessment of the sensitivity and specificity of the test, based on the cumulative data.\n\n\nRESULTS\nThe results show that, excluding patients with mental retardation, the RMT has a low sensitivity but an excellent specificity.\n\n\nCONCLUSIONS\nThese results provide the basis for the ongoing use of the test, given that it is acceptable to miss some cases of malingering with such a screening test, but one does not want to have many false positives.", "title": "" }, { "docid": "2784de025936e2c9a9a0e86753281f8b", "text": "Cardiovascular disease remains the leading cause of disease burden globally, which underlies the continuing need to identify new complementary targets for prevention. Over the past 5–10 years, the pooling of multiple data sets into 'mega-studies' has accelerated progress in research on stress as a risk and prognostic factor for cardiovascular disease. Severe stressful experiences in childhood, such as physical abuse and household substance abuse, can damage health and increase the risk of multiple chronic conditions in adulthood. Compared with childhood stress and adulthood classic risk factors, such as smoking, high blood pressure, and high serum cholesterol levels, the harmful effects of stress in adulthood are generally less marked. However, adulthood stress has an important role as a disease trigger in individuals who already have a high atherosclerotic plaque burden, and as a determinant of prognosis and outcome in those with pre-existing cardiovascular or cerebrovascular disease. In real-life settings, mechanistic studies have corroborated earlier laboratory-based observations on stress-related pathophysiological changes that underlie triggering, such as lowered arrhythmic threshold and increased sympathetic activation with related increases in blood pressure, as well as pro-inflammatory and procoagulant responses. In some clinical guidelines, stress is already acknowledged as a target for prevention for people at high overall risk of cardiovascular disease or with established cardiovascular disease. 
However, few scalable, evidence-based interventions are currently available.", "title": "" }, { "docid": "dd057cd10948a7c894523c5f0b452965", "text": "This paper presents an approach to learn meaningful spatial relationships in an unsupervised fashion from the distribution of 3D object poses in the real world. Our approach begins by extracting an over-complete set of features to describe the relative geometry of two objects. Each relationship type is modeled using a relevance-weighted distance over this feature space. This effectively ignores irrelevant feature dimensions. Our algorithm RANSEM for determining subsets of data that share a relationship as well as the model to describe each relationship is based on robust sample-based clustering. This approach combines the search for consistent groups of data with the extraction of models that precisely capture the geometry of those groups. An iterative refinement scheme has shown to be an effective approach for finding concepts of differing degrees of geometric specificity. Our results show that the models learned by our approach correlate strongly with the English labels that have been given by a human annotator to a set of validation data drawn from the NYUv2 real-world Kinect dataset, demonstrating that these concepts can be automatically acquired given sufficient experience. Additionally, the results of our method significantly out-perform K-means, a standard baseline for unsupervised cluster extraction.", "title": "" }, { "docid": "ec2acfbe9020b9a136a14c2be7d517dd", "text": "Cricket is a popular sport played by 16 countries, is the second most watched sport in the world after soccer, and enjoys a multi-million dollar industry. There is tremendous interest in simulating cricket and more importantly in predicting the outcome of games, particularly in their one-day international format. The complex rules governing the game, along with the numerous natural parameters affecting the outcome of a cricket match present significant challenges for accurate prediction. Multiple diverse parameters, including but not limited to cricketing skills and performances, match venues and even weather conditions can significantly affect the outcome of a game. The sheer number of parameters, along with their interdependence and variance create a non-trivial challenge to create an accurate quantitative model of a game Unlike other sports such as basketball and baseball which are well researched from a sports analytics perspective, for cricket, these tasks have yet to be investigated in depth. In this paper, we build a prediction system that takes in historical match data as well as the instantaneous state of a match, and predicts future match events culminating in a victory or loss. We model the game using a subset of match parameters, using a combination of linear regression and nearestneighbor clustering algorithms. We describe our model and algorithms and finally present quantitative results, demonstrating the performance of our algorithms in predicting the number of runs scored, one of the most important determinants of match outcome.", "title": "" }, { "docid": "542117c3e27d15163b809a528952fb79", "text": "Predicting the gap between taxi demand and supply in taxi booking apps is completely new and important but challenging. However, manually mining gap rule for different conditions may become impractical because of massive and sparse taxi data. 
Existing works unilaterally consider demand or supply, used only few simple features and verified by little data, but not predict the gap value. Meanwhile, none of them dealing with missing values. In this paper, we introduce a Double Ensemble Gradient Boosting Decision Tree Model(DEGBDT) to predict taxi gap. (1) Our approach specifically considers demand and supply to predict the gap between them. (2) Also, our method provides a greedy feature ranking and selecting method to exploit most reliable feature. (3) To deal with missing value, our model takes the lead in proposing a double ensemble method, which secondarily integrates different Gradient Boosting Decision Tree(GBDT) model at the different data sparse situation. Experiments on real large-scale dataset demonstrate that our approach can effectively predict the taxi gap than state-of-the-art methods, and shows that double ensemble method is efficacious for sparse data.", "title": "" }, { "docid": "c9ae0fca2ddd718b905283741a93a254", "text": "A unified power control strategy is proposed for the permanent magnet synchronous generator-based wind energy conversion system (WECS) operating under different grid conditions. In the strategy, the generator-side converter is used to control the dc-link voltage and the grid-side converter is responsible for the control of power flow injected into the grid. The generator-side controller has inherent damping capability of the torsional oscillations caused by drive-train characteristics. The grid-side control is utilized to satisfy the active and reactive current (power) requirements defined in the grid codes, and at the same time mitigates the current distortions even with unsymmetrical grid fault. During grid faults, the generator-side converter automatically reduces the generator current to maintain the dc voltage and the resultant generator acceleration is counteracted by pitch regulation. Compared with the conventional strategy, the dc chopper, which is intended to assist the fault ride through of the WECS, can be eliminated if the proposed scheme is employed. Compared with the variable-structured control scheme, the proposed strategy has quicker and more precise power responses, which is beneficial to the grid recovery. The simulation results verify the effectiveness of the proposed strategy.", "title": "" }, { "docid": "bb2b3944f72c0d1a530f971ddf6dc6fb", "text": "UNLABELLED\nAny suture material, absorbable or nonabsorbable, elicits a kind of inflammatory reaction within the tissue. Nonabsorbable black silk suture and absorbable polyglycolic acid suture were compared clinically and histologically on various parameters.\n\n\nMATERIALS AND METHODS\nThis study consisted of 50 patients requiring minor surgical procedure, who were referred to the Department of Oral and Maxillofacial Surgery. Patients were selected randomly and sutures were placed in the oral cavity 7 days preoperatively. Polyglycolic acid was placed on one side and black silk suture material on the other. Seven days later, prior to surgical procedure the sutures will be assessed. After the surgical procedure the sutures will be placed postoperatively in the same way for 7 days, after which the sutures will be assessed clinically and histologically.\n\n\nRESULTS\nThe results of this study showed that all the sutures were retained in case of polyglycolic acid suture whereas four cases were not retained in case of black silk suture. 
As far as polyglycolic acid suture is concerned 25 cases were mild, 18 cases moderate and seven cases were severe. Black silk showed 20 mild cases, 21 moderate cases and six severe cases. The histological results showed that 33 cases showed mild, 14 cases moderate and three cases severe in case of polyglycolic acid suture. Whereas in case of black silk suture 41 cases were mild. Seven cases were moderate and two cases were severe. Black silk showed milder response than polyglycolic acid suture histologically.\n\n\nCONCLUSION\nThe polyglycolic acid suture was more superior because in all 50 patients the suture was retained. It had less tissue reaction, better handling characteristics and knotting capacity.", "title": "" }, { "docid": "a3b4e8b4a54921da210b42e43fc2e7ff", "text": "CONTEXT\nRecent reports show that obesity and diabetes have increased in the United States in the past decade.\n\n\nOBJECTIVE\nTo estimate the prevalence of obesity, diabetes, and use of weight control strategies among US adults in 2000.\n\n\nDESIGN, SETTING, AND PARTICIPANTS\nThe Behavioral Risk Factor Surveillance System, a random-digit telephone survey conducted in all states in 2000, with 184 450 adults aged 18 years or older.\n\n\nMAIN OUTCOME MEASURES\nBody mass index (BMI), calculated from self-reported weight and height; self-reported diabetes; prevalence of weight loss or maintenance attempts; and weight control strategies used.\n\n\nRESULTS\nIn 2000, the prevalence of obesity (BMI >/=30 kg/m(2)) was 19.8%, the prevalence of diabetes was 7.3%, and the prevalence of both combined was 2.9%. Mississippi had the highest rates of obesity (24.3%) and of diabetes (8.8%); Colorado had the lowest rate of obesity (13.8%); and Alaska had the lowest rate of diabetes (4.4%). Twenty-seven percent of US adults did not engage in any physical activity, and another 28.2% were not regularly active. Only 24.4% of US adults consumed fruits and vegetables 5 or more times daily. Among obese participants who had had a routine checkup during the past year, 42.8% had been advised by a health care professional to lose weight. Among participants trying to lose or maintain weight, 17.5% were following recommendations to eat fewer calories and increase physical activity to more than 150 min/wk.\n\n\nCONCLUSIONS\nThe prevalence of obesity and diabetes continues to increase among US adults. Interventions are needed to improve physical activity and diet in communities nationwide.", "title": "" }, { "docid": "534554ae5913f192d32efd93256488d6", "text": "Several unclassified web services are available in the internet which is difficult for the user to choose the correct web services. This raises service discovery cost, transforming data time between services and service searching time. Adequate methods, tools, technologies for clustering the web services have been developed. The clustering of web services is done manually. This survey is organized based on clustering of web service discovery methods, tools and technologies constructed on following list of parameters. The parameters are clustering model, graphs and environment, different technologies, advantages and disadvantages, theory and proof of concepts. Based on the user requirements results are different and better than one another. If the web service clustering is done automatically that can create an impact in the service discovery and fulfills the user requirements. 
This article gives the overview of the significant issues of the different methods and discusses the lack of technologies and automatic tools of the web service discovery.", "title": "" }, { "docid": "ed1a3ca3e558eeb33e2841fa4b9c28d2", "text": "© 2010 ETRI Journal, Volume 32, Number 4, August 2010 In this paper, we present a low-voltage low-dropout voltage regulator (LDO) for a system-on-chip (SoC) application which, exploiting the multiplication of the Miller effect through the use of a current amplifier, is frequency compensated up to 1-nF capacitive load. The topology and the strategy adopted to design the LDO and the related compensation frequency network are described in detail. The LDO works with a supply voltage as low as 1.2 V and provides a maximum load current of 50 mA with a drop-out voltage of 200 mV: the total integrated compensation capacitance is about 40 pF. Measurement results as well as comparison with other SoC LDOs demonstrate the advantage of the proposed topology.", "title": "" }, { "docid": "b908987c5bae597683f177beb2bba896", "text": "This paper presents a novel task of cross-language authorship attribution (CLAA), an extension of authorship attribution task to multilingual settings: given data labelled with authors in language X , the objective is to determine the author of a document written in language Y , where X 6= Y . We propose a number of cross-language stylometric features for the task of CLAA, such as those based on sentiment and emotional markers. We also explore an approach based on machine translation (MT) with both lexical and cross-language features. We experimentally show that MT could be used as a starting point to CLAA, since it allows good attribution accuracy to be achieved. The cross-language features provide acceptable accuracy while using jointly with MT, though do not outperform lexical", "title": "" } ]
scidocsrr
a415503ceb55bfe061cf67864f66da36
Insight and reduction of MapReduce stragglers in heterogeneous environment
[ { "docid": "8222f36e2aa06eac76085fb120c8edab", "text": "Small jobs, that are typically run for interactive data analyses in datacenters, continue to be plagued by disproportionately long-running tasks called stragglers. In the production clusters at Facebook and Microsoft Bing, even after applying state-of-the-art straggler mitigation techniques, these latency sensitive jobs have stragglers that are on average 8 times slower than the median task in that job. Such stragglers increase the average job duration by 47%. This is because current mitigation techniques all involve an element of waiting and speculation. We instead propose full cloning of small jobs, avoiding waiting and speculation altogether. Cloning of small jobs only marginally increases utilization because workloads show that while the majority of jobs are small, they only consume a small fraction of the resources. The main challenge of cloning is, however, that extra clones can cause contention for intermediate data. We use a technique, delay assignment, which efficiently avoids such contention. Evaluation of our system, Dolly, using production workloads shows that the small jobs speedup by 34% to 46% after state-of-the-art mitigation techniques have been applied, using just 5% extra resources for cloning.", "title": "" } ]
[ { "docid": "b9538c45fc55caff8b423f6ecc1fe416", "text": " Summary. The Probabilistic I/O Automaton model of [31] is used as the basis for a formal presentation and proof of the randomized consensus algorithm of Aspnes and Herlihy. The algorithm guarantees termination within expected polynomial time. The Aspnes-Herlihy algorithm is a rather complex algorithm. Processes move through a succession of asynchronous rounds, attempting to agree at each round. At each round, the agreement attempt involves a distributed random walk. The algorithm is hard to analyze because of its use of nontrivial results of probability theory (specifically, random walk theory which is based on infinitely many coin flips rather than on finitely many coin flips), because of its complex setting, including asynchrony and both nondeterministic and probabilistic choice, and because of the interplay among several different sub-protocols. We formalize the Aspnes-Herlihy algorithm using probabilistic I/O automata. In doing so, we decompose it formally into three subprotocols: one to carry out the agreement attempts, one to conduct the random walks, and one to implement a shared counter needed by the random walks. Properties of all three subprotocols are proved separately, and combined using general results about automaton composition. It turns out that most of the work involves proving non-probabilistic properties (invariants, simulation mappings, non-probabilistic progress properties, etc.). The probabilistic reasoning is isolated to a few small sections of the proof. The task of carrying out this proof has led us to develop several general proof techniques for probabilistic I/O automata. These include ways to combine expectations for different complexity measures, to compose expected complexity properties, to convert probabilistic claims to deterministic claims, to use abstraction mappings to prove probabilistic properties, and to apply random walk theory in a distributed computational setting. We apply all of these techniques to analyze the expected complexity of the algorithm.", "title": "" }, { "docid": "5183794d8bef2d8f2ee4048d75a2bd3c", "text": "Uncovering the topics within short texts, such as tweets and instant messages, has become an important task for many content analysis applications. However, directly applying conventional topic models (e.g. LDA and PLSA) on such short texts may not work well. The fundamental reason lies in that conventional topic models implicitly capture the document-level word co-occurrence patterns to reveal topics, and thus suffer from the severe data sparsity in short documents. In this paper, we propose a novel way for modeling topics in short texts, referred as biterm topic model (BTM). Specifically, in BTM we learn the topics by directly modeling the generation of word co-occurrence patterns (i.e. biterms) in the whole corpus. The major advantages of BTM are that 1) BTM explicitly models the word co-occurrence patterns to enhance the topic learning; and 2) BTM uses the aggregated patterns in the whole corpus for learning topics to solve the problem of sparse word co-occurrence patterns at document-level. We carry out extensive experiments on real-world short text collections. The results demonstrate that our approach can discover more prominent and coherent topics, and significantly outperform baseline methods on several evaluation metrics. 
Furthermore, we find that BTM can outperform LDA even on normal texts, showing the potential generality and wider usage of the new topic model.", "title": "" }, { "docid": "d612ca22b9895c0e85f2b64327a1b22c", "text": "Physical inactivity has been associated with increasing prevalence and mortality of cardiovascular and other diseases. The purpose of this study is to identify if there is an association between, self–efficacy, mental health, and physical inactivity among university students. The study comprises of 202 males and 692 females age group 18-25 years drawn from seven faculties selected using a table of random numbers. Questionnaires were used for the data collection. The findings revealed that the prevalence of physical inactivity among the respondents was 41.4%. Using a univariate analysis, the study showed that there was an association between gender (female), low family income, low self-efficacy, respondents with mental health probable cases and physical inactivity (p<0.05).Using a multivariate analysis, physical inactivity was higher among females(OR = 3.72, 95% CI = 2.399-5.788), low family income (OR = 4.51, 95% CI = 3.266 – 6.241), respondents with mental health probable cases (OR = 1.58, 95% CI = 1.1362.206) and low self-efficacy for pysical activity(OR = 1.86, 95% CI = 1.350 2.578).Conclusively there is no significant decrease in physical inactivity among university students when compared with previous studies in this population, it is therefore recommended that counselling on mental health, physical activity awareness among new university students should be encouraged. Keyword:Exercise,Mental Health, Self-Efficacy,Physical Inactivity, University students", "title": "" }, { "docid": "0188eb4ef8a87b6cee8657018360fa69", "text": "This paper presents a pattern division multiple access (PDMA) concept for cellular future radio access (FRA) towards the 2020s information society. Different from the current LTE radio access scheme (until Release 11), PDMA is a novel non-orthogonal multiple access technology based on the total optimization of multiple user communication system. It considers joint design from both transmitter and receiver. At the receiver, multiple users are detected by successive interference cancellation (SIC) detection method. Numerical results show that the PDMA system based on SIC improve the average sum rate of users over the orthogonal system with affordable complexity.", "title": "" }, { "docid": "7b989f3da78e75d9616826644d210b79", "text": "BACKGROUND\nUse of cannabis is often an under-reported activity in our society. Despite legal restriction, cannabis is often used to relieve chronic and neuropathic pain, and it carries psychotropic and physical adverse effects with a propensity for addiction. This article aims to update the current knowledge and evidence of using cannabis and its derivatives with a view to the sociolegal context and perspectives for future research.\n\n\nMETHODS\nCannabis use can be traced back to ancient cultures and still continues in our present society despite legal curtailment. The active ingredient, Δ9-tetrahydrocannabinol, accounts for both the physical and psychotropic effects of cannabis. Though clinical trials demonstrate benefits in alleviating chronic and neuropathic pain, there is also significant potential physical and psychotropic side-effects of cannabis. 
Recent laboratory data highlight synergistic interactions between cannabinoid and opioid receptors, with potential reduction of drug-seeking behavior and opiate sparing effects. Legal rulings also have changed in certain American states, which may lead to wider use of cannabis among eligible persons.\n\n\nCONCLUSIONS\nFamily physicians need to be cognizant of such changing landscapes with a practical knowledge on the pros and cons of medical marijuana, the legal implications of its use, and possible developments in the future.", "title": "" }, { "docid": "969c83b4880879f1137284f531c9f94a", "text": "The extant literature on cross-national differences in approaches to corporate social responsibility (CSR) has mostly focused on developed countries. Instead, we offer two interrelated studies into corporate codes of conduct issued by developing country multinational enterprises (DMNEs). First, we analyse code adoption rates and code content through a mixed methods design. Second, we use multilevel analyses to examine country-level drivers of", "title": "" }, { "docid": "ad004dd47449b977cd30f2454c5af77a", "text": "Plants are a tremendous source for the discovery of new products of medicinal value for drug development. Today several distinct chemicals derived from plants are important drugs currently used in one or more countries in the world. Many of the drugs sold today are simple synthetic modifications or copies of the naturally obtained substances. The evolving commercial importance of secondary metabolites has in recent years resulted in a great interest in secondary metabolism, particularly in the possibility of altering the production of bioactive plant metabolites by means of tissue culture technology. Plant cell culture technologies were introduced at the end of the 1960’s as a possible tool for both studying and producing plant secondary metabolites. Different strategies, using an in vitro system, have been extensively studied to improve the production of plant chemicals. The focus of the present review is the application of tissue culture technology for the production of some important plant pharmaceuticals. Also, we describe the results of in vitro cultures and production of some important secondary metabolites obtained in our laboratory.", "title": "" }, { "docid": "037df2435ae0f995a40d5cce429af5cb", "text": "Breakthroughs from the field of deep learning are radically changing how sensor data are interpreted to extract important information to help advance healthcare, make our cities smarter, and innovate in smart home technology. Deep convolutional neural networks, which are at the heart of many emerging Internet-of-Things (IoT) applications, achieve remarkable performance in audio and visual recognition tasks, at the expense of high computational complexity in convolutional layers, limiting their deployability. In this paper, we present an easy-to-implement acceleration scheme, named ADaPT, which can be applied to already available pre-trained networks. Our proposed technique exploits redundancy present in the convolutional layers to reduce computation and storage requirements. Additionally, we also decompose each convolution layer into two consecutive one-dimensional stages to make full use of the approximate model. This technique can easily be applied to existing low power processors, GPUs or new accelerators. We evaluated this technique using four diverse and widely used benchmarks, on hardware ranging from embedded CPUs to server GPUs. 
Our experiments show an average 3-5x speed-up in all deep models and a maximum 8-9x speed-up on many individual convolutional layers. We demonstrate that unlike iterative pruning based methodology, our approximation technique is mathematically well grounded, robust, does not require any time-consuming retraining, and still achieves speed-ups solely from convolutional layers with no loss in baseline accuracy.", "title": "" }, { "docid": "3df95e4b2b1bb3dc80785b25c289da92", "text": "The problem of efficiently locating previously known patterns in a time series database (i.e., query by content) has received much attention and may now largely be regarded as a solved problem. However, from a knowledge discovery viewpoint, a more interesting problem is the enumeration of previously unknown, frequently occurring patterns. We call such patterns “motifs”, because of their close analogy to their discrete counterparts in computation biology. An efficient motif discovery algorithm for time series would be useful as a tool for summarizing and visualizing massive time series databases. In addition it could be used as a subroutine in various other data mining tasks, including the discovery of association rules, clustering and classification. In this work we carefully motivate, then introduce, a nontrivial definition of time series motifs. We propose an efficient algorithm to discover them, and we demonstrate the utility and efficiency of our approach on several real world datasets.", "title": "" }, { "docid": "d4766ccd502b9c35ee83631fadc69aaf", "text": "The approach proposed by Śliwerski, Zimmermann, and Zeller (SZZ) for identifying bug-introducing changes is at the foundation of several research areas within the software engineering discipline. Despite the foundational role of SZZ, little effort has been made to evaluate its results. Such an evaluation is a challenging task because the ground truth is not readily available. By acknowledging such challenges, we propose a framework to evaluate the results of alternative SZZ implementations. The framework evaluates the following criteria: (1) the earliest bug appearance, (2) the future impact of changes, and (3) the realism of bug introduction. We use the proposed framework to evaluate five SZZ implementations using data from ten open source projects. We find that previously proposed improvements to SZZ tend to inflate the number of incorrectly identified bug-introducing changes. We also find that a single bug-introducing change may be blamed for introducing hundreds of future bugs. Furthermore, we find that SZZ implementations report that at least 46 percent of the bugs are caused by bug-introducing changes that are years apart from one another. Such results suggest that current SZZ implementations still lack mechanisms to accurately identify bug-introducing changes. Our proposed framework provides a systematic mean for evaluating the data that is generated by a given SZZ implementation.", "title": "" }, { "docid": "5c8ab947856945b32d4d3e0edc89a9e0", "text": "While MOOCs offer educational data on a new scale, many educators find great potential of the big data including detailed activity records of every learner. A learner's behavior such as if a learner will drop out from the course can be predicted. How to provide an effective, economical, and scalable method to detect cheating on tests such as surrogate exam-taker is a challenging problem. 
In this paper, we present a grade predicting method that uses student activity features to predict whether a learner may get a certification if he/she takes a test. The method consists of two-step classifications: motivation classification (MC) and grade classification (GC). The MC divides all learners into three groups including certification earning, video watching, and course sampling. The GC then predicts a certification earning learner may or may not obtain a certification. Our experiment shows that the proposed method can fit the classification model at a fine scale and it is possible to find a surrogate exam-taker.", "title": "" }, { "docid": "5ca75490c015685a1fc670b2ee5103ff", "text": "The motion of the hand is the result of a complex interaction of extrinsic and intrinsic muscles of the forearm and hand. Whereas the origin of the extrinsic hand muscles is mainly located in the forearm, the origin (and insertion) of the intrinsic muscles is located within the hand itself. The intrinsic muscles of the hand include the lumbrical muscles I to IV, the dorsal and palmar interosseous muscles, the muscles of the thenar eminence (the flexor pollicis brevis, the abductor pollicis brevis, the adductor pollicis, and the opponens pollicis), as well as the hypothenar muscles (the abductor digiti minimi, flexor digiti minimi, and opponens digiti minimi). The thenar muscles control the motion of the thumb, and the hypothenar muscles control the motion of the little finger.1,2 The intrinsic muscles of the hand have not received much attention in the radiologic literature, despite their importance in moving the hand.3–7 Prospective studies on magnetic resonance (MR) imaging of the intrinsic muscles of the hand are rare, especially with a focus on new imaging techniques.6–8 However, similar to the other skeletal muscles, the intrinsic muscles of the hand can be affected by many conditions with resultant alterations in MR signal intensity ormorphology (e.g., with congenital abnormalities, inflammation, infection, trauma, neurologic disorders, and neoplastic conditions).1,9–12 MR imaging plays an important role in the evaluation of skeletal muscle disorders. Considered the most reliable diagnostic imaging tool, it can show subtle changes of signal and morphology, allow reliable detection and documentation of abnormalities, as well as provide a clear baseline for follow-up studies.13 It is also observer independent and allows second-opinion evaluation that is sometimes necessary, for example before a multidisciplinary discussion. Few studies exist on the clinical impact of MR imaging of the intrinsic muscles of the hand. A study by Andreisek et al in 19 patients with clinically evident or suspected intrinsic hand muscle abnormalities showed that MR imaging of the hand is useful and correlates well with clinical findings in patients with posttraumatic syndromes, peripheral neuropathies, myositis, and tumorous lesions, as well as congenital abnormalities.14,15 Because there is sparse literature on the intrinsic muscles of the hand, this review article offers a comprehensive review of muscle function and anatomy, describes normal MR imaging anatomy, and shows a spectrum of abnormal imaging findings.", "title": "" }, { "docid": "ac3d9b8a93cb18449b76b2f2ef818d76", "text": "Slotless brushless dc motors find more and more applications due to their high performance and their low production cost. 
This paper focuses on the windings inserted in the air gap of these motors and, in particular, to an original production technique that consists in printing them on a flexible printed circuit board. It theoretically shows that this technique, when coupled with an optimization of the winding shape, can improve the power density of about 23% compared with basic skewed and rhombic windings made of round wire. It also presents a first prototype of a winding realized using this technique and an experimental characterization aimed at identifying the importance and the origin of the differences between theory and practice.", "title": "" }, { "docid": "dffc11786d4a0d9247e22445f48d8fca", "text": "Tuberization in potato (Solanum tuberosum L.) is a complex biological phenomenon which is affected by several environmental cues, genetic factors and plant nutrition. Understanding the regulation of tuber induction is essential to devise strategies to improve tuber yield and quality. It is well established that short-day photoperiods promote tuberization, whereas long days and high-temperatures inhibit or delay tuberization. Worldwide research on this complex biological process has yielded information on the important bio-molecules (proteins, RNAs, plant growth regulators) associated with the tuberization process in potato. Key proteins involved in the regulation of tuberization include StSP6A, POTH1, StBEL5, StPHYB, StCONSTANS, Sucrose transporter StSUT4, StSP5G, etc. Biomolecules that become transported from \"source to sink\" have also been suggested to be important signaling candidates regulating the tuberization process in potatos. Four molecules, namely StSP6A protein, StBEL5 RNA, miR172 and GAs, have been found to be the main candidates acting as mobile signals for tuberization. These biomolecules can be manipulated (overexpressed/inhibited) for improving the tuberization in commercial varieties/cultivars of potato. In this review, information about the genes/proteins and their mechanism of action associated with the tuberization process is discussed.", "title": "" }, { "docid": "926734e0a379f678740d07c1042a5339", "text": "The increasing pervasiveness of digital technologies, also refered to as \"Internet of Things\" (IoT), offers a wealth of business model opportunities, which often involve an ecosystem of partners. In this context, companies are required to look at business models beyond a firm-centric lens and respond to changed dynamics. However, extant literature has not yet provided actionable approaches for business models for IoT-driven environments. Our research therefore addresses the need for a business model framework that captures the specifics of IoT-driven ecosystems. Applying an iterative design science research approach, the present paper describes (a) the methodology, (b) the requirements, (c) the design and (d) the evaluation of a business model framework that enables researchers and practitioners to visualize, analyze and design business models in the IoT context in a structured and actionable way. The identified dimensions in the framework include the value network of collaborating partners (who); sources of value creation (where); benefits from collaboration (why). 
Evidence from action research and multiple case studies indicates that the framework is able to depict business models in IoT.", "title": "" }, { "docid": "35c08abd57d2700164373c688c24b2a6", "text": "Image enhancement is a common pre-processing step before the extraction of biometric features from a fingerprint sample. This can be essential especially for images of low image quality. An ideal fingerprint image enhancement should intend to improve the end-to-end biometric performance, i.e. the performance achieved on biometric features extracted from enhanced fingerprint samples. We use a model from Deep Learning for the task of image enhancement. This work's main contribution is a dedicated cost function which is optimized during training The cost function takes into account the biometric feature extraction. Our approach intends to improve the accuracy and reliability of the biometric feature extraction process: No feature should be missed and all features should be extracted as precise as possible. By doing so, the loss function forced the image enhancement to learn how to improve the suitability of a fingerprint sample for a biometric comparison process. The effectivity of the cost function was demonstrated for two different biometric feature extraction algorithms.", "title": "" }, { "docid": "a870b0b347d15d8e8c788ede7ff5fa4a", "text": "On the twentieth anniversary of the original publication [10], following ten years of intense activity in the research literature, hardware support for transactional memory (TM) has finally become a commercial reality, with HTM-enabled chips currently or soon-to-be available from many hardware vendors. In this paper we describe architectural support for TM added to a future version of the Power ISA#8482;. Two imperatives drove the development: the desire to complement our weakly-consistent memory model with a more friendly interface to simplify the development and porting of multithreaded applications, and the need for robustness beyond that of some early implementations. In the process of commercializing the feature, we had to resolve some previously unexplored interactions between TM and existing features of the ISA, for example translation shootdown, interrupt handling, atomic read-modify-write primitives, and our weakly consistent memory model. We describe these interactions, the overall architecture, and discuss the motivation and rationale for our choices of architectural semantics, beyond what is typically found in reference manuals.", "title": "" }, { "docid": "9c008dc2f3da4453317ce92666184da0", "text": "In embedded system design, there is an increasing demand for modeling techniques that can provide both accurate measurements of delay and fast simulation speed. Modeling latency effects of a cache can greatly increase accuracy of the simulation and assist developers to optimize their software. Current solutions have not succeeded in balancing three important factors: speed, accuracy and usability. In this research, we created a cache simulation module inside a well-known instruction set simulator QEMU. Our implementation can simulate various cases of cache configuration and obtain every memory access. In full system simulation, speed is kept at around 73 MIPS on a personal host computer which is close to native execution of ARM Cortex-M3(125 MIPS at 100 MHz). 
Compared to the widely used cache simulation tool, Valgrind, our simulator is three times faster.", "title": "" }, { "docid": "5d9106a06f606cefb3b24fb14c72d41a", "text": "Most existing relation extraction models make predictions for each entity pair locally and individually, while ignoring implicit global clues available in the knowledge base, sometimes leading to conflicts among local predictions from different entity pairs. In this paper, we propose a joint inference framework that utilizes these global clues to resolve disagreements among local predictions. We exploit two kinds of clues to generate constraints which can capture the implicit type and cardinality requirements of a relation. Experimental results on three datasets, in both English and Chinese, show that our framework outperforms the state-of-the-art relation extraction models when such clues are applicable to the datasets. We also find that the clues learnt automatically from existing knowledge bases perform comparably to those refined by humans.", "title": "" } ]
scidocsrr
eb97c4e814cfff02c7fc273eab5218f0
3D region segmentation using topological persistence
[ { "docid": "6ed624fa056d1f92cc8e58401ab3036e", "text": "In this paper, we present an approach to segment 3D point cloud data using ideas from persistent homology theory. The proposed algorithms first generate a simplicial complex representation of the point cloud dataset. Next, we compute the zeroth homology group of the complex which corresponds to the number of connected components. Finally, we extract the clusters of each connected component in the dataset. We show that this technique has several advantages over state of the art methods such as the ability to provide a stable segmentation of point cloud data under noisy or poor sampling conditions and its independence of a fixed distance metric.", "title": "" } ]
[ { "docid": "548ca7ecd778bc64e4a3812acd73dcfb", "text": "Inference algorithms of latent Dirichlet allocation (LDA), either for small or big data, can be broadly categorized into expectation-maximization (EM), variational Bayes (VB) and collapsed Gibbs sampling (GS). Looking for a unified understanding of these different inference algorithms is currently an important open problem. In this paper, we revisit these three algorithms from the entropy perspective, and show that EM can achieve the best predictive perplexity (a standard performance metric for LDA accuracy) by minimizing directly the cross entropy between the observed word distribution and LDA's predictive distribution. Moreover, EM can change the entropy of LDA's predictive distribution through tuning priors of LDA, such as the Dirichlet hyperparameters and the number of topics, to minimize the cross entropy with the observed word distribution. Finally, we propose the adaptive EM (AEM) algorithm that converges faster and more accurate than the current state-of-the-art SparseLDA [20] and AliasLDA [12] from small to big data and LDA models. The core idea is that the number of active topics, measured by the residuals between E-steps at successive iterations, decreases significantly, leading to the amortized σ(1) time complexity in terms of the number of topics. The open source code of AEM is available at GitHub.", "title": "" }, { "docid": "40128351f90abde13925799756dc1511", "text": "A new field of forensic accounting has emerged as current practices have been changed in electronic business environment and rapidly increasing fraudulent activities. Despite taking many forms, the fraud is usually theft of funds and information or misuse of someone's information assets. As financial frauds prevail in digital environment, accountants are the most helpful people to investigate them. However, forensic accountants in digital environment, usually called fraud investigators or fraud examiners, must be specially trained to investigate and report digital evidences in the courtroom. In this paper, the authors researched the case of financial fraud forensic analysis of the Microsoft Excel file, as it is very often used in financial reporting. We outlined some of the well-known difficulties involved in tracing the fraudster activities throughout extracted Excel file metadata, and applied a different approach from that well-described in classic postmortem computer system forensic analysis or in data mining techniques application. In the forensic examination steps we used open source code, Deft 7.1 (Digital evidence & forensic toolkit) and verified results by the other forensic tools, Meld a visual diff and merge tool to compare files and directories and KDiff tool, too. We proposed an integrated forensic accounting, functional model as a combined accounting, auditing and digital forensic investigative process. Before this approach can be properly validated some future work needs to be done, too.", "title": "" }, { "docid": "e2302f7cd00b4c832a6a708dc6775739", "text": "This article provides theoretically and practically grounded assistance to companies that are today engaged primarily in non‐digital industries in the development and implementation of business models that use the Internet of Things. To that end, we investigate the role of the Internet in business models in general in the first section. 
We conclude that the significance of the Internet in business model innovation has increased steadily since the 1990s, that each new Internet wave has given rise to new digital business model patterns, and that the biggest breakthroughs to date have been made in digital industries. In the second section, we show that digital business model patterns have now become relevant in physical industries as well. The separation between physical and digital industries is now consigned to the past. The key to this transformation is the Internet of Things which makes possible hybrid solutions that merge physical products and digital services. From this, we derive very general business model logic for the Internet of Things and some specific components and patterns for business models. Finally we sketch out the central challenges faced in implementing such hybrid business models and point to possible solutions. The Influence of the Internet on Business Models to Date", "title": "" }, { "docid": "6b4a4e5271f5a33d3f30053fc6c1a4ff", "text": "Based on environmental, legal, social, and economic factors, reverse logistics and closed-loop supply chain issues have attracted attention among both academia and practitioners. This attention is evident by the vast number of publications in scientific journals which have been published in recent years. Hence, a comprehensive literature review of recent and state-of-the-art papers is vital to draw a framework of the past, and to shed light on future directions. The aim of this paper is to review recently published papers in reverse logistic and closed-loop supply chain in scientific journals. A total of 382 papers published between January 2007 and March 2013 are selected and reviewed. The papers are then analyzed and categorized to construct a useful foundation of past research. Finally, gaps in the literature are identified to clarify and to suggest future research opportunities. 2014 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "0b18f7966a57e266487023d3a2f3549d", "text": "A clear andpowerfulformalism for describing languages, both natural and artificial, follows f iom a method for expressing grammars in logic due to Colmerauer and Kowalski. This formalism, which is a natural extension o f context-free grammars, we call \"definite clause grammars\" (DCGs). A DCG provides not only a description of a language, but also an effective means for analysing strings o f that language, since the DCG, as it stands, is an executable program o f the programming language Prolog. Using a standard Prolog compiler, the DCG can be compiled into efficient code, making it feasible to implement practical language analysers directly as DCGs. This paper compares DCGs with the successful and widely used augmented transition network (ATN) formalism, and indicates how ATNs can be translated into DCGs. It is argued that DCGs can be at least as efficient as ATNs, whilst the DCG formalism is clearer, more concise and in practice more powerful", "title": "" }, { "docid": "ae167d6e1ff2b1ee3bd23e3e02800fab", "text": "The aim of this paper is to improve the classification performance based on the multiclass imbalanced datasets. In this paper, we introduce a new resampling approach based on Clustering with sampling for Multiclass Imbalanced classification using Ensemble (C-MIEN). C-MIEN uses the clustering approach to create a new training set for each cluster. The new training sets consist of the new label of instances with similar characteristics. 
This step is applied to reduce the number of classes then the complexity problem can be easily solved by C-MIEN. After that, we apply two resampling techniques (oversampling and undersampling) to rebalance the class distribution. Finally, the class distribution of each training set is balanced and ensemble approaches are used to combine the models obtained with the proposed method through majority vote. Moreover, we carefully design the experiments and analyze the behavior of C-MIEN with different parameters (imbalance ratio and number of classifiers). The experimental results show that C-MIEN achieved higher performance than state-of-the-art methods.", "title": "" }, { "docid": "3bb905351ce1ea2150f37059ed256a90", "text": "A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research.", "title": "" }, { "docid": "b4c395b97f0482f3c1224ed6c8623ac2", "text": "The Scientific Computation Language (SCL) was designed mainly for developing computational models in education and research. This paper presents the justification for such a language, its relevant features, and a case study of a computational model implemented with the SCL.\n Development of the SCL language is part of the OOPsim project, which has had partial NSF support (CPATH). One of the goals of this project is to develop tools and approaches for designing and implementing computational models, emphasizing multi-disciplinary teams in the development process.\n A computational model is a computer implementation of the solution to a (scientific) problem for which a mathematical representation has been formulated. Developing a computational model consists of applying Computer Science concepts, principles and methods.\n The language syntax is defined at a higher level of abstraction than C, and includes language statements for improving program readability, debugging, maintenance, and correctness. The language design was influenced by Ada, Pascal, Eiffel, Java, C, and C++.\n The keywords have been added to maintain full compatibility with C. The SCL language translator is an executable program that is implemented as a one-pass language processor that generates C source code. The generated code can be integrated conveniently with any C and/or C++ library, on Linux and Windows (and MacOS). 
The semantics of SCL is informally defined to be the same C semantics.", "title": "" }, { "docid": "2f4a4c223c13c4a779ddb546b3e3518c", "text": "Machine learning systems trained on user-provided data are susceptible to data poisoning attacks, whereby malicious users inject false training data with the aim of corrupting the learned model. While recent work has proposed a number of attacks and defenses, little is understood about the worst-case loss of a defense in the face of a determined attacker. We address this by constructing approximate upper bounds on the loss across a broad family of attacks, for defenders that first perform outlier removal followed by empirical risk minimization. Our approximation relies on two assumptions: (1) that the dataset is large enough for statistical concentration between train and test error to hold, and (2) that outliers within the clean (nonpoisoned) data do not have a strong effect on the model. Our bound comes paired with a candidate attack that often nearly matches the upper bound, giving us a powerful tool for quickly assessing defenses on a given dataset. Empirically, we find that even under a simple defense, the MNIST-1-7 and Dogfish datasets are resilient to attack, while in contrast the IMDB sentiment dataset can be driven from 12% to 23% test error by adding only 3% poisoned data.", "title": "" }, { "docid": "81a45cb4ca02c38839a81ad567eb1491", "text": "Big data is often mined using clustering algorithms. Density-Based Spatial Clustering of Applications with Noise (DBSCAN) is a popular spatial clustering algorithm. However, it is computationally expensive and thus for clustering big data, parallel processing is required. The two prevalent paradigms for parallel processing are High-Performance Computing (HPC) based on Message Passing Interface (MPI) or Open Multi-Processing (OpenMP) and the newer big data frameworks such as Apache Spark or Hadoop. This report surveys for these two different paradigms publicly available implementations that aim at parallelizing DBSCAN and compares their performance. As a result, it is found that the big data implementations are not yet mature and in particular for skewed data, the implementation’s decomposition of the input data into parallel tasks has a huge influence on the performance in terms of running time.", "title": "" }, { "docid": "8aadc690d86ad4c015a4a82a32336336", "text": "The complexities of various search algorithms are considered in terms of time, space, and cost of the solution paths. • Brute-force search . Breadth-first search (BFS) . Depth-first search (DFS) . Depth-first Iterative-deepening (DFID) . Bi-directional search • Heuristic search: best-first search . A∗ . IDA∗ The issue of storing information in DISK instead of main memory. Solving 15-puzzle. TCG: DFID, 20121120, Tsan-sheng Hsu c © 2", "title": "" }, { "docid": "e740e5ff2989ce414836c422c45570a9", "text": "Many organizations desired to operate their businesses, works and services in a mobile (i.e. just in time and anywhere), dynamic, and knowledge-oriented fashion. Activities like e-learning, environmental learning, remote inspection, health-care, home security and safety mechanisms etc. requires a special infrastructure that might provide continuous, secured, reliable and mobile data with proper information/ knowledge management system in context to their confined environment and its users. 
An indefinite number of sensor networks for numerous healthcare applications has been designed and implemented but they all lacking extensibility, fault-tolerance, mobility, reliability and openness. Thus, an open, flexible and rearrangeable infrastructure is proposed for healthcare monitoring applications. Where physical sensors are virtualized as virtual sensors on cloud computing by this infrastructure and virtual sensors are provisioned automatically to end users whenever they required. In this paper we reviewed some approaches to hasten the service creations in field of healthcare and other applications with Cloud-Sensor architecture. This architecture provides services to end users without being worried about its implementation details. The architecture allows the service requesters to use the virtual sensors by themselves or they may create other new services by extending virtual sensors.", "title": "" }, { "docid": "459de602bf6e46ad4b752f2e51c81ffa", "text": "Self-adaptation is an essential feature of natural evolution. However, in the context of function optimization, self-adaptation features of evolutionary search algorithms have been explored mainly with evolution strategy (ES) and evolutionary programming (EP). In this paper, we demonstrate the self-adaptive feature of real-parameter genetic algorithms (GAs) using a simulated binary crossover (SBX) operator and without any mutation operator. The connection between the working of self-adaptive ESs and real-parameter GAs with the SBX operator is also discussed. Thereafter, the self-adaptive behavior of real-parameter GAs is demonstrated on a number of test problems commonly used in the ES literature. The remarkable similarity in the working principle of real-parameter GAs and self-adaptive ESs shown in this study suggests the need for emphasizing further studies on self-adaptive GAs.", "title": "" }, { "docid": "587f7821fc7ecfe5b0bbbd3b08b9afe2", "text": "The most commonly used method for cuffless blood pressure (BP) measurement is using pulse transit time (PTT), which is based on Moens-Korteweg (M-K) equation underlying the assumption that arterial geometries such as the arterial diameter keep unchanged. However, the arterial diameter is dynamic which varies over the cardiac cycle, and it is regulated through the contraction or relaxation of the vascular smooth muscle innervated primarily by the sympathetic nervous system. This may be one of the main reasons that impair the BP estimation accuracy. In this paper, we propose a novel indicator, the photoplethysmogram (PPG) intensity ratio (PIR), to evaluate the arterial diameter change. The deep breathing (DB) maneuver and Valsalva maneuver (VM) were performed on five healthy subjects for assessing parasympathetic and sympathetic nervous activities, respectively. Heart rate (HR), PTT, PIR and BP were measured from the simultaneously recorded electrocardiogram (ECG), PPG, and continuous BP. It was found that PIR increased significantly from inspiration to expiration during DB, whilst BP dipped correspondingly. Nevertheless, PIR changed positively with BP during VM. In addition, the spectral analysis revealed that the dominant frequency component of PIR, HR and SBP, shifted significantly from high frequency (HF) to low frequency (LF), but not obvious in that of PTT. These results demonstrated that PIR can be potentially used to evaluate the smooth muscle tone which modulates arterial BP in the LF range. 
The PTT-based BP measurement that take into account the PIR could therefore improve its estimation accuracy.", "title": "" }, { "docid": "ba4600c9c8e4c1bfcec9fa8fcde0f05c", "text": "While things (i.e., technologies) play a crucial role in creating and shaping meaningful, positive experiences, their true value lies only in the resulting experiences. It is about what we can do and experience with a thing, about the stories unfolding through using a technology, not about its styling, material, or impressive list of features. This paper explores the notion of \"experiences\" further: from the link between experiences, well-being, and people's developing post-materialistic stance to the challenges of the experience market and the experience-driven design of technology.", "title": "" }, { "docid": "1ed9f257129a45388fcf976b87e37364", "text": "Mobile cloud computing is an extension of cloud computing that allow the users to access the cloud service via their mobile devices. Although mobile cloud computing is convenient and easy to use, the security challenges are increasing significantly. One of the major issues is unauthorized access. Identity Management enables to tackle this issue by protecting the identity of users and controlling access to resources. Although there are several IDM frameworks in place, they are vulnerable to attacks like timing attacks in OAuth, malicious code attack in OpenID and huge amount of information leakage when user’s identity is compromised in Single Sign-On. Our proposed framework implicitly authenticates a user based on user’s typing behavior. The authentication information is encrypted into homomorphic signature before being sent to IDM server and tokens are used to authorize users to access the cloud resources. Advantages of our proposed framework are: user’s identity protection and prevention from unauthorized access.", "title": "" }, { "docid": "03b2876a4b62a6e10e8523cccc32452a", "text": "Millions of people regularly report the details of their real-world experiences on social media. This provides an opportunity to observe the outcomes of common and critical situations. Identifying and quantifying these outcomes may provide better decision-support and goal-achievement for individuals, and help policy-makers and scientists better understand important societal phenomena. We address several open questions about using social media data for open-domain outcome identification: Are the words people are more likely to use after some experience relevant to this experience? How well do these words cover the breadth of outcomes likely to occur for an experience? What kinds of outcomes are discovered? Studying 3-months of Twitter data capturing people who experienced 39 distinct situations across a variety of domains, we find that these outcomes are generally found to be relevant (55-100% on average) and that causally related concepts are more likely to be discovered than conceptual or semantically related concepts.", "title": "" }, { "docid": "ff08d2e0d53f2d9a7d49f0fdd820ec7a", "text": "Milk contains numerous nutrients. The content of n-3 fatty acids, the n-6/n-3 ratio, and short- and medium-chain fatty acids may promote positive health effects. In Western societies, cow’s milk fat is perceived as a risk factor for health because it is a source of a high fraction of saturated fatty acids. Recently, there has been increasing interest in donkey’s milk. 
In this work, the fat and energetic value and acidic composition of donkey’s milk, with reference to human nutrition, and their variations during lactation, were investigated. We also discuss the implications of the acidic profile of donkey’s milk on human nutrition. Individual milk samples from lactating jennies were collected 15, 30, 45, 60, 90, 120, 150, 180 and 210days after foaling, for the analysis of fat, proteins and lactose, which was achieved using an infrared milk analyser, and fatty acids composition by gas chromatography. The donkey’s milk was characterised by low fat and energetic (1719.2kJ·kg-1) values, a high polyunsaturated fatty acids (PUFA) content of mainly α-linolenic acid (ALA) and linoleic acid (LA), a low n-6 to n-3 FA ratio or LA/ALA ratio, and advantageous values of atherogenic and thrombogenic indices. Among the minor PUFA, docosahesaenoic (DHA), eicosapentanoic (EPA), and arachidonic (AA) acids were present in very small amounts (<1%). In addition, the AA/EPA ratio was low (0.18). The fat and energetic values decreased (P < 0.01) during lactation. The fatty acid patterns were affected by the lactation stage and showed a decrease (P < 0.01) in saturated fatty acids content and an increase (P < 0.01) in the unsaturated fatty acids content. The n-6 to n-3 ratio and the LA/ALA ratio were approximately 2:1, with values <1 during the last period of lactation, suggesting the more optimal use of milk during this period. The high level of unsaturated/saturated fatty acids and PUFA-n3 content and the low n-6/n-3 ratio suggest the use of donkey’s milk as a functional food for human nutrition and its potential utilisation for infant nutrition as well as adult diets, particular for the elderly.", "title": "" }, { "docid": "8cbd4a4adf82c385a6c821fde08d16e9", "text": "The internet of things (IOT) is the new revolution of internet after PCS and ServersClients communication now sensors, smart object, wearable devices, and smart phones are able to communicate. Everything surrounding us can talk to each other. life will be easier and smarter with smart environment, smart homes,smart cities and intelligent transport and healthcare.Billions of devices will be communicating wirelessly is a real huge challenge to our security and privacy.IOT requires efficient and effective security solutions which satisfies IOT requirements, The low power, small memory and limited computational capabilities . This paper addresses various standards, protocols and technologies of IOT and different security attacks which may compromise IOT security and privacy.", "title": "" }, { "docid": "de4d14afaf6a24fcd831e2a293c30fc3", "text": "Artistic style transfer can be thought as a process to generate different versions of abstraction of the original image. However, most of the artistic style transfer operators are not optimized for human faces thus mainly suffers from two undesirable features when applying them to selfies. First, the edges of human faces may unpleasantly deviate from the ones in the original image. Second, the skin color is far from faithful to the original one which is usually problematic in producing quality selfies. In this paper, we take a different approach and formulate this abstraction process as a gradient domain learning problem. We aim to learn a type of abstraction which not only achieves the specified artistic style but also circumvents the two aforementioned drawbacks thus highly applicable to selfie photography. 
We also show that our method can be directly generalized to videos with high inter-frame consistency. Our method is also robust to non-selfie images, and the generalization to various kinds of real-life scenes is discussed. We will make our code publicly available.", "title": "" } ]
scidocsrr
a4c43a33e3dc764786144dd80184562f
The Impact of Observational Learning and Electronic Word of Mouth on Consumer Purchase Decisions: The Moderating Role of Consumer Expertise and Consumer Involvement
[ { "docid": "def650b2d565f88a6404997e9e93d34f", "text": "Quality uncertainty and high search costs for identifying relevant information from an ocean of information may prevent customers from making purchases. Recognizing potential negative impacts of this search cost for quality information and relevant information, firms began to invest in creating a virtual community that enables consumers to share their opinions and experiences to reduce quality uncertainty, and in developing recommendation systems that help customers identify goods in which they might have an interest. However, not much is known regarding the effectiveness of these efforts. In this paper, we empirically investigate the impacts of recommendations and consumer feedback on sales based on data gathered from Amazon.com. Our results indicate that more recommendations indeed improve sales at Amazon.com; however, consumer ratings are not found to be related to sales. On the other hand, the number of consumer reviews is positively associated with sales. We also find that recommendations work better for less-popular books than for more-popular books. This is consistent with the search cost argument: a consumer’s search cost for less-popular books may be higher, and thus they may rely more on recommendations to locate a product of interest.", "title": "" }, { "docid": "ddad5569efe76dca3445e7e4d4aceafc", "text": "This study evaluates the current status of electronic word-of-mouth (eWOM) research through an exhaustive literature review of relevant articles. We have identified a total of 83 eWOM research articles published from 2001 through 2010. Published research into eWOM first appeared in peer-reviewed journals about ten years ago, and research has been steadily increasing. Among research topic areas, the impact of eWOM communication was the most researched topic in the last decade. We also found that individual and message were the two most used units of analysis in eWOM studies. Survey, secondary data analysis, and mathematical modeling were the three main streams of research methods. Finally, we found diverse theoretical approaches in understanding eWOM communication. We conclude this paper by identifying important trends in the eWOM literature to provide future research directions.", "title": "" }, { "docid": "c57cbe432fdab3f415d2c923bea905ff", "text": "Through Web-based consumer opinion platforms (e.g., epinions.com), the Internet enables customers to share their opinions on, and experiences with, goods and services with a multitude of other consumers; that is, to engage in electronic word-of-mouth (eWOM) communication. Drawing on findings from research on virtual communities and traditional word-of-mouth literature, a typology for motives of consumer online articulation is © 2004 Wiley Periodicals, Inc. and Direct Marketing Educational Foundation, Inc.", "title": "" }, { "docid": "b445de6f864c345d90162cb8b2527240", "text": "The growing popularity of online product review forums invites the development of models and metrics that allow firms to harness these new sources of information for decision support. Our work contributes in this direction by proposing a novel family of diffusion models that capture some of the unique aspects of the entertainment industry and testing their performance in the context of very early post-release motion picture revenue forecasting. 
We show that the addition of online product review metrics to a benchmark model that includes pre-release marketing, theater availability and professional critic reviews substantially increases its forecasting accuracy; the forecasting accuracy of our best model outperforms that of several previously published models. In addition to its contributions in diffusion theory, our study reconciles some inconsistencies among previous studies with respect to what online review metrics are statistically significant in forecasting entertainment good sales.", "title": "" } ]
[ { "docid": "ff0d24ef13efa2853befdd89ca123611", "text": "In Information Systems research there are a growing number of studies that must necessarily draw upon the contexts, experiences and narratives of practitioners. This calls for research approaches that are qualitative and may also be interpretive. These may include case studies or action research projects. For some researchers, particularly those with limited experience of interpretive qualitative research, there may be a lack of confidence when faced with the prospect of collecting and analysing the data from studies of this kind. In this paper we reflect on the lessons learned from using Grounded Theory in an interpretive case-study-based piece of research. The paper discusses the lessons and provides guidance for the use of the method in interpretive studies.", "title": "" }, { "docid": "af2a1083436450b9147eb7b51be5c761", "text": "Over the past century, various value models have been proposed. To determine which value model best predicts prosocial behavior, mental health, and pro-environmental behavior, we subjected seven value models to a hierarchical regression analysis. A sample of university students (N = 271) completed the Portrait Value Questionnaire (Schwartz et al., 2012), the Basic Value Survey (Gouveia et al., 2008), and the Social Value Orientation scale (Van Lange et al., 1997). Additionally, they completed the Values Survey Module (Hofstede and Minkov, 2013), Inglehart's (1977) materialism-postmaterialism items, the Study of Values, fourth edition (Allport et al., 1960; Kopelman et al., 2003), and the Rokeach (1973) Value Survey. However, because the reliability of the latter measures was low, only the PVQ-RR, the BVS, and the SVO were entered into our analysis. Our results provide empirical evidence that the PVQ-RR is the strongest predictor of all three outcome variables, explaining variance above and beyond the other two instruments in almost all cases. The BVS significantly predicted prosocial and pro-environmental behavior, while the SVO only explained variance in pro-environmental behavior.", "title": "" }, { "docid": "2620ce1c5ef543fded3a02dfb9e5c3f8", "text": "Artificial bee colony (ABC) is one of the newest nature-inspired heuristics for optimization problems. Inspired by the chaos in real bee colony behavior, this paper proposes new ABC algorithms that use chaotic maps for parameter adaptation in order to improve the convergence characteristics and to prevent the ABC from getting stuck on local solutions. This has been done by using chaotic number generators each time a random number is needed by the classical ABC algorithm. Seven new chaotic ABC algorithms have been proposed and different chaotic maps have been analyzed on the benchmark functions. It has been detected that coupling emergent results in different areas, like those of ABC and complex dynamics, can improve the quality of results in some optimization problems. It has also been shown that the proposed methods have somewhat increased the solution quality, that is, in some cases they improved the global searching capability by escaping the local solutions. 2010 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "edbc09ea4ad9792abd9aa05176c17d42", "text": "The therapeutic nature of the nurse-patient relationship is grounded in an ethic of caring. Florence Nightingale envisioned nursing as an art and a science...a blending of humanistic, caring presence with evidence-based knowledge and exquisite skill. 
In this article, the author explores the caring practice of nursing as a framework for understanding moral accountability and integrity in practice. Being morally accountable and responsible for one's judgment and actions is central to the nurse's role as a moral agent. Nurses who practice with moral integrity possess a strong sense of themselves and act in ways consistent with what they understand is the right thing to do. A review of the literature related to caring theory, the concepts of moral accountability and integrity, and the documents that speak of these values and concepts in professional practice (eg, Code of Ethics for Nurses with Interpretive Statements, Nursing's Social Policy Statement) are presented in this article.", "title": "" }, { "docid": "87222f419605df6e1d63d60bd26c5343", "text": "Video Games are boring when they are too easy and frustrating when they are too hard. While most singleplayer games allow players to adjust basic difficulty (easy, medium, hard, insane), their overall level of challenge is often static in the face of individual player input. This lack of flexibility can lead to mismatches between player ability and overall game difficulty. In this paper, we explore the computational and design requirements for a dynamic difficulty adjustment system. We present a probabilistic method (drawn predominantly from Inventory Theory) for representing and reasoning about uncertainty in games. We describe the implementation of these techniques, and discuss how the resulting system can be applied to create flexible interactive experiences that adjust on the fly. Introduction Video games are designed to generate engaging experiences: suspenseful horrors, whimsical amusements, fantastic adventures. But unlike films, books, or televised media – which often have similar experiential goals – video games are interactive. Players create meaning by interacting with the game’s internal systems. One such system is inventory – the stock of items that a player collects and carries throughout the game world. The relative abundance or scarcity of inventory items has a direct impact on the player’s experience. As such, games are explicitly designed to manipulate the exchange of resources between world and player. [Simpson, 2001] This network of producer-consumer relationships can be viewed as an economy – or more broadly, as a dynamic system [Castronova, 2000, Luenberger, 79]. 1 Inventory items for “first-person shooters” include health, weapons, ammunition and power-ups like shielding or temporary invincibility. 2 A surplus of ammunition affords experimentation and “shoot first” tactics, while limited access to recovery items (like health packs) will promote a more cautious approach to threatening situations. Game developers iteratively refine these systems based on play testing feedback – tweaking behaviors and settings until the game is balanced. While balancing, they often analyze systems intuitively by tracking specific identifiable patterns or types of dynamic activity. It is a difficult and time consuming process [Rollings and Adams, 2003]. While game balancing and tuning can’t be automated, directed mathematical analysis can reveal deeper structures and relationships within a game system. With the right tools, researchers and developers can calculate relationships in less time, with greater accuracy. In this paper, we describe a first step towards such tools. Hamlet is a Dynamic Difficulty Adjustment (DDA) system built using Valve’s Half Life game engine. 
Using techniques drawn from Inventory Theory and Operations Research, Hamlet analyzes and adjusts the supply and demand of game inventory in order to control overall game difficulty.", "title": "" }, { "docid": "a77e5f81c925e2f170df005b6576792b", "text": "Recommendation systems apply data analysis techniques to the problem of helping users find the items they would like. Example applications include the recommendation systems for movies, books, CDs and many others. As recommendation systems emerge as an independent research area, the rating structure plays a critical role in recent studies. Among many alternatives, the collaborative filtering algorithms are generally accepted to be successful in estimating user ratings of unseen items and then deriving proper recommendations. In this paper, we extend the concept of single criterion ratings to multi-criteria ones, i.e., an item can be evaluated in many different aspects. For example, the goodness of a restaurant can be evaluated in terms of its food, decor, service and cost. Since there are usually conflicts among different criteria, the recommendation problem cannot be formulated as an optimization problem any more. Instead, we propose in this paper to use data query techniques to solve this multi-criteria recommendation problem. Empirical studies show that our approach is of both theoretical and practical value.", "title": "" }, { "docid": "c91f7b4b02faaca93fb74768c475f8bf", "text": "Data mining is an interdisciplinary subfield of computer science involving methods at the intersection of artificial intelligence, machine learning and statistics. One of the data mining tasks is anomaly detection, which is the analysis of large quantities of data to identify items, events or observations which do not conform to an expected pattern. Anomaly detection is applicable in a variety of domains, e.g., fraud detection, fault detection, system health monitoring, but this article focuses on the application of anomaly detection in the field of network intrusion detection. The main goal of the article is to prove that an entropy-based approach is suitable to detect modern botnet-like malware based on anomalous patterns in the network. This aim is achieved by realization of the following points: (i) preparation of a concept of an original entropy-based network anomaly detection method, (ii) implementation of the method, (iii) preparation of an original dataset, (iv) evaluation of the method.", "title": "" }, { "docid": "b2f4295cc36550bbafdb4b94f8fbee7c", "text": "Novel view synthesis aims to synthesize new images from different viewpoints of given images. Most previous works focus on generating novel views of certain objects with a fixed background. However, for some applications, such as virtual reality or robotic manipulations, large changes in background may occur due to the egomotion of the camera. Generated images of a large-scale environment from novel views may be distorted if the structure of the environment is not considered. In this work, we propose a novel fully convolutional network that can take advantage of the structural information explicitly by incorporating the inverse depth features. The inverse depth features are obtained from CNNs trained with sparse labeled depth values. This framework can easily fuse multiple images from different viewpoints. To fill the missing textures in the generated image, adversarial loss is applied, which can also improve the overall image quality. Our method is evaluated on the KITTI dataset. 
The results show that our method can generate novel views of large-scale scenes without distortion. The effectiveness of our approach is demonstrated through qualitative and quantitative evaluation.", "title": "" }, { "docid": "be58d822e03a3443b607c478b721095f", "text": "Cerebral amyloid angiopathy (CAA) is pathologically defined as the deposition of amyloid protein, most commonly the amyloid β peptide (Aβ), primarily within the media and adventitia of small and medium-sized arteries of the leptomeninges, cerebral and cerebellar cortex. This deposition likely reflects an imbalance between Aβ production and clearance within the brain and leads to weakening of the overall structure of brain small vessels, predisposing patients to lobar intracerebral haemorrhage (ICH), brain ischaemia and cognitive decline. CAA is associated with markers of small vessel disease, like lobar microbleeds and white matter hyperintensities on magnetic resonance imaging. Therefore, it can now be diagnosed during life with reasonable accuracy by clinical and neuroimaging criteria. Despite the lack of a specific treatment for this condition, the detection of CAA may help in the management of patients, regarding the prevention of major haemorrhagic complications and genetic counselling. This review discusses recent advances in our understanding of the pathophysiology, detection and management of CAA.", "title": "" }, { "docid": "3d9fe9c30d09a9e66f7339b0ad24edb7", "text": "Due to progress in wired and wireless home networking, sensor networks, networked appliances, mechanical and control engineering, and computers, we can build smart homes, and many smart home projects are currently proceeding throughout the world. However, we have to be careful not to repeat the same mistake that was made with home automation technologies that were booming in the 1970s. That is, total automation should not be a goal of smart home technologies. I believe the following points are important in construction of smart homes from users' viewpoints: development of interface technologies between humans and systems for detection of human intentions, feelings, and situations; improvement of system knowledge; and extension of human activity support outside homes to the scopes of communities, towns, and cities.", "title": "" }, { "docid": "a583b48a8eb40a9e88a5137211f15bce", "text": "The deterioration of cancellous bone structure due to aging and disease is characterized by a conversion from plate elements to rod elements. Consequently, the terms \"rod-like\" and \"plate-like\" are frequently used for a subjective classification of cancellous bone. In this work a new morphometric parameter called Structure Model Index (SMI) is introduced, which makes it possible to quantify the characteristic form of a three-dimensionally described structure in terms of the amount of plates and rods composing the structure. The SMI is calculated by means of three-dimensional image analysis based on a differential analysis of the triangulated bone surface. For an ideal plate and rod structure the SMI value is 0 and 3, respectively, independent of the physical dimensions. For a structure with both plates and rods of equal thickness the value lies between 0 and 3, depending on the volume ratio of rods and plates. The SMI parameter is evaluated by examining bone biopsies from different skeletal sites. The bone samples were measured three-dimensionally with a micro-CT system. 
Samples with the same volume density but varying trabecular architecture can uniquely be characterized with the SMI. Furthermore, the SMI values were found to correspond well with the perceived structure type.", "title": "" }, { "docid": "427b3cae516025381086021bc66f834e", "text": "PhishGuru is an embedded training system that teaches users to avoid falling for phishing attacks by delivering a training message when the user clicks on the URL in a simulated phishing email. In previous lab and real-world experiments, we validated the effectiveness of this approach. Here, we extend our previous work with a 515-participant, real-world study in which we focus on long-term retention and the effect of two training messages. We also investigate demographic factors that influence training and general phishing susceptibility. Results of this study show that (1) users trained with PhishGuru retain knowledge even after 28 days; (2) adding a second training message to reinforce the original training decreases the likelihood of people giving information to phishing websites; and (3) training does not decrease users' willingness to click on links in legitimate messages. We found no significant difference between males and females in the tendency to fall for phishing emails both before and after the training. We found that participants in the 18--25 age group were consistently more vulnerable to phishing attacks on all days of the study than older participants. Finally, our exit survey results indicate that most participants enjoyed receiving training during their normal use of email.", "title": "" }, { "docid": "d47c5f2b5fea54e0f650869d0d45ac25", "text": "Time-varying, smooth trajectory estimation is of great interest to the vision community for accurate and well-behaved 3D systems. In this paper, we propose a novel principal component local regression filter acting directly on the Riemannian manifold of unit dual quaternions DH1. We use a numerically stable Lie algebra of the dual quaternions together with exp and log operators to locally linearize the 6D pose space. Unlike state-of-the-art path smoothing methods which either operate on SO(3) of rotation matrices or the hypersphere H1 of quaternions, we treat the orientation and translation jointly on the dual quaternion quadric in the 7-dimensional real projective space RP7. We provide an outlier-robust IRLS algorithm for generic pose filtering exploiting this manifold structure. Besides our theoretical analysis, our experiments on synthetic and real data show the practical advantages of the manifold-aware filtering on pose tracking and smoothing.", "title": "" }, { "docid": "697491cc059e471f0c97a840a2a9fca7", "text": "This paper presents a virtual reality (VR) simulator for the four-arm disaster response robot OCTOPUS, which is highly capable of both mobility and workability. OCTOPUS has 26 degrees of freedom (DOF) and is currently teleoperated by two operators, so it is quite difficult to operate OCTOPUS. Thus, we developed a VR simulator for training operation and for developing an operator support system and control strategy. Compared with the actual robot and environment, the VR simulator can reproduce them at low cost and high efficiency. The VR simulator consists of a VR environment and a human-machine interface such as operation-input and video- and sound-output, based on the Robot Operating System (ROS) and Gazebo. To enhance work performance, we implement indicators and data collection functions. 
Four tasks, namely rough terrain passing, high-step climbing, obstacle stepping over, and object transport, were conducted to evaluate OCTOPUS itself and our VR simulator. The results indicate that operators could complete all the tasks but the success rate differed across tasks. Smooth and stable operations increased the work performance, but sudden changes and oscillation of operation degraded it. Coordinating the multiple joints adequately is quite important to execute tasks more efficiently.", "title": "" }, { "docid": "da536111acc1b7152f445fb7e6c14091", "text": "Nanonization is a simple and effective method to improve the dissolution rate and oral bioavailability of drugs with poor water solubility. There is growing interest in downscaling nanocrystal production to enable early preclinical evaluation of new drug candidates when compound availability is scarce. The purpose of the present study was to investigate laser fragmentation to form nanosuspensions in aqueous solution of the insoluble model drug megestrol acetate (MA) using very small quantities of the drug. Laser fragmentation was obtained by focusing a femtosecond (fs) or nanosecond (ns) laser radiation on a magnetically stirred MA suspension in water or an aqueous solution of a stabilizing agent. The size distribution and physicochemical properties of the drug nanoparticles were characterized, and the in vitro dissolution and in vivo oral pharmacokinetics of a laser fragmented formulation were evaluated. A MA nanosuspension was also prepared by media milling for comparison purposes. For both laser radiations, smaller particles were obtained as the laser power was increased, but at a cost of higher degradation. Significant nanonization was achieved after a 30-min fs laser treatment at 250 mW and a 1-h ns laser treatment at 2500 mW. The degradation induced by the laser process of the drug was primarily oxidative in nature. The crystal phase of the drug was maintained, although partial loss of crystallinity was observed. The in vitro dissolution rate and in vivo bioavailability of the laser fragmented formulation were similar to those obtained with the nanosuspension prepared by media milling, and significantly improved compared to the coarse drug powder. It follows that this laser nanonization method has potential to be used for the preclinical evaluation of new drug candidates.", "title": "" }, { "docid": "752cf1c7cefa870c01053d87ff4f445c", "text": "Cannabidiol (CBD) represents a promising new drug due to a wide spectrum of pharmacological actions. In order to relate CBD clinical efficacy to its pharmacological mechanisms of action, we performed a bibliographic search on PUBMED about all clinical studies investigating the use of CBD as a treatment of psychiatric symptoms. Findings to date suggest that (a) CBD may exert antipsychotic effects in schizophrenia mainly through facilitation of endocannabinoid signalling and cannabinoid receptor type 1 antagonism; (b) CBD administration may exhibit acute anxiolytic effects in patients with generalised social anxiety disorder through modification of cerebral blood flow in specific brain sites and serotonin 1A receptor agonism; (c) CBD may reduce withdrawal symptoms and cannabis/tobacco dependence through modulation of endocannabinoid, serotoninergic and glutamatergic systems; (d) the preclinical pro-cognitive effects of CBD still lack significant results in psychiatric disorders. 
In conclusion, current evidence suggests that CBD has the ability to reduce psychotic, anxiety and withdrawal symptoms by means of several hypothesised pharmacological properties. However, further studies should include larger randomised controlled samples and investigate the impact of CBD on biological measures in order to correlate CBD's clinical effects to potential modifications of neurotransmitter signalling and structural and functional cerebral changes.", "title": "" }, { "docid": "45ef23f40fd4241b58b8cb0810695785", "text": "Two-wheeled wheelchairs are considered highly nonlinear and complex systems. The systems mimic a double-inverted pendulum scenario and will provide better maneuverability in confined spaces while also allowing the user to reach greater heights for pick and place tasks. The challenge resides in modeling and control of the two-wheeled wheelchair to perform comparably to a normal four-wheeled wheelchair. Most common modeling techniques have been accomplished by researchers utilizing the basic Newton's Laws of motion, and some have used 3D tools to model the system, where the models are much more theoretical and quite far from the practical implementation. This article is aimed at closing the gap between the conventional mathematical modeling approaches and an integrated 3D modeling approach with validation on the actual hardware implementation. To achieve this, both a nonlinear and a linearized model in state-space form were obtained from the mathematical model of the system for analysis and, thereafter, a 3D virtual prototype of the wheelchair was developed, simulated, and analyzed. This has increased the confidence level for the proposed platform and facilitated the actual hardware implementation of the two-wheeled wheelchair. Results show that the prototype developed and tested has successfully worked within the specific requirements established.", "title": "" }, { "docid": "819753a8799135fc44dd95e478ebeaf9", "text": "Main memories are becoming sufficiently large that most OLTP databases can be stored entirely in main memory, but this may not be the best solution. OLTP workloads typically exhibit skewed access patterns where some records are hot (frequently accessed) but many records are cold (infrequently or never accessed). It is more economical to store the coldest records on secondary storage such as flash. As a first step towards managing cold data in databases optimized for main memory we investigate how to efficiently identify hot and cold data. We propose to log record accesses - possibly only a sample to reduce overhead - and perform offline analysis to estimate record access frequencies. We present four estimation algorithms based on exponential smoothing and experimentally evaluate their efficiency and accuracy. We find that exponential smoothing produces very accurate estimates, leading to higher hit rates than the best caching techniques. Our most efficient algorithm is able to analyze a log of 1B accesses in sub-second time on a workstation-class machine.", "title": "" }, { "docid": "2e42e1f9478fb2548e39a92c5bacbaab", "text": "In this paper, we consider a fully automatic makeup recommendation system and propose a novel examples-rules guided deep neural network approach. The framework consists of three stages. First, makeup-related facial traits are classified into structured coding. 
Second, these facial traits are fed into the examples-rules guided deep neural recommendation model, which makes use of the pairwise Before-After images and the makeup artist knowledge jointly. Finally, to visualize the recommended makeup style, an automatic makeup synthesis system is developed as well. To this end, a new Before-After facial makeup database is collected and labeled manually, and the knowledge of the makeup artist is modeled by a knowledge base system. The performance of this framework is evaluated through extensive experimental analyses. The experiments validate the automatic facial traits classification, the recommendation effectiveness in statistical and perceptual ways, and the makeup synthesis accuracy, which outperforms the state-of-the-art methods by a large margin. It is also worth noting that, to the best of our knowledge, the proposed framework is a pioneering fully automatic makeup recommendation system.", "title": "" }, { "docid": "7723c78b2ff8f9fdc285ee05b482efef", "text": "We describe our experience in developing a discourse-annotated corpus for community-wide use. Working in the framework of Rhetorical Structure Theory, we were able to create a large annotated resource with very high consistency, using a well-defined methodology and protocol. This resource is made publicly available through the Linguistic Data Consortium to enable researchers to develop empirically grounded, discourse-specific applications.", "title": "" } ]
scidocsrr
302b33b7f7abe43e01027e16fe586812
Is the Implicit Association Test a Valid and Valuable Measure of Implicit Consumer Social Cognition?
[ { "docid": "eed70d4d8bfbfa76382bfc32dd12c3db", "text": "Three studies tested basic assumptions derived from a theoretical model based on the dissociation of automatic and controlled processes involved in prejudice. Study 1 supported the model's assumption that high- and low-prejudice persons are equally knowledgeable of the cultural stereotype. The model suggests that the stereotype is automatically activated in the presence of a member (or some symbolic equivalent) of the stereotyped group and that low-prejudice responses require controlled inhibition of the automatically activated stereotype. Study 2, which examined the effects of automatic stereotype activation on the evaluation of ambiguous stereotype-relevant behaviors performed by a race-unspecified person, suggested that when subjects' ability to consciously monitor stereotype activation is precluded, both high- and low-prejudice subjects produce stereotype-congruent evaluations of ambiguous behaviors. Study 3 examined high- and low-prejudice subjects' responses in a consciously directed thought-listing task. Consistent with the model, only low-prejudice subjects inhibited the automatically activated stereotype-congruent thoughts and replaced them with thoughts reflecting equality and negations of the stereotype. The relation between stereotypes and prejudice and implications for prejudice reduction are discussed.", "title": "" }, { "docid": "6d5bb9f895461b3bd7ee82041c3db6aa", "text": "Respondents at an Internet site completed over 600,000 tasks between October 1998 and April 2000 measuring attitudes toward and stereotypes of social groups. Their responses demonstrated, on average, implicit preference for White over Black and young over old and stereotypic associations linking male terms with science and career and female terms with liberal arts and family. The main purpose was to provide a demonstration site at which respondents could experience their implicit attitudes and stereotypes toward social groups. Nevertheless, the data collected are rich in information regarding the operation of attitudes and stereotypes, most notably the strength of implicit attitudes, the association and dissociation between implicit and explicit attitudes, and the effects of group membership on attitudes and stereotypes.", "title": "" } ]
[ { "docid": "5d91c93728632586a63634c941420c64", "text": "A new method for analyzing analog single-event transient (ASET) data has been developed. The approach allows for quantitative error calculations, given device failure thresholds. The method is described and employed in the analysis of an OP-27 op-amp.", "title": "" }, { "docid": "b59f429192a680c1dc07580d21f9e374", "text": "Recently, several competing smart home programming frameworks that support third party app development have emerged. These frameworks provide tangible benefits to users, but can also expose users to significant security risks. This paper presents the first in-depth empirical security analysis of one such emerging smart home programming platform. We analyzed Samsung-owned SmartThings, which has the largest number of apps among currently available smart home platforms, and supports a broad range of devices including motion sensors, fire alarms, and door locks. SmartThings hosts the application runtime on a proprietary, closed-source cloud backend, making scrutiny challenging. We overcame the challenge with a static source code analysis of 499 SmartThings apps (called SmartApps) and 132 device handlers, and carefully crafted test cases that revealed many undocumented features of the platform. Our key findings are twofold. First, although SmartThings implements a privilege separation model, we discovered two intrinsic design flaws that lead to significant overprivilege in SmartApps. Our analysis reveals that over 55% of SmartApps in the store are overprivileged due to the capabilities being too coarse-grained. Moreover, once installed, a SmartApp is granted full access to a device even if it specifies needing only limited access to the device. Second, the SmartThings event subsystem, which devices use to communicate asynchronously with SmartApps via events, does not sufficiently protect events that carry sensitive information such as lock codes. We exploited framework design flaws to construct four proof-of-concept attacks that: (1) secretly planted door lock codes, (2) stole existing door lock codes, (3) disabled vacation mode of the home, and (4) induced a fake fire alarm. We conclude the paper with security lessons for the design of emerging smart home programming frameworks.", "title": "" }, { "docid": "d6adda476cc8bd69c37bd2d00f0dace4", "text": "The conceptualization of a distinct construct known as statistics anxiety has led to the development of numerous rating scales, including the Statistical Anxiety Rating Scale (STARS), designed to assess levels of statistics anxiety. In the current study, the STARS was administered to a sample of 423 undergraduate and graduate students from a midsized, western United States university. The Rasch measurement rating scale model was used to analyze scores from the STARS. Misfitting items were removed from the analysis. In general, items from the six subscales represented a broad range of abilities, with the major exception being a lack of items at the lower extremes of the subscales. Additionally, a differential item functioning (DIF) analysis was performed across sex and student classification. Several items displayed DIF, which indicates subgroups may ascribe different meanings to those items. 
The paper concludes with several recommendations for researchers considering using the STARS.", "title": "" }, { "docid": "0899cfa62ccd036450c079eb3403902a", "text": "Manual editing of a metro map is essential because many aesthetic and readability demands in map generation cannot be achieved by using a fully automatic method. In addition, a metro map should be updated when new metro lines are developed in a city. Considering that manually designing a metro map is time-consuming and requires expert skills, we present an interactive editing system that considers human knowledge and adjusts the layout to make it consistent with user expectations. In other words, only a few stations are controlled and the remaining stations are relocated by our system. Our system supports both curvilinear and octilinear layouts when creating metro maps. It solves an optimization problem, in which even spaces, route straightness, and maximum included angles at junctions are considered to obtain a curvilinear result. The system then rotates each edge to extend either vertically, horizontally, or diagonally while approximating the station positions provided by users to generate an octilinear layout. Experimental results, quantitative and qualitative evaluations, and user studies show that our editing system is easy to use and allows even non-professionals to design a metro map.", "title": "" }, { "docid": "95d6189ba97f15c7cc33028f13f8789f", "text": "This paper presents a new Bayesian nonnegative matrix factorization (NMF) for monaural source separation. Using this approach, the reconstruction error based on NMF is represented by a Poisson distribution, and the NMF parameters, consisting of the basis and weight matrices, are characterized by the exponential priors. A variational Bayesian inference procedure is developed to learn variational parameters and model parameters. The randomness in separation process is faithfully represented so that the system robustness to model variations in heterogeneous environments could be achieved. Importantly, the exponential prior parameters are used to impose sparseness in basis representation. The variational lower bound of log marginal likelihood is adopted as the objective to control model complexity. The dependencies of variational objective on model parameters are fully characterized in the derived closed-form solution. A clustering algorithm is performed to find the groups of bases for unsupervised source separation. The experiments on speech/music separation and singing voice separation show that the proposed Bayesian NMF (BNMF) with adaptive basis representation outperforms the NMF with fixed number of bases and the other BNMFs in terms of signal-to-distortion ratio and the global normalized source to distortion ratio.", "title": "" }, { "docid": "7e2ba771e25a2e6716ce59522ace2835", "text": "Online debate sites are a large source of informal and opinion-sharing dialogue on current socio-political issues. Inferring users’ stance (PRO or CON) towards discussion topics in domains such as politics or news is an important problem, and is of utility to researchers, government organizations, and companies. Predicting users’ stance supports identification of social and political groups, building of better recommender systems, and personalization of users’ information preferences to their ideological beliefs. 
In this paper, we develop a novel collective classification approach to stance classification, which makes use of both structural and linguistic features, and which collectively labels the posts’ stance across a network of the users’ posts. We identify both linguistic features of the posts and features that capture the underlying relationships between posts and users. We use probabilistic soft logic (PSL) (Bach et al., 2013) to model post stance by leveraging both these local linguistic features as well as the observed network structure of the posts to reason over the dataset. We evaluate our approach on 4FORUMS (Walker et al., 2012b), a collection of discussions from an online debate site on issues ranging from gun control to gay marriage. We show that our collective classification model is able to easily incorporate rich, relational information and outperforms a local model which uses only linguistic information.", "title": "" }, { "docid": "6dfb62138ad7e0c23826a2c6b7c2507e", "text": "End-to-end speech recognition systems have been successfully designed for English. Taking into account the distinctive characteristics between Chinese Mandarin and English, it is worthy to do some additional work to transfer these approaches to Chinese. In this paper, we attempt to build a Chinese speech recognition system using end-to-end learning method. The system is based on a combination of deep Long Short-Term Memory Projected (LSTMP) network architecture and the Connectionist Temporal Classification objective function (CTC). The Chinese characters (the number is about 6,000) are used as the output labels directly. To integrate language model information during decoding, the CTC Beam Search method is adopted and optimized to make it more effective and more efficient. We present the first-pass decoding results which are obtained by decoding from scratch using CTC-trained network and language model. Although these results are not as good as the performance of DNN-HMMs hybrid system, they indicate that it is feasible to choose Chinese characters as the output alphabet in the end-toend speech recognition system.", "title": "" }, { "docid": "5bee78694f3428d3882e27000921f501", "text": "We introduce a new approach to perform background subtraction in moving camera scenarios. Unlike previous treatments of the problem, we do not restrict the camera motion or the scene geometry. The proposed approach relies on Bayesian selection of the transformation that best describes the geometric relation between consecutive frames. Based on the selected transformation, we propagate a set of learned background and foreground appearance models using a single or a series of homography transforms. The propagated models are subjected to MAP-MRF optimization framework that combines motion, appearance, spatial, and temporal cues; the optimization process provides the final background/foreground labels. Extensive experimental evaluation with challenging videos shows that the proposed method outperforms the baseline and state-of-the-art methods in most cases.", "title": "" }, { "docid": "764840c288985e0257413c94205d2bf2", "text": "Although deep learning approaches have stood out in recent years due to their state-of-the-art results, they continue to suffer from catastrophic forgetting, a dramatic decrease in overall performance when training with new classes added incrementally. 
This is due to current neural network architectures requiring the entire dataset, consisting of all the samples from the old as well as the new classes, to update the model—a requirement that becomes easily unsustainable as the number of classes grows. We address this issue with our approach to learn deep neural networks incrementally, using new data and only a small exemplar set corresponding to samples from the old classes. This is based on a loss composed of a distillation measure to retain the knowledge acquired from the old classes, and a cross-entropy loss to learn the new classes. Our incremental training is achieved while keeping the entire framework end-to-end, i.e., learning the data representation and the classifier jointly, unlike recent methods with no such guarantees. We evaluate our method extensively on the CIFAR-100 and ImageNet (ILSVRC 2012) image classification datasets, and show state-of-the-art performance.", "title": "" }, { "docid": "2c2daf28c81e7f12113a391835961981", "text": "We address the problem of generating images across two drastically different views, namely ground (street) and aerial (overhead) views. Image synthesis by itself is a very challenging computer vision task and is even more so when generation is conditioned on an image in another view. Due to the difference in viewpoints, there is a small overlapping field of view and little common content between these two views. Here, we try to preserve the pixel information between the views so that the generated image is a realistic representation of the cross-view input image. For this, we propose to use homography as a guide to map the images between the views based on the common field of view to preserve the details in the input image. We then use generative adversarial networks to inpaint the missing regions in the transformed image and add realism to it. Our exhaustive evaluation and model comparison demonstrate that utilizing geometry constraints adds fine details to the generated images and can be a better approach for cross-view image synthesis than purely pixel-based synthesis methods.", "title": "" }, { "docid": "26d20cd47dfd174ecb8606b460c1c040", "text": "In this article, we use an automated bottom-up approach to identify semantic categories in an entire corpus. We conduct an experiment using a word vector model to represent the meaning of words. The word vectors are then clustered, giving a bottom-up representation of semantic categories. Our main finding is that the likelihood of changes in a word’s meaning correlates with its position within its cluster.", "title": "" }, { "docid": "5cb44c68cecb0618be14cd52182dc96e", "text": "Recognition of objects using Deep Neural Networks is an active area of research and many breakthroughs have been made in the last few years. The paper attempts to indicate how far this field has progressed. The paper briefly describes the history of research in Neural Networks and describes several of the recent advances in this field. The performances of recently developed Neural Network algorithms over benchmark datasets have been tabulated. Finally, some of the applications of this field have been provided.", "title": "" }, { "docid": "ff76b52f7859aaffa58307018edb8323", "text": "Malevolent Trojan circuits inserted by layout modifications in an IC at untrustworthy fabrication facilities are difficult to detect by traditional post-manufacturing testing. 
In this paper, we develop a novel low-overhead design methodology that facilitates the detection of inserted Trojan hardware in an IC through logic testing. As a byproduct, it also increases the security of the design by design obfuscation. Application of the proposed design methodology to an 8-bit RISC processor and a JPEG encoder resulted in improvement in Trojan detection probability significantly. It also obfuscated the design with verification mismatch for 90% of the verification points, while incurring moderate area, power and delay overheads.", "title": "" }, { "docid": "486d31b962600141ba75dfde718f5b3d", "text": "The design, fabrication, and measurement of a coax to double-ridged waveguide launcher and horn antenna is presented. The novel launcher design employs two symmetric field probes across the ridge gap to minimize spreading inductance in the transition, and achieves better than 15 dB return loss over a 10:1 bandwidth. The aperture-matched horn uses a half-cosine transition into a linear taper for the outer waveguide dimensions and ridge width, and a power-law scaled gap to realize monotonically varying cutoff frequencies, thus avoiding the appearance of trapped mode resonances. It achieves a nearly constant beamwidth in both E- and H-planes for an overall directivity of about 16.5 dB from 10-100 GHz.", "title": "" }, { "docid": "970a76190e980afe51928dcaa6d594c8", "text": "Despite sequences being core to NLP, scant work has considered how to handle noisy sequence labels from multiple annotators for the same text. Given such annotations, we consider two complementary tasks: (1) aggregating sequential crowd labels to infer a best single set of consensus annotations; and (2) using crowd annotations as training data for a model that can predict sequences in unannotated text. For aggregation, we propose a novel Hidden Markov Model variant. To predict sequences in unannotated text, we propose a neural approach using Long Short Term Memory. We evaluate a suite of methods across two different applications and text genres: Named-Entity Recognition in news articles and Information Extraction from biomedical abstracts. Results show improvement over strong baselines. Our source code and data are available online.", "title": "" }, { "docid": "ad1582fb37440ef7182af4925427f5ca", "text": "The advent of new information technology has radically changed the end-user computing environment over the past decade. To enhance their management decision-making capability, many organizations have made significant investments in business intelligence (BI) systems. The realization of business benefits from BI investments depends on supporting effective use of BI systems and satisfying their end user requirements. Even though a lot of attention has been paid to the decision-making benefits of BI systems in practice, there is still a limited amount of empirical research that explores the nature of enduser satisfaction with BI systems. End-user satisfaction and system usage have been recognized by many researchers as critical determinants of the success of information systems (IS). As an increasing number of companies have adopted BI systems, there is a need to understand their impact on an individual end-user’s performance. In recent years, researchers have considered assessing individual performance effects from IS use as a key area of concern. 
Therefore, this study aims to empirically test a framework identifying the relationships between end-user computing satisfaction (EUCS), system usage, and individual performance. Data gathered from 330 end users of BI systems in the Taiwanese electronics industry were used to test the relationships proposed in the framework using the structural equation modeling approach. The results provide strong support for our model. Our results indicate that higher levels of EUCS can lead to increased BI system usage and improved individual performance, and that higher levels of BI system usage will lead to higher levels of individual performance. In addition, this study’s findings, consistent with DeLone and McLean’s IS success model, confirm that there exists a significant positive relationship between EUCS and system usage. Theoretical and practical implications of the findings are discussed. © 2012 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "f3c5a1cef29f5fa834433ce859b15694", "text": "This paper describes the design, construction, and testing of a 750-V 100-kW 20-kHz bidirectional isolated dual-active-bridge dc-dc converter using four 1.2-kV 400-A SiC-MOSFET/SBD dual modules. The maximum conversion efficiency from the dc-input to the dc-output terminals is accurately measured to be as high as 98.7% at 42-kW operation. The overall power loss at the rated-power (100 kW) operation, excluding the gate-drive and control circuit losses, is divided into the conduction and switching losses produced by the SiC modules, the iron and copper losses due to magnetic devices, and the other unknown loss. The power-loss breakdown concludes that the sum of the conduction and switching losses is about 60% of the overall power loss and that the conduction loss is nearly equal to the switching loss at the 100-kW and 20-kHz operation.", "title": "" }, { "docid": "c1389acb62cca5cb3cfdec34bd647835", "text": "A Chinese resume information extraction system (CRIES) based on semi-structured text is designed and implemented to obtain formatted information by extracting text content of every field from resumes in different formats and update information automatically based on the web. Firstly, ideas to classify resumes, some constraints obtained by analyzing resume features and overall extraction strategy is introduced. Then two extraction algorithms for parsing resumes in different text formats are given. Consequently, the system was implemented by java programming. Finally, use the system to resolve the resume samples, and the statistical analysis and system optimization analysis are carried out according to the accuracy rate and recall rate of the extracted results.", "title": "" }, { "docid": "53b43126d066f5e91d7514f5da754ef3", "text": "This paper describes a computationally inexpensive, yet high performance trajectory generation algorithm for omnidirectional vehicles. It is shown that the associated nonlinear control problem can be made tractable by restricting the set of admissible control functions. The resulting problem is linear with coupled control efforts and a near-optimal control strategy is shown to be piecewise constant (bang-bang type). A very favorable trade-off between optimality and computational efficiency is achieved. The proposed algorithm is based on a small number of evaluations of simple closed-form expressions and is thus extremely efficient. 
The low computational cost makes this method ideal for path planning in dynamic environments.", "title": "" }, { "docid": "72bc688726c5fc26b2dd7e63d3b28ac0", "text": "In Convolutional Neural Network (CNN)-based object detection methods, region proposal becomes a bottleneck when objects exhibit significant scale variation, occlusion or truncation. In addition, these methods mainly focus on 2D object detection and cannot estimate detailed properties of objects. In this paper, we propose subcategory-aware CNNs for object detection. We introduce a novel region proposal network that uses subcategory information to guide the proposal generating process, and a new detection network for joint detection and subcategory classification. By using subcategories related to object pose, we achieve state-of-the-art performance on both detection and pose estimation on commonly used benchmarks.", "title": "" } ]
scidocsrr
240a10a3748a237c47aff9013c7e3949
Examining Spectral Reflectance Saturation in Landsat Imagery and Corresponding Solutions to Improve Forest Aboveground Biomass Estimation
[ { "docid": "59b10765f9125e9c38858af901a39cc7", "text": "--------__------------------------------------__---------------", "title": "" }, { "docid": "9a4ca8c02ffb45013115124011e7417e", "text": "Now, we come to offer you the right catalogues of book to open. multisensor data fusion a review of the state of the art is one of the literary work in this world in suitable to be reading material. That's not only this book gives reference, but also it will show you the amazing benefits of reading a book. Developing your countless minds is needed; moreover you are kind of people with great curiosity. So, the book is very appropriate for you.", "title": "" } ]
[ { "docid": "edeefde21bbe1ace9a34a0ebe7bc6864", "text": "Social media platforms provide active communication channels during mass convergence and emergency events such as disasters caused by natural hazards. As a result, first responders, decision makers, and the public can use this information to gain insight into the situation as it unfolds. In particular, many social media messages communicated during emergencies convey timely, actionable information. Processing social media messages to obtain such information, however, involves solving multiple challenges including: parsing brief and informal messages, handling information overload, and prioritizing different types of information found in messages. These challenges can be mapped to classical information processing operations such as filtering, classifying, ranking, aggregating, extracting, and summarizing. We survey the state of the art regarding computational methods to process social media messages and highlight both their contributions and shortcomings. In addition, we examine their particularities, and methodically examine a series of key subproblems ranging from the detection of events to the creation of actionable and useful summaries. Research thus far has, to a large extent, produced methods to extract situational awareness information from social media. In this survey, we cover these various approaches, and highlight their benefits and shortcomings. We conclude with research challenges that go beyond situational awareness, and begin to look at supporting decision making and coordinating emergency-response actions.", "title": "" }, { "docid": "74287743f75368623da74e716ae8e263", "text": "Organizations increasingly use social media and especially social networking sites (SNS) to support their marketing agenda, enhance collaboration, and develop new capabilities. However, the success of SNS initiatives is largely dependent on sustainable user participation. In this study, we argue that the continuance intentions of users may be gendersensitive. To theorize and investigate gender differences in the determinants of continuance intentions, this study draws on the expectation-confirmation model, the uses and gratification theory, as well as the self-construal theory and its extensions. Our survey of 488 users shows that while both men and women are motivated by the ability to selfenhance, there are some gender differences. Specifically, while women are mainly driven by relational uses, such as maintaining close ties and getting access to social information on close and distant networks, men base their continuance intentions on their ability to gain information of a general nature. Our research makes several contributions to the discourse in strategic information systems literature concerning the use of social media by individuals and organizations. Theoretically, it expands the understanding of the phenomenon of continuance intentions and specifically the role of the gender differences in its determinants. On a practical level, it delivers insights for SNS providers and marketers into how satisfaction and continuance intentions of male and female SNS users can be differentially promoted. Furthermore, as organizations increasingly rely on corporate social networks to foster collaboration and innovation, our insights deliver initial recommendations on how organizational social media initiatives can be supported with regard to gender-based differences. 2017 Elsevier B.V. 
All rights reserved.", "title": "" }, { "docid": "b6ec4629a39097178895762a35e0c7eb", "text": "In this paper, we dedicate to the topic of aspect ranking, which aims to automatically identify important product aspects from online consumer reviews. The important aspects are identified according to two observations: (a) the important aspects of a product are usually commented by a large number of consumers; and (b) consumers’ opinions on the important aspects greatly influence their overall opinions on the product. In particular, given consumer reviews of a product, we first identify the product aspects by a shallow dependency parser and determine consumers’ opinions on these aspects via a sentiment classifier. We then develop an aspect ranking algorithm to identify the important aspects by simultaneously considering the aspect frequency and the influence of consumers’ opinions given to each aspect on their overall opinions. The experimental results on 11 popular products in four domains demonstrate the effectiveness of our approach. We further apply the aspect ranking results to the application of documentlevel sentiment classification, and improve the performance significantly.", "title": "" }, { "docid": "5b43cce2027f1e5afbf7985ca2d4af1a", "text": "With Internet delivery of video content surging to an unprecedented level, video has become one of the primary sources for online advertising. In this paper, we present VideoSense as a novel contextual in-video advertising system, which automatically associates the relevant video ads and seamlessly inserts the ads at the appropriate positions within each individual video. Unlike most video sites which treat video advertising as general text advertising by displaying video ads at the beginning or the end of a video or around a video, VideoSense aims to embed more contextually relevant ads at less intrusive positions within the video stream. Specifically, given a Web page containing an online video, VideoSense is able to extract the surrounding text related to this video, detect a set of candidate ad insertion positions based on video content discontinuity and attractiveness, select a list of relevant candidate ads according to multimodal relevance. To support contextual advertising, we formulate this task as a nonlinear 0-1 integer programming problem by maximizing contextual relevance while minimizing content intrusiveness at the same time. The experiments proved the effectiveness of VideoSense for online video service.", "title": "" }, { "docid": "b5de3747c17f6913539b62377f9af5c4", "text": "In this paper, we propose a novel embedding model, named ConvKB, for knowledge base completion. Our model ConvKB advances state-of-the-art models by employing a convolutional neural network, so that it can capture global relationships and transitional characteristics between entities and relations in knowledge bases. In ConvKB, each triple (head entity, relation, tail entity) is represented as a 3-column matrix where each column vector represents a triple element. This 3-column matrix is then fed to a convolution layer where multiple filters are operated on the matrix to generate different feature maps. These feature maps are then concatenated into a single feature vector representing the input triple. The feature vector is multiplied with a weight vector via a dot product to return a score. This score is then used to predict whether the triple is valid or not. 
Experiments show that ConvKB obtains better link prediction and triple classification results than previous state-of-the-art models on benchmark datasets WN18RR, FB15k-237, WN11 and FB13. We further apply our ConvKB to a search personalization problem which aims to tailor the search results to each specific user based on the user’s personal interests and preferences. In particular, we model the potential relationship between the submitted query, the user and the search result (i.e., document) as a triple (query, user, document) on which the ConvKB is able to work. Experimental results on query logs from a commercial web search engine show that ConvKB achieves better performances than the standard ranker as well as strong search personalization baselines.", "title": "" }, { "docid": "32a2bfb7a26631f435f9cb5d825d8da2", "text": "An important aspect for the task of grammatical error correction (GEC) that has not yet been adequately explored is adaptation based on the native language (L1) of writers, despite the marked influences of L1 on second language (L2) writing. In this paper, we adapt a neural network joint model (NNJM) using L1-specific learner text and integrate it into a statistical machine translation (SMT) based GEC system. Specifically, we train an NNJM on general learner text (not L1-specific) and subsequently train on L1-specific data using a Kullback-Leibler divergence regularized objective function in order to preserve generalization of the model. We incorporate this adapted NNJM as a feature in an SMT-based English GEC system and show that adaptation achieves significant F0.5 score gains on English texts written by L1 Chinese, Russian, and Spanish writers.", "title": "" }, { "docid": "15ada8f138d89c52737cfb99d73219f0", "text": "A dual-band circularly polarized stacked annular-ring patch antenna is presented in this letter. This antenna operates at both the GPS L1 frequency of 1575 MHz and L2 frequency of 1227 MHz, whose frequency ratio is about 1.28. The proposed antenna is formed by two concentric annular-ring patches that are placed on opposite sides of a substrate. Wide axial-ratio bandwidths (larger than 2%), determined by 3-dB axial ratio, are achieved at both bands. The measured gains at 1227 and 1575 MHz are about 6 and 7 dBi, respectively, with the loss of substrate taken into consideration. Both simulated and measured results are presented. The method of varying frequency ratio is also discussed.", "title": "" }, { "docid": "8e794530be184686a49e5ced6ac6521d", "text": "A key feature of the immune system is its ability to induce protective immunity against pathogens while maintaining tolerance towards self and innocuous environmental antigens. Recent evidence suggests that by guiding cells to and within lymphoid organs, CC-chemokine receptor 7 (CCR7) essentially contributes to both immunity and tolerance. This receptor is involved in organizing thymic architecture and function, lymph-node homing of naive and regulatory T cells via high endothelial venules, as well as steady state and inflammation-induced lymph-node-bound migration of dendritic cells via afferent lymphatics. 
Here, we focus on the cellular and molecular mechanisms that enable CCR7 and its two ligands, CCL19 and CCL21, to balance immunity and tolerance.", "title": "" }, { "docid": "eb6823bcc7e01dbdc9a21388bde0ce4f", "text": "This paper extends previous research on two approaches to human-centred automation: (1) intermediate levels of automation (LOAs) for maintaining operator involvement in complex systems control and facilitating situation awareness; and (2) adaptive automation (AA) for managing operator workload through dynamic control allocations between the human and machine over time. Some empirical research has been conducted to examine LOA and AA independently, with the objective of detailing a theory of human-centred automation. Unfortunately, no previous work has studied the interaction of these two approaches, nor has any research attempted to systematically determine which LOAs should be used in adaptive systems and how certain types of dynamic function allocations should be scheduled over time. The present research briefly reviews the theory of humancentred automation and LOA and AA approaches. Building on this background, an initial study was presented that attempts to address the conjuncture of these two approaches to human-centred automation. An experiment was conducted in which a dual-task scenario was used to assess the performance, SA and workload effects of low, intermediate and high LOAs, which were dynamically allocated (as part of an AA strategy) during manual system control for various cycle times comprising 20, 40 and 60% of task time. The LOA and automation allocation cycle time (AACT) combinations were compared to completely manual control and fully automated control of a dynamic control task performed in conjunction with an embedded secondary monitoring task. Results revealed LOA to be the driving factor in determining primary task performance and SA. Low-level automation produced superior performance and intermediate LOAs facilitated higher SA, but this was not associated with improved performance or reduced workload. The AACT was the driving factor in perceptions of primary task workload and secondary task performance. When a greater percentage of primary task time was automated, operator perceptual resources were freed-up and monitoring performance on the secondary task improved. Longer automation cycle times than have previously been studied may have benefits for overall human–machine system performance. The combined effect of LOA and AA on all measures did not appear to be ‘additive’ in nature. That is, the LOA producing the best performance (low level automation) did not do so at the AACT, which produced superior performance (maximum cycle time). In general, the results are supportive of intermediate LOAs and AA as approaches to human-centred automation, but each appears to provide different benefits to human–machine system performance. This work provides additional information for a developing theory of human-centred automation. Theor. Issues in Ergon. Sci., 2003, 1–40, preview article", "title": "" }, { "docid": "2fe1ed0f57e073372e4145121e87d7c6", "text": "Information visualization (InfoVis), the study of transforming data, information, and knowledge into interactive visual representations, is very important to users because it provides mental models of information. The boom in big data analytics has triggered broad use of InfoVis in a variety of domains, ranging from finance to sports to politics. 
In this paper, we present a comprehensive survey and key insights into this fast-rising area. The research on InfoVis is organized into a taxonomy that contains four main categories, namely empirical methodologies, user interactions, visualization frameworks, and applications, which are each described in terms of their major goals, fundamental principles, recent trends, and state-of-the-art approaches. At the conclusion of this survey, we identify existing technical challenges and propose directions for future research.", "title": "" }, { "docid": "a28c252f9f3e96869c72e6e41146b5bc", "text": "Technically, a feature represents a distinguishing property, a recognizable measurement, and a functional component obtained from a section of a pattern. Extracted features are meant to minimize the loss of important information embedded in the signal. In addition, they also simplify the amount of resources needed to describe a huge set of data accurately. This is necessary to minimize the complexity of implementation, to reduce the cost of information processing, and to cancel the potential need to compress the information. More recently, a variety of methods have been widely used to extract the features from EEG signals, among these methods are time frequency distributions (TFD), fast fourier transform (FFT), eigenvector methods (EM), wavelet transform (WT), and auto regressive method (ARM), and so on. In general, the analysis of EEG signal has been the subject of several studies, because of its ability to yield an objective mode of recording brain stimulation which is widely used in brain-computer interface researches with application in medical diagnosis and rehabilitation engineering. The purposes of this paper, therefore, shall be discussing some conventional methods of EEG feature extraction methods, comparing their performances for specific task, and finally, recommending the most suitable method for feature extraction based on performance.", "title": "" }, { "docid": "040329beb0f4688ced46d87a51dac169", "text": "We present a characterization methodology for fast direct measurement of the charge accumulated on Floating Gate (FG) transistors of Flash EEPROM cells. Using a Scanning Electron Microscope (SEM) in Passive Voltage Contrast (PVC) mode we were able to distinguish between '0' and '1' bit values stored in each memory cell. Moreover, it was possible to characterize the remaining charge on the FG; thus making this technique valuable for Failure Analysis applications for data retention measurements in Flash EEPROM. The technique is at least two orders of magnitude faster than state-of-the-art Scanning Probe Microscopy (SPM) methods. Only a relatively simple backside sample preparation is necessary for accessing the FG of memory transistors. The technique presented was successfully implemented on a 0.35 μm technology node microcontroller and a 0.21 μm smart card integrated circuit. We also show the ease of such technique to cover all cells of a memory (using intrinsic features of SEM) and to automate memory cells characterization using standard image processing technique.", "title": "" }, { "docid": "067e24b29aae26865c858d6b8e60b135", "text": "In this paper, we present an optimization path of stress memorization technique (SMT) for 45nm node and below using a nitride capping layer. We demonstrate that the understanding of coupling between nitride properties, dopant activation and poly-silicon gate mechanical stress allows enhancing nMOS performance by 7% without pMOS degradation. 
In contrast to previously reported works on SMT (Chen et al., 2004) - (Singh et al., 2005), a low-cost process compatible with consumer electronics requirements has been successfully developed", "title": "" }, { "docid": "715fda02bad1633be9097cc0a0e68c8d", "text": "Data uncertainty is common in real-world applications due to various causes, including imprecise measurement, network latency, outdated sources and sampling errors. These kinds of uncertainty have to be handled cautiously, or else the mining results could be unreliable or even wrong. In this paper, we propose a new rule-based classification and prediction algorithm called uRule for classifying uncertain data. This algorithm introduces new measures for generating, pruning and optimizing rules. These new measures are computed considering uncertain data interval and probability distribution function. Based on the new measures, the optimal splitting attribute and splitting value can be identified and used for classification and prediction. The proposed uRule algorithm can process uncertainty in both numerical and categorical data. Our experimental results show that uRule has excellent performance even when data is highly uncertain.", "title": "" }, { "docid": "26b13a3c03014fc910ed973c264e4c9d", "text": "Deep convolutional neural networks (CNNs) have shown great potential for numerous real-world machine learning applications, but performing inference in large CNNs in real-time remains a challenge. We have previously demonstrated that traditional CNNs can be converted into deep spiking neural networks (SNNs), which exhibit similar accuracy while reducing both latency and computational load as a consequence of their data-driven, event-based style of computing. Here we provide a novel theory that explains why this conversion is successful, and derive from it several new tools to convert a larger and more powerful class of deep networks into SNNs. We identify the main sources of approximation errors in previous conversion methods, and propose simple mechanisms to fix these issues. Furthermore, we develop spiking implementations of common CNN operations such as max-pooling, softmax, and batch-normalization, which allow almost loss-less conversion of arbitrary CNN architectures into the spiking domain. Empirical evaluation of different network architectures on the MNIST and CIFAR10 benchmarks leads to the best SNN results reported to date.", "title": "" }, { "docid": "82119f5c85eaa2c4a76b2c7b0561375c", "text": "A system is described that integrates vision and tactile sensing in a robotics environment to perform object recognition tasks. It uses multiple sensor systems (active touch and passive stereo vision) to compute three dimensional primitives that can be matched against a model data base of complex curved surface objects containing holes and cavities. The low level sensing elements provide local surface and feature matches which arc constrained by relational criteria embedded in the models. Once a model has been invoked, a verification procedure establishes confidence measures for a correct recognition. The three dimen* sional nature of the sensed data makes the matching process more robust as does the system's ability to sense visually occluded areas with touch. The model is hierarchic in nature and allows matching at different levels to provide support or inhibition for recognition. 1. 
INTRODUCTION Robotic systems are being designed and built to perform complex tasks such as object recognition, grasping, parts manipulation, inspection and measurement. In the case of object recognition, many systems have been designed that have tried to exploit a single sensing modality [1,2,3,4,5,6]. Single sensor systems are necessarily limited in their power. The approach described here to overcome the inherent limitations of a single sensing modality is to integrate multiple sensing modalities (passive stereo vision and active tactile sensing) for object recognition. The advantages of multiple sensory systems in a task like this are many. Multiple sensor systems supply redundant and complementary kinds of data that can be integrated to create a more coherent understanding of a scene. The inclusion of multiple sensing systems is becoming more apparent as research continues in distributed systems and parallel approaches to problem solving. The redundancy and support for a hypothesis that comes from more than one sensing subsystem is important in establishing confidence measures during a recognition process, just as the disagreement between two sensors will inhibit a hypothesis and point to possible sensing or reasoning error. The complementary nature of these sensors allows more powerful matching primitives to be used. The primitives that are the outcome of sensing with these complementary sensors are three dimensional in nature, providing stronger invariants and a more natural way to recognize objects which are also three dimensional in nature [7].", "title": "" }, { "docid": "ed22fe0d13d4450005abe653f41df2c0", "text": "Polycystic ovary syndrome (PCOS) is a complex endocrine disorder affecting 5-10% of women of reproductive age. It generally manifests with oligo/anovulatory cycles, hirsutism and polycystic ovaries, together with a considerable prevalence of insulin resistance. Although the aetiology of the syndrome is not completely understood yet, PCOS is considered a multifactorial disorder with various genetic, endocrine and environmental abnormalities. Moreover, PCOS patients have a higher risk of metabolic and cardiovascular diseases and their related morbidity, if compared to the general population.", "title": "" }, { "docid": "d07281bab772b6ba613f9526d418661e", "text": "GSM (Global System for Mobile Communications) 1800 licenses were granted at the beginning of the 2000s in Turkey. Especially in the installation phase of the wireless telecom services, fraud usage can be an important source of revenue loss. Fraud can be defined as a dishonest or illegal use of services, with the intention to avoid service charges. Fraud detection is the name of the activities to identify unauthorized usage and prevent losses for the mobile network operators. Mobile phone users' intentions may be predicted from the call detail records (CDRs) by using data mining (DM) techniques. This study compares various data mining techniques to obtain the best practical solution for telecom fraud detection and offers the Adaptive Neuro Fuzzy Inference (ANFIS) method as a means to efficient fraud detection. In the test run, it is shown that ANFIS has provided sensitivity of 97% and specificity of 99%, where it classified 98.33% of the instances correctly.", "title": "" }, { "docid": "0e2a2a32923d8e9fa5779e80e6090dba", "text": "The most powerful and common approach to countering the threats to network / information security is encryption [1]. 
Even though it is very powerful, the cryptanalysts are very intelligent and they were working day and night to break the ciphers. To make a stronger cipher it is recommended that to use: More stronger and complicated encryption algorithms, Keys with more number of bits (Longer keys), larger block size as input to process, use authentication and confidentiality and secure transmission of keys. It is for sure that if we follow all the mentioned principles we can make a very stronger cipher. With this we have the following problems: It is a time consuming process for both encryption and decryption, It is difficult for the crypt analyzer to analyze the problem. Also suffers with the problems in the existing system. The main objective of this paper is to solve all these problems and to bring the revolution in the Network security with a new substitution technique [3] is ‘color substitution technique’ and named as a “Play color cipher”.", "title": "" } ]
scidocsrr
155777b9568aa560cf4167a14c89cb13
Probabilistic Relations between Words : Evidence from Reduction in Lexical Production
[ { "docid": "187595fb12a5ca3bd665ffbbc9f47465", "text": "In order to acquire a lexicon, young children must segment speech into words, even though most words are unfamiliar to them. This is a non-trivial task because speech lacks any acoustic analog of the blank spaces between printed words. Two sources of information that might be useful for this task are distributional regularity and phonotactic constraints. Informally, distributional regularity refers to the intuition that sound sequences that occur frequently and in a variety of contexts are better candidates for the lexicon than those that occur rarely or in few contexts. We express that intuition formally by a class of functions called DR functions. We then put forth three hypotheses: First, that children segment using DR functions. Second, that they exploit phonotactic constraints on the possible pronunciations of words in their language. Specifically, they exploit both the requirement that every word must have a vowel and the constraints that languages impose on word-initial and word-final consonant clusters. Third, that children learn which word-boundary clusters are permitted in their language by assuming that all permissible word-boundary clusters will eventually occur at utterance boundaries. Using computational simulation, we investigate the effectiveness of these strategies for segmenting broad phonetic transcripts of child-directed English. The results show that DR functions and phonotactic constraints can be used to significantly improve segmentation. Further, the contributions of DR functions and phonotactic constraints are largely independent, so using both yields better segmentation than using either one alone. Finally, learning the permissible word-boundary clusters from utterance boundaries does not degrade segmentation performance.", "title": "" } ]
[ { "docid": "ca7269b97464c9b78aa0cb6727926e28", "text": "This paper argues that there has not been enough discussion in the field of applications of Gaussian Process for the fast moving consumer goods industry. Yet, this technique can be important as it e.g., can provide automatic feature relevance determination and the posterior mean can unlock insights on the data. Significant challenges are the large size and high dimensionality of commercial data at a point of sale. The study reviews approaches in the Gaussian Processes modeling for large data sets, evaluates their performance on commercial sales and shows value of this type of models as a decision-making tool for management.", "title": "" }, { "docid": "def621d47a8ead24754b1eebe590314a", "text": "Existing social-aware routing protocols for packet switched networks make use of the information about the social structure of the network deduced by state information of nodes (e.g., history of past encounters) to optimize routing. Although these approaches are shown to have superior performance to social-oblivious, stateless routing protocols (BinarySW, Epidemic), the improvement comes at the cost of considerable storage overhead required on the nodes. In this paper we present SANE, the first routing mechanism that combines the advantages of both social-aware and stateless approaches. SANE is based on the observation - that we validate on a real-world trace - that individuals with similar interests tend to meet more often. In SANE, individuals (network members) are characterized by their interest profile, a compact representation of their interests. By implementing a simple routing rule based on interest profile similarity, SANE is free of network state information, thus overcoming the storage capacity problem with existing social-aware approaches. Through thorough experiments, we show the superiority of SANE over existing approaches, both stateful, social-aware and stateless, social-oblivious. We discuss the statelessness of our approach in the supplementary file, which can be found on the Computer Society Digital Library at http://doi.ieeecomputersociety.org/10.1109/TPDS.2014.2307857, of this manuscript. Our interest-based approach easily enables innovative networking services, such as interest-casting. An interest-casting protocol is also introduced in this paper, and evaluated through experiments based on both real-world and synthetic mobility traces.", "title": "" }, { "docid": "ebaedd43e151f13d1d4d779284af389d", "text": "This paper presents the state of art techniques in recommender systems (RS). The various techniques are diagrammatically illustrated which on one hand helps a naïve researcher in this field to accommodate the on-going researches and establish a strong base, on the other hand it focuses on different categories of the recommender systems with deep technical discussions. The review studies on RS are highlighted which helps in understanding the previous review works and their directions. 8 different main categories of recommender techniques and 19 sub categories have been identified and stated. Further, soft computing approach for recommendation is emphasized which have not been well studied earlier. The major problems of the existing area is reviewed and presented from different perspectives. 
However, solutions to these issues are rarely discussed in the previous works; in this study, future directions for possible solutions are also addressed.", "title": "" }, { "docid": "43ee3d818b528081aadf6abdc23650fa", "text": "Cloud computing has become an increasingly important research topic given the strong evolution and migration of many network services to such a computational environment. The problem that arises is related to the efficient management and utilization of the large amounts of computing resources. This paper begins with a brief retrospect of traditional scheduling, followed by a detailed review of metaheuristic algorithms for solving the scheduling problems by placing them in a unified framework. Armed with these two technologies, this paper surveys the most recent literature about metaheuristic scheduling solutions for the cloud. In addition to applications using metaheuristics, some important issues and open questions are presented for the reference of future research on scheduling for the cloud.", "title": "" }, { "docid": "8b1b0ee79538a1f445636b0798a0c7ca", "text": "Much of the current activity in the area of intelligent vehicle-highway systems (IVHS) focuses on one simple objective: to collect more data. Clearly, improvements in sensor technology and communication systems will allow transportation agencies to more closely monitor the condition of the surface transportation system. However, monitoring alone cannot improve the safety or efficiency of the system. It is imperative that surveillance data be used to manage the system in a proactive rather than a reactive manner. Proactive traffic management will require the ability to predict traffic conditions. Previous predictive modeling approaches can be grouped into three categories: (a) historical, data-based algorithms; (b) time-series models; and (c) simulations. A relatively new mathematical model, the neural network, offers an attractive alternative because neural networks can model undefined, complex nonlinear surfaces. In a comparison of a backpropagation neural network model with the more traditional approaches of an historical, data-based algorithm and a time-series model, the backpropagation model was clearly superior, although all three models did an adequate job of predicting future traffic volumes. The backpropagation model was more responsive to dynamic conditions than the historical, data-based algorithm, and it did not experience the lag and overprediction characteristics of the time-series model. Given these advantages and the backpropagation model's ability to run in a parallel computing environment, it appears that such neural network prediction models hold considerable potential for use in real-time IVHS applications.", "title": "" }, { "docid": "4688caf6a80463579f293b2b762da5b5", "text": "To accelerate the implementation of network functions/middle boxes and reduce the deployment cost, recently, the concept of network function virtualization (NFV) has emerged and become a topic of much interest attracting the attention of researchers from both industry and academia. Unlike the traditional implementation of network functions, a software-oriented approach for virtual network functions (VNFs) creates more flexible and dynamic network services to meet a more diversified demand. 
Software-oriented network functions bring along a series of research challenges, such as VNF management and orchestration, service chaining, VNF scheduling for low latency and efficient virtual network resource allocation with NFV infrastructure, among others. In this paper, we study the VNF scheduling problem and the corresponding resource optimization solutions. Here, the VNF scheduling problem is defined as a series of scheduling decisions for network services on network functions and activating the various VNFs to process the arriving traffic. We consider VNF transmission and processing delays and formulate the joint problem of VNF scheduling and traffic steering as a mixed integer linear program. Our objective is to minimize the makespan/latency of the overall VNFs' schedule. Reducing the scheduling latency enables cloud operators to service (and admit) more customers, and cater to services with stringent delay requirements, thereby increasing operators' revenues. Owing to the complexity of the problem, we develop a genetic algorithm-based method for solving the problem efficiently. Finally, the effectiveness of our heuristic algorithm is verified through numerical evaluation. We show that dynamically adjusting the bandwidths on virtual links connecting virtual machines, hosting the network functions, reduces the schedule makespan by 15%-20% in the simulated scenarios.", "title": "" }, { "docid": "6bc2837d4d1da3344f901a6d7d8502b5", "text": "Many researchers and professionals have reported nonsubstance addiction to online entertainments in adolescents. However, very few scales have been designed to assess problem Internet use in this population, in spite of their high exposure and obvious vulnerability. The aim of this study was to review the currently available scales for assessing problematic Internet use and to validate a new scale of this kind for use, specifically in this age group, the Problematic Internet Entertainment Use Scale for Adolescents. The research was carried out in Spain in a gender-balanced sample of 1131 high school students aged between 12 and 18 years. Psychometric analyses showed the scale to be unidimensional, with excellent internal consistency (Cronbach's alpha of 0.92), good construct validity, and positive associations with alternative measures of maladaptive Internet use. This self-administered scale can rapidly measure the presence of symptoms of behavioral addiction to online videogames and social networking sites, as well as their degree of severity. The results estimate the prevalence of this problematic behavior in Spanish adolescents to be around 5 percent.", "title": "" }, { "docid": "95afd1d83b5641a7dff782588348d2ec", "text": "Intensive repetitive therapy improves function and quality of life for stroke patients. Intense therapies to overcome upper extremity impairment are beneficial, however, they are expensive because, in part, they rely on individualized interaction between the patient and rehabilitation specialist. The development of a pneumatic muscle driven hand therapy device, the Mentor/spl trade/, reinforces the need for volitional activation of joint movement while concurrently offering knowledge of results about range of motion, muscle activity or resistance to movement. 
The device is well tolerated and has received favorable comments from stroke survivors, their caregivers, and therapists.", "title": "" }, { "docid": "f767e0a9711522b06b8d023453f42f3a", "text": "A novel low-cost method for generating circular polarization in a dielectric resonator antenna is proposed. The antenna comprises four rectangular dielectric layers, each one being rotated by an angle of 30 ° relative to its adjacent layers. Utilizing such an approach has provided a circular polarization over a bandwidth of 6% from 9.55 to 10.15 GHz. This has been achieved in conjunction with a 21% impedance-matching bandwidth over the same frequency range. Also, the radiation efficiency of the proposed circularly polarized dielectric resonator antenna is 93% in this frequency band of operation", "title": "" }, { "docid": "54a06cb39007b18833f191aeb7c600d7", "text": "Mobile ad-hoc networks (MANETs) and wireless sensor networks (WSNs) have gained remarkable appreciation and technological development over the last few years. Despite ease of deployment, tremendous applications and significant advantages, security has always been a challenging issue due to the nature of environments in which nodes operate. Nodes’ physical capture, malicious or selfish behavior cannot be detected by traditional security schemes. Trust and reputation based approaches have gained global recognition in providing additional means of security for decision making in sensor and ad-hoc networks. This paper provides an extensive literature review of trust and reputation based models both in sensor and ad-hoc networks. Based on the mechanism of trust establishment, we categorize the state-of-the-art into two groups namely node-centric trust models and system-centric trust models. Based on trust evidence, initialization, computation, propagation and weight assignments, we evaluate the efficacy of the existing schemes. Finally, we conclude our discussion with identification of some unresolved issues in pursuit of trust and reputation management.", "title": "" }, { "docid": "81919bc432dd70ed3e48a0122d91b9e4", "text": "Artemisinin resistance in Plasmodium falciparum has emerged as a major threat for malaria control and elimination worldwide. Mutations in the Kelch propeller domain of PfK13 are the only known molecular markers for artemisinin resistance in this parasite. Over 100 non-synonymous mutations have been identified in PfK13 from various malaria endemic regions. This study aimed to investigate the genetic diversity of PvK12, the Plasmodium vivax ortholog of PfK13, in parasite populations from Southeast Asia, where artemisinin resistance in P. falciparum has emerged. The PvK12 sequences in 120 P. vivax isolates collected from Thailand (22), Myanmar (32) and China (66) between 2004 and 2008 were obtained and 353 PvK12 sequences from worldwide populations were retrieved for further analysis. These PvK12 sequences revealed a very low level of genetic diversity (π = 0.00003) with only three single nucleotide polymorphisms (SNPs). Of these three SNPs, only G581R is nonsynonymous. The synonymous mutation S88S is present in 3% (1/32) of the Myanmar samples, while G704G and G581R are present in 1.5% (1/66) and 3% (2/66) of the samples from China, respectively. None of the mutations observed in the P. vivax samples were associated with artemisinin resistance in P. falciparum. Furthermore, analysis of 473 PvK12 sequences from twelve worldwide P. 
vivax populations confirmed the very limited polymorphism in this gene and detected only five distinct haplotypes. The PvK12 sequences from global P. vivax populations displayed very limited genetic diversity indicating low levels of baseline polymorphisms of PvK12 in these areas.", "title": "" }, { "docid": "8582c4a040e4dec8fd141b00eaa45898", "text": "Emerging airborne networks require domain-specific routing protocols to cope with the challenges faced by the highly-dynamic aeronautical environment. We present an ns-3 based performance comparison of the AeroRP protocol with conventional MANET routing protocols. To simulate a highly-dynamic airborne network, accurate mobility models are needed for the physical movement of nodes. The fundamental problem with many synthetic mobility models is their random, memoryless behavior. Airborne ad hoc networks require a flexible memory-based 3-dimensional mobility model. Therefore, we have implemented a 3-dimensional Gauss-Markov mobility model in ns-3 that appears to be more realistic than memoryless models such as random waypoint and random walk. Using this model, we are able to simulate the airborne networking environment with greater realism than was previously possible and show that AeroRP has several advantages over other MANET routing protocols.", "title": "" }, { "docid": "cc58f5adcf4cb0aa1feac0ef96c452b5", "text": "Machine-learning algorithms have shown outstanding image recognition/classification performance for computer vision applications. However, the compute and energy requirement for implementing such classifier models for large-scale problems is quite high. In this paper, we propose feature driven selective classification (FALCON) inspired by the biological visual attention mechanism in the brain to optimize the energy-efficiency of machine-learning classifiers. We use the consensus in the characteristic features (color/texture) across images in a dataset to decompose the original classification problem and construct a tree of classifiers (nodes) with a generic-to-specific transition in the classification hierarchy. The initial nodes of the tree separate the instances based on feature information and selectively enable the latter nodes to perform object specific classification. The proposed methodology allows selective activation of only those branches and nodes of the classification tree that are relevant to the input while keeping the remaining nodes idle. Additionally, we propose a programmable and scalable neuromorphic engine (NeuE) that utilizes arrays of specialized neural computational elements to execute the FALCON-based classifier models for diverse datasets. The structure of FALCON facilitates the reuse of nodes while scaling up from small classification problems to larger ones thus allowing us to construct classifier implementations that are significantly more efficient. We evaluate our approach for a 12-object classification task on the Caltech101 dataset and ten-object task on CIFAR-10 dataset by constructing FALCON models on the NeuE platform in 45-nm technology. Our results demonstrate up to 3.66× improvement in energy-efficiency for no loss in output quality, and even higher improvements of up to 5.91× with 3.9% accuracy loss compared to an optimized baseline network. 
In addition, FALCON shows an improvement in training time of up to 1.96× as compared to the traditional classification approach.", "title": "" }, { "docid": "78f2e1fc79a9c774e92452631d6bce7a", "text": "Adders are a basic and integral part of arithmetic circuits. The adders have been realized with two styles: fixed stage size and variable stage size. In this paper, fixed stage and variable stage carry skip adder configurations have been analyzed and then a new 16-bit high speed variable stage carry skip adder is proposed by modifying the existing structure. The proposed adder has seven stages, where the first and last stages are of 1 bit each; the stage size increases steadily up to the middle stage, which is the bulkiest and hence is the nucleus stage. The delay and power consumption in the proposed adder are reduced by 61.75% and 8%, respectively. The proposed adder is implemented and simulated using 90 nm CMOS technology in Cadence Virtuoso. It is pertinent to mention that the delay improvement in the proposed adder has been achieved without any increase in power consumption and circuit complexity. The adder proposed in this work is suitable for high speed and low power VLSI based arithmetic circuits.", "title": "" }, { "docid": "b8fa0ff5dc0b700c1f7dd334639572ec", "text": "This paper discusses an ongoing project that serves the needs of people with physical disabilities at home. It uses Bluetooth technology to establish communication between the user's smartphone and the controller board. The prototype supports manual control and microcontroller control to lock and unlock the home door. By connecting the circuit to a relay board and to the Arduino controller board, it can be controlled over Bluetooth to provide remote access from a tablet or smartphone. This paper addresses the development and the functionality of the Android-based application (Android app) to assist disabled people in gaining control of their living area.", "title": "" }, { "docid": "5bece01bed7c5a9a2433d95379882a37", "text": "The polarization of electromagnetic signals is an important feature in the design of modern radar and telecommunications. Standard electromagnetic theory readily shows that a linearly polarized plane wave propagating in free space consists of two equal but counter-rotating components of circular polarization. In magnetized media, these circular modes can be arranged to produce the nonreciprocal propagation effects that are the basic properties of isolator and circulator devices. Independent phase control of right-hand (+) and left-hand (–) circular waves is accomplished by splitting their propagation velocities through differences in the ε±μ± parameter. A phenomenological analysis of the permeability μ and permittivity ε in dispersive media serves to introduce the corresponding magnetic- and electric-dipole mechanisms of interaction length with the propagating signal. As an example of permeability dispersion, a Lincoln Laboratory quasi-optical Faraday-rotation isolator circulator at 35 GHz (λ ~ 1 cm) with a garnet-ferrite rotator element is described. At infrared wavelengths (λ = 1.55 μm), where fiber-optic laser sources also require protection by passive isolation of the Faraday-rotation principle, ε rather than μ provides the dispersion, and the frequency is limited to the quantum energies of the electric-dipole atomic transitions peculiar to the molecular structure of the magnetic garnet. 
For optimum performance, bismuth additions to the garnet chemical formula are usually necessary. Spectroscopic and molecular theory models developed at Lincoln Laboratory to explain the bismuth effects are reviewed. In a concluding section, proposed advances in present technology are discussed in the context of future radar and telecommunications challenges.", "title": "" }, { "docid": "1ff5526e4a18c1e59b63a3de17101b11", "text": "Plug-in electric vehicles (PEVs) are equipped with onboard level-1 or level-2 chargers for home overnight or office daytime charging. In addition, off-board chargers can provide fast charging for traveling long distances. However, off-board high-power chargers are bulky, expensive, and require comprehensive evolution of charging infrastructures. An integrated onboard charger capable of fast charging of PEVs will combine the benefits of both the conventional onboard and off-board chargers, without additional weight, volume, and cost. In this paper, an innovative single-phase integrated charger, using the PEV propulsion machine and its traction converter, is introduced. The charger topology is capable of power factor correction and battery voltage/current regulation without any bulky add-on components. Ac machine windings are utilized as mutually coupled inductors, to construct a two-channel interleaved boost converter. The circuit analyses of the proposed technology, based on a permanent magnet synchronous machine (PMSM), are discussed in details. Experimental results of a 3-kW proof-of-concept prototype are carried out using a ${\\textrm{220-V}}_{{\\rm{rms}}}$, 3-phase, 8-pole PMSM. A nearly unity power factor and 3.96% total harmonic distortion of input ac current are acquired with a maximum efficiency of 93.1%.", "title": "" }, { "docid": "47db0fdd482014068538a00f7dc826a9", "text": "Importance\nThe use of palliative care programs and the number of trials assessing their effectiveness have increased.\n\n\nObjective\nTo determine the association of palliative care with quality of life (QOL), symptom burden, survival, and other outcomes for people with life-limiting illness and for their caregivers.\n\n\nData Sources\nMEDLINE, EMBASE, CINAHL, and Cochrane CENTRAL to July 2016.\n\n\nStudy Selection\nRandomized clinical trials of palliative care interventions in adults with life-limiting illness.\n\n\nData Extraction and Synthesis\nTwo reviewers independently extracted data. Narrative synthesis was conducted for all trials. Quality of life, symptom burden, and survival were analyzed using random-effects meta-analysis, with estimates of QOL translated to units of the Functional Assessment of Chronic Illness Therapy-palliative care scale (FACIT-Pal) instrument (range, 0-184 [worst-best]; minimal clinically important difference [MCID], 9 points); and symptom burden translated to the Edmonton Symptom Assessment Scale (ESAS) (range, 0-90 [best-worst]; MCID, 5.7 points).\n\n\nMain Outcomes and Measures\nQuality of life, symptom burden, survival, mood, advance care planning, site of death, health care satisfaction, resource utilization, and health care expenditures.\n\n\nResults\nForty-three RCTs provided data on 12 731 patients (mean age, 67 years) and 2479 caregivers. Thirty-five trials used usual care as the control, and 14 took place in the ambulatory setting. 
In the meta-analysis, palliative care was associated with statistically and clinically significant improvements in patient QOL at the 1- to 3-month follow-up (standardized mean difference, 0.46; 95% CI, 0.08 to 0.83; FACIT-Pal mean difference, 11.36] and symptom burden at the 1- to 3-month follow-up (standardized mean difference, -0.66; 95% CI, -1.25 to -0.07; ESAS mean difference, -10.30). When analyses were limited to trials at low risk of bias (n = 5), the association between palliative care and QOL was attenuated but remained statistically significant (standardized mean difference, 0.20; 95% CI, 0.06 to 0.34; FACIT-Pal mean difference, 4.94), whereas the association with symptom burden was not statistically significant (standardized mean difference, -0.21; 95% CI, -0.42 to 0.00; ESAS mean difference, -3.28). There was no association between palliative care and survival (hazard ratio, 0.90; 95% CI, 0.69 to 1.17). Palliative care was associated consistently with improvements in advance care planning, patient and caregiver satisfaction, and lower health care utilization. Evidence of associations with other outcomes was mixed.\n\n\nConclusions and Relevance\nIn this meta-analysis, palliative care interventions were associated with improvements in patient QOL and symptom burden. Findings for caregiver outcomes were inconsistent. However, many associations were no longer significant when limited to trials at low risk of bias, and there was no significant association between palliative care and survival.", "title": "" }, { "docid": "3509f90848c45ad34ebbd30b9d357c29", "text": "Explaining underlying causes or effects about events is a challenging but valuable task. We define a novel problem of generating explanations of a time series event by (1) searching cause and effect relationships of the time series with textual data and (2) constructing a connecting chain between them to generate an explanation. To detect causal features from text, we propose a novel method based on the Granger causality of time series between features extracted from text such as N-grams, topics, sentiments, and their composition. The generation of the sequence of causal entities requires a commonsense causative knowledge base with efficient reasoning. To ensure good interpretability and appropriate lexical usage we combine symbolic and neural representations, using a neural reasoning algorithm trained on commonsense causal tuples to predict the next cause step. Our quantitative and human analysis show empirical evidence that our method successfully extracts meaningful causality relationships between time series with textual features and generates appropriate explanation between them.", "title": "" }, { "docid": "93d40aa40a32edab611b6e8c4a652dbb", "text": "In this paper, we present a detailed design of dynamic video segmentation network (DVSNet) for fast and efficient semantic video segmentation. DVSNet consists of two convolutional neural networks: a segmentation network and a flow network. The former generates highly accurate semantic segmentations, but is deeper and slower. The latter is much faster than the former, but its output requires further processing to generate less accurate semantic segmentations. We explore the use of a decision network to adaptively assign different frame regions to different networks based on a metric called expected confidence score. Frame regions with a higher expected confidence score traverse the flow network. 
Frame regions with a lower expected confidence score have to pass through the segmentation network. We have extensively performed experiments on various configurations of DVSNet, and investigated a number of variants for the proposed decision network. The experimental results show that our DVSNet is able to achieve up to 70.4% mIoU at 19.8 fps on the Cityscape dataset. A high speed version of DVSNet is able to deliver an fps of 30.4 with 63.2% mIoU on the same dataset. DVSNet is also able to reduce up to 95% of the computational workloads.", "title": "" } ]
scidocsrr
b472806a09f6771505be8e7f72361802
Polynomial texture maps
[ { "docid": "5f89fb0df61770e83ca451900b947d43", "text": "We consider the rendering of diffuse objects under distant illumination, as specified by an environment map. Using an analytic expression for the irradiance in terms of spherical harmonic coefficients of the lighting, we show that one needs to compute and use only 9 coefficients, corresponding to the lowest-frequency modes of the illumination, in order to achieve average errors of only 1%. In other words, the irradiance is insensitive to high frequencies in the lighting, and is well approximated using only 9 parameters. In fact, we show that the irradiance can be procedurally represented simply as a quadratic polynomial in the cartesian components of the surface normal, and give explicit formulae. These observations lead to a simple and efficient procedural rendering algorithm amenable to hardware implementation, a prefiltering method up to three orders of magnitude faster than previous techniques, and new representations for lighting design and image-based rendering.", "title": "" } ]
[ { "docid": "8ce3fa727ff12f742727d5b80d8611b9", "text": "Algorithmic approaches endow deep learning systems with implicit bias that helps them generalize even in over-parametrized settings. In this paper, we focus on understanding such a bias induced in learning through dropout, a popular technique to avoid overfitting in deep learning. For single hidden-layer linear neural networks, we show that dropout tends to make the norm of incoming/outgoing weight vectors of all the hidden nodes equal. In addition, we provide a complete characterization of the optimization landscape induced by dropout.", "title": "" }, { "docid": "4d3ed5dd5d4f08c9ddd6c9b8032a77fd", "text": "The purpose of this study was to clarify the efficacy of stress radiography (stress X-P), ultrasonography (US), and magnetic resonance (MR) imaging in the detection of the anterior talofibular ligament (ATFL) injury. Thirty-four patients with ankle sprain were involved. In all patients, Stress X-P, US, MR imaging, and arthroscopy were performed. The arthroscopic results were considered to be the gold standard. The imaging results were compared with the arthroscopic results, and the accuracy calculated. Arthroscopic findings showed ATFL injury in 30 out of 34 cases. The diagnosis of ATFL injury with stress X-P, US, MR imaging were made with an accuracy of 67, 91 and 97%. US and MR imaging demonstrated the same location of the injury as arthroscopy in 63 and 93%. We have clarified the diagnostic value of stress X-P, US, and MR imaging in diagnosis of ATFL injury. We obtained satisfactory results with US and MR imaging.", "title": "" }, { "docid": "86dc000d7e78092a03d03ccd8cb670a0", "text": "Weighted deduction with aggregation is a powerful theoretical formalism that encompasses many NLP algorithms. This paper proposes a declarative specification language, Dyna; gives generalagenda-basedalgorithms for computing weights and gradients; briefly discusses Dyna-to-Dyna program transformations; and shows that a first implementation of a Dyna-to-C++ compiler produces code that is efficient enough for real NLP research, though still several times slower than hand-crafted code.", "title": "" }, { "docid": "3fce18c6e1f909b91f95667a563aa194", "text": "In this paper, we describe an approach to content-based retrieval of medical images from a database, and provide a preliminary demonstration of our approach as applied to retrieval of digital mammograms. Content-based image retrieval (CBIR) refers to the retrieval of images from a database using information derived from the images themselves, rather than solely from accompanying text indices. In the medical-imaging context, the ultimate aim of CBIR is to provide radiologists with a diagnostic aid in the form of a display of relevant past cases, along with proven pathology and other suitable information. CBIR may also be useful as a training tool for medical students and residents. The goal of information retrieval is to recall from a database information that is relevant to the user's query. The most challenging aspect of CBIR is the definition of relevance (similarity), which is used to guide the retrieval machine. In this paper, we pursue a new approach, in which similarity is learned from training examples provided by human observers. Specifically, we explore the use of neural networks and support vector machines to predict the user's notion of similarity. 
Within this framework we propose using a hierarchal learning approach, which consists of a cascade of a binary classifier and a regression module to optimize retrieval effectiveness and efficiency. We also explore how to incorporate online human interaction to achieve relevance feedback in this learning framework. Our experiments are based on a database consisting of 76 mammograms, all of which contain clustered microcalcifications (MCs). Our goal is to retrieve mammogram images containing similar MC clusters to that in a query. The performance of the retrieval system is evaluated using precision-recall curves computed using a cross-validation procedure. Our experimental results demonstrate that: 1) the learning framework can accurately predict the perceptual similarity reported by human observers, thereby serving as a basis for CBIR; 2) the learning-based framework can significantly outperform a simple distance-based similarity metric; 3) the use of the hierarchical two-stage network can improve retrieval performance; and 4) relevance feedback can be effectively incorporated into this learning framework to achieve improvement in retrieval precision based on online interaction with users; and 5) the retrieved images by the network can have predicting value for the disease condition of the query.", "title": "" }, { "docid": "327d071f71bf39bcd171f85746047a02", "text": "Advances in information and communication technologies have led to the emergence of Internet of Things (IoT). In the healthcare environment, the use of IoT technologies brings convenience to physicians and patients as they can be applied to various medical areas (such as constant real-time monitoring, patient information management, medical emergency management, blood information management, and health management). The radio-frequency identification (RFID) technology is one of the core technologies of IoT deployments in the healthcare environment. To satisfy the various security requirements of RFID technology in IoT, many RFID authentication schemes have been proposed in the past decade. Recently, elliptic curve cryptography (ECC)-based RFID authentication schemes have attracted a lot of attention and have been used in the healthcare environment. In this paper, we discuss the security requirements of RFID authentication schemes, and in particular, we present a review of ECC-based RFID authentication schemes in terms of performance and security. Although most of them cannot satisfy all security requirements and have satisfactory performance, we found that there are three recently proposed ECC-based authentication schemes suitable for the healthcare environment in terms of their performance and security.", "title": "" }, { "docid": "cb0803dfd3763199519ff3f4427e1298", "text": "Motion deblurring is a long standing problem in computer vision and image processing. In most previous approaches, the blurred image is modeled as the convolution of a latent intensity image with a blur kernel. However, for images captured by a real camera, the blur convolution should be applied to scene irradiance instead of image intensity and the blurred results need to be mapped back to image intensity via the camera’s response function (CRF). In this paper, we present a comprehensive study to analyze the effects of CRFs on motion deblurring. We prove that the intensity-based model closely approximates the irradiance model at low frequency regions. 
However, at high frequency regions such as edges, the intensity-based approximation introduces large errors and directly applying deconvolution on the intensity image will produce strong ringing artifacts even if the blur kernel is invertible. Based on the approximation error analysis, we further develop a dual-image based solution that captures a pair of sharp/blurred images for both CRF estimation and motion deblurring. Experiments on synthetic and real images validate our theories and demonstrate the robustness and accuracy of our approach.", "title": "" }, { "docid": "3731d6d00291c02913fa102292bf3cad", "text": "Real-world applications of text categorization often require a system to deal with tens of thousands of categories defined over a large taxonomy. This paper addresses the problem with respect to a set of popular algorithms in text categorization, including Support Vector Machines, k-nearest neighbor, ridge regression, linear least square fit and logistic regression. By providing a formal analysis of the computational complexity of each classification method, followed by an investigation on the usage of different classifiers in a hierarchical setting of categorization, we show how the scalability of a method depends on the topology of the hierarchy and the category distributions. In addition, we are able to obtain tight bounds for the complexities by using the power law to approximate category distributions over a hierarchy. Experiments with kNN and SVM classifiers on the OHSUMED corpus are reported on, as concrete examples.", "title": "" }, { "docid": "dbd06c81892bc0535e2648ee21cb00b4", "text": "Half of American workers have a level of education that does not match the level of education required for their job. Of these, a majority are overeducated, i.e. have more schooling than necessary to perform their job (see, e.g., Leuven & Oosterbeek, 2011). In this paper, we use data from the National Longitudinal Survey of Youth 1979 (NLSY79) combined with the pooled 1989-1991 waves of the CPS to provide some of the first evidence regarding the dynamics of overeducation over the life cycle.
Shedding light on this question is key to disentangling the role played by labor market frictions versus other factors such as selection on unobservables, compensating differentials or career mobility prospects. Overall, our results suggest that overeducation is a fairly persistent phenomenon, with 79% of workers remaining overeducated after one year. Initial overeducation also has an impact on wages much later in the career, which points to the existence of scarring effects. Finally, we find some evidence of duration dependence, with a 6.5 point decrease in the exit rate from overeducation after having spent five years overeducated. JEL Classification: J24; I21.", "title": "" }, { "docid": "de59e5e248b5df0f92d7fed8c699d68a", "text": "Most modern devices and cryptoalgorithms are vulnerable to a new class of attack called side-channel attack. It analyses physical parameters of the system in order to get the secret key. The most widespread techniques are simple and differential power attacks in combination with statistical tools. Few studies cover using machine learning methods for pre-processing and key classification tasks. In this paper, we investigate the applicability of machine learning methods and their characteristics. Following theoretical results, we examine power traces of AES encryption with the Support Vector Machines algorithm and decision trees and provide a roadmap for further research.", "title": "" }, { "docid": "c1d8848317256214b76be3adb87a7d49", "text": "We are interested in estimating the average effect of a binary treatment on a scalar outcome. If assignment to the treatment is exogenous or unconfounded, that is, independent of the potential outcomes given covariates, biases associated with simple treatment-control average comparisons can be removed by adjusting for differences in the covariates. Rosenbaum and Rubin (1983) show that adjusting solely for differences between treated and control units in the propensity score removes all biases associated with differences in covariates. Although adjusting for differences in the propensity score removes all the bias, this can come at the expense of efficiency, as shown by Hahn (1998), Heckman, Ichimura and Todd (1998), and Robins, Mark and Newey (1992). We show that weighting by the inverse of a nonparametric estimate of the propensity score, rather than the true propensity score, leads to an efficient estimate of the average treatment effect. We provide intuition for this result by showing that this estimator can be interpreted as an empirical likelihood estimator that efficiently incorporates the information about the propensity score.", "title": "" }, { "docid": "b395aa3ae750ddfd508877c30bae3a38", "text": "This paper presents a technology review of voltage-source-converter topologies for industrial medium-voltage drives. In this highly active area, different converter topologies and circuits have found their application in the market. This paper covers the high-power voltage-source inverter and the most used multilevel-inverter topologies, including the neutral-point-clamped, cascaded H-bridge, and flying-capacitor converters. This paper presents the operating principle of each topology and a review of the most relevant modulation methods, focused mainly on those used by industry. In addition, the latest advances and future trends of the technology are discussed.
It is concluded that the topology and modulation-method selection are closely related to each particular application, leaving a space on the market for all the different solutions, depending on their unique features and limitations like power or voltage level, dynamic performance, reliability, costs, and other technical specifications.", "title": "" }, { "docid": "48432393e1c320c051b59427db0620b5", "text": "The design of removable partial dentures (RPDs) is an important factor for good prognostication. The purpose of this study was to clarify the effectiveness of denture designs and to clarify the component that had high rates of failure and complications. A total of 91 RPDs, worn by 65 patients for 2-10 years, were assessed. Removable partial dentures were classified into four groups: telescopic dentures (TDs), ordinary clasp dentures (ODs), modified clasp dentures (MDs) and combination dentures (CDs). The failure rates of abutment teeth were the highest and those of retainers were the second highest. The failure rates of connectors were generally low, but they increased suddenly after 6 years. Complication and failure rates of denture bases and artificial teeth were generally low. Complication and failure rates of TDs were high at abutment teeth and low at retainers. Complication and failure rates of ODs were high at retainers.", "title": "" }, { "docid": "660e8d6847d06970e37455b60198c6b6", "text": "Usually, if researchers want to understand the research status of any field, they need to browse a great number of related academic publications. Luckily, in order to work more efficiently, automatic document summarization can be applied to take a glance at specific scientific topics. In this paper, we focus on summary generation of citation content. An automatic tool named CitationAS is built, whose three core components are clustering algorithms, label generation and important sentence extraction methods. In experiments, we use bisecting Kmeans, Lingo and STC to cluster retrieved citation content. Then Word2Vec, WordNet and a combination of them are applied to generate cluster labels. Next, we employ two methods, TF-IDF and MMR, to extract important sentences, which are used to generate summaries. Finally, we adopt a gold standard to evaluate summaries obtained from CitationAS. According to the evaluations, we find the best label generation method for each clustering algorithm. We also discover that the combination of Word2Vec and WordNet does not perform well compared with using them separately on the three clustering algorithms. The combination of the Lingo algorithm, the Word2Vec label generation method and the TF-IDF sentence extraction approach achieves the highest summary quality. Conference Topic: Text mining and information extraction", "title": "" }, { "docid": "1c117c63455c2b674798af0e25e3947c", "text": "We are studying the manufacturing performance of semiconductor wafer fabrication plants in the US, Asia, and Europe. There are great similarities in production equipment, manufacturing processes, and products produced at semiconductor fabs around the world. However, detailed comparisons over multi-year intervals show that important quantitative indicators of productivity, including defect density (yield), major equipment production rates, wafer throughput time, and effective new process introduction to manufacturing, vary by factors of 3 to as much as 5 across an international sample of 28 fabs.
We conduct on-site observations, and interviews with manufacturing personnel at all levels from operator to general manager, to better understand reasons for the observed wide variations in performance. We have identified important factors in the areas of information systems, organizational practices, process and technology improvements, and production control that correlate strongly with high productivity. Optimum manufacturing strategy is different for commodity products, high-value proprietary products, and foundry business.", "title": "" }, { "docid": "570e03101ae116e2f20ab6337061ec3f", "text": "This study explored the potential for using seed cake from hemp (Cannabis sativa L.) as a protein feed for dairy cows. The aim was to evaluate the effects of increasing the proportion of hempseed cake (HC) in the diet on milk production and milk composition. Forty Swedish Red dairy cows were involved in a 5-week dose-response feeding trial. The cows were allocated randomly to one of four experimental diets containing on average 494 g/kg of grass silage and 506 g/kg of concentrate on a dry matter (DM) basis. Diets containing 0 g (HC0), 143 g (HC14), 233 g (HC23) or 318 g (HC32) HC/kg DM were achieved by replacing an increasing proportion of compound pellets with cold-pressed HC. Increasing the proportion of HC resulted in dietary crude protein (CP) concentrations ranging from 126 for HC0 to 195 g CP/kg DM for HC32. Further effects on the composition of the diet with increasing proportions of HC were higher fat and NDF and lower starch concentrations. There were no linear or quadratic effects on DM intake, but increasing the proportion of HC in the diet resulted in linear increases in fat and NDF intake, as well as CP intake (P < 0.001), and a linear decrease in starch intake (P < 0.001). The proportion of HC had significant quadratic effects on the yields of milk, energy-corrected milk (ECM) and milk protein, fat and lactose. The curvilinear response of all yield parameters indicated maximum production from cows fed diet HC14. Increasing the proportion of HC resulted in linear decreases in both milk protein and milk fat concentration (P = 0.005 and P = 0.017, respectively), a linear increase in milk urea (P < 0.001), and a linear decrease in CP efficiency (milk protein/CP intake; P < 0.001). In conclusion, the HC14 diet, corresponding to a dietary CP concentration of 157 g/kg DM, resulted in the maximum yields of milk and ECM by dairy cows in this study.", "title": "" }, { "docid": "b206a5f5459924381ef6c46f692c7052", "text": "The Konstanz Information Miner is a modular environment, which enables easy visual assembly and interactive execution of a data pipeline. It is designed as a teaching, research and collaboration platform, which enables simple integration of new algorithms and tools as well as data manipulation or visualization methods in the form of new modules or nodes. In this paper we describe some of the design aspects of the underlying architecture, briey sketch how new nodes can be incorporated, and highlight some of the new features of version 2.0.", "title": "" }, { "docid": "f83ca1c2732011e9a661f8cf9a0516ac", "text": "We provide a characterization of pseudoentropy in terms of hardness of sampling: Let (X,B) be jointly distributed random variables such that B takes values in a polynomial-sized set. 
We show that B is computationally indistinguishable from a random variable of higher Shannon entropy given X if and only if there is no probabilistic polynomial-time S such that (X,S(X)) has small KL divergence from (X,B). This can be viewed as an analogue of the Impagliazzo Hardcore Theorem (FOCS '95) for Shannon entropy (rather than min-entropy).\n Using this characterization, we show that if f is a one-way function, then (f(Un),Un) has \"next-bit pseudoentropy\" at least n+log n, establishing a conjecture of Haitner, Reingold, and Vadhan (STOC '10). Plugging this into the construction of Haitner et al., this yields a simpler construction of pseudorandom generators from one-way functions. In particular, the construction only performs hashing once, and only needs the hash functions that are randomness extractors (e.g. universal hash functions) rather than needing them to support \"local list-decoding\" (as in the Goldreich--Levin hardcore predicate, STOC '89).\n With an additional idea, we also show how to improve the seed length of the pseudorandom generator to ~{O}(n3), compared to O(n4) in the construction of Haitner et al.", "title": "" }, { "docid": "3b06bc2d72e0ae7fa75873ed70e23fc3", "text": "Transaction traces analysis is a key utility for marketing, trend monitoring, and fraud detection purposes. However, they can also be used for designing and verification of contextual risk management systems for card-present transactions. In this paper, we presented a novel approach to collect detailed transaction traces directly from payment terminal. Thanks to that, it is possible to analyze each transaction step precisely, including its frequency and timing. We also demonstrated our approach to analyze such data based on real-life experiment. Finally, we concluded this paper with important findings for designers of such a system.", "title": "" }, { "docid": "d37f648a06d6418a0e816ce000056136", "text": "Recently it has been shown that policy-gradient methods for reinforcement learning can be utilized to train deep end-to-end systems directly on non-differentiable metrics for the task at hand. In this paper we consider the problem of optimizing image captioning systems using reinforcement learning, and show that by carefully optimizing our systems using the test metrics of the MSCOCO task, significant gains in performance can be realized. Our systems are built using a new optimization approach that we call self-critical sequence training (SCST). SCST is a form of the popular REINFORCE algorithm that, rather than estimating a baseline to normalize the rewards and reduce variance, utilizes the output of its own test-time inference algorithm to normalize the rewards it experiences. Using this approach, estimating the reward signal (as actor-critic methods must do) and estimating normalization (as REINFORCE algorithms typically do) is avoided, while at the same time harmonizing the model with respect to its test-time inference procedure. Empirically we find that directly optimizing the CIDEr metric with SCST and greedy decoding at test-time is highly effective. Our results on the MSCOCO evaluation sever establish a new state-of-the-art on the task, improving the best result in terms of CIDEr from 104.9 to 114.7.", "title": "" } ]
scidocsrr
73029a1266cec9efb2777e1f915c7c94
Predictive positioning and quality of service ridesharing for campus mobility on demand systems
[ { "docid": "40f21a8702b9a0319410b716bda0a11e", "text": "A number of supervised learning methods have been introduced in the last decade. Unfortunately, the last comprehensive empirical evaluation of supervised learning was the Statlog Project in the early 90's. We present a large-scale empirical comparison between ten supervised learning methods: SVMs, neural nets, logistic regression, naive bayes, memory-based learning, random forests, decision trees, bagged trees, boosted trees, and boosted stumps. We also examine the effect that calibrating the models via Platt Scaling and Isotonic Regression has on their performance. An important aspect of our study is the use of a variety of performance criteria to evaluate the learning methods.", "title": "" } ]
[ { "docid": "a75a8a6a149adf80f6ec65dea2b0ec0d", "text": "This research addresses the role of lyrics in the music emotion recognition process. Our approach is based on several state of the art features complemented by novel stylistic, structural and semantic features. To evaluate our approach, we created a ground truth dataset containing 180 song lyrics, according to Russell's emotion model. We conduct four types of experiments: regression and classification by quadrant, arousal and valence categories. Comparing to the state of the art features (ngrams - baseline), adding other features, including novel features, improved the F-measure from 69.9, 82.7 and 85.6 percent to 80.1, 88.3 and 90 percent, respectively for the three classification experiments. To study the relation between features and emotions (quadrants) we performed experiments to identify the best features that allow to describe and discriminate each quadrant. To further validate these experiments, we built a validation set comprising 771 lyrics extracted from the AllMusic platform, having achieved 73.6 percent F-measure in the classification by quadrants. We also conducted experiments to identify interpretable rules that show the relation between features and emotions and the relation among features. Regarding regression, results show that, comparing to similar studies for audio, we achieve a similar performance for arousal and a much better performance for valence.", "title": "" }, { "docid": "387e02e65ff994691ae8ae95b7c7f69c", "text": "Real world data sets usually have many features, which increases the complexity of data mining task. Feature selection, as a preprocessing step to the data mining, has been shown very effective in reducing dimensionality, removing irrelevant data, increasing learning accuracy, and improving comprehensibility. To find the optimal feature subsets is the aim of feature selection. Rough sets theory provides a mathematical approach to find optimal feature subset, but this approach is time consuming. In this paper, we propose a novel heuristic algorithm based on rough sets theory to find out the feature subset. This algorithm employs appearing frequency of attribute as heuristic information. Experiment results show in most times our algorithm can find out optimal feature subset quickly and efficiently.", "title": "" }, { "docid": "209ff14abd0b16496af29c143b0fa274", "text": "Automated text categorization is an important technique for many web applications, such as document indexing, document filtering, and cataloging web resources. Many different approaches have been proposed for the automated text categorization problem. Among them, centroid-based approaches have the advantages of short training time and testing time due to its computational efficiency. As a result, centroid-based classifiers have been widely used in many web applications. However, the accuracy of centroid-based classifiers is inferior to SVM, mainly because centroids found during construction are far from perfect locations.\n We design a fast Class-Feature-Centroid (CFC) classifier for multi-class, single-label text categorization. In CFC, a centroid is built from two important class distributions: inter-class term index and inner-class term index. CFC proposes a novel combination of these indices and employs a denormalized cosine measure to calculate the similarity score between a text vector and a centroid. 
Experiments on the Reuters-21578 corpus and 20-newsgroup email collection show that CFC consistently outperforms the state-of-the-art SVM classifiers on both micro-F1 and macro-F1 scores. In particular, CFC is more effective and robust than SVM when data is sparse.", "title": "" }, { "docid": "d54ad1a912a0b174d1f565582c6caf1c", "text": "This paper presents a novel design of a smart walker for rehabilitation use by patients in hospitals and rehabilitation centers. The design features a full frame walker that provides secure and stable support while being foldable and compact. It also has smart features such as telecommunication and patient activity monitoring.", "title": "" }, { "docid": "a8f86ab8e448fe7e69e988de67668b96", "text": "Batch Normalization (BN) has proven to be an effective algorithm for deep neural network training by normalizing the input to each neuron and reducing the internal covariate shift. The space of weight vectors in the BN layer can be naturally interpreted as a Riemannian manifold, which is invariant to linear scaling of weights. Following the intrinsic geometry of this manifold provides a new learning rule that is more efficient and easier to analyze. We also propose intuitive and effective gradient clipping and regularization methods for the proposed algorithm by utilizing the geometry of the manifold. The resulting algorithm consistently outperforms the original BN on various types of network architectures and datasets.", "title": "" }, { "docid": "a7373d69f5ff9d894a630cc240350818", "text": "The Capability Maturity Model for Software (CMM), developed by the Software Engineering Institute, and the ISO 9000 series of standards, developed by the International Standards Organization, share a common concern with quality and process management. The two are driven by similar concerns and are intuitively correlated. The purpose of this report is to contrast the CMM and ISO 9001, showing both their differences and their similarities. The results of the analysis indicate that, although an ISO 9001-compliant organization would not necessarily satisfy all of the level 2 key process areas, it would satisfy most of the level 2 goals and many of the level 3 goals. Because there are practices in the CMM that are not addressed in ISO 9000, it is possible for a level 1 organization to receive ISO 9001 registration; similarly, there are areas addressed by ISO 9001 that are not addressed in the CMM. A level 3 organization would have little difficulty in obtaining ISO 9001 certification, and a level 2 organization would have significant advantages in obtaining certification.", "title": "" }, { "docid": "b6aa2f8fcbddb651207b4207f676320d", "text": "Test coverage prediction for board assemblies has an important function in, among others, test engineering, test cost modeling, test strategy definition and product quality estimation. Introducing a method that defines how this coverage is calculated can increase the value of such prediction across the electronics industry. There are three aspects to test coverage calculation: fault modeling, coverage-per-fault and total coverage. An abstraction level for fault categories is introduced, called MPS (material, placement, soldering), that enables us to compare coverage results using different fault models. Additionally, the rule-based fault coverage estimation and the weighted coverage calculation are discussed. This paper was submitted under the ITC Special Board and System Test Call-for-Papers that had an extended due-date.
As such, the full text of the paper was not available in time for inclusion in the general volume of the 2003 ITC Proceedings. The full text is available in the 2003 ITC Proceedings — Board and System Test Track.", "title": "" }, { "docid": "ac1302f482309273d9e61fdf0f093e01", "text": "Retinal vessel segmentation is an indispensable step for automatic detection of retinal diseases with fundoscopic images. Though many approaches have been proposed, existing methods tend to miss fine vessels or allow false positives at terminal branches. Besides under-segmentation, over-segmentation is also problematic when quantitative studies need to measure the precise width of vessels. In this paper, we present a method that generates the precise map of retinal vessels using generative adversarial training. Our method achieves a dice coefficient of 0.829 on the DRIVE dataset and 0.834 on the STARE dataset, which is the state-of-the-art performance on both datasets.", "title": "" }, { "docid": "f355ed837561186cff4e7492470d6ae7", "text": "Notions of Bayesian analysis are reviewed, with emphasis on Bayesian modeling and Bayesian calculation. A general hierarchical model for time series analysis is then presented and discussed. Both discrete time and continuous time formulations are discussed. A brief overview of generalizations of the fundamental hierarchical time series model concludes the article. Much of the Bayesian viewpoint can be argued (as by Jeffreys and Jaynes, for example) as direct application of the theory of probability. In this article the suggested approach for the construction of Bayesian time series models relies on probability theory to provide decompositions of complex joint probability distributions. Specifically, I refer to the familiar factorization of a joint density into an appropriate product of conditionals. Let x and y represent two random variables. I will not differentiate between random variables and their realizations. Also, I will use an increasingly popular generic notation for probability densities: [x] represents the density of x, [x|y] is the conditional density of x given y, and [x, y] denotes the joint density of x and y. In this notation we can write \"Bayes's Theorem\" as [y|x] = [x|y][y]/[x].", "title": "" }, { "docid": "76262c43c175646d7a00e02a7a49ab81", "text": "Self-compassion has been linked to higher levels of psychological well-being. The current study evaluated whether this effect also extends to a more adaptive food intake process. More specifically, this study investigated the relationship between self-compassion and intuitive eating among 322 college women. In order to further clarify the nature of this relationship, this research additionally examined the indirect effects of self-compassion on intuitive eating through the pathways of distress tolerance and body image acceptance and action using both parametric and non-parametric bootstrap resampling analytic procedures. Results based on responses to the self-report measures of the constructs of interest indicated that individual differences in body image acceptance and action (β = .31, p < .001) but not distress tolerance (β = .00, p = .94) helped explain the relationship between self-compassion and intuitive eating. This effect was retained in a subsequent model adjusted for body mass index (BMI) and self-esteem (β = .19, p < .05).
Results provide preliminary support for a complementary perspective on the role of acceptance in the context of intuitive eating to that of existing theory and research. The present findings also suggest the need for additional research as it relates to the development and fostering of self-compassion as well as the potential clinical implications of using acceptance-based interventions for college-aged women currently engaging in or who are at risk for disordered eating patterns.", "title": "" }, { "docid": "e415deac22afd9221995385e681b7f63", "text": "AIM & OBJECTIVES\nThe purpose of this in vitro study was to evaluate and compare the microleakage of pit and fissure sealants after using six different preparation techniques: (a) brush, (b) pumice slurry application, (c) bur, (d) air polishing, (e) air abrasion, and (f) longer etching time.\n\n\nMATERIAL & METHOD\nThe study was conducted on 60 caries-free first premolars extracted for orthodontic purposes. These teeth were randomly assigned to six groups of 10 teeth each. Teeth were prepared using one of six occlusal surface treatments prior to placement of Clinpro (3M ESPE) light-cured sealant. The teeth were thermocycled for 500 cycles and stored in 0.9% normal saline. Teeth were sealed apically and coated with nail varnish 1 mm from the margin and stained in 1% methylene blue for 24 hours. Each tooth was divided buccolingually parallel to the long axis of the tooth, yielding two sections per tooth for analysis. The surfaces were scored from 0 to 2 for the extent of microleakage.\n\n\nSTATISTICAL ANALYSIS\nResults obtained for microleakage were analyzed by using t-tests at the sectional level and the chi-square test and analysis of variance (ANOVA) at the group level.\n\n\nRESULTS\nThe results of the round bur group were significantly superior when compared to all other groups. The application of air polishing and air abrasion showed better results than pumice slurry, bristle brush, and longer etching time. The round bur group was the most successful cleaning and preparing technique. Air polishing and air abrasion produced significantly less microleakage than traditional pumice slurry, bristle brush, and longer etching time.", "title": "" }, { "docid": "3c999f3104ac98b010a2147c7b8ddaa0", "text": "Many Big Data technologies were built to enable the processing of human generated data, setting aside the enormous amount of data generated from Machine-to-Machine (M2M) interactions. M2M interactions create real-time data streams that are much more structured, often in the form of series of event occurrences. In this paper, we provide an overview of the main research issues confronted by existing Complex Event Processing (CEP) techniques, as a starting point for Big Data applications that enable the monitoring of complex event occurrences in M2M interactions.", "title": "" }, { "docid": "77a156afb22bbecd37d0db073ef06492", "text": "While acknowledging the many benefits that cloud computing solutions bring to the world, it is important to note that recent research and studies of these technologies have identified a myriad of potential governance, risk, and compliance (GRC) issues. While industry clearly acknowledges their existence and seeks to address them as much as possible, timing-wise it is still well before the legal framework has been put in place to adequately protect and adequately respond to these new and differing global challenges.
This paper seeks to inform the potential cloud adopter, not only of the perceived great technological benefit, but to also bring to light the potential security, privacy, and related GRC issues which will need to be prioritized, managed, and mitigated before full implementation occurs.", "title": "" }, { "docid": "8308358ee1d9040b3f62b646edcc8578", "text": "The application of GaN on SiC technology to wideband power amplifier MMICs is explored. The unique characteristics of GaN on SiC applied to reactively matched and distributed wideband circuit topologies are discussed, including comparison to GaAs technology. A 2 – 18 GHz 11W power amplifier MMIC is presented as an example.", "title": "" }, { "docid": "29495e389441ff61d5efad10ad38e995", "text": "The natural world is infinitely diverse, yet this diversity arises from a relatively small set of coherent properties and rules, such as the laws of physics or chemistry. We conjecture that biological intelligent systems are able to survive within their diverse environments by discovering the regularities that arise from these rules primarily through unsupervised experiences, and representing this knowledge as abstract concepts. Such representations possess useful properties of compositionality and hierarchical organisation, which allow intelligent agents to recombine a finite set of conceptual building blocks into an exponentially large set of useful new concepts. This paper describes SCAN (Symbol-Concept Association Network), a new framework for learning such concepts in the visual domain. We first use the previously published β-VAE (Higgins et al., 2017a) architecture to learn a disentangled representation of the latent structure of the visual world, before training SCAN to extract abstract concepts grounded in such disentangled visual primitives through fast symbol association. Our approach requires very few pairings between symbols and images and makes no assumptions about the choice of symbol representations. Once trained, SCAN is capable of multimodal bi-directional inference, generating a diverse set of image samples from symbolic descriptions and vice versa. It also allows for traversal and manipulation of the implicit hierarchy of compositional visual concepts through symbolic instructions and learnt logical recombination operations. Such manipulations enable SCAN to invent and learn novel visual concepts through recombination of the few learnt concepts.", "title": "" }, { "docid": "12344e450dbfba01476353e38f83358f", "text": "This paper explores four issues that have emerged from the research on social, cognitive and teaching presence in an online community of inquiry. The early research in the area of online communities of inquiry has raised several issues with regard to the creation and maintenance of social, cognitive and teaching presence that require further research and analysis. The other overarching issue is the methodological validity associated with the community of inquiry framework. The first issue is about shifting social presence from socio-emotional support to a focus on group cohesion (from personal to purposeful relationships). The second issue concerns the progressive development of cognitive presence (inquiry) from exploration to resolution. That is, moving discussion beyond the exploration phase. The third issue has to do with how we conceive of teaching presence (design, facilitation, direct instruction). More specifically, is there an important distinction between facilitation and direct instruction? 
Finally, the methodological issue concerns qualitative transcript analysis and the validity of the coding protocol.", "title": "" }, { "docid": "9b96a97426917b18dab401423e777b92", "text": "Anatomical and biophysical modeling of left atrium (LA) and proximal pulmonary veins (PPVs) is important for clinical management of several cardiac diseases. Magnetic resonance imaging (MRI) allows qualitative assessment of LA and PPVs through visualization. However, there is a strong need for an advanced image segmentation method to be applied to cardiac MRI for quantitative analysis of LA and PPVs. In this study, we address this unmet clinical need by exploring a new deep learning-based segmentation strategy for quantification of LA and PPVs with high accuracy and heightened efficiency. Our approach is based on a multi-view convolutional neural network (CNN) with an adaptive fusion strategy and a new loss function that allows fast and more accurate convergence of the backpropagation based optimization. After training our network from scratch by using more than 60K 2D MRI images (slices), we have evaluated our segmentation strategy on the STACOM 2013 cardiac segmentation challenge benchmark. Qualitative and quantitative evaluations, obtained from the segmentation challenge, indicate that the proposed method achieved the state-of-the-art sensitivity (90%), specificity (99%), precision (94%), and efficiency levels (10 seconds in GPU, and 7.5 minutes in CPU).", "title": "" }, { "docid": "0b12d6a973130f7317956326320ded03", "text": "We present simple and computationally efficient nonparametric estimators of Rényi entropy and mutual information based on an i.i.d. sample drawn from an unknown, absolutely continuous distribution over R. The estimators are calculated as the sum of p-th powers of the Euclidean lengths of the edges of the ‘generalized nearest-neighbor’ graph of the sample and the empirical copula of the sample, respectively. For the first time, we prove the almost sure consistency of these estimators and upper bounds on their rates of convergence, the latter under the assumption that the density underlying the sample is Lipschitz continuous. Experiments demonstrate their usefulness in independent subspace analysis.", "title": "" }, { "docid": "e9ff17015d40f5c6dd5091557f336f43", "text": "Web sites that accept and display content such as wiki articles or comments typically filter the content to prevent injected script code from running in browsers that view the site. The diversity of browser rendering algorithms and the desire to allow rich content make filtering quite difficult, however, and attacks such as the Samy and Yamanner worms have exploited filtering weaknesses. This paper proposes a simple alternative mechanism for preventing script injection called Browser-Enforced Embedded Policies (BEEP). The idea is that a web site can embed a policy in its pages that specifies which scripts are allowed to run. The browser, which knows exactly when it will run a script, can enforce this policy perfectly. We have added BEEP support to several browsers, and built tools to simplify adding policies to web applications. We found that supporting BEEP in browsers requires only small and localized modifications, modifying web applications requires minimal effort, and enforcing policies is generally lightweight.", "title": "" }, { "docid": "d37f648a06d6418a0e816ce000056136", "text": "Recently it has been shown that policy-gradient methods for reinforcement learning can be utilized to train deep end-to-end systems directly on non-differentiable metrics for the task at hand. In this paper we consider the problem of optimizing image captioning systems using reinforcement learning, and show that by carefully optimizing our systems using the test metrics of the MSCOCO task, significant gains in performance can be realized. Our systems are built using a new optimization approach that we call self-critical sequence training (SCST). SCST is a form of the popular REINFORCE algorithm that, rather than estimating a baseline to normalize the rewards and reduce variance, utilizes the output of its own test-time inference algorithm to normalize the rewards it experiences. Using this approach, estimating the reward signal (as actor-critic methods must do) and estimating normalization (as REINFORCE algorithms typically do) is avoided, while at the same time harmonizing the model with respect to its test-time inference procedure. Empirically we find that directly optimizing the CIDEr metric with SCST and greedy decoding at test-time is highly effective. Our results on the MSCOCO evaluation server establish a new state-of-the-art on the task, improving the best result in terms of CIDEr from 104.9 to 114.7.", "title": "" } ]
scidocsrr
a534083a312c26decd7372dd878dbcf6
A 3D Dynamic Scene Analysis Framework for Development of Intelligent Transportation Systems
[ { "docid": "cc4c58f1bd6e5eb49044353b2ecfb317", "text": "Today, visual recognition systems are still rarely employed in robotics applications. Perhaps one of the main reasons for this is the lack of demanding benchmarks that mimic such scenarios. In this paper, we take advantage of our autonomous driving platform to develop novel challenging benchmarks for the tasks of stereo, optical flow, visual odometry/SLAM and 3D object detection. Our recording platform is equipped with four high resolution video cameras, a Velodyne laser scanner and a state-of-the-art localization system. Our benchmarks comprise 389 stereo and optical flow image pairs, stereo visual odometry sequences of 39.2 km length, and more than 200k 3D object annotations captured in cluttered scenarios (up to 15 cars and 30 pedestrians are visible per image). Results from state-of-the-art algorithms reveal that methods ranking high on established datasets such as Middlebury perform below average when being moved outside the laboratory to the real world. Our goal is to reduce this bias by providing challenging benchmarks with novel difficulties to the computer vision community. Our benchmarks are available online at: www.cvlibs.net/datasets/kitti.", "title": "" }, { "docid": "f6647e82741dfe023ee5159bd6ac5be9", "text": "3D scene understanding is important for robots to interact with the 3D world in a meaningful way. Most previous works on 3D scene understanding focus on recognizing geometrical or semantic properties of a scene independently. In this work, we introduce Data Associated Recurrent Neural Networks (DA-RNNs), a novel framework for joint 3D scene mapping and semantic labeling. DA-RNNs use a new recurrent neural network architecture for semantic labeling on RGB-D videos. The output of the network is integrated with mapping techniques such as KinectFusion in order to inject semantic information into the reconstructed 3D scene. Experiments conducted on real world and synthetic RGB-D videos demonstrate the superior performance of our method.", "title": "" } ]
[ { "docid": "691da5852aad20ace40be20bfeae3ea7", "text": "Experimental manipulations of affect induced by a brief newspaper report of a tragic event produced a pervasive increase in subjects' estimates of the frequency of many risks and other undesirable events. Contrary to expectation, the effect was independent of the similarity between the report arid the estimated risk. An account of a fatal stabbing did not increase the frequency estimate of a closely related risk, homicide, more than the estimates of unrelated risks such as natural hazards. An account of a happy event that created positive affect produced a comparable global decrease in judged frequency of risks.", "title": "" }, { "docid": "4eead577c1b3acee6c93a62aee8a6bb5", "text": "The present study examined teacher attitudes toward dyslexia and the effects of these attitudes on teacher expectations and the academic achievement of students with dyslexia compared to students without learning disabilities. The attitudes of 30 regular education teachers toward dyslexia were determined using both an implicit measure and an explicit, self-report measure. Achievement scores for 307 students were also obtained. Implicit teacher attitudes toward dyslexia related to teacher ratings of student achievement on a writing task and also to student achievement on standardized tests of spelling but not math for those students with dyslexia. Self-reported attitudes of the teachers toward dyslexia did not relate to any of the outcome measures. Neither the implicit nor the explicit measures of teacher attitudes related to teacher expectations. The results show implicit attitude measures to be a more valuable predictor of the achievement of students with dyslexia than explicit, self-report attitude measures.", "title": "" }, { "docid": "9f6429ac22b736bd988a4d6347d8475f", "text": "The purpose of this paper is to defend the systematic introduction of formal ontological principles in the current practice of knowledge engineering, to explore the various relationships between ontology and knowledge representation, and to present the recent trends in this promising research area. According to the \"modelling view\" of knowledge acquisition proposed by Clancey, the modeling activity must establish a correspondence between a knowledge base and two separate subsystems: the agent's behavior (i.e. the problem-solving expertize) and its own environment (the problem domain). Current knowledge modelling methodologies tend to focus on the former subsystem only, viewing domain knowledge as strongly dependent on the particular task at hand: in fact, AI researchers seem to have been much more interested in the nature of reasoning rather than in the nature of the real world. Recently, however, the potential value of task-independent knowlege bases (or \"ontologies\") suitable to large scale integration has been underlined in many ways. In this paper, we compare the dichotomy between reasoning and representation to the philosophical distinction between epistemology and ontology. We introduce the notion of the ontological level, intermediate between the epistemological and the conceptual level discussed by Brachman, as a way to characterize a knowledge representation formalism taking into account the intended meaning of its primitives. 
We then discuss some formal ontological distinctions which may play an important role for such a purpose.", "title": "" }, { "docid": "6a1a9c6cb2da06ee246af79fdeedbed9", "text": "The world has revolutionized and phased into a new era, an era which upholds the true essence of technology and digitalization. As the market has evolved at a staggering scale, it is a must to exploit and inherit the advantages and opportunities it provides. With the advent of web 2.0, considering the scalability and unbounded reach that it provides, it is detrimental for an organization not to adopt the new techniques in the competitive stakes that this emerging virtual world has set along with its advantages. The transformed and highly intelligent data mining approaches now allow organizations to collect, categorize, and analyze users' reviews and comments from micro-blogging sites regarding their services and products. This type of analysis makes those organizations capable of assessing what the consumers want, what they disapprove of, and what measures can be taken to sustain and improve the performance of products and services. This study focuses on critical analysis of the literature from 2012 to 2017 on sentiment analysis by using SVM (support vector machine). SVM is one of the widely used supervised machine learning techniques for text classification. This systematic review will help scholars and researchers analyze the latest work on sentiment analysis with SVM as well as provide them with a baseline for future trends and comparisons. Keywords—Sentiment analysis; polarity detection; machine learning; support vector machine (SVM); SLR; systematic literature review", "title": "" }, { "docid": "089e1d2d96ae4ba94ac558b6cdccd510", "text": "HTTP Streaming is a recent topic in multimedia communications with on-going standardization activities, especially with the MPEG DASH standard which covers on demand and live services. One of the main issues in live services deployment is the reduction of the overall latency. Low or very low latency streaming is still a challenge. In this paper, we push the use of DASH to its limits with regards to latency, down to fragments being only one frame, and evaluate the overhead introduced by that approach and the combination of: low latency video coding techniques, in particular Gradual Decoding Refresh; low latency HTTP streaming, in particular using chunked-transfer encoding; and associated ISOBMF packaging. We experiment with DASH streaming using these techniques in local networks to measure the actual end-to-end latency, as low as 240 milliseconds, for an encoding and packaging overhead in the order of 13% for HD sequences, and thus validate the feasibility of very low latency DASH live streaming in local networks.", "title": "" }, { "docid": "e7c97ff0a949f70b79fb7d6dea057126", "text": "Most conventional document categorization methods require a large number of documents with labeled categories for training. These methods are hard to apply in scenarios, such as scientific publications, where training data is expensive to obtain and categories could change over years and across domains. In this work, we propose UNEC, an unsupervised representation learning model that directly categorizes documents without the need for labeled training data. Specifically, we develop a novel cascade embedding approach. We first embed concepts, i.e., significant phrases mined from scientific publications, into continuous vectors, which capture concept semantics.
Based on the concept similarity graph built from the concept embedding, we further embed concepts into a hidden category space, where the category information of concepts becomes explicit. Finally, we categorize documents by jointly considering the category attribution of their concepts. Our experimental results show that UNEC significantly outperforms several strong baselines on a number of real scientific corpora, under both automatic and manual evaluation.", "title": "" }, { "docid": "aa55e655c7fa8c86d189d03c01d5db87", "text": "Best practice reference models like COBIT, ITIL, and CMMI offer methodical support for the various tasks of IT management and IT governance. Observations reveal that the ways of using these models as well as the motivations and further aspects of their application differ significantly. Rather, the models are used in individual ways due to individual interpretations. From an academic point of view we can state that how these models are actually used, as well as the motivations for using them, is not well understood. We develop a framework in order to structure different dimensions and modes of reference model application in practice. The development is based on expert interviews and a literature review. Hence we use design-oriented and qualitative research methods to develop an artifact, a 'framework of reference model application'. This framework development is the first step in a larger research program which combines different methods of research. The first goal is to deepen insight and improve understanding. In future research, the framework will be used to survey and analyze reference model application. The authors assume that \"typical\" application patterns exist beyond individual dimensions of application. The framework developed provides an opportunity for a systematic collection of data thereon. Furthermore, the so far limited knowledge of reference model application complicates their implementation as well as their use. Thus, detailed knowledge of different application patterns is required for effective support of enterprises using reference models. We assume that a deeper understanding of different patterns will support method development for implementation and use.", "title": "" }, { "docid": "8b971925c3a9a70b6c3eaffedf5a3985", "text": "We consider the NP-complete problem of finding an enclosing rectangle of minimum area that will contain a given set of rectangles. We present two different constraint-satisfaction formulations of this problem. The first searches a space of absolute placements of rectangles in the enclosing rectangle, while the other searches a space of relative placements between pairs of rectangles. Both approaches dramatically outperform previous approaches to optimal rectangle packing. For problems where the rectangle dimensions have low precision, such as small integers, absolute placement is generally more efficient, whereas for rectangles with high-precision dimensions, relative placement will be more effective. In two sets of experiments, we find both the smallest rectangles and squares that can contain the set of squares of size 1 × 1, 2 × 2, . . . ,N × N , for N up to 27. In addition, we solve an open problem dating to 1966, concerning packing the set of consecutive squares up to 24 × 24 in a square of size 70 × 70. Finally, we find the smallest enclosing rectangles that can contain a set of unoriented rectangles of size 1 × 2, 2 × 3, 3 × 4, . . .
,N × (N + 1), for N up to 25.", "title": "" }, { "docid": "7ca6ea8592c0bd3a31108221975f9470", "text": "BACKGROUND\nThe dermoscopic patterns of pigmented skin tumors are influenced by the body site.\n\n\nOBJECTIVE\nTo evaluate the clinical and dermoscopic features associated with pigmented vulvar lesions.\n\n\nMETHODS\nRetrospective analysis of clinical and dermoscopic images of vulvar lesions. The χ² test was used to test the association between clinical data and histopathological diagnosis.\n\n\nRESULTS\nA total of 42 (32.8%) melanocytic and 86 (67.2%) nonmelanocytic vulvar lesions were analyzed. Nevi significantly prevailed in younger women compared with melanomas and melanosis and exhibited most commonly a globular/cobblestone (51.3%) and a mixed (21.6%) pattern. Dermoscopically all melanomas showed a multicomponent pattern. Melanotic macules showed clinical overlapping features with melanoma, but their dermoscopic patterns differed significantly from those observed in melanomas.\n\n\nCONCLUSION\nThe diagnosis and management of pigmented vulvar lesions should be based on a good clinicodermoscopic correlation. Dermoscopy may be helpful in the differentiation of solitary melanotic macules from early melanoma.", "title": "" }, { "docid": "1a161ce6c138d5351378637c6d94d722", "text": "The domain-general learning mechanisms elicited in incidental learning situations are of potential interest in many research fields, including language acquisition, object knowledge formation and motor learning. They have been the focus of studies on implicit learning for nearly 40 years. Stemming from a different research tradition, studies on statistical learning carried out in the past 10 years after the seminal studies by Saffran and collaborators, appear to be closely related, and the similarity between the two approaches is strengthened further by their recent evolution. However, implicit learning and statistical learning research favor different interpretations, focusing on the formation of chunks and statistical computations, respectively. We examine these differing approaches and suggest that this divergence opens up a major theoretical challenge for future studies.", "title": "" }, { "docid": "756ea86702a4314fa211afb23c4c63ac", "text": "The McGurk effect paradigm was used to examine the developmental onset of inter-language differences between Japanese and English in auditory-visual speech perception. Participants were asked to identify syllables in audiovisual (with congruent or discrepant auditory and visual components), audio-only, and video-only presentations at various signal-to-noise levels. In Experiment 1 with two groups of adults, native speakers of Japanese and native speakers of English, the results on both percent visually influenced responses and reaction time supported previous reports of a weaker visual influence for Japanese participants. In Experiment 2, an additional three age groups (6, 8, and 11 years) in each language group were tested. The results showed that the degree of visual influence was low and equivalent for Japanese and English language 6-year-olds, and increased over age for English language participants, especially between 6 and 8 years, but remained the same for Japanese participants. 
This may be related to the fact that English language adults and older children processed visual speech information relatively faster than auditory information, whereas no such inter-modal differences were found in the Japanese participants' reaction times.", "title": "" }, { "docid": "5033cc81abffc2b5a10635e87b025991", "text": "We describe the computing tasks involved in autonomous driving and examine existing autonomous driving computing platform implementations. To enable autonomous driving, the computing stack needs to simultaneously provide high performance, low power consumption, and low thermal dissipation, at low cost. We discuss possible approaches to design computing platforms that will meet these needs.", "title": "" }, { "docid": "627aee14031293785224efdb7bac69f0", "text": "Data on characteristics of metal-oxide surge arresters indicates that for fast front surges, those with rise times less than 8μs, the peak of the voltage wave occurs before the peak of the current wave and the residual voltage across the arrester increases as the time to crest of the arrester discharge current decreases. Several models have been proposed to simulate this frequency-dependent characteristic. These models differ in the calculation and adjustment of their parameters. In the present paper, a simulation of metal oxide surge arrester (MOSA) dynamic behavior during fast electromagnetic transients on power systems is performed. Some models proposed in the literature are used. The simulations are performed with the Alternative Transients Program (ATP) version of the Electromagnetic Transient Program (EMTP) to evaluate some metal oxide surge arrester models and verify their accuracy.", "title": "" }, { "docid": "09538bc92c8bf9818bf84e44024f087c", "text": "An up-to-date review paper on automotive sensors is presented. Attention is focused on sensors used in production automotive systems. The primary sensor technologies in use today are reviewed and are classified according to their three major areas of automotive systems application – powertrain, chassis, and body. This subject is extensive. As described in this paper, for use in automotive systems, there are six types of rotational motion sensors, four types of pressure sensors, five types of position sensors, and three types of temperature sensors. Additionally, two types of mass air flow sensors, five types of exhaust gas oxygen sensors, one type of engine knock sensor, four types of linear acceleration sensors, four types of angular-rate sensors, four types of occupant comfort/convenience sensors, two types of near-distance obstacle detection sensors, four types of far-distance obstacle detection sensors, and ten types of emerging, state-of-the-art sensor technologies are identified.", "title": "" }, { "docid": "025076c60f680a6e7311f07b3027b13c", "text": "The changing nature of warfare has seen a paradigm shift from the conventional to asymmetric, contactless warfare such as information and cyber warfare. Excessive dependence on information and communication technologies, cloud infrastructures, big data analytics, data-mining and automation in decision making poses grave threats to business and economy in adversarial environments. Adversarial machine learning is a fast-growing area of research which studies the design of Machine Learning algorithms that are robust in adversarial environments. This paper presents a comprehensive survey of this emerging area and the various techniques of adversary modelling.
We explore the threat models for Machine Learning systems and describe the various techniques to attack and defend them. We present privacy issues in these models and describe a cyber-warfare test-bed to test the effectiveness of the various attack-defence strategies and conclude with some open problems in this area of research.", "title": "" }, { "docid": "83ec8e9791086bcb58427d43c6c777aa", "text": "In this work we review the most important existing developments and future trends in the class of Parallel Genetic Algorithms (PGAs). PGAs are mainly subdivided into coarse and fine grain PGAs, the coarse grain models being the most popular ones. An exceptional characteristic of PGAs is that they are not just the parallel version of a sequential algorithm intended to provide speed gains. Instead, they represent a new kind of meta-heuristics of higher efficiency and efficacy thanks to their structured population and parallel execution. The good robustness of these algorithms on problems of high complexity has led to an increasing number of applications in the fields of artificial intelligence, numeric and combinatorial optimization, business, engineering, etc. We make a formalization of these algorithms, and present a timely and topic survey of their most important traditional and recent technical issues. Besides that, useful summaries on their main applications plus Internet pointers to important web sites are included in order to help new researchers to access this growing area.", "title": "" }, { "docid": "5d1fbf1b9f0529652af8d28383ce9a34", "text": "Automatic License Plate Recognition (ALPR) is one of the most prominent tools in intelligent transportation system applications. In ALPR algorithm implementation, License Plate Detection (LPD) is a critical stage. Despite many state-of-the-art researches, some parameters such as low/high illumination, type of camera, or a different style of License Plate (LP) causes LPD step is still a challenging problem. In this paper, we propose a new style-free method based on the cross power spectrum. Our method has three steps; designing adaptive binarized filter, filtering using cross power spectrum and verification. Experimental results show that the recognition accuracy of the proposed approach is 98% among 2241 Iranian cars images including two styles of the LP. In addition, the process of the plate detection takes 44 milliseconds, which is suitable for real-time processing.", "title": "" }, { "docid": "4f186e992cd7d5eadb2c34c0f26f4416", "text": "a r t i c l e i n f o Mobile devices, namely phones and tablets, have long gone \" smart \". Their growing use is both a cause and an effect of their technological advancement. Among the others, their increasing ability to store and exchange sensitive information, has caused interest in exploiting their vulnerabilities, and the opposite need to protect users and their data through secure protocols for access and identification on mobile platforms. Face and iris recognition are especially attractive, since they are sufficiently reliable, and just require the webcam normally equipping the involved devices. On the contrary, the alternative use of fingerprints requires a dedicated sensor. Moreover, some kinds of biometrics lend themselves to uses that go beyond security. Ambient intelligence services bound to the recognition of a user, as well as social applications, such as automatic photo tagging on social networks, can especially exploit face recognition. 
This paper describes FIRME (Face and Iris Recognition for Mobile Engagement) as a biometric application based on a multimodal recognition of face and iris, which is designed to be embedded in mobile devices. Both design and implementation of FIRME rely on a modular architecture , whose workflow includes separate and replaceable packages. The starting one handles image acquisition. From this point, different branches perform detection, segmentation, feature extraction, and matching for face and iris separately. As for face, an antispoofing step is also performed after segmentation. Finally, results from the two branches are fused. In order to address also security-critical applications, FIRME can perform continuous reidentification and best sample selection. To further address the possible limited resources of mobile devices, all algorithms are optimized to be low-demanding and computation-light. The term \" mobile \" referred to capture equipment for different kinds of signals, e.g. images, has been long used in many cases where field activities required special portability and flexibility. As an example we can mention mobile biometric identification devices used by the U.S. army for different kinds of security tasks. Due to the critical task involving them, such devices have to offer remarkable quality, in terms of resolution and quality of the acquired data. Notwithstanding this formerly consolidated reference for the term mobile, nowadays, it is most often referred to modern phones, tablets and similar smart devices, for which new and engaging applications are designed. For this reason, from now on, the term mobile will refer only to …", "title": "" }, { "docid": "04647771810ac62b27ee8da12833a02d", "text": "Multi-task learning is a learning paradigm which seeks to improve the generalization performance of a learning task with the help of some other related tasks. In this paper, we propose a regularization formulation for learning the relationships between tasks in multi-task learning. This formulation can be viewed as a novel generalization of the regularization framework for single-task learning. Besides modeling positive task correlation, our method, called multi-task relationship learning (MTRL), can also describe negative task correlation and identify outlier tasks based on the same underlying principle. Under this regularization framework, the objective function of MTRL is convex. For efficiency, we use an alternating method to learn the optimal model parameters for each task as well as the relationships between tasks. We study MTRL in the symmetric multi-task learning setting and then generalize it to the asymmetric setting as well. We also study the relationships between MTRL and some existing multi-task learning methods. Experiments conducted on a toy problem as well as several benchmark data sets demonstrate the effectiveness of MTRL.", "title": "" }, { "docid": "7abeef7b56ce98d6f96727ac60444bdb", "text": "Up until recently, hypervisor-based virtualization platforms dominated the virtualization industry. However, container-based virtualization, an alternative to hypervisorbased virtualization, simplifies and fastens the deployment of virtual entities. Relevant research has already shown that container-based virtualization either performs equally or better than hypervisor-based virtualization in terms of performance in almost all cases. 
This research project investigates whether the power efficiency significantly differs on Xen, which is based on hypervisor virtualization, and Docker, which is based on container-based virtualization. The power efficiency is obtained by running synthetic applications and measuring the power usage on different hardware components. Rather than measuring the overall power of the system, or looking at empirical studies, hardware components such as CPU, memory and HDD will be measured internally by placing power sensors between the motherboard and circuits of each measured hardware component. This newly refined approach shows that both virtualization platforms behave roughly similar in IDLE state, when loading the memory and when performing sequential writes for the HDD. Contrarily, the results of CPU and sequential HDD reads show differences between the two virtualization platforms, where the performance of Xen is significantly weaker in terms of power efficiency.", "title": "" } ]
scidocsrr
16b1156b36b37c3a445abab6aa7394a9
A 4 DOF exoskeleton robot with a novel shoulder joint mechanism
[ { "docid": "6bd3614d830cbef03c9567bf096e417a", "text": "Rehabilitation robots are becoming an important tool in stroke rehabilitation. Compared to manual arm training, robot-supported training can be more intensive, of longer duration, repetitive and task-oriented. Therefore, these devices have the potential to improve the rehabilitation process in stroke patients. While in the past most groups have been working with end-effector-based robots, exoskeleton robots are becoming more and more important, mainly because they offer better guidance of the individual human joints, especially during movements with large ranges. Regarding the upper extremities, the shoulder is the most complex human joint and its actuation is, therefore, challenging. This paper deals with shoulder actuation principles for exoskeleton robots. First, a quantitative analysis of the human shoulder movement is presented. Based on that analysis, two shoulder actuation principles that provide motion of the center of the glenohumeral joint are presented and evaluated.", "title": "" } ]
[ { "docid": "7ec12c0bf639c76393954baae196a941", "text": "Honeynets have now become a standard part of security measures within the organization. Their purpose is to protect critical information systems and information; this is complemented by acquisition of information about the network threats, attackers and attacks. It is very important to consider issues affecting the deployment and usage of the honeypots and honeynets. This paper discusses the legal issues of honeynets considering their generations. The paper focuses on the legal issues of the core elements of honeynets, especially data control, data capture and data collection. It also draws attention to the issues pertaining to privacy and liability. The analysis of legal issues is based on EU law and it is supplemented by a review of the research literature related to legal aspects of honeypots and honeynets.", "title": "" }, { "docid": "f2edf7cc3671b38ae5f597e840eda3a2", "text": "This paper describes the process of creating a design pattern management interface for a collection of mobile design patterns. The need to communicate how patterns are interrelated and work together to create solutions motivated the creation of this interface. Currently, most design pattern collections are presented in alphabetical lists. The Oracle Mobile User Experience team approach is to communicate relationships visually by highlighting and connecting related patterns. Before the team designed the interface, we first analyzed common relationships between patterns and created a pattern language map. Next, we organized the patterns into conceptual design categories. Last, we designed a pattern management interface that enables users to browse patterns and visualize their relationships.", "title": "" }, { "docid": "445b3f542e785425cd284ad556ef825a", "text": "Despite the success of neural networks (NNs), there is still a concern among many over their “black box” nature. Why do they work? Yes, we have Universal Approximation Theorems, but these concern statistical consistency, a very weak property, not enough to explain the exceptionally strong performance reports of the method. Here we present a simple analytic argument that NNs are in fact essentially polynomial regression models, with the effective degree of the polynomial growing at each hidden layer. This view will have various implications for NNs, e.g. providing an explanation for why convergence problems arise in NNs, and it gives rough guidance on avoiding overfitting. In addition, we use this phenomenon to predict and confirm a multicollinearity property of NNs not previously reported in the literature. Most importantly, given this loose correspondence, one may choose to routinely use polynomial models instead of NNs, thus avoiding some major problems of the latter, such as having to set many tuning parameters and dealing with convergence issues. We present a number of empirical results; in each case, the accuracy of the polynomial approach matches or exceeds that of NN approaches. A many-featured, open-source software package, polyreg, is available. 1 The Mystery of NNs Neural networks (NNs), especially in the currently popular form of many-layered deep learning networks (DNNs), have become many analysts’ go-to method for predictive analytics. 
Indeed, in the popular press, the term artificial intelligence has become virtually synonymous with NNs.1 Yet there is a feeling among many in the community that NNs are “black boxes”; just what is going on inside? Various explanations have been offered for the success of NNs, a prime example being [Shwartz-Ziv and Tishby(2017)]. However, the present paper will present significant new insights. 2 Contributions of This Paper The contribution of the present work will be as follows:2 (a) We will show that, at each layer of an NY, there is a rough correspondence to some fitted ordinary parametric polynomial regression (PR) model; in essence, NNs are a form of PR. We refer to this loose correspondence here as NNAEPR, Neural Nets Are Essentially Polynomial Models. (b) A very important aspect of NNAEPR is that the degree of the approximating polynomial increases with each hidden layer. In other words, our findings should not be interpreted as merely saying that the end result of an NN can be approximated by some polynomial. (c) We exploit NNAEPR to learn about general properties of NNs via our knowledge of the properties of PR. This will turn out to provide new insights into aspects such as the numbers of hidden layers and numbers of units per layer, as well as how convergence problems arise. For example, we use NNAEPR to predict and confirm a multicollinearity property of NNs not previous reported in the literature. (d) Property (a) suggests that in many applications, one might simply fit a polynomial model in the first place, bypassing NNs. This would have the advantage of avoiding the problems of choosing tuning parameters (the polynomial approach has just one, the degree), nonconvergence and so on. 1There are many different variants of NNs, but for the purposes of this paper, we can consider them as a group. 2 Author listing is alphabetical by surname. XC wrote the entire core code for the polyreg package; NM conceived of the main ideas underlying the work, developed the informal mathematical material and wrote support code; BK assembled the brain and kidney cancer data, wrote some of the support code, and provided domain expertise guidance for genetics applications; PM wrote extensive support code, including extending his kerasformula package, and provided specialized expertise on NNs. All authors conducted data experiments.", "title": "" }, { "docid": "0a31ab53b887cf231d7ca1a286763e5f", "text": "Humans acquire their most basic physical concepts early in development, but continue to enrich and expand their intuitive physics throughout life as they are exposed to more and varied dynamical environments. We introduce a hierarchical Bayesian framework to explain how people can learn physical theories across multiple timescales and levels of abstraction. In contrast to previous Bayesian models of theory acquisition (Tenenbaum, Kemp, Griffiths, & Goodman, 2011), we work with more expressive probabilistic program representations suitable for learning the forces and properties that govern how objects interact in dynamic scenes unfolding over time. We compare our model and human learners on a challenging task of inferring novel physical laws in microworlds given short movies. People are generally able to perform this task and behave in line with model predictions. 
Yet they also make systematic errors suggestive of how a top-down Bayesian approach to learning might be complemented by a more bottomup feature-based approximate inference scheme, to best explain theory learning at an algorithmic level.", "title": "" }, { "docid": "50e081b178a1a308c61aae4a29789816", "text": "The ability to engineer enzymes and other proteins to any desired stability would have wide-ranging applications. Here, we demonstrate that computational design of a library with chemically diverse stabilizing mutations allows the engineering of drastically stabilized and fully functional variants of the mesostable enzyme limonene epoxide hydrolase. First, point mutations were selected if they significantly improved the predicted free energy of protein folding. Disulfide bonds were designed using sampling of backbone conformational space, which tripled the number of experimentally stabilizing disulfide bridges. Next, orthogonal in silico screening steps were used to remove chemically unreasonable mutations and mutations that are predicted to increase protein flexibility. The resulting library of 64 variants was experimentally screened, which revealed 21 (pairs of) stabilizing mutations located both in relatively rigid and in flexible areas of the enzyme. Finally, combining 10-12 of these confirmed mutations resulted in multi-site mutants with an increase in apparent melting temperature from 50 to 85°C, enhanced catalytic activity, preserved regioselectivity and a >250-fold longer half-life. The developed Framework for Rapid Enzyme Stabilization by Computational libraries (FRESCO) requires far less screening than conventional directed evolution.", "title": "" }, { "docid": "3e570e415690daf143ea30a8554b0ac8", "text": "Innovative technology on intelligent processes for smart home applications that utilize Internet of Things (IoT) is mainly limited and dispersed. The available trends and gaps were investigated in this study to provide valued visions for technical environments and researchers. Thus, a survey was conducted to create a coherent taxonomy on the research landscape. An extensive search was conducted for articles on (a) smart homes, (b) IoT and (c) applications. Three databases, namely, IEEE Explore, ScienceDirect and Web of Science, were used in the article search. These databases comprised comprehensive literature that concentrate on IoT-based smart home applications. Subsequently, filtering process was achieved on the basis of intelligent processes. The final classification scheme outcome of the dataset contained 40 articles that were classified into four classes. The first class includes the knowledge engineering process that examines data representation to identify the means of accomplishing a task for IoT applications and their utilisation in smart homes. The second class includes papers on the detection process that uses artificial intelligence (AI) techniques to capture the possible changes in IoT-based smart home applications. The third class comprises the analytical process that refers to the use of AI techniques to understand the underlying problems in smart homes by inferring new knowledge and suggesting appropriate solutions for the problem. The fourth class comprises the control process that describes the process of measuring and instructing the performance of IoT-based smart home applications against the specifications with the involvement of intelligent techniques. 
The basic features of this evolving approach were then identified in the aspects of motivation of intelligent process utilisation for IoT-based smart home applications and open-issue restriction utilisation. The recommendations for the approval and utilisation of intelligent process for IoT-based smart home applications were also determined from the literature.", "title": "" }, { "docid": "c93d1536c651ab80446a683482444890", "text": "We present a frequency-modulated continuous-wave secondary radar concept to estimate the offset in time and in frequency of two wireless units and to measure their distance relative to each other. By evaluating the Doppler frequency shift of the radar signals, the relative velocity of the mobile units is measured as well. The distance can be measured with a standard deviation as low as 1 cm. However, especially in indoor environments, the accuracy of the system can be degraded by multipath transmissions. Therefore, we show an extension of the algorithm to cope with multipath propagation. We also present the hardware setup of the measurement system. The system is tested in various environments. The results prove the excellent performance and outstanding reliability of the algorithms presented.", "title": "" }, { "docid": "2f7b81ddd5790eacb03ec2a226614280", "text": "Literature on supply chain management (SCM) covers several disciplines and is growing rapidly. This paper firstly aims at extracting the essence of SCM and advanced planning in the form of two conceptual frameworks: The house of SCM and the supply chain planning matrix. As an illustration, contributions to this feature issue will then be assigned to the building blocks of the house of SCM or to the modules covering the supply chain planning matrix. Secondly, focusing on software for advanced planning, we outline its main shortcomings and present latest research results for their resolution. 2004 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "71428f1d968a25eb7df33f55557eb424", "text": "BACKGROUND\nThe 'Choose and Book' system provides an online booking service which primary care professionals can book in real time or soon after a patient's consultation. It aims to offer patients choice and improve outpatient clinic attendance rates.\n\n\nOBJECTIVE\nAn audit comparing attendance rates of new patients booked into the Audiological Medicine Clinic using the 'Choose and Book' system with that of those whose bookings were made through the traditional booking system.\n\n\nMETHODS\nData accrued between 1 April 2008 and 31 October 2008 were retrospectively analysed for new patient attendance at the department, and the age and sex of the patients, method of appointment booking used and attendance record were collected. Patients were grouped according to booking system used - 'Choose and Book' or the traditional system. The mean ages of the groups were compared by a t test. The standard error of the difference between proportions was used to compare the data from the two groups. A P value of < or = 0.05 was considered to be significant.\n\n\nRESULTS\n'Choose and Book' patients had a significantly better rate of attendance than traditional appointment patients, P < 0.01 (95% CI 4.3, 20.5%). There was no significant difference between the two groups in terms of sex, P > 0.1 (95% CI-3.0, 16.2%). 
The 'Choose and Book' patients, however, were significantly older than the traditional appointment patients, P < 0.001 (95% CI 4.35, 12.95%).\n\n\nCONCLUSION\nThis audit suggests that when primary care agents book outpatient clinic appointments online it improves outpatient attendance.", "title": "" }, { "docid": "3a69d6ef79482d26aee487a964ff797f", "text": "The FPGA compilation process (synthesis, map, placement, routing) is a time-consuming process that limits designer productivity. Compilation time can be reduced by using pre-compiled circuit blocks (hard macros). Hard macros consist of previously synthesized, mapped, placed and routed circuitry that can be relatively placed with short tool runtimes and that make it possible to reuse previous computational effort. Two experiments were performed to demonstrate feasibility that hard macros can reduce compilation time. These experiments demonstrated that an augmented Xilinx flow designed specifically to support hard macros can reduce overall compilation time by 3x. Though the process of incorporating hard macros in designs is currently manual and error-prone, it can be automated to create compilation flows with much lower compilation time.", "title": "" }, { "docid": "5abe5696969eca4d19a55e3492af03a8", "text": "In the era of big data, analyzing and extracting knowledge from large-scale data sets is a very interesting and challenging task. The application of standard data mining tools in such data sets is not straightforward. Hence, a new class of scalable mining method that embraces the huge storage and processing capacity of cloud platforms is required. In this work, we propose a novel distributed partitioning methodology for prototype reduction techniques in nearest neighbor classification. These methods aim at representing original training data sets as a reduced number of instances. Their main purposes are to speed up the classification process and reduce the storage requirements and sensitivity to noise of the nearest neighbor rule. However, the standard prototype reduction methods cannot cope with very large data sets. To overcome this limitation, we develop a MapReduce-based framework to distribute the functioning of these algorithms through a cluster of computing elements, proposing several algorithmic strategies to integrate multiple partial solutions (reduced sets of prototypes) into a single one. The proposed model enables prototype reduction algorithms to be applied over big data classification problems without significant accuracy loss. We test the speeding up capabilities of our model with data sets up to 5.7 millions of
Simulations on CloudSim are used to check its performance and its influence on makespan, communication overhead and throughput. A true log of a cluster also is used to test our method. Results indicate that our method not only gives good Cloud balancing but also ensures reducing makespan and communication overhead and enhancing throughput of the whole the system.", "title": "" }, { "docid": "0ff9e3b699e5cb5c098cdcc7d7ed78b6", "text": "Malwares are becoming persistent by creating fulledged variants of the same or different family. Malwares belonging to same family share same characteristics in their functionality of spreading infections into the victim computer. These similar characteristics among malware families can be taken as a measure for creating a solution that can help in the detection of the malware belonging to particular family. In our approach we have taken the advantage of detecting these malware families by creating the database of these characteristics in the form of n-grams of API sequences. We use various similarity score methods and also extract multiple API sequences to analyze malware effectively.", "title": "" }, { "docid": "73bbb7122b588761f1bf7b711f21a701", "text": "This research attempts to find a new closed-form solution of toroid and overlapping windings for axial flux permanent magnet machines. The proposed solution includes analytical derivations for winding lengths, resistances, and inductances as functions of fundamental airgap flux density and inner-to-outer diameter ratio. Furthermore, phase back-EMFs, phase terminal voltages, and efficiencies are calculated and compared for both winding types. Finite element analysis is used to validate the accuracy of the proposed analytical calculations. The proposed solution should assist machine designers to ascertain benefits and limitations of toroid and overlapping winding types as well as to get faster results.", "title": "" }, { "docid": "6403b543937832f641d98b9212d2428e", "text": "Information edge and 3 millennium predisposed so many of revolutions. Business organization with emphasize on information systems is try to gathering desirable information for decision making. Because of comprehensive change in business background and emerge of computers and internet, the business structure and needed information had change, the competitiveness as a major factor for life of organizations in information edge is preyed of information technology challenges. In this article we have reviewed in the literature of information systems and discussed the concepts of information system as a strategic tool.", "title": "" }, { "docid": "a651ae33adce719033dad26b641e6086", "text": "Knowledge base(KB) plays an important role in artificial intelligence. Much effort has been taken to both manually and automatically construct web-scale knowledge bases. Comparing with manually constructed KBs, automatically constructed KB is broader but with more noises. In this paper, we study the problem of improving the quality for automatically constructed web-scale knowledge bases, in particular, lexical taxonomies of isA relationships. We find that these taxonomies usually contain cycles, which are often introduced by incorrect isA relations. Inspired by this observation, we introduce two kinds of models to detect incorrect isA relations from cycles. The first one eliminates cycles by extracting directed acyclic graphs, and the other one eliminates cycles by grouping nodes into different levels. 
We implement our models on Probase, a state-of-the-art, automatically constructed, web-scale taxonomy. After processing tens of millions of relations, our models eliminate 74 thousand wrong relations with 91% accuracy.", "title": "" }, { "docid": "6be37d8e76343b0955c30afe1ebf643d", "text": "Session: Feb. 15, 2016, 2‐3:30 pm Chair: Xiaobai Liu, San Diego State University (SDSU) Oral Presentations 920: On the Depth of Deep Neural Networks: A Theoretical View. Shizhao Sun, Wei Chen, Liwei Wang, Xiaoguang Liu, Tie‐Yan Liu 1229: How Important Is Weight Symmetry in Backpropagation? Qianli Liao, Joel Z. Leibo, Tomaso Poggio 1769: Deep Learning with S‐shaped Rectified Linear Activation Units. Xiaojie Jin, Chunyan Xu, Jiashi Feng, Yunchao Wei, Junjun Xiong, Shuicheng Yan 1142: Learning Step Size Controllers for Robust Neural Network Training. Christian Daniel, Jonathan Taylor, Sebastian Nowozin", "title": "" }, { "docid": "8a695d5913c3b87fb21864c0bdd3d522", "text": "Environmental topics have gained much consideration in corporate green operations. Globalization, stakeholder pressures, and stricter environmental regulations have made organizations develop environmental practices. Thus, green supply chain management (GSCM) is now a proactive approach for organizations to enhance their environmental performance and achieve competitive advantages. This study pioneers using the decision-making trial and evaluation laboratory (DEMATEL) method with intuitionistic fuzzy sets to handle the important and causal relationships between GSCM practices and performances. DEMATEL evaluates GSCM practices to find the main practices to improve both environmental and economic performances. This study uses intuitionistic fuzzy set theory to handle the linguistic imprecision and the ambiguity of human being’s judgment. A case study from the automotive industry is presented to evaluate the efficiency of the proposed method. The results reveal ‘‘internal management support’’, ‘‘green purchasing’’ and ‘‘ISO 14001 certification’’ are the most significant GSCM practices. The practical results of this study offer useful insights for managers to become more environmentally responsible, while improving their economic and environmental performance goals. Further, a sensitivity analysis of results, managerial implications, conclusions, limitations and future research opportunities are provided. 2015 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "73aa720bebc5f2fa1930930fb4185490", "text": "A CMOS OTA-C notch filter for 50Hz interference was presented in this paper. The OTAs were working in weak inversion region in order to achieve ultra low transconductance and power consumptions. The circuits were designed using SMIC mixed-signal 0.18nm 1P6M process. The post-annotated simulation indicated that an attenuation of 47.2dB for power line interference and a 120pW consumption. The design achieved a dynamic range of 75.8dB and a THD of 0.1%, whilst the input signal was a 1 Hz 20mVpp sine wave.", "title": "" }, { "docid": "3b4622a4ad745fc0ffb3b6268eb969fa", "text": "Eruptive syringomas: unresponsiveness to oral isotretinoin A 22-year-old man of Egyptian origin was referred to our department due to exacerbation of pruritic pre-existing papular dermatoses. The skin lesions had been present since childhood. The family history was negative for a similar condition. The patient complained of exacerbation of the pruritus during physical activity under a hot climate and had moderate to severe pruritus during his work. 
Physical examination revealed multiple reddish-brownish smooth-surfaced, symmetrically distributed papules 2–4 mm in diameter on the patient’s trunk, neck, axillae, and limbs (Fig. 1). The rest of the physical examination was unremarkable. The Darier sign was negative. A skin biopsy was obtained from a representative lesion on the trunk. Histopathologic examination revealed a small, wellcircumscribed neoplasm confined to the upper dermis, composed of small solid and ductal structures relatively evenly distributed in a sclerotic collagenous stroma. The solid elements were of various shapes (round, oval, curvilinear, “comma-like,” or “tadpole-like”) (Fig. 2). These microscopic features and the clinical presentation were consistent with the diagnosis of eruptive syringomas. Our patient was treated with a short course of oral antihistamines without any effect and subsequently with low-dose isotretinoin (10 mg/day) for 5 months. No improvement of the skin eruption was observed while cessation of the pruritus was accomplished. Syringoma is a common adnexal tumor with differentiation towards eccrine acrosyringium composed of small solid and ductal elements embedded in a sclerotic stroma and restricted as a rule to the upper to mid dermis, usually presenting clinically as multiple lesions on the lower eyelids and cheeks of adolescent females. A much less common variant is the eruptive or disseminated syringomas, which involve primarily young women. Eruptive syringomas are characterized by rapid development during a short period of time of hundreds of small (1–5 mm), ill-defined, smooth surfaced, skin-colored, pink, yellowish, or brownish papules typically involving the face, trunk, genitalia, pubic area, and extremities but can occur principally in any site where eccrine glands are found. The pathogenesis of eruptive syringoma remains unclear. Some authors have recently challenged the traditional notion that eruptive syringomas are neoplastic lesions. Chandler and Bosenberg presented evidence that eruptive syringomas result from autoimmune destruction of the acrosyringium and proposed the term autoimmune acrosyringitis with ductal cysts. Garrido-Ruiz et al. support the theory that eruptive syringomas may represent a hyperplastic response of the eccrine duct to an inflammatory reaction. In a recent systematic review by Williams and Shinkai the strongest association of syringomas was with Down’s syndrome (183 reported cases, 22.2%). Syringomas are also associated with diabetes mellitus (17 reported cases, 2.1%), Ehlers–Danlos", "title": "" } ]
scidocsrr
a4183290852eeff610385e7ca06ba566
Action Permissibility in Deep Reinforcement Learning and Application to Autonomous Driving
[ { "docid": "2502fc02f09be72d138275a7ac41d8bc", "text": "This manual describes the competition software for the Simulated Car Racing Championship, an international competition held at major conferences in the field of Evolutionary Computation and in the field of Computational Intelligence and Games. It provides an overview of the architecture, the instructions to install the software and to run the simple drivers provided in the package, the description of the sensors and the actuators.", "title": "" }, { "docid": "be35c342291d4805d2a5333e31ee26d6", "text": "• We study efficient exploration in reinforcement learning. • Most provably-efficient learning algorithms introduce optimism about poorly understood states and actions. • Motivated by potential advantages relative to optimistic algorithms, we study an alternative approach: posterior sampling for reinforcement learning (PSRL). • This is the extension of the Thompson sampling algorithm for multi-armed bandit problems to reinforcement learning. • We establish the first regret bounds for this algorithm. Conceptually simple, separates algorithm from analysis: • PSRL selects policies according to the probability they are optimal without need for explicit construction of confidence sets. • UCRL2 bounds error in each s, a separately, which allows for worst-case mis-estimation to occur simultaneously in every s, a. • We believe this will make PSRL more statistically efficient.", "title": "" } ]
[ { "docid": "4d1ea9da68cc3498b413371f12c90433", "text": "Transfer Learning (TL) plays a crucial role when a given dataset has insufficient labeled examples to train an accurate model. In such scenarios, the knowledge accumulated within a model pre-trained on a source dataset can be transferred to a target dataset, resulting in the improvement of the target model. Though TL is found to be successful in the realm of imagebased applications, its impact and practical use in Natural Language Processing (NLP) applications is still a subject of research. Due to their hierarchical architecture, Deep Neural Networks (DNN) provide flexibility and customization in adjusting their parameters and depth of layers, thereby forming an apt area for exploiting the use of TL. In this paper, we report the results and conclusions obtained from extensive empirical experiments using a Convolutional Neural Network (CNN) and try to uncover thumb rules to ensure a successful positive transfer. In addition, we also highlight the flawed means that could lead to a negative transfer. We explore the transferability of various layers and describe the effect of varying hyper-parameters on the transfer performance. Also, we present a comparison of accuracy value and model size against state-of-the-art methods. Finally, we derive inferences from the empirical results and provide best practices to achieve a successful positive transfer.", "title": "" }, { "docid": "bc2568e7b4bfaa3aebf424ecaad48c10", "text": "With the increasing connection density of ICs, the bump pitch is growing smaller and smaller. The limitations of the conventional solder bumps are becoming more and more obvious due to the spherical geometry of the solder bumps. A novel interconnect structure - copper pillar bump with the structure of a non-reflowable copper pillar and a reflowable solder cap is one of the solutions to the problem. The scope of this paper covers flip chip assembly of the copper pillar bump soldered to lead free flip chip solder on the SAC substrate with bump pitch of 150mum. Reliability study result including high temperature storage (HTS) and temperature cycling (TC) would be detailed discussed in this paper.", "title": "" }, { "docid": "1b22c3d5bb44340fcb66a1b44b391d71", "text": "The contrast in real world scenes is often beyond what consumer cameras can capture. For these situations, High Dynamic Range (HDR) images can be generated by taking multiple exposures of the same scene. When fusing information from different images, however, the slightest change in the scene can generate artifacts which dramatically limit the potential of this solution. We present a technique capable of dealing with a large amount of movement in the scene: we find, in all the available exposures, patches consistent with a reference image previously selected from the stack. We generate the HDR image by averaging the radiance estimates of all such regions and we compensate for camera calibration errors by removing potential seams. We show that our method works even in cases when many moving objects cover large regions of the scene.", "title": "" }, { "docid": "c06c13af6d89c66e2fa065534bfc2975", "text": "Complex foldings of the vaginal wall are unique to some cetaceans and artiodactyls and are of unknown function(s). The patterns of vaginal length and cumulative vaginal fold length were assessed in relation to body length and to each other in a phylogenetic context to derive insights into functionality. 
The reproductive tracts of 59 female cetaceans (20 species, 6 families) were dissected. Phylogenetically-controlled reduced major axis regressions were used to establish a scaling trend for the female genitalia of cetaceans. An unparalleled level of vaginal diversity within a mammalian order was found. Vaginal folds varied in number and size across species, and vaginal fold length was positively allometric with body length. Vaginal length was not a significant predictor of vaginal fold length. Functional hypotheses regarding the role of vaginal folds and the potential selection pressures that could lead to evolution of these structures are discussed. Vaginal folds may present physical barriers, which obscure the pathway of seawater and/or sperm travelling through the vagina. This study contributes broad insights to the evolution of reproductive morphology and aquatic adaptations and lays the foundation for future functional morphology analyses.", "title": "" }, { "docid": "089e1d2d96ae4ba94ac558b6cdccd510", "text": "HTTP Streaming is a recent topic in multimedia communications with on-going standardization activities, especially with the MPEG DASH standard which covers on demand and live services. One of the main issues in live services deployment is the reduction of the overall latency. Low or very low latency streaming is still a challenge. In this paper, we push the use of DASH to its limits with regards to latency, down to fragments being only one frame, and evaluate the overhead introduced by that approach and the combination of: low latency video coding techniques, in particular Gradual Decoding Refresh; low latency HTTP streaming, in particular using chunked-transfer encoding; and associated ISOBMF packaging. We experiment DASH streaming using these techniques in local networks to measure the actual end-to-end latency, as low as 240 milliseconds, for an encoding and packaging overhead in the order of 13% for HD sequences and thus validate the feasibility of very low latency DASH live streaming in local networks.", "title": "" }, { "docid": "c91ce9eb908d5a0fccc980f306ec0931", "text": "Text Mining has become an important research area. Text Mining is the discovery by computer of new, previously unknown information, by automatically extracting information from different written resources. In this paper, a Survey of Text Mining techniques and applications have been s presented.", "title": "" }, { "docid": "9cbf4d0843196b1dcada6f60c0d0c2e8", "text": "In this paper we describe a novel method to integrate interactive visual analysis and machine learning to support the insight generation of the user. The suggested approach combines the vast search and processing power of the computer with the superior reasoning and pattern recognition capabilities of the human user. An evolutionary search algorithm has been adapted to assist in the fuzzy logic formalization of hypotheses that aim at explaining features inside multivariate, volumetric data. Up to now, users solely rely on their knowledge and expertise when looking for explanatory theories. However, it often remains unclear whether the selected attribute ranges represent the real explanation for the feature of interest. Other selections hidden in the large number of data variables could potentially lead to similar features. Moreover, as simulation complexity grows, users are confronted with huge multidimensional data sets making it almost impossible to find meaningful hypotheses at all. 
We propose an interactive cycle of knowledge-based analysis and automatic hypothesis generation. Starting from initial hypotheses, created with linking and brushing, the user steers a heuristic search algorithm to look for alternative or related hypotheses. The results are analyzed in information visualization views that are linked to the volume rendering. Individual properties as well as global aggregates are visually presented to provide insight into the most relevant aspects of the generated hypotheses. This novel approach becomes computationally feasible due to a GPU implementation of the time-critical parts in the algorithm. A thorough evaluation of search times and noise sensitivity as well as a case study on data from the automotive domain substantiate the usefulness of the suggested approach.", "title": "" }, { "docid": "339c367d71b4b51ad24aa59799b13416", "text": "One of the biggest challenges of the current big data landscape is our inability to process vast amounts of information in a reasonable time. In this work, we explore and compare two distributed computing frameworks implemented on commodity cluster architectures: MPI/OpenMP on Beowulf that is high-performance oriented and exploits multi-machine/multicore infrastructures, and Apache Spark on Hadoop which targets iterative algorithms through in-memory computing. We use the Google Cloud Platform service to create virtual machine clusters, run the frameworks, and evaluate two supervised machine learning algorithms: KNN and Pegasos SVM. Results obtained from experiments with a particle physics data set show MPI/OpenMP outperforms Spark by more than one order of magnitude in terms of processing speed and provides more consistent performance. However, Spark shows better data management infrastructure and the possibility of dealing with other aspects such as node failure and data replication.", "title": "" }, { "docid": "eb639439559f3e4e3540e3e98de7a741", "text": "This paper presents a deformable model for automatically segmenting brain structures from volumetric magnetic resonance (MR) images and obtaining point correspondences, using geometric and statistical information in a hierarchical scheme. Geometric information is embedded into the model via a set of affine-invariant attribute vectors, each of which characterizes the geometric structure around a point of the model from a local to a global scale. The attribute vectors, in conjunction with the deformation mechanism of the model, warrant that the model not only deforms to nearby edges, as is customary in most deformable surface models, but also that it determines point correspondences based on geometric similarity at different scales. The proposed model is adaptive in that it initially focuses on the most reliable structures of interest, and gradually shifts focus to other structures as those become closer to their respective targets and, therefore, more reliable. The proposed techniques have been used to segment boundaries of the ventricles, the caudate nucleus, and the lenticular nucleus from volumetric MR images.", "title": "" }, { "docid": "5b0d5ebe7666334b09a1136c1cb2d8e4", "text": "In this paper, lesion areas affected by anthracnose are segmented using segmentation techniques, graded based on percentage of affected area and neural network classifier is used to classify normal and anthracnose affected on fruits. We have considered three types of fruit namely mango, grape and pomegranate for our work. The developed processing scheme consists of two phases. 
In the first phase, segmentation techniques namely thresholding, region growing, K-means clustering and watershed are employed for separating anthracnose affected lesion areas from normal area. Then these affected areas are graded by calculating the percentage of affected area. In the second phase texture features are extracted using Runlength Matrix. These features are then used for classification purpose using ANN classifier. We have conducted experimentation on a dataset of 600 fruits’ image samples. The classification accuracies for normal and affected anthracnose fruit types are 84.65% and 76.6% respectively. The work finds application in developing a machine vision system in horticulture field.", "title": "" }, { "docid": "fdc18ccdccefc1fd9c3f79daf549f015", "text": "An overview of the current design practices in the field of Renewable Energy (RE) is presented; also paper delineates the background to the development of unique and novel techniques for power generation using the kinetic energy of tidal streams and other marine currents. Also this study focuses only on vertical axis tidal turbine. Tidal stream devices have been developed as an alternative method of extracting the energy from the tides. This form of tidal power technology poses less threat to the environment and does not face the same limiting factors associated with tidal barrage schemes, therefore making it a more feasible method of electricity production. Large companies are taking interest in this new source of power. There is a rush to research and work with this new energy source. Marine scientists are looking into how much these will affect the environment, while engineers are developing turbines that are harmless for the environment. In addition, the progression of technological advancements tracing several decades of R & D efforts on vertical axis turbines is highlighted.", "title": "" }, { "docid": "66cd10e39a91fb421d1145b2ebe7246c", "text": "Previous research suggests that heterosexual women's sexual arousal patterns are nonspecific; heterosexual women demonstrate genital arousal to both preferred and nonpreferred sexual stimuli. These patterns may, however, be related to the intense and impersonal nature of the audiovisual stimuli used. The current study investigated the gender specificity of heterosexual women's sexual arousal in response to less intense sexual stimuli, and also examined the role of relationship context on both women's and men's genital and subjective sexual responses. Assessments were made of 43 heterosexual women's and 9 heterosexual men's genital and subjective sexual arousal to audio narratives describing sexual or neutral encounters with female and male strangers, friends, or long-term relationship partners. Consistent with research employing audiovisual sexual stimuli, men demonstrated a category-specific pattern of genital and subjective arousal with respect to gender, while women showed a nonspecific pattern of genital arousal, yet reported a category-specific pattern of subjective arousal. Heterosexual women's nonspecific genital response to gender cues is not a function of stimulus intensity or relationship context. Relationship context did significantly affect women's genital sexual arousal--arousal to both female and male friends was significantly lower than to the stranger and long-term relationship contexts--but not men's. 
These results suggest that relationship context may be a more important factor in heterosexual women's physiological sexual response than gender cues.", "title": "" }, { "docid": "f0a82f428ac508351ffa7b97bb909b60", "text": "Automated Teller Machines (ATMs) can be considered among one of the most important service facilities in the banking industry. The investment in ATMs and the impact on the banking industry is growing steadily in every part of the world. The banks take into consideration many factors like safety, convenience, visibility, and cost in order to determine the optimum locations of ATMs. Today, ATMs are not only available in bank branches but also at retail locations. Another important factor is the cash management in ATMs. A cash demand model for every ATM is needed in order to have an efficient cash management system. This forecasting model is based on historical cash demand data which is highly related to the ATMs location. So, the location and the cash management problem should be considered together. This paper provides a general review on studies, efforts and development in ATMs location and cash management problem. Keywords—ATM location problem, cash management problem, ATM cash replenishment problem, literature review in ATMs.", "title": "" }, { "docid": "dda8427a6630411fc11e6d95dbff08b9", "text": "Text representations using neural word embeddings have proven effective in many NLP applications. Recent researches adapt the traditional word embedding models to learn vectors of multiword expressions (concepts/entities). However, these methods are limited to textual knowledge bases (e.g., Wikipedia). In this paper, we propose a novel and simple technique for integrating the knowledge about concepts from two large scale knowledge bases of different structure (Wikipedia and Probase) in order to learn concept representations. We adapt the efficient skip-gram model to seamlessly learn from the knowledge in Wikipedia text and Probase concept graph. We evaluate our concept embedding models on two tasks: (1) analogical reasoning, where we achieve a state-of-the-art performance of 91% on semantic analogies, (2) concept categorization, where we achieve a state-of-the-art performance on two benchmark datasets achieving categorization accuracy of 100% on one and 98% on the other. Additionally, we present a case study to evaluate our model on unsupervised argument type identification for neural semantic parsing. We demonstrate the competitive accuracy of our unsupervised method and its ability to better generalize to out of vocabulary entity mentions compared to the tedious and error prone methods which depend on gazetteers and regular expressions.", "title": "" }, { "docid": "3ef8c2f3b2c18a91c23dad4f4cdd0e43", "text": "Skeleton-based human action recognition has attracted a lot of research attention during the past few years. Recent works attempted to utilize recurrent neural networks to model the temporal dependencies between the 3D positional configurations of human body joints for better analysis of human activities in the skeletal data. The proposed work extends this idea to spatial domain as well as temporal domain to better analyze the hidden sources of action-related information within the human skeleton sequences in both of these domains simultaneously. Based on the pictorial structure of Kinect's skeletal data, an effective tree-structure based traversal framework is also proposed. 
In order to deal with the noise in the skeletal data, a new gating mechanism within LSTM module is introduced, with which the network can learn the reliability of the sequential data and accordingly adjust the effect of the input data on the updating procedure of the long-term context representation stored in the unit's memory cell. Moreover, we introduce a novel multi-modal feature fusion strategy within the LSTM unit in this paper. The comprehensive experimental results on seven challenging benchmark datasets for human action recognition demonstrate the effectiveness of the proposed method.", "title": "" }, { "docid": "ac65c09468cd88765009abe49d9114cf", "text": "It is known that head gesture and brain activity can reflect some human behaviors related to a risk of accident when using machine-tools. The research presented in this paper aims at reducing the risk of injury and thus increase worker safety. Instead of using camera, this paper presents a Smart Safety Helmet (SSH) in order to track the head gestures and the brain activity of the worker to recognize anomalous behavior. Information extracted from SSH is used for computing risk of an accident (a safety level) for preventing and reducing injuries or accidents. The SSH system is an inexpensive, non-intrusive, non-invasive, and non-vision-based system, which consists of an Inertial Measurement Unit (IMU) and dry EEG electrodes. A haptic device, such as vibrotactile motor, is integrated to the helmet in order to alert the operator when computed risk level (fatigue, high stress or error) reaches a threshold. Once the risk level of accident breaks the threshold, a signal will be sent wirelessly to stop the relevant machine tool or process.", "title": "" }, { "docid": "8e9c65aea02ec48c96f74ae0407582e6", "text": "With the wide penetration of mobile internet, social networking (SN) systems are becoming increasingly popular in the developing world. However, most SN sites are text heavy, and are therefore unusable by low-literate populations. Here we ask what would an SN application for low-literate users look like and how would it be used? We designed and deployed KrishiPustak, an audio-visual SN mobile application for low-literate farming populations in rural India. Over a four month deployment, 306 farmers registered through the phones of eight agricultural mediators making 514 posts and 180 replies. We conducted interviews with farmers and mediators and analyzed the content to understand system usage and to drive iterative design. The context of mediated use and agricultural framing had a powerful impact on system understanding (what it was for) and usage. Overall, KrishiPustak was useful and usable, but none-the-less we identify a number of design recommendations for similar SN systems.", "title": "" }, { "docid": "78e631aceb9598767289c86ace415e2b", "text": "We present the Balloon family of password hashing functions. These are the first cryptographic hash functions with proven space-hardness properties that: (i) use a password-independent access pattern, (ii) build exclusively upon standard cryptographic primitives, and (iii) are fast enough for real-world use. Space-hard functions require a large amount of working space to evaluate efficiently and, when used for password hashing, they dramatically increase the cost of offline dictionary attacks. 
The central technical challenge of this work was to devise the graph-theoretic and linear-algebraic techniques necessary to prove the space-hardness properties of the Balloon functions (in the random-oracle model). To motivate our interest in security proofs, we demonstrate that it is possible to compute Argon2i, a recently proposed space-hard function that lacks a formal analysis, in less than the claimed required space with no increase in the computation time.", "title": "" }, { "docid": "127b8dfb562792d02a4c09091e09da90", "text": "Current approaches to conservation and natural-resource management often focus on single objectives, resulting in many unintended consequences. These outcomes often affect society through unaccounted-for ecosystem services. A major challenge in moving to a more ecosystem-based approach to management that would avoid such societal damages is the creation of practical tools that bring a scientifically sound, production function-based approach to natural-resource decision making. A new set of computer-based models is presented, the Integrated Valuation of Ecosystem Services and Tradeoffs tool (InVEST) that has been designed to inform such decisions. Several of the key features of these models are discussed, including the ability to visualize relationships among multiple ecosystem services and biodiversity, the ability to focus on ecosystem services rather than biophysical processes, the ability to project service levels and values in space, sensitivity to manager-designed scenarios, and flexibility to deal with data and knowledge limitations. Sample outputs of InVEST are shown for two case applications; the Willamette Basin in Oregon and the Amazon Basin. Future challenges relating to the incorporation of social data, the projection of social distributional effects, and the design of effective policy mechanisms are discussed.", "title": "" }, { "docid": "993cc233ad132a71c2fe093e267e4876", "text": "-Deep learning has been applied to camera relocalization, in particular, PoseNet and its extended work are the convolutional neural networks which regress the camera pose from a single image. However there are many problems, one of them is expensive parameter selection. In this paper, we directly explore the three Euler angles as the orientation representation in the camera pose regressor. There is no need to select the parameter, which is not tolerant in the previous works. Experimental results on the 7 Scenes datasets and the King’s College dataset demonstrate that it has competitive performances.", "title": "" } ]
scidocsrr
a1a91a598d7b604d5f69f20319a077d0
Developing Supply Chains in Disaster Relief Operations through Cross-sector Socially Oriented Collaborations : A Theoretical Model
[ { "docid": "978c1712bf6b469059218697ea552524", "text": "Project-based cross-sector partnerships to address social issues (CSSPs) occur in four “arenas”: business-nonprofit, business-government, government-nonprofit, and trisector. Research on CSSPs is multidisciplinary, and different conceptual “platforms” are used: resource dependence, social issues, and societal sector platforms. This article consolidates recent literature on CSSPs to improve the potential for cross-disciplinary fertilization and especially to highlight developments in various disciplines for organizational researchers. A number of possible directions for future research on the theory, process, practice, method, and critique of CSSPs are highlighted. The societal sector platform is identified as a particularly promising framework for future research.", "title": "" }, { "docid": "ee045772d55000b6f2d3f7469a4161b1", "text": "Although prior research has addressed the influence of corporate social responsibility (CSR) on perceived customer responses, it is not clear whether CSR affects market value of the firm. This study develops and tests a conceptual framework, which predicts that (1) customer satisfaction partially mediates the relationship between CSR and firm market value (i.e., Tobin’s q and stock return), (2) corporate abilities (innovativeness capability and product quality) moderate the financial returns to CSR, and (3) these moderated relationships are mediated by customer satisfaction. Based on a large-scale secondary dataset, the results show support for this framework. Interestingly, it is found that in firms with low innovativeness capability, CSR actually reduces customer satisfaction levels and, through the lowered satisfaction, harms market value. The uncovered mediated and asymmetrically moderated results offer important implications for marketing theory and practice. In today’s competitive market environment, corporate social responsibility (CSR) represents a high-profile notion that has strategic importance to many companies. As many as 90% of the Fortune 500 companies now have explicit CSR initiatives (Kotler and Lee 2004; Lichtenstein et al. 2004). According to a recent special report by BusinessWeek (2005a, p.72), large companies disclosed substantial investments in CSR initiatives (i.e., Target’s donation of $107.8 million in CSR represents 3.6% of its pretax profits, with GM $51.2 million at 2.7%, General Mills $60.3 million at 3.2%, Merck $921million at 11.3%, HCA $926 million at 43.3%). By dedicating everincreasing amounts to cash donations, in-kind contributions, cause marketing, and employee volunteerism programs, companies are acting on the premise that CSR is not merely the “right thing to do,” but also “the smart thing to do” (Smith 2003). Importantly, along with increasing media coverage of CSR issues, companies themselves are also taking direct and visible steps to communicate their CSR initiatives to various stakeholders including consumers. A decade ago, Drumwright (1996) observed that advertising with a social dimension was on the rise. The trend seems to continue. Many companies, including the likes of Target and Walmart, have funded large national ad campaigns promoting their good works. The October 2005 issue of In Style magazine alone carried more than 25 “cause” ads. 
Indeed, consumers seem to be taking notice: whereas in 1993 only 26% of individuals surveyed by Cone Communications could name a company as a strong corporate citizen, by 2004, the percentage surged to as high as 80% (BusinessWeek 2005a). Motivated, in part, by this mounting importance of CSR in practice, several marketing studies have found that social responsibility programs have a significant influence on a number of customer-related outcomes (Bhattacharya and Sen 2004). More specifically, based on lab experiments, CSR is reported to directly or indirectly impact consumer product responses", "title": "" } ]
[ { "docid": "eff844ffdf2ef5408e23d98564d540f0", "text": "The motions of wheeled mobile robots are largely governed by contact forces between the wheels and the terrain. Inasmuch as future wheel-terrain interactions are unpredictable and unobservable, high performance autonomous vehicles must ultimately learn the terrain by feel and extrapolate, just as humans do. We present an approach to the automatic calibration of dynamic models of arbitrary wheeled mobile robots on arbitrary terrain. Inputs beyond our control (disturbances) are assumed to be responsible for observed differences between what the vehicle was initially predicted to do and what it was subsequently observed to do. In departure from much previous work, and in order to directly support adaptive and predictive controllers, we concentrate on the problem of predicting candidate trajectories rather than measuring the current slip. The approach linearizes the nominal vehicle model and then calibrates the perturbative dynamics to explain the observed prediction residuals. Both systematic and stochastic disturbances are used, and we model these disturbances as functions over the terrain, the velocities, and the applied inertial and gravitational forces. In this way, we produce a model which can be used to predict behavior across all of state space for arbitrary terrain geometry. Results demonstrate that the approach converges quickly and produces marked improvements in the prediction of trajectories for multiple vehicle classes throughout the performance envelope of the platform, including during aggressive maneuvering.", "title": "" }, { "docid": "43e39433013ca845703af053e5ef9e11", "text": "This paper presents the proposed design of high power and high efficiency inverter for wireless power transfer systems operating at 13.56 MHz using multiphase resonant inverter and GaN HEMT devices. The high efficiency and the stable of inverter are the main targets of the design. The module design, the power loss analysis and the drive circuit design have been addressed. In experiment, a 3 kW inverter with the efficiency of 96.1% is achieved that significantly improves the efficiency of 13.56 MHz inverter. In near future, a 10 kW inverter with the efficiency of over 95% can be realizable by following this design concept.", "title": "" }, { "docid": "4a3496a835d3948299173b4b2767d049", "text": "We describe an augmented reality (AR) system that allows multiple participants to interact with 2D and 3D data using tangible user interfaces. The system features face-to-face communication, collaborative viewing and manipulation of 3D models, and seamless access to 2D desktop applications within the shared 3D space. All virtual content, including 3D models and 2D desktop windows, is attached to tracked physical objects in order to leverage the efficiencies of natural two-handed manipulation. The presence of 2D desktop space within 3D facilitates data exchange between the two realms, enables control of 3D information by 2D applications, and generally increases productivity by providing access to familiar tools. We present a general concept for a collaborative tangible AR system, including a comprehensive set of interaction techniques, a distributed hardware setup, and a component-based software architecture that can be flexibly configured using XML. 
We show the validity of our concept with an implementation of an application scenario from the automotive industry.", "title": "" }, { "docid": "e86ad4e9b61df587d9e9e96ab4eb3978", "text": "This work presents a novel objective function for the unsupervised training of neural network sentence encoders. It exploits signals from paragraph-level discourse coherence to train these models to understand text. Our objective is purely discriminative, allowing us to train models many times faster than was possible under prior methods, and it yields models which perform well in extrinsic evaluations.", "title": "" }, { "docid": "7161122eaa9c9766e9914ba0f2ee66ef", "text": "Cross-linguistically consistent annotation is necessary for sound comparative evaluation and cross-lingual learning experiments. It is also useful for multilingual system development and comparative linguistic studies. Universal Dependencies is an open community effort to create cross-linguistically consistent treebank annotation for many languages within a dependency-based lexicalist framework. In this paper, we describe v1 of the universal guidelines, the underlying design principles, and the currently available treebanks for 33 languages.", "title": "" }, { "docid": "e30d6fd14f091e188e6a6b86b6286609", "text": "Assessing the spatio-temporal variations of surface water quality is important for water environment management. In this study, surface water samples are collected from 2008 to 2015 at 17 stations in the Ying River basin in China. The two pollutants i.e. chemical oxygen demand (COD) and ammonia nitrogen (NH3-N) are analyzed to characterize the river water quality. Cluster analysis and the seasonal Kendall test are used to detect the seasonal and inter-annual variations in the dataset, while the Moran's index is utilized to understand the spatial autocorrelation of the variables. The influence of natural factors such as hydrological regime, water temperature and etc., and anthropogenic activities with respect to land use and pollutant load are considered as driving factors to understand the water quality evolution. The results of cluster analysis present three groups according to the similarity in seasonal pattern of water quality. The trend analysis indicates an improvement in water quality during the dry seasons at most of the stations. Further, the spatial autocorrelation of water quality shows great difference between the dry and wet seasons due to sluices and dams regulation and local nonpoint source pollution. The seasonal variation in water quality is found associated with the climatic factors (hydrological and biochemical processes) and flow regulation. The analysis of land use indicates a good explanation for spatial distribution and seasonality of COD at the sub-catchment scale. Our results suggest that an integrated water quality measures including city sewage treatment, agricultural diffuse pollution control as well as joint scientific operations of river projects is needed for an effective water quality management in the Ying River basin.", "title": "" }, { "docid": "6e5e6b361d113fa68b2ca152fbf5b194", "text": "Spectral learning algorithms have recently become popular in data-rich domains, driven in part by recent advances in large scale randomized SVD, and in spectral estimation of Hidden Markov Models. Extensions of these methods lead to statistical estimation algorithms which are not only fast, scalable, and useful on real data sets, but are also provably correct. 
Following this line of research, we propose four fast and scalable spectral algorithms for learning word embeddings – low dimensional real vectors (called Eigenwords) that capture the “meaning” of words from their context. All the proposed algorithms harness the multi-view nature of text data i.e. the left and right context of each word, are fast to train and have strong theoretical properties. Some of the variants also have lower sample complexity and hence higher statistical power for rare words. We provide theory which establishes relationships between these algorithms and optimality criteria for the estimates they provide. We also perform thorough qualitative and quantitative evaluation of Eigenwords showing that simple linear approaches give performance comparable to or superior than the state-of-the-art non-linear deep learning based methods.", "title": "" }, { "docid": "c04ae9e3721f23b8b0a5b8306c25becb", "text": "A transmission-line model is developed for predicting the response of a twisted-wire pair (TWP) circuit in the presence of a ground plane, illuminated by a plane-wave electromagnetic field. The twisted pair is modeled as an ideal bifilar helix, the total coupling is separated into differential- (DM) and common-mode (CM) contributions, and closed-form expressions are derived for the equivalent induced sources. Approximate upper bounds to the terminal response of electrically long lines are obtained, and a simplified low-frequency circuit model is used to explain the mechanism of field-to-wire coupling in a TWP above ground, as well as the role of load balancing on the DM and CM electromagnetic noise induced in the terminal loads.", "title": "" }, { "docid": "1d9b1ce73d8d2421092bb5a70016a142", "text": "Social networks have the surprising property of being \"searchable\": Ordinary people are capable of directing messages through their network of acquaintances to reach a specific but distant target person in only a few steps. We present a model that offers an explanation of social network searchability in terms of recognizable personal identities: sets of characteristics measured along a number of social dimensions. Our model defines a class of searchable networks and a method for searching them that may be applicable to many network search problems, including the location of data files in peer-to-peer networks, pages on the World Wide Web, and information in distributed databases.", "title": "" }, { "docid": "6a23480588ca47b9e53de0fd4ff1ecb1", "text": "We present the nested Chinese restaurant process (nCRP), a stochastic process that assigns probability distributions to ensembles of infinitely deep, infinitely branching trees. We show how this stochastic process can be used as a prior distribution in a Bayesian nonparametric model of document collections. Specifically, we present an application to information retrieval in which documents are modeled as paths down a random tree, and the preferential attachment dynamics of the nCRP leads to clustering of documents according to sharing of topics at multiple levels of abstraction. Given a corpus of documents, a posterior inference algorithm finds an approximation to a posterior distribution over trees, topics and allocations of words to levels of the tree. We demonstrate this algorithm on collections of scientific abstracts from several journals. 
This model exemplifies a recent trend in statistical machine learning—the use of Bayesian nonparametric methods to infer distributions on flexible data structures.", "title": "" }, { "docid": "097da6ee2d13e0b4b2f84a26752574f4", "text": "Objective A sound theoretical foundation to guide practice is enhanced by the ability of nurses to critique research. This article provides a structured route to questioning the methodology of nursing research. Primary Argument Nurses may find critiquing a research paper a particularly daunting experience when faced with their first paper. Knowing what questions the nurse should be asking is perhaps difficult to determine when there may be unfamiliar research terms to grasp. Nurses may benefit from a structured approach which helps them understand the sequence of the text and the subsequent value of a research paper. Conclusion A framework is provided within this article to assist in the analysis of a research paper in a systematic, logical order. The questions presented in the framework may lead the nurse to conclusions about the strengths and weaknesses of the research methods presented in a research article. The framework does not intend to separate quantitative or qualitative paradigms but to assist the nurse in making broad observations about the nature of the research.", "title": "" }, { "docid": "be06fc67973751b98dd07599e29e4b01", "text": "The contactless version of the air-filled substrate integrated waveguide (AF-SIW) is introduced for the first time. The conventional AF-SIW configuration requires a pure and flawless connection of the covering layers to the intermediate substrate. To operate efficiently at high frequencies, this requires a costly fabrication process. In the proposed configuration, the boundary condition on both sides around the AF guiding medium is modified to obtain artificial magnetic conductor (AMC) boundary conditions. The AMC surfaces on both sides of the waveguide substrate are realized by a single-periodic structure with the new type of unit cells. The PEC–AMC parallel plates prevent the leakage of the AF guiding region. The proposed contactless AF-SIW shows low-loss performance in comparison with the conventional AF-SIW at millimeter-wave frequencies when the layers of both waveguides are connected poorly.", "title": "" }, { "docid": "4283c9b6b679913648f758abeba2ab93", "text": "A significant goal of natural language processing (NLP) is to devise a system capable of machine understanding of text. A typical system can be tested on its ability to answer questions based on a given context document. One appropriate dataset for such a system is the Stanford Question Answering Dataset (SQuAD), a crowdsourced dataset of over 100k (question, context, answer) triplets. In this work, we focused on creating such a question answering system through a neural net architecture modeled after the attentive reader and sequence attention mix models.", "title": "" }, { "docid": "285587e0e608d8bafa0962b5cf561205", "text": "BACKGROUND\nGeneralized Additive Model (GAM) provides a flexible and effective technique for modelling nonlinear time-series in studies of the health effects of environmental factors. However, GAM assumes that errors are mutually independent, while time series can be correlated in adjacent time points. 
Here, a GAM with Autoregressive terms (GAMAR) is introduced to fill this gap.\n\n\nMETHODS\nParameters in GAMAR are estimated by maximum partial likelihood using modified Newton's method, and the difference between GAM and GAMAR is demonstrated using two simulation studies and a real data example. GAMM is also compared to GAMAR in simulation study 1.\n\n\nRESULTS\nIn the simulation studies, the bias of the mean estimates from GAM and GAMAR are similar but GAMAR has better coverage and smaller relative error. While the results from GAMM are similar to GAMAR, the estimation procedure of GAMM is much slower than GAMAR. In the case study, the Pearson residuals from the GAM are correlated, while those from GAMAR are quite close to white noise. In addition, the estimates of the temperature effects are different between GAM and GAMAR.\n\n\nCONCLUSIONS\nGAMAR incorporates both explanatory variables and AR terms so it can quantify the nonlinear impact of environmental factors on health outcome as well as the serial correlation between the observations. It can be a useful tool in environmental epidemiological studies.", "title": "" }, { "docid": "17953a3e86d3a4396cbd8a911c477f07", "text": "We introduce Deep Semantic Embedding (DSE), a supervised learning algorithm which computes semantic representation for text documents by respecting their similarity to a given query. Unlike other methods that use singlelayer learning machines, DSE maps word inputs into a lowdimensional semantic space with deep neural network, and achieves a highly nonlinear embedding to model the human perception of text semantics. Through discriminative finetuning of the deep neural network, DSE is able to encode the relative similarity between relevant/irrelevant document pairs in training data, and hence learn a reliable ranking score for a query-document pair. We present test results on datasets including scientific publications and user-generated knowledge base.", "title": "" }, { "docid": "184d34ef560809aad938c0e08939a1bb", "text": "Mechanical engineers apply principles of motion, energy, force, materials, and mathematics to design and analyze a wide variety of products and systems. The field requires an understanding of core concepts including mechanics, kinematics, thermodynamics, heat transfer, materials science and controls. Mechanical engineers use these core principles along with tools like computer-aided engineering and product life cycle management to design and analyze manufacturing plants, industrial equipment and machinery, heating and cooling systems, automotive systems, aircraft, robotics, medical devices, and more. Today, mechanical engineers are pursuing developments in such fields as composites, mechatronics, and nanotechnology, and are helping to create a more sustainable future.", "title": "" }, { "docid": "69dea04dc13754f7f89a1e7b7d973659", "text": "The nature of congestion feedback largely governs the behavior of congestion control. In datacenter networks, where RTTs are in hundreds of microseconds, accurate feedback is crucial to achieve both high utilization and low queueing delay. Proposals for datacenter congestion control predominantly leverage ECN or even explicit innetwork feedback (e.g., RCP-type feedback) to minimize the queuing delay. In this work we explore latency-based feedback as an alternative and show its advantages over ECN. 
Against the common belief that such implicit feedback is noisy and inaccurate, we demonstrate that latencybased implicit feedback is accurate enough to signal a single packet’s queuing delay in 10 Gbps networks. DX enables accurate queuing delay measurements whose error falls within 1.98 and 0.53 microseconds using software-based and hardware-based latency measurements, respectively. This enables us to design a new congestion control algorithm that performs fine-grained control to adjust the congestion window just enough to achieve very low queuing delay while attaining full utilization. Our extensive evaluation shows that 1) the latency measurement accurately reflects the one-way queuing delay in single packet level; 2) the latency feedback can be used to perform practical and fine-grained congestion control in high-speed datacenter networks; and 3) DX outperforms DCTCP with 5.33x smaller median queueing delay at 1 Gbps and 1.57x at 10 Gbps.", "title": "" }, { "docid": "3d2060ef33910ef1c53b0130f3cc3ffc", "text": "Recommender systems help users deal with information overload and enjoy a personalized experience on the Web. One of the main challenges in these systems is the item cold-start problem which is very common in practice since modern online platforms have thousands of new items published every day. Furthermore, in many real-world scenarios, the item recommendation tasks are based on users’ implicit preference feedback such as whether a user has interacted with an item. To address the above challenges, we propose a probabilistic modeling approach called Neural Semantic Personalized Ranking (NSPR) to unify the strengths of deep neural network and pairwise learning. Specifically, NSPR tightly couples a latent factor model with a deep neural network to learn a robust feature representation from both implicit feedback and item content, consequently allowing our model to generalize to unseen items. We demonstrate NSPR’s versatility to integrate various pairwise probability functions and propose two variants based on the Logistic and Probit functions. We conduct a comprehensive set of experiments on two real-world public datasets and demonstrate that NSPR significantly outperforms the state-of-the-art baselines.", "title": "" }, { "docid": "836f0a9a843802dda2b9ca7b166ef5f8", "text": "Article history: Available online xxxx", "title": "" } ]
scidocsrr
b95c9c9d60fd21e0175319ee54a82445
Detection of false data injection attacks in smart-grid systems
[ { "docid": "ac222a5f8784d7a5563939077c61deaa", "text": "Cyber-Physical Systems (CPS) are integrations of computation with physical processes. Embedded computers and networks monitor and control the physical processes, usually with feedback loops where physical processes affect computations and vice versa. In the physical world, the passage of time is inexorable and concurrency is intrinsic. Neither of these properties is present in today’s computing and networking abstractions. I argue that the mismatch between these abstractions and properties of physical processes impede technical progress, and I identify promising technologies for research and investment. There are technical approaches that partially bridge the abstraction gap today (such as real-time operating systems, middleware technologies, specialized embedded processor architectures, and specialized networks), and there is certainly considerable room for improvement of these technologies. However, it may be that we need a less incremental approach, where new abstractions are built from the ground up. The foundations of computing are built on the premise that the principal task of computers is transformation of data. Yet we know that the technology is capable of far richer interactions the physical world. I critically examine the foundations that have been built over the last several decades, and determine where the technology and theory bottlenecks and opportunities lie. I argue for a new systems science that is jointly physical and computational.", "title": "" }, { "docid": "002aec0b09bbd2d0e3453c9b3aa8d547", "text": "It is often appealing to assume that existing solutions can be directly applied to emerging engineering domains. Unfortunately, careful investigation of the unique challenges presented by new domains exposes its idiosyncrasies, thus often requiring new approaches and solutions. In this paper, we argue that the “smart” grid, replacing its incredibly successful and reliable predecessor, poses a series of new security challenges, among others, that require novel approaches to the field of cyber security. We will call this new field cyber-physical security. The tight coupling between information and communication technologies and physical systems introduces new security concerns, requiring a rethinking of the commonly used objectives and methods. Existing security approaches are either inapplicable, not viable, insufficiently scalable, incompatible, or simply inadequate to address the challenges posed by highly complex environments such as the smart grid. A concerted effort by the entire industry, the research community, and the policy makers is required to achieve the vision of a secure smart grid infrastructure.", "title": "" } ]
[ { "docid": "62bf93deeb73fab74004cb3ced106bac", "text": "Since the publication of the Design Patterns book, a large number of object-oriented design patterns have been identified and codified. As part of the pattern form, object-oriented design patterns must indicate their relationships with other patterns, but these relationships are typically described very briefly, and different collections of patterns describe different relationships in different ways. In this paper we describe and classify the common relationships between object-oriented design patterns. Practitioners can use these relationships to help them identify those patterns which may be applicable to a particular problem, and pattern writers can use these relationships to help them integrate new patterns into the body of the patterns literature.", "title": "" }, { "docid": "960360bd445566c4581c1ae021ee64d5", "text": "Artwork is a mode of creative expression and this paper is particularly interested in investigating if machines can learn and synthetically create artwork that is usually nonfigurative and structured abstract. To this end, we propose an extension to the Generative Adversarial Network (GAN), namely the ArtGAN, to synthetically generate high quality artwork. This is in contrast to most of the current solutions that focused on generating structural images such as birds, flowers and faces. The key innovation of our work is to allow back-propagation of the loss function w.r.t. the labels (randomly assigned to each generated image) to the generator from the categorical autoencoder-based discriminator that incorporates an autoencoder into the categorical discriminator for additional complementary information. In order to synthesize a high resolution artwork, we include a novel magnified learning strategy to improve the correlations between neighbouring pixels. Based on visual inspection and Inception scores, we demonstrate that ArtGAN is able to draw high resolution and realistic artwork, as well as generate images of much higher quality in four other datasets (i.e. CIFAR-10, STL-10, Oxford-102 and CUB-200).", "title": "" }, { "docid": "21122ab1659629627c46114cc5c3b838", "text": "The introduction of more onboard autonomy in future single and multi-satellite missions is both a question of limited onboard resources and of how far we can actually trust the autonomous functionalities deployed on board. In-flight experience with NASA's Deep Space 1 and Earth Observing 1 has shown how difficult it is to design, build and test reliable software for autonomy. The degree to which system-level onboard autonomy will be deployed in the single and multi-satellite systems of tomorrow will depend, among other things, on the progress made in two key software technologies: autonomous onboard planning and robust execution. Parallel to the developments in these two areas, the actual integration of planning and execution engines is still nowadays a crucial issue in practical application. This paper presents an onboard autonomous model-based executive for execution of time-flexible plans. It describes its interface with an APSI-based timeline-based planner, its control approaches, architecture and its modelling language as an extension of APSI's DDL.
In addition, it introduces a modified version of the classical blocks world toy planning problem which has been extended in scope and with a runtime environment for evaluation of integrated planning and executive engines.", "title": "" }, { "docid": "55d7db89621dc57befa330c6dea823bf", "text": "In this paper we propose CUDA-based implementations of two 3D point set registration algorithms: Softassign and EM-ICP. Both algorithms are known for being time demanding, even on modern multi-core CPUs. Our GPU-based implementations vastly outperform CPU ones. For instance, our CUDA EM-ICP aligns 5000 points in less than 7 seconds on a GeForce 8800GT, while the same implementation in OpenMP on an Intel Core 2 Quad would take 7 minutes.", "title": "" }, { "docid": "285a1c073ec4712ac735ab84cbcd1fac", "text": "During a survey of black yeasts of marine origin, some isolates of Hortaea werneckii were recovered from scuba diving equipment, such as silicone masks and snorkel mouthpieces, which had been kept under poor storage conditions. These yeasts were unambiguously identified by phenotypic and genotypic methods. Phylogenetic analysis of both the D1/D2 regions of 26S rRNA gene and ITS-5.8S rRNA gene sequences showed three distinct genetic types. This species is the agent of tinea nigra which is a rarely diagnosed superficial mycosis in Europe. In fact this mycosis is considered an imported fungal infection being much more prevalent in warm, humid parts of the world such as the Central and South Americas, Africa, and Asia. Although H. werneckii has been found in hypersaline environments in Europe, this is the first instance of the isolation of this halotolerant species from scuba diving equipment made with silicone rubber which is used in close contact with human skin and mucous membranes. The occurrence of this fungus in Spain is also an unexpected finding because cases of tinea nigra in this country are practically never seen.", "title": "" }, { "docid": "448285428c6b6cfca8c2937d8393eee5", "text": "Swarm robotics is a novel approach to the coordination of large numbers of robots and has emerged as the application of swarm intelligence to multi-robot systems. Different from other swarm intelligence studies, swarm robotics puts emphasis on the physical embodiment of individuals and realistic interactions among the individuals and between the individuals and the environment. In this chapter, we present a brief review of this new approach. We first present its definition, discuss the main motivations behind the approach, as well as its distinguishing characteristics and major coordination mechanisms. Then we present a brief review of swarm robotics research along four axes, namely design, modelling and analysis, robots and problems.", "title": "" }, { "docid": "6b4fcc3075d2fcf02b7d570fa5a88a58", "text": "Vehicular Ad-hoc Network (VANET) is a new application of Mobile Ad-hoc Network (MANET) in the field of Inter-vehicle communication. Due to the high mobility of vehicles, some traditional MANET routing protocols may not fit the VANET. In this paper, we propose a cluster-based directional routing protocol (CBDRP) for highway scenarios, in which the header of a cluster selects another header according to the moving direction of vehicles to forward packets.
Simulation results show the CBDRP can solve the problem of link stability in VANET, realizing reliable and rapid data transmission.", "title": "" }, { "docid": "3f1ab17fb722d5a2612675673b200a82", "text": "In this paper, we show that the recent integration of statistical models with deep recurrent neural networks provides a new way of formulating volatility (the degree of variation of time series) models that have been widely used in time series analysis and prediction in finance. The model comprises a pair of complementary stochastic recurrent neural networks: the generative network models the joint distribution of the stochastic volatility process; the inference network approximates the conditional distribution of the latent variables given the observables. Our focus here is on the formulation of temporal dynamics of volatility over time under a stochastic recurrent neural network framework. Experiments on real-world stock price datasets demonstrate that the proposed model generates a better volatility estimation and prediction that outperforms mainstream methods, e.g., deterministic models such as GARCH and its variants, and stochastic models, namely the MCMC-based model stochvol as well as the Gaussian process volatility model GPVol, on average negative log-likelihood.", "title": "" }, { "docid": "35d7da51ad184250d4cd219ab32f0b5e", "text": "This paper describes an algorithm for verification of signatures written on a pen-input tablet. The algorithm is based on a novel, artificial neural network, called a \"Siamese\" neural network. This network consists of two identical sub-networks joined at their outputs. During training the two sub-networks extract features from two signatures, while the joining neuron measures the distance between the two feature vectors. Verification consists of comparing an extracted feature vector with a stored feature vector for the signer. Signatures closer to this stored representation than a chosen threshold are accepted, all other signatures are rejected as forgeries.", "title": "" }, { "docid": "b2d749c5b27e065922433fe6fb6462ee", "text": "In this paper, a fast adaptive neural network classifier named FANNC is proposed. FANNC exploits the advantages of both adaptive resonance theory and field theory. It needs only one-pass learning, and achieves not only high predictive accuracy but also fast learning speed. Besides, FANNC has incremental learning ability. When new instances are fed, it does not need to retrain the whole training set. Instead, it could learn the knowledge encoded in those instances through slightly adjusting the network topology when necessary, that is, adaptively appending one or two hidden units and corresponding connections to the existing network. This characteristic makes FANNC fit for real-time online learning tasks. Moreover, since the network architecture is adaptively set up, the disadvantage of manually determining the number of hidden units of most feed-forward neural networks is overcome. Benchmark tests show that FANNC is a preferable neural network classifier, which is superior to several other neural algorithms on both predictive accuracy and learning speed.", "title": "" }, { "docid": "90a7849b9e71df0cb9c4b77c369592db", "text": "Social networking and microblogging services such as Twitter provide a continuous source of data from which useful information can be extracted.
The detection and characterization of bursty words play an important role in processing such data, as bursty words might hint at events or trending topics of social importance upon which actions can be triggered. While there are several approaches to extract bursty words from the content of messages, there is little work that deals with the dynamics of continuous streams of messages, in particular messages that are geo-tagged.\n In this paper, we present a framework to identify bursty words from Twitter text streams and to describe such words in terms of their spatio-temporal characteristics. Using a time-aware word usage baseline, a sliding window approach over incoming tweets is proposed to identify words that satisfy some burstiness threshold. For these words, a time-varying spatial signature is then determined, which primarily relies on geo-tagged tweets. In order to deal with the noise and the sparsity of geo-tagged tweets, we propose a novel graph-based regularization procedure that uses spatial co-occurrences of bursty words and allows for computing sound spatial signatures. We evaluate the functionality of our online processing framework using two real-world Twitter datasets. The results show that our framework can efficiently and reliably extract bursty words and describe their spatio-temporal evolution over time.", "title": "" }, { "docid": "b9e8dc2492a0d91f1f7b9866f38235ab", "text": "As interconnect cross-sections are ever scaled down, particular care must be taken with the tradeoff between the increase of current density in the back end of line and reliability to prevent electromigration (EM). Some leverage exists, such as the well-known Blech effect [1]. One can take advantage of the EM-induced backflow flux that counters the EM flux. As a consequence, the total net flux in the line is reduced and additional current density in designs can be allowed in short lines. However, the immortality condition is most of the time addressed with standard test structures ended by two vias [2]–[3]. Designs present complex configurations far from this typical case and the Blech product (jL)c can be deteriorated or enhanced [4]. In the present paper, we present our study of EM performance of short lines ended by an inactive end of line (EOL) at one end of the test structure. Significant differences in the median time to failure (MTF) are observed with respect to the current direction, from a quasi-deletion of failure to a significant reduction of the Blech effect. Based on the resistance saturation, a method is proposed to determine effective lengths of inactive EOL configurations corresponding to the standard case.", "title": "" }, { "docid": "843ea8a700adf545288175c1062107bb", "text": "Stress is a natural reaction to various stress-inducing factors which can lead to physiological and behavioral changes. If it persists for a longer period, stress can cause harmful effects on our body. The body sensors along with the concept of the Internet of Things can provide rich information about one's mental and physical health. The proposed work concentrates on developing an IoT system which can efficiently detect the stress level of a person and provide feedback which can assist the person to cope with the stressors. The system consists of a smart band module and a chest strap module which can be worn around the wrist and chest respectively.
The system monitors parameters such as electrodermal activity and heart rate in real time and sends the data to a cloud-based ThingSpeak server serving as an online IoT platform. The computation of the data is performed using a ‘MATLAB Visualization’ application and the stress report is displayed. The authorized person can log in, view the report and take actions such as consulting a medical person or performing meditation or yoga exercises to cope with the condition.", "title": "" }, { "docid": "4b18d2665f1bc6e9576237d88e15c74e", "text": "Beta Regression, an extension of generalized linear models, can estimate the effect of explanatory variables on data falling within the (0,1) interval. Recent developments in Beta Regression theory extend the support interval to now include 0 and 1. The %Beta_Regression macro is updated to now allow for Zero-One Inflated Beta Regression.", "title": "" }, { "docid": "ec673efa5f837ba4c997ee7ccd845ce1", "text": "Deep Neural Networks (DNNs) are hierarchical nonlinear architectures that have been widely used in artificial intelligence applications. However, these models are vulnerable to adversarial perturbations which add slight changes and are crafted explicitly to fool the model. Such attacks will cause the neural network to completely change its classification of data. Although various defense strategies have been proposed, existing defense methods have two limitations. First, the discovery success rate is not very high. Second, existing methods depend on the output of a particular layer in a specific learning structure. In this paper, we propose a powerful method for detecting adversarial samples using Large Margin Cosine Estimate (LMCE). By iteratively calculating the large-margin cosine uncertainty estimates between the model predictions, the results can be regarded as a novel measurement of model uncertainty estimation and can be used to detect adversarial samples by training using a simple machine learning algorithm. Comparing it with the way in which adversarial samples are generated, it is confirmed that this measurement can better distinguish hostile disturbances. We modeled deep neural network attacks and established defense mechanisms against various types of adversarial attacks. The classifier gets better performance than the baseline model. The approach is validated on a series of standard datasets including MNIST and CIFAR-10, outperforming a previous ensemble method with strong statistical significance. Experiments indicate that our approach generalizes better across different architectures and attacks.", "title": "" }, { "docid": "0b01870332dd93897fbcecb9254c40b9", "text": "Computer-aided detection or decision support systems aim to improve breast cancer screening programs by helping radiologists to evaluate digital mammography (DM) exams. Commonly such methods proceed in two steps: selection of candidate regions for malignancy, and later classification as either malignant or not. In this study, we present a candidate detection method based on deep learning to automatically detect and additionally segment soft tissue lesions in DM. A database of DM exams (mostly bilateral and two views) was collected from our institutional archive. In total, 7196 DM exams (28294 DM images) acquired with systems from three different vendors (General Electric, Siemens, Hologic) were collected, of which 2883 contained malignant lesions verified with histopathology.
Data was randomly split on an exam level into training (50%), validation (10%) and testing (40%) of deep neural network with u-net architecture. The u-net classifies the image but also provides lesion segmentation. Free receiver operating characteristic (FROC) analysis was used to evaluate the model, on an image and on an exam level. On an image level, a maximum sensitivity of 0.94 at 7.93 false positives (FP) per image was achieved. Similarly, per exam a maximum sensitivity of 0.98 at 7.81 FP per image was achieved. In conclusion, the method could be used as a candidate selection model with high accuracy and with the additional information of lesion segmentation.", "title": "" }, { "docid": "eb6f055399614a4e0876ffefae8d6a28", "text": "For accurate recognition of protein folds, a deep learning network method (DN-Fold) was developed to predict if a given query-template protein pair belongs to the same structural fold. The input used stemmed from the protein sequence and structural features extracted from the protein pair. We evaluated the performance of DN-Fold along with 18 different methods on Lindahl's benchmark dataset and on a large benchmark set extracted from SCOP 1.75 consisting of about one million protein pairs, at three different levels of fold recognition (i.e., protein family, superfamily, and fold) depending on the evolutionary distance between protein sequences. The correct recognition rate of ensembled DN-Fold for Top 1 predictions is 84.5%, 61.5%, and 33.6% and for Top 5 is 91.2%, 76.5%, and 60.7% at family, superfamily, and fold levels, respectively. We also evaluated the performance of single DN-Fold (DN-FoldS), which showed the comparable results at the level of family and superfamily, compared to ensemble DN-Fold. Finally, we extended the binary classification problem of fold recognition to real-value regression task, which also show a promising performance. DN-Fold is freely available through a web server at http://iris.rnet.missouri.edu/dnfold.", "title": "" }, { "docid": "75e794b731685064820c79f4d68ed79b", "text": "Graph visualizations encode relationships between objects. Abstracting the objects into group structures provides an overview of the data. Groups can be disjoint or overlapping, and might be organized hierarchically. However, the underlying graph still needs to be represented for analyzing the data in more depth. This work surveys research in visualizing group structures as part of graph diagrams. A particular focus is the explicit visual encoding of groups, rather than only using graph layout to implicitly indicate groups. We introduce a taxonomy of visualization techniques structuring the field into four main categories: visual node attributes vary properties of the node representation to encode the grouping, juxtaposed approaches use two separate visualizations, superimposed techniques work with two aligned visual layers, and embedded visualizations tightly integrate group and graph representation. We discuss results from evaluations of those techniques as well as main areas of application. Finally, we report future challenges based on interviews we conducted with leading researchers of the field.", "title": "" }, { "docid": "b85e9ef3652a99e55414d95bfed9cc0d", "text": "Regulatory T cells (Tregs) prevail as a specialized cell lineage that has a central role in the dominant control of immunological tolerance and maintenance of immune homeostasis. 
Thymus-derived Tregs (tTregs) and their peripherally induced counterparts (pTregs) are imprinted with unique Forkhead box protein 3 (Foxp3)-dependent and independent transcriptional and epigenetic characteristics that bestows on them the ability to suppress disparate immunological and non-immunological challenges. Thus, unidirectional commitment and the predominant stability of this regulatory lineage is essential for their unwavering and robust suppressor function and has clinical implications for the use of Tregs as cellular therapy for various immune pathologies. However, recent studies have revealed considerable heterogeneity or plasticity in the Treg lineage, acquisition of alternative effector or hybrid fates, and promotion rather than suppression of inflammation in extreme contexts. In addition, the absolute stability of Tregs under all circumstances has been questioned. Since these observations challenge the safety and efficacy of human Treg therapy, the issue of Treg stability versus plasticity continues to be enthusiastically debated. In this review, we assess our current understanding of the defining features of Foxp3(+) Tregs, the intrinsic and extrinsic cues that guide development and commitment to the Treg lineage, and the phenotypic and functional heterogeneity that shapes the plasticity and stability of this critical regulatory population in inflammatory contexts.", "title": "" } ]
scidocsrr
5abcc43722d09043d886168fa3c17eb8
Towards Highly Accurate and Stable Face Alignment for High-Resolution Videos
[ { "docid": "79cffed53f36d87b89577e96a2b2e713", "text": "Human pose estimation has made significant progress during the last years. However current datasets are limited in their coverage of the overall pose estimation challenges. Still these serve as the common sources to evaluate, train and compare different models on. In this paper we introduce a novel benchmark \"MPII Human Pose\" that makes a significant advance in terms of diversity and difficulty, a contribution that we feel is required for future developments in human body models. This comprehensive dataset was collected using an established taxonomy of over 800 human activities [1]. The collected images cover a wider variety of human activities than previous datasets including various recreational, occupational and householding activities, and capture people from a wider range of viewpoints. We provide a rich set of labels including positions of body joints, full 3D torso and head orientation, occlusion labels for joints and body parts, and activity labels. For each image we provide adjacent video frames to facilitate the use of motion information. Given these rich annotations we perform a detailed analysis of leading human pose estimation approaches and gaining insights for the success and failures of these methods.", "title": "" }, { "docid": "f1deb9134639fb8407d27a350be5b154", "text": "This work introduces a novel Convolutional Network architecture for the task of human pose estimation. Features are processed across all scales and consolidated to best capture the various spatial relationships associated with the body. We show how repeated bottom-up, top-down processing used in conjunction with intermediate supervision is critical to improving the performance of the network. We refer to the architecture as a ‘stacked hourglass’ network based on the successive steps of pooling and upsampling that are done to produce a final set of estimates. State-of-the-art results are achieved on the FLIC and MPII benchmarks outcompeting all recent methods.", "title": "" } ]
[ { "docid": "991420a2abaf1907ab4f5a1c2dcf823d", "text": "We are interested in counting the number of instances of object classes in natural, everyday images. Previous counting approaches tackle the problem in restricted domains such as counting pedestrians in surveillance videos. Counts can also be estimated from outputs of other vision tasks like object detection. In this work, we build dedicated models for counting designed to tackle the large variance in counts, appearances, and scales of objects found in natural scenes. Our approach is inspired by the phenomenon of subitizing &#x2013; the ability of humans to make quick assessments of counts given a perceptual signal, for small count values. Given a natural scene, we employ a divide and conquer strategy while incorporating context across the scene to adapt the subitizing idea to counting. Our approach offers consistent improvements over numerous baseline approaches for counting on the PASCAL VOC 2007 and COCO datasets. Subsequently, we study how counting can be used to improve object detection. We then show a proof of concept application of our counting methods to the task of Visual Question Answering, by studying the how many? questions in the VQA and COCO-QA datasets.", "title": "" }, { "docid": "23676a52e1ed03d7b5c751a9986a7206", "text": "Considering the increasingly complex media landscape and diversity of use, it is important to establish a common ground for identifying and describing the variety of ways in which people use new media technologies. Characterising the nature of media-user behaviour and distinctive user types is challenging and the literature offers little guidance in this regard. Hence, the present research aims to classify diverse user behaviours into meaningful categories of user types, according to the frequency of use, variety of use and content preferences. To reach a common framework, a review of the relevant research was conducted. An overview and meta-analysis of the literature (22 studies) regarding user typology was established and analysed with reference to (1) method, (2) theory, (3) media platform, (4) context and year, and (5) user types. Based on this examination, a unified Media-User Typology (MUT) is suggested. This initial MUT goes beyond the current research literature, by unifying all the existing and various user type models. A common MUT model can help the Human–Computer Interaction community to better understand both the typical users and the diversification of media-usage patterns more qualitatively. Developers of media systems can match the users’ preferences more precisely based on an MUT, in addition to identifying the target groups in the developing process. Finally, an MUT will allow a more nuanced approach when investigating the association between media usage and social implications such as the digital divide. 2010 Elsevier Ltd. All rights reserved. 1 Difficulties in understanding media-usage behaviour have also arisen because of", "title": "" }, { "docid": "a86dac3d0c47757ce8cad41499090b8e", "text": "We propose a theory of regret regulation that distinguishes regret from related emotions, specifies the conditions under which regret is felt, the aspects of the decision that are regretted, and the behavioral implications. The theory incorporates hitherto scattered findings and ideas from psychology, economics, marketing, and related disciplines. 
By identifying strategies that consumers may employ to regulate anticipated and experienced regret, the theory identifies gaps in our current knowledge and thereby outlines opportunities for future research.", "title": "" }, { "docid": "aa7029c5e29a72a8507cbcb461ef92b0", "text": "Regenerative endodontics has been defined as \"biologically based procedure designed to replace damaged structures, including dentin and root structures, as well as cells of the pulp-dentin complex.\" This is an exciting and rapidly evolving field of human endodontics for the treatment of immature permanent teeth with infected root canal systems. These procedures have shown to be able not only to resolve pain and apical periodontitis but continued root development, thus increasing the thickness and strength of the previously thin and fracture-prone roots. In the last decade, over 80 case reports, numerous animal studies, and series of regenerative endodontic cases have been published. However, even with multiple successful case reports, there are still some remaining questions regarding terminology, patient selection, and procedural details. Regenerative endodontics provides the hope of converting a nonvital tooth into vital one once again.", "title": "" }, { "docid": "9e452c36ed7abfa6289568165b59ad30", "text": "This paper presents an approach to classify the heights of targets with radar systems. The algorithm is based on the analysis of the superposition of several reflections at a high target. The result is a superposition of the different received signals. On the contrary to a high target, a low target has only one reflection point and one propagation path. In this paper, a technique is proposed to detect the superposition and consequently classify targets as low or high. Finally the algorithm is evaluated with measurements.", "title": "" }, { "docid": "a60b1045e2344cf2ab8db6038cbdeb4d", "text": "The study of the interactions between plants and their microbial communities in the rhizosphere is important for developing sustainable management practices and agricultural products such as biofertilizers and biopesticides. Plant roots release a broad variety of chemical compounds to attract and select microorganisms in the rhizosphere. In turn, these plantassociated microorganisms, via different mechanisms, influence plant health and growth. In this review, we summarize recent progress made in unraveling the interactions between plants and rhizosphere microbes through plant root exudates, focusing on how root exudate compounds mediate rhizospheric interactions both at the plant–microbe and plant–microbiome levels. We also discuss the potential of root exudates for harnessing rhizospheric interactions with microbes that could lead to sustainable agricultural practices.", "title": "" }, { "docid": "642b98bf1ea22958411514cb7f01ef68", "text": "This paper studies the problems of vehicle make & model classification. Some of the main challenges are reaching high classification accuracy and reducing the annotation time of the images. To address these problems, we have created a fine-grained database using online vehicle marketplaces of Turkey. A pipeline is proposed to combine an SSD (Single Shot Multibox Detector) model with a CNN (Convolutional Neural Network) model to train on the database. In the pipeline, we first detect the vehicles by following an algorithm which reduces the time for annotation. Then, we feed them into the CNN model. 
This reaches approximately 4% better classification accuracy than a conventional CNN model. Next, we propose to use the detected vehicles as the ground truth bounding boxes (GTBB) of the images and feed them into an SSD model in another pipeline. At this stage, reasonable classification accuracy is reached without using perfectly shaped GTBB. Lastly, an application is implemented in a use case by using our proposed pipelines. It detects unauthorized vehicles by comparing their license plate numbers and make & models. It is assumed that license plates are readable.", "title": "" }, { "docid": "a2799e0cee6ca6d7f6b0cc230957b56b", "text": "We present a photo-realistic training and evaluation simulator (UE4Sim) with extensive applications across various fields of computer vision. Built on top of the Unreal Engine, the simulator integrates full-featured, physics-based cars, unmanned aerial vehicles (UAVs), and animated human actors in diverse urban and suburban 3D environments. We demonstrate the versatility of the simulator with two case studies: autonomous UAV-based tracking of moving objects and autonomous driving using supervised learning. The simulator fully integrates several state-of-the-art tracking algorithms with a benchmark evaluation tool and a deep neural network (DNN) architecture for training vehicles to drive autonomously. It generates synthetic photo-realistic datasets with automatic ground truth annotations to easily extend existing real-world datasets and provides extensive synthetic data variety through its ability to reconfigure synthetic worlds on the fly using an automatic world generation tool.", "title": "" }, { "docid": "e6da9e3f8af84139076d30a439da7a18", "text": "Monocular simultaneous localization and mapping (SLAM) is a key enabling technique for many augmented reality (AR) applications. However, conventional methods for monocular SLAM can obtain only sparse or semi-dense maps in highly-textured image areas. Poorly-textured regions which widely exist in indoor and man-made urban environments can hardly be reconstructed, impeding interactions between virtual objects and real scenes in AR apps. In this paper, we present a novel method for real-time monocular dense mapping based on the piecewise planarity assumption for poorly textured regions. Specifically, a semi-dense map for highly-textured regions is first calculated by pixel matching and triangulation [6, 7]. Large textureless regions extracted by Maximally Stable Color Regions (MSCR) [11], which is a homogeneous-color region detector, are approximated using piecewise planar models which are estimated by the corresponding semi-dense 3D points and the proposed multi-plane segmentation algorithm. Plane models associated with the same 3D area across multiple overlapping views are linked and fused to ensure a consistent and accurate 3D reconstruction. Experimental results on two public datasets [15, 23] demonstrate that our method is 2.3X~2.9X faster than the state-of-the-art method DPPTAM [2], and meanwhile achieves better reconstruction accuracy and completeness. We also apply our method to a real AR application, and live experiments with a hand-held camera demonstrate the effectiveness and efficiency of our method in a practical scenario.", "title": "" }, { "docid": "e10d9cca7bdb4b8f038d3dbc260d0e3f", "text": "An important goal of visualization technology is to support the exploration and analysis of very large amounts of data. 
In this paper, we describe a set of pixel-oriented visualization techniques which use each pixel of the display to visualize one data value and therefore allow the visualization of the largest amount of data possible. Most of the techniques have been specifically designed for visualizing and querying large databases. The techniques may be divided into query-independent techniques which directly visualize the data (or a certain portion of it) and query-dependent techniques which visualize the data in the context of a specific query. Examples for the class of query-independent techniques are the screen-filling curve and recursive pattern techniques. The screen-filling curve techniques are based on the well-known Morton and Peano-Hilbert curve algorithms, and the recursive pattern technique is based on a generic recursive scheme which generalizes a wide range of pixel-oriented arrangements for visualizing large data sets. Examples for the class of query-dependent techniques are the snake spiral and snake axes techniques, which visualize the distances with respect to a database query and arrange the most relevant data items in the center of the display. Besides describing the basic ideas of our techniques, we provide example visualizations generated by the various techniques, which demonstrate the usefulness of our techniques and show some of their advantages and disadvantages.", "title": "" }, { "docid": "793edca657c68ade4d2391c23f585c41", "text": "In the linear bandit problem a learning agent chooses an arm at each round and receives a stochastic reward. The expected value of this stochastic reward is an unknown linear function of the arm choice. As is standard in bandit problems, a learning agent seeks to maximize the cumulative reward over an n round horizon. The stochastic bandit problem can be seen as a special case of the linear bandit problem when the set of available arms at each round is the standard basis ei for the Euclidean space R, i.e. the vector ei is a vector with all 0s except for a 1 in the ith coordinate. As a result each arm is independent of the others and the reward associated with each arm depends only on a single parameter as is the case in stochastic bandits. The underlying algorithmic approach to solve this problem uses the optimism in the face of uncertainty (OFU) principle. The OFU principle solves the exploration-exploitation tradeoff in the linear bandit problem by maintaining a confidence set for the vector of coefficients of the linear function that governs rewards. In each round the algorithm chooses an estimate of the coefficients of the linear function from the confidence set and then takes an action so that the predicted reward is maximized. The problem reduces to constructing confidence sets for the vector of coefficients of the linear function based on the action-reward pairs observed in the past time steps. The linear bandit problem was first studied by Auer et al. (2002) [1] under the name of linear reinforcement learning. Since the introduction of the problem, several works have improved the analysis and explored variants of the problem. The most influential works include Dani et al. (2008) [2], Rusmevichientong et al. (2010) [3], and Abbasi et al. (2011) [4]. In each of these works the set of available arms remains constant, but the set is only restricted to being a bounded subset of a finite-dimensional vector space. 
Variants of the problem formulation have also been widely applied to recommendation systems following the work of Li et al. (2010) [5] within the context of web advertisement. An important property of this problem is that the arms are not independent because future arm choices depend on the confidence sets constructed from past choices. In the literature, several works including [5] have failed to recognize this property, leading to faulty analysis. This fine detail requires special care which we explore in depth in Section 2.", "title": "" }, { "docid": "9bff76e87f4bfa3629e38621060050f7", "text": "Non-textual components such as charts, diagrams and tables provide key information in many scientific documents, but the lack of large labeled datasets has impeded the development of data-driven methods for scientific figure extraction. In this paper, we induce high-quality training labels for the task of figure extraction in a large number of scientific documents, with no human intervention. To accomplish this we leverage the auxiliary data provided in two large web collections of scientific documents (arXiv and PubMed) to locate figures and their associated captions in the rasterized PDF. We share the resulting dataset of over 5.5 million induced labels---4,000 times larger than the previous largest figure extraction dataset---with an average precision of 96.8%, to enable the development of modern data-driven methods for this task. We use this dataset to train a deep neural network for end-to-end figure detection, yielding a model that can be more easily extended to new domains compared to previous work. The model was successfully deployed in Semantic Scholar (https://www.semanticscholar.org/), a large-scale academic search engine, and used to extract figures in 13 million scientific documents. A demo of our system is available at http://labs.semanticscholar.org/deepfigures/, and our dataset of induced labels can be downloaded at https://s3-us-west-2.amazonaws.com/ai2-s2-research-public/deepfigures/jcdl-deepfigures-labels.tar.gz. Code to run our system locally can be found at https://github.com/allenai/deepfigures-open.", "title": "" }, { "docid": "babdf14e560236f5fcc8a827357514e5", "text": "The NP-hard (complete) team orienteering problem is a particular vehicle routing problem with the aim of maximizing the profits gained from visiting control points without exceeding a travel cost limit. The team orienteering problem has a number of applications in several fields such as athlete recruiting, technician routing and tourist trip. Therefore, solving optimally the team orienteering problem would play a major role in logistic management. In this study, a novel randomized population constructive heuristic is introduced. This heuristic constructs a diversified initial population for population-based metaheuristics. The heuristic proved its efficiency. Indeed, experiments conducted on the well-known benchmarks of the team orienteering problem show that the initial population constructed by the presented heuristic wraps the best-known solution for 131 benchmarks and good solutions for a great number of benchmarks.", "title": "" }, { "docid": "9a1e0edc4d5eb8a2cbf7fa0c6640f0bc", "text": "The classical SVM is an optimization problem minimizing the hinge losses of mis-classified samples with the regularization term. 
When the sample size is small or data has noise, it is possible that the classifier obtained with training data may not generalize well to population, since the samples may not accurately represent the true population distribution. We propose a distributionally-robust framework for Support Vector Machines (DR-SVMs). We build an ambiguity set for the population distribution based on samples using the Kantorovich metric. DR-SVMs search the classifier that minimizes the sum of regularization term and the hinge loss function for the worst-case population distribution among the ambiguity set. We provide semi-infinite programming formulation of the DR-SVMs and propose a cutting-plane algorithm to solve the problem. Computational results on simulated data and real data from University of California, Irvine Machine Learning Repository show that the DR-SVMs outperform the SVMs in terms of the Area Under Curve (AUC) measures on several test problems.", "title": "" }, { "docid": "154ab0cbc1dfa3c4bae8a846f800699e", "text": "This paper presents a new strategy for the active disturbance rejection control (ADRC) of a general uncertain system with unknown bounded disturbance based on a nonlinear sliding mode extended state observer (SMESO). Firstly, a nonlinear extended state observer is synthesized using sliding mode technique for a general uncertain system assuming asymptotic stability. Then the convergence characteristics of the estimation error are analyzed by Lyapunov strategy. It revealed that the proposed SMESO is asymptotically stable and accurately estimates the states of the system in addition to estimating the total disturbance. Then, an ADRC is implemented by using a nonlinear state error feedback (NLSEF) controller; that is suggested by J. Han and the proposed SMESO to control and actively reject the total disturbance of a permanent magnet DC (PMDC) motor. These disturbances caused by the unknown exogenous disturbances and the matched uncertainties of the controlled model. The proposed SMESO is compared with the linear extended state observer (LESO). Through digital simulations using MATLAB / SIMULINK, the chattering phenomenon has been reduced dramatically on the control input channel compared to LESO. Finally, the closed-loop system exhibits a high immunity to torque disturbance and quite robustness to matched uncertainties in the system. Keywords—extended state observer; sliding mode; rejection control; tracking differentiator; DC motor; nonlinear state feedback", "title": "" }, { "docid": "a9acc36ae78a12fbf19e8590e931e6f8", "text": "Deep learning models are susceptible to input specific noise, called adversarial perturbations. Moreover, there exist input-agnostic noise, called Universal Adversarial Perturbations (UAP) that can affect inference of the models over most input samples. Given a model, there exist broadly two approaches to craft UAPs: (i) data-driven: that require data, and (ii) data-free: that do not require data samples. Data-driven approaches require actual samples from the underlying data distribution and craft UAPs with high success (fooling) rate. However, data-free approaches craft UAPs without utilizing any data samples and therefore result in lesser success rates. In this paper, for data-free scenarios, we propose a novel approach that emulates the effect of data samples with class impressions in order to craft UAPs using data-driven objectives. 
Class impression for a given pair of category and model is a generic representation (in the input space) of the samples belonging to that category. Further, we present a neural network based generative model that utilizes the acquired class impressions to learn crafting UAPs. Experimental evaluation demonstrates that the learned generative model, (i) readily crafts UAPs via simple feed-forwarding through neural network layers, and (ii) achieves state-of-the-art success rates for data-free scenario and closer to that for data-driven setting without actually utilizing any data samples.", "title": "" }, { "docid": "665da3a85a548d12864de5fad517e3ee", "text": "To characterize the neural correlates of being personally involved in social interaction as opposed to being a passive observer of social interaction between others we performed an fMRI study in which participants were gazed at by virtual characters (ME) or observed them looking at someone else (OTHER). In dynamic animations virtual characters then showed socially relevant facial expressions as they would appear in greeting and approach situations (SOC) or arbitrary facial movements (ARB). Differential neural activity associated with ME>OTHER was located in anterior medial prefrontal cortex in contrast to the precuneus for OTHER>ME. Perception of socially relevant facial expressions (SOC>ARB) led to differentially increased neural activity in ventral medial prefrontal cortex. Perception of arbitrary facial movements (ARB>SOC) differentially activated the middle temporal gyrus. The results, thus, show that activation of medial prefrontal cortex underlies both the perception of social communication indicated by facial expressions and the feeling of personal involvement indicated by eye gaze. Our data also demonstrate that distinct regions of medial prefrontal cortex contribute differentially to social cognition: whereas the ventral medial prefrontal cortex is recruited during the analysis of social content as accessible in interactionally relevant mimic gestures, differential activation of a more dorsal part of medial prefrontal cortex subserves the detection of self-relevance and may thus establish an intersubjective context in which communicative signals are evaluated.", "title": "" }, { "docid": "a423435c1dc21c33b93a262fa175f5c5", "text": "The study investigated several teacher characteristics, with a focus on two measures of teaching experience, and their association with second grade student achievement gains in low performing, high poverty schools in a Mid-Atlantic state. Value-added models using three-level hierarchical linear modeling were used to analyze the data from 1,544 students, 154 teachers, and 53 schools. Results indicated that traditional teacher qualification characteristics such as licensing status and educational attainment were not statistically significant in producing student achievement gains. Total years of teaching experience was also not a significant predictor but a more specific measure, years of teaching experience at a particular grade level, was significantly associated with increased student reading achievement. We caution researchers and policymakers when interpreting results from studies that have used only a general measure of teacher experience as effects are possibly underestimated. Policy implications are discussed.", "title": "" }, { "docid": "91c7a22694ec8ae4d8ca5ad3147fb11e", "text": "The binary-weight CNN is one of the most efficient solutions for mobile CNNs. 
However, a large number of operations are required to process each image. To reduce such a huge operation count, we propose an energy-efficient kernel decomposition architecture, based on the observation that a large number of operations are redundant. In this scheme, all kernels are decomposed into sub-kernels to expose the common parts. By skipping the redundant computations, the operation count for each image was consequently reduced by 47.7%. Furthermore, a low cost bit-width quantization technique was implemented by exploiting the relative scales of the feature data. Experimental results showed that the proposed architecture achieves a 22% energy reduction.", "title": "" }, { "docid": "1b2c561b6aea994ef50b713f0b5286a1", "text": "This paper presents a novel system architecture applicable to high-performance and flexible transport data processing which includes complex protocol operation and a network control algorithm. We developed a new tightly coupled Field Programmable Gate Array (FPGA) and Micro-Processing Unit (MPU) system named Yet Another Re-Definable System (YARDS). It comprises three programmable devices which equate to high flexibility. These devices are the RISC-type MPU with memories, programmable inter-connection devices, and FPGAs. Using these, this system supports various styles of coupling between the FPGAs and the MPU which are suitable for constructing transport data processing. In this paper, two applications of the system in the telecommunications field are given. One is Operation, Administration, and Management (OAM) cell operations on an Asynchronous Transfer Mode (ATM) network. The other is a dynamic configuration protocol that enables the update or change of the functions of the transport data processing system on-line. This is the first approach applying the FPGA/MPU hybrid system to the telecommunications field.", "title": "" } ]
scidocsrr
2119d665534a15b04e49f996db25ac47
The contribution of attentional bias to worry: Distinguishing the roles of selective engagement and disengagement
[ { "docid": "1c7131fcb031497b2c1487f9b25d8d4e", "text": "Biases in information processing undoubtedly play an important role in the maintenance of emotion and emotional disorders. In an attentional cueing paradigm, threat words and angry faces had no advantage over positive or neutral words (or faces) in attracting attention to their own location, even for people who were highly state-anxious. In contrast, the presence of threatening cues (words and faces) had a strong impact on the disengagement of attention. When a threat cue was presented and a target subsequently presented in another location, high state-anxious individuals took longer to detect the target relative to when either a positive or a neutral cue was presented. It is concluded that threat-related stimuli affect attentional dwell time and the disengage component of attention, leaving the question of whether threat stimuli affect the shift component of attention open to debate.", "title": "" } ]
[ { "docid": "42452d6df7372cdc9c2cdebd8f0475cb", "text": "This paper presents SgxPectre Attacks that exploit the recently disclosed CPU bugs to subvert the confidentiality and integrity of SGX enclaves. Particularly, we show that when branch prediction of the enclave code can be influenced by programs outside the enclave, the control flow of the enclave program can be temporarily altered to execute instructions that lead to observable cache-state changes. An adversary observing such changes can learn secrets inside the enclave memory or its internal registers, thus completely defeating the confidentiality guarantee offered by SGX. To demonstrate the practicality of our SgxPectre Attacks, we have systematically explored the possible attack vectors of branch target injection, approaches to win the race condition during enclave’s speculative execution, and techniques to automatically search for code patterns required for launching the attacks. Our study suggests that any enclave program could be vulnerable to SgxPectre Attacks since the desired code patterns are available in most SGX runtimes (e.g., Intel SGX SDK, Rust-SGX, and Graphene-SGX). Most importantly, we have applied SgxPectre Attacks to steal seal keys and attestation keys from Intel signed quoting enclaves. The seal key can be used to decrypt sealed storage outside the enclaves and forge valid sealed data; the attestation key can be used to forge attestation signatures. For these reasons, SgxPectre Attacks practically defeat SGX’s security protection. This paper also systematically evaluates Intel’s existing countermeasures against SgxPectre Attacks and discusses the security implications.", "title": "" }, { "docid": "7c2c987c2fc8ea0b18d8361072fa4e31", "text": "Information Retrieval (IR) and Answer Extraction are often designed as isolated or loosely connected components in Question Answering (QA), with repeated overengineering on IR, and not necessarily performance gain for QA. We propose to tightly integrate them by coupling automatically learned features for answer extraction to a shallow-structured IR model. Our method is very quick to implement, and significantly improves IR for QA (measured in Mean Average Precision and Mean Reciprocal Rank) by 10%-20% against an uncoupled retrieval baseline in both document and passage retrieval, which further leads to a downstream 20% improvement in QA F1.", "title": "" }, { "docid": "57261e77a6e8f6a0c984f5e199a71554", "text": "We present a software framework for simulating the HCF Controlled Channel Access (HCCA) in an IEEE 802.11e system. The proposed approach allows for flexible integration of different scheduling algorithms with the MAC. The 802.11e system consists of three modules: Classifier, HCCA Scheduler, MAC. We define a communication interface exported by the MAC module to the HCCA Scheduler. A Scheduler module implementing the reference scheduler defined in the draft IEEE 802.11e document is also described. The software framework reported in this paper has been implemented using the Network Simulator 2 platform. A preliminary performance analysis of the reference scheduler is also reported.", "title": "" }, { "docid": "fba0ff24acbe07e1204b5fe4c492ab72", "text": "To ensure high quality software, it is crucial that non‐functional requirements (NFRs) are well specified and thoroughly tested in parallel with functional requirements (FRs). Nevertheless, in requirement specification the focus is mainly on FRs, even though NFRs have a critical role in the success of software projects. 
This study presents a systematic literature review of the NFR specification in order to identify the current state of the art and needs for future research. The systematic review summarizes the 51 relevant papers found and discusses them within seven major sub categories with “combination of other approaches” being the one with most prior results.", "title": "" }, { "docid": "a7c2c2889b54a4f0e22b1cb09bbd8d6b", "text": "In this paper we present an efficient algorithm for multi-layer depth peeling via bucket sort of fragments on GPU, which makes it possible to capture up to 32 layers simultaneously with correct depth ordering in a single geometry pass. We exploit multiple render targets (MRT) as storage and construct a bucket array of size 32 per pixel. Each bucket is capable of holding only one fragment, and can be concurrently updated using the MAX/MIN blending operation. During the rasterization, the depth range of each pixel location is divided into consecutive subintervals uniformly, and a linear bucket sort is performed so that fragments within each subintervals will be routed into the corresponding buckets. In a following fullscreen shader pass, the bucket array can be sequentially accessed to get the sorted fragments for further applications. Collisions will happen when more than one fragment is routed to the same bucket, which can be alleviated by multi-pass approach. We also develop a two-pass approach to further reduce the collisions, namely adaptive bucket depth peeling. In the first geometry pass, the depth range is redivided into non-uniform subintervals according to the depth distribution to make sure that there is only one fragment within each subinterval. In the following bucket sorting pass, there will be only one fragment routed into each bucket and collisions will be substantially reduced. Our algorithm shows up to 32 times speedup to the classical depth peeling especially for large scenes with high depth complexity, and the experimental results are visually faithful to the ground truth. Also it has no requirement of pre-sorting geometries or post-sorting fragments, and is free of read-modify-write (RMW) hazards.", "title": "" }, { "docid": "af7736d4e796d3439613ed06ca4e4b72", "text": "The past few years have witnessed the fast development of different regularization methods for deep learning models such as fully-connected deep neural networks (DNNs) and Convolutional Neural Networks (CNNs). Most of previous methods mainly consider to drop features from input data and hidden layers, such as Dropout, Cutout and DropBlocks. DropConnect select to drop connections between fully-connected layers. By randomly discard some features or connections, the above mentioned methods control the overfitting problem and improve the performance of neural networks. In this paper, we proposed two novel regularization methods, namely DropFilter and DropFilter-PLUS, for the learning of CNNs. Different from the previous methods, DropFilter and DropFilter-PLUS selects to modify the convolution filters. For DropFilter-PLUS, we find a suitable way to accelerate the learning process based on theoretical analysis. Experimental results on MNISTshow that using DropFilter and DropFilter-PLUS may improve performance on image classification tasks.", "title": "" }, { "docid": "3ea9d312027505fb338a1119ff01d951", "text": "Many experiments provide evidence that practicing retrieval benefits retention relative to conditions of no retrieval practice. 
Nearly all prior research has employed retrieval practice requiring overt responses, but a few experiments have shown that covert retrieval also produces retention advantages relative to control conditions. However, direct comparisons between overt and covert retrieval are scarce: Does covert retrieval-thinking of but not producing responses-on a first test produce the same benefit as overt retrieval on a criterial test given later? We report 4 experiments that address this issue by comparing retention on a second test following overt or covert retrieval on a first test. In Experiment 1 we used a procedure designed to ensure that subjects would retrieve on covert as well as overt test trials and found equivalent testing effects in the 2 cases. In Experiment 2 we replicated these effects using a procedure that more closely mirrored natural retrieval processes. In Experiment 3 we showed that overt and covert retrieval produced equivalent testing effects after a 2-day delay. Finally, in Experiment 4 we showed that covert retrieval benefits retention more than restudying. We conclude that covert retrieval practice is as effective as overt retrieval practice, a conclusion that contravenes hypotheses in the literature proposing that overt responding is better. This outcome has an important educational implication: Students can learn as much from covert self-testing as they would from overt responding.", "title": "" }, { "docid": "7709df997c72026406d257c85dacb271", "text": "This paper addresses the task of document retrieval based on the degree of document relatedness to the meanings of a query by presenting a semantic-enabled language model. Our model relies on the use of semantic linking systems for forming a graph representation of documents and queries, where nodes represent concepts extracted from documents and edges represent semantic relatedness between concepts. Based on this graph, our model adopts a probabilistic reasoning model for calculating the conditional probability of a query concept given values assigned to document concepts. We present an integration framework for interpolating other retrieval systems with the presented model in this paper. Our empirical experiments on a number of TREC collections show that the semantic retrieval has a synergetic impact on the results obtained through state of the art keyword-based approaches, and the consideration of semantic information obtained from entity linking on queries and documents can complement and enhance the performance of other retrieval models.", "title": "" }, { "docid": "7d02f07418dc82b0645b6933a3fecfc0", "text": "This article is part of a For-Discussion-Section of Methods of Information in Medicine about the paper \"Evidence-based Health Informatics: How Do We Know What We Know?\" written by Elske Ammenwerth [1]. It is introduced by an editorial. This article contains the combined commentaries invited to independently comment on the Ammenwerth paper. In subsequent issues the discussion can continue through letters to the editor. With these comments on the paper \"Evidence-based Health Informatics: How do we know what we know?\", written by Elske Ammenwerth [1], the journal seeks to stimulate a broad discussion on the challenges of evaluating information processing and information technology in health care. An international group of experts has been invited by the editor of Methods to comment on this paper. 
Each of the invited commentaries forms one section of this paper.", "title": "" }, { "docid": "498eada57edb9120da164c5cb396198b", "text": "We propose a passive blackbox-based technique for determining the type of access point (AP) connected to a network. Essentially, a stimulant (i.e., packet train) that emulates normal data transmission is sent through the access point. Since access points from different vendors are architecturally heterogeneous (e.g., chipset, firmware, driver), each AP will act upon the packet train differently. By applying wavelet analysis to the resultant packet train, a distinct but reproducible pattern is extracted allowing a clear classification of different AP types. This has two important applications: (1) as a system administrator, this technique can be used to determine if a rogue access point has connected to the network; and (2) as an attacker, fingerprinting the access point is necessary to launch driver/firmware specific attacks. Extensive experiments were conducted (over 60GB of data was collected) to differentiate 6 APs. We show that this technique can classify APs with a high accuracy (in some cases, we can classify successfully 100% of the time) with as little as 100000 packets. Further, we illustrate that this technique is independent of the stimulant traffic type (e.g., TCP or UDP). Finally, we show that the AP profile is stable across multiple models of the same AP.", "title": "" }, { "docid": "dd1f8a5eae50d0a026387ba1b6695bef", "text": "Cloud computing is one of the significant development that utilizes progressive computational power and upgrades data distribution and data storing facilities. With cloud information services, it is essential for information to be saved in the cloud and also distributed across numerous customers. Cloud information repository is involved with issues of information integrity, data security and information access by unapproved users. Hence, an autonomous reviewing and auditing facility is necessary to guarantee that the information is effectively accommodated and used in the cloud. In this paper, a comprehensive survey on the state-of-art techniques in data auditing and security are discussed. Challenging problems in information repository auditing and security are presented. Finally, directions for future research in data auditing and security have been discussed.", "title": "" }, { "docid": "013c6f8931a8f9e0cff4fb291571e5bf", "text": "Herrmann-Pillath, Carsten, Libman, Alexander, and Yu, Xiaofan—Economic integration in China: Politics and culture The aim of the paper is to explicitly disentangle the role of political and cultural boundaries as factors of fragmentation of economies within large countries. On the one hand, local protectionism plays a substantial role in many federations and decentralized states. On the other hand, if the country exhibits high level of cultural heterogeneity, it may also contribute to the economic fragmentation; however, this topic has received significantly less attention in the literature. This paper looks at the case of China and proxies the cultural heterogeneity by the heterogeneity of local dialects. It shows that the effect of politics clearly dominates that of culture: while provincial borders seem to have a strong influence disrupting economic ties, economic linkages across provinces, even if the regions fall into the same linguistic zone, are rather weak and, on the contrary, linguistic differences within provinces do not prevent economic integration. 
For some language zones we do, however, find a stronger effect on economic integration. Journal of Comparative Economics 42 (2) (2014) 470–492. Frankfurt School of Finance and Management, Germany; Russian Academy of Sciences, Russia. 2013 Association for Comparative Economic Studies Published by Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "1eea111c3efcc67fcc1bb6f358622475", "text": "Methyl Cellosolve (the monomethyl ether of ethylene glycol) has been widely used as the organic solvent in ninhydrin reagents for amino acid analysis; it has, however, properties that are disadvantageous in a reagent for everyday employment. The solvent is toxic and it is difficult to keep the ether peroxide-free. A continuing effort to arrive at a chemically preferable and relatively nontoxic substitute for methyl Cellosolve has led to experiments with dimethyl sulfoxide, which proves to be a better solvent for the reduced form of ninhydrin (hydrindantin) than is methyl Cellosolve. Dimethyl sulfoxide can replace the latter, volume for volume, in a ninhydrin reagent mixture that gives equal performance and has improved stability. The result is a ninhydrin-hydrindantin solution in 75% dimethyl sulfoxide-25% 4 M lithium acetate buffer at pH 5.2. This type of mixture, with appropriate hydrindantin concentrations, is recommended to replace methyl Cellosolve-containing reagents in the quantitative determination of amino acids by automatic analyzers and by the manual ninhydrin method.", "title": "" }, { "docid": "11828571b57966958bd364947f41ad40", "text": "A smart city is developed, deployed and maintained with the help of Internet of Things (IoT). Smart cities have become an emerging phenomenon with rapid urban growth and the boost in the field of information technology. However, the function and operation of a smart city is subject to the pivotal development of security architectures. The contribution made in this paper is twofold. Firstly, it aims to provide a detailed, categorized and comprehensive overview of the research on security problems and their existing solutions for smart cities. The categorization is based on several factors such as governance, socioeconomic and technological factors. This classification provides an easy and concise view of the security threats, vulnerabilities and available solutions for the respective technology areas that are proposed over the period 2010-2015. Secondly, an IoT testbed for smart cities architecture, i.e., SmartSantander is also analyzed with respect to security threats and vulnerabilities to smart cities. The existing best practices regarding smart city security are discussed and analyzed with respect to their performance, which could be used by different stakeholders of the smart cities.", "title": "" }, { "docid": "02c8093183af96808a71b93ee3103996", "text": "The medical field stands to see significant benefits from the recent advances in deep learning. Knowing the uncertainty in the decision made by any machine learning algorithm is of utmost importance for medical practitioners. This study demonstrates the utility of using Bayesian LSTMs for classification of medical time series. Four medical time series datasets are used to show the accuracy improvement Bayesian LSTMs provide over standard LSTMs. Moreover, we show cherry-picked examples of confident and uncertain classifications of the medical time series. 
With simple modifications of the common practice for deep learning, significant improvements can be made for the medical practitioner and patient.", "title": "" }, { "docid": "812687a5291d786ecda102adda03700c", "text": "The overall goal is to show that conceptual spaces are more promising than other ways of modelling the semantics of natural language. In particular, I will show how they can be used to model actions and events. I will also outline how conceptual spaces provide a cognitive grounding for word classes, including nouns, adjectives, prepositions and verbs.", "title": "" }, { "docid": "e573d85271e3f3cc54b774de8a5c6dd9", "text": "This paper explores the use of a learned classifier for post-OCR text correction. Experiments with the Arabic language show that this approach, which integrates a weighted confusion matrix and a shallow language model, improves the vast majority of segmentation and recognition errors, the most frequent types of error on our dataset.", "title": "" }, { "docid": "e87c93e13f94191450216e308215ff38", "text": "Fair scheduling of delay and rate-sensitive packet flows over a wireless channel is not addressed effectively by most contemporary wireline fair scheduling algorithms because of two unique characteristics of wireless media: (a) bursty channel errors, and (b) location-dependent channel capacity and errors. Besides, in packet cellular networks, the base station typically performs the task of packet scheduling for both downlink and uplink flows in a cell; however a base station has only a limited knowledge of the arrival processes of uplink flows.In this paper, we propose a new model for wireless fair scheduling based on an adaptation of fluid fair queueing to handle location-dependent error bursts. We describe an ideal wireless fair scheduling algorithm which provides a packetized implementation of the fluid model while assuming full knowledge of the current channel conditions. For this algorithm, we derive the worst-case throughput and delay bounds. Finally, we describe a practical wireless scheduling algorithm which approximates the ideal algorithm. Through simulations, we show that the algorithm achieves the desirable properties identified in the wireless fluid fair queueing model.", "title": "" }, { "docid": "bb01b5e24d7472ab52079dcb8a65358d", "text": "There are plenty of classification methods that perform well when training and testing data are drawn from the same distribution. However, in real applications, this condition may be violated, which causes degradation of classification accuracy. Domain adaptation is an effective approach to address this problem. In this paper, we propose a general domain adaptation framework from the perspective of prediction reweighting, from which a novel approach is derived. Different from the major domain adaptation methods, our idea is to reweight predictions of the training classifier on testing data according to their signed distance to the domain separator, which is a classifier that distinguishes training data (from source domain) and testing data (from target domain). We then propagate the labels of target instances with larger weights to ones with smaller weights by introducing a manifold regularization method. It can be proved that our reweighting scheme effectively brings the source and target domains closer to each other in an appropriate sense, such that classification in target domain becomes easier. 
The proposed method can be implemented efficiently by a simple two-stage algorithm, and the target classifier has a closed-form solution. The effectiveness of our approach is verified by the experiments on artificial datasets and two standard benchmarks, a visual object recognition task and a cross-domain sentiment analysis of text. Experimental results demonstrate that our method is competitive with the state-of-the-art domain adaptation algorithms.", "title": "" } ]
scidocsrr
8d0adaf8a7dbc0c6df1cf178dbd2ef79
CMUcam3: An Open Programmable Embedded Vision Sensor
[ { "docid": "f267030a7ff5a8b4b87b9b5418ec3c28", "text": "Vision systems employing region segmentation by color are crucial in real-time mobile robot applications, such as RoboCup[1], or other domains where interaction with humans or a dynamic world is required. Traditionally, systems employing real-time color-based segmentation are either implemented in hardware, or as very specific software systems that take advantage of domain knowledge to attain the necessary efficiency. However, we have found that with careful attention to algorithm efficiency, fast color image segmentation can be accomplished using commodity image capture and CPU hardware. Our paper describes a system capable of tracking several hundred regions of up to 32 colors at 30 Hertz on general purpose commodity hardware. The software system is composed of four main parts; a novel implementation of a threshold classifier, a merging system to form regions through connected components, a separation and sorting system that gathers various region features, and a top down merging heuristic to approximate perceptual grouping. A key to the efficiency of our approach is a new method for accomplishing color space thresholding that enables a pixel to be classified into one or more of up to 32 colors using only two logical AND operations. A naive approach could require up to 192 comparisons for the same classification. The algorithms and representations are described, as well as descriptions of three applications in which it has been used.", "title": "" } ]
[ { "docid": "0c5f30cd0e072309b13cc6c43bb12647", "text": "In this paper, we compare the performance of different approaches to predicting delays in air traffic networks. We consider three classes of models: A recently-developed aggregate model of the delay network dynamics, which we will refer to as the Markov Jump Linear System (MJLS), classical machine learning techniques like Classification and Regression Trees (CART), and three candidate Artificial Neural Network (ANN) architectures. We show that prediction performance can vary significantly depending on the choice of model/algorithm, and the type of prediction (for example, classification vs. regression). We also discuss the importance of selecting the right predictor variables, or features, in order to improve the performance of these algorithms. The models are evaluated using operational data from the National Airspace System (NAS) of the United States. The ANN is shown to be a good algorithm for the classification problem, where it attains an average accuracy of nearly 94% in predicting whether or not delays on the 100 most-delayed links will exceed 60 min, looking two hours into the future. The MJLS model, however, is better at predicting the actual delay levels on different links, and has a mean prediction error of 4.7 min for the regression problem, for a 2 hr horizon. MJLS is also better at predicting outbound delays at the 30 major airports, with a mean error of 6.8 min, for a 2 hr prediction horizon. The effect of temporal factors, and the spatial distribution of current delays, in predicting future delays are also compared. The MJLS model, which is specifically designed to capture aggregate air traffic dynamics, leverages on these factors and outperforms the ANN in predicting the future spatial distribution of delays. In this manner, a tradeoff between model simplicity and prediction accuracy is revealed. Keywordsdelay prediction; network delays; machine learning; artificial neural networks; data mining", "title": "" }, { "docid": "99b1c2f0b3e3deb86ce25d2368a8dd86", "text": "We provide concrete evidence that floating-point computations in C programs can be verified in a homogeneous verification setting based on Coq only, by evaluating the practicality of the combination of the formal semantics of CompCert Clight and the Flocq formal specification of IEEE 754 floating-point arithmetic for the verification of properties of floating-point computations in C programs. To this end, we develop a framework to automatically compute real-number expressions of C floating-point computations with rounding error terms along with their correctness proofs. We apply our framework to the complete analysis of an energy-efficient C implementation of a radar image processing algorithm, for which we provide a certified bound on the total noise introduced by floating-point rounding errors and energy-efficient approximations of square root and sine.", "title": "" }, { "docid": "7c98ac06ea8cb9b83673a9c300fb6f4c", "text": "Heart rate monitoring from wrist-type photoplethysmographic (PPG) signals during subjects' intensive exercise is a difficult problem, since the PPG signals are contaminated by extremely strong motion artifacts caused by subjects' hand movements. In this work, we formulate the heart rate estimation problem as a sparse signal recovery problem, and use a sparse signal recovery algorithm to calculate high-resolution power spectra of PPG signals, from which heart rates are estimated by selecting corresponding spectrum peaks. 
To facilitate the use of sparse signal recovery, we propose using bandpass filtering, singular spectrum analysis, and temporal difference operation to partially remove motion artifacts and sparsify PPG spectra. The proposed method was tested on PPG recordings from 10 subjects who were fast running at the peak speed of 15km/hour. The results showed that the averaged absolute estimation error was only 2.56 Beats/Minute, or 1.94% error compared to ground-truth heart rates from simultaneously recorded ECG.", "title": "" }, { "docid": "259c17740acd554463731d3e1e2912eb", "text": "In recent years, radio frequency identification technology has moved from obscurity into mainstream applications that help speed the handling of manufactured goods and materials. RFID enables identification from a distance, and unlike earlier bar-code technology, it does so without requiring a line of sight. In this paper, the author introduces the principles of RFID, discusses its primary technologies and applications, and reviews the challenges organizations will face in deploying this technology.", "title": "" }, { "docid": "6c47ae47e95641f10bd3b1a0a9b0dbb6", "text": "Type 2 diabetes mellitus and impaired glucose tolerance are associated with antipsychotic treatment. Risk factors for type 2 diabetes and impaired glucose tolerance include abdominal adiposity, age, ethnic status, and certain neuropsychiatric conditions. While impaired glucose metabolism was first described in psychotic patients prior to the introduction of antipsychotic medications, treatment with antipsychotic medications is associated with impaired glucose metabolism, exacerbation of existing type 1 and 2 diabetes, new-onset type 2 diabetes mellitus, and diabetic ketoacidosis, a severe and potentially fatal metabolic complication. The strength of the association between antipsychotics and diabetes varies across individual medications, with the largest number of reports for chlorpromazine, clozapine, and olanzapine. Recent controlled studies suggest that antipsychotics can impair glucose regulation by decreasing insulin action, although effects on insulin secretion are not ruled out. Antipsychotic medications induce weight gain, and the potential for weight gain varies across individual agents with larger effects observed again for agents like chlorpromazine, clozapine, and olanzapine. Increased abdominal adiposity may explain some treatment-related changes in glucose metabolism. However, case reports and recent controlled studies suggest that clozapine and olanzapine treatment may also be associated with adverse effects on glucose metabolism independent of adiposity. Dyslipidemia is a feature of type 2 diabetes, and antipsychotics such as clozapine and olanzapine have also been associated with hypertriglyceridemia, with agents such as haloperidol, risperidone, and ziprasidone associated with reductions in plasma triglycerides. Diabetes mellitus is associated with increased morbidity and mortality due to both acute (e.g., diabetic ketoacidosis) and long-term (e.g., cardiovascular disease) complications. A progressive relationship between plasma glucose levels and cardiovascular risk (e.g., myocardial infarction, stroke) begins at glucose levels that are well below diabetic or \"impaired\" thresholds. Increased adiposity and dyslipidemia are additional, independent risk factors for cardiovascular morbidity and mortality. 
Patients with schizophrenia suffer increased mortality due to cardiovascular disease, with presumed contributions from a number of modifiable risk factors (e.g., smoking, sedentary lifestyle, poor diet, obesity, hyperglycemia, and dyslipidemia). Patients taking antipsychotic medications should undergo regular monitoring of weight and plasma glucose and lipid levels, so that clinicians can individualize treatment decisions and reduce iatrogenic contributions to morbidity and mortality.", "title": "" }, { "docid": "543099ac1bb00e14f4fc757a25d9487c", "text": "With the development of personalized services, collaborative filtering techniques have been successfully applied to the network recommendation system. But sparse data seriously affect the performance of collaborative filtering algorithms. To alleviate the impact of data sparseness, using user interest information, an improved user-based clustering Collaborative Filtering (CF) algorithm is proposed in this paper, which improves the algorithm in two ways: the user similarity calculation method and an extended user-item rating matrix. The experimental results show that the algorithm can describe the user similarity more accurately and alleviate the impact of data sparseness in the collaborative filtering algorithm. The results also show that it can improve the accuracy of the collaborative recommendation algorithm.", "title": "" }, { "docid": "5c898e311680199f1f369d3c264b2b14", "text": "Behaviour Driven Development (BDD) has gained increasing attention as an agile development approach in recent years. However, characteristics that constitute the BDD approach are not clearly defined. In this paper, we present a set of main BDD characteristics identified through an analysis of relevant literature and current BDD toolkits. Our study can provide a basis for understanding BDD, as well as for extending the existing BDD toolkits or developing new ones.", "title": "" }, { "docid": "20db149230db9df2a30f5cd788db1d89", "text": "IP flows have heavy-tailed packet and byte size distributions. This makes them poor candidates for uniform sampling---i.e. selecting 1 in N flows---since omission or inclusion of a large flow can have a large effect on estimated total traffic. Flows selected in this manner are thus unsuitable for use in usage sensitive billing. We propose instead using a size-dependent sampling scheme which gives priority to the larger contributions to customer usage. This turns the heavy tails to our advantage; we can obtain accurate estimates of customer usage from a relatively small number of important samples. The sampling scheme allows us to control error when charging is sensitive to estimated usage only above a given base level. A refinement allows us to strictly limit the chance that a customer's estimated usage will exceed their actual usage. Furthermore, we show that a secondary goal, that of controlling the rate at which samples are produced, can be fulfilled provided the billing cycle is sufficiently long. All these claims are supported by experiments on flow traces gathered from a commercial network.", "title": "" }, { "docid": "4243f0bafe669ab862aaad2b184c6a0e", "text": "Generating adversarial examples is an intriguing problem and an important way of understanding the working mechanism of deep neural networks. Most existing approaches generated perturbations in the image space, i.e., each pixel can be modified independently. 
However, in this paper we pay special attention to the subset of adversarial examples that are physically authentic – those corresponding to actual changes in 3D physical properties (like surface normals, illumination condition, etc.). These adversaries arguably pose a more serious concern, as they demonstrate the possibility of causing neural network failure by small perturbations of real-world 3D objects and scenes. In the contexts of object classification and visual question answering, we augment state-of-the-art deep neural networks that receive 2D input images with a rendering module (either differentiable or not) in front, so that a 3D scene (in the physical space) is rendered into a 2D image (in the image space), and then mapped to a prediction (in the output space). The adversarial perturbations can now go beyond the image space, and have clear meanings in the 3D physical world. Through extensive experiments, we found that a vast majority of image-space adversaries cannot be explained by adjusting parameters in the physical space, i.e., they are usually physically inauthentic. But it is still possible to successfully attack beyond the image space on the physical space (such that authenticity is enforced), though this is more difficult than image-space attacks, reflected in lower success rates and heavier perturbations required.", "title": "" }, { "docid": "ee351931c35e5dd1ebe7d528568df394", "text": "We present an automatic method for fitting multiple B-spline curves to unorganized planar points. The method works on point clouds which have complicated topological structures and for which a single curve is insufficient for fitting the shape. A divide-and-merge algorithm is developed for dividing the unorganized data points into several groups while each group represents a smooth curve. Each point group is then fitted with a B-spline curve by the SDM method. Our algorithm also sets up automatically the control polygon of initial B-spline curves. Experiments demonstrate the capability of the presented algorithm in accurate reconstruction of topological structures of point clouds.", "title": "" }, { "docid": "9cf8a2f73a906f7dc22c2d4fbcf8fa6b", "text": "In this paper the effect of spoilers on the aerodynamic characteristics of an airfoil was observed by CFD. NACA 2415 was chosen as the experimental airfoil, and the spoiler was extended from five different positions based on the chord length C. The airfoil section is designed with a spoiler extended at an angle of 7 degrees with the horizontal. The spoiler extends to 0.15C. The geometry of the 2-D airfoil without and with the spoiler was designed in GAMBIT. The numerical simulation was performed in ANSYS Fluent to observe the effect of spoiler position on the aerodynamic characteristics of this particular airfoil. The results obtained from the computational process were plotted on graphs, and the conceptual assumptions were verified: the lift is reduced and the drag is increased, which obeys the basic function of a spoiler. I. INTRODUCTION An airplane wing has a special shape called an airfoil. As a wing moves through air, the air is split and passes above and below the wing. The wing's upper surface is shaped so the air rushing over the top speeds up and stretches out. This decreases the air pressure above the wing. The air flowing below the wing moves in a straighter line, so its speed and air pressure remain the same. 
Since high air pressure always moves toward low air pressure, the air below the wing pushes upward toward the air above the wing. The wing is in the middle, and the whole wing is ―lifted‖. The faster an airplane moves, the more lift there is and when the force of lift is greater than the force of gravity, the airplane is able to fly. [1] A spoiler, sometimes called a lift dumper is a device intended to reduce lift in an aircraft. Spoilers are plates on the top surface of a wing which can be extended upward into the airflow and spoil it. By doing so, the spoiler creates a carefully controlled stall over the portion of the wing behind it, greatly reducing the lift of that wing section. Spoilers are designed to reduce lift also making considerable increase in drag. Spoilers increase drag and reduce lift on the wing. If raised on only one wing, they aid roll control, causing that wing to drop. If the spoilers rise symmetrically in flight, the aircraft can either be slowed in level flight or can descend rapidly without an increase in airspeed. When the …", "title": "" }, { "docid": "dd867c3f55696bebea3d9049a3d43163", "text": "This paper examines the task of detecting intensity of emotion from text. We create the first datasets of tweets annotated for anger, fear, joy, and sadness intensities. We use a technique called best–worst scaling (BWS) that improves annotation consistency and obtains reliable fine-grained scores. We show that emotion-word hashtags often impact emotion intensity, usually conveying a more intense emotion. Finally, we create a benchmark regression system and conduct experiments to determine: which features are useful for detecting emotion intensity; and, the extent to which two emotions are similar in terms of how they manifest in language.", "title": "" }, { "docid": "79ea2c1566b3bb1e27fe715b1a1a385b", "text": "The number of research papers available is growing at a staggering rate. Researchers need tools to help them find the papers they should read among all the papers published each year. In this paper, we present and experiment with hybrid recommender algorithms that combine Collaborative Filtering and Content-based. Filtering to recommend research papers to users. Our hybrid algorithms combine the strengths of each filtering approach to address their individual weaknesses. We evaluated our algorithms through offline experiments on a database of 102, 000 research papers, and through an online experiment with 110 users. For both experiments we used a dataset created from the CiteSeer repository of computer science research papers. We developed separate English and Portuguese versions of the interface and specifically recruited American and Brazilian users to test for cross-cultural effects. Our results show that users value paper recommendations, that the hybrid algorithms can be successfully combined, that different algorithms are more suitable for recommending different kinds of papers, and that users with different levels of experience perceive recommendations differently These results can be applied to develop recommender systems for other types of digital libraries.", "title": "" }, { "docid": "b3bda9c0a0ec22c5d244f8c538ab6056", "text": "Knowledge assets represent a special set of resources for a firm and as such, their management is of great importance to academics and managers. 
The purpose of this paper is to review the literature as it pertains to knowledge assets and provide a suggested model for intellectual capital management that can be of benefit to both academics and practitioners. In doing so, a set of research propositions are suggested to provide guidance for future research.", "title": "" }, { "docid": "da9751e8db176942da1c582908942ce3", "text": "This paper introduces new types of square-piece jigsaw puzzles: those for which the orientation of each jigsaw piece is unknown. We propose a tree-based reassembly that greedily merges components while respecting the geometric constraints of the puzzle problem. The algorithm has state-of-the-art performance for puzzle assembly, whether or not the orientation of the pieces is known. Our algorithm makes fewer assumptions than past work, and success is shown even when pieces from multiple puzzles are mixed together. For solving puzzles where jigsaw piece location is known but orientation is unknown, we propose a pairwise MRF where each node represents a jigsaw piece's orientation. Other contributions of the paper include an improved measure (MGC) for quantifying the compatibility of potential jigsaw piece matches based on expecting smoothness in gradient distributions across boundaries.", "title": "" }, { "docid": "2a45cf0fcf67ca51db59317663d874b9", "text": "Anoctamin 1 (ANO1), a calcium-activated chloride channel, is highly amplified in prostate cancer, the most common form of cancer and leading causes of cancer death in men, and downregulation of ANO1 expression or its functional activity is known to inhibit cell proliferation, migration and invasion in prostate cancer cells. Here, we performed a cell-based screening for the identification of ANO1 inhibitors as potential anticancer therapeutic agents for prostate cancer. Screening of ~300 selected bioactive natural products revealed that luteolin is a novel potent inhibitor of ANO1. Electrophysiological studies indicated that luteolin potently inhibited ANO1 chloride channel activity in a dose-dependent manner with an IC50 value of 9.8 μM and luteolin did not alter intracellular calcium signaling in PC-3 prostate cancer cells. Luteolin inhibited cell proliferation and migration of PC-3 cells expressing high levels of ANO1 more potently than that of ANO1-deficient PC-3 cells. Notably, luteolin not only inhibited ANO1 channel activity, but also strongly decreased protein expression levels of ANO1. Our results suggest that downregulation of ANO1 by luteolin is a potential mechanism for the anticancer effect of luteolin.", "title": "" }, { "docid": "fd531eeed23d5cdde6d6751b37569474", "text": "Paraphrases play an important role in the variety and complexity of natural language documents. However they adds to the difficulty of natural language processing. Here we describe a procedure for obtaining paraphrases from news article. A set of paraphrases can be useful for various kinds of applications. Articles derived from different newspapers can contain paraphrases if they report the same event of the same day. We exploit this feature by using Named Entity recognition. Our basic approach is based on the assumption that Named Entities are preserved across paraphrases. We applied our method to articles of two domains and obtained notable examples. 
Although this is our initial attempt to automatically extracting paraphrases from a corpus, the results are promising.", "title": "" }, { "docid": "b40b97410d0cd086118f0980d0f52867", "text": "In smart cities, commuters have the opportunities for smart routing that may enable selecting a route with less car accidents, or one that is more scenic, or perhaps a straight and flat route. Such smart personalization requires a data management framework that goes beyond a static road network graph. This paper introduces PreGo, a novel system developed to provide real time personalized routing. The recommended routes by PreGo are smart and personalized in the sense of being (1) adjustable to individual users preferences, (2) subjective to the trip start time, and (3) sensitive to changes of the road conditions. Extensive experimental evaluation using real and synthetic data demonstrates the efficiency of the PreGo system.", "title": "" }, { "docid": "cb8dbf14b79edd2a3ee045ad08230a30", "text": "Observational data suggest a link between menaquinone (MK, vitamin K2) intake and cardiovascular (CV) health. However, MK intervention trials with vascular endpoints are lacking. We investigated long-term effects of MK-7 (180 µg MenaQ7/day) supplementation on arterial stiffness in a double-blind, placebo-controlled trial. Healthy postmenopausal women (n=244) received either placebo (n=124) or MK-7 (n=120) for three years. Indices of local carotid stiffness (intima-media thickness IMT, Diameter end-diastole and Distension) were measured by echotracking. Regional aortic stiffness (carotid-femoral and carotid-radial Pulse Wave Velocity, cfPWV and crPWV, respectively) was measured using mechanotransducers. Circulating desphospho-uncarboxylated matrix Gla-protein (dp-ucMGP) as well as acute phase markers Interleukin-6 (IL-6), high-sensitive C-reactive protein (hsCRP), tumour necrosis factor-α (TNF-α) and markers for endothelial dysfunction Vascular Cell Adhesion Molecule (VCAM), E-selectin, and Advanced Glycation Endproducts (AGEs) were measured. At baseline dp-ucMGP was associated with IMT, Diameter, cfPWV and with the mean z-scores of acute phase markers (APMscore) and of markers for endothelial dysfunction (EDFscore). After three year MK-7 supplementation cfPWV and the Stiffness Index βsignificantly decreased in the total group, whereas distension, compliance, distensibility, Young's Modulus, and the local carotid PWV (cPWV) improved in women having a baseline Stiffness Index β above the median of 10.8. MK-7 decreased dp-ucMGP by 50 % compared to placebo, but did not influence the markers for acute phase and endothelial dysfunction. In conclusion, long-term use of MK-7 supplements improves arterial stiffness in healthy postmenopausal women, especially in women having a high arterial stiffness.", "title": "" } ]
scidocsrr
fed9694336c6085ed06a590e0c821402
New Simple-Structured AC Solid-State Circuit Breaker
[ { "docid": "6af7f70f0c9b752d3dbbe701cb9ede2a", "text": "This paper addresses real and reactive power management strategies of electronically interfaced distributed generation (DG) units in the context of a multiple-DG microgrid system. The emphasis is primarily on electronically interfaced DG (EI-DG) units. DG controls and power management strategies are based on locally measured signals without communications. Based on the reactive power controls adopted, three power management strategies are identified and investigated. These strategies are based on 1) voltage-droop characteristic, 2) voltage regulation, and 3) load reactive power compensation. The real power of each DG unit is controlled based on a frequency-droop characteristic and a complimentary frequency restoration strategy. A systematic approach to develop a small-signal dynamic model of a multiple-DG microgrid, including real and reactive power management strategies, is also presented. The microgrid eigen structure, based on the developed model, is used to 1) investigate the microgrid dynamic behavior, 2) select control parameters of DG units, and 3) incorporate power management strategies in the DG controllers. The model is also used to investigate sensitivity of the design to changes of parameters and operating point and to optimize performance of the microgrid system. The results are used to discuss applications of the proposed power management strategies under various microgrid operating conditions", "title": "" }, { "docid": "d8255047dc2e28707d711f6d6ff19e30", "text": "This paper discusses the design of a 10 kV and 200 A hybrid dc circuit breaker suitable for the protection of the dc power systems in electric ships. The proposed hybrid dc circuit breaker employs a Thompson coil based ultrafast mechanical switch (MS) with the assistance of two additional solid-state power devices. A low-voltage (80 V) metal–oxide–semiconductor field-effect transistors (MOSFETs)-based commutating switch (CS) is series connected with the MS to realize the zero current turn-OFF of the MS. In this way, the arcing issue with the MS is avoided. A 15 kV SiC emitter turn-OFF thyristor-based main breaker (MB) is parallel connected with the MS and CS branch to interrupt the fault current. A stack of MOVs parallel with the MB are used to clamp the voltage across the hybrid dc circuit breaker during interruption. This paper focuses on the electronic parts of the hybrid dc circuit breaker, and a companion paper will elucidate the principle and operation of the fast acting MS and the overall operation of the hybrid dc circuit breaker. The selection and design of both the high-voltage and low-voltage electronic components in the hybrid dc circuit breaker are presented in this paper. The turn-OFF capability of the MB with and without snubber circuit is experimentally tested, validating its suitability for the hybrid dc circuit breaker application. The CSs’ conduction performances are tested up to 200 A, and its current commutating during fault current interruption is also analyzed. Finally, the hybrid dc circuit breaker demonstrated a fast current interruption within 2 ms at 7 kV and 100 A.", "title": "" } ]
[ { "docid": "d38f9ef3248bf54b7a073beaa186ad42", "text": "Tracking-by-detection methods have demonstrated competitive performance in recent years. In these approaches, the tracking model heavily relies on the quality of the training set. Due to the limited amount of labeled training data, additional samples need to be extracted and labeled by the tracker itself. This often leads to the inclusion of corrupted training samples, due to occlusions, misalignments and other perturbations. Existing tracking-by-detection methods either ignore this problem, or employ a separate component for managing the training set. We propose a novel generic approach for alleviating the problem of corrupted training samples in tracking-by-detection frameworks. Our approach dynamically manages the training set by estimating the quality of the samples. Contrary to existing approaches, we propose a unified formulation by minimizing a single loss over both the target appearance model and the sample quality weights. The joint formulation enables corrupted samples to be downweighted while increasing the impact of correct ones. Experiments are performed on three benchmarks: OTB-2015 with 100 videos, VOT-2015 with 60 videos, and Temple-Color with 128 videos. On the OTB-2015, our unified formulation significantly improves the baseline, with a gain of 3:8% in mean overlap precision. Finally, our method achieves state-of-the-art results on all three datasets.", "title": "" }, { "docid": "99ba1fd6c96dad6d165c4149ac2ce27a", "text": "In order to solve unsupervised domain adaptation problem, recent methods focus on the use of adversarial learning to learn the common representation among domains. Although many designs are proposed, they seem to ignore the negative influence of domain-specific characteristics in transferring process. Besides, they also tend to obliterate these characteristics when extracted, although they are useful for other tasks and somehow help preserve the data. Take into account these issues, in this paper, we want to design a novel domainadaptation architecture which disentangles learned features into multiple parts to answer the questions: what features to transfer across domains and what to preserve within domains for other tasks. Towards this, besides jointly matching domain distributions in both image-level and feature-level, we offer new idea on feature exchange across domains combining with a novel feed-back loss and a semantic consistency loss to not only enhance the transferability of learned common feature but also preserve data and semantic information during exchange process. By performing domain adaptation on two standard digit datasets – MNIST and USPS, we show that our architecture can solve not only the full transfer problem but also partial transfer problem efficiently. The translated image results also demonstrate the potential of our architecture in image style transfer application.", "title": "" }, { "docid": "04d7b3e3584d89d5a3bc5c22c3fd1438", "text": "With the widespread use of information technologies, information networks are becoming increasingly popular to capture complex relationships across various disciplines, such as social networks, citation networks, telecommunication networks, and biological networks. Analyzing these networks sheds light on different aspects of social life such as the structure of societies, information diffusion, and communication patterns. 
In reality, however, the large scale of information networks often makes network analytic tasks computationally expensive or intractable. Network representation learning has been recently proposed as a new learning paradigm to embed network vertices into a low-dimensional vector space, by preserving network topology structure, vertex content, and other side information. This facilitates the original network to be easily handled in the new vector space for further analysis. In this survey, we perform a comprehensive review of the current literature on network representation learning in the data mining and machine learning field. We propose new taxonomies to categorize and summarize the state-of-the-art network representation learning techniques according to the underlying learning mechanisms, the network information intended to preserve, as well as the algorithmic designs and methodologies. We summarize evaluation protocols used for validating network representation learning including published benchmark datasets, evaluation methods, and open source algorithms. We also perform empirical studies to compare the performance of representative algorithms on common datasets, and analyze their computational complexity. Finally, we suggest promising research directions to facilitate future study.", "title": "" }, { "docid": "0742314b8099dce0eadaa12f96579209", "text": "Smart utility network (SUN) communications are an essential part of the smart grid. Major vendors realized the importance of universal standards and participated in the IEEE802.15.4g standardization effort. Due to the fact that many vendors already have proprietary solutions deployed in the field, the standardization effort was a challenge, but after three years of hard work, the IEEE802.15.4g standard published on April 28th, 2012. The publication of this standard is a first step towards establishing common and consistent communication specifications for utilities deploying smart grid technologies. This paper summaries the technical essence of the standard and how it can be used in smart utility networks.", "title": "" }, { "docid": "38d7107de35f3907c0e42b111883613e", "text": "On-line social networks have become a massive communication and information channel for users world-wide. In particular, the microblogging platform Twitter, is characterized by short-text message exchanges at extremely high rates. In this type of scenario, the detection of emerging topics in text streams becomes an important research area, essential for identifying relevant new conversation topics, such as breaking news and trends. Although emerging topic detection in text is a well established research area, its application to large volumes of streaming text data is quite novel. Making scalability, efficiency and rapidness, the key aspects for any emerging topic detection algorithm in this type of environment.\n Our research addresses the aforementioned problem by focusing on detecting significant and unusual bursts in keyword arrival rates or bursty keywords. We propose a scalable and fast on-line method that uses normalized individual frequency signals per term and a windowing variation technique. This method reports keyword bursts which can be composed of single or multiple terms, ranked according to their importance. The average complexity of our method is O(n log n), where n is the number of messages in the time window. This complexity allows our approach to be scalable for large streaming datasets. 
If bursts are only detected and not ranked, the algorithm remains with lineal complexity O(n), making it the fastest in comparison to the current state-of-the-art. We validate our approach by comparing our performance to similar systems using the TREC Tweet 2011 Challenge tweets, obtaining 91% of matches with LDA, an off-line gold standard used in similar evaluations. In addition, we study Twitter messages related to the SuperBowl football events in 2011 and 2013.", "title": "" }, { "docid": "c69d15a44bcb779394df5776e391ec23", "text": "Ankylosing spondylitis (AS) is a chronic and inflammatory rheumatic disease, characterized by pain and structural and functional impairments, such as reduced mobility and axial deformity, which lead to diminished quality of life. Its treatment includes not only drugs, but also nonpharmacological therapy. Exercise appears to be a promising modality. The aim of this study is to review the current evidence and evaluate the role of exercise either on land or in water for the management of patients with AS in the biological era. Systematic review of the literature published until November 2016 in Medline, Embase, Cochrane Library, Web of Science and Scopus databases. Thirty-five studies were included for further analysis (30 concerning land exercise and 5 concerning water exercise; combined or not with biological drugs), comprising a total of 2515 patients. Most studies showed a positive effect of exercise on Bath Ankylosing Spondylitis Disease Activity Index, Bath Ankylosing Spondylitis Functional Index, pain, mobility, function and quality of life. The benefit was statistically significant in randomized controlled trials. Results support a multimodal approach, including educational sessions and maintaining home-based program. This study highlights the important role of exercise in management of AS, therefore it should be encouraged and individually prescribed. More studies with good methodological quality are needed to strengthen the results and to define the specific characteristics of exercise programs that determine better results.", "title": "" }, { "docid": "699836a5b2caf6acde02c4bad16c2795", "text": "Drilling end-effector is a key unit in autonomous drilling robot. The perpendicularity of the hole has an important influence on the quality of airplane assembly. Aiming at the robot drilling perpendicularity, a micro-adjusting attitude mechanism and a surface normal measurement algorithm are proposed in this paper. In the mechanism, two rounded eccentric discs are used and the small one is embedded in the big one, which makes the drill’s point static when adjusting the drill’s attitude. Thus, removal of drill’s point position after adjusting the drill attitude can be avoided. Before the micro-adjusting progress, four non-coplanar points in space are used to determine a unique sphere. The normal at the drilling point is measured by four laser ranging sensors. The adjusting angles at which the motors should be rotated to adjust attitude can be calculated by using the deviation between the normal and the drill axis. Finally, the motors will drive the two eccentric discs to achieve micro-adjusting progress. Experiments on drilling robot system and the results demonstrate that the adjusting mechanism and the algorithm for surface normal measurement are effective with high accuracy and efficiency. (1)设计一种微型姿态调整机构, 实现对钻头姿态进行调整, 使其沿制孔点法线进行制孔, 提高孔的垂直度. 使得钻头调整前后, 钻头顶点保持不变, 提高制孔效率. 
(2)利用4个激光测距传感器, 根据空间不共面四点确定唯一球, 测得制孔点处的法线向量, 为钻头的姿态调整做准备.", "title": "" }, { "docid": "a05a953097e5081670f26e85c4b8e397", "text": "In European science and technology policy, various styles have been developed and institutionalised to govern the ethical challenges of science and technology innovations. In this paper, we give an account of the most dominant styles of the past 30 years, particularly in Europe, seeking to show their specific merits and problems. We focus on three styles of governance: a technocratic style, an applied ethics style, and a public participation style. We discuss their merits and deficits, and use this analysis to assess the potential of the recently established governance approach of 'Responsible Research and Innovation' (RRI). Based on this analysis, we reflect on the current shaping of RRI in terms of 'doing governance'.", "title": "" }, { "docid": "80666930dbabe1cd9d65af762cc4b150", "text": "Accurate electronic health records are important for clinical care and research as well as ensuring patient safety. It is crucial for misspelled words to be corrected in order to ensure that medical records are interpreted correctly. This paper describes the development of a spelling correction system for medical text. Our spell checker is based on Shannon's noisy channel model, and uses an extensive dictionary compiled from many sources. We also use named entity recognition, so that names are not wrongly corrected as misspellings. We apply our spell checker to three different types of free-text data: clinical notes, allergy entries, and medication orders; and evaluate its performance on both misspelling detection and correction. Our spell checker achieves detection performance of up to 94.4% and correction accuracy of up to 88.2%. We show that high-performance spelling correction is possible on a variety of clinical documents.", "title": "" }, { "docid": "78bc13c6b86ea9a8fda75b66f665c39f", "text": "We propose a stochastic answer network (SAN) to explore multi-step inference strategies in Natural Language Inference. Rather than directly predicting the results given the inputs, the model maintains a state and iteratively refines its predictions. Our experiments show that SAN achieves the state-of-the-art results on three benchmarks: Stanford Natural Language Inference (SNLI) dataset, MultiGenre Natural Language Inference (MultiNLI) dataset and Quora Question Pairs dataset.", "title": "" }, { "docid": "53ae229e708297bf73cf3a33b32e42da", "text": "Signal-dependent phase variation, AM/PM, along with amplitude variation, AM/AM, are known to determine nonlinear distortion characteristics of current-mode PAs. However, these distortion effects have been treated separately, putting more weight on the amplitude distortion, while the AM/PM generation mechanisms are yet to be fully understood. Hence, the aim of this work is to present a large-signal physical model that can describe both the AM/AM and AM/PM PA nonlinear distortion characteristics and their internal relationship.", "title": "" }, { "docid": "c6d25017a6cba404922933672a18d08a", "text": "The Internet of Things (IoT) makes smart objects the ultimate building blocks in the development of cyber-physical smart pervasive frameworks. The IoT has a variety of application domains, including health care. The IoT revolution is redesigning modern health care with promising technological, economic, and social prospects. 
This paper surveys advances in IoT-based health care technologies and reviews the state-of-the-art network architectures/platforms, applications, and industrial trends in IoT-based health care solutions. In addition, this paper analyzes distinct IoT security and privacy features, including security requirements, threat models, and attack taxonomies from the health care perspective. Further, this paper proposes an intelligent collaborative security model to minimize security risk; discusses how different innovations such as big data, ambient intelligence, and wearables can be leveraged in a health care context; addresses various IoT and eHealth policies and regulations across the world to determine how they can facilitate economies and societies in terms of sustainable development; and provides some avenues for future research on IoT-based health care based on a set of open issues and challenges.", "title": "" }, { "docid": "e33fd686860657a93a0e47807b4cbe24", "text": "Planning optimal paths for large numbers of robots is computationally expensive. In this thesis, we present a new framework for multirobot path planning called subdimensional expansion, which initially plans for each robot individually, and then coordinates motion among the robots as needed. More specifically, subdimensional expansion initially creates a one-dimensional search space embedded in the joint configuration space of the multirobot system. When the search space is found to be blocked during planning by a robot-robot collision, the dimensionality of the search space is locally increased to ensure that an alternative path can be found. As a result, robots are only coordinated when necessary, which reduces the computational cost of finding a path. Subdimensional expansion is a flexible framework that can be used with multiple planning algorithms. For discrete planning problems, subdimensional expansion can be combined with A* to produce the M* algorithm, a complete and optimal multirobot path planning problem. When the configuration space of individual robots is too large to be explored effectively with A*, subdimensional expansion can be combined with probabilistic planning algorithms to produce sRRT and sPRM. M* is then extended to solve variants of the multirobot path planning algorithm. We present the Constraint Manifold Subsearch (CMS) algorithm to solve problems where robots must dynamically form and dissolve teams with other robots to perform cooperative tasks. Uncertainty M* (UM*) is a variant of M* that handles systems with probabilistic dynamics. Finally, we apply M* to multirobot sequential composition. Results are validated with extensive simulations and experiments on multiple physical robots.", "title": "" }, { "docid": "73d31d63cfaeba5fa7c2d2acc4044ca0", "text": "Plastics in the marine environment have become a major concern because of their persistence at sea, and adverse consequences to marine life and potentially human health. Implementing mitigation strategies requires an understanding and quantification of marine plastic sources, taking spatial and temporal variability into account. Here we present a global model of plastic inputs from rivers into oceans based on waste management, population density and hydrological information. Our model is calibrated against measurements available in the literature. We estimate that between 1.15 and 2.41 million tonnes of plastic waste currently enters the ocean every year from rivers, with over 74% of emissions occurring between May and October. 
The top 20 polluting rivers, mostly located in Asia, account for 67% of the global total. The findings of this study provide baseline data for ocean plastic mass balance exercises, and assist in prioritizing future plastic debris monitoring and mitigation strategies.", "title": "" }, { "docid": "e3853e259c3ae6739dcae3143e2074a8", "text": "A new reference collection of patent documents for training and testing automated categorization systems is established and described in detail. This collection is tailored for automating the attribution of international patent classification codes to patent applications and is made publicly available for future research work. We report the results of applying a variety of machine learning algorithms to the automated categorization of English-language patent documents. This procedure involves a complex hierarchical taxonomy, within which we classify documents into 114 classes and 451 subclasses. Several measures of categorization success are described and evaluated. We investigate how best to resolve the training problems related to the attribution of multiple classification codes to each patent document.", "title": "" }, { "docid": "f160dd844c54dafc8c5265ff0e4d4a05", "text": "The increasing number of smart phones presents a significant opportunity for the development of m-payment services. Despite the predicted success of m-payment, the market remains immature in most countries. This can be explained by the lack of agreement on standards and business models for all stakeholders in m-payment ecosystem. In this paper, the STOF business model framework is employed to analyze m-payment services from the point of view of one of the key players in the ecosystem i.e., banks. We apply Analytic Hierarchy Process (AHP) method to analyze the critical design issues for four domains of the STOF model. The results of the analysis show that service domain is the most important, followed by technology, organization and finance domains. Security related issues are found to be the most important by bank representatives. The future research can be extended to the m-payment ecosystem by collecting data from different actors from the ecosystem.", "title": "" }, { "docid": "f3d0ae1db485b95b8b6931f8c6f2ea40", "text": "Spoken language understanding (SLU) is a core component of a spoken dialogue system. In the traditional architecture of dialogue systems, the SLU component treats each utterance independent of each other, and then the following components aggregate the multi-turn information in the separate phases. However, there are two challenges: 1) errors from previous turns may be propagated and then degrade the performance of the current turn; 2) knowledge mentioned in the long history may not be carried into the current turn. This paper addresses the above issues by proposing an architecture using end-to-end memory networks to model knowledge carryover in multi-turn conversations, where utterances encoded with intents and slots can be stored as embeddings in the memory and the decoding phase applies an attention model to leverage previously stored semantics for intent prediction and slot tagging simultaneously. 
The experiments on Microsoft Cortana conversational data show that the proposed memory network architecture can effectively extract salient semantics for modeling knowledge carryover in the multi-turn conversations and outperform the results using the state-of-the-art recurrent neural network framework (RNN) designed for single-turn SLU.", "title": "" }, { "docid": "b2283fb23a199dbfec42b76dec31ac69", "text": "High accurate indoor localization and tracking of smart phones is critical to pervasive applications. Most radio-based solutions either exploit some error prone power-distance models or require some labor-intensive process of site survey to construct RSS fingerprint database. This study offers a new perspective to exploit RSS readings by their contrast relationship rather than absolute values, leading to three observations and functions called turn verifying, room distinguishing and entrance discovering. On this basis, we design WaP (WiFi-Assisted Particle filter), an indoor localization and tracking system exploiting particle filters to combine dead reckoning, RSS-based analyzing and knowledge of floor plan together. All the prerequisites of WaP are the floor plan and the coarse locations on which room the APs reside. WaP prototype is realized on off-the-shelf smartphones with limited particle number typically 400, and validated in a college building covering 1362m2. Experiment results show that WaP can achieve average localization error of 0.71m for 100 trajectories by 8 pedestrians.", "title": "" }, { "docid": "10634117fd51d94f9b12b9f0ed034f65", "text": "Our corpus of descriptive text contains a significant number of long-distance pronominal references (8.4% of the total). In order to account for how these pronouns are interpreted, we re-examine Grosz and Sidner’s theory of the attentional state, and in particular the use of the global focus to supplement centering theory. Our corpus evidence concerning these long-distance pronominal references, as well as studies of the use of descriptions, proper names and ambiguous uses of pronouns, lead us to conclude that a discourse focus stack mechanism of the type proposed by Sidner is essential to account for the use of these referring expressions. We suggest revising the Grosz & Sidner framework by allowing for the possibility that an entity in a focus space may have special status.", "title": "" }, { "docid": "1840d879044662bfb1e6b2ea3ee9c2c8", "text": "Working memory (WM) training has been reported to benefit abilities as diverse as fluid intelligence (Jaeggi et al., Proceedings of the National Academy of Sciences of the United States of America, 105:6829-6833, 2008) and reading comprehension (Chein & Morrison, Psychonomic Bulletin & Review, 17:193-199, 2010), but transfer is not always observed (for reviews, see Morrison & Chein, Psychonomics Bulletin & Review, 18:46-60, 2011; Shipstead et al., Psychological Bulletin, 138:628-654, 2012). In contrast, recent WM training studies have consistently reported improvement on the trained tasks. The basis for these training benefits has received little attention, however, and it is not known which WM components and/or processes are being improved. Therefore, the goal of the present study was to investigate five possible mechanisms underlying the effects of adaptive dual n-back training on working memory (i.e., improvements in executive attention, updating, and focus switching, as well as increases in the capacity of the focus of attention and short-term memory). 
In addition to a no-contact control group, the present study also included an active control group whose members received nonadaptive training on the same task. All three groups showed significant improvements on the n-back task from pretest to posttest, but adaptive training produced larger improvements than did nonadaptive training, which in turn produced larger improvements than simply retesting. Adaptive, but not nonadaptive, training also resulted in improvements on an untrained running span task that measured the capacity of the focus of attention. No other differential improvements were observed, suggesting that increases in the capacity of the focus of attention underlie the benefits of adaptive dual n-back training.", "title": "" } ]
scidocsrr
ef3f08e17f6ba2cfc17956b583032cf6
Augmented reality in the smart factory: Supporting workers in an industry 4.0. environment
[ { "docid": "d8bf55d8a2aaa1061310f3d976a87c57", "text": "characterized by four distinguishable interface styles, each lasting for many years and optimized to the hardware available at the time. In the first period, the early 1950s and 1960s, computers were used in batch mode with punched-card input and line-printer output; there were essentially no user interfaces because there were no interactive users (although some of us were privileged to be able to do console debugging using switches and lights as our “user interface”). The second period in the evolution of interfaces (early 1960s through early 1980s) was the era of timesharing on mainframes and minicomputers using mechanical or “glass” teletypes (alphanumeric displays), when for the first time users could interact with the computer by typing in commands with parameters. Note that this era persisted even through the age of personal microcomputers with such operating systems as DOS and Unix with their command line shells. During the 1970s, timesharing and manual command lines remained deeply entrenched, but at Xerox PARC the third age of user interfaces dawned. Raster graphics-based networked workstations and “pointand-click” WIMP GUIs (graphical user interfaces based on windows, icons, menus, and a pointing device, typically a mouse) are the legacy of Xerox PARC that we’re still using today. WIMP GUIs were popularized by the Macintosh in 1984 and later copied by Windows on the PC and Motif on Unix workstations. Applications today have much the same look and feel as the early desktop applications (except for the increased “realism” achieved through the use of drop shadows for buttons and other UI widgets); the main advance lies in the shift from monochrome displays to color and in a large set of software-engineering tools for building WIMP interfaces. I find it rather surprising that the third generation of WIMP user interfaces has been so dominant for more than two decades; they are apparently sufficiently good for conventional desktop tasks that the field is stuck comfortably in a rut. I argue in this essay that the status quo does not suffice—that the newer forms of computing and computing devices available today necessitate new thinking t h e h u m a n c o n n e c t i o n Andries van Dam", "title": "" } ]
[ { "docid": "d3f97e0de15ab18296e161e287890e18", "text": "Nosocomial or hospital acquired infections threaten the survival and neurodevelopmental outcomes of infants admitted to the neonatal intensive care unit, and increase cost of care. Premature infants are particularly vulnerable since they often undergo invasive procedures and are dependent on central catheters to deliver nutrition and on ventilators for respiratory support. Prevention of nosocomial infection is a critical patient safety imperative, and invariably requires a multidisciplinary approach. There are no short cuts. Hand hygiene before and after patient contact is the most important measure, and yet, compliance with this simple measure can be unsatisfactory. Alcohol based hand sanitizer is effective against many microorganisms and is efficient, compared to plain or antiseptic containing soaps. The use of maternal breast milk is another inexpensive and simple measure to reduce infection rates. Efforts to replicate the anti-infectious properties of maternal breast milk by the use of probiotics, prebiotics, and synbiotics have met with variable success, and there are ongoing trials of lactoferrin, an iron binding whey protein present in large quantities in colostrum. Attempts to boost the immunoglobulin levels of preterm infants with exogenous immunoglobulins have not been shown to reduce nosocomial infections significantly. Over the last decade, improvements in the incidence of catheter-related infections have been achieved, with meticulous attention to every detail from insertion to maintenance, with some centers reporting zero rates for such infections. Other nosocomial infections like ventilator acquired pneumonia and staphylococcus aureus infection remain problematic, and outbreaks with multidrug resistant organisms continue to have disastrous consequences. Management of infections is based on the profile of microorganisms in the neonatal unit and community and targeted therapy is required to control the disease without leading to the development of more resistant strains.", "title": "" }, { "docid": "2dc24d2ecaf2494543128f5e9e5f4864", "text": "Design of a multiphase hybrid permanent magnet (HPM) generator for series hybrid electric vehicle (SHEV) application is presented in this paper. The proposed hybrid excitation topology together with an integral passive rectifier replaces the permanent magnet (PM) machine and active power electronics converter in hybrid/electric vehicles, facilitating the control over constant PM flux-linkage. The HPM topology includes two rotor elements: a PM and a wound field (WF) rotor with a 30% split ratio, coupled on the same shaft in one machine housing. Both rotors share a nine-phase stator that results in higher output voltage and power density when compared to three-phase design. The HPM generator design is based on a 3-kW benchmark PM machine to ensure the feasibility and validity of design tools and procedures. The WF rotor is designed to realize the same pole shape and number as in the PM section and to obtain the same flux-density in the air-gap while minimizing the WF input energy. Having designed and analyzed the machine using equivalent magnetic circuit and finite element analysis, a laboratory prototype HPM generator is built and tested with the measurements compared to predicted results confirming the designed characteristics and machine performance. 
The paper also presents comprehensive machine loss and mass audits.", "title": "" }, { "docid": "55f95c7b59f17fb210ebae97dbd96d72", "text": "Clustering is a widely studied data mining problem in the text domains. The problem finds numerous applications in customer segmentation, classification, collaborative filtering, visualization, document organization, and indexing. In this chapter, we will provide a detailed survey of the problem of text clustering. We will study the key challenges of the clustering problem, as it applies to the text domain. We will discuss the key methods used for text clustering, and their relative advantages. We will also discuss a number of recent advances in the area in the context of social network and linked data.", "title": "" }, { "docid": "afdc57b5d573e2c99c73deeef3c2fd5f", "text": "The purpose of this article is to consider oral reading fluency as an indicator of overall reading competence. We begin by examining theoretical arguments for supposing that oral reading fluency may reflect overall reading competence. We then summarize several studies substantiating this phenomenon. Next, we provide an historical analysis of the extent to which oral reading fluency has been incorporated into measurement approaches during the past century. We conclude with recommendations about the assessment of oral reading fluency for research and practice.", "title": "" }, { "docid": "d7e794a106f29f5ebe917c2e7b6007eb", "text": "In this paper, several recent theoretical conceptions of technology-mediated education are examined and a study of 2159 online learners is presented. The study validates an instrument designed to measure teaching, social, and cognitive presence indicative of a community of learners within the community of inquiry (CoI) framework [Garrison, D. R., Anderson, T., & Archer, W. (2000). Critical inquiry in a textbased environment: Computer conferencing in higher education. The Internet and Higher Education, 2, 1–19; Garrison, D. R., Anderson, T., & Archer, W. (2001). Critical thinking, cognitive presence, and computer conferencing in distance education. American Journal of Distance Education, 15(1), 7–23]. Results indicate that the survey items cohere into interpretable factors that represent the intended constructs. Further it was determined through structural equation modeling that 70% of the variance in the online students’ levels of cognitive presence, a multivariate measure of learning, can be modeled based on their reports of their instructors’ skills in fostering teaching presence and their own abilities to establish a sense of social presence. Additional analysis identifies more details of the relationship between learner understandings of teaching and social presence and its impact on their cognitive presence. Implications for online teaching, policy, and faculty development are discussed. ! 2008 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "8392c5faf4c837fd06b6f50d110b6e84", "text": "Pool of knowledge available to the mankind depends on the source of learning resources, which can vary from ancient printed documents to present electronic material. The rapid conversion of material available in traditional libraries to digital form needs a significant amount of work if we are to maintain the format and the look of the electronic documents as same as their printed counterparts. 
Most of the printed documents contain not only characters and its formatting but also some associated non text objects such as tables, charts and graphical objects. It is challenging to detect them and to concentrate on the format preservation of the contents while reproducing them. To address this issue, we propose an algorithm using local thresholds for word space and line height to locate and extract all categories of tables from scanned document images. From the experiments performed on 298 documents, we conclude that our algorithm has an overall accuracy of about 75% in detecting tables from the scanned document images. Since the algorithm does not completely depend on rule lines, it can detect all categories of tables in a range of scanned documents with different font types, styles and sizes to extract their formatting features. Moreover, the algorithm can be applied to locate tables in multi column layouts with small modification in layout analysis. Treating tables with their existing formatting features will tremendously help the reproducing of printed documents for reprinting and updating purposes.", "title": "" }, { "docid": "48a75e28154d630da14fd3dba09d0af8", "text": "Over the years, artificial intelligence (AI) is spreading its roots in different areas by utilizing the concept of making the computers learn and handle complex tasks that previously require substantial laborious tasks by human beings. With better accuracy and speed, AI is helping lawyers to streamline work processing. New legal AI software tools like Catalyst, Ross intelligence, and Matlab along with natural language processing provide effective quarrel resolution, better legal clearness, and superior admittance to justice and fresh challenges to conventional law firms providing legal services using leveraged cohort correlate model. This paper discusses current applications of legal AI and suggests deep learning and machine learning techniques that can be applied in future to simplify the cumbersome legal tasks.", "title": "" }, { "docid": "64c156ee4171b5b84fd4eedb1d922f55", "text": "We introduce a large computational subcategorization lexicon which includes subcategorization frame (SCF) and frequency information for 6,397 English verbs. This extensive lexicon was acquired automatically from five corpora and the Web using the current version of the comprehensive subcategorization acquisition system of Briscoe and Carroll (1997). The lexicon is provided freely for research use, along with a script which can be used to filter and build sub-lexicons suited for different natural language processing (NLP) purposes. Documentation is also provided which explains each sub-lexicon option and evaluates its accuracy.", "title": "" }, { "docid": "fca63f719115e863f5245f15f6b1be50", "text": "Model-based testing (MBT) in hardware-in-the-loop (HIL) platform is a simulation and testing environment for embedded systems, in which test design automation provided by MBT is combined with HIL methodology. A HIL platform is a testing environment in which the embedded system under testing (SUT) assumes to be operating with real-world inputs and outputs. In this paper, we focus on presenting the novel methodologies and tools that were used to conduct the validation of the MBT in HIL platform. Another novelty of the validation approach is that it aims to provide a comprehensive and many-sided process view to validating MBT and HIL related systems including different component, integration and system level testing activities. 
The research is based on the constructive method of the related scientific literature and testing technologies, and the results are derived through testing and validating the implemented MBT in HIL platform. The used testing process indicated that the functionality of the constructed MBT in HIL prototype platform was validated.", "title": "" }, { "docid": "5d13c7c50cb43de80df7b6f02c866dab", "text": "Deep neural networks (DNNs) are vulnerable to adversarial examples, even in the black-box case, where the attacker is limited to solely query access. Existing black-box approaches to generating adversarial examples typically require a significant number of queries, either for training a substitute network or estimating gradients from the output scores. We introduce GenAttack, a gradient-free optimization technique which uses genetic algorithms for synthesizing adversarial examples in the black-box setting. Our experiments on the MNIST, CIFAR-10, and ImageNet datasets show that GenAttack can successfully generate visually imperceptible adversarial examples against state-of-the-art image recognition models with orders of magnitude fewer queries than existing approaches. For example, in our CIFAR-10 experiments, GenAttack required roughly 2,568 times less queries than the current state-of-the-art black-box attack. Furthermore, we show that GenAttack can successfully attack both the state-of-the-art ImageNet defense, ensemble adversarial training, and non-differentiable, randomized input transformation defenses. GenAttack’s success against ensemble adversarial training demonstrates that its query efficiency enables it to exploit the defense’s weakness to direct black-box attacks. GenAttack’s success against non-differentiable input transformations indicates that its gradient-free nature enables it to be applicable against defenses which perform gradient masking/obfuscation to confuse the attacker. Our results suggest that evolutionary algorithms open up a promising area of research into effective gradient-free black-box attacks.", "title": "" }, { "docid": "01a70ee73571e848575ed992c1a3a578", "text": "BACKGROUND\nNursing turnover is a major issue for health care managers, notably during the global nursing workforce shortage. Despite the often hierarchical structure of the data used in nursing studies, few studies have investigated the impact of the work environment on intention to leave using multilevel techniques. Also, differences between intentions to leave the current workplace or to leave the profession entirely have rarely been studied.\n\n\nOBJECTIVE\nThe aim of the current study was to investigate how aspects of the nurse practice environment and satisfaction with work schedule flexibility measured at different organisational levels influenced the intention to leave the profession or the workplace due to dissatisfaction.\n\n\nDESIGN\nMultilevel models were fitted using survey data from the RN4CAST project, which has a multi-country, multilevel, cross-sectional design. The data analysed here are based on a sample of 23,076 registered nurses from 2020 units in 384 hospitals in 10 European countries (overall response rate: 59.4%). Four levels were available for analyses: country, hospital, unit, and individual registered nurse. Practice environment and satisfaction with schedule flexibility were aggregated and studied at the unit level. Gender, experience as registered nurse, full vs. 
part-time work, as well as individual deviance from unit mean in practice environment and satisfaction with work schedule flexibility, were included at the individual level. Both intention to leave the profession and the hospital due to dissatisfaction were studied.\n\n\nRESULTS\nRegarding intention to leave current workplace, there is variability at both country (6.9%) and unit (6.9%) level. However, for intention to leave the profession we found less variability at the country (4.6%) and unit level (3.9%). Intention to leave the workplace was strongly related to unit level variables. Additionally, individual characteristics and deviance from unit mean regarding practice environment and satisfaction with schedule flexibility were related to both outcomes. Major limitations of the study are its cross-sectional design and the fact that only turnover intention due to dissatisfaction was studied.\n\n\nCONCLUSIONS\nWe conclude that measures aiming to improve the practice environment and schedule flexibility would be a promising approach towards increased retention of registered nurses in both their current workplaces and the nursing profession as a whole and thus a way to counteract the nursing shortage across European countries.", "title": "" }, { "docid": "776cba62170ee8936629aabca314fd46", "text": "While the Global Positioning System (GPS) tends to be not useful anymore in terms of precise localization once one gets into a building, Low Energy beacons might come in handy instead. Navigating free of signal reception problems throughout a building when one has never visited that place before is a challenge tackled with indoors localization. Using Bluetooth Low Energy1 (BLE) beacons (either iBeacon or Eddystone formats) is the medium to accomplish that. Indeed, different purpose oriented applications can be designed, developed and shaped towards the needs of any person in the context of a certain building. This work presents a series of post-processing filters to enhance the outcome of the estimated position applying trilateration as the main and straightforward technique to locate someone within a building. A later evaluation tries to give enough evidence around the feasibility of this indoor localization technique. A mobile app should be everything a user would need to have within a building in order to navigate inside.", "title": "" }, { "docid": "b89099e9b01a83368a1ebdb2f4394eba", "text": "Orangutans (Pongo pygmaeus and Pongo abelii) are semisolitary apes and, among the great apes, the most distantly related to humans. Raters assessed 152 orangutans on 48 personality descriptors; 140 of these orangutans were also rated on a subjective well-being questionnaire. Principal-components analysis yielded 5 reliable personality factors: Extraversion, Dominance, Neuroticism, Agreeableness, and Intellect. The authors found no factor analogous to human Conscientiousness. Among the orangutans rated on all 48 personality descriptors and the subjective well-being questionnaire, Extraversion, Agreeableness, and low Neuroticism were related to subjective well-being. These findings suggest that analogues of human, chimpanzee, and orangutan personality domains existed in a common ape ancestor.", "title": "" }, { "docid": "330129cb283fac3dc4df9f0c36b1de48", "text": "Hydrokinetic turbines convert kinetic energy of moving river or tide water into electrical energy. In this work, design considerations of river current turbines are discussed with emphasis on straight bladed Darrieus rotors. 
Fluid dynamic analysis is carried out to predict the performance of the rotor. Discussions on a broad range of physical and operational conditions that may impact the design scenario are also presented. In addition, a systematic design procedure along with supporting information that would aid various decision making steps are outlined and illustrated by a design example. Finally, the scope for further work is highlighted", "title": "" }, { "docid": "b83e537a2c8dcd24b096005ef0cb3897", "text": "We present Deep Speaker, a neural speaker embedding system that maps utterances to a hypersphere where speaker similarity is measured by cosine similarity. The embeddings generated by Deep Speaker can be used for many tasks, including speaker identification, verification, and clustering. We experiment with ResCNN and GRU architectures to extract the acoustic features, then mean pool to produce utterance-level speaker embeddings, and train using triplet loss based on cosine similarity. Experiments on three distinct datasets suggest that Deep Speaker outperforms a DNN-based i-vector baseline. For example, Deep Speaker reduces the verification equal error rate by 50% (relatively) and improves the identification accuracy by 60% (relatively) on a text-independent dataset. We also present results that suggest adapting from a model trained with Mandarin can improve accuracy for English speaker recognition.", "title": "" }, { "docid": "76e7f63fa41d6d457e6e4386ad7b9896", "text": "A growing body of work has highlighted the challenges of identifying the stance that a speaker holds towards a particular topic, a task that involves identifying a holistic subjective disposition. We examine stance classification on a corpus of 4873 posts from the debate website ConvinceMe.net, for 14 topics ranging from the playful to the ideological. We show that ideological debates feature a greater share of rebuttal posts, and that rebuttal posts are significantly harder to classify for stance, for both humans and trained classifiers. We also demonstrate that the number of subjective expressions varies across debates, a fact correlated with the performance of systems sensitive to sentiment-bearing terms. We present results for classifying stance on a per topic basis that range from 60% to 75%, as compared to unigram baselines that vary between 47% and 66%. Our results suggest that features and methods that take into account the dialogic context of such posts improve accuracy.", "title": "" }, { "docid": "0c886080015642aa5b7c103adcd2a81d", "text": "The problem of gauging information credibility on social networks has received considerable attention in recent years. Most previous work has chosen Twitter, the world's largest micro-blogging platform, as the premise of research. In this work, we shift the premise and study the problem of information credibility on Sina Weibo, China's leading micro-blogging service provider. With eight times more users than Twitter, Sina Weibo is more of a Facebook-Twitter hybrid than a pure Twitter clone, and exhibits several important characteristics that distinguish it from Twitter. We collect an extensive set of microblogs which have been confirmed to be false rumors based on information from the official rumor-busting service provided by Sina Weibo. Unlike previous studies on Twitter where the labeling of rumors is done manually by the participants of the experiments, the official nature of this service ensures the high quality of the dataset. 
We then examine an extensive set of features that can be extracted from the microblogs, and train a classifier to automatically detect the rumors from a mixed set of true information and false information. The experiments show that some of the new features we propose are indeed effective in the classification, and even the features considered in previous studies have different implications with Sina Weibo than with Twitter. To the best of our knowledge, this is the first study on rumor analysis and detection on Sina Weibo.", "title": "" }, { "docid": "95db9ce9faaf13e8ff8d5888a6737683", "text": "Measurements of pH, acidity, and alkalinity are commonly used to describe water quality. The three variables are interrelated and can sometimes be confused. The pH of water is an intensity factor, while the acidity and alkalinity of water are capacity factors. More precisely, acidity and alkalinity are defined as a water’s capacity to neutralize strong bases or acids, respectively. The term “acidic” for pH values below 7 does not imply that the water has no alkalinity; likewise, the term “alkaline” for pH values above 7 does not imply that the water has no acidity. Water with a pH value between 4.5 and 8.3 has both total acidity and total alkalinity. The definition of pH, which is based on logarithmic transformation of the hydrogen ion concentration ([H+]), has caused considerable disagreement regarding the appropriate method of describing average pH. The opinion that pH values must be transformed to [H+] values before averaging appears to be based on the concept of mixing solutions of different pH. In practice, however, the averaging of [H+] values will not provide the correct average pH because buffers present in natural waters have a greater effect on final pH than does dilution alone. For nearly all uses of pH in fisheries and aquaculture, pH values may be averaged directly. When pH data sets are transformed to [H+] to estimate average pH, extreme pH values will distort the average pH. Values of pH conform more closely to a normal distribution than do values of [H+], making the pH values more acceptable for use in statistical analysis. Moreover, electrochemical measurements of pH and many biological responses to [H+] are described by the Nernst equation, which states that the measured or observed response is linearly related to 10-fold changes in [H+]. Based on these considerations, pH rather than [H+] is usually the most appropriate variable for use in statistical analysis. *Corresponding author: [email protected] Received November 2, 2010; accepted February 7, 2011 Published online September 27, 2011 Temperature, salinity, hardness, pH, acidity, and alkalinity are fundamental variables that define the quality of water. Although all six variables have precise, unambiguous definitions, the last three variables are often misinterpreted in aquaculture and fisheries studies. In this paper, we explain the concepts of pH, acidity, and alkalinity, and we discuss practical relationships among those variables. We also discuss the concept of pH averaging as an expression of the central tendency of pH measurements. The concept of pH averaging is poorly understood, if not controversial, because many believe that pH values, which are log-transformed numbers, cannot be averaged directly. We argue that direct averaging of pH values is the simplest and most logical approach for most uses and that direct averaging is based on sound practical and statistical principles. 
THE pH CONCEPT The pH is an index of the hydrogen ion concentration ([H+]) in water. The [H+] affects most chemical and biological processes; thus, pH is an important variable in water quality endeavors. Water temperature probably is the only water quality variable that is measured more commonly than pH. The pH concept has its basis in the ionization of water:", "title": "" }, { "docid": "f8cd8b54218350fa18d4d59ca0a58a05", "text": "This study provides conceptual and empirical arguments why an assessment of applicants' procedural knowledge about interpersonal behavior via a video-based situational judgment test might be valid for academic and postacademic success criteria. Four cohorts of medical students (N = 723) were followed from admission to employment. Procedural knowledge about interpersonal behavior at the time of admission was valid for both internship performance (7 years later) and job performance (9 years later) and showed incremental validity over cognitive factors. Mediation analyses supported the conceptual link between procedural knowledge about interpersonal behavior, translating that knowledge into actual interpersonal behavior in internships, and showing that behavior on the job. Implications for theory and practice are discussed.", "title": "" }, { "docid": "019d5deed0ed1e5b50097d5dc9121cb6", "text": "Within interactive narrative research, agency is largely considered in terms of a player's autonomy in a game, defined as theoretical agency. Rather than in terms of whether or not the player feels they have agency, their perceived agency. An effective interactive narrative needs to provide a player a level of agency that satisfies their desires and must do that without compromising its own structure. Researchers frequently turn to techniques for increasing theoretical agency to accomplish this. This paper proposes an approach to categorize and explore techniques in which a player's level of perceived agency is affected without requiring more or less theoretical agency.", "title": "" } ]
scidocsrr
c13d3e96ac7a5df8c96bc0de66a33a1f
Fine-Grained Image Search
[ { "docid": "9c47b068f7645dc5464328e80be24019", "text": "In this paper we propose a highly effective and scalable framework for recognizing logos in images. At the core of our approach lays a method for encoding and indexing the relative spatial layout of local features detected in the logo images. Based on the analysis of the local features and the composition of basic spatial structures, such as edges and triangles, we can derive a quantized representation of the regions in the logos and minimize the false positive detections. Furthermore, we propose a cascaded index for scalable multi-class recognition of logos.\n For the evaluation of our system, we have constructed and released a logo recognition benchmark which consists of manually labeled logo images, complemented with non-logo images, all posted on Flickr. The dataset consists of a training, validation, and test set with 32 logo-classes. We thoroughly evaluate our system with this benchmark and show that our approach effectively recognizes different logo classes with high precision.", "title": "" } ]
[ { "docid": "24a23aff0026141d1b6970e8216347f8", "text": "Internet of Things (IoT) is a technology paradigm where millions of sensors monitor, and help inform or manage, physical, environmental and human systems in real-time. The inherent closed-loop responsiveness and decision making of IoT applications makes them ideal candidates for using low latency and scalable stream processing platforms. Distributed Stream Processing Systems (DSPS) are becoming essential components of any IoT stack, but the efficacy and performance of contemporary DSPS have not been rigorously studied for IoT data streams and applications. Here, we develop a benchmark suite and performance metrics to evaluate DSPS for streaming IoT applications. The benchmark includes 13 common IoT tasks classified across various functional categories and forming micro-benchmarks, and two IoT applications for statistical summarization and predictive analytics that leverage various dataflow compositional features of DSPS. These are coupled with stream workloads sourced from real IoT observations from smart cities. We validate the IoT benchmark for the popular Apache Storm DSPS, and present empirical results.", "title": "" }, { "docid": "f8742208fef05beb86d77f1d5b5d25ef", "text": "The latest book on Genetic Programming, Poli, Langdon and McPhee’s (with contributions from John R. Koza) A Field Guide to Genetic Programming represents an exciting landmark with the authors choosing to make their work freely available by publishing using a form of the Creative Commons License[1]. In so doing they have created a must-read resource which is, to use their words, ’aimed at both newcomers and old-timers’. The book is freely available from the authors companion website [2] and Lulu.com [3] in both pdf and html form. For those who desire the more traditional page turning exercise, inexpensive printed copies can be ordered from Lulu.com. The Field Guides companion website also provides a link to the TinyGP code printed over eight pages of Appendix B, and a Discussion Group centered around the book. The book is divided into four parts with fourteen chapters and two appendices. Part I introduces the basics of Genetic Programming, Part II overviews more advanced topics, Part III highlights some of the real world applications and discusses issues facing the GP researcher or practitioner, while Part IV contains two appendices, the first introducing some key resources and the second appendix describes the TinyGP code. The pdf and html forms of the book have an especially useful feature, providing links to the articles available on-line at the time of publication, and to bibtex entries of the GP Bibliography. Following an overview of the book in chapter 1, chapter 2 introduces the basic concepts of GP focusing on the tree representation, initialisation, selection, and the search operators. Chapter 3 is centered around the preparatory steps in applying GP to a problem, which is followed by an outline of a sample run of GP on a simple instance of symbolic regression in Chapter 4. Overall these chapters provide a compact and useful introduction to GP. The first of the Advanced GP chapters in Part II looks at alternative strategies for initialisation and the search operators for tree-based GP. An overview of Modular, Grammatical and Developmental GP is provided in Chapter 6. 
While the chapter title", "title": "" }, { "docid": "d258a14fc9e64ba612f2c8ea77f85d08", "text": "In this paper we report exploratory analyses of high-density oligonucleotide array data from the Affymetrix GeneChip system with the objective of improving upon currently used measures of gene expression. Our analyses make use of three data sets: a small experimental study consisting of five MGU74A mouse GeneChip arrays, part of the data from an extensive spike-in study conducted by Gene Logic and Wyeth's Genetics Institute involving 95 HG-U95A human GeneChip arrays; and part of a dilution study conducted by Gene Logic involving 75 HG-U95A GeneChip arrays. We display some familiar features of the perfect match and mismatch probe (PM and MM) values of these data, and examine the variance-mean relationship with probe-level data from probes believed to be defective, and so delivering noise only. We explain why we need to normalize the arrays to one another using probe level intensities. We then examine the behavior of the PM and MM using spike-in data and assess three commonly used summary measures: Affymetrix's (i) average difference (AvDiff) and (ii) MAS 5.0 signal, and (iii) the Li and Wong multiplicative model-based expression index (MBEI). The exploratory data analyses of the probe level data motivate a new summary measure that is a robust multi-array average (RMA) of background-adjusted, normalized, and log-transformed PM values. We evaluate the four expression summary measures using the dilution study data, assessing their behavior in terms of bias, variance and (for MBEI and RMA) model fit. Finally, we evaluate the algorithms in terms of their ability to detect known levels of differential expression using the spike-in data. We conclude that there is no obvious downside to using RMA and attaching a standard error (SE) to this quantity using a linear model which removes probe-specific affinities.", "title": "" }, { "docid": "2ee0eb9ab9d6c5b9bdad02b9f95c8691", "text": "Aim: To describe lower extremity injuries for badminton in New Zealand. Methods: Lower limb badminton injuries that resulted in claims accepted by the national insurance company Accident Compensation Corporation (ACC) in New Zealand between 2006 and 2011 were reviewed. Results: The estimated national injury incidence for badminton injuries in New Zealand from 2006 to 2011 was 0.66%. There were 1909 lower limb badminton injury claims which cost NZ$2,014,337 (NZ$ value over 2006 to 2011). The age-bands frequently injured were 10–19 (22%), 40–49 (22%), 30–39 (14%) and 50–59 (13%) years. Sixty five percent of lower limb injuries were knee ligament sprains/tears. Males sustained more cruciate ligament sprains than females (75 vs. 39). Movements involving turning, changing direction, shifting weight, pivoting or twisting were responsible for 34% of lower extremity injuries. Conclusion: The knee was most frequently", "title": "" }, { "docid": "0cb3cdb1e44fd9171156ad46fdf2d2ed", "text": "In this paper, from the viewpoint of scene understanding, a three-layer Bayesian hierarchical framework (BHF) is proposed for robust vacant parking space detection. In practice, the challenges of vacant parking space inference come from dramatic luminance variations, shadow effect, perspective distortion, and the inter-occlusion among vehicles. By using a hidden labeling layer between an observation layer and a scene layer, the BHF provides a systematic generative structure to model these variations. 
In the proposed BHF, the problem of luminance variations is treated as a color classification problem and is tackled via a classification process from the observation layer to the labeling layer, while the occlusion pattern, perspective distortion, and shadow effect are well modeled by the relationships between the scene layer and the labeling layer. With the BHF scheme, the detection of vacant parking spaces and the labeling of scene status are regarded as a unified Bayesian optimization problem subject to a shadow generation model, an occlusion generation model, and an object classification model. The system accuracy was evaluated by using outdoor parking lot videos captured from morning to evening. Experimental results showed that the proposed framework can systematically determine the vacant space number, efficiently label ground and car regions, precisely locate the shadowed regions, and effectively tackle the problem of luminance variations.", "title": "" }, { "docid": "8fe5ad58edf4a1c468fd0b6a303729ee", "text": "The CDISC Operational Data Model (ODM) is a popular standard in clinical data management systems (CDMS). It describes both the structure of a clinical trial, including visits, forms, data elements, and code lists, and administrative information such as valid user accounts. It also contains all collected clinical facts about the study subjects. Its original purpose lies in the archiving of study databases and the exchange of clinical data between different CDMS. Owing to its rich structure, however, it is also suitable for further use cases. As part of student practical courses, various scenarios for functional extensions of the free CDMS OpenClinica were investigated and implemented, among them the generation of an Annotated CRF, the import of study data via web service, the semi-automated creation of studies, and the export of study data into a relational data mart and into a research data warehouse based on i2b2.", "title": "" }, { "docid": "81b2a039a391b5f2c1a9a15c94f1f67e", "text": "Evolution of resistance in pests can reduce the effectiveness of insecticidal proteins from Bacillus thuringiensis (Bt) produced by transgenic crops. We analyzed results of 77 studies from five continents reporting field monitoring data for resistance to Bt crops, empirical evaluation of factors affecting resistance or both. Although most pest populations remained susceptible, reduced efficacy of Bt crops caused by field-evolved resistance has been reported now for some populations of 5 of 13 major pest species examined, compared with resistant populations of only one pest species in 2005. Field outcomes support theoretical predictions that factors delaying resistance include recessive inheritance of resistance, low initial frequency of resistance alleles, abundant refuges of non-Bt host plants and two-toxin Bt crops deployed separately from one-toxin Bt crops. 
The results imply that proactive evaluation of the inheritance and initial frequency of resistance are useful for predicting the risk of resistance and improving strategies to sustain the effectiveness of Bt crops.", "title": "" }, { "docid": "596bb1265a375c68f0498df90f57338e", "text": "The concept of unintended pregnancy has been essential to demographers in seeking to understand fertility, to public health practitioners in preventing unwanted childbear-ing and to both groups in promoting a woman's ability to determine whether and when to have children. Accurate measurement of pregnancy intentions is important in understanding fertility-related behaviors, forecasting fertility, estimating unmet need for contraception, understanding the impact of pregnancy intentions on maternal and child health, designing family planning programs and evaluating their effectiveness, and creating and evaluating community-based programs that prevent unintended pregnancy. 1 Pregnancy unintendedness is a complex concept, and has been the subject of recent conceptual and method-ological critiques. 2 Pregnancy intentions are increasingly viewed as encompassing affective, cognitive, cultural and contextual dimensions. Developing a more complete understanding of pregnancy intentions should advance efforts to increase contraceptive use, to prevent unintended pregnancies and to improve the health of women and their children. To provide a scientific foundation for public health efforts to prevent unintended pregnancy, we conducted a review of unintended pregnancy between the fall of 1999 and the spring of 2001 as part of strategic planning activities within the Division of Reproductive Health at the Centers for Disease Control and Prevention (CDC). We reviewed the published and unpublished literature, consulted with experts in reproductive health and held several joint meetings with the Demographic and Behavioral Research Branch of the National Institute of Child Health and Human Development , and the Office of Population Affairs of the Department of Health and Human Services. We used standard scientific search engines, such as Medline, to find relevant articles published since 1975, and identified older references from bibliographies contained in recent articles; academic experts and federal officials helped to identify unpublished reports. This comment summarizes our findings and incorporates insights gained from the joint meetings and the strategic planning process. CURRENT DEFINITIONS AND MEASURES Conventional measures of unintended pregnancy are designed to reflect a woman's intentions before she became pregnant. 3 Unintended pregnancies are pregnancies that are reported to have been either unwanted (i.e., they occurred when no children, or no more children, were desired) or mistimed (i.e., they occurred earlier than desired). In contrast, pregnancies are described as intended if they are reported to have happened at the \" right time \" 4 or later than desired (because of infertility or difficulties in conceiving). A concept related to unintended pregnancy is unplanned pregnancy—one that occurred when …", "title": "" }, { "docid": "0e53caa9c6464038015a6e83b8953d92", "text": "Many interactive rendering algorithms require operations on multiple fragments (i.e., ray intersections) at the same pixel location: however, current Graphics Processing Units (GPUs) capture only a single fragment per pixel. 
Example effects include transparency, translucency, constructive solid geometry, depth-of-field, direct volume rendering, and isosurface visualization. With current GPUs, programmers implement these effects using multiple passes over the scene geometry, often substantially limiting performance. This paper introduces a generalization of the Z-buffer, called the k-buffer, that makes it possible to efficiently implement such algorithms with only a single geometry pass, yet requires only a small, fixed amount of additional memory. The k-buffer uses framebuffer memory as a read-modify-write (RMW) pool of k entries whose use is programmatically defined by a small k-buffer program. We present two proposals for adding k-buffer support to future GPUs and demonstrate numerous multiple-fragment, single-pass graphics algorithms running on both a software-simulated k-buffer and a k-buffer implemented with current GPUs. The goal of this work is to demonstrate the large number of graphics algorithms that the k-buffer enables and that the efficiency is superior to current multipass approaches.", "title": "" }, { "docid": "b64a2e6bb533043a48b7840b72f71331", "text": "Autonomous long range navigation in partially known planetary-like terrain is an open challenge for robotics. Navigating several hundreds of meters without any human intervention requires the robot to be able to build various representations of its environment, to plan and execute trajectories according to the kind of terrain traversed, to localize itself as it moves, and to schedule, start, control and interrupt these various activities. In this paper, we briefly describe some functionalities that are currently running on board the Marsokhod model robot Lama at LAAS/CNRS. We then focus on the necessity to integrate various instances of the perception and decision functionalities, and on the difficulties raised by this integration.", "title": "" }, { "docid": "b990e62cb73c0f6c9dd9d945f72bb047", "text": "Admissible heuristics are an important class of heuristics worth discovering: they guarantee shortest path solutions in search algorithms such as A* and they guarantee less expensively produced, but boundedly longer solutions in search algorithms such as dynamic weighting. Unfortunately, effective (accurate and cheap to compute) admissible heuristics can take years for people to discover. Several researchers have suggested that certain transformations of a problem can be used to generate admissible heuristics. This article defines a more general class of transformations, called abstractions, that are guaranteed to generate only admissible heuristics. It also describes and evaluates an implemented program (Absolver II) that uses a means-ends analysis search control strategy to discover abstracted problems that result in effective admissible heuristics. Absolver II discovered several well-known and a few novel admissible heuristics, including the first known effective one for Rubik's Cube, thus concretely demonstrating that effective admissible heuristics can be tractably discovered by a machine.", "title": "" }, { "docid": "bee4b2dfab47848e8429d4b4617ec9e5", "text": "Benefit from the quick development of deep learning techniques, salient object detection has achieved remarkable progresses recently. However, there still exists following two major challenges that hinder its application in embedded devices, low resolution output and heavy model weight. 
To this end, this paper presents an accurate yet compact deep network for efficient salient object detection. More specifically, given a coarse saliency prediction in the deepest layer, we first employ residual learning to learn side-output residual features for saliency refinement, which can be achieved with very limited convolutional parameters while keep accuracy. Secondly, we further propose reverse attention to guide such side-output residual learning in a top-down manner. By erasing the current predicted salient regions from side-output features, the network can eventually explore the missing object parts and details which results in high resolution and accuracy. Experiments on six benchmark datasets demonstrate that the proposed approach compares favorably against state-of-the-art methods, and with advantages in terms of simplicity, efficiency (45 FPS) and model size (81 MB).", "title": "" }, { "docid": "e840e1e77a8a5c2c187c79eda9143ade", "text": "The aim of this study is to find out the customer’s satisfaction with Yemeni Mobile service providers. Th is study examined the relationship between perceived quality, perceived value, customer expectation, and corporate image with customer satisfaction. The result of this study is based on data gathered online from 118 academic staff in public universit ies in Yemen. The study found that the relationship between perceived value, perceived quality and corporate image have a significant positive influence on customer satisfaction, whereas customer expectation has positive but without statistical significance.", "title": "" }, { "docid": "41fe7d2febb05a48daf69b4a41c77251", "text": "Multi-objective evolutionary algorithms for the construction of neural ensembles is a relatively new area of research. We recently proposed an ensemble learning algorithm called DIVACE (DIVerse and ACcurate Ensemble learning algorithm). It was shown that DIVACE tries to find an optimal trade-off between diversity and accuracy as it searches for an ensemble for some particular pattern recognition task by treating these two objectives explicitly separately. A detailed discussion of DIVACE together with further experimental studies form the essence of this paper. A new diversity measure which we call Pairwise Failure Crediting (PFC) is proposed. This measure forms one of the two evolutionary pressures being exerted explicitly in DIVACE. Experiments with this diversity measure as well as comparisons with previously studied approaches are hence considered. Detailed analysis of the results show that DIVACE, as a concept, has promise. Mathematical Subject Classification (2000): 68T05, 68Q32, 68Q10.", "title": "" }, { "docid": "3172304147c13068b6cec8fd252cda5e", "text": "Widespread growth of open wireless hotspots has made it easy to carry out man-in-the-middle attacks and impersonate web sites. Although HTTPS can be used to prevent such attacks, its universal adoption is hindered by its performance cost and its inability to leverage caching at intermediate servers (such as CDN servers and caching proxies) while maintaining end-to-end security. To complement HTTPS, we revive an old idea from SHTTP, a protocol that offers end-to-end web integrity without confidentiality. We name the protocol HTTPi and give it an efficient design that is easy to deploy for today’s web. 
In particular, we tackle several previously-unidentified challenges, such as supporting progressive page loading on the client’s browser, handling mixed content, and defining access control policies among HTTP, HTTPi, and HTTPS content from the same domain. Our prototyping and evaluation experience show that HTTPi incurs negligible performance overhead over HTTP, can leverage existing web infrastructure such as CDNs or caching proxies without any modifications to them, and can make many of the mixed-content problems in existing HTTPS web sites easily go away. Based on this experience, we advocate browser and web server vendors to adopt HTTPi.", "title": "" }, { "docid": "d7ebfe6e0f0fa07c5e22d24c69aca13e", "text": "Malware programs that incorporate trigger-based behavior initiate malicious activities based on conditions satisfied only by specific inputs. State-of-the-art malware analyzers discover code guarded by triggers via multiple path exploration, symbolic execution, or forced conditional execution, all without knowing the trigger inputs. We present a malware obfuscation technique that automatically conceals specific trigger-based behavior from these malware analyzers. Our technique automatically transforms a program by encrypting code that is conditionally dependent on an input value with a key derived from the input and then removing the key from the program. We have implemented a compiler-level tool that takes a malware source program and automatically generates an obfuscated binary. Experiments on various existing malware samples show that our tool can hide a significant portion of trigger based code. We provide insight into the strengths, weaknesses, and possible ways to strengthen current analysis approaches in order to defeat this malware obfuscation technique.", "title": "" }, { "docid": "1e306a31f5a9becadc267a895be40335", "text": "Knowledge has been lately recognized as one of the most important assets of organizations. Can information technology help the growth and the sustainment of organizational knowledge? The answer is yes, if care is taken to remember that IT here is just a part of the story (corporate culture and work practices being equally relevant) and that the information technologies best suited for this purpose should be expressly designed with knowledge management in view. This special issue of the Journal of Universal Computer Science contains a selection f papers from the First Conference on Practical Applications of Knowledge Management. Each paper describes a specific type of information technology suitable for the support of different aspects of knowledge management.", "title": "" }, { "docid": "3160dea1a6ebd67d57c0d304e17f4882", "text": "A Concept Inventory (CI) is a set of multiple choice questions used to reveal student's misconceptions related to some topic. Each available choice (besides the correct choice) is a distractor that is carefully developed to address a specific misunderstanding, a student wrong thought. In computer science introductory programming courses, the development of CIs is still beginning, with many topics requiring further study and analysis. We identify, through analysis of open-ended exams and instructor interviews, introductory programming course misconceptions related to function parameter use and scope, variables, recursion, iteration, structures, pointers and boolean expressions. We categorize these misconceptions and define high-quality distractors founded in words used by students in their responses to exam questions. 
We discuss the difficulty of assessing introductory programming misconceptions independent of the syntax of a language and we present a detailed discussion of two pilot CIs related to parameters: an open-ended question (to help identify new misunderstandings) and a multiple choice question with suggested distractors that we identified.", "title": "" }, { "docid": "8d890dba24fc248ee37653aad471713f", "text": "We consider the problem of constructing a spanning tree for a graph G = (V,E) with n vertices whose maximal degree is the smallest among all spanning trees of G. This problem is easily shown to be NP-hard. We describe an iterative polynomial time approximation algorithm for this problem. This algorithm computes a spanning tree whose maximal degree is at most O(Δ + log n), where Δ is the degree of some optimal tree. The result is generalized to the case where only some vertices need to be connected (Steiner case) and to the case of directed graphs. It is then shown that our algorithm can be refined to produce a spanning tree of degree at most Δ + 1. Unless P = NP, this is the best bound achievable in polynomial time.", "title": "" }, { "docid": "87eb69d6404bf42612806a5e6d67e7bb", "text": "In this paper we present an analysis of an AltaVista Search Engine query log consisting of approximately 1 billion entries for search requests over a period of six weeks. This represents almost 285 million user sessions, each an attempt to fill a single information need. We present an analysis of individual queries, query duplication, and query sessions. We also present results of a correlation analysis of the log entries, studying the interaction of terms within queries. Our data supports the conjecture that web users differ significantly from the user assumed in the standard information retrieval literature. Specifically, we show that web users type in short queries, mostly look at the first 10 results only, and seldom modify the query. This suggests that traditional information retrieval techniques may not work well for answering web search requests. The correlation analysis showed that the most highly correlated items are constituents of phrases. This result indicates it may be useful for search engines to consider search terms as parts of phrases even if the user did not explicitly specify them as such.", "title": "" } ]
scidocsrr
bb3b89dd1acf40f12a44eab4bf91d616
Big data and digital forensics
[ { "docid": "dc8ffc5fd84b3af4cc88d75f7bc88f77", "text": "Digital crimes is big problem due to large numbers of data access and insufficient attack analysis techniques so there is the need for improvements in existing digital forensics techniques. With growing size of storage capacity these digital forensic investigations are getting more difficult. Visualization allows for displaying large amounts of data at once. Integrated visualization of data distribution bars and rules, visualization of behaviour and comprehensive analysis, maps allow user to analyze different rules and data at different level, with any kind of anomaly in data. Data mining techniques helps to improve the process of visualization. These papers give comprehensive review on various visualization techniques with various anomaly detection techniques.", "title": "" } ]
[ { "docid": "5931cb779b24065c5ef48451bc46fac4", "text": "In order to provide a material that can facilitate the modeling and construction of a Furuta pendulum, this paper presents the deduction, step-by-step, of a Furuta pendulum mathematical model by using the Lagrange equations of motion. Later, a mechanical design of the Furuta pendulum is carried out via the software Solid Works and subsequently a prototype is built. Numerical simulations of the Furuta pendulum model are performed via Mat lab-Simulink. Furthermore, the Furuta pendulum prototype built is experimentally tested by using Mat lab-Simulink, Control Desk, and a DS1104 board from dSPACE.", "title": "" }, { "docid": "938afbc53340a3aa6e454d17789bf021", "text": "BACKGROUND\nAll cultural groups in the world place paramount value on interpersonal trust. Existing research suggests that although accurate judgments of another's trustworthiness require extensive interactions with the person, we often make trustworthiness judgments based on facial cues on the first encounter. However, little is known about what facial cues are used for such judgments and what the bases are on which individuals make their trustworthiness judgments.\n\n\nMETHODOLOGY/PRINCIPAL FINDINGS\nIn the present study, we tested the hypothesis that individuals may use facial attractiveness cues as a \"shortcut\" for judging another's trustworthiness due to the lack of other more informative and in-depth information about trustworthiness. Using data-driven statistical models of 3D Caucasian faces, we compared facial cues used for judging the trustworthiness of Caucasian faces by Caucasian participants who were highly experienced with Caucasian faces, and the facial cues used by Chinese participants who were unfamiliar with Caucasian faces. We found that Chinese and Caucasian participants used similar facial cues to judge trustworthiness. Also, both Chinese and Caucasian participants used almost identical facial cues for judging trustworthiness and attractiveness.\n\n\nCONCLUSIONS/SIGNIFICANCE\nThe results suggest that without opportunities to interact with another person extensively, we use the less racially specific and more universal attractiveness cues as a \"shortcut\" for trustworthiness judgments.", "title": "" }, { "docid": "235ff4cb1c0091f95caffd528ed95755", "text": "Natural language is a common type of input for data processing systems. Therefore, it is often required to have a large testing data set of this type. In this context, the task to automatically generate natural language texts, which maintain the properties of real texts is desirable. However, current synthetic data generators do not capture natural language text data sufficiently. In this paper, we present a preliminary study on different generative models for text generation, which maintain specific properties of natural language text, i.e., the sentiment of a review text. 
In a series of experiments using different data sets and sentiment analysis methods, we show that generative models can generate texts with a specific sentiment and that hidden Markov model based text generation achieves less accuracy than Markov chain based text generation, but can generate a higher number of distinct texts.", "title": "" }, { "docid": "2a8c5de43ce73c360a5418709a504fa8", "text": "The INTERSPEECH 2018 Computational Paralinguistics Challenge addresses four different problems for the first time in a research competition under well-defined conditions: In the Atypical Affect Sub-Challenge, four basic emotions annotated in the speech of handicapped subjects have to be classified; in the Self-Assessed Affect Sub-Challenge, valence scores given by the speakers themselves are used for a three-class classification problem; in the Crying Sub-Challenge, three types of infant vocalisations have to be told apart; and in the Heart Beats Sub-Challenge, three different types of heart beats have to be determined. We describe the Sub-Challenges, their conditions, and baseline feature extraction and classifiers, which include data-learnt (supervised) feature representations by end-to-end learning, the ‘usual’ ComParE and BoAW features, and deep unsupervised representation learning using the AUDEEP toolkit for the first time in the challenge series.", "title": "" }, { "docid": "f8622acd0d0c2811b6ae2d0b5d4c9a6b", "text": "Squalene is a linear triterpene that is extensively utilized as a principal component of parenteral emulsions for drug and vaccine delivery. In this review, the chemical structure and sources of squalene are presented. Moreover, the physicochemical and biological properties of squalene-containing emulsions are evaluated in the context of parenteral formulations. Historical and current parenteral emulsion products containing squalene or squalane are discussed. The safety of squalene-based products is also addressed. Finally, analytical techniques for characterization of squalene emulsions are examined.", "title": "" }, { "docid": "b657aeceeee6c29330cf45dcc40d6198", "text": "A small form-factor 60-GHz SiGe BiCMOS radio with two antennas-in-package is presented. The fully-integrated feature-rich transceiver provides a complete RF solution for mobile WiGig/IEEE 802.11ad applications.", "title": "" }, { "docid": "72e6c3c800cd981b1e1dd379d3bbf304", "text": "Brain activity recorded noninvasively is sufficient to control a mobile robot if advanced robotics is used in combination with asynchronous electroencephalogram (EEG) analysis and machine learning techniques. Until now brain-actuated control has mainly relied on implanted electrodes, since EEG-based systems have been considered too slow for controlling rapid and complex sequences of movements. We show that two human subjects successfully moved a robot between several rooms by mental control only, using an EEG-based brain-machine interface that recognized three mental states. Mental control was comparable to manual control on the same task with a performance ratio of 0.74.", "title": "" }, { "docid": "8c8ece47107bc1580e925e42d266ec87", "text": "How do brains shape social networks, and how do social ties shape the brain? Social networks are complex webs by which ideas spread among people. Brains comprise webs by which information is processed and transmitted among neural units. 
While brain activity and structure offer biological mechanisms for human behaviors, social networks offer external inducers or modulators of those behaviors. Together, these two axes represent fundamental contributors to human experience. Integrating foundational knowledge from social and developmental psychology and sociology on how individuals function within dyads, groups, and societies with recent advances in network neuroscience can offer new insights into both domains. Here, we use the example of how ideas and behaviors spread to illustrate the potential of multilayer network models.", "title": "" }, { "docid": "44de39859665488f8df950007d7a01c6", "text": "Topic models provide insights into document collections, and their supervised extensions also capture associated document-level metadata such as sentiment. However, inferring such models from data is often slow and cannot scale to big data. We build upon the “anchor” method for learning topic models to capture the relationship between metadata and latent topics by extending the vector-space representation of word-cooccurrence to include metadataspecific dimensions. These additional dimensions reveal new anchor words that reflect specific combinations of metadata and topic. We show that these new latent representations predict sentiment as accurately as supervised topic models, and we find these representations more quickly without sacrificing interpretability. Topic models were introduced in an unsupervised setting (Blei et al., 2003), aiding in the discovery of topical structure in text: large corpora can be distilled into human-interpretable themes that facilitate quick understanding. In addition to illuminating document collections for humans, topic models have increasingly been used for automatic downstream applications such as sentiment analysis (Titov and McDonald, 2008; Paul and Girju, 2010; Nguyen et al., 2013). Unfortunately, the structure discovered by unsupervised topic models does not necessarily constitute the best set of features for tasks such as sentiment analysis. Consider a topic model trained on Amazon product reviews. A topic model might discover a topic about vampire romance. However, we often want to go deeper, discovering facets of a topic that reflect topic-specific sentiment, e.g., “buffy” and “spike” for positive sentiment vs. “twilight” and “cullen” for negative sentiment. Techniques for discovering such associations, called supervised topic models (Section 2), both produce interpretable topics and predict metadata values. While unsupervised topic models now have scalable inference strategies (Hoffman et al., 2013; Zhai et al., 2012), supervised topic model inference has not received as much attention and often scales poorly. The anchor algorithm is a fast, scalable unsupervised approach for finding “anchor words”—precise words with unique co-occurrence patterns that can define the topics of a collection of documents. We augment the anchor algorithm to find supervised sentiment-specific anchor words (Section 3). Our algorithm is faster and just as effective as traditional schemes for supervised topic modeling (Section 4). 1 Anchors: Speedy Unsupervised Models The anchor algorithm (Arora et al., 2013) begins with a V × V matrix Q̄ of word co-occurrences, where V is the size of the vocabulary. Each word type defines a vector Q̄i,· of length V so that Q̄i,j encodes the conditional probability of seeing word j given that word i has already been seen. 
Spectral methods (Anandkumar et al., 2012) and the anchor algorithm are fast alternatives to traditional topic model inference schemes because they can discover topics via these summary statistics (quadratic in the number of types) rather than examining the whole dataset (proportional to the much larger number of tokens). The anchor algorithm takes its name from the idea of anchor words—words which unambiguously identify a particular topic. For instance, “wicket” might be an anchor word for the cricket topic. Thus, for any anchor word a, Q̄a,· will look like a topic distribution. Q̄wicket,· will have high probability for “bowl”, “century”, “pitch”, and “bat”; these words are related to cricket, but they cannot be anchor words because they are also related to other topics. Because these other non-anchor words could be topically ambiguous, their co-occurrence must be explained through some combination of anchor words; thus for non-anchor word i,", "title": "" }, { "docid": "8b67be5c3adac9bcdbc1aa836708987d", "text": "The adaptive toolbox is a Darwinian-inspired theory that conceives of the mind as a modular system that is composed of heuristics, their building blocks, and evolved capacities. The study of the adaptive toolbox is descriptive and analyzes the selection and structure of heuristics in social and physical environments. The study of ecological rationality is prescriptive and identifies the structure of environments in which specific heuristics either succeed or fail. Results have been used for designing heuristics and environments to improve professional decision making in the real world.", "title": "" }, { "docid": "f9d1777be40b879aee2f6e810422d266", "text": "This study intended to examine the effect of ground colour on memory performance. Most of the past research on colour-memory relationship focus on the colour of the figure rather than the background. Based on these evidences, this study try to extend the previous works to the ground colour and how its effect memory performance based on recall rate. 90 undergraduate students will participate in this study. The experimental design will be used is multiple independent group experimental design. Fifty geometrical shapes will be used in the study phase with measurement of figure, 4.74cm x 3.39cm and ground, 19cm x 25cm. The participants will be measured on numbers of shape that are being recall in test phase in three experimental conditions, coloured background, non-coloured background and mix between coloured and non-coloured background slides condition. It is hypothesized that shape with coloured background will be recalled better than shape with non-coloured background. Analysis of variance (ANOVA) statistical procedure will be used to analyse the data of recall performance between three experimental groups using Statistical Package for Social Sciences (SPSS 17.0) to examine the cause and effect relationship between those variables.", "title": "" }, { "docid": "874e60d3f37aa01d201294ed247eb6a4", "text": "FokI is a type IIs restriction endonuclease comprised of a DNA recognition domain and a catalytic domain. The structural similarity of the FokI catalytic domain to the type II restriction endonuclease BamHI monomer suggested that the FokI catalytic domains may dimerize. In addition, the FokI structure, presented in an accompanying paper in this issue of Proceedings, reveals a dimerization interface between catalytic domains. We provide evidence here that FokI catalytic domain must dimerize for DNA cleavage to occur. 
First, we show that the rate of DNA cleavage catalyzed by various concentrations of FokI are not directly proportional to the protein concentration, suggesting a cooperative effect for DNA cleavage. Second, we constructed a FokI variant, FokN13Y, which is unable to bind the FokI recognition sequence but when mixed with wild-type FokI increases the rate of DNA cleavage. Additionally, the FokI catalytic domain that lacks the DNA binding domain was shown to increase the rate of wild-type FokI cleavage of DNA. We also constructed an FokI variant, FokD483A, R487A, which should be defective for dimerization because the altered residues reside at the putative dimerization interface. Consistent with the FokI dimerization model, the variant FokD483A, R487A revealed greatly impaired DNA cleavage. Based on our work and previous reports, we discuss a pathway of DNA binding, dimerization, and cleavage by FokI endonuclease.", "title": "" }, { "docid": "cd92f750461aff9877853f483cf09ecf", "text": "Designing and maintaining Web applications is one of the major challenges for the software industry of the year 2000. In this paper we present Web Modeling Language (WebML), a notation for specifying complex Web sites at the conceptual level. WebML enables the high-level description of a Web site under distinct orthogonal dimensions: its data content (structural model), the pages that compose it (composition model), the topology of links between pages (navigation model), the layout and graphic requirements for page rendering (presentation model), and the customization features for one-to-one content delivery (personalization model). All the concepts of WebML are associated with a graphic notation and a textual XML syntax. WebML specifications are independent of both the client-side language used for delivering the application to users, and of the server-side platform used to bind data to pages, but they can be effectively used to produce a site implementation in a specific technological setting. WebML guarantees a model-driven approach to Web site development, which is a key factor for defining a novel generation of CASE tools for the construction of complex sites, supporting advanced features like multi-device access, personalization, and evolution management. The WebML language and its accompanying design method are fully implemented in a pre-competitive Web design tool suite, called ToriiSoft.", "title": "" }, { "docid": "42ebaee6fdbfc487ae2a21e8a55dd3e4", "text": "Human motion prediction, forecasting human motion in a few milliseconds conditioning on a historical 3D skeleton sequence, is a long-standing problem in computer vision and robotic vision. Existing forecasting algorithms rely on extensive annotated motion capture data and are brittle to novel actions. This paper addresses the problem of few-shot human motion prediction, in the spirit of the recent progress on few-shot learning and meta-learning. More precisely, our approach is based on the insight that having a good generalization from few examples relies on both a generic initial model and an effective strategy for adapting this model to novel tasks. To accomplish this, we propose proactive and adaptive meta-learning (PAML) that introduces a novel combination of model-agnostic meta-learning and model regression networks and unifies them into an integrated, end-to-end framework. 
By doing so, our meta-learner produces a generic initial model through aggregating contextual information from a variety of prediction tasks, while effectively adapting this model for use as a task-specific one by leveraging learning-to-learn knowledge about how to transform few-shot model parameters to many-shot model parameters. The resulting PAML predictor model significantly improves the prediction performance on the heavily benchmarked H3.6M dataset in the small-sample size regime.", "title": "" }, { "docid": "eda6795cb79e912a7818d9970e8ca165", "text": "This study aimed to examine the relationship between maximum leg extension strength and sprinting performance in youth elite male soccer players. Sixty-three youth players (12.5 ± 1.3 years) performed 5 m, flying 15 m and 20 m sprint tests and a zigzag agility test on a grass field using timing gates. Two days later, subjects performed a one-repetition maximum leg extension test (79.3 ± 26.9 kg). Weak to strong correlations were found between leg extension strength and the time to perform 5 m (r = -0.39, p = 0.001), flying 15 m (r = -0.72, p < 0.001) and 20 m (r = -0.67, p < 0.001) sprints; between body mass and 5 m (r = -0.43, p < 0.001), flying 15 m (r = -0.75, p < 0.001), 20 m (r = -0.65, p < 0.001) sprints and agility (r =-0.29, p < 0.001); and between height and 5 m (r = -0.33, p < 0.01) and flying 15 m (r = -0.74, p < 0.001) sprints. Our results show that leg muscle strength and anthropometric variables strongly correlate with sprinting ability. This suggests that anthropometric characteristics should be considered to compare among youth players, and that youth players should undergo strength training to improve running speed.", "title": "" }, { "docid": "61bde9866c99e98aac813a9410d33189", "text": "Steganography is an art and science of writing hidden messages in such a way that no one apart from the intended recipient knows the existence of the message. The maximum number of bits that can be used for LSB audio steganography without causing noticeable perceptual distortion to the host audio signal is 4 LSBs, if 16 bits per sample audio sequences are used. We propose two novel approaches of substitution technique of audio steganography that improves the capacity of cover audio for embedding additional data. Using these methods, message bits are embedded into multiple and variable LSBs. These methods utilize upto 7 LSBs for embedding data. Results show that both these methods improve capacity of data hiding of cover audio by 35% to 70% as compared to the standerd LSB algorithm with 4 LSBs used for data embedding. And using encryption and decryption techniques performing cryptography. So for this RSA algorithm used. Keywords: Information hiding, Audio steganography, Least significant bit (LSB), Most significant bit (MSB)", "title": "" }, { "docid": "64ae34c959e0e4c9a6a155eeb334b3ea", "text": "Most conventional sentence similarity methods only focus on similar parts of two input sentences, and simply ignore the dissimilar parts, which usually give us some clues and semantic meanings about the sentences. In this work, we propose a model to take into account both the similarities and dissimilarities by decomposing and composing lexical semantics over sentences. The model represents each word as a vector, and calculates a semantic matching vector for each word based on all words in the other sentence. Then, each word vector is decomposed into a similar component and a dissimilar component based on the semantic matching vector. 
After this, a twochannel CNN model is employed to capture features by composing the similar and dissimilar components. Finally, a similarity score is estimated over the composed feature vectors. Experimental results show that our model gets the state-of-the-art performance on the answer sentence selection task, and achieves a comparable result on the paraphrase identification task.", "title": "" }, { "docid": "19ea9b23f8757804c23c21293834ff3f", "text": "We try to address the problem of document layout understanding using a simple algorithm which generalizes across multiple domains while training on just few examples per domain. We approach this problem via supervised object detection method and propose a methodology to overcome the requirement of large datasets. We use the concept of transfer learning by pre-training our object detector on a simple artificial (source) dataset and fine-tuning it on a tiny domain specific (target) dataset. We show that this methodology works for multiple domains with training samples as less as 10 documents. We demonstrate the effect of each component of the methodology in the end result and show the superiority of this methodology over simple object detectors.", "title": "" }, { "docid": "b6cc41414ad1dae4ccd2fcf4df1bd3b6", "text": "Bio-implantable sensors using radio-frequency telemetry links that enable the continuous monitoring and recording of physiological data are receiving a great deal of attention. The objective of this paper is to study the feasibility of an implantable sensor for tissue characterization. This has been done by querying an LC sensor surrounded by dispersive tissues by an external antenna. The resonant frequency of the sensor is monitored by measuring the input impedance of the antenna, and correlated to the desired quantities. Using an equivalent circuit model of the sensor that accounts for the properties of the encapsulating tissue, analytical expressions have been developed for the extraction of the tissue permittivity and conductivity. Finally, experimental validation has been performed with a telemetry link that consists of a loop antenna and a fabricated LC sensor immersed in single and multiple dispersive phantom materials.", "title": "" }, { "docid": "9f84ec96cdb45bcf333db9f9459a3d86", "text": "A novel printed crossed dipole with broad axial ratio (AR) bandwidth is proposed. The proposed dipole consists of two dipoles crossed through a 90°phase delay line, which produces one minimum AR point due to the sequentially rotated configuration and four parasitic loops, which generate one additional minimum AR point. By combining these two minimum AR points, the proposed dipole achieves a broadband circularly polarized (CP) performance. The proposed antenna has not only a broad 3 dB AR bandwidth of 28.6% (0.75 GHz, 2.25-3.0 GHz) with respect to the CP center frequency 2.625 GHz, but also a broad impedance bandwidth for a voltage standing wave ratio (VSWR) ≤2 of 38.2% (0.93 GHz, 1.97-2.9 GHz) centered at 2.435 GHz and a peak CP gain of 8.34 dBic. Its arrays of 1 &times; 2 and 2 &times; 2 arrangement yield 3 dB AR bandwidths of 50.7% (1.36 GHz, 2-3.36 GHz) with respect to the CP center frequency, 2.68 GHz, and 56.4% (1.53 GHz, 1.95-3.48 GHz) at the CP center frequency, 2.715 GHz, respectively. This paper deals with the designs and experimental results of the proposed crossed dipole with parasitic loop resonators and its arrays.", "title": "" } ]
scidocsrr
4b4b0c408c230f46882a7e01e72cd029
Comparison of open-source cloud management platforms: OpenStack and OpenNebula
[ { "docid": "84cb130679353dbdeff24100409f57fe", "text": "Cloud computing has become another buzzword after Web 2.0. However, there are dozens of different definitions for cloud computing and there seems to be no consensus on what a cloud is. On the other hand, cloud computing is not a completely new concept; it has intricate connection to the relatively new but thirteen-year established grid computing paradigm, and other relevant technologies such as utility computing, cluster computing, and distributed systems in general. This paper strives to compare and contrast cloud computing with grid computing from various angles and give insights into the essential characteristics of both.", "title": "" }, { "docid": "52348982bb1a9dcea3070d9b556ef987", "text": "Cloud computing is the development of parallel computing, distributed computing and grid computing. It has been one of the most hot research topics. Now many corporations have involved in the cloud computing related techniques and many cloud computing platforms have been put forward. This is a favorable situation to study and application of cloud computing related techniques. Though interesting, there are also some problems for so many flatforms. For to a novice or user with little knowledge about cloud computing, it is still very hard to make a reasonable choice. What differences are there for different cloud computing platforms and what characteristics and advantages each has? To answer these problems, the characteristics, architectures and applications of several popular cloud computing platforms are analyzed and discussed in detail. From the comparison of these platforms, users can better understand the different cloud platforms and more reasonablely choose what they want.", "title": "" } ]
[ { "docid": "2a4822a0cd5022b0ca6f603b2279933d", "text": "The products reviews are increasingly used by individuals and organizations for purchase and business decisions. Driven by the desire of profit, spammers produce synthesized reviews to promote some products or demote competitors products. So deceptive opinion spam detection has attracted significant attention from both business and research communities in recent years. Existing approaches mainly focus on traditional discrete features, which are based on linguistic and psychological cues. However, these methods fail to encode the semantic meaning of a document from the discourse perspective, which limits the performance. In this work, we empirically explore a neural network model to learn document-level representation for detecting deceptive opinion spam. First, the model learns sentence representation with convolutional neural network. Then, sentence representations are combined using a gated recurrent neural network, which can model discourse information and yield a document vector. Finally, the document representations are directly used as features to identify deceptive opinion spam. Based on three domains datasets, the results on in-domain and cross-domain experiments show that our proposed method outperforms state-of-the-art methods. © 2017 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "3bfeb0096c0255aee35001c23acb2057", "text": "Tensegrity structures, isolated solid rods connected by tensile cables, are of interest in the field of soft robotics due to their flexible and robust nature. This makes them suitable for uneven and unpredictable environments in which traditional robots struggle. The compliant structure also ensures that the robot will not injure humans or delicate equipment in co-robotic applications [1]. A 6-bar tensegrity structure is being used as the basis for a new generation of robotic landers and rovers for space exploration [1]. In addition to a soft tensegrity structure, we are also exploring use of soft sensors as an integral part of the compliant elements. Fig. 1 shows an example of a 6-bar tensegrity structure, with integrated liquid metalembedded hyperelastic strain sensors as the 24 tensile components. For this tensegrity, the strain sensors are primarily composed of a silicone elastomer with embedded microchannels filled with conductive liquid metal (eutectic gallium indium alloy (eGaIn), Sigma-Aldrich) (fig.2). As the sensor is elongated, the resistance of the eGaIn channel will increase due to the decreased microchannel cross-sectional area and the increased microchannel length [2]. The primary functions of this hyperelastic sensor tensegrity are model validation, feedback control, and structure analysis under payload. Feedback from the sensors can be used for experimental validation of existing models of tensegrity structures and dynamics, such as for the NASA Tensegrity Robotics Toolkit [3]. In addition, the readings from the sensors can provide distance changed between the ends of the bars, which can be used as a state estimator for UC Berkeley’s rapidly prototyped tensegrity robot to perform feedback control [1]. Furthermore, this physical model allows us to observe and record the force distribution and structure deformation with different payload conditions. Currently, we are exploring the possibility of integrating shape memory alloys into the hyperelastic sensors, which can provide the benefit of both actuation and sensing in a compact module. 
Preliminary tests indicate that this combination has the potential to generate enough force and displacement to achieve punctuated rolling motion for the 6-bar tensegrity structure.", "title": "" }, { "docid": "1e5073e73c371f1682d95bb3eedaf7f4", "text": "Investigation into robot-assisted intervention for children with autism spectrum disorder (ASD) has gained momentum in recent years. Therapists involved in interventions must overcome the communication impairments generally exhibited by children with ASD by adeptly inferring the affective cues of the children to adjust the intervention accordingly. Similarly, a robot must also be able to understand the affective needs of these children-an ability that the current robot-assisted ASD intervention systems lack-to achieve effective interaction that addresses the role of affective states in human-robot interaction and intervention practice. In this paper, we present a physiology-based affect-inference mechanism for robot-assisted intervention where the robot can detect the affective states of a child with ASD as discerned by a therapist and adapt its behaviors accordingly. This paper is the first step toward developing ldquounderstandingrdquo robots for use in future ASD intervention. Experimental results with six children with ASD from a proof-of-concept experiment (i.e., a robot-based basketball game) are presented. The robot learned the individual liking level of each child with regard to the game configuration and selected appropriate behaviors to present the task at his/her preferred liking level. Results show that the robot automatically predicted individual liking level in real time with 81.1% accuracy. This is the first time, to our knowledge, that the affective states of children with ASD have been detected via a physiology-based affect recognition technique in real time. This is also the first time that the impact of affect-sensitive closed-loop interaction between a robot and a child with ASD has been demonstrated experimentally.", "title": "" }, { "docid": "116ae1a8d8d8cb5a776ab665a6fc1c8c", "text": "A low noise transimpedance amplifier (TIA) is used in radiation detectors to transform the current pulse produced by a photo-sensitive device into an output voltage pulse with a specified amplitude and shape. We consider here the specifications of a PET (positron emission tomography) system. We review the traditional approach, feedback TIA, using an operational amplifier with feedback, and we investigate two alternative circuits: the common-gate TIA, and the regulated cascode TIA. We derive the transimpedance function (the poles of which determine the pulse shaping); we identify the transistor in each circuit that has the dominant noise source, and we obtain closed-form equations for the rms output noise voltage. We find that the common-gate TIA has high noise, but the regulated cascode TIA has the same dominant noise contribution as the feedback TIA, if the same maximum transconductance value is considered. A circuit prototype of a regulated cascode TIA is designed in a 0.35 μm CMOS technology, to validate the theoretical results by simulation and by measurement.", "title": "" }, { "docid": "9d34171c2fcc8e36b2fb907fe63fc08d", "text": "A novel approach to view-based eye gaze tracking for human computer interface (HCI) is presented. The proposed method combines different techniques to address the problems of head motion, illumination and usability in the framework of low cost applications. 
Feature detection and tracking algorithms have been designed to obtain an automatic setup and strengthen the robustness to light conditions. An extensive analysis of neural solutions has been performed to deal with the non-linearity associated with gaze mapping under free-head conditions. No specific hardware, such as infrared illumination or high-resolution cameras, is needed, rather a simple commercial webcam working in visible light spectrum suffices. The system is able to classify the gaze direction of the user over a 15-zone graphical interface, with a success rate of 95% and a global accuracy of around 2 degrees , comparable with the vast majority of existing remote gaze trackers.", "title": "" }, { "docid": "00579ac3e9336b60016f931df6ab2c34", "text": "Often presented as competing products on the market of low cost 3D sensors, the Kinect™ and the Leap Motion™ (LM) can actually be complementary in some scenario. We promote, in this paper, the fusion of data acquired by both LM and Kinect sensors to improve hand tracking performances. The sensor fusion is applied to an existing augmented reality system targeting the treatment of phantom limb pain (PLP) in upper limb amputees. With the Kinect we acquire 3D images of the patient in real-time. These images are post-processed to apply a mirror effect along the sagittal plane of the body, before being displayed back to the patient in 3D, giving him the illusion that he has two arms. The patient uses the virtual reconstructed arm to perform given tasks involving interactions with virtual objects. Thanks to the plasticity of the brain, the restored visual feedback of the missing arm allows, in some cases, to reduce the pain intensity. The Leap Motion brings to the system the ability to perform accurate motion tracking of the hand, including the fingers. By registering the position and orientation of the LM in the frame of reference of the Kinect, we make our system able to accurately detect interactions of the hand and the fingers with virtual objects, which will greatly improve the user experience. We also show that the sensor fusion nicely extends the tracking domain by supplying finger positions even when the Kinect sensor fails to acquire the depth values for the hand.", "title": "" }, { "docid": "9f16e90dc9b166682ac9e2a8b54e611a", "text": "Lua is a programming language designed as scripting language, which is fast, lightweight, and suitable for embedded applications. Due to its features, Lua is widely used in the development of games and interactive applications for digital TV. However, during the development phase of such applications, some errors may be introduced, such as deadlock, arithmetic overflow, and division by zero. This paper describes a novel verification approach for software written in Lua, using as backend the Efficient SMTBased Context-Bounded Model Checker (ESBMC). Such an approach, called bounded model checking - Lua (BMCLua), consists in translating Lua programs into ANSI-C source code, which is then verified with ESBMC. Experimental results show that the proposed verification methodology is effective and efficient, when verifying safety properties in Lua programs. The performed experiments have shown that BMCLua produces an ANSI-C code that is more efficient for verification, when compared with other existing approaches. 
To the best of our knowledge, this work is the first that applies bounded model checking to the verification of Lua programs.", "title": "" }, { "docid": "d287a48936f60ac81b1d27f0885b5360", "text": "In this article, we are interested in implementing mixed-criticality real-time embedded applications on a given heterogeneous distributed architecture. Applications have different criticality levels, captured by their Safety-Integrity Level (SIL), and are scheduled using static-cyclic scheduling. According to certification standards, mixed-criticality tasks can be integrated onto the same architecture only if there is enough spatial and temporal separation among them. We consider that the separation is provided by partitioning, such that applications run in separate partitions, and each partition is allocated several time slots on a processor. Tasks of different SILs can share a partition only if they are all elevated to the highest SIL among them. Such elevation leads to increased development costs, which increase dramatically with each SIL. Tasks of higher SILs can be decomposed into redundant structures of lower SIL tasks. We are interested to determine (i) the mapping of tasks to processors, (ii) the assignment of tasks to partitions, (iii) the decomposition of tasks into redundant lower SIL tasks, (iv) the sequence and size of the partition time slots on each processor, and (v) the schedule tables, such that all the applications are schedulable and the development costs are minimized. We have proposed a Tabu Search-based approach to solve this optimization problem. The proposed algorithm has been evaluated using several synthetic and real-life benchmarks.", "title": "" }, { "docid": "d75ebc4041927b525d8f4937c760518e", "text": "Most current term frequency normalization approaches for information retrieval involve the use of parameters. The tuning of these parameters has an important impact on the overall performance of the information retrieval system. Indeed, a small variation in the involved parameter(s) could lead to an important variation in the precision/recall values. Most current tuning approaches are dependent on the document collections. As a consequence, the effective parameter value cannot be obtained for a given new collection without extensive training data. In this paper, we propose a novel and robust method for the tuning of term frequency normalization parameter(s), by measuring the normalization effect on the within document frequency of the query terms. As an illustration, we apply our method on Amati \\& Van Rijsbergen's so-called normalization 2. The experiments for the ad-hoc TREC-6,7,8 tasks and TREC-8,9,10 Web tracks show that the new method is independent of the collections and able to provide reliable and good performance.", "title": "" }, { "docid": "cbdbe103bcc85f76f9e6ac09eed8ea4c", "text": "Using the evidence collection and analysis methodology for Android devices proposed by Martini, Do and Choo (2015), we examined and analyzed seven popular Android cloud-based apps. Firstly, we analyzed each app in order to see what information could be obtained from their private app storage and SD card directories. We collated the information and used it to aid our investigation of each app’s database files and AccountManager data. 
To complete our understanding of the forensic artefacts stored by apps we analyzed, we performed further analysis on the apps to determine if the user’s authentication credentials could be collected for each app based on the information gained in the initial analysis stages. The contributions of this research include a detailed description of artefacts, which are of general forensic interest, for each app analyzed.", "title": "" }, { "docid": "eebf03df49eb4a99f61d371e059ef43e", "text": "In theoretical cognitive science, there is a tension between highly structured models whose parameters have a direct psychological interpretation and highly complex, general-purpose models whose parameters and representations are difficult to interpret. The former typically provide more insight into cognition but the latter often perform better. This tension has recently surfaced in the realm of educational data mining, where a deep learning approach to estimating student proficiency, termed deep knowledge tracing or DKT [17], has demonstrated a stunning performance advantage over the mainstay of the field, Bayesian knowledge tracing or BKT [3].", "title": "" }, { "docid": "7263779123a0894f6d7eb996d631f007", "text": "The sudden infant death syndrome (SIDS) is postulated to result from a failure of homeostatic responses to life-threatening challenges (e.g. asphyxia, hypercapnia) during sleep. The ventral medulla participates in sleep-related homeostatic responses, including chemoreception, arousal, airway reflex control, thermoregulation, respiratory drive, and blood pressure regulation, in part via serotonin and its receptors. The ventral medulla in humans contains the arcuate nucleus, in which we have shown isolated defects in muscarinic and kainate receptor binding in SIDS victims. We also have demonstrated that the arcuate nucleus is anatomically linked to the nucleus raphé obscurus, a medullary region with serotonergic neurons. We tested the hypothesis that serotonergic receptor binding is decreased in both the arcuate nucleus and nucleus raphé obscurus in SIDS victims. Using quantitative autoradiography, 3H-lysergic acid diethylamide (3H-LSD binding) to serotonergic receptors (5-HT1A-D and 5-HT2 subtypes) was measured blinded in 19 brainstem nuclei. Cases were classified as SIDS (n = 52), acute controls (infants who died suddenly and in whom a complete autopsy established a cause of death) (n = 15), or chronic cases with oxygenation disorders (n = 17). Serotonergic binding was significantly lowered in the SIDS victims compared with controls in the arcuate nucleus (SIDS, 6 +/- 1 fmol/mg tissue; acutes, 19 +/- 1; and chronics, 16 +/- 1; p = 0.0001) and n. raphé obscurus (SIDS, 28 +/- 3 fmol/mg tissue; acutes, 66 +/- 6; and chronics, 59 +/- 1; p = 0.0001). Binding, however, was also significantly lower (p < 0.05) in 4 other regions that are integral parts of the medullary raphé/serotonergic system, and/or are derived, like the arcuate nucleus and nucleus raphé obscurus, from the same embryonic anlage (rhombic lip). These data suggest that a larger neuronal network than the arcuate nucleus alone is involved in the pathogenesis of SIDS, that is, a network composed of inter-related serotonergic nuclei of the ventral medulla that are involved in homeostatic mechanisms, and/or are derived from a common embryonic anlage.", "title": "" }, { "docid": "7813dc93e6bcda97768d87e80f8efb2b", "text": "The inclusion of transaction costs is an essential element of any realistic portfolio optimization. 
In this paper, we consider an extension of the standard portfolio problem in which transaction costs are incurred to rebalance an investment portfolio. The Markowitz framework of mean-variance efficiency is used with costs modelled as a percentage of the value transacted. Each security in the portfolio is represented by a pair of continuous decision variables corresponding to the amounts bought and sold. In order to properly represent the variance of the resulting portfolio, it is necessary to rescale by the funds available after paying the transaction costs. We show that the resulting fractional quadratic programming problem can be solved as a quadratic programming problem of size comparable to the model without transaction costs. Computational results for two empirical datasets are presented.", "title": "" }, { "docid": "19b16abf5ec7efe971008291f38de4d4", "text": "Cross-modal retrieval has recently drawn much attention due to the widespread existence of multimodal data. It takes one type of data as the query to retrieve relevant data objects of another type, and generally involves two basic problems: the measure of relevance and coupled feature selection. Most previous methods just focus on solving the first problem. In this paper, we aim to deal with both problems in a novel joint learning framework. To address the first problem, we learn projection matrices to map multimodal data into a common subspace, in which the similarity between different modalities of data can be measured. In the learning procedure, the ℓ2-norm penalties are imposed on the projection matrices separately to solve the second problem, which selects relevant and discriminative features from different feature spaces simultaneously. A multimodal graph regularization term is further imposed on the projected data,which preserves the inter-modality and intra-modality similarity relationships.An iterative algorithm is presented to solve the proposed joint learning problem, along with its convergence analysis. Experimental results on cross-modal retrieval tasks demonstrate that the proposed method outperforms the state-of-the-art subspace approaches.", "title": "" }, { "docid": "f58a66f2caf848341b29094e9d3b0e71", "text": "Since student performance and pass rates in school reflect teaching level of the school and even all education system, it is critical to improve student pass rates and reduce dropout rates. Decision Tree (DT) algorithm and Support Vector Machine (SVM) algorithm in data mining, have been used by researchers to find important student features and predict the student pass rates, however they did not consider the coefficient of initialization, and whether there is a dependency between student features. Therefore, in this study, we propose a new concept: features dependencies, and use the grid search algorithm to optimize DT and SVM, in order to improve the accuracy of the algorithm. Furthermore, we added 10-fold cross-validation to DT and SVM algorithm. The results show the experiment can achieve better results in this work. The purpose of this study is providing assistance to students who have greater difficulties in their studies, and students who are at risk of graduating through data mining techniques.", "title": "" }, { "docid": "8cfeb661397d6716ca7fa9954de81330", "text": "There has been a great amount of work on query-independent summarization of documents. 
However, due to the success of Web search engines query-specific document summarization (query result snippets) has become an important problem, which has received little attention. We present a method to create query-specific summaries by identifying the most query-relevant fragments and combining them using the semantic associations within the document. In particular, we first add structure to the documents in the preprocessing stage and convert them to document graphs. Then, the best summaries are computed by calculating the top spanning trees on the document graphs. We present and experimentally evaluate efficient algorithms that support computing summaries in interactive time. Furthermore, the quality of our summarization method is compared to current approaches using a user survey.", "title": "" }, { "docid": "bc77c4bcc60c3746a791e61951d42c78", "text": "In this paper, a hybrid of indoor ambient light and thermal energy harvesting scheme that uses only one power management circuit to condition the combined output power harvested from both energy sources is proposed to extend the lifetime of the wireless sensor node. By avoiding the use of individual power management circuits for multiple energy sources, the number of components used in the hybrid energy harvesting (HEH) system is reduced and the system form factor, cost and power losses are thus reduced. An efficient microcontroller-based ultra low power management circuit with fixed voltage reference based maximum power point tracking is implemented with closed-loop voltage feedback control to ensure near maximum power transfer from the two energy sources to its connected electronic load over a wide range of operating conditions. From the experimental test results obtained, an average electrical power of 621 μW is harvested by the optimized HEH system at an average indoor solar irradiance of 1010 lux and a thermal gradient of 10 K, which is almost triple of that can be obtained with conventional single-source thermal energy harvesting method.", "title": "" }, { "docid": "8ae12d8ef6e58cb1ac376eb8c11cd15a", "text": "This paper surveys recent technical research on the problems of privacy and security for radio frequency identification (RFID). RFID tags are small, wireless devices that help identify objects and people. Thanks to dropping cost, they are likely to proliferate into the billions in the next several years-and eventually into the trillions. RFID tags track objects in supply chains, and are working their way into the pockets, belongings, and even the bodies of consumers. This survey examines approaches proposed by scientists for privacy protection and integrity assurance in RFID systems, and treats the social and technical context of their work. While geared toward the nonspecialist, the survey may also serve as a reference for specialist readers.", "title": "" }, { "docid": "2cff00acdccfc43ed2bc35efe704f1ac", "text": "A decision to invest in new manufacturing enabling technologies supporting computer integrated manufacturing (CIM) must include non-quantifiable, intangible benefits to the organization in meeting its strategic goals. Therefore, use of tactical level, purely economic, evaluation methods normally result in the rejection of strategically vital automation proposals. This paper includes four different fuzzy multi-attribute group decision-making methods. The first one is a fuzzy model of group decision proposed by Blin. 
The second is fuzzy synthetic evaluation, the third is Yager’s weighted goals method, and the last one is fuzzy analytic hierarchy process. These methods are extended to select the best computer integrated manufacturing system by taking into account both intangible and tangible factors. A computer software for these approaches is developed and finally some numerical applications of these methods are given to compare the results of all methods. # 2003 Elsevier Science B.V. All rights reserved.", "title": "" }, { "docid": "7f39974c1eb5dcecf2383ec9cd5abc42", "text": "Edited volumes are an imperfect format for the presentation of ideas, not least because their goals vary. Sometimes they aim simply to survey the field, at other times to synthesize and advance the field. I prefer the former for disciplines that by their nature are not disposed to achieve definitive statements (philosophy, for example). A volume on an empirical topic, however, by my judgment falls short if it closes without firm conclusions, if not on the topic itself, at least on the state of the art of its study. Facial Attractiveness does fall short of this standard, but not for lack of serious effort (especially appreciated are such features as the summary table in Chapter 5). Although by any measure an excellent and thorough review of the major strands of its topic, the volume’s authors are often in such direct conflict that the reader is disappointed that the editors do not, in the end, provide sufficient guidance about where the most productive research avenues lie. Every contribution is persuasive, but as they cannot all be correct, who is to win the day? An obvious place to begin is with the question, What is “attractiveness”? Most writers seem unaware of the problem, and how it might impact their research methodology. What, the reader wants to know, is the most defensible conceptualization of the focal phenomenon? Often an author focuses explicitly on the aesthetic dimension of “attractive,” treating it as a synonym for “beauty.” A recurring phrase in the book is that “beauty is in the eye of the beholder,” with the authors undertaking to argue whether this standard accurately describes social reality. They reach contradictory conclusions. Chapter 1 (by Adam Rubenstein et al.) finds the maxim to be a “myth” which, by chapter’s end, is presumably dispelled; Anthony Little and his co-authors in Chapter 3, however, view their contribution as “help[ing] to place beauty back into the eye of the beholder.” Other chapters take intermediate positions. Besides the aesthetic, “attractive” can refer to raw sexual appeal, or to more long-term relationship evaluations. Which kind of attractiveness one intends will determine the proper methodology to use, and thereby impact the likely experimental results. As only one example, if one intends to investigate aesthetic attraction, the sexual orientation of the judges does not matter, whereas it matters a great deal if one intends to investigate sexual or relationship attraction. Yet no study discussed in these", "title": "" } ]
scidocsrr
55e2362d012d58ae90a1a987246593b3
Device Mismatch: An Analog Design Perspective
[ { "docid": "df374fcdaf0b7cd41ca5ef5932378655", "text": "This paper is concerned with the design of precision MOS anafog circuits. Section ff of the paper discusses the characterization and modeling of mismatch in MOS transistors. A characterization methodology is presented that accurately predicts the mismatch in drain current over a wide operating range using a minimumset of measured data. The physical causes of mismatch are discussed in detail for both pand n-channel devices. Statistieal methods are used to develop analytical models that relate the mismatchto the devicedimensions.It is shownthat these models are valid for smafl-geometrydevices also. Extensive experimental data from a 3-pm CMOS process are used to verify these models. Section 111of the paper demonstrates the applicationof the transistor matching studies to the design of a high-performance digital-to-analog converter (DAC). A circuit designmethodologyis presented that highfights the close interaction between the circuit yield and the matching accuracy of devices. It has been possibleto achievea circuit yieldof greater than 97 percent as a result of the knowledgegenerated regarding the matching behavior of transistors and due to the systematicdesignapproach.", "title": "" } ]
[ { "docid": "79560f7ec3c5f42fe5c5e0ad175fe6a0", "text": "The deployment of Artificial Neural Networks (ANNs) in safety-critical applications poses a number of new verification and certification challenges. In particular, for ANN-enabled self-driving vehicles it is important to establish properties about the resilience of ANNs to noisy or even maliciously manipulated sensory input. We are addressing these challenges by defining resilience properties of ANN-based classifiers as the maximum amount of input or sensor perturbation which is still tolerated. This problem of computing maximum perturbation bounds for ANNs is then reduced to solving mixed integer optimization problems (MIP). A number of MIP encoding heuristics are developed for drastically reducing MIP-solver runtimes, and using parallelization of MIP-solvers results in an almost linear speed-up in the number (up to a certain limit) of computing cores in our experiments. We demonstrate the effectiveness and scalability of our approach by means of computing maximum resilience bounds for a number of ANN benchmark sets ranging from typical image recognition scenarios to the autonomous maneuvering of robots.", "title": "" }, { "docid": "8d6171dbe50a25873bd435ad25e48ae9", "text": "An automatic landing system is required on a long-range drone because the position of the vehicle cannot be reached visually by the pilot. The autopilot system must be able to correct the drone movement dynamically in accordance with its flying altitude. The current article describes autopilot system on an H-Octocopter drone using image processing and complementary filter. This paper proposes a new approach to reduce oscillations during the landing phase on a big drone. The drone flies above 10 meters to a provided coordinate using GPS data, to check for the existence of the landing area. This process is done visually using the camera. PID controller is used to correct the movement by calculate error distance detected by camera. The controller also includes altitude parameters on its calculations through a complementary filter. The controller output is the PWM signals which control the movement and altitude of the vehicle. The signal then transferred to Flight Controller through serial communication, so that, the drone able to correct its movement. From the experiments, the accuracy is around 0.56 meters and it can be done in 18 seconds.", "title": "" }, { "docid": "86318b52b1bdf0dcf64a2d067645237b", "text": "Neurons that fire high-frequency bursts of spikes are found in various sensory systems. Although the functional implications of burst firing might differ from system to system, bursts are often thought to represent a distinct mode of neuronal signalling. The firing of bursts in response to sensory input relies on intrinsic cellular mechanisms that work with feedback from higher centres to control the discharge properties of these cells. Recent work sheds light on the information that is conveyed by bursts about sensory stimuli, on the cellular mechanisms that underlie bursting, and on how feedback can control the firing mode of burst-capable neurons, depending on the behavioural context. These results provide strong evidence that bursts have a distinct function in sensory information transmission.", "title": "" }, { "docid": "26b67fe7ee89c941d313187672b1d514", "text": "Since permanent magnet linear synchronous motor (PMLSM) has a bright future in electromagnetic launch (EML), moving-magnet PMLSM with multisegment primary is a potential choice. 
To overcome the end effect in the junctions of armature units, three different ring windings are proposed for the multisegment primary of PMLSM: slotted ring windings, slotless ring windings, and quasi-sinusoidal ring windings. They are designed for various demands of EML, regarding the load levels and force fluctuations. Auxiliary iron yokes are designed to reduce the mover weights, and also help restrain the end effect. PMLSM with slotted ring windings has a higher thrust for heavy load EML. PMLSM with slotless ring windings eliminates the cogging effect, while PMLSM with quasi-sinusoidal ring windings has very low thrust ripple; they aim to launch the light aircraft and run smooth. Structure designs of these motors are introduced; motor models and parameter optimizations are accomplished by finite-element method (FEM). Then, performance advantages of the proposed motors are investigated by comparisons of common PMLSMs. At last, the prototypes are manufactured and tested to validate the feasibilities of ring winding motors with auxiliary iron yokes. The results prove that the proposed motors can effectively satisfy the requirements of EML.", "title": "" }, { "docid": "613f0bf05fb9467facd2e58b70d2b09e", "text": "The gold standard for improving sensory, motor and or cognitive abilities is long-term training and practicing. Recent work, however, suggests that intensive training may not be necessary. Improved performance can be effectively acquired by a complementary approach in which the learning occurs in response to mere exposure to repetitive sensory stimulation. Such training-independent sensory learning (TISL), which has been intensively studied in the somatosensory system, induces in humans lasting changes in perception and neural processing, without any explicit task training. It has been suggested that the effectiveness of this form of learning stems from the fact that the stimulation protocols used are optimized to alter synaptic transmission and efficacy. TISL provides novel ways to investigate in humans the relation between learning processes and underlying cellular and molecular mechanisms, and to explore alternative strategies for intervention and therapy.", "title": "" }, { "docid": "6a4a76e48ff8bfa9ad17f116c3258d49", "text": "Deep domain adaptation has emerged as a new learning technique to address the lack of massive amounts of labeled data. Compared to conventional methods, which learn shared feature subspaces or reuse important source instances with shallow representations, deep domain adaptation methods leverage deep networks to learn more transferable representations by embedding domain adaptation in the pipeline of deep learning. There have been comprehensive surveys for shallow domain adaptation, but few timely reviews the emerging deep learning based methods. In this paper, we provide a comprehensive survey of deep domain adaptation methods for computer vision applications with four major contributions. First, we present a taxonomy of different deep domain adaptation scenarios according to the properties of data that define how two domains are diverged. Second, we summarize deep domain adaptation approaches into several categories based on training loss, and analyze and compare briefly the state-of-the-art methods under these categories. Third, we overview the computer vision applications that go beyond image classification, such as face recognition, semantic segmentation and object detection. 
Fourth, some potential deficiencies of current methods and several future directions are highlighted.", "title": "" }, { "docid": "1ffef8248a0cc0b69a436c4d949ed221", "text": "This paper presents preliminary research on a new decision making tool that integrates financial and non-financial performance measures in project portfolio management via the Triple Bottom Line (TBL) and uses the Analytic Hierarchy Process (AHP) as a decision support model. This new tool evaluates and prioritizes a set of projects and creates a balanced project portfolio based upon the perspectives and priorities of decision makers. It can assist decision makers with developing and making proactive decisions which support the strategy of their organization with respect to financial, environmental and social issues, ensuring the sustainability of their organization in the future.", "title": "" }, { "docid": "fd8b7b9f4469bd253ee66f6c464691a6", "text": "The \"flipped classroom\" is a learning model in which content attainment is shifted forward to outside of class, then followed by instructor-facilitated concept application activities in class. Current studies on the flipped model are limited. Our goal was to provide quantitative and controlled data about the effectiveness of this model. Using a quasi-experimental design, we compared an active nonflipped classroom with an active flipped classroom, both using the 5-E learning cycle, in an effort to vary only the role of the instructor and control for as many of the other potentially influential variables as possible. Results showed that both low-level and deep conceptual learning were equivalent between the conditions. Attitudinal data revealed equal student satisfaction with the course. Interestingly, both treatments ranked their contact time with the instructor as more influential to their learning than what they did at home. We conclude that the flipped classroom does not result in higher learning gains or better attitudes compared with the nonflipped classroom when both utilize an active-learning, constructivist approach and propose that learning gains in either condition are most likely a result of the active-learning style of instruction rather than the order in which the instructor participated in the learning process.", "title": "" }, { "docid": "04e478610728f0aae76e5299c28da25a", "text": "Single image super resolution is one of the most important topic in computer vision and image processing research, many convolutional neural networks (CNN) based super resolution algorithms were proposed and achieved advanced performance, especially in recovering image details, in which PixelCNN is the most representative one. However, due to the intensive computation requirement of PixelCNN model, running time remains a major challenge, which limited its wider application. In this paper, several modifications are proposed to improve PixelCNN based recursive super resolution model. First, a discrete logistic mixture likelihood is adopted, then a cache structure for generating process is proposed, with these modifications, numerous redundant computations are removed without loss of accuracy. Finally, a partial generating network is proposed for higher resolution generation. 
Experiments on the CelebA dataset demonstrate the effectiveness and superiority of the proposed method.", "title": "" }, { "docid": "0d7c29b40f92b5997791f1bbe192269c", "text": "We present a general approach to video understanding, inspired by semantic transfer techniques that have been successfully used for 2D image analysis. Our method considers a video to be a 1D sequence of clips, each one associated with its own semantics. The nature of these semantics – natural language captions or other labels – depends on the task at hand. A test video is processed by forming correspondences between its clips and the clips of reference videos with known semantics, following which, reference semantics can be transferred to the test video. We describe two matching methods, both designed to ensure that (a) reference clips appear similar to test clips and (b), taken together, the semantics of the selected reference clips is consistent and maintains temporal coherence. We use our method for video captioning on the LSMDC’16 benchmark, video summarization on the SumMe and TV-Sum benchmarks, Temporal Action Detection on the Thumos2014 benchmark, and sound prediction on the Greatest Hits benchmark. Our method not only surpasses the state of the art in four out of five benchmarks, but importantly, it is the only single method we know of that was successfully applied to such a diverse range of tasks.", "title": "" }, { "docid": "47f2a5a61677330fc85ff6ac700ac39f", "text": "We present CHALET, a 3D house simulator with support for navigation and manipulation. CHALET includes 58 rooms and 10 house configurations, and allows users to easily create new house and room layouts. CHALET supports a range of common household activities, including moving objects, toggling appliances, and placing objects inside closeable containers. The environment and actions available are designed to create a challenging domain to train and evaluate autonomous agents, including for tasks that combine language, vision, and planning in a dynamic environment.", "title": "" }, { "docid": "ffe6edef11daef1db0c4aac77bed7a23", "text": "MPI is a well-established technology that is used widely in high-performance computing environments. However, setting up an MPI cluster can be challenging and time-consuming. This paper tackles this challenge by using modern containerization technology, namely Docker, and container orchestration technology, namely Docker Swarm mode, to automate the MPI cluster setup and deployment. We created a ready-to-use solution for developing and deploying MPI programs in a cluster of Docker containers running on multiple machines, orchestrated with Docker Swarm mode, to perform high computation tasks. We explain the considerations when creating the Docker image that will be instantiated as MPI nodes, and we describe the steps needed to set up a fully connected MPI cluster as Docker containers running in Docker Swarm mode. Our goal is to give the rationale behind our solution so that others can adapt it to different system requirements. All pre-built Docker images, source code, documentation, and screencasts are publicly available.", "title": "" }, { "docid": "b6bbd83da68fbf1d964503fb611a2be5", "text": "Battery systems are affected by many factors, the most important one being cell unbalancing. Without a balancing system, the individual cell voltages will differ over time and battery pack capacity will decrease quickly, which can result in the failure of the total battery system.
Thus cell balancing acts an important role on the battery life preserving. Different cell balancing methodologies have been proposed for battery pack. This paper presents a review and comparisons between the different proposed balancing topologies for battery string based on MATLAB/Simulink® simulation. The comparison carried out according to circuit design, balancing simulation, practical implementations, application, balancing speed, complexity, cost, size, balancing system efficiency, voltage/current stress … etc.", "title": "" }, { "docid": "4028f1cd20127f3c6599e6073bb1974b", "text": "This paper presents a power delivery monitor (PDM) peripheral integrated in a flip-chip packaged 28 nm system-on-chip (SoC) for mobile computing. The PDM is composed entirely of digital standard cells and consists of: 1) a fully integrated VCO-based digital sampling oscilloscope; 2) a synthetic current load; and 3) an event engine for triggering, analysis, and debug. Incorporated inside an SoC, it enables rapid, automated analysis of supply impedance, as well as monitoring supply voltage droop of multi-core CPUs running full software workloads and during scan-test operations. To demonstrate these capabilities, we describe a power integrity case study of a dual-core ARM Cortex-A57 cluster in a commercial 28 nm mobile SoC. Measurements are presented of power delivery network (PDN) electrical parameters, along with waveforms of the CPU cluster running test cases and benchmarks on bare metal and Linux OS. The effect of aggressive power management techniques, such as power gating on the dominant resonant frequency and peak impedance, is highlighted. Finally, we present measurements of supply voltage noise during various scan-test operations, an often-neglected aspect of SoC power integrity.", "title": "" }, { "docid": "b3947afb7856b0ffd5983f293ca508b9", "text": "High gain low profile slotted cavity with substrate integrated waveguide (SIW) is presented using TE440 high order mode. The proposed antenna is implemented to achieve 16.4 dBi high gain at 28 GHz with high radiation efficiency of 98%. Furthermore, the proposed antenna has a good radiation pattern. Simulated results using CST and HFSS software are presented and discussed. Several advantages such as low profile, low cost, light weight, small size, and easy implementation make the proposed antenna suitable for millimeter-wave wireless communications.", "title": "" }, { "docid": "e3c8f10316152f0bc775f4823b79c7f6", "text": "The human visual cortex extracts both spatial and temporal visual features to support perception and guide behavior. Deep convolutional neural networks (CNNs) provide a computational framework to model cortical representation and organization for spatial visual processing, but unable to explain how the brain processes temporal information. To overcome this limitation, we extended a CNN by adding recurrent connections to different layers of the CNN to allow spatial representations to be remembered and accumulated over time. The extended model, or the recurrent neural network (RNN), embodied a hierarchical and distributed model of process memory as an integral part of visual processing. Unlike the CNN, the RNN learned spatiotemporal features from videos to enable action recognition. The RNN better predicted cortical responses to natural movie stimuli than the CNN, at all visual areas, especially those along the dorsal stream. 
As a fully observable model of visual processing, the RNN also revealed a cortical hierarchy of temporal receptive window, dynamics of process memory, and spatiotemporal representations. These results support the hypothesis of process memory, and demonstrate the potential of using the RNN for in-depth computational understanding of dynamic natural vision.", "title": "" }, { "docid": "6b203b7a8958103b30701ac139eb1fb8", "text": "Deep learning describes a class of machine learning algorithms that are capable of combining raw inputs into layers of intermediate features. These algorithms have recently shown impressive results across a variety of domains. Biology and medicine are data-rich disciplines, but the data are complex and often ill-understood. Hence, deep learning techniques may be particularly well suited to solve problems of these fields. We examine applications of deep learning to a variety of biomedical problems-patient classification, fundamental biological processes and treatment of patients-and discuss whether deep learning will be able to transform these tasks or if the biomedical sphere poses unique challenges. Following from an extensive literature review, we find that deep learning has yet to revolutionize biomedicine or definitively resolve any of the most pressing challenges in the field, but promising advances have been made on the prior state of the art. Even though improvements over previous baselines have been modest in general, the recent progress indicates that deep learning methods will provide valuable means for speeding up or aiding human investigation. Though progress has been made linking a specific neural network's prediction to input features, understanding how users should interpret these models to make testable hypotheses about the system under study remains an open challenge. Furthermore, the limited amount of labelled data for training presents problems in some domains, as do legal and privacy constraints on work with sensitive health records. Nonetheless, we foresee deep learning enabling changes at both bench and bedside with the potential to transform several areas of biology and medicine.", "title": "" }, { "docid": "70f1f5de73c3a605b296299505fd4e61", "text": "Dropout is a popular stochastic regularization technique for deep neural networks that works by randomly dropping (i.e. zeroing) units from the network during training. This randomization process allows to implicitly train an ensemble of exponentially many networks sharing the same parametrization, which should be averaged at test time to deliver the final prediction. A typical workaround for this intractable averaging operation consists in scaling the layers undergoing dropout randomization. This simple rule called “standard dropout” is efficient, but might degrade the accuracy of the prediction. In this work we introduce a novel approach, coined “dropout distillation”, that allows us to train a predictor in a way to better approximate the intractable, but preferable, averaging process, while keeping under control its computational efficiency. We are thus able to construct models that are as efficient as standard dropout, or even more efficient, while being more accurate. 
Experiments on standard benchmark datasets demonstrate the validity of our method, yielding consistent improvements over conventional dropout.", "title": "" }, { "docid": "0b9b85dc4f80e087f591f89b12bb6146", "text": "Entity profiling (EP) as an important task of Web mining and information extraction (IE) is the process of extracting entities in question and their related information from given text resources. From computational viewpoint, the Farsi language is one of the less-studied and less-resourced languages, and suffers from the lack of high quality language processing tools. This problem emphasizes the necessity of developing Farsi text processing systems. As an element of EP research, we present a semantic approach to extract profile of person entities from Farsi Web documents. Our approach includes three major components: (i) pre-processing, (ii) semantic analysis and (iii) attribute extraction. First, our system takes as input the raw text, and annotates the text using existing pre-processing tools. In semantic analysis stage, we analyze the pre-processed text syntactically and semantically and enrich the local processed information with semantic information obtained from a distant knowledge base. We then use a semantic rule-based approach to extract the related information of the persons in question. We show the effectiveness of our approach by testing it on a small Farsi corpus. The experimental results are encouraging and show that the proposed method outperforms baseline methods.", "title": "" }, { "docid": "e32fc572acb93c65083b372a6b24e7ee", "text": "BACKGROUND\nFemale Genital Mutilation/Cutting (FGM/C) is a harmful traditional practice with severe health complications, deeply rooted in many Sub-Saharan African countries. In The Gambia, the prevalence of FGM/C is 78.3% in women aged between 15 and 49 years. The objective of this study is to perform a first evaluation of the magnitude of the health consequences of FGM/C in The Gambia.\n\n\nMETHODS\nData were collected on types of FGM/C and health consequences of each type of FGM/C from 871 female patients who consulted for any problem requiring a medical gynaecologic examination and who had undergone FGM/C in The Gambia.\n\n\nRESULTS\nThe prevalence of patients with different types of FGM/C were: type I, 66.2%; type II, 26.3%; and type III, 7.5%. Complications due to FGM/C were found in 299 of the 871 patients (34.3%). Even type I, the form of FGM/C of least anatomical extent, presented complications in 1 of 5 girls and women examined.\n\n\nCONCLUSION\nThis study shows that FGM/C is still practiced in all the six regions of The Gambia, the most common form being type I, followed by type II. All forms of FGM/C, including type I, produce significantly high percentages of complications, especially infections.", "title": "" } ]
scidocsrr
29eea63213e67fed705a51de82e5e04c
Classifying the Political Leaning of News Articles and Users from User Votes
[ { "docid": "7f05bd51c98140417ff73ec2d4420d6a", "text": "An overwhelming number of news articles are available every day via the internet. Unfortunately, it is impossible for us to peruse more than a handful; furthermore it is difficult to ascertain an article’s social context, i.e., is it popular, what sorts of people are reading it, etc. In this paper, we develop a system to address this problem in the restricted domain of political news by harnessing implicit and explicit contextual information from the blogosphere. Specifically, we track thousands of blogs and the news articles they cite, collapsing news articles that have highly overlapping content. We then tag each article with the number of blogs citing it, the political orientation of those blogs, and the level of emotional charge expressed in the blog posts that link to the news article. We summarize and present the results to the user via a novel visualization which displays this contextual information; the user can then find the most popular articles, the articles most cited by liberals, the articles most emotionally discussed in the political blogosphere, etc.", "title": "" }, { "docid": "f66854fd8e3f29ae8de75fc83d6e41f5", "text": "This paper presents a general statistical methodology for the analysis of multivariate categorical data arising from observer reliability studies. The procedure essentially involves the construction of functions of the observed proportions which are directed at the extent to which the observers agree among themselves and the construction of test statistics for hypotheses involving these functions. Tests for interobserver bias are presented in terms of first-order marginal homogeneity and measures of interobserver agreement are developed as generalized kappa-type statistics. These procedures are illustrated with a clinical diagnosis example from the epidemiological literature.", "title": "" } ]
[ { "docid": "a15275cc08ad7140e6dd0039e301dfce", "text": "Cardiovascular disease is more prevalent in type 1 and type 2 diabetes, and continues to be the leading cause of death among adults with diabetes. Although atherosclerotic vascular disease has a multi-factorial etiology, disorders of lipid metabolism play a central role. The coexistence of diabetes with other risk factors, in particular with dyslipidemia, further increases cardiovascular disease risk. A characteristic pattern, termed diabetic dyslipidemia, consists of increased levels of triglycerides, low levels of high density lipoprotein cholesterol, and postprandial lipemia, and is mostly seen in patients with type 2 diabetes or metabolic syndrome. This review summarizes the trends in the prevalence of lipid disorders in diabetes, advances in the mechanisms contributing to diabetic dyslipidemia, and current evidence regarding appropriate therapeutic recommendations.", "title": "" }, { "docid": "89318aa5769daa08a67ae7327c458e8e", "text": "The present thesis is concerned with the development and evaluation (in terms of accuracy and utility) of systems using hand postures and hand gestures for enhanced Human-Computer Interaction (HCI). In our case, these systems are based on vision techniques, thus only requiring cameras, and no other specific sensors or devices. When dealing with hand movements, it is necessary to distinguish two aspects of these hand movements : the static aspect and the dynamic aspect. The static aspect is characterized by a pose or configuration of the hand in an image and is related to the Hand Posture Recognition (HPR) problem. The dynamic aspect is defined either by the trajectory of the hand, or by a series of hand postures in a sequence of images. This second aspect is related to the Hand Gesture Recognition (HGR) task. Given the recognized lack of common evaluation databases in the HGR field, a first contribution of this thesis was the collection and public distribution of two databases, containing both oneand two-handed gestures, which part of the results reported here will be based upon. On these databases, we compare two state-of-the-art models for the task of HGR. As a second contribution, we propose a HPR technique based on a new feature extraction. This method has the advantage of being faster than conventional methods while yielding good performances. In addition, we provide comparison results of this method with other state-of-the-art technique. Finally, the most important contribution of this thesis lies in the thorough study of the state-of-the-art not only in HGR and HPR but also more generally in the field of HCI. The first chapter of the thesis provides an extended study of the state-of-the-art. The second chapter of this thesis contributes to HPR. We propose to apply for HPR a technique employed with success for face detection. This method is based on the Modified Census Transform (MCT) to extract relevant features in images. We evaluate this technique on an existing benchmark database and provide comparison results with other state-of-the-art approaches. The third chapter is related to HGR. In this chapter we describe the first recorded database, containing both oneand two-handed gestures in the 3D space. We propose to compare two models used with success in HGR, namely Hidden Markov Models (HMM) and Input-Output Hidden Markov Model (IOHMM). The fourth chapter is also focused on HGR but more precisely on two-handed gesture recognition. 
For that purpose, a second database has been recorded using two cameras. The goal of these gestures is to manipulate virtual objects on a screen. We propose to investigate on this second database the state-of-the-art sequence processing techniques we used in the previous chapter. We then discuss the results obtained using different features, and using images of one or two cameras. In conclusion, we propose a method for HPR based on new feature extraction. For HGR, we provide two databases and comparison results of two major sequence processing techniques. Finally, we present a complete survey on recent state-of-the-art techniques for both HPR and HGR. We also present some possible applications of these techniques, applied to two-handed gesture interaction. We hope this research will open new directions in the field of hand posture and gesture recognition.", "title": "" }, { "docid": "8954672b2e2b6351abfde0747fd5d61c", "text": "Sentiment Analysis (SA), an application of Natural Language processing (NLP), has been witnessed a blooming interest over the past decade. It is also known as opinion mining, mood extraction and emotion analysis. The basic in opinion mining is classifying the polarity of text in terms of positive (good), negative (bad) or neutral (surprise). Mood Extraction automates the decision making performed by human. It is the important aspect for capturing public opinion about product preferences, marketing campaigns, political movements, social events and company strategies. In addition to sentiment analysis for English and other European languages, this task is applied on various Indian languages like Bengali, Hindi, Telugu and Malayalam. This paper describes the survey on main approaches for performing sentiment extraction.", "title": "" }, { "docid": "9a68a804863b5cfd2a271518c3d360ef", "text": "Clinicians whose practice includes elderly patients need a short, reliable instrument to detect the presence of intellectual impairment and to determine the degree. A 10-item Short Portable Mental Status Questionnaire (SPMSQ), easily administered by any clinician in the office or in a hospital, has been designed, tested, standardized and validated. The standardization and validation procedure included administering the test to 997 elderly persons residing in the community, to 141 elderly persons referred for psychiatric and other health and social problems to a multipurpose clinic, and to 102 elderly persons living in institutions such as nursing homes, homes for the aged, or state mental hospitals. It was found that educational level and race had to be taken into account in scoring individual performance. On the basis of the large community population, standards of performance were established for: 1) intact mental functioning, 2) borderline or mild organic impairment, 3) definite but moderate organic impairment, and 4) severe organic impairment. In the 141 clinic patients, the SPMSQ scores were correlated with the clinical diagnoses. There was a high level of agreement between the clinical diagnosis of organic brain syndrome and the SPMSQ scores that indicated moderate or severe organic impairment.", "title": "" }, { "docid": "2bf0219394d87654d2824c805844fcaa", "text": "Wei-yu Kevin Chiang • Dilip Chhajed • James D. 
Hess Department of Information Systems, University of Maryland at Baltimore County, Baltimore, Maryland 21250 Department of Business Administration, University of Illinois at Urbana–Champaign, Champaign, Illinois 61820 Department of Business Administration, University of Illinois at Urbana–Champaign, Champaign, Illinois 61820 [email protected][email protected][email protected]", "title": "" }, { "docid": "e49369808eecc67d10f4084245e25163", "text": "In recent years, the recognition of activity is a daring task which helps elderly people, disabled patients and so on. The aim of this paper is to design a system for recognizing the human activity in egocentric video. In this research work, the various textural features like gray level co-occurrence matrix and local binary pattern and point feature speeded up robust features are retrieved from activity videos which is a proposed work and classifiers like probabilistic neural network, support vector machine (SVM), k nearest neighbor (kNN) and proposed SVM+kNN classifiers are used to classify the activity. Here, multimodal egocentric activity dataset is chosen as input. The performance results showed that the SVM+kNN classifier outperformed other classifiers.", "title": "" }, { "docid": "6a851f4fdd456dbaef547a63d53c7a5a", "text": "In the 20th century, the introduction of multiple vaccines significantly reduced childhood morbidity, mortality, and disease outbreaks. Despite, and perhaps because of, their public health impact, an increasing number of parents and patients are choosing to delay or refuse vaccines. These individuals are described as \"vaccine hesitant.\" This phenomenon has developed due to the confluence of multiple social, cultural, political, and personal factors. As immunization programs continue to expand, understanding and addressing vaccine hesitancy will be crucial to their successful implementation. This review explores the history of vaccine hesitancy, its causes, and suggested approaches for reducing hesitancy and strengthening vaccine acceptance.", "title": "" }, { "docid": "62d1574e23fcf07befc54838ae2887c1", "text": "Digital images are widely used and numerous application in different scientific fields use digital image processing algorithms where image segmentation is a common task. Thresholding represents one technique for solving that task and Kapur's and Otsu's methods are well known criteria often used for selecting thresholds. Finding optimal threshold values represents a hard optimization problem and swarm intelligence algorithms have been successfully used for solving such problems. In this paper we adjusted recent elephant herding optimization algorithm for multilevel thresholding by Kapur's and Otsu's method. Performance was tested on standard benchmark images and compared with four other swarm intelligence algorithms. Elephant herding optimization algorithm outperformed other approaches from literature and it was more robust.", "title": "" }, { "docid": "76092028b6d109c5e73521d796643eb0", "text": "Volume raycasting techniques are important for both visual arts and visualization. They allow efficient generation of visual effects and visualization of scientific data obtained by tomography or numerical simulation. Volume-rendering techniques are also effective for direct rendering of implicit surfaces used for soft-body animation and constructive solid geometry. The focus of this course is on volumetric illumination techniques that approximate physically based light transport in participating media. 
Such techniques include interactive implementation of soft and hard shadows, ambient occlusion, and simple Monte Carlo-based approaches to global illumination, including translucency and scattering.", "title": "" }, { "docid": "3744970293b3ed4c4543e6f2313fe2e4", "text": "With the proliferation of GPS-enabled smart devices and increased availability of wireless network, spatial crowdsourcing (SC) has been recently proposed as a framework to automatically request workers (i.e., smart device carriers) to perform location-sensitive tasks (e.g., taking scenic photos, reporting events). In this paper we study a destination-aware task assignment problem that concerns the optimal strategy of assigning each task to proper worker such that the total number of completed tasks can be maximized whilst all workers can reach their destinations before deadlines after performing assigned tasks. Finding the global optimal assignment turns out to be an intractable problem since it does not imply optimal assignment for individual worker. Observing that the task assignment dependency only exists amongst subsets of workers, we utilize tree-decomposition technique to separate workers into independent clusters and develop an efficient depth-first search algorithm with progressive bounds to prune non-promising assignments. Our empirical studies demonstrate that our proposed technique is quite effective and settle the problem nicely.", "title": "" }, { "docid": "849ae444bca6edc7b5d81c0b5c8e2f90", "text": "This paper develops an electric vehicle switched-reluctance motor (SRM) drive powered by a battery/supercapacitor having grid-to-vehicle (G2V) and vehicle-to-home (V2H)/vehicle-to-grid (V2G) functions. The power circuit of the motor drive is formed by a bidirectional two-quadrant front-end dc/dc converter and an SRM asymmetric bridge converter. Through proper control and setting of key parameters, good acceleration/deceleration, reversible driving, and braking characteristics are obtained. In idle condition, the proposed motor drive schematic can be rearranged to construct the integrated power converter to perform the following functions: 1) G2V charging mode: a single-phase two-stage switch-mode rectifier based charger is formed with power factor correction capability; 2) autonomous V2H discharging mode: the 60-Hz 220-V/110-V ac sources are generated by the developed single-phase three-wire inverter to power home appliances. Through the developed differential mode and common mode control schemes, well-regulated output voltages are achieved; 3) grid-connected V2G discharging mode: the programmed real power can be sent back to the utility grid.", "title": "" }, { "docid": "2c37ee67205320d54149a71be104c0e1", "text": "This talk will review the mission, activities, and recommendations of the “Blue Ribbon Panel on Cyberinfrastructure” recently appointed by the leadership on the U.S. National Science Foundation (NSF). The NSF invests in “people, ideas, and tools” and in particular is a major investor in basic research to produce communication and information technology (ICT) as well as its use in supporting basic research and education in most all areas of science and engineering. 
The NSF through its Directorate for Computer and Information Science and Engineering (CISE) has provided substantial funding for high-end computing resources, initially by awards to five supercomputer centers and later through $70 M per year investments in two partnership alliances for advanced computation infrastructures centered at the University of Illinois and the University of California, San Diego. It has also invested in an array of complementary R&D initiatives in networking, middleware, digital libraries, collaboratories, computational and visualization science, and distributed terascale grid environments.", "title": "" }, { "docid": "437ad5ac30619459627b8f76034da29d", "text": "In 1986, this author presented a paper at a conference, giving a sampling of computer and network security issues, and the tools of the day to address them. The purpose of this current paper is to revisit the topic of computer and network security, and see what changes, especially in types of attacks, have been brought about in 30 years. This paper starts by presenting a review of the state of computer and network security in 1986, along with how certain facets of it have changed. Next, it talks about today's security environment, and finally discusses some of today's many computer and network attack methods that are new or greatly updated since 1986. Many references for further study are provided. The classes of attacks that are known today are the same as the ones known in 1986, but many new methods of implementing the attacks have been enabled by new technologies and the increased pervasiveness of computers and networks in today's society. The threats and specific types of attacks faced by the computer community 30 years ago have not gone away. New threat methods and attack vectors have opened due to advancing technology, supplementing and enhancing, rather than replacing the long-standing threat methods.", "title": "" }, { "docid": "0a6a3e82b701bfbdbb73a9e8573fc94a", "text": "Providing effective feedback on resource consumption in the home is a key challenge of environmental conservation efforts. One promising approach for providing feedback about residential energy consumption is the use of ambient and artistic visualizations. Pervasive computing technologies enable the integration of such feedback into the home in the form of distributed point-of-consumption feedback devices to support decision-making in everyday activities. However, introducing these devices into the home requires sensitivity to the domestic context. In this paper we describe three abstract visualizations and suggest four design requirements that this type of device must meet to be effective: pragmatic, aesthetic, ambient, and ecological. We report on the findings from a mixed methods user study that explores the viability of using ambient and artistic feedback in the home based on these requirements. Our findings suggest that this approach is a viable way to provide resource use feedback and that both the aesthetics of the representation and the context of use are important elements that must be considered in this design space.", "title": "" }, { "docid": "5285b2b579c8a0f0915e76e41d66330c", "text": "Not all bugs lead to program crashes, and not always is there a formal specification to check the correctness of a software test's outcome. A common scenario in software testing is therefore that test data are generated, and a tester manually adds test oracles. 
As this is a difficult task, it is important to produce small yet representative test sets, and this representativeness is typically measured using code coverage. There is, however, a fundamental problem with the common approach of targeting one coverage goal at a time: Coverage goals are not independent, not equally difficult, and sometimes infeasible-the result of test generation is therefore dependent on the order of coverage goals and how many of them are feasible. To overcome this problem, we propose a novel paradigm in which whole test suites are evolved with the aim of covering all coverage goals at the same time while keeping the total size as small as possible. This approach has several advantages, as for example, its effectiveness is not affected by the number of infeasible targets in the code. We have implemented this novel approach in the EvoSuite tool, and compared it to the common approach of addressing one goal at a time. Evaluated on open source libraries and an industrial case study for a total of 1,741 classes, we show that EvoSuite achieved up to 188 times the branch coverage of a traditional approach targeting single branches, with up to 62 percent smaller test suites.", "title": "" }, { "docid": "2ffafb9e8c49d30b295a7047fe33268f", "text": "The time-based resource-sharing model of working memory assumes that memory traces suffer from a time-related decay when attention is occupied by concurrent activities. Using complex continuous span tasks in which temporal parameters are carefully controlled, P. Barrouillet, S. Bernardin, S. Portrat, E. Vergauwe, & V. Camos (2007) recently provided evidence that any increase in time of the processing component of these tasks results in lower recall performance. However, K. Oberauer and R. Kliegl (2006) pointed out that, in this paradigm, increased processing times are accompanied by a corollary decrease of the remaining time during which attention is available to refresh memory traces. As a consequence, the main determinant of recall performance in complex span tasks would not be the duration of attentional capture inducing time-related decay, as Barrouillet et al. (2007) claimed, but the time available to repair memory traces, and thus would be compatible with an interference account of forgetting. The authors demonstrate here that even when the time available to refresh memory traces is kept constant, increasing the processing time still results in poorer recall, confirming that time-related decay is the source of forgetting within working memory.", "title": "" }, { "docid": "8304686526c37e5d1e0e4e7708bf6c29", "text": "JavaScript is becoming the de-facto programming language of the Web. Large-scale web applications (web apps) written in Javascript are commonplace nowadays, with big technology players (e.g., Google, Facebook) using it in their core flagship products. Today, it is common practice to reuse existing JavaScript code, usually in the form of third-party libraries and frameworks. If on one side this practice helps in speeding up development time, on the other side it comes with the risk of bringing dead code, i.e., JavaScript code which is never executed, but still downloaded from the network and parsed in the browser. This overhead can negatively impact the overall performance and energy consumption of the web app. In this paper we present Lacuna, an approach for JavaScript dead code elimination, where existing JavaScript analysis techniques are applied in combination. 
The proposed approach supports both static and dynamic analyses, it is extensible, and independent of the specificities of the used JavaScript analysis techniques. Lacuna can be applied to any JavaScript code base, without imposing any constraints to the developer, e.g., on her coding style or on the use of some specific JavaScript feature (e.g., modules). Lacuna has been evaluated on a suite of 29 publicly-available web apps, composed of 15,946 JavaScript functions, and built with different JavaScript frameworks (e.g., Angular, Vue.js, jQuery). Despite being a prototype, Lacuna obtained promising results in terms of analysis execution time and precision.", "title": "" }, { "docid": "7f897e5994685f0b158da91cef99c855", "text": "Cloud computing and its pay-as-you-go model continue to provide significant cost benefits and a seamless service delivery model for cloud consumers. The evolution of small-scale and large-scale geo-distributed datacenters operated and managed by individual cloud service providers raises new challenges in terms of effective global resource sharing and management of autonomously-controlled individual datacenter resources. Earlier solutions for geo-distributed clouds have focused primarily on achieving global efficiency in resource sharing that results in significant inefficiencies in local resource allocation for individual datacenters leading to unfairness in revenue and profit earned. In this paper, we propose a new contracts-based resource sharing model for federated geo-distributed clouds that allows cloud service providers to establish resource sharing contracts with individual datacenters apriori for defined time intervals during a 24 hour time period. Based on the established contracts, individual cloud service providers employ a cost-aware job scheduling and provisioning algorithm that enables tasks to complete and meet their response time requirements. The proposed techniques are evaluated through extensive experiments using realistic workloads and the results demonstrate the effectiveness, scalability and resource sharing efficiency of the proposed model.", "title": "" }, { "docid": "e35194cb3fdd3edee6eac35c45b2da83", "text": "The availability of high-resolution Digital Surface Models of coastal environments is of increasing interest for scientists involved in the study of the coastal system processes. Among the range of terrestrial and aerial methods available to produce such a dataset, this study tests the utility of the Structure from Motion (SfM) approach to low-altitude aerial imageries collected by Unmanned Aerial Vehicle (UAV). The SfM image-based approach was selected whilst searching for a rapid, inexpensive, and highly automated method, able to produce 3D information from unstructured aerial images. In particular, it was used to generate a dense point cloud and successively a high-resolution Digital Surface Models (DSM) of a beach dune system in Marina di Ravenna (Italy). The quality of the elevation dataset produced by the UAV-SfM was initially evaluated by comparison with point cloud generated by a Terrestrial Laser Scanning (TLS) surveys. Such a comparison served to highlight an average difference in the vertical values of 0.05 m (RMS = 0.19 m). However, although the points cloud comparison is the best approach to investigate the absolute or relative correspondence between UAV and TLS
methods, the assessment of geomorphic features is usually based on multi-temporal surfaces analysis, where an interpolation process is required. DSMs were therefore generated from UAV and TLS points clouds and vertical absolute accuracies assessed by comparison with a Global Navigation Satellite System (GNSS) survey. The vertical comparison of UAV and TLS DSMs with respect to GNSS measurements pointed out an average distance at cm-level (RMS = 0.011 m). The successive point by point direct comparison between UAV and TLS elevations show a very small average distance, 0.015 m, with RMS = 0.220 m. Larger values are encountered in areas where sudden changes in topography are present. The UAV-based approach was demonstrated to be a straightforward one and accuracy of the vertical dataset was comparable with results obtained by TLS technology.", "title": "" } ]
scidocsrr
9f1b5676a0bd25fb31ca2db199fb5516
Design and implementation of a tdd-based 128-antenna massive MIMO prototype system
[ { "docid": "3e111f220be59d347d6acd9d73b2c653", "text": "This paper considers a multiple-input multiple-output (MIMO) receiver with very low-precision analog-to-digital convertors (ADCs) with the goal of developing massive MIMO antenna systems that require minimal cost and power. Previous studies demonstrated that the training duration should be relatively long to obtain acceptable channel state information. To address this requirement, we adopt a joint channel-and-data (JCD) estimation method based on Bayes-optimal inference. This method yields minimal mean square errors with respect to the channels and payload data. We develop a Bayes-optimal JCD estimator using a recent technique based on approximate message passing. We then present an analytical framework to study the theoretical performance of the estimator in the large-system limit. Simulation results confirm our analytical results, which allow the efficient evaluation of the performance of quantized massive MIMO systems and provide insights into effective system design.", "title": "" } ]
[ { "docid": "de5e6be7d21bd93cbc042c40c9bf6ef4", "text": "We present STRUCTURE HARVESTER (available at http://taylor0.biology.ucla.edu/structureHarvester/ ), a web-based program for collating results generated by the program STRUCTURE. The program provides a fast way to assess and visualize likelihood values across multiple values of K and hundreds of iterations for easier detection of the number of genetic groups that best fit the data. In addition, STRUCTURE HARVESTER will reformat data for use in downstream programs, such as CLUMPP.", "title": "" }, { "docid": "eb286d4a7406dc235820ccb848844840", "text": "This paper describes the design and testing of a new introductory programming language, GRAIL1. GRAIL was designed to minimise student syntax errors, and hence allow the study of the impact of syntax errors on learning to program. An experiment was conducted using students learning programming for the first time. The students were split into two groups, one group learning LOGO and the other GRAIL. The resulting code was then analysed for syntax and logic errors. The groups using LOGO made more errors than the groups using GRAIL, which shows that choice of programming language can have a substantial impact on error rates of novice programmers.", "title": "" }, { "docid": "6eaa0d1b6a7e55eca070381954638292", "text": "Unsupervised learning is of growing interest because it unlocks the potential held in vast amounts of unlabeled data to learn useful representations for inference. Autoencoders, a form of generative model, may be trained by learning to reconstruct unlabeled input data from a latent representation space. More robust representations may be produced by an autoencoder if it learns to recover clean input samples from corrupted ones. Representations may be further improved by introducing regularization during training to shape the distribution of the encoded data in the latent space. We suggest denoising adversarial autoencoders (AAEs), which combine denoising and regularization, shaping the distribution of latent space using adversarial training. We introduce a novel analysis that shows how denoising may be incorporated into the training and sampling of AAEs. Experiments are performed to assess the contributions that denoising makes to the learning of representations for classification and sample synthesis. Our results suggest that autoencoders trained using a denoising criterion achieve higher classification performance and can synthesize samples that are more consistent with the input data than those trained without a corruption process.", "title": "" }, { "docid": "c35306b0ec722364308d332664c823f8", "text": "The uniform asymmetrical microstrip parallel coupled line is used to design the multi-section unequal Wilkinson power divider with high dividing ratio. The main objective of the paper is to increase the trace widths in order to facilitate the construction of the power divider with the conventional photolithography method. The separated microstrip lines in the conventional Wilkinson power divider are replaced with the uniform asymmetrical parallel coupled lines. An even-odd mode analysis is used to calculate characteristic impedances and then the per-unit-length capacitance and inductance parameter matrix are used to calculate the physical dimension of the power divider. 
To clarify the advantages of this method, two three-section Wilkinson power dividers with an unequal power-division ratio of 1 : 2.5 are designed and fabricated and measured, one in the proposed configuration and the other in the conventional configuration. The simulation and the measurement results show that not only the specified design goals are achieved, but also all the microstrip traces can be easily implemented in the proposed power divider.", "title": "" }, { "docid": "46f95796996d4638afcc7b703a1f3805", "text": "One of the main challenges in Grid systems is designing an adaptive, scalable, and model-independent method for job scheduling to achieve a desirable degree of load balancing and system efficiency. Centralized job scheduling methods have some drawbacks, such as single point of failure and lack of scalability. Moreover, decentralized methods require a coordination mechanism with limited communications. In this paper, we propose a multi-agent approach to job scheduling in Grid, named Centralized Learning Distributed Scheduling (CLDS), by utilizing the reinforcement learning framework. The CLDS is a model free approach that uses the information of jobs and their completion time to estimate the efficiency of resources. In this method, there are a learner agent and several scheduler agents that perform the task of learning and job scheduling with the use of a coordination strategy that maintains the communication cost at a limited level. We evaluated the efficiency of the CLDS method by designing and performing a set of experiments on a simulated Grid system under different system scales and loads. The results show that the CLDS can effectively balance the load of the system even in large scale and heavy loaded Grids, while maintaining its adaptive performance and scalability.", "title": "" }, { "docid": "d7793313ab21020e79e41817b8372ee8", "text": "We present a new approach to referring expression generation, casting it as a density estimation problem where the goal is to learn distributions over logical expressions identifying sets of objects in the world. Despite an extremely large space of possible expressions, we demonstrate effective learning of a globally normalized log-linear distribution. This learning is enabled by a new, multi-stage approximate inference technique that uses a pruning model to construct only the most likely logical forms. We train and evaluate the approach on a new corpus of references to sets of visual objects. Experiments show the approach is able to learn accurate models, which generate over 87% of the expressions people used. Additionally, on the previously studied special case of single object reference, we show a 35% relative error reduction over previous state of the art.", "title": "" }, { "docid": "b7b1153067a784a681f2c6d0105acb2a", "text": "Investigations of the human connectome have elucidated core features of adult structural networks, particularly the crucial role of hub-regions. However, little is known regarding network organisation of the healthy elderly connectome, a crucial prelude to the systematic study of neurodegenerative disorders. Here, whole-brain probabilistic tractography was performed on high-angular diffusion-weighted images acquired from 115 healthy elderly subjects (age 76-94 years; 65 females). Structural networks were reconstructed between 512 cortical and subcortical brain regions. We sought to investigate the architectural features of hub-regions, as well as left-right asymmetries, and sexual dimorphisms.
We observed that the topology of hub-regions is consistent with a young adult population, and previously published adult connectomic data. More importantly, the architectural features of hub connections reflect their ongoing vital role in network communication. We also found substantial sexual dimorphisms, with females exhibiting stronger inter-hemispheric connections between cingulate and prefrontal cortices. Lastly, we demonstrate intriguing left-lateralized subnetworks consistent with the neural circuitry specialised for language and executive functions, whilst rightward subnetworks were dominant in visual and visuospatial streams. These findings provide insights into healthy brain ageing and provide a benchmark for the study of neurodegenerative disorders such as Alzheimer's disease (AD) and frontotemporal dementia (FTD).", "title": "" }, { "docid": "e2d1f265ab2a93ed852069288b90bcc4", "text": "This paper presents a novel multi-view dense point cloud generation algorithm based on low-altitude remote sensing images. The proposed method was designed to be especially effective in enhancing the density of point clouds generated by Multi-View Stereo (MVS) algorithms. To overcome the limitations of MVS and dense matching algorithms, an expanded patch was set up for each point in the point cloud. Then, a patch-based Multiphoto Geometrically Constrained Matching (MPGC) was employed to optimize points on the patch based on least square adjustment, the space geometry relationship, and epipolar line constraint. The major advantages of this approach are twofold: (1) compared with the MVS method, the proposed algorithm can achieve denser three-dimensional (3D) point cloud data; and (2) compared with the epipolar-based dense matching method, the proposed method utilizes redundant measurements to weaken the influence of occlusion and noise on matching results. Comparison studies and experimental results have validated the accuracy of the proposed algorithm in low-altitude remote sensing image dense point cloud generation.", "title": "" }, { "docid": "f1efe8868f19ccbb4cf2ab5c08961cdb", "text": "High peak-to-average power ratio (PAPR) has been one of the major drawbacks of orthogonal frequency division multiplexing (OFDM) systems. In this letter, we propose a novel PAPR reduction scheme, known as PAPR reducing network (PRNet), based on the autoencoder architecture of deep learning. In the PRNet, the constellation mapping and demapping of symbols on each subcarrier is determined adaptively through a deep learning technique, such that both the bit error rate (BER) and the PAPR of the OFDM system are jointly minimized. We used simulations to show that the proposed scheme outperforms conventional schemes in terms of BER and PAPR.", "title": "" }, { "docid": "dd45eef2b028866faa7d7d133077059a", "text": "In the past 15 years, multiple articles have appeared that target fascia as an important component of treatment in the field of physical medicine and rehabilitation. To better understand the possible actions of fascial treatments, there is a need to clarify the definition of fascia and how it interacts with various other structures: muscles, nerves, vessels, organs. Fascia is a tissue that occurs throughout the body. However, different kinds of fascia exist. In this narrative review, we demonstrate that symptoms related to dysfunction of the lymphatic system, superficial vein system, and thermoregulation are closely related to dysfunction involving superficial fascia. 
Dysfunction involving alterations in mechanical coordination, proprioception, balance, myofascial pain, and cramps are more related to deep fascia and the epimysium. Superficial fascia is obviously more superficial than the other types and contains more elastic tissue. Consequently, effective treatment can probably be achieved with light massage or with treatment modalities that use large surfaces that spread the friction in the first layers of the subcutis. The deep fasciae and the epymisium require treatment that generates enough pressure to reach the surface of muscles. For this reason, the use of small surface tools and manual deep friction with the knuckles or elbows are indicated. Due to different anatomical locations and to the qualities of the fascial tissue, it is important to recognize that different modalities of approach have to be taken into consideration when considering treatment options.", "title": "" }, { "docid": "4b84582e69cd8393ba4dfefb073bf74e", "text": "In maintenance of concrete structures, crack detection is important for the inspection and diagnosis of concrete structures. However, it is difficult to detect cracks automatically. In this paper, we propose a robust automatic crack-detection method from noisy concrete surface images. The proposed method includes two preprocessing steps and two detection steps. The first preprocessing step is a subtraction process using the median filter to remove slight variations like shadings from concrete surface images; only an original image is used in the preprocessing. In the second preprocessing step, a multi-scale line filter with the Hessian matrix is used both to emphasize cracks against blebs or stains and to adapt the width variation of cracks. After the preprocessing, probabilistic relaxation is used to detect cracks coarsely and to prevent noises. It is unnecessary to optimize any parameters in probabilistic relaxation. Finally, using the results from the relaxation process, a locally adaptive thresholding is performed to detect cracks more finely. We evaluate robustness and accuracy of the proposed method quantitatively using 60 actual noisy concrete surface images.", "title": "" }, { "docid": "5dce9610b3985fb7d9628d4c201ef66e", "text": "The recent advances in state estimation, perception, and navigation algorithms have significantly contributed to the ubiquitous use of quadrotors for inspection, mapping, and aerial imaging. To further increase the versatility of quadrotors, recent works investigated the use of an adaptive morphology, which consists of modifying the shape of the vehicle during flight to suit a specific task or environment. However, these works either increase the complexity of the platform or decrease its controllability. In this letter, we propose a novel, simpler, yet effective morphing design for quadrotors consisting of a frame with four independently rotating arms that fold around the main frame. To guarantee stable flight at all times, we exploit an optimal control strategy that adapts on the fly to the drone morphology. We demonstrate the versatility of the proposed adaptive morphology in different tasks, such as negotiation of narrow gaps, close inspection of vertical surfaces, and object grasping and transportation. The experiments are performed on an actual, fully autonomous quadrotor relying solely on onboard visual-inertial sensors and compute. No external motion tracking systems and computers are used. 
This is the first work showing stable flight without requiring any symmetry of the morphology.", "title": "" }, { "docid": "b5c65533fd768b9370d8dc3aba967105", "text": "Agent-based complex systems are dynamic networks of many interacting agents; examples include ecosystems, financial markets, and cities. The search for general principles underlying the internal organization of such systems often uses bottom-up simulation models such as cellular automata and agent-based models. No general framework for designing, testing, and analyzing bottom-up models has yet been established, but recent advances in ecological modeling have come together in a general strategy we call pattern-oriented modeling. This strategy provides a unifying framework for decoding the internal organization of agent-based complex systems and may lead toward unifying algorithmic theories of the relation between adaptive behavior and system complexity.", "title": "" }, { "docid": "47b5e127b64cf1842841afcdb67d6d84", "text": "This work describes the aerodynamic characteristic for aircraft wing model with and without bird feather like winglet. The aerofoil used to construct the whole structure is NACA 653-218 Rectangular wing and this aerofoil has been used to compare the result with previous research using winglet. The model of the rectangular wing with bird feather like winglet has been fabricated using polystyrene before design using CATIA P3 V5R13 software and finally fabricated in wood. The experimental analysis for the aerodynamic characteristic for rectangular wing without winglet, wing with horizontal winglet and wing with 60 degree inclination winglet for Reynolds number 1.66×10, 2.08×10 and 2.50×10 have been carried out in open loop low speed wind tunnel at the Aerodynamics laboratory in Universiti Putra Malaysia. The experimental result shows 25-30 % reduction in drag coefficient and 10-20 % increase in lift coefficient by using bird feather like winglet for angle of attack of 8 degree. Keywords—Aerofoil, Wind tunnel, Winglet, Drag Coefficient.", "title": "" }, { "docid": "78b7987361afd8c7814ee416c81a311b", "text": "This paper presents the characterization of various types of SubMiniature version A (SMA) connectors. The characterization is performed by measurements in frequency and time domain. The SMA connectors are mounted on microstrip (MS) and conductor-backed coplanar waveguide (CPW-CB) manufactured on high-frequency (HF) laminates. The designed characteristic impedance of the transmission lines is 50 Ω and deviation from the designed characteristic impedance is measured. The measurement results suggest that for a given combination of the transmission line and SMA connector, the discontinuity in terms of characteristic impedance can be significantly improved by choosing the right connector type.", "title": "" }, { "docid": "088308b06392780058dd8fa1686c5c35", "text": "Every company should be able to demonstrate own efficiency and effectiveness by used metrics or other processes and standards. Businesses may be missing a direct comparison with competitors in the industry, which is only possible using appropriately chosen instruments, whether financial or non-financial. The main purpose of this study is to describe and compare the approaches of the individual authors. to find metric from reviewed studies which organization use to measuring own marketing activities with following separating into financial metrics and non-financial metrics. 
The paper presents advance in useable metrics, especially financial and non-financial metrics. Selected studies, focusing on different branches and different metrics, were analyzed by the authors. The results of the study is describing relevant metrics to prove efficiency in varied types of organizations in connection with marketing effectiveness. The studies also outline the potential methods for further research focusing on the application of metrics in a diverse environment. The study contributes to a clearer idea of how to measure performance and effectiveness.", "title": "" }, { "docid": "0305bac1e39203b49b794559bfe0b376", "text": "The emerging field of semantic web technologies promises new stimulus for Software Engineering research. However, since the underlying concepts of the semantic web have a long tradition in the knowledge engineering field, it is sometimes hard for software engineers to overlook the variety of ontology-enabled approaches to Software Engineering. In this paper we therefore present some examples of ontology applications throughout the Software Engineering lifecycle. We discuss the advantages of ontologies in each case and provide a framework for classifying the usage of ontologies in Software Engineering.", "title": "" }, { "docid": "86eefd1336d047e16b49297ae628cb6a", "text": "Applications of digital signature technology are on the rise because of legal and technological developments, along with strong market demand for secured transactions on the Internet. In order to predict the future demand for digital signature products and online security, it is important to understand the application development trends in digital signature technology. This comparative study across various modes of e-business indicates that the majority of digital signature applications have been developed for the Business-to-Business (B2B) mode of e-business. This study also indicates a slow adoption rate of digital signature products by governments and the potential for their rapid growth in the Business-to-Consumer (B2C) mode of e-business. These developments promise to provide a robust security infrastructure for online businesses, which may promote e-business further in the future.", "title": "" }, { "docid": "e2de274128ec75d25d9353fc7534eeca", "text": "A central prerequisite to understand the phenomenon of art in psychological terms is to investigate the nature of the underlying perceptual and cognitive processes. Building on a study by Augustin, Leder, Hutzler, and Carbon (2008) the current ERP study examined the neural time course of two central aspects of representational art, one of which is closely related to object- and scene perception, the other of which is art-specific: content and style. We adapted a paradigm that has repeatedly been employed in psycholinguistics and that allows one to examine the neural time course of two processes in terms of when sufficient information is available to allow successful classification. Twenty-two participants viewed pictures that systematically varied in style and content and conducted a combined go/nogo dual choice task. The dependent variables of interest were the Lateralised Readiness Potential (LRP) and the N200 effect. Analyses of both measures support the notion that in the processing of art style follows content, with style-related information being available at around 224 ms or between 40 and 94 ms later than content-related information. 
The paradigm used here offers a promising approach to further explore the time course of art perception, thus helping to unravel the perceptual and cognitive processes that underlie the phenomenon of art and the fascination it exerts.", "title": "" }, { "docid": "dc5f111bfe7fa27ae7e9a4a5ba897b51", "text": "We propose AffordanceNet, a new deep learning approach to simultaneously detect multiple objects and their affordances from RGB images. Our AffordanceNet has two branches: an object detection branch to localize and classify the object, and an affordance detection branch to assign each pixel in the object to its most probable affordance label. The proposed framework employs three key components for effectively handling the multiclass problem in the affordance mask: a sequence of deconvolutional layers, a robust resizing strategy, and a multi-task loss function. The experimental results on the public datasets show that our AffordanceNet outperforms recent state-of-the-art methods by a fair margin, while its end-to-end architecture allows the inference at the speed of 150ms per image. This makes our AffordanceNet well suitable for real-time robotic applications. Furthermore, we demonstrate the effectiveness of AffordanceNet in different testing environments and in real robotic applications. The source code is available at https://github.com/nqanh/affordance-net.", "title": "" } ]
scidocsrr
a10ec8373a777cf959c5f0812920f46f
On Validity of Program Transformations in the Java Memory Model
[ { "docid": "b9a14bea9bb5af017ab325efe76bae84", "text": "A semantics to a small fragment of Java capturing the new memory model (JMM) described in the Language Specification is given by combining operational, denotational and axiomatic techniques in a novel semantic framework. The operational steps (specified in the form of SOS) construct denotational models (configuration structures) and are constrained by the axioms of a configuration theory. The semantics is proven correct with respect to the Language Specification and shown to capture many common examples in the JMM literature.", "title": "" } ]
[ { "docid": "8ac6160d8e6f7d425e2b2416626e5c2d", "text": "This report presents a concept design for the algorithms part of the STL and outlines the design of the supporting language mechanism. Both are radical simplifications of what was proposed in the C++0x draft. In particular, this design consists of only 41 concepts (including supporting concepts), does not require concept maps, and (perhaps most importantly) does not resemble template metaprogramming.", "title": "" }, { "docid": "80f79899a8a049a3cb66c045a6d2f902", "text": "BACKGROUND\nUnderstanding the factors regulating our microbiota is important but requires appropriate statistical methodology. When comparing two or more populations most existing approaches either discount the underlying compositional structure in the microbiome data or use probability models such as the multinomial and Dirichlet-multinomial distributions, which may impose a correlation structure not suitable for microbiome data.\n\n\nOBJECTIVE\nTo develop a methodology that accounts for compositional constraints to reduce false discoveries in detecting differentially abundant taxa at an ecosystem level, while maintaining high statistical power.\n\n\nMETHODS\nWe introduced a novel statistical framework called analysis of composition of microbiomes (ANCOM). ANCOM accounts for the underlying structure in the data and can be used for comparing the composition of microbiomes in two or more populations. ANCOM makes no distributional assumptions and can be implemented in a linear model framework to adjust for covariates as well as model longitudinal data. ANCOM also scales well to compare samples involving thousands of taxa.\n\n\nRESULTS\nWe compared the performance of ANCOM to the standard t-test and a recently published methodology called Zero Inflated Gaussian (ZIG) methodology (1) for drawing inferences on the mean taxa abundance in two or more populations. ANCOM controlled the false discovery rate (FDR) at the desired nominal level while also improving power, whereas the t-test and ZIG had inflated FDRs, in some instances as high as 68% for the t-test and 60% for ZIG. We illustrate the performance of ANCOM using two publicly available microbial datasets in the human gut, demonstrating its general applicability to testing hypotheses about compositional differences in microbial communities.\n\n\nCONCLUSION\nAccounting for compositionality using log-ratio analysis results in significantly improved inference in microbiota survey data.", "title": "" }, { "docid": "189cc09c72686ae7282eef04c1b365f1", "text": "With the rapid growth of the internet as well as increasingly more accessible mobile devices, the amount of information being generated each day is enormous. We have many popular websites such as Yelp, TripAdvisor, Grubhub etc. that offer user ratings and reviews for different restaurants in the world. In most cases, though, the user is just interested in a small subset of the available information, enough to get a general overview of the restaurant and its popular dishes. In this paper, we present a way to mine user reviews to suggest popular dishes for each restaurant. Specifically, we propose a method that extracts and categorize dishes from Yelp restaurant reviews, and then ranks them to recommend the most popular dishes.", "title": "" }, { "docid": "5d1b66986357f2566ac503727a80bb87", "text": "Natural Language Inference (NLI) task requires an agent to determine the logical relationship between a natural language premise and a natural language hypothesis. 
We introduce Interactive Inference Network (IIN), a novel class of neural network architectures that is able to achieve high-level understanding of the sentence pair by hierarchically extracting semantic features from interaction space. We show that an interaction tensor (attention weight) contains semantic information to solve natural language inference, and a denser interaction tensor contains richer semantic information. One instance of such architecture, Densely Interactive Inference Network (DIIN), demonstrates the state-of-the-art performance on large scale NLI corpora and large-scale NLI alike corpus. It’s noteworthy that DIIN achieves a greater than 20% error reduction on the challenging Multi-Genre NLI (MultiNLI; Williams et al. 2017) dataset with respect to the strongest published system.", "title": "" }, { "docid": "1844a5877f911ecaf932282e5a67b727", "text": "Many online social network (OSN) users are unaware of the numerous security risks that exist in these networks, including privacy violations, identity theft, and sexual harassment, just to name a few. According to recent studies, OSN users readily expose personal and private details about themselves, such as relationship status, date of birth, school name, email address, phone number, and even home address. This information, if put into the wrong hands, can be used to harm users both in the virtual world and in the real world. These risks become even more severe when the users are children. In this paper, we present a thorough review of the different security and privacy risks, which threaten the well-being of OSN users in general, and children in particular. In addition, we present an overview of existing solutions that can provide better protection, security, and privacy for OSN users. We also offer simple-to-implement recommendations for OSN users, which can improve their security and privacy when using these platforms. Furthermore, we suggest future research directions.", "title": "" }, { "docid": "6dbaeff4f3cb814a47e8dc94c4660d33", "text": "An Intrusion Detection System (IDS) is a software that monitors a single or a network of computers for malicious activities (attacks) that are aimed at stealing or censoring information or corrupting network protocols. Most techniques used in today’s IDS are not able to deal with the dynamic and complex nature of cyber attacks on computer networks. Hence, efficient adaptive methods like various techniques of machine learning can result in higher detection rates, lower false alarm rates and reasonable computation and communication costs. In this paper, we study several such schemes and compare their performance. We divide the schemes into methods based on classical artificial intelligence (AI) and methods based on computational intelligence (CI). We explain how various characteristics of CI techniques can be used to build efficient IDS.", "title": "" }, { "docid": "dd6b50a56b740d07f3d02139d16eeec4", "text": "Mitochondria play a central role in the aging process. Studies in model organisms have started to integrate mitochondrial effects on aging with the maintenance of protein homeostasis. These findings center on the mitochondrial unfolded protein response (UPR(mt)), which has been implicated in lifespan extension in worms, flies, and mice, suggesting a conserved role in the long-term maintenance of cellular homeostasis. Here, we review current knowledge of the UPR(mt) and discuss its integration with cellular pathways known to regulate lifespan.
We highlight how insight into the UPR(mt) is revolutionizing our understanding of mitochondrial lifespan extension and of the aging process.", "title": "" }, { "docid": "5c690df3977b078243b9cb61e5e712a6", "text": "Computing indirect illumination is a challenging and complex problem for real-time rendering in 3D applications. We present a global illumination approach that computes indirect lighting in real time using a simplified version of the outgoing radiance and the scene stored in voxels. This approach comprehends two-bounce indirect lighting for diffuse, specular and emissive materials. Our voxel structure is based on a directional hierarchical structure stored in 3D textures with mipmapping, the structure is updated in real time utilizing the GPU which enables us to approximate indirect lighting for dynamic scenes. Our algorithm employs a voxel-light pass which calculates voxel direct and global illumination for the simplified outgoing radiance. We perform voxel cone tracing within this voxel structure to approximate different lighting phenomena such as ambient occlusion, soft shadows and indirect lighting. We demonstrate with different tests that our developed approach is capable to compute global illumination of complex scenes on interactive times.", "title": "" }, { "docid": "a92ec968ed54217126dc84660a6602b5", "text": "In the wake of new forms of curricular policy in many parts of the world, teachers are increasingly required to act as agents of change. And yet, teacher agency is under-theorised and often misconstrued in the educational change literature, wherein agency and change are seen as synonymous and positive. This paper addresses the issue of teacher agency in the context of an empirical study of curriculum making in schooling. Drawing upon the existing literature, we outline an ecological view of agency as an effect. These insights frame the analysis of a set of empirical data, derived from a research project about curriculum-making in a school and further education college in Scotland. Based upon the evidence, we argue that the extent to which teachers are able to achieve agency varies from context to context based upon certain environmental conditions of possibility and constraint, and that an important factor in this lies in the beliefs, values and attributes that teachers mobilise in relation to particular situations.", "title": "" }, { "docid": "dbdbdf3df12ef47c778e0e9f4ddfc7d6", "text": "In the recent years, research on speech recognition has given much diligence to the automatic transcription of speech data such as broadcast news (BN), medical transcription, etc. Large Vocabulary Continuous Speech Recognition (LVCSR) systems have been developed successfully for Englishes (American English (AE), British English (BE), etc.) and other languages but in case of Indian English (IE), it is still at infancy stage. IE is one of the varieties of English spoken in Indian subcontinent and is largely different from the English spoken in other parts of the world. In this paper, we have presented our work on LVCSR of IE video lectures. The speech data contains video lectures on various engineering subjects given by the experts from all over India as part of the NPTEL project which comprises of 23 hours. We have used CMU Sphinx for training and decoding in our large vocabulary continuous speech recognition experiments. 
Analysis of the results shows that building an IE acoustic model for IE speech recognition is essential, as it yields a 34% lower average word error rate (WER) than the HUB-4 acoustic models. The average WER before and after adaptation of the IE acoustic model is 38% and 31%, respectively. Even though our IE acoustic model is trained with limited training data and the corpora used for building the language models do not mimic the spoken language, the results are promising and comparable to the results reported for AE lecture recognition in the literature.", "title": "" }, { "docid": "ea8a7678dc2b0059ed491cb311f71c52", "text": "With the advent of safety needles to prevent inadvertent needle sticks in the operating room (OR), a potentially new issue has arisen. These needles may result in coring, or the shaving off of fragments of the rubber stopper, when the needle is pierced through the rubber stopper of the medication vial. These fragments may be left in the vial and then drawn up with the medication and possibly injected into patients. The current study prospectively evaluated the incidence of coring when blunt and sharp needles were used to pierce rubber topped vials. We also evaluated the incidence of coring in empty medication vials with rubber tops. The rubber caps were then pierced with either an 18-gauge sharp hypodermic needle or a blunt plastic (safety) needle. Coring occurred in 102 of 250 (40.8%) vials when a blunt needle was used versus 9 of 215 (4.2%) vials with a sharp needle (P < 0.0001). A significant incidence of coring was demonstrated when a blunt plastic safety needle was used. This situation is potentially a patient safety hazard and methods to eliminate this problem are needed.", "title": "" }, { "docid": "89c52082d42a9f6445a7771852db3330", "text": "Total quality management (TQM) is an approach to management embracing both social and technical dimensions aimed at achieving excellent results, which needs to be put into practice through a specific framework. Nowadays, quality award models, such as the Malcolm Baldrige National Quality Award (MBNQA) and the European Foundation for Quality Management (EFQM) Excellence Model, are used as a guide to TQM implementation by a large number of organizations. Nevertheless, there is a paucity of empirical research confirming whether these models clearly reflect the main premises of TQM. The purpose of this paper is to analyze the extent to which the EFQM Excellence Model captures the main assumptions involved in the TQM concept, that is, the distinction between technical and social TQM issues, the holistic interpretation of TQM in the firm, and the causal linkage between TQM procedures and organizational performance. Based on responses collected from managers of 446 Spanish companies by means of a structured questionnaire, we find that: (a) social and technical dimensions are embedded in the model; (b) both dimensions are intercorrelated; (c) they jointly enhance results. These findings support the EFQM Excellence Model as an operational framework for TQM, and also reinforce the results obtained in previous studies for the MBNQA, suggesting that quality award models really are TQM frameworks.", "title": "" },
{ "docid": "198311a68ad3b9ee8020b91d0b029a3c", "text": "Online multi-object tracking aims at producing complete tracks of multiple objects using the information accumulated up to the present moment. It still remains a difficult problem in complex scenes, because of frequent occlusion by clutter or other objects, similar appearances of different objects, and other factors. In this paper, we propose a robust online multi-object tracking method that can handle these difficulties effectively. We first propose the tracklet confidence using the detectability and continuity of a tracklet, and formulate a multi-object tracking problem based on the tracklet confidence. The multi-object tracking problem is then solved by associating tracklets in different ways according to their confidence values. Based on this strategy, tracklets sequentially grow with online-provided detections, and fragmented tracklets are linked up with others without any iterative and expensive associations. Here, for reliable association between tracklets and detections, we also propose a novel online learning method using an incremental linear discriminant analysis for discriminating the appearances of objects. By exploiting the proposed learning method, tracklet association can be successfully achieved even under severe occlusion. Experiments with challenging public datasets show distinct performance improvement over other batch and online tracking methods.", "title": "" }, { "docid": "e3b91b1133a09d7c57947e2cd85a17c7", "text": "Although mobile devices are gaining more and more capabilities (i.e. CPU power, memory, connectivity, ...), they still fall short to execute complex rich media and data analysis applications. Offloading to the cloud is not always a solution, because of the high WAN latencies, especially for applications with real-time constraints such as augmented reality. Therefore the cloud has to be moved closer to the mobile user in the form of cloudlets. Instead of moving a complete virtual machine from the cloud to the cloudlet, we propose a more fine grained cloudlet concept that manages applications on a component level. Cloudlets do not have to be fixed infrastructure close to the wireless access point, but can be formed in a dynamic way with any device in the LAN network with available resources. We present a cloudlet architecture together with a prototype implementation, showing the advantages and capabilities for a mobile real-time augmented reality application.", "title": "" }, { "docid": "b7fc7aa3a0824c71bc3b00f335b7b65e", "text": "In this paper we advocate the use of device-to-device (D2D) communications in a LoRaWAN Low Power Wide Area Network (LPWAN). After overviewing the critical features of the LoRaWAN technology, we discuss the pros and cons of enabling the D2D communications for it. Subsequently we propose a network-assisted D2D communications protocol and show its feasibility by implementing it on top of a LoRaWAN-certified commercial transceiver. The conducted experiments show the performance of the proposed D2D communications protocol and enable us to assess its performance. More precisely, we show that the D2D communications can reduce the time and energy for data transfer by 6 to 20 times compared to conventional LoRaWAN data transfer mechanisms. In addition, the use of D2D communications may have a positive effect on the network by enabling spatial re-use of the frequency resources. 

The proposed LoRaWAN D2D communications can be used for a wide variety of applications requiring high coverage, e.g. use cases in distributed smart grid deployments for management and trading.", "title": "" }, { "docid": "eb7990a677cd3f96a439af6620331400", "text": "Solving the visual symbol grounding problem has long been a goal of artificial intelligence. The field appears to be advancing closer to this goal with recent breakthroughs in deep learning for natural language grounding in static images. In this paper, we propose to translate videos directly to sentences using a unified deep neural network with both convolutional and recurrent structure. Described video datasets are scarce, and most existing methods have been applied to toy domains with a small vocabulary of possible words. By transferring knowledge from 1.2M+ images with category labels and 100,000+ images with captions, our method is able to create sentence descriptions of open-domain videos with large vocabularies. We compare our approach with recent work using language generation metrics, subject, verb, and object prediction accuracy, and a human evaluation.", "title": "" }, { "docid": "9a9dc194e0ca7d1bb825e8aed5c9b4fe", "text": "In this paper we show how to divide data <italic>D</italic> into <italic>n</italic> pieces in such a way that <italic>D</italic> is easily reconstructable from any <italic>k</italic> pieces, but even complete knowledge of <italic>k</italic> - 1 pieces reveals absolutely no information about <italic>D</italic>. This technique enables the construction of robust key management schemes for cryptographic systems that can function securely and reliably even when misfortunes destroy half the pieces and security breaches expose all but one of the remaining pieces.", "title": "" }, { "docid": "fbf2a211d53603cbcb7441db3006f035", "text": "This letter presents a new metamaterial-based waveguide technology referred to as ridge gap waveguides. The main advantages of the ridge gap waveguides compared to hollow waveguides are that they are planar and much cheaper to manufacture, in particular at high frequencies such as for millimeter and sub- millimeter waves. The latter is due to the fact that there are no mechanical joints across which electric currents must float. The gap waveguides have lower losses than microstrip lines, and they are completely shielded by metal so no additional packaging is needed, in contrast to the severe packaging problems associated with microstrip circuits. The gap waveguides are realized in a narrow gap between two parallel metal plates by using a texture or multilayer structure on one of the surfaces. The waves follow metal ridges in the textured surface. All wave propagation in other directions is prohibited (in cutoff) by realizing a high surface impedance (ideally a perfect magnetic conductor) in the textured surface at both sides of all ridges. Thereby, cavity resonances do not appear either within the band of operation. The present letter introduces the gap waveguide and presents some initial simulated results.", "title": "" }, { "docid": "de4e2e131a0ceaa47934f4e9209b1cdd", "text": "With the popularity of mobile devices, spatial crowdsourcing is rising as a new framework that enables human workers to solve tasks in the physical world. With spatial crowdsourcing, the goal is to crowdsource a set of spatiotemporal tasks (i.e., tasks related to time and location) to a set of workers, which requires the workers to physically travel to those locations in order to perform the tasks. 
In this article, we focus on one class of spatial crowdsourcing, in which the workers send their locations to the server and thereafter the server assigns to every worker tasks in proximity to the worker’s location with the aim of maximizing the overall number of assigned tasks. We formally define this maximum task assignment (MTA) problem in spatial crowdsourcing, and identify its challenges. We propose alternative solutions to address these challenges by exploiting the spatial properties of the problem space, including the spatial distribution and the travel cost of the workers. MTA is based on the assumptions that all tasks are of the same type and all workers are equally qualified in performing the tasks. Meanwhile, different types of tasks may require workers with various skill sets or expertise. Subsequently, we extend MTA by taking the expertise of the workers into consideration. We refer to this problem as the maximum score assignment (MSA) problem and show its practicality and generality. Extensive experiments with various synthetic and two real-world datasets show the applicability of our proposed framework.", "title": "" }, { "docid": "b38529e74442de80822204b63d061e3e", "text": "Factors other than age and genetics may increase the risk of developing Alzheimer disease (AD). Accumulation of the amyloid-β (Aβ) peptide in the brain seems to initiate a cascade of key events in the pathogenesis of AD. Moreover, evidence is emerging that the sleep–wake cycle directly influences levels of Aβ in the brain. In experimental models, sleep deprivation increases the concentration of soluble Aβ and results in chronic accumulation of Aβ, whereas sleep extension has the opposite effect. Furthermore, once Aβ accumulates, increased wakefulness and altered sleep patterns develop. Individuals with early Aβ deposition who still have normal cognitive function report sleep abnormalities, as do individuals with very mild dementia due to AD. Thus, sleep and neurodegenerative disease may influence each other in many ways that have important implications for the diagnosis and treatment of AD.", "title": "" } ]
scidocsrr
60d2b9018208dd89a85c7e6c288d0234
MEDICATION ADMINISTRATION AND THE COMPLEXITY OF NURSING WORKFLOW
[ { "docid": "b6a045abb9881abafae097e29f866745", "text": "AIMS AND OBJECTIVES\nUnderstanding the processes by which nurses administer medication is critical to the minimization of medication errors. This study investigates nurses' views on the factors contributing to medication errors in the hope of facilitating improvements to medication administration processes.\n\n\nDESIGN AND METHODS\nA focus group of nine Registered Nurses discussed medication errors with which they were familiar as a result of both their own experiences and of literature review. The group, along with other researchers, then developed a semi-structured questionnaire consisting of three parts: narrative description of the error, the nurse's background and contributing factors. After the contributing factors had been elicited and verified with eight categories and 34 conditions, additional Registered Nurses were invited to participate by recalling one of the most significant medication errors that they had experienced and identifying contributing factors from those listed on the questionnaire. Identities of the hospital, patient and participants involved in the study remain confidential.\n\n\nRESULTS\nOf the 72 female nurses who responded, 55 (76.4%) believed more than one factor contributed to medication errors. 'Personal neglect' (86.1%), 'heavy workload' (37.5%) and 'new staff' (37.5%) were the three main factors in the eight categories. 'Need to solve other problems while administering drugs,''advanced drug preparation without rechecking,' and 'new graduate' were the top three of the 34 conditions. Medical wards (36.1%) and intensive care units (33.3%) were the two most error-prone places. The errors common to the two were 'wrong dose' (36.1%) and 'wrong drug' (26.4%). Antibiotics (38.9%) were the most commonly misadministered drugs.\n\n\nCONCLUSIONS\nAlthough the majority of respondents considered nurse's personal neglect as the leading factor in medication errors, analysis indicated that additional factors involving the health care system, patients' conditions and doctors' prescriptions all contributed to administration errors.\n\n\nRELEVANCE TO CLINICAL PRACTICE\nIdentification of the main factors and conditions contributing to medication errors allows clinical nurses and administration systems to eliminate situations that promote errors and to incorporate changes that minimize them, creating a safer patient environment.", "title": "" } ]
[ { "docid": "bf53216a95c20d5f41b7821b05418919", "text": "Bowlby's attachment theory is a theory of psychopathology as well as a theory of normal development. It contains clear and specific propositions regarding the role of early experience in developmental psychopathology, the importance of ongoing context, and the nature of the developmental process underlying pathology. In particular, Bowlby argued that adaptation is always the joint product of developmental history and current circumstances (never either alone). Early experience does not cause later pathology in a linear way; yet, it has special significance due to the complex, systemic, transactional nature of development. Prior history is part of current context, playing a role in selection, engagement, and interpretation of subsequent experience and in the use of available environmental supports. Finally, except in very extreme cases, early anxious attachment is not viewed as psychopathology itself or as a direct cause of psychopathology but as an initiator of pathways probabilistically associated with later pathology.", "title": "" }, { "docid": "cc28e571bab9747922008e0ddfebbea4", "text": "Rumelhart and McClelland's chapter about learning the past tense created a degree of controversy extraordinary even in the adversarial culture of modern science. It also stimulated a vast amount of research that advanced the understanding of the past tense, inflectional morphology in English and other languages, the nature of linguistic representations, relations between language and other phenomena such as reading and object recognition, the properties of artificial neural networks, and other topics. We examine the impact of the Rumelhart and McClelland model with the benefit of 25 years of hindsight. It is not clear who \"won\" the debate. It is clear, however, that the core ideas that the model instantiated have been assimilated into many areas in the study of language, changing the focus of research from abstract characterizations of linguistic competence to an emphasis on the role of the statistical structure of language in acquisition and processing.", "title": "" }, { "docid": "af47d1cc068467eaee7b6129682c9ee3", "text": "Diffusion kurtosis imaging (DKI) is gaining rapid adoption in the medical imaging community due to its ability to measure the non-Gaussian property of water diffusion in biological tissues. Compared to traditional diffusion tensor imaging (DTI), DKI can provide additional details about the underlying microstructural characteristics of the neural tissues. It has shown promising results in studies on changes in gray matter and mild traumatic brain injury where DTI is often found to be inadequate. The DKI dataset, which has high-fidelity spatio-angular fields, is difficult to visualize. Glyph-based visualization techniques are commonly used for visualization of DTI datasets; however, due to the rapid changes in orientation, lighting, and occlusion, visually analyzing the much more higher fidelity DKI data is a challenge. In this paper, we provide a systematic way to manage, analyze, and visualize high-fidelity spatio-angular fields from DKI datasets, by using spherical harmonics lighting functions to facilitate insights into the brain microstructure.", "title": "" }, { "docid": "48e48660a711f1cf2d4d7703368b73c9", "text": "Growing evidence suggests that transcriptional regulators and secreted RNA molecules encapsulated within membrane vesicles modify the phenotype of target cells. 
Membrane vesicles, actively released by cells, represent a mechanism of intercellular communication that is conserved evolutionarily and involves the transfer of molecules able to induce epigenetic changes in recipient cells. Extracellular vesicles, which include exosomes and microvesicles, carry proteins, bioactive lipids, and nucleic acids, which are protected from enzyme degradation. These vesicles can transfer signals capable of altering cell function and/or reprogramming targeted cells. In the present review we focus on the extracellular vesicle-induced epigenetic changes in recipient cells that may lead to phenotypic and functional modifications. The relevance of these phenomena in stem cell biology and tissue repair is discussed.", "title": "" }, { "docid": "d8938884a61e7c353d719dbbb65d00d0", "text": "Image encryption plays an important role to ensure confidential transmission and storage of image over internet. However, a real–time image encryption faces a greater challenge due to large amount of data involved. This paper presents a review on image encryption techniques of both full encryption and partial encryption schemes in spatial, frequency and hybrid domains.", "title": "" }, { "docid": "cab0fd454701c0b302040a1875ab2865", "text": "They are susceptible to a variety of attacks, including node capture, physical tampering, and denial of service, while prompting a range of fundamental research challenges.", "title": "" }, { "docid": "57dbe095ca124fbf0fc394b927e9883f", "text": "How much is 131 million US dollars? To help readers put such numbers in context, we propose a new task of automatically generating short descriptions known as perspectives, e.g. “$131 million is about the cost to employ everyone in Texas over a lunch period”. First, we collect a dataset of numeric mentions in news articles, where each mention is labeled with a set of rated perspectives. We then propose a system to generate these descriptions consisting of two steps: formula construction and description generation. In construction, we compose formulae from numeric facts in a knowledge base and rank the resulting formulas based on familiarity, numeric proximity and semantic compatibility. In generation, we convert a formula into natural language using a sequence-to-sequence recurrent neural network. Our system obtains a 15.2% F1 improvement over a non-compositional baseline at formula construction and a 12.5 BLEU point improvement over a baseline description generation.", "title": "" }, { "docid": "b99207292a098761d1bb5cc220cf0790", "text": "Many researchers have attempted to predict the Enron corporate hierarchy from the data. This work, however, has been hampered by a lack of data. We present a new, large, and freely available gold-standard hierarchy. Using our new gold standard, we show that a simple lower bound for social network-based systems outperforms an upper bound on the approach taken by current NLP systems.", "title": "" }, { "docid": "c4e7c757ad5a67b550d09f530b5204ef", "text": "This paper describes our effort for a planning-based computational model of narrative generation that is designed to elicit surprise in the reader's mind, making use of two temporal narrative devices: flashback and foreshadowing. In our computational model, flashback provides a backstory to explain what causes a surprising outcome, while foreshadowing gives hints about the surprise before it occurs. 
Here, we present Prevoyant, a planning-based computational model of surprise arousal in narrative generation, and analyze the effectiveness of Prevoyant. The work here also presents a methodology to evaluate surprise in narrative generation using a planning-based approach based on the cognitive model of surprise causes. The results of the experiments that we conducted show strong support that Prevoyant effectively generates a discourse structure for surprise arousal in narrative.", "title": "" }, { "docid": "e4a13888d88b3d7df1956813c06c4fd9", "text": "Climate change is predicted to have a range of direct and indirect impacts on marine and freshwater capture fisheries, with implications for fisheries-dependent economies, coastal communities and fisherfolk. This technical paper reviews these predicted impacts, and introduces and applies the concepts of vulnerability, adaptation and adaptive capacity. Capture fisheries are largely driven by fossil fuels and so contribute to greenhouse gas emissions through fishing operations, estimated at 40-130 Tg CO2. Transportation of catches is another source of emissions, which are uncertain due to modes and distances of transportation but may exceed those from fishing operations. Mitigation measures may impact on fisheries by increasing the cost of fossil fuel use. Fisheries and fisherfolk may be impacted in a wide range of ways due to climate change. These include biophysical impacts on the distribution or productivity of marine and freshwater fish stocks through processes such as ocean acidification, habitat damage, changes in oceanography, disruption to precipitation and freshwater availability. Fisheries will also be exposed to a diverse range of direct and indirect climate impacts, including displacement and migration of human populations; impacts on coastal communities and infrastructure due to sea level rise; and changes in the frequency, distribution or intensity of tropical storms. Fisheries are dynamic social-ecological systems and are already experiencing rapid change in markets, exploitation and governance, ensuring a constantly developing context for future climate-related impacts. These existing socioeconomic trends and the indirect effects of climate change may interact with, amplify or even overwhelm biophysical impacts on fish ecology. The variety of different impact mechanisms, complex interactions between social, ecological and economic systems, and Climate change implications for fisheries and aquaculture – Overview of current scientific knowledge 108 the possibility of sudden and surprising changes make future effects of climate change on fisheries difficult to predict. The vulnerability of fisheries and fishing communities depends on their exposure and sensitivity to change, but also on the ability of individuals or systems to anticipate and adapt. This adaptive capacity relies on various assets and can be constrained by culture or marginalization. Vulnerability varies between countries and communities, and between demographic groups within society. Generally, poorer and less empowered countries and individuals are more vulnerable to climate impacts, and the vulnerability of fisheries is likely to be higher where they already suffer from overexploitation or overcapacity. Adaptation to climate impacts includes reactive or anticipatory actions by individuals or public institutions. These range from abandoning fisheries altogether for alternative occupations, to developing insurance and warning systems and changing fishing operations. 
Governance of fisheries affects the range of adaptation options available and will need to be flexible enough to account for changes in stock distribution and abundance. Governance aimed towards equitable and sustainable fisheries, accepting inherent uncertainty, and based on an ecosystem approach, as currently advocated, is thought to generally improve the adaptive capacity of fisheries. However, adaptation may be costly and limited in scope, so that mitigation of emissions to minimise climate change remain a key responsibility of governments. ACKNOWLEDGEMENTS This report was compiled with input from Eddie Allison from the WorldFish Center, Penang, and benefited from the comments of participants at the FAO Workshop on Climate Change Implications for Fisheries and Aquaculture held in Rome from 7 to 9 April 2008. Cassandra De Young also provided comments which improved the report. Climate change and capture fisheries: potential impacts, adaptation and mitigation 109", "title": "" }, { "docid": "bf9ef1e84275ac77be0fd71334dde1ab", "text": "The development of summarization research has been significantly hampered by the costly acquisition of reference summaries. This paper proposes an effective way to automatically collect large scales of news-related multi-document summaries with reference to social media’s reactions. We utilize two types of social labels in tweets, i.e., hashtags and hyper-links. Hashtags are used to cluster documents into different topic sets. Also, a tweet with a hyper-link often highlights certain key points of the corresponding document. We synthesize a linked document cluster to form a reference summary which can cover most key points. To this aim, we adopt the ROUGE metrics to measure the coverage ratio, and develop an Integer Linear Programming solution to discover the sentence set reaching the upper bound of ROUGE. Since we allow summary sentences to be selected from both documents and highquality tweets, the generated reference summaries could be abstractive. Both informativeness and readability of the collected summaries are verified by manual judgment. In addition, we train a Support Vector Regression summarizer on DUC generic multi-document summarization benchmarks. With the collected data as extra training resource, the performance of the summarizer improves a lot on all the test sets. We release this dataset for further research.", "title": "" }, { "docid": "4dd0d34f6b67edee60f2e6fae5bd8dd9", "text": "Virtual learning environments facilitate online learning, generating and storing large amounts of data during the learning/teaching process. This stored data enables extraction of valuable information using data mining. In this article, we present a systematic mapping, containing 42 papers, where data mining techniques are applied to predict students performance using Moodle data. Results show that decision trees are the most used classification approach. Furthermore, students interactions in forums are the main Moodle attribute analyzed by researchers.", "title": "" }, { "docid": "7bedcb8eb5f458ba238c82249c80657d", "text": "The spread of antibiotic-resistant bacteria is a growing problem and a public health issue. In recent decades, various genetic mechanisms involved in the spread of resistance genes among bacteria have been identified. Integrons - genetic elements that acquire, exchange, and express genes embedded within gene cassettes (GC) - are one of these mechanisms. 
Integrons are widely distributed, especially in Gram-negative bacteria; they are carried by mobile genetic elements, plasmids, and transposons, which promote their spread within bacterial communities. Initially studied mainly in the clinical setting for their involvement in antibiotic resistance, their role in the environment is now an increasing focus of attention. The aim of this review is to provide an in-depth analysis of recent studies of antibiotic-resistance integrons in the environment, highlighting their potential involvement in antibiotic-resistance outside the clinical context. We will focus particularly on the impact of human activities (agriculture, industries, wastewater treatment, etc.).", "title": "" }, { "docid": "b04ae3842293f5f81433afbaa441010a", "text": "Rootkits Trojan virus, which can control attacked computers, delete import files and even steal password, are much popular now. Interrupt Descriptor Table (IDT) hook is rootkit technology in kernel level of Trojan. The paper makes deeply analysis on the IDT hooks handle procedure of rootkit Trojan according to previous other researchers methods. We compare its IDT structure and programs to find how Trojan interrupt handler code can respond the interrupt vector request in both real address mode and protected address mode. Finally, we analyze the IDT hook detection methods of rootkits Trojan by Windbg or other professional tools.", "title": "" }, { "docid": "9c0db9ac984a93d4a0019dd76e6ccdcf", "text": "This paper presents a high power efficient broad-band programmable gain amplifier with multi-band switching. The proposed two stage common-emitter amplifier, by using the current reuse topology with a magnetically coupled transformer and a MOS varactor bank as a frequency tunable load, achieves a 55.9% peak power added efficiency (PAE), a peak saturated power of +11.1 dBm, a variable gain from 1.8 to 16 dB, and a tunable large signal 3-dB bandwidth from 24.3 to 35 GHz. The design is fabricated in a commercial 0.18- μm SiGe BiCMOS technology and measured with an output 1-dB gain compression point which is better than +9.6 dBm and a maximum dc power consumption of 22.5 mW from a single 1.8 V supply. The core amplifier, excluding the measurement pads, occupies a die area of 500 μm×450 μm.", "title": "" }, { "docid": "b54215466bcdf86442f9a6e87e831069", "text": "In this paper, we consider the problem of tracking human motion with a 22-DOF kinematic model from depth images. In contrast to existing approaches, our system naturally scales to multiple sensors. The motivation behind our approach, termed Multiple Depth Camera Approach (MDCA), is that by using several cameras, we can significantly improve the tracking quality and reduce ambiguities as for example caused by occlusions. By fusing the depth images of all available cameras into one joint point cloud, we can seamlessly incorporate the available information from multiple sensors into the pose estimation. To track the high-dimensional human pose, we employ state-of-the-art annealed particle filtering and partition sampling. We compute the particle likelihood based on the truncated signed distance of each observed point to a parameterized human shape model. We apply a coarse-to-fine scheme to recognize a wide range of poses to initialize the tracker. In our experiments, we demonstrate that our approach can accurately track human motion in real-time (15Hz) on a GPGPU. 
In direct comparison to two existing trackers (OpenNI, Microsoft Kinect SDK), we found that our approach is significantly more robust for unconstrained motions and under (partial) occlusions.", "title": "" }, { "docid": "a5d8fa2e03cb51b30013a9e21477ef61", "text": "PURPOSE\nThe aim of this study was to establish the role of magnetic resonance imaging (MRI) in patients with Mayer-Rokitansky-Kuster-Hauser syndrome (MRKHS).\n\n\nMATERIALS AND METHODS\nSixteen female MRKHS patients (mean age, 19.4 years; range, 11-39 years) were studied using MRI. Two experienced radiologists evaluated all the images in consensus to assess the presence or absence of the ovaries, uterus, and vagina. Additional urogenital or vertebral pathologies were also noted.\n\n\nRESULTS\nOf the 16 patients, complete aplasia of uterus was seen in five patients (31.3%). Uterine hypoplasia or remnant uterus was detected in 11 patients (68.8%). Ovaries were clearly seen in 10 patients (62.5%), and in two of the 10 patients, no descent of ovaries was detected. In five patients, ovaries could not be detected on MRI. In one patient, agenesis of right ovary was seen, and the left ovary was in normal shape. Of the 16 cases, 11 (68.8%) had no other extragenital abnormalities. Additional abnormalities were detected in six patients (37.5%). Two of the six had renal agenesis, and one patient had horseshoe kidney; renal ectopy was detected in two patients, and one patient had urachal remnant. Vertebral abnormalities were detected in two patients; one had L5 posterior fusion defect, bilateral hemisacralization, and rotoscoliosis, and the other had coccygeal vertebral fusion.\n\n\nCONCLUSION\nMRI is a useful and noninvasive imaging method in the diagnosis and evaluation of patients with MRKHS.", "title": "" }, { "docid": "2e5789bd2089a4b15686a595b79eb7cc", "text": "Background: The growing prevalence of chronic diseases and home-based treatments has led to the introduction of a large number of instruments for assessing the caregiving-related problems associated with specific diseases, but our Family Strain Questionnaire (FSQ) was designed to provide a basis for general screening and comparison regardless of the disease. We here describe the final validation of its psychometric characteristics. Methods: The FSQ consists of a brief semi-structured interview and 44 dichotomic items, and has now been administered to 811 caregivers (285 were simultaneously administered other questionnaires assessing anxiety and depressive symptoms). After a factorial analysis confirmed the 5-factor structure identified in previous studies (emotional burden, problems in social involvement, need for knowledge about the disease, satisfaction with family relationships, and thoughts about death), we undertook correlation and reliability analyses, and a receiver operating characteristics curve analysis designed to determine the cut-off point for the emotional problems identified by the first factor. Finally, univariate ANOVA with Bonferroni's post-hoc test was used to compare the disease-specific scores. Results: The validity and reliability of the FSQ is good, and its factorial structure refers to areas that are internationally considered as being of general importance. The semi-structured interview collects information concerning the socio-economic status of caregivers and their convictions/interpretations concerning the diseases of their patients. 
Conclusions: The FSQ can be used as a single instrument for the general assessment of caregiving-related problems regardless of the reference disease. This makes it possible to reduce administration and analysis times, and compare the problems experienced by the caregivers of patients with different diseases.", "title": "" }, { "docid": "4c290421dc42c3a5a56c7a4b373063e5", "text": "In this paper, we provide a graph theoretical framework that allows us to formally define formations of multiple vehicles and the issues arising in uniqueness of graph realizations and its connection to stability of formations. The notion of graph rigidity is crucial in identifying the shape variables of a formation and an appropriate potential function associated with the formation. This allows formulation of meaningful optimization or nonlinear control problems for formation stabilization/tacking, in addition to formal representation of split, rejoin, and reconfiguration maneuvers for multi-vehicle formations. We introduce an algebra that consists of performing some basic operations on graphs which allow creation of larger rigidby-construction graphs by combining smaller rigid subgraphs. This is particularly useful in performing and representing rejoin/split maneuvers of multiple formations in a distributed fashion.", "title": "" }, { "docid": "5d546a8d21859a057d36cdbd3fa7f887", "text": "In 1984, a prospective cohort study, Coronary Artery Risk Development in Young Adults (CARDIA) was initiated to investigate life-style and other factors that influence, favorably and unfavorably, the evolution of coronary heart disease risk factors during young adulthood. After a year of planning and protocol development, 5,116 black and white women and men, age 18-30 years, were recruited and examined in four urban areas: Birmingham, Alabama; Chicago, Illinois; Minneapolis, Minnesota, and Oakland, California. The initial examination included carefully standardized measurements of major risk factors as well as assessments of psychosocial, dietary, and exercise-related characteristics that might influence them, or that might be independent risk factors. This report presents the recruitment and examination methods as well as the mean levels of blood pressure, total plasma cholesterol, height, weight and body mass index, and the prevalence of cigarette smoking by age, sex, race and educational level. Compared to recent national samples, smoking is less prevalent in CARDIA participants, and weight tends to be greater. Cholesterol levels are representative and somewhat lower blood pressures in CARDIA are probably, at least in part, due to differences in measurement methods. Especially noteworthy among several differences in risk factor levels by demographic subgroup, were a higher body mass index among black than white women and much higher prevalence of cigarette smoking among persons with no more than a high school education than among those with more education.", "title": "" } ]
scidocsrr
a5e284358add46d05c289c0061080cb7
A Neural Attention Model for Sentence Summarization
[ { "docid": "062c970a14ac0715ccf96cee464a4fec", "text": "A goal of statistical language modeling is to learn the joint probability function of sequences of words in a language. This is intrinsically difficult because of the curse of dimensionality: a word sequence on which the model will be tested is likely to be different from all the word sequences seen during training. Traditional but very successful approaches based on n-grams obtain generalization by concatenating very short overlapping sequences seen in the training set. We propose to fight the curse of dimensionality by learning a distributed representation for words which allows each training sentence to inform the model about an exponential number of semantically neighboring sentences. The model learns simultaneously (1) a distributed representation for each word along with (2) the probability function for word sequences, expressed in terms of these representations. Generalization is obtained because a sequence of words that has never been seen before gets high probability if it is made of words that are similar (in the sense of having a nearby representation) to words forming an already seen sentence. Training such large models (with millions of parameters) within a reasonable time is itself a significant challenge. We report on experiments using neural networks for the probability function, showing on two text corpora that the proposed approach significantly improves on state-of-the-art n-gram models, and that the proposed approach allows to take advantage of longer contexts.", "title": "" } ]
[ { "docid": "1f6b1757282fda5bae06cd0617054642", "text": "A crucial step toward the goal of automatic extraction of propositionalinformationfrom naturallanguagetext is the identificationof semanticrelations betweenconstituentsin sentences.We examinethe problemof distinguishing amongsevenrelationtypesthatcanoccurbetweentheentities“treatment”and “disease” in biosciencetext, and the problemof identifyingsuchentities.We comparefive generati ve graphicalmodels anda neuralnetwork, usinglexical, syntactic,andsemanticfeatures,finding that thelatterhelpachieve high classificationaccuracy.", "title": "" }, { "docid": "abc75d5b44323133d2b1ffef57a920f3", "text": "With the increasing adoption of mobile 4G LTE networks, video streaming as the major contributor of 4G LTE data traffic, has become extremely hot. However, the battery life has become the bottleneck when mobile users are using online video services. In this paper, we deploy a real mobile system for power measurement and profiling of online video streaming in 4G LTE networks. Based on some designed experiments with different configurations, we measure the power consumption for online video streaming, offline video playing, and mobile background. A RRC state study is taken to understand how RRC states impact power consumption. Then, we profile the power consumption of video streaming and show the results with different impact factors. According to our experimental statistics, the power saving room for online video streaming in 4G LTE networks can be up to 69%.", "title": "" }, { "docid": "028cdddc5d61865d0ea288180cef91c0", "text": "This paper investigates the use of Convolutional Neural Networks for classification of painted symbolic road markings. Previous work on road marking recognition is mostly based on either template matching or on classical feature extraction followed by classifier training which is not always effective and based on feature engineering. However, with the rise of deep neural networks and their success in ADAS systems, it is natural to investigate the suitability of CNN for road marking recognition. Unlike others, our focus is solely on road marking recognition and not detection; which has been extensively explored and conventionally based on MSER feature extraction of the IPM images. We train five different CNN architectures with variable number of convolution/max-pooling and fully connected layers, and different resolution of road mark patches. We use a publicly available road marking data set and incorporate data augmentation to enhance the size of this data set which is required for training deep nets. The augmented data set is randomly partitioned in 70% and 30% for training and testing. The best CNN network results in an average recognition rate of 99.05% for 10 classes of road markings on the test set.", "title": "" }, { "docid": "1d2de6042cb07b0b29d3e1f99483fa5c", "text": "The incorporation of prior knowledge into learning is essential in achieving good performance based on small noisy samples. Such knowledge is often incorporated through the availability of related data arising from domains and tasks similar to the one of current interest. Ideally one would like to allow both the data for the current task and for previous related tasks to self-organize the learning system in such a way that commonalities and differences between the tasks are learned in a datadriven fashion. 
We develop a framework for learning multiple tasks simultaneously, based on sharing features that are common to all tasks, achieved through the use of a modular deep feedforward neural network consisting of shared branches, dealing with the common features of all tasks, and private branches, learning the specific unique aspects of each task. Once an appropriate weight sharing architecture has been established, learning takes place through standard algorithms for feedforward networks, e.g., stochastic gradient descent and its variations. The method deals with domain adaptation and multi-task learning in a unified fashion, and can easily deal with data arising from different types of sources. Numerical experiments demonstrate the effectiveness of learning in domain adaptation and transfer learning setups, and provide evidence for the flexible and task-oriented representations arising in the network.", "title": "" }, { "docid": "78c8331beb0d09570c4063fab7d21f2d", "text": "This paper presents a new single stage dc-dc boost converter topology with very large gain conversion ratio as a switched inductor multilevel boost converter (SIMLBC). It is a PWM-based dc-dc converter which combines the Switched-Inductor Structures and the switching capacitor function to provide a very large output voltage with different output dc levels which makes it suitable for multilevel inverter applications. The proposed topology has only single switch like the conventional dc-dc converter which can be controlled in a very simple way. In addition to, two inductors, 2N+2 diodes, N is the number of output dc voltage levels, and 2N-1 dc capacitors. A high switching frequency is employed to decrease the size of these components and thus much increasing the dynamic performance. The proposed topology has been compared with the existence dc-dc boost converters and it gives a higher voltage gain conversion ratio. The proposed converter has been analyzed, simulated and a prototype has been built and experimentally tested. Simulation and experimental results have been provided for validation.", "title": "" }, { "docid": "853b3fc8a979abd7e13a87b5c3b4a264", "text": "In this paper, we present a novel control law for longitudinal speed control of autonomous vehicles. The key contributions of the proposed work include the design of a control law that reactively integrates the longitudinal surface gradient of road into its operation. In contrast to the existing works, we found that integrating the path gradient into the control framework improves the speed tracking efficacy. Since the control law is implemented over a shrinking domain scheme, it minimizes the integrated error by recomputing the control inputs at every discretized step and consequently provides less reaction time. This makes our control law suitable for motion planning frameworks that are operating at high frequencies. Furthermore, our work is implemented using a generalized vehicle model and can be easily extended to other classes of vehicles. The performance of gradient aware shrinking domain based controller is implemented and tested on a stock electric vehicle on which a number of sensors are mounted. 
Results from the tests show the robustness of our control law for speed tracking on a terrain with varying gradient while also considering stringent time constraints imposed by the planning framework.", "title": "" }, { "docid": "937cb60b2eea0611e9c2b55dcbd85457", "text": "In the era of big data, with the increasing number of audit data features, human-centered smart intrusion detection system performance is decreasing in training time and classification accuracy, and many support vector machine (SVM)-based intrusion detection algorithms have been widely used to identify an intrusion quickly and accurately. This paper proposes the FWP-SVM-genetic algorithm (GA) (feature selection, weight, and parameter optimization of support vector machine based on the genetic algorithm) based on the characteristics of the GA and the SVM algorithm. The algorithm first optimizes the crossover probability and mutation probability of GA according to the population evolution algebra and fitness value; then, it subsequently uses a feature selection method based on the genetic algorithm with an innovation in the fitness function that decreases the SVM error rate and increases the true positive rate. Finally, according to the optimal feature subset, the feature weights and parameters of SVM are simultaneously optimized. The simulation results show that the algorithm accelerates the algorithm convergence, increases the true positive rate, decreases the error rate, and shortens the classification time. Compared with other SVM-based intrusion detection algorithms, the detection rate is higher and the false positive and false negative rates are lower.", "title": "" }, { "docid": "78e21364224b9aa95f86ac31e38916ef", "text": "Gamification is the use of game design elements and game mechanics in non-game contexts. This idea has been used successfully in many web based businesses to increase user engagement. Some researchers suggest that it could also be used in web based education as a tool to increase student motivation and engagement. In an attempt to verify those theories, we have designed and built a gamification plugin for a well-known e-learning platform. We have made an experiment using this plugin in a university course, collecting quantitative and qualitative data in the process. Our findings suggest that some common beliefs about the benefits obtained when using games in education can be challenged. Students who completed the gamified experience got better scores in practical assignments and in overall score, but our findings also suggest that these students performed poorly on written assignments and participated less on class activities, although their initial motivation was higher. 2013 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "90e218a8ae79dc1d53e53d4eb63839b8", "text": "Doubly fed induction generator (DFIG) technology is the dominant technology in the growing global market for wind power generation, due to the combination of variable-speed operation and a cost-effective partially rated power converter. However, the DFIG is sensitive to dips in supply voltage and without specific protection to “ride-through” grid faults, a DFIG risks damage to its power converter due to overcurrent and/or overvoltage. Conventional converter protection via a sustained period of rotor-crowbar closed circuit leads to poor power output and sustained suppression of the stator voltages. 
A new minimum-threshold rotor-crowbar method is presented in this paper, improving fault response by reducing crowbar application periods to 11-16 ms, successfully diverting transient overcurrents, and restoring good power control within 45 ms of both fault initiation and clearance, thus enabling the DFIG to meet grid-code fault-ride-through requirements. The new method is experimentally verified and evaluated using a 7.5-kW test facility.", "title": "" }, { "docid": "aaececc42ba6d87ec018788fa73bc792", "text": "This review set out to address three apparently simple questions:  What makes 'great teaching'?  What kinds of frameworks or tools could help us to capture it?  How could this promote better learning? Question 1: \" What makes great teaching? \" Great teaching is defined as that which leads to improved student progress We define effective teaching as that which leads to improved student achievement using outcomes that matter to their future success. Defining effective teaching is not easy. The research keeps coming back to this critical point: student progress is the yardstick by which teacher quality should be assessed. Ultimately, for a judgement about whether teaching is effective, to be seen as trustworthy, it must be checked against the progress being made by students. The six components of great teaching Schools currently use a number of frameworks that describe the core elements of effective teaching. The problem is that these attributes are so broadly defined that they can be open to wide and different interpretation whether high quality teaching has been observed in the classroom. It is important to understand these limitations when making assessments about teaching quality. Below we list the six common components suggested by research that teachers should consider when assessing teaching quality. We list these approaches, skills and knowledge in order of how strong the evidence is in showing that focusing on them can improve student outcomes. This should be seen as offering a 'starter kit' for thinking about effective pedagogy. Good quality teaching will likely involve a combination of these attributes manifested at different times; the very best teachers are those that demonstrate all of these features. The most effective teachers have deep knowledge of the subjects they teach, and when teachers' knowledge falls below a certain level it is a significant impediment to students' learning. As well as a strong understanding of the material being taught, teachers must also understand the ways students think about the content, be able to evaluate the thinking behind students' own methods, and identify students' common misconceptions. Includes elements such as effective questioning and use of assessment by teachers. Specific practices, like reviewing previous learning, providing model responses for students, giving adequate time for practice to embed skills securely Executive Summary 3 and progressively introducing new learning (scaffolding) are also elements of high quality instruction. 3. Classroom climate (Moderate evidence of impact on student outcomes) Covers …", "title": "" }, { "docid": "3c8cc4192ee6ddd126e53c8ab242f396", "text": "There are several approaches for automated functional web testing and the choice among them depends on a number of factors, including the tools used for web testing and the costs associated with their adoption. 
In this paper, we present an empirical cost/benefit analysis of two different categories of automated functional web testing approaches: (1) capture-replay web testing (in particular, using Selenium IDE); and, (2) programmable web testing (using Selenium WebDriver). On a set of six web applications, we evaluated the costs of applying these testing approaches both when developing the initial test suites from scratch and when the test suites are maintained, upon the release of a new software version. Results indicate that, on the one hand, the development of the test suites is more expensive in terms of time required (between 32% and 112%) when the programmable web testing approach is adopted, but on the other hand, test suite maintenance is less expensive when this approach is used (with a saving between 16% and 51%). We found that, in the majority of the cases, after a small number of releases (from one to three), the cumulative cost of programmable web testing becomes lower than the cost involved with capture-replay web testing and the cost saving gets amplified over the successive releases.", "title": "" }, { "docid": "0c3387ec7ed161d931bc08151e722d10", "text": "New updated! The latest book from a very famous author finally comes out. Book of the tower of hanoi myths and maths, as an amazing reference becomes what you need to get. What's for is this book? Are you still thinking for what the book is? Well, this is what you probably will get. You should have made proper choices for your better life. Book, as a source that may involve the facts, opinion, literature, religion, and many others are the great friends to join with.", "title": "" }, { "docid": "55f118976784a7244859e0256c4660e3", "text": "The developments of content based image retrieval (CBIR) systems used for image archiving are continued and one of the important research topics. Although some studies have been presented general image achieving, proposed CBIR systems for archiving of medical images are not very efficient. In presented study, it is examined the retrieval efficiency rate of spatial methods used for feature extraction for medical image retrieval systems. The investigated algorithms in this study depend on gray level co-occurrence matrix (GLCM), gray level run length matrix (GLRLM), and Gabor wavelet accepted as spatial methods. In the experiments, the database is built including hundreds of medical images such as brain, lung, sinus, and bone. The results obtained in this study shows that queries based on statistics obtained from GLCM are satisfied. However, it is observed that Gabor Wavelet has been the most effective and accurate method.", "title": "" }, { "docid": "fd3297e53076595bdffccabe78e17a46", "text": "The UrBan Interactions (UBI) research program, coordinated by the University of Oulu, has created a middleware layer on top of the panOULU wireless network and opened it up to ubiquitous-computing researchers, offering opportunities to enhance and facilitate communication between citizens and the government.", "title": "" }, { "docid": "6020b70701164e0a14b435153db1743e", "text": "Supply chain Management has assumed a significant role in firm's performance and has attracted serious research attention over the last few years. In this paper attempt has been made to review the literature on Supply Chain Management. A literature review reveals a considerable spurt in research in theory and practice of SCM. We have presented a literature review for 29 research papers for the period between 2005 and 2011. 
The aim of this study was to provide an up-to-date and brief review of the SCM literature that was focused on broad areas of the SCM concept.", "title": "" }, { "docid": "4076b5d1338a7552453e284019406129", "text": "Knowledge bases (KBs) are paramount in NLP. We employ multiview learning for increasing accuracy and coverage of entity type information in KBs. We rely on two metaviews: language and representation. For language, we consider high-resource and lowresource languages from Wikipedia. For representation, we consider representations based on the context distribution of the entity (i.e., on its embedding), on the entity’s name (i.e., on its surface form) and on its description in Wikipedia. The two metaviews language and representation can be freely combined: each pair of language and representation (e.g., German embedding, English description, Spanish name) is a distinct view. Our experiments on entity typing with fine-grained classes demonstrate the effectiveness of multiview learning. We release MVET, a large multiview – and, in particular, multilingual – entity typing dataset we created. Monoand multilingual finegrained entity typing systems can be evaluated on this dataset.", "title": "" }, { "docid": "0a0e33e03036ef025eb8450bedaf0c1f", "text": "Recently there has been considerable interest in EEG-based emotion recognition (EEG-ER), which is one of the utilization of BCI. However, it is not easy to realize the EEG-ER system which can recognize emotions with high accuracy because of the tendency for important information in EEG signals to be concealed by noises. Deep learning is the golden tool to grasp the features concealed in EEG data and enable highly accurate EEG-ER because deep neural networks (DNNs) may have higher recognition capability than humans'. The publicly available dataset named DEAP, which is for emotion analysis using EEG, was used in the experiment. The CNN and a conventional model used for comparison are evaluated by the tests according to 11-fold cross validation scheme. EEG raw data obtained from 16 electrodes without general preprocesses were used as input data. The models classify and recognize EEG signals according to the emotional states \"positive\" or \"negative\" which were caused by watching music videos. The results show that the more training data are, the much higher the accuracies of CNNs are (by over 20%). It also suggests that the increased training data need not to belong to the same person's EEG data as the test data so as to get the CNN recognizing emotions accurately. The results indicate that there are not only the considerable amount of the interpersonal difference but also commonality of EEG properties.", "title": "" }, { "docid": "49568236b0e221053c32b73b896d3dde", "text": "The continuous growth in the size and use of the Internet is creating difficulties in the search for information. A sophisticated method to organize the layout of the information and assist user navigation is therefore particularly important. In this paper, we evaluate the feasibility of using a self-organizing map (SOM) to mine web log data and provide a visual tool to assist user navigation. We have developed LOGSOM, a system that utilizes Kohonen’s self-organizing map to organize web pages into a two-dimensional map. The organization of the web pages is based solely on the users’ navigation behavior, rather than the content of the web pages. 
The resulting map not only provides a meaningful navigation tool (for web users) that is easily incorporated with web browsers, but also serves as a visual analysis tool for webmasters to better understand the characteristics and navigation behaviors of web users visiting their pages. D 2002 Elsevier Science B.V. All rights reserved.", "title": "" }, { "docid": "f001d0801b892b9a85bd8cf4870f1007", "text": "Several supervised approaches have been proposed for causality identification by relying on shallow linguistic features. However, such features do not lead to improved performance. Therefore, novel sources of knowledge are required to achieve progress on this problem. In this paper, we propose a model for the recognition of causality in verb-noun pairs by employing additional types of knowledge along with linguistic features. In particular, we focus on identifying and employing semantic classes of nouns and verbs with high tendency to encode cause or non-cause relations. Our model incorporates the information about these classes to minimize errors in predictions made by a basic supervised classifier relying merely on shallow linguistic features. As compared with this basic classifier our model achieves 14.74% (29.57%) improvement in F-score (accuracy), respectively.", "title": "" }, { "docid": "bdd44aeacddeefdfc2e3a5abf1088f2c", "text": "Elevation data is an important component of geospatial database. This paper focuses on digital surface model (DSM) generation from high-resolution satellite imagery (HRSI). The HRSI systems, such as IKONOS and QuickBird have initialed a new era of Earth observation and digital mapping. The half-meter or better resolution imagery from Worldview-1 and the planned GeoEye-1 allows for accurate and reliable extraction and characterization of even more details of the earth surface. In this paper, the DSM is generated using an advanced image matching approach which integrates point and edge matching algorithms. This approach produces reliable, precise, and very dense 3D points for high quality digital surface models which also preserve discontinuities. Following the DSM generation, the accuracy of the DSM has been assessed and reported. To serve both as a reference surface and a basis for comparison, a lidar DSM has been employed in a testfield with differing terrain types and slope.", "title": "" } ]
scidocsrr
bf573ca20f03af584b2bda71f75f2d05
Training an Interactive Humanoid Robot Using Multimodal Deep Reinforcement Learning
[ { "docid": "5273e9fea51c85651255de7c253066a0", "text": "This paper presents SimpleDS, a simple and publicly available dialogue system trained with deep reinforcement learning. In contrast to previous reinforcement learning dialogue systems, this system avoids manual feature engineering by performing action selection directly from raw text of the last system and (noisy) user responses. Our initial results, in the restaurant domain, report that it is indeed possible to induce reasonable behaviours with such an approach that aims for higher levels of automation in dialogue control for intelligent interactive agents.", "title": "" } ]
[ { "docid": "db78f4fa7e3a795b14f423a7dfa99828", "text": "Music shares a very special relation with human emotions. We often choose to listen to a song or music which best fits our mood at that instant. In spite of this strong correlation, most of the music applications today are devoid of providing the facility of mood-aware playlist generation. We wish to contribute to automatic identification of mood in audio songs by utilizing their spectral and temporal audio features. Our current work involves analysis of various features in order to learn, train and test the model representing the moods of the audio songs. Focus of our work is on the Indian popular music pieces and our work continues to analyse, develop and improve the algorithms to produce a system to recognize the mood category of the audio files automatically.", "title": "" }, { "docid": "9a4fc12448d166f3a292bfdf6977745d", "text": "Enabled by the rapid development of virtual reality hardware and software, 360-degree video content has proliferated. From the network perspective, 360-degree video transmission imposes significant challenges because it consumes 4 6χ the bandwidth of a regular video with the same resolution. To address these challenges, in this paper, we propose a motion-prediction-based transmission mechanism that matches network video transmission to viewer needs. Ideally, if viewer motion is perfectly known in advance, we could reduce bandwidth consumption by 80%. Practically, however, to guarantee the quality of viewing experience, we have to address the random nature of viewer motion. Based on our experimental study of viewer motion (comprising 16 video clips and over 150 subjects), we found the viewer motion can be well predicted in 100∼500ms. We propose a machine learning mechanism that predicts not only viewer motion but also prediction deviation itself. The latter is important because it provides valuable input on the amount of redundancy to be transmitted. Based on such predictions, we propose a targeted transmission mechanism that minimizes overall bandwidth consumption while providing probabilistic performance guarantees. Real-data-based evaluations show that the proposed scheme significantly reduces bandwidth consumption while minimizing performance degradation, typically a 45% bandwidth reduction with less than 0.1% failure ratio.", "title": "" }, { "docid": "bf71f7f57def7633a5390b572e983bc9", "text": "With the development of the Internet, cyber-attacks are changing rapidly and the cyber security situation is not optimistic. This survey report describes key literature surveys on machine learning (ML) and deep learning (DL) methods for network analysis of intrusion detection and provides a brief tutorial description of each ML/DL method. Papers representing each method were indexed, read, and summarized based on their temporal or thermal correlations. Because data are so important in ML/DL methods, we describe some of the commonly used network datasets used in ML/DL, discuss the challenges of using ML/DL for cybersecurity and provide suggestions for research directions.", "title": "" }, { "docid": "02dab9e102d1b8f5e4f6ab66e04b3aad", "text": "CHILD CARE PRACTICES ANTECEDING THREE PATTERNS OF PRESCHOOL BEHAVIOR. STUDIED SYSTEMATICALLY CHILD-REARING PRACTICES ASSOCIATED WITH COMPETENCE IN THE PRESCHOOL CHILD. 2015 American Psychological Association PDF documents require Adobe Acrobat Reader.Effects of Authoritative Parental Control on Child Behavior, Child. 
Child care practices anteceding three patterns of preschool behavior, Genetic Psychology. She is best known for her work on describing parental styles of child care.", "title": "" }, { "docid": "55f95c7b59f17fb210ebae97dbd96d72", "text": "Clustering is a widely studied data mining problem in the text domains. The problem finds numerous applications in customer segmentation, classification, collaborative filtering, visualization, document organization, and indexing. In this chapter, we will provide a detailed survey of the problem of text clustering. We will study the key challenges of the clustering problem, as it applies to the text domain. We will discuss the key methods used for text clustering, and their relative advantages. We will also discuss a number of recent advances in the area in the context of social network and linked data.", "title": "" }, { "docid": "0ff76204fcdf1a7cf2a6d13a5d3b1597", "text": "In this study, we found that the optimum take-off angle for a long jumper may be predicted by combining the equation for the range of a projectile in free flight with the measured relations between take-off speed, take-off height and take-off angle for the athlete. The prediction method was evaluated using video measurements of three experienced male long jumpers who performed maximum-effort jumps over a wide range of take-off angles. To produce low take-off angles the athletes used a long and fast run-up, whereas higher take-off angles were produced using a progressively shorter and slower run-up. For all three athletes, the take-off speed decreased and the take-off height increased as the athlete jumped with a higher take-off angle. The calculated optimum take-off angles were in good agreement with the athletes' competition take-off angles.", "title": "" }, { "docid": "f5e2f0c8b3f7e7537751e5e411e665ce", "text": "Network Function Virtualization (NFV) applications are stateful. For example, a Content Distribution Network (CDN) caches web contents from remote servers and serves them to clients. Similarly, an Intrusion Detection System (IDS) and an Intrusion Prevention System (IPS) have both per-flow and multi-flow (shared) states to properly react to intrusions. On today's NFV infrastructures, security vulnerabilities may allow attackers to steal and manipulate the internal states of NFV applications that share a physical resource. In this paper, we propose a new protection scheme, S-NFV, that incorporates Intel Software Guard Extensions (Intel SGX) to securely isolate the states of NFV applications.", "title": "" }, { "docid": "4e7f7b1444b253a63d4012b2502f5fa4", "text": "State-of-the-art techniques for 6D object pose recovery depend on occlusion-free point clouds to accurately register objects in 3D space. To deal with this shortcoming, we introduce a novel architecture called Iterative Hough Forest with Histogram of Control Points that is capable of estimating the 6D pose of an occluded and cluttered object, given a candidate 2D bounding box. Our Iterative Hough Forest (IHF) is learnt using parts extracted only from the positive samples. These parts are represented with Histogram of Control Points (HoCP), a “scale-variant” implicit volumetric description, which we derive from recently introduced Implicit B-Splines (IBS). The rich discriminative information provided by the scale-variant HoCP features is leveraged during inference. 
An automatic variable size part extraction framework iteratively refines the object’s roughly aligned initial pose due to the extraction of coarsest parts, the ones occupying the largest area in image pixels. The iterative refinement is accomplished based on finer (smaller) parts, which are represented with more discriminative control point descriptors by using our Iterative Hough Forest. Experiments conducted on a publicly available dataset report that our approach shows better registration performance than the state-of-the-art methods. © 2017 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "804113bb0459eb04d9b163c086050207", "text": "The techniques of machine vision are extensively applied to agricultural science, and it has great perspective especially in the plant protection field, which ultimately leads to crops management. The paper describes a software prototype system for rice disease detection based on the infected images of various rice plants. Images of the infected rice plants are captured by digital camera and processed using image growing, image segmentation techniques to detect infected parts of the plants. Then the infected part of the leaf has been used for the classification purpose using neural network. The methods evolved in this system are both image processing and soft computing technique applied on number of diseased rice plants.", "title": "" }, { "docid": "c32cecbc4adc812de6e43b3b0b05866b", "text": "Reinforcement learning for embodied agents is a challenging problem. The accumulated reward to be optimized is often a very rugged function, and gradient methods are impaired by many local optimizers. We demonstrate, in an experimental setting, that incorporating an intrinsic reward can smoothen the optimization landscape while preserving the global optimizers of interest. We show that policy gradient optimization for locomotion in a complex morphology is significantly improved when supplementing the extrinsic reward by an intrinsic reward defined in terms of the mutual information of time consecutive sensor readings.", "title": "" }, { "docid": "e7f9822daaf18371e53beb75d6e1fb30", "text": "In this paper, we propose to disentangle and interpret contextual effects that are encoded in a pre-trained deep neural network. We use our method to explain the gaming strategy of the alphaGo Zero model. Unlike previous studies that visualized image appearances corresponding to the network output or a neural activation only from a global perspective, our research aims to clarify how a certain input unit (dimension) collaborates with other units (dimensions) to constitute inference patterns of the neural network and thus contribute to the network output. The analysis of local contextual effects w.r.t. certain input units is of special values in real applications. Explaining the logic of the alphaGo Zero model is a typical application. In experiments, our method successfully disentangled the rationale of each move during the Go game.", "title": "" }, { "docid": "067fd264747d466b86710366c14a4495", "text": "We present Embodied Construction Grammar, a formalism for linguistic analysis designed specifically for integration into a simulation-based model of language understanding. As in other construction grammars, linguistic constructions serve to map between phonological forms and conceptual representations. 
In the model we describe, however, conceptual representations are also constrained to be grounded in the body’s perceptual and motor systems, and more precisely to parameterize mental simulations using those systems. Understanding an utterance thus involves at least two distinct processes: analysis to determine which constructions the utterance instantiates, and simulation according to the parameters specified by those constructions. In this chapter, we outline a construction formalism that is both representationally adequate for these purposes and specified precisely enough for use in a computational architecture.", "title": "" }, { "docid": "049f0308869c53bbb60337874789d569", "text": "In machine learning, one of the main requirements is to build computational models with a high ability to generalize well the extracted knowledge. When training e.g. artificial neural networks, poor generalization is often characterized by over-training. A common method to avoid over-training is the hold-out crossvalidation. The basic problem of this method represents, however, appropriate data splitting. In most of the applications, simple random sampling is used. Nevertheless, there are several sophisticated statistical sampling methods suitable for various types of datasets. This paper provides a survey of existing sampling methods applicable to the data splitting problem. Supporting experiments evaluating the benefits of the selected data splitting techniques involve artificial neural networks of the back-propagation type.", "title": "" }, { "docid": "8a6955ee53b9920a7c192143557ddf44", "text": "Cutaneous metastases rarely develop in patients having cancer with solid tumors. The reported incidence of cutaneous metastases from a known primary malignancy ranges from 0.6% to 9%, usually appearing 2 to 3 years after the initial diagnosis.1-11 Skin metastases may represent the first sign of extranodal disease in 7.6% of patients with a primary oncologic diagnosis.1 Cutaneous metastases may also be the first sign of recurrent disease after treatment, with 75% of patients also having visceral metastases.2 Infrequently, cutaneous metastases may be seen as the primary manifestation of an undiagnosed malignancy.12 Prompt recognition of such tumors can be of great significance, affecting prognosis and management. The initial presentation of cutaneous metastases is frequently subtle and may be overlooked without proper index of suspicion, appearing as multiple or single nodules, plaques, and ulcers, in decreasing order of frequency. Commonly, a painless, mobile, erythematous papule is initially noted, which may enlarge to an inflammatory nodule over time.8 Such lesions may be misdiagnosed as cysts, lipomas, fibromas, or appendageal tumors. Clinical features of cutaneous metastases rarely provide information regarding the primary tumor, although the location of the tumor may be helpful because cutaneous metastases typically manifest in the same geographic region as the initial cancer. The most common primary tumors seen with cutaneous metastases are melanoma, breast, and squamous cell carcinoma of the head and neck.1 Cutaneous metastases are often firm, because of dermal or lymphatic involvement, or erythematous. These features may help rule out some nonvascular entities in the differential diagnosis (eg, cysts and fibromas). The presence of pigment most commonly correlates with cutaneous metastases from melanoma. 
Given the limited body of knowledge regarding distinct clinical findings, we sought to better elucidate the dermoscopic patterns of cutaneous metastases, with the goal of using this diagnostic tool to help identify these lesions. We describe 20 outpatients with biopsy-proven cutaneous metastases secondary to various underlying primary malignancies. Their clinical presentation is reviewed, emphasizing the dermoscopic findings, as well as the histopathologic correlation.", "title": "" }, { "docid": "080a097ddc53effd838494f40b7d39c2", "text": "This paper surveys research on applying neuroevolution (NE) to games. In neuroevolution, artificial neural networks are trained through evolutionary algorithms, taking inspiration from the way biological brains evolved. We analyze the application of NE in games along five different axes, which are the role NE is chosen to play in a game, the different types of neural networks used, the way these networks are evolved, how the fitness is determined and what type of input the network receives. The paper also highlights important open research challenges in the field.", "title": "" }, { "docid": "1efad72897441fb8b2f0fa4279a76e49", "text": "MOTIVATION\nIdentifying cells in an image (cell segmentation) is essential for quantitative single-cell biology via optical microscopy. Although a plethora of segmentation methods exists, accurate segmentation is challenging and usually requires problem-specific tailoring of algorithms. In addition, most current segmentation algorithms rely on a few basic approaches that use the gradient field of the image to detect cell boundaries. However, many microscopy protocols can generate images with characteristic intensity profiles at the cell membrane. This has not yet been algorithmically exploited to establish more general segmentation methods.\n\n\nRESULTS\nWe present an automatic cell segmentation method that decodes the information across the cell membrane and guarantees optimal detection of the cell boundaries on a per-cell basis. Graph cuts account for the information of the cell boundaries through directional cross-correlations, and they automatically incorporate spatial constraints. The method accurately segments images of various cell types grown in dense cultures that are acquired with different microscopy techniques. In quantitative benchmarks and comparisons with established methods on synthetic and real images, we demonstrate significantly improved segmentation performance despite cell-shape irregularity, cell-to-cell variability and image noise. As a proof of concept, we monitor the internalization of green fluorescent protein-tagged plasma membrane transporters in single yeast cells.\n\n\nAVAILABILITY AND IMPLEMENTATION\nMatlab code and examples are available at http://www.csb.ethz.ch/tools/cellSegmPackage.zip.", "title": "" }, { "docid": "f20e7c515d79f51fba660afc7cc3a7c5", "text": "We present an approach for the joint extraction of entities and relations in the context of opinion recognition and analysis. We identify two types of opinion-related entities — expressions of opinions and sources of opinions — along with the linking relation that exists between them. Inspired by Roth and Yih (2004), we employ an integer linear programming approach to solve the joint opinion recognition task, and show that global, constraint-based inference can significantly boost the performance of both relation extraction and the extraction of opinion-related entities. 
Performance further improves when a semantic role labeling system is incorporated. The resulting system achieves F-measures of 79 and 69 for entity and relation extraction, respectively, improving substantially over prior results in the area.", "title": "" }, { "docid": "c25a62b5798e7c08579efb61c35f2c66", "text": "In this paper, we propose a new adaptive stochastic gradient Langevin dynamics (ASGLD) algorithmic framework and its two specialized versions, namely adaptive stochastic gradient (ASG) and adaptive gradient Langevin dynamics(AGLD), for non-convex optimization problems. All proposed algorithms can escape from saddle points with at most $O(\\log d)$ iterations, which is nearly dimension-free. Further, we show that ASGLD and ASG converge to a local minimum with at most $O(\\log d/\\epsilon^4)$ iterations. Also, ASGLD with full gradients or ASGLD with a slowly linearly increasing batch size converge to a local minimum with iterations bounded by $O(\\log d/\\epsilon^2)$, which outperforms existing first-order methods.", "title": "" }, { "docid": "3dd266e768b989c24965a301984788a0", "text": "Security analytics and forensics applied to in-vehicle networks are growing research areas that gained relevance after recent reports of cyber-attacks against unmodified licensed vehicles. However, the application of security analytics algorithms and tools to the automotive domain is hindered by the lack of public specifications about proprietary data exchanged over in-vehicle networks. Since the controller area network (CAN) bus is the de-facto standard for the interconnection of automotive electronic control units, the lack of public specifications for CAN messages is a key issue. This paper strives to solve this problem by proposing READ: a novel algorithm for the automatic Reverse Engineering of Automotive Data frames. READ has been designed to analyze traffic traces containing unknown CAN bus messages in order to automatically identify and label different types of signals encoded in the payload of their data frames. Experimental results based on CAN traffic gathered from a licensed unmodified vehicle and validated against its complete formal specifications demonstrate that the proposed algorithm can extract and classify more than twice the signals with respect to the previous related work. Moreover, the execution time of signal extraction and classification is reduced by two orders of magnitude. Applications of READ to CAN messages generated by real vehicles demonstrate its usefulness in the analysis of CAN traffic.", "title": "" } ]
scidocsrr
d886efe936a8e226d501740d9937ad58
Low-resource OCR error detection and correction in French Clinical Texts
[ { "docid": "5510f5e1bcf352e3219097143200531f", "text": "Research aimed at correcting words in text has focused on three progressively more difficult problems:(1) nonword error detection; (2) isolated-word error correction; and (3) context-dependent work correction. In response to the first problem, efficient pattern-matching and n-gram analysis techniques have been developed for detecting strings that do not appear in a given word list. In response to the second problem, a variety of general and application-specific spelling correction techniques have been developed. Some of them were based on detailed studies of spelling error patterns. In response to the third problem, a few experiments using natural-language-processing tools or statistical-language models have been carried out. This article surveys documented findings on spelling error patterns, provides descriptions of various nonword detection and isolated-word error correction techniques, reviews the state of the art of context-dependent word correction techniques, and discusses research issues related to all three areas of automatic error correction in text.", "title": "" } ]
[ { "docid": "b911c86e5672f9a669e25c7771076d24", "text": "This paper discusses an implementation of Extended Kalman filter (EKF) in performing Simultaneous Localization and Mapping (SLAM). The implementation is divided into software and hardware phases. The software implementation applies EKF using Python on a library dataset to produce a map of the supposed environment. The result was verified against the original map and found to be relatively accurate with minor inaccuracies. In the hardware implementation stage, real life data was gathered from an indoor environment via a laser range finder and a pair of wheel encoders placed on a mobile robot. The resulting map shows at least five marked inaccuracies but the overall form is passable.", "title": "" }, { "docid": "e027e472740cea38ef29a347442b14d9", "text": "De-noising and segmentation are fundamental steps in processing of images. They can be used as preprocessing and post-processing step. They are used to enhance the image quality. Various medical imaging that are used in these days are Magnetic Resonance Images (MRI), Ultrasound, X-Ray, CT Scan etc. Various types of noises affect the quality of images which may lead to unpredictable results. Various noises like speckle noise, Gaussian noise and Rician noise is present in ultrasound, MRI respectively. With the segmentation region required for analysis and diagnosis purpose is extracted. Various algorithm for segmentation like watershed, K-mean clustering, FCM, thresholding, region growing etc. exist. In this paper, we propose an improved watershed segmentation using denoising filter. First of all, image will be de-noised with morphological opening-closing technique then watershed transform using linear correlation and convolution operations is applied to improve efficiency, accuracy and complexity of the algorithm. In this paper, watershed segmentation and various techniques which are used to improve the performance of watershed segmentation are discussed and comparative analysis is done.", "title": "" }, { "docid": "1da747ae58d80c218811618be4538a7b", "text": "Smartphones and other trendy mobile wearable devices are rapidly becoming the dominant sensing, computing and communication devices in peoples' daily lives. Mobile crowd sensing is an emerging technology based on the sensing and networking capabilities of such mobile wearable devices. MCS has shown great potential in improving peoples' quality of life, including healthcare and transportation, and thus has found a wide range of novel applications. However, user privacy and data trustworthiness are two critical challenges faced by MCS. In this article, we introduce the architecture of MCS and discuss its unique characteristics and advantages over traditional wireless sensor networks, which result in inapplicability of most existing WSN security solutions. Furthermore, we summarize recent advances in these areas and suggest some future research directions.", "title": "" }, { "docid": "ed9528fe8e4673c30de35d33130c728e", "text": "This paper introduces a friendly system to control the home appliances remotely by the use of mobile cell phones; this system is well known as “Home Automation System” (HAS).", "title": "" }, { "docid": "74bac9b30cb29eb67df0bdc71f3c4583", "text": "BACKGROUND\nMedical practitioners use survival models to explore and understand the relationships between patients' covariates (e.g. clinical and genetic features) and the effectiveness of various treatment options. 
Standard survival models like the linear Cox proportional hazards model require extensive feature engineering or prior medical knowledge to model treatment interaction at an individual level. While nonlinear survival methods, such as neural networks and survival forests, can inherently model these high-level interaction terms, they have yet to be shown as effective treatment recommender systems.\n\n\nMETHODS\nWe introduce DeepSurv, a Cox proportional hazards deep neural network and state-of-the-art survival method for modeling interactions between a patient's covariates and treatment effectiveness in order to provide personalized treatment recommendations.\n\n\nRESULTS\nWe perform a number of experiments training DeepSurv on simulated and real survival data. We demonstrate that DeepSurv performs as well as or better than other state-of-the-art survival models and validate that DeepSurv successfully models increasingly complex relationships between a patient's covariates and their risk of failure. We then show how DeepSurv models the relationship between a patient's features and effectiveness of different treatment options to show how DeepSurv can be used to provide individual treatment recommendations. Finally, we train DeepSurv on real clinical studies to demonstrate how it's personalized treatment recommendations would increase the survival time of a set of patients.\n\n\nCONCLUSIONS\nThe predictive and modeling capabilities of DeepSurv will enable medical researchers to use deep neural networks as a tool in their exploration, understanding, and prediction of the effects of a patient's characteristics on their risk of failure.", "title": "" }, { "docid": "da540860f3ecb9ca15148a7315b74a45", "text": "Learning mathematics is one of the most important aspects that determine the future of learners. However, mathematics as one of the subjects is often perceived as being complicated and not liked by the learners. Therefore, we need an application with the use of appropriate technology to create visualization effects which can attract more attention from learners. The application of Augmented Reality technology in digital game is a series of efforts made to create a better visualization effect. In addition, the system is also connected to a leaderboard web service in order to improve the learning motivation through competitive process. Implementation of Augmented Reality is proven to improve student's learning motivation moreover implementation of Augmented Reality in this game is highly preferred by students.", "title": "" }, { "docid": "46ab119ffd9850fe1e5ff35b6cda267d", "text": "Wireless sensor networks are expected to find wide applicability and increasing deployment in the near future. In this paper, we propose a formal classification of sensor networks, based on their mode of functioning, as proactive and reactive networks. Reactive networks, as opposed to passive data collecting proactive networks, respond immediately to changes in the relevant parameters of interest. We also introduce a new energy efficient protocol, TEEN (Threshold sensitive Energy Efficient sensor Network protocol) for reactive networks. We evaluate the performance of our protocol for a simple temperature sensing application. 
In terms of energy efficiency, our protocol has been observed to outperform existing conventional sensor network protocols.", "title": "" }, { "docid": "0ba036ae72811c02179842f1949974b6", "text": "The authors propose a new climatic drought index: the standardized precipitation evapotranspiration index (SPEI). The SPEI is based on precipitation and temperature data, and it has the advantage of combining multiscalar character with the capacity to include the effects of temperature variability on drought assessment. The procedure to calculate the index is detailed and involves a climatic water balance, the accumulation of deficit/surplus at different time scales, and adjustment to a log-logistic probability distribution. Mathematically, the SPEI is similar to the standardized precipitation index (SPI), but it includes the role of temperature. Because the SPEI is based on a water balance, it can be compared to the self-calibrated Palmer drought severity index (sc-PDSI). Time series of the three indices were compared for a set of observatories with different climate characteristics, located in different parts of the world. Under global warming conditions, only the sc-PDSI and SPEI identified an increase in drought severity associated with higher water demand as a result of evapotranspiration. Relative to the sc-PDSI, the SPEI has the advantage of being multiscalar, which is crucial for drought analysis and monitoring.", "title": "" }, { "docid": "5a8916d6019cf10784b2258299eb6ceb", "text": "In recent years, there is global demand for Islamic knowledge by both Muslims and non-Muslims. This has brought about number of automated applications that ease the retrieval of knowledge from the holy books. However current retrieval methods lack semantic information they are mostly base on keywords matching approach. In this paper we have proposed a Model that will make use of semantic Web technologies (ontology) to model Quran domain knowledge. The system will enhance Quran knowledge by enabling queries in natural language.", "title": "" }, { "docid": "e6d05a96665c2651c0b31f1bff67f04d", "text": "Detecting the neural processes like axons and dendrites needs high quality SEM images. This paper proposes an approach using perceptual grouping via a graph cut and its combinations with Convolutional Neural Network (CNN) to achieve improved segmentation of SEM images. Experimental results demonstrate improved computational efficiency with linear running time.", "title": "" }, { "docid": "4136eb42db90f60196cf828231039707", "text": "Most of the existing model verification and validation techniques are largely used in the industrial and system engineering fields. The agent-based modeling approach is different from traditional equation-based modeling approach in many aspects. As the agent-based modeling approach has recently become an attractive and efficient way for modeling large-scale complex systems, there are few formalized validation methodologies existing for model validation. In our proposed work, we design, develop, adapt, and apply various verification and validation techniques to an agent-based scientific model and investigate the sufficiency of these techniques for the validation of agent-based mod-", "title": "" }, { "docid": "3afa9f84c76bdca939c0a3dc645b4cbf", "text": "Recurrent neural networks are theoretically capable of learning complex temporal sequences, but training them through gradient-descent is too slow and unstable for practical use in reinforcement learning environments. 
Neuroevolution, the evolution of artificial neural networks using genetic algorithms, can potentially solve real-world reinforcement learning tasks that require deep use of memory, i.e. memory spanning hundreds or thousands of inputs, by searching the space of recurrent neural networks directly. In this paper, we introduce a new neuroevolution algorithm called Hierarchical Enforced SubPopulations that simultaneously evolves networks at two levels of granularity: full networks and network components or neurons. We demonstrate the method in two POMDP tasks that involve temporal dependencies of up to thousands of time-steps, and show that it is faster and simpler than the current best conventional reinforcement learning system on these tasks.", "title": "" }, { "docid": "f70447a47fb31fc94d6b57ca3ef57ad3", "text": "BACKGROUND\nOn Aug 14, 2014, the US Food and Drug Administration approved the antiangiogenesis drug bevacizumab for women with advanced cervical cancer on the basis of improved overall survival (OS) after the second interim analysis (in 2012) of 271 deaths in the Gynecologic Oncology Group (GOG) 240 trial. In this study, we report the prespecified final analysis of the primary objectives, OS and adverse events.\n\n\nMETHODS\nIn this randomised, controlled, open-label, phase 3 trial, we recruited patients with metastatic, persistent, or recurrent cervical carcinoma from 81 centres in the USA, Canada, and Spain. Inclusion criteria included a GOG performance status score of 0 or 1; adequate renal, hepatic, and bone marrow function; adequately anticoagulated thromboembolism; a urine protein to creatinine ratio of less than 1; and measurable disease. Patients who had received chemotherapy for recurrence and those with non-healing wounds or active bleeding conditions were ineligible. We randomly allocated patients 1:1:1:1 (blocking used; block size of four) to intravenous chemotherapy of either cisplatin (50 mg/m2 on day 1 or 2) plus paclitaxel (135 mg/m2 or 175 mg/m2 on day 1) or topotecan (0·75 mg/m2 on days 1-3) plus paclitaxel (175 mg/m2 on day 1) with or without intravenous bevacizumab (15 mg/kg on day 1) in 21 day cycles until disease progression, unacceptable toxic effects, voluntary withdrawal by the patient, or complete response. We stratified randomisation by GOG performance status (0 vs 1), previous radiosensitising platinum-based chemotherapy, and disease status (recurrent or persistent vs metastatic). We gave treatment open label. Primary outcomes were OS (analysed in the intention-to-treat population) and adverse events (analysed in all patients who received treatment and submitted adverse event information), assessed at the second interim and final analysis by the masked Data and Safety Monitoring Board. The cutoff for final analysis was 450 patients with 346 deaths. This trial is registered with ClinicalTrials.gov, number NCT00803062.\n\n\nFINDINGS\nBetween April 6, 2009, and Jan 3, 2012, we enrolled 452 patients (225 [50%] in the two chemotherapy-alone groups and 227 [50%] in the two chemotherapy plus bevacizumab groups). By March 7, 2014, 348 deaths had occurred, meeting the prespecified cutoff for final analysis. The chemotherapy plus bevacizumab groups continued to show significant improvement in OS compared with the chemotherapy-alone groups: 16·8 months in the chemotherapy plus bevacizumab groups versus 13·3 months in the chemotherapy-alone groups (hazard ratio 0·77 [95% CI 0·62-0·95]; p=0·007). 
Final OS among patients not receiving previous pelvic radiotherapy was 24·5 months versus 16·8 months (0·64 [0·37-1·10]; p=0·11). Postprogression OS was not significantly different between the chemotherapy plus bevacizumab groups (8·4 months) and chemotherapy-alone groups (7·1 months; 0·83 [0·66-1·05]; p=0·06). Fistula (any grade) occurred in 32 (15%) of 220 patients in the chemotherapy plus bevacizumab groups (all previously irradiated) versus three (1%) of 220 in the chemotherapy-alone groups (all previously irradiated). Grade 3 fistula developed in 13 (6%) versus one (<1%). No fistulas resulted in surgical emergencies, sepsis, or death.\n\n\nINTERPRETATION\nThe benefit conferred by incorporation of bevacizumab is sustained with extended follow-up as evidenced by the overall survival curves remaining separated. After progression while receiving bevacizumab, we did not observe a negative rebound effect (ie, shorter survival after bevacizumab is stopped than after chemotherapy alone is stopped). These findings represent proof-of-concept of the efficacy and tolerability of antiangiogenesis therapy in advanced cervical cancer.\n\n\nFUNDING\nNational Cancer Institute.", "title": "" }, { "docid": "80d859e26c815e5c6a8c108ab0141462", "text": "StarCraft II poses a grand challenge for reinforcement learning. The main difficulties include huge state space, varying action space, long horizon, etc. In this paper, we investigate a set of techniques of reinforcement learning for the full-length game of StarCraft II. We investigate a hierarchical approach, where the hierarchy involves two levels of abstraction. One is the macro-actions extracted from expert’s demonstration trajectories, which can reduce the action space in an order of magnitude yet remains effective. The other is a two-layer hierarchical architecture, which is modular and easy to scale. We also investigate a curriculum transfer learning approach that trains the agent from the simplest opponent to harder ones. On a 64×64 map and using restrictive units, we train the agent on a single machine with 4 GPUs and 48 CPU threads. We achieve a winning rate of more than 99% against the difficulty level-1 built-in AI. Through the curriculum transfer learning algorithm and a mixture of combat model, we can achieve over 93% winning rate against the most difficult non-cheating built-in AI (level-7) within days. We hope this study could shed some light on the future research of large-scale reinforcement learning.", "title": "" }, { "docid": "d2304dae0f99bf5e5b46d4ceb12c0d69", "text": "The ultimate goal of this indoor mapping research is to automatically reconstruct a floorplan simply by walking through a house with a smartphone in a pocket. This paper tackles this problem by proposing FloorNet, a novel deep neural architecture. The challenge lies in the processing of RGBD streams spanning a large 3D space. FloorNet effectively processes the data through three neural network branches: 1) PointNet with 3D points, exploiting the 3D information; 2) CNN with a 2D point density image in a top-down view, enhancing the local spatial reasoning; and 3) CNN with RGB images, utilizing the full image information. FloorNet exchanges intermediate features across the branches to exploit the best of all the architectures. We have created a benchmark for floorplan reconstruction by acquiring RGBD video streams for 155 residential houses or apartments with Google Tango phones and annotating complete floorplan information. 
Our qualitative and quantitative evaluations demonstrate that the fusion of three branches effectively improves the reconstruction quality. We hope that the paper together with the benchmark will be an important step towards solving a challenging vector-graphics reconstruction problem. Code and data are available at https://github.com/art-programmer/FloorNet.", "title": "" }, { "docid": "b64945127e8e8e23d3a5013d3aa7788a", "text": "The process of extraction of interesting patterns or knowledge from the bulk of data refers to the data mining technique. “It is the process of discovering meaningful, new correlation patterns and trends through non-trivial extraction of implicit, previously unknown information from large amount of data stored in repositories using pattern recognition as well as statistical and mathematical techniques”. Due to the wide deployment of Internet and information technology, storage and processing of data technologies, the ever-growing privacy concern has been a major issue in data mining for information sharing. This gave rise to a new path in research, known as Privacy Preserving Data Mining (PPDM). The literature paper discusses various privacy preserving data mining algorithms and provide a wide analyses for the representative techniques for privacy preserving data mining along with their merits and demerits. The paper describes an overview of some of the well-known PPDM algorithms. Most of the algorithms are usually a modification of a well-known data-mining algorithm along with some privacy preserving techniques. This paper also focuses on the problems and directions for the future research here. The paper finally discusses the comparative analysis of some well-known privacy preservation techniques that are used. This paper is intended to be a summary and an overview of PPDM.", "title": "" }, { "docid": "3a3a2261e1063770a9ccbd0d594aa561", "text": "This paper describes an advanced care and alert portable telemedical monitor (AMON), a wearable medical monitoring and alert system targeting high-risk cardiac/respiratory patients. The system includes continuous collection and evaluation of multiple vital signs, intelligent multiparameter medical emergency detection, and a cellular connection to a medical center. By integrating the whole system in an unobtrusive, wrist-worn enclosure and applying aggressive low-power design techniques, continuous long-term monitoring can be performed without interfering with the patients' everyday activities and without restricting their mobility. In the first two and a half years of this EU IST sponsored project, the AMON consortium has designed, implemented, and tested the described wrist-worn device, a communication link, and a comprehensive medical center software package. The performance of the system has been validated by a medical study with a set of 33 subjects. The paper describes the main concepts behind the AMON system and presents details of the individual subsystems and solutions as well as the results of the medical validation.", "title": "" }, { "docid": "48d7946228c33ba82f3870e0e08acf0d", "text": "Trajectory prediction of objects in moving objects databases (MODs) has garnered wide support in a variety of applications and is gradually becoming an active research area. The existing trajectory prediction algorithms focus on discovering frequent moving patterns or simulating the mobility of objects via mathematical models. 
While these models are useful in certain applications, they fall short in describing the position and behavior of moving objects in a network-constraint environment. Aiming to solve this problem, a hidden Markov model (HMM)-based trajectory prediction algorithm is proposed, called Hidden Markov model-based Trajectory Prediction (HMTP). By analyzing the disadvantages of HMTP, a self-adaptive parameter selection algorithm called HMTP * is proposed, which captures the parameters necessary for real-world scenarios in terms of objects with dynamically changing speed. In addition, a density-based trajectory partition algorithm is introduced, which helps improve the efficiency of prediction. In order to evaluate the effectiveness and efficiency of the proposed algorithms, extensive experiments were conducted, and the experimental results demonstrate that the effect of critical parameters on the prediction accuracy in the proposed paradigm, with regard to HMTP *, can greatly improve the accuracy when compared with HMTP, when subjected to randomly changing speeds. Moreover, it has higher positioning precision than HMTP due to its capability of self-adjustment.", "title": "" }, { "docid": "322d23354a9bf45146e4cb7c733bf2ec", "text": "In this chapter we consider the problem of automatic facial expression analysis. Our take on this is that the field has reached a point where it needs to move away from considering experiments and applications under in-the-lab conditions, and move towards so-called in-the-wild scenarios. We assume throughout this chapter that the aim is to develop technology that can be deployed in practical applications under unconstrained conditions. While some first efforts in this direction have been reported very recently, it is still unclear what the right path to achieving accurate, informative, robust, and real-time facial expression analysis will be. To illuminate the journey ahead, we first provide in Sec. 1 an overview of the existing theories and specific problem formulations considered within the computer vision community. Then we describe in Sec. 2 the standard algorithmic pipeline which is common to most facial expression analysis algorithms. We include suggestions as to which of the current algorithms and approaches are most suited to the scenario considered. In section 3 we describe our view of the remaining challenges, and the current opportunities within the field. This chapter is thus not intended as a review of different approaches, but rather a selection of what we believe are the most suitable state-of-the-art algorithms, and a selection of exemplars chosen to characterise a specific approach. We review in section 4 some of the exciting opportunities for the application of automatic facial expression analysis to everyday practical problems and current commercial applications being exploited. Section 5 ends the chapter by summarising the major conclusions drawn. Brais Martinez School of Computer Science, Jubilee Campus, Wollaton Road, Nottingham, NG8 1BB e-mail: [email protected] Michel F. Valstar School of Computer Science, Jubilee Campus, Wollaton Road, Nottingham, NG8 1BB e-mail: [email protected]", "title": "" }, { "docid": "1548b993c52505372128332be1b2ddf6", "text": "This paper presents generalizations of Bayes likelihood-ratio updating rule which facilitate an asynchronous propagation of the impacts of new beliefs and/or new evidence in hierarchically organized inference structures with multi-hypotheses variables. 
The computational scheme proposed specifies a set of belief parameters, communication messages and updating rules which guarantee that the diffusion of updated beliefs is accomplished in a single pass and complies with the tenets of Bayes calculus.", "title": "" } ]
scidocsrr
c95116a2e08e4eba28e0bf7851f1f49a
Is physics-based liveness detection truly possible with a single image?
[ { "docid": "b40129a15767189a7a595db89c066cf8", "text": "To increase reliability of face recognition system, the system must be able to distinguish real face from a copy of face such as a photograph. In this paper, we propose a fast and memory efficient method of live face detection for embedded face recognition system, based on the analysis of the movement of the eyes. We detect eyes in sequential input images and calculate variation of each eye region to determine whether the input face is a real face or not. Experimental results show that the proposed approach is competitive and promising for live face detection. Keywords—Liveness Detection, Eye detection, SQI.", "title": "" } ]
[ { "docid": "0d48e7715f3e0d74407cc5a21f2c322a", "text": "Every teacher of linear algebra should be familiar with the matrix singular value decomposition (or SVD). It has interesting and attractive algebraic properties, and conveys important geometrical and theoretical insights about linear transformations. The close connection between the SVD and the well known theory of diagonalization for symmetric matrices makes the topic immediately accessible to linear algebra teachers, and indeed, a natural extension of what these teachers already know. At the same time, the SVD has fundamental importance in several different applications of linear algebra. Strang was aware of these facts when he introduced the SVD in his now classical text [22, page 142], observing", "title": "" }, { "docid": "946ad58856b018604d59a3e0e08a48a7", "text": "The well known approaches of tuning and self-tuning of data management systems are essential in the context of the Cloud environment, which promises self management properties, such as elasticity, scalability, and fault tolerance. Moreover, the intricate Cloud storage systems criteria, such as their modular, distributed, and multi-layered architecture, add to the complexity of the tuning process and necessity of the self-tuning process. Furthermore, if we have one or more applications with one or more workloads with contradicting and possibly changing optimization goals, we are faced with the question of how to tune the underlying storage system cluster to best achieve the optimization goals of all workloads. Based on that, we define the tuning problem as finding the cluster configuration out of a set of possible configurations that would minimize the aggregated cost value for all workloads while still fulfilling their performance thresholds. In order to solve such a problem, we investigate the design and implementation of a Cloud storage system agnostic (self-)tuning framework. This framework consists of components to observe, and model different performance criteria of the underlying Cloud storage system. It also includes a decision model to configure tuning parameters based on applications requirements. To model the performance of the underlying Cloud storage system, we use statistical machine learning techniques. The statistical data that is needed to model the performance can be generated in a training phase. For that we designed a training component that generates workloads and automates the testing process with different cluster configurations. As part of our evaluation, we address the essential problem of tuning the cluster size of the Cloud storage system while minimizing the latency for the targeted workloads. In order to do that, we model the latency in relation to cluster size and workload characteristics. The predictive models can then be used by the decision component to search for the optimal allocation of nodes to workloads. We also evaluate different alternatives for the search algorithms as part of the decision component implementation. These alternatives include brute-force, and genetic algorithm approaches.", "title": "" }, { "docid": "10d9758469a1843d426f56a379c2fecb", "text": "A novel compact-size branch-line coupler using composite right/left-handed transmission lines is proposed in this paper. In order to obtain miniaturization, composite right/left-handed transmission lines with novel complementary split single ring resonators which are realized by loading a pair of meander-shaped-slots in the split of the ring are designed. 
This novel coupler occupies only 22.8% of the area of the conventional approach at 0.7 GHz. The proposed coupler can be implemented by using the standard printed-circuit-board etching processes without any implementation of lumped elements and via-holes, making it very useful for wireless communication systems. The agreement between measured and stimulated results validates the feasible configuration of the proposed coupler.", "title": "" }, { "docid": "c4fcd7db5f5ba480d7b3ecc46bef29f6", "text": "In this paper, we propose an indoor action detection system which can automatically keep the log of users' activities of daily life since each activity generally consists of a number of actions. The hardware setting here adopts top-view depth cameras which makes our system less privacy sensitive and less annoying to the users, too. We regard the series of images of an action as a set of key-poses in images of the interested user which are arranged in a certain temporal order and use the latent SVM framework to jointly learn the appearance of the key-poses and the temporal locations of the key-poses. In this work, two kinds of features are proposed. The first is the histogram of depth difference value which can encode the shape of the human poses. The second is the location-signified feature which can capture the spatial relations among the person, floor, and other static objects. Moreover, we find that some incorrect detection results of certain type of action are usually associated with another certain type of action. Therefore, we design an algorithm that tries to automatically discover the action pairs which are the most difficult to be differentiable, and suppress the incorrect detection outcomes. To validate our system, experiments have been conducted, and the experimental results have shown effectiveness and robustness of our proposed method.", "title": "" }, { "docid": "dd0a1a3d6de377efc0a97004376749b6", "text": "Time series often have a temporal hierarchy, with information that is spread out over multiple time scales. Common recurrent neural networks, however, do not explicitly accommodate such a hierarchy, and most research on them has been focusing on training algorithms rather than on their basic architecture. In this paper we study the effect of a hierarchy of recurrent neural networks on processing time series. Here, each layer is a recurrent network which receives the hidden state of the previous layer as input. This architecture allows us to perform hierarchical processing on difficult temporal tasks, and more naturally capture the structure of time series. We show that they reach state-of-the-art performance for recurrent networks in character-level language modeling when trained with simple stochastic gradient descent. We also offer an analysis of the different emergent time scales.", "title": "" }, { "docid": "9aab4a607de019226e9465981b82f9b8", "text": "Color is frequently used to encode values in visualizations. For color encodings to be effective, the mapping between colors and values must preserve important differences in the data. However, most guidelines for effective color choice in visualization are based on either color perceptions measured using large, uniform fields in optimal viewing environments or on qualitative intuitions. These limitations may cause data misinterpretation in visualizations, which frequently use small, elongated marks. Our goal is to develop quantitative metrics to help people use color more effectively in visualizations. 
We present a series of crowdsourced studies measuring color difference perceptions for three common mark types: points, bars, and lines. Our results indicate that peoples' abilities to perceive color differences varies significantly across mark types. Probabilistic models constructed from the resulting data can provide objective guidance for designers, allowing them to anticipate viewer perceptions in order to inform effective encoding design.", "title": "" }, { "docid": "3b0f2413234109c6df1b643b61dc510b", "text": "Most people think computers will never be able to think. That is, really think. Not now or ever. To be sure, most people also agree that computers can do many things that a person would have to be thinking to do. Then how could a machine seem to think but not actually think? Well, setting aside the question of what thinking actually is, I think that most of us would answer that by saying that in these cases, what the computer is doing is merely a superficial imitation of human intelligence. It has been designed to obey certain simple commands, and then it has been provided with programs composed of those commands. Because of this, the computer has to obey those commands, but without any idea of what's happening.", "title": "" }, { "docid": "5d243f8492f12a135d68bd15ddf2488d", "text": "The success of \"infinite-inventory\" retailers such as Amazon.com and Netflix has been ascribed to a \"long tail\" phenomenon. To wit, while the majority of their inventory is not in high demand, in aggregate these \"worst sellers,\" unavailable at limited-inventory competitors, generate a significant fraction of total revenue. The long tail phenomenon, however, is in principle consistent with two fundamentally different theories. The first, and more popular hypothesis, is that a majority of consumers consistently follow the crowds and only a minority have any interest in niche content; the second hypothesis is that everyone is a bit eccentric, consuming both popular and specialty products. Based on examining extensive data on user preferences for movies, music, Web search, and Web browsing, we find overwhelming support for the latter theory. However, the observed eccentricity is much less than what is predicted by a fully random model whereby every consumer makes his product choices independently and proportional to product popularity; so consumers do indeed exhibit at least some a priori propensity toward either the popular or the exotic.\n Our findings thus suggest an additional factor in the success of infinite-inventory retailers, namely, that tail availability may boost head sales by offering consumers the convenience of \"one-stop shopping\" for both their mainstream and niche interests. This hypothesis is further supported by our theoretical analysis that presents a simple model in which shared inventory stores, such as Amazon Marketplace, gain a clear advantage by satisfying tail demand, helping to explain the emergence and increasing popularity of such retail arrangements. Hence, we believe that the return-on-investment (ROI) of niche products goes beyond direct revenue, extending to second-order gains associated with increased consumer satisfaction and repeat patronage. More generally, our findings call into question the conventional wisdom that specialty products only appeal to a minority of consumers.", "title": "" }, { "docid": "e234686126b22695d8295f79083506a7", "text": "In computer vision the most difficult task is to recognize the handwritten digit. 
Since the last decade the handwritten digit recognition is gaining more and more fame because of its potential range of applications like bank cheque analysis, recognizing postal addresses on postal cards, etc. Handwritten digit recognition plays a very vital role in day to day life, like in a form of recording of information and style of communication even with the addition of new emerging techniques. The performance of Handwritten digit recognition system is highly depend upon two things: First it depends on feature extraction techniques which is used to increase the performance of the system and improve the recognition rate and the second is the neural network approach which takes lots of training data and automatically infer the rule for matching it with the correct pattern. In this paper we have focused on different methods of handwritten digit recognition that uses both feature extraction techniques and neural network approaches and presented a comparative analysis while discussing pros and cons of each method.", "title": "" }, { "docid": "4b557c498499c9bbb900d4983cc28426", "text": "Document clustering has not been well received as an information retrieval tool. Objections to its use fall into two main categories: first, that clustering is too slow for large corpora (with running time often quadratic in the number of documents); and second, that clustering does not appreciably improve retrieval.\nWe argue that these problems arise only when clustering is used in an attempt to improve conventional search techniques. However, looking at clustering as an information access tool in its own right obviates these objections, and provides a powerful new access paradigm. We present a document browsing technique that employs document clustering as its primary operation. We also present fast (linear time) clustering algorithms which support this interactive browsing paradigm.", "title": "" }, { "docid": "1bbb2888fd1111b3e24c54f198064941", "text": "This paper presents a simple and active calibration technique of camera-projector systems based on planar homography. From the camera image of a planar calibration pattern, we generate a projector image of the pattern through the homography between the camera and the projector. To determine the coordinates of the pattern corners from the view of the projector, we actively project a corner marker from the projector to align the marker with the printed pattern corners. Calibration is done in two steps. First, four outer corners of the pattern are identified. Second, all other inner corners are identified. The pattern image from the projector is then used to calibrate the projector. Experimental results of two types of camera-projector systems show that the projection errors of both camera and projector are less than 1 pixel.", "title": "" }, { "docid": "dfc6455cb7c12037faeb8c02c0027570", "text": "This paper proposes efficient and powerful deep networks for action prediction from partially observed videos containing temporally incomplete action executions. Different from after-the-fact action recognition, action prediction task requires action labels to be predicted from these partially observed videos. Our approach exploits abundant sequential context information to enrich the feature representations of partial videos. We reconstruct missing information in the features extracted from partial videos by learning from fully observed action videos. 
The amount of the information is temporally ordered for the purpose of modeling temporal orderings of action segments. Label information is also used to better separate the learned features of different categories. We develop a new learning formulation that enables efficient model training. Extensive experimental results on UCF101, Sports-1M and BIT datasets demonstrate that our approach remarkably outperforms state-of-the-art methods, and is up to 300x faster than these methods. Results also show that actions differ in their prediction characteristics, some actions can be correctly predicted even though only the beginning 10% portion of videos is observed.", "title": "" }, { "docid": "ccd5f02b97643b3c724608a4e4a67fdb", "text": "Modular robotic systems that integrate distally with commercially available endoscopic equipment have the potential to improve the standard-of-care in therapeutic endoscopy by granting clinicians with capabilities not present in commercial tools, such as precision dexterity and feedback sensing. With the desire to integrate both sensing and actuation distally for closed-loop position control in fully deployable, endoscope-based robotic modules, commercial sensor and actuator options that acquiesce to the strict form-factor requirements are sparse or nonexistent. Herein, we describe a proprioceptive angle sensor for potential closed-loop position control applications in distal robotic modules. Fabricated monolithically using printed-circuit MEMS, the sensor employs a kinematic linkage and the principle of light intensity modulation to sense the angle of articulation with a high degree of fidelity. Onboard temperature and environmental irradiance measurements, coupled with linear regression techniques, provide robust angle measurements that are insensitive to environmental disturbances. The sensor is capable of measuring $\\pm$45 degrees of articulation with an RMS error of 0.98 degrees. An ex vivo demonstration shows that the sensor can give real-time proprioceptive feedback when coupled with an actuator module, opening up the possibility of fully distal closed-loop control.", "title": "" }, { "docid": "913c8819f7f0bea4d356051442d074db", "text": "From GuI to TuI Humans have evolved a heightened ability to sense and manipulate the physical world, yet the digital world takes little advantage of our capacity for hand-eye coordination. A tangible user interface (TUI) builds upon our dexterity by embodying digital information in physical space. Tangible design expands the affordances of physical objects so they can Graphical user interfaces (GUIs) let users see digital information only through a screen, as if looking into a pool of water, as depicted in Figure 1 on page 40. We interact with the forms below through remote controls, such as a mouse, a keyboard, or a touchscreen (Figure 1a). Now imagine an iceberg, a mass of ice that penetrates the surface of the water and provides a handle for the mass beneath. This metaphor describes tangible user interfaces: They act as physical manifestations of computation, allowing us to interact directly with the portion that is made tangible—the “tip of the iceberg” (Figure 1b). Radical Atoms takes a leap beyond tangible interfaces by assuming a hypothetical generation of materials that can change form and CoVer STorY", "title": "" }, { "docid": "c2845a8a4f6c2467c7cd3a1a95a0ca37", "text": "In this report I introduce ReSuMe a new supervised learning method for Spiking Neural Networks. 
The research on ReSuMe has been primarily motivated by the need of inventing an efficient learni ng method for control of movement for the physically disabled. Howeve r, thorough analysis of the ReSuMe method reveals its suitability not on ly to the task of movement control, but also to other real-life applicatio ns including modeling, identification and control of diverse non-statio nary, nonlinear objects. ReSuMe integrates the idea of learning windows, known from t he spikebased Hebbian rules, with a novel concept of remote supervis ion. General overview of the method, the basic definitions, the netwo rk architecture and the details of the learning algorithm are presented . The properties of ReSuMe such as locality, computational simplicity a nd the online processing suitability are discussed. ReSuMe learning abi lities are illustrated in a verification experiment.", "title": "" }, { "docid": "be48b00ee50c872d42ab95e193ac774b", "text": "T profitability of remanufacturing systems for different cost, technology, and logistics structures has been extensively investigated in the literature. We provide an alternative and somewhat complementary approach that considers demand-related issues, such as the existence of green segments, original equipment manufacturer competition, and product life-cycle effects. The profitability of a remanufacturing system strongly depends on these issues as well as on their interactions. For a monopolist, we show that there exist thresholds on the remanufacturing cost savings, the green segment size, market growth rate, and consumer valuations for the remanufactured products, above which remanufacturing is profitable. More important, we show that under competition remanufacturing can become an effective marketing strategy, which allows the manufacturer to defend its market share via price discrimination.", "title": "" }, { "docid": "a2cbec8144197125cc5530aa6755196f", "text": "This paper provides a survey of the research done on optimization in dynamic environments over the past decade. We show an analysis of the most commonly used problems, methods and measures together with the newer approaches and trends, as well as their interrelations and common ideas. The survey is supported by a public web repository, located at http://www.dynamic-optimization. org where the collected bibliography is manually organized and tagged according to different categories.", "title": "" }, { "docid": "510504cec355ec68a92fad8f10527beb", "text": "This paper presents a 1.2V/2.5V tolerant I/O buffer design with only thin gate-oxide devices. The novel floating N-well and gate-tracking circuits in mixed-voltage I/O buffer are proposed to overcome the problem of leakage current, which will occur in the conventional CMOS I/O buffer when using in the mixedvoltage I/O interfaces. The new proposed 1.2V/2.5V tolerant I/O buffer design has been successfully verified in a 0.13-μm salicided CMOS process, which can be also applied in other CMOS processes to serve different mixed-voltage I/O interfaces.", "title": "" }, { "docid": "d5bc3147e23f95a070bce0f37a96c2a8", "text": "This paper presents a fully integrated wideband current-mode digital polar power amplifier (DPA) in CMOS with built-in AM–PM distortion self-compensation. Feedforward capacitors are implemented in each differential cascode digital power cell. 
These feedforward capacitors operate together with a proposed DPA biasing scheme to minimize the DPA output device capacitance <inline-formula> <tex-math notation=\"LaTeX\">$C_{d}$ </tex-math></inline-formula> variations over a wide output power range and a wide carrier frequency bandwidth, resulting in DPA AM–PM distortion reduction. A three-coil transformer-based DPA output passive network is implemented within a single transformer footprint (330 <inline-formula> <tex-math notation=\"LaTeX\">$\\mu \\text{m} \\,\\, \\times $ </tex-math></inline-formula> 330 <inline-formula> <tex-math notation=\"LaTeX\">$\\mu \\text{m}$ </tex-math></inline-formula>) and provides parallel power combining and load impedance transformation with a low loss, an octave bandwidth, and a large impedance transformation ratio. Moreover, this proposed power amplifier (PA) output passive network shows a desensitized phase response to <inline-formula> <tex-math notation=\"LaTeX\">$C_{d}$ </tex-math></inline-formula> variations and further suppresses the DPA AM–PM distortion. Both proposed AM–PM distortion self-compensation techniques are effective for a large carrier frequency range and a wide modulation bandwidth, and are independent of the DPA AM control codes. This results in a superior inherent DPA phase linearity and reduces or even eliminates the need for phase pre-distortion, which dramatically simplifies the DPA pre-distortion computations. As a proof-of-concept, a 2–4.3 GHz wideband DPA is implemented in a standard 28-nm bulk CMOS process. Operating with a low supply voltage of 1.4 V for enhanced reliability, the DPA demonstrates ±0.5 dB PA output power bandwidth from 2 to 4.3 GHz with +24.9 dBm peak output power at 3.1 GHz. The measured peak PA drain efficiency is 42.7% at 2.5 GHz and is more than 27% from 2 to 4.3 GHz. The measured PA AM–PM distortion is within 6.8° at 2.8 GHz over the PA output power dynamic range of 25 dB, achieving the lowest AM–PM distortion among recently reported current-mode DPAs in the same frequency range. Without any phase pre-distortion, modulation measurements with a 20-MHz 802.11n standard compliant signal demonstrate 2.95% rms error vector magnitude, −33.5 dBc adjacent channel leakage ratio, 15.6% PA drain efficiency, and +14.6 dBm PA average output power at 2.8 GHz.", "title": "" } ]
scidocsrr
fb4c8f46186e497d3099f19e91d2c6b6
On the Necessity of a Prescribed Block Validity Consensus: Analyzing Bitcoin Unlimited Mining Protocol
[ { "docid": "a172cd697bfcb1f3d2a824bb6a5bb6d1", "text": "Bitcoin provides two incentives for miners: block rewards and transaction fees. The former accounts for the vast majority of miner revenues at the beginning of the system, but it is expected to transition to the latter as the block rewards dwindle. There has been an implicit belief that whether miners are paid by block rewards or transaction fees does not affect the security of the block chain.\n We show that this is not the case. Our key insight is that with only transaction fees, the variance of the block reward is very high due to the exponentially distributed block arrival time, and it becomes attractive to fork a \"wealthy\" block to \"steal\" the rewards therein. We show that this results in an equilibrium with undesirable properties for Bitcoin's security and performance, and even non-equilibria in some circumstances. We also revisit selfish mining and show that it can be made profitable for a miner with an arbitrarily low hash power share, and who is arbitrarily poorly connected within the network. Our results are derived from theoretical analysis and confirmed by a new Bitcoin mining simulator that may be of independent interest.\n We discuss the troubling implications of our results for Bitcoin's future security and draw lessons for the design of new cryptocurrencies.", "title": "" }, { "docid": "ed447f3f4bbe8478e9e1f3c4593dbf1b", "text": "We revisit the fundamental question of Bitcoin's security against double spending attacks. While previous work has bounded the probability that a transaction is reversed, we show that no such guarantee can be effectively given if the attacker can choose when to launch the attack. Other approaches that bound the cost of an attack have erred in considering only limited attack scenarios, and in fact it is easy to show that attacks may not cost the attacker at all. We therefore provide a different interpretation of the results presented in previous papers and correct them in several ways. We provide different notions of the security of transactions that provide guarantees to different classes of defenders: merchants who regularly receive payments, miners, and recipients of large one-time payments. We additionally consider an attack that can be launched against lightweight clients, and show that these are less secure than their full node counterparts and provide the right strategy for defenders in this case as well. Our results, overall, improve the understanding of Bitcoin's security guarantees and provide correct bounds for those wishing to safely accept transactions.", "title": "" } ]
[ { "docid": "45c48cce2e2d7fbdea1afc51c7c6ad26", "text": "9", "title": "" }, { "docid": "a4fbb63fa62ec2985b395521d51191dd", "text": "Deep Neural Networks expose a high degree of parallelism, making them amenable to highly data parallel architectures. However, data-parallel architectures often accept inefficiency in individual computations for the sake of overall efficiency. We show that on average, activation values of convolutional layers during inference in modern Deep Convolutional Neural Networks (CNNs) contain 92% zero bits. Processing these zero bits entails ineffectual computations that could be skipped. We propose Pragmatic (PRA), a massively data-parallel architecture that eliminates most of the ineffectual computations on-the-fly, improving performance and energy efficiency compared to state-of-the-art high-performance accelerators [5]. The idea behind PRA is deceptively simple: use serial-parallel shift-and-add multiplication while skipping the zero bits of the serial input. However, a straightforward implementation based on shift-and-add multiplication yields unacceptable area, power and memory access overheads compared to a conventional bit-parallel design. PRA incorporates a set of design decisions to yield a practical, area and energy efficient design.\n Measurements demonstrate that for convolutional layers, PRA is 4.31X faster than DaDianNao [5] (DaDN) using a 16-bit fixed-point representation. While PRA requires 1.68X more area than DaDN, the performance gains yield a 1.70X increase in energy efficiency in a 65nm technology. With 8-bit quantized activations, PRA is 2.25X faster and 1.31X more energy efficient than an 8-bit version of DaDN.", "title": "" }, { "docid": "aa45f36e893c17fd364051b7b8d4c9b4", "text": "Identifying the location of performance bottlenecks is a non-trivial challenge when scaling n-tier applications in computing clouds. Specifically, we observed that an n-tier application may experience significant performance loss when there are transient bottlenecks in component servers. Such transient bottlenecks arise frequently at high resource utilization and often result from transient events (e.g., JVM garbage collection) in an n-tier system and bursty workloads. Because of their short lifespan (e.g., milliseconds), these transient bottlenecks are difficult to detect using current system monitoring tools with sampling at intervals of seconds or minutes. We describe a novel transient bottleneck detection method that correlates throughput (i.e., request service rate) and load (i.e., number of concurrent requests) of each server in an n-tier system at fine time granularity. Both throughput and load can be measured through passive network tracing at millisecond-level time granularity. Using correlation analysis, we can identify the transient bottlenecks at time granularities as short as 50ms. We validate our method experimentally through two case studies on transient bottlenecks caused by factors at the system software layer (e.g., JVM garbage collection) and architecture layer (e.g., Intel SpeedStep).", "title": "" }, { "docid": "cab91b728b363f362535758dd9ac57b3", "text": "The multimodal nature of speech is often ignored in human-computer interaction, but lip deformations and other body motion, such as those of the head, convey additional information. We integrate speech cues from many sources and this improves intelligibility, especially when the acoustic signal is degraded. 
This paper shows how this additional, often complementary, visual speech information can be used for speech recognition. Three methods for parameterizing lip image sequences for recognition using hidden Markov models are compared. Two of these are top-down approaches that fit a model of the inner and outer lip contours and derive lipreading features from a principal component analysis of shape or shape and appearance, respectively. The third, bottom-up, method uses a nonlinear scale-space analysis to form features directly from the pixel intensity. All methods are compared on a multitalker visual speech recognition task of isolated letters.", "title": "" }, { "docid": "9027f5db4917113f9dd658caddda4f88", "text": "In this paper, two different types of ultra-fast electromechanical actuators are compared using a multi-physical finite element simulation model that has been experimentally validated. They are equipped with a single-sided Thomson coil (TC) and a double-sided drive coil (DSC), respectively. The former consists of a spirally-wound flat coil with a copper armature on top, while the latter consists of two mirrored spiral coils that are connected in series. Initially, the geometry and construction of each of the actuating schemes are discussed. Subsequently, the theory behind the two force generation principles are described. Furthermore, the current, magnetic flux densities, accelerations, and induced stresses are analyzed. Moreover, mechanical loadability simulations are performed to study the impact on the requirements of the charging unit, the sensitivity of the parameters, and evaluate the degree of influence on the performance of both drives. Finally, it is confirmed that although the DSC is mechanically more complex, it has a greater efficiency than that of the TC.", "title": "" }, { "docid": "79c1237142f82b3e316e784c45fd68c6", "text": "The incidence of chronic osteomyelitis is increasing because of the prevalence of predisposing conditions such as diabetes mellitus and peripheral vascular disease. The increased availability of sensitive imaging tests, such as magnetic resonance imaging and bone scintigraphy, has improved diagnostic accuracy and the ability to characterize the infection. Plain radiography is a useful initial investigation to identify alternative diagnoses and potential complications. Direct sampling of the wound for culture and antimicrobial sensitivity is essential to target treatment. The increased incidence of methicillin-resistant Staphylococcus aureus osteomyelitis complicates antibiotic selection. Surgical debridement is usually necessary in chronic cases. The recurrence rate remains high despite surgical intervention and long-term antibiotic therapy. Acute hematogenous osteomyelitis in children typically can be treated with a four-week course of antibiotics. In adults, the duration of antibiotic treatment for chronic osteomyelitis is typically several weeks longer. In both situations, however, empiric antibiotic coverage for S. aureus is indicated.", "title": "" }, { "docid": "51ddbc18a9e5a460038676b7d5dc6f10", "text": "The selection, development, or comparison of machine learning methods in data mining can be a difficult task based on the target problem and goals of a particular study. Numerous publicly available real-world and simulated benchmark datasets have emerged from different sources, but their organization and adoption as standards have been inconsistent. 
As such, selecting and curating specific benchmarks remains an unnecessary burden on machine learning practitioners and data scientists. The present study introduces an accessible, curated, and developing public benchmark resource to facilitate identification of the strengths and weaknesses of different machine learning methodologies. We compare meta-features among the current set of benchmark datasets in this resource to characterize the diversity of available data. Finally, we apply a number of established machine learning methods to the entire benchmark suite and analyze how datasets and algorithms cluster in terms of performance. From this study, we find that existing benchmarks lack the diversity to properly benchmark machine learning algorithms, and there are several gaps in benchmarking problems that still need to be considered. This work represents another important step towards understanding the limitations of popular benchmarking suites and developing a resource that connects existing benchmarking standards to more diverse and efficient standards in the future.", "title": "" }, { "docid": "fe3afe69ec27189400e65e8bdfc5bf0b", "text": "speech learning changes over the life span and to explain why \"earlier is better\" as far as learning to pronounce a second language (L2) is concerned. An assumption we make is that the phonetic systems used in the production and perception of vowels and consonants remain adaptive over the life span, and that phonetic systems reorganize in response to sounds encountered in an L2 through the addition of new phonetic categories, or through the modification of old ones. The chapter is organized in the following way. Several general hypotheses concerning the cause of foreign accent in L2 speech production are summarized in the introductory section. In the next section, a model of L2 speech learning that aims to account for age-related changes in L2 pronunciation is presented. The next three sections present summaries of empirical research dealing with the production and perception of L2 vowels, word-initial consonants, and word-final consonants. The final section discusses questions of general theoretical interest, with special attention to a featural (as opposed to a segmental) level of analysis. Although nonsegmental (Le., prosodic) dimensions are an important source of foreign accent, the present chapter focuses on phoneme-sized units of speech. Although many different languages are learned as an L2, the focus is on the acquisition of English.", "title": "" }, { "docid": "4d59fd865447cfd1d54623e267af491c", "text": "Visual relationships capture a wide variety of interactions between pairs of objects in images (e.g. “man riding bicycle” and “man pushing bicycle”). Consequently, the set of possible relationships is extremely large and it is difficult to obtain sufficient training examples for all possible relationships. Because of this limitation, previous work on visual relationship detection has concentrated on predicting only a handful of relationships. Though most relationships are infrequent, their objects (e.g. “man” and “bicycle”) and predicates (e.g. “riding” and “pushing”) independently occur more frequently. We propose a model that uses this insight to train visual models for objects and predicates individually and later combines them together to predict multiple relationships per image. We improve on prior work by leveraging language priors from semantic word embeddings to finetune the likelihood of a predicted relationship. 
Our model can scale to predict thousands of types of relationships from a few examples. Additionally, we localize the objects in the predicted relationships as bounding boxes in the image. We further demonstrate that understanding relationships can improve content based image retrieval.", "title": "" }, { "docid": "9af37841feed808345c39ee96ddff914", "text": "Wake-up receivers (WuRXs) are low-power radios that continuously monitor the RF environment to wake up a higher-power radio upon detection of a predetermined RF signature. Prior-art WuRXs have 100s of kHz of bandwidth [1] with low signature-to-wake-up-signal latency to help synchronize communication amongst nominally asynchronous wireless devices. However, applications such as unattended ground sensors and smart home appliances wake-up infrequently in an event-driven manner, and thus WuRX bandwidth and latency are less critical; instead, the most important metrics are power consumption and sensitivity. Unfortunately, current state-of-the-art WuRXs utilizing direct envelope-detecting [2] and IF/uncertain-IF [1,3] architectures (Fig. 24.5.1) achieve only modest sensitivity at low-power (e.g., −39dBm at 104nW [2]), or achieve excellent sensitivity at higher-power (e.g., −97dBm at 99µW [3]) via active IF gain elements. Neither approach meets the needs of next-generation event-driven sensing networks.", "title": "" }, { "docid": "5ca5cfcd0ed34d9b0033977e9cde2c74", "text": "We study the impact of regulation on competition between brand-names and generics and pharmaceutical expenditures using a unique policy experiment in Norway, where reference pricing (RP) replaced price cap regulation in 2003 for a sub-sample of o¤-patent products. First, we construct a vertical di¤erentiation model to analyze the impact of regulation on prices and market shares of brand-names and generics. Then, we exploit a detailed panel data set at product level covering several o¤-patent molecules before and after the policy reform. O¤-patent drugs not subject to RP serve as our control group. We …nd that RP signi…cantly reduces both brand-name and generic prices, and results in signi…cantly lower brand-name market shares. Finally, we show that RP has a strong negative e¤ect on average molecule prices, suggesting signi…cant cost-savings, and that patients’ copayments decrease despite the extra surcharges under RP. Key words: Pharmaceuticals; Regulation; Generic Competition JEL Classi…cations: I11; I18; L13; L65 We thank David Bardey, Øivind Anti Nilsen, Frode Steen, and two anonymous referees for valuable comments and suggestions. We also thank the Norwegian Research Council, Health Economics Bergen (HEB) for …nancial support. Corresponding author. Department of Economics and Health Economics Bergen, Norwegian School of Economics and Business Administration, Helleveien 30, N-5045 Bergen, Norway. E-mail: [email protected]. Uni Rokkan Centre, Health Economics Bergen, Nygårdsgaten 5, N-5015 Bergen, Norway. E-mail: [email protected]. Department of Economics/NIPE, University of Minho, Campus de Gualtar, 4710-057 Braga, Portugal; and University of Bergen (Economics), Norway. 
E-mail: [email protected].", "title": "" }, { "docid": "b7524787cce58c3bf34a9d7fd3c8af90", "text": "Convolutional Neural Networks and Graphics Processing Units have been at the core of a paradigm shift in computer vision research that some researchers have called “the algorithmic perception revolution.” This thesis presents the implementation and analysis of several techniques for performing artistic style transfer using a Convolutional Neural Network architecture trained for large-scale image recognition tasks. We present an implementation of an existing algorithm for artistic style transfer in images and video. The neural algorithm separates and recombines the style and content of arbitrary images. Additionally, we present an extension of the algorithm to perform weighted artistic style transfer.", "title": "" }, { "docid": "f4562d3b45761d01e64f1f72bee5eec7", "text": "We introduce two Python frameworks to train neural networks on large datasets: Blocks and Fuel. Blocks is based on Theano, a linear algebra compiler with CUDA-support (Bastien et al., 2012; Bergstra et al., 2010). It facilitates the training of complex neural network models by providing parametrized Theano operations, attaching metadata to Theano’s symbolic computational graph, and providing an extensive set of utilities to assist training the networks, e.g. training algorithms, logging, monitoring, visualization, and serialization. Fuel provides a standard format for machine learning datasets. It allows the user to easily iterate over large datasets, performing many types of pre-processing on the fly.", "title": "" }, { "docid": "7f9515a848cca72fb1864c55e6e52e50", "text": "William 111. Waite ver the past five years, our group has developed the Eli’ system to reduce the cost of producing compilers. Eli has been used to construct complete compilers for standard programming languages extensions to standard programming languages, and specialpurpose languages. For the remainder of this article, we will use the term compiler when referring to language processors. One of the most important ways to enhance productivity in software engineering is to provide more appropriate descriptions of problems and their", "title": "" }, { "docid": "b6d8e6b610eff993dfa93f606623e31d", "text": "Data journalism designates journalistic work inspired by digital data sources. A particularly popular and active area of data journalism is concerned with fact-checking. The term was born in the journalist community and referred the process of verifying and ensuring the accuracy of published media content; since 2012, however, it has increasingly focused on the analysis of politics, economy, science, and news content shared in any form, but first and foremost on the Web (social and otherwise). These trends have been noticed by computer scientists working in the industry and academia. Thus, a very lively area of digital content management research has taken up these problems and works to propose foundations (models), algorithms, and implement them through concrete tools. 
Our tutorial: (i) Outlines the current state of affairs in the area of digital (or computational) fact-checking in newsrooms, by journalists, NGO workers, scientists and IT companies; (ii) Shows which areas of digital content management research, in particular those relying on the Web, can be leveraged to help fact-checking, and gives a comprehensive survey of efforts in this area; (iii) Highlights ongoing trends, unsolved problems, and areas where we envision future scientific and practical advances. PVLDB Reference Format: S. Cazalens, J. Leblay, P. Lamarre, I. Manolescu, X. Tannier. Computational Fact Checking: A Content Management Perspective. PVLDB, 11 (12): 2110-2113, 2018. DOI: https://doi.org/10.14778/3229863.3229880 This work is licensed under the Creative Commons AttributionNonCommercial-NoDerivatives 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-nd/4.0/. For any use beyond those covered by this license, obtain permission by emailing [email protected]. Proceedings of the VLDB Endowment, Vol. 11, No. 12 Copyright 2018 VLDB Endowment 2150-8097/18/8. DOI: https://doi.org/10.14778/3229863.3229880 1. OUTLINE In Section 1.1, we provide a short history of journalistic fact-checking and presents its most recent and visible actors, from the media and/or NGO communities. Section 1.2 discusses the scientific content management areas which bring useful tools for computational fact-checking. 1.1 Data journalism and fact-checking While data of some form is a natural ingredient of all reporting, the increasing volumes and complexity of digital data lead to a qualitative jump, where technical skills, and in particular data science skills, are stringently needed in journalistic work. A particularly popular and active area of data journalism is concerned with fact-checking. The term was born in the journalist community; it referred to the task of identifying and checking factual claims present in media content, which dedicated newsroom personnel would then check for factual accuracy. The goal of such checking was to avoid misinformation, to protect the journal reputation and avoid legal actions. Starting around 2012, first in the United States (FactCheck.org), then in Europe, and soon after in all areas of the world, journalists have started to take advantage of modern technologies for processing content, such as text, video, structured and unstructured data, in order to automate, at least partially, the knowledge finding, reasoning, and analysis tasks which had been previously performed completely by humans. Over time, the focus of fact-checking shifted from verifying claims made by media outlets, toward the claims made by politicians and other public figures. This trend coincided with the parallel (but distinct) evolution toward asking Government Open Data, that is: the idea that governing bodies should share with the public precise information describing their functioning, so that the people have means to assess the quality of their elected representation. Government Open Data became quickly available, in large volumes, e.g. through data.gov in the US, data.gov.uk in the UK, data.gouv.fr in France etc.; journalists turned out to be the missing link between the newly available data and comprehension by the public. 
Data journalism thus found http://factcheck.org", "title": "" }, { "docid": "e16d89d3a6b3d38b5823fae977087156", "text": "The payoff of abarrier option depends on whether or not a specified asset price, index, or rate reaches a specified level during the life of the option. Most models for pricing barrier options assume continuous monitoring of the barrier; under this assumption, the option can often be priced in closed form. Many (if not most) real contracts with barrier provisions specify discrete monitoring instants; there are essentially no formulas for pricing these options, and even numerical pricing is difficult. We show, however, that discrete barrier options can be priced with remarkable accuracy using continuous barrier formulas by applying a simple continuity correction to the barrier. The correction shifts the barrier away from the underlying by a factor of exp (βσ √ 1t), whereβ ≈ 0.5826,σ is the underlying volatility, and1t is the time between monitoring instants. The correction is justified both theoretically and experimentally.", "title": "" }, { "docid": "691f992fe99d6e16a97f694375014d16", "text": "Database fragmentation allows reducing irrelevant data accesses by grouping data frequently accessed together in dedicated segments. In this paper, we address multimedia database fragmentation to take into account the rich characteristics of multimedia objects. We particularly discuss multimedia primary horizontal fragmentation and focus on semantic-based textual predicates implication required as a pre-process in current fragmentation algorithms in order to partition multimedia data efficiently. Identifying semantic implication between similar queries (if a user searches for the images containing a car, he would probably mean auto, vehicle, van or sport-car as well) will improve the fragmentation process. Making use of the neighborhood concept in knowledge bases to identify semantic implications constitutes the core of our proposal. A prototype has been implemented to evaluate the performance of our approach.", "title": "" }, { "docid": "08cf1e6353fa3c9969188d946874c305", "text": "In this paper we develop, analyze, and test a new algorithm for the global minimization of a function subject to simple bounds without the use of derivatives. The underlying algorithm is a pattern search method, more specifically a coordinate search method, which guarantees convergence to stationary points from arbitrary starting points. In the optional search phase of pattern search we apply a particle swarm scheme to globally explore the possible nonconvexity of the objective function. Our extensive numerical experiments showed that the resulting algorithm is highly competitive with other global optimization methods also based on function values.", "title": "" }, { "docid": "ae4b651ea8bd6b4c7c6efcc52f76516e", "text": "We study a regularization framework where we feed an original clean data point and a nearby point through a mapping, which is then penalized by the Euclidian distance between the corresponding outputs. The nearby point may be chosen randomly or adversarially. A more general form of this framework has been presented in (Bachman et al., 2014). We relate this framework to many existing regularization methods: It is a stochastic estimate of penalizing the Frobenius norm of the Jacobian of the mapping as in Poggio & Girosi (1990), it generalizes noise regularization (Sietsma & Dow, 1991), and it is a simplification of the canonical regularization term by the ladder networks in Rasmus et al. 
(2015). We also investigate the connection to virtual adversarial training (VAT) (Miyato et al., 2016) and show how VAT can be interpreted as penalizing the largest eigenvalue of a Fisher information matrix. Our contribution is discovering connections between the studied and other existing regularization methods.", "title": "" }, { "docid": "eef1e51e4127ed481254f97963496f48", "text": "-Vehicular ad hoc networks (VANETs) are wireless networks that do not require any fixed infrastructure. Regarding traffic safety applications for VANETs, warning messages have to be quickly and smartly disseminated in order to reduce the required dissemination time and to increase the number of vehicles receiving the traffic warning information. Adaptive techniques for VANETs usually consider features related to the vehicles in the scenario, such as their density, speed, and position, to adapt the performance of the dissemination process. These approaches are not useful when trying to warn the highest number of vehicles about dangerous situations in realistic vehicular environments. The Profile-driven Adaptive Warning Dissemination Scheme (PAWDS) designed to improve the warning message dissemination process. PAWDS system that dynamically modifies some of the key parameters of the propagation process and it cannot detect the vehicles which are in the dangerous position. Proposed system identifies the vehicles which are in the dangerous position and to send warning messages immediately. The vehicles must make use of all the available information efficiently to predict the position of nearby vehicles. Keywords— PAWDS, VANET, Ad hoc network , OBU , RSU, GPS.", "title": "" } ]
scidocsrr
eb3ab27f99915abd020a21b269292bca
MahNMF: Manhattan Non-negative Matrix Factorization
[ { "docid": "a21d1956026b29bc67b92f8508a62e1c", "text": "We introduce several new formulations for sparse nonnegative matrix approximation. Subsequently, we solve these formulations by developing generic algorithms. Further, to help selecting a particular sparse formulation, we briefly discuss the interpretation of each formulation. Finally, preliminary experiments are presented to illustrate the behavior of our formulations and algorithms.", "title": "" }, { "docid": "9edfe5895b369c0bab8d83838661ea0a", "text": "(57) Data collected from devices and human condition may be used to forewarn of critical events such as machine/structural failure or events from brain/heart wave data stroke. By moni toring the data, and determining what values are indicative of a failure forewarning, one can provide adequate notice of the impending failure in order to take preventive measures. This disclosure teaches a computer-based method to convert dynamical numeric data representing physical objects (un structured data) into discrete-phase-space states, and hence into a graph (Structured data) for extraction of condition change. ABSTRACT", "title": "" }, { "docid": "e2867713be67291ee8c25afa3e2d1319", "text": "In recent years the <i>l</i><sub>1</sub>, <sub>∞</sub> norm has been proposed for joint regularization. In essence, this type of regularization aims at extending the <i>l</i><sub>1</sub> framework for learning sparse models to a setting where the goal is to learn a set of jointly sparse models. In this paper we derive a simple and effective projected gradient method for optimization of <i>l</i><sub>1</sub>, <sub>∞</sub> regularized problems. The main challenge in developing such a method resides on being able to compute efficient projections to the <i>l</i><sub>1</sub>, <sub>∞</sub> ball. We present an algorithm that works in <i>O</i>(<i>n</i> log <i>n</i>) time and <i>O</i>(<i>n</i>) memory where <i>n</i> is the number of parameters. We test our algorithm in a multi-task image annotation problem. Our results show that <i>l</i><sub>1</sub>, <sub>∞</sub> leads to better performance than both <i>l</i><sub>2</sub> and <i>l</i><sub>1</sub> regularization and that it is is effective in discovering jointly sparse solutions.", "title": "" } ]
[ { "docid": "2d87e26389b9d4ebf896bd9cbd281e69", "text": "Finger-vein biometrics has been extensively investigated for personal authentication. One of the open issues in finger-vein verification is the lack of robustness against image-quality degradation. Spurious and missing features in poor-quality images may degrade the system’s performance. Despite recent advances in finger-vein quality assessment, current solutions depend on domain knowledge. In this paper, we propose a deep neural network (DNN) for representation learning to predict image quality using very limited knowledge. Driven by the primary target of biometric quality assessment, i.e., verification error minimization, we assume that low-quality images are falsely rejected in a verification system. Based on this assumption, the low- and high-quality images are labeled automatically. We then train a DNN on the resulting data set to predict the image quality. To further improve the DNN’s robustness, the finger-vein image is divided into various patches, on which a patch-based DNN is trained. The deepest layers associated with the patches form together a complementary and an over-complete representation. Subsequently, the quality of each patch from a testing image is estimated and the quality scores from the image patches are conjointly input to probabilistic support vector machines (P-SVM) to boost quality-assessment performance. To the best of our knowledge, this is the first proposed work of deep learning-based quality assessment, not only for finger-vein biometrics, but also for other biometrics in general. The experimental results on two public finger-vein databases show that the proposed scheme accurately identifies high- and low-quality images and significantly outperforms existing approaches in terms of the impact on equal error-rate decrease.", "title": "" }, { "docid": "bb1554d174df80e7db20e943b4a69249", "text": "Any static, global analysis of the expression and data relationships in a program requires a knowledge of the control flow of the program. Since one of the primary reasons for doing such a global analysis in a compiler is to produce optimized programs, control flow analysis has been embedded in many compilers and has been described in several papers. An early paper by Prosser [5] described the use of Boolean matrices (or, more particularly, connectivity matrices) in flow analysis. The use of “dominance” relationships in flow analysis was first introduced by Prosser and much expanded by Lowry and Medlock [6]. References [6,8,9] describe compilers which use various forms of control flow analysis for optimization. Some recent developments in the area are reported in [4] and in [7].\n The underlying motivation in all the different types of control flow analysis is the need to codify the flow relationships in the program. The codification may be in connectivity matrices, in predecessor-successor tables, in dominance lists, etc. Whatever the form, the purpose is to facilitate determining what the flow relationships are; in other words to facilitate answering such questions as: is this an inner loop?, if an expression is removed from the loop where can it be correctly and profitably placed?, which variable definitions can affect this use?\n In this paper the basic control flow relationships are expressed in a directed graph. 
Various graph constructs are then found and shown to codify interesting global relationships.", "title": "" }, { "docid": "c7c5fde8197d87f2551a2897d5fd4487", "text": "The Parallel Meaning Bank is a corpus of translations annotated with shared, formal meaning representations comprising over 11 million words divided over four languages (English, German, Italian, and Dutch). Our approach is based on cross-lingual projection: automatically produced (and manually corrected) semantic annotations for English sentences are mapped onto their word-aligned translations, assuming that the translations are meaning-preserving. The semantic annotation consists of five main steps: (i) segmentation of the text in sentences and lexical items; (ii) syntactic parsing with Combinatory Categorial Grammar; (iii) universal semantic tagging; (iv) symbolization; and (v) compositional semantic analysis based on Discourse Representation Theory. These steps are performed using statistical models trained in a semisupervised manner. The employed annotation models are all language-neutral. Our first results are promising.", "title": "" }, { "docid": "0efc0e61946979158277aa9314227426", "text": "Many chronic diseases possess a shared biology. Therapies designed for patients at risk of multiple diseases need to account for the shared impact they may have on related diseases to ensure maximum overall well-being. Learning from data in this setting differs from classical survival analysis methods since the incidence of an event of interest may be obscured by other related competing events. We develop a semiparametric Bayesian regression model for survival analysis with competing risks, which can be used for jointly assessing a patient’s risk of multiple (competing) adverse outcomes. We construct a Hierarchical Bayesian Mixture (HBM) model to describe survival paths in which a patient’s covariates influence both the estimation of the type of adverse event and the subsequent survival trajectory through Multivariate Random Forests. In addition variable importance measures, which are essential for clinical interpretability are induced naturally by our model. We aim with this setting to provide accurate individual estimates but also interpretable conclusions for use as a clinical decision support tool. We compare our method with various state-of-the-art benchmarks on both synthetic and clinical data.", "title": "" }, { "docid": "e28ba2ea209537cf9867428e3cf7fdd7", "text": "People take their mobile phones everywhere they go. In Saudi Arabia, the mobile penetration is very high and students use their phones for different reasons in the classroom. The use of mobile devices in classroom triggers an alert of the impact it might have on students’ learning. This study investigates the association between the use of mobile phones during classroom and the learners’ performance and satisfaction. Results showed that students get distracted, and that this diversion of their attention is reflected in their academic success. However, this is not applicable for all. Some students received high scores even though they declared using mobile phones in classroom, which triggers a request for a deeper study.", "title": "" }, { "docid": "443191f41aba37614c895ba3533f80ed", "text": "De novo engineering of gene circuits inside cells is extremely difficult, and efforts to realize predictable and robust performance must deal with noise in gene expression and variation in phenotypes between cells. 
Here we demonstrate that by coupling gene expression to cell survival and death using cell–cell communication, we can programme the dynamics of a population despite variability in the behaviour of individual cells. Specifically, we have built and characterized a ‘population control’ circuit that autonomously regulates the density of an Escherichia coli population. The cell density is broadcasted and detected by elements from a bacterial quorum-sensing system, which in turn regulate the death rate. As predicted by a simple mathematical model, the circuit can set a stable steady state in terms of cell density and gene expression that is easily tunable by varying the stability of the cell–cell communication signal. This circuit incorporates a mechanism for programmed death in response to changes in the environment, and allows us to probe the design principles of its more complex natural counterparts.", "title": "" }, { "docid": "d6d275b719451982fa67d442c55c186c", "text": "Waterfall development is still a widely used way of working in software development companies. Many problems have been reported related to the model. Commonly accepted problems are for example to cope with change and that defects all too often are detected too late in the software development process. However, many of the problems mentioned in literature are based on beliefs and experiences, and not on empirical evidence. To address this research gap, we compare the problems in literature with the results of a case study at Ericsson AB in Sweden, investigating issues in the waterfall model. The case study aims at validating or contradicting the beliefs of what the problems are in waterfall development through empirical research.", "title": "" }, { "docid": "17dce24f26d7cc196e56a889255f92a8", "text": "As known, to finish this book, you may not need to get it at once in a day. Doing the activities along the day may make you feel so bored. If you try to force reading, you may prefer to do other entertaining activities. But, one of concepts we want you to have this book is that it will not make you feel bored. Feeling bored when reading will be only unless you don't like the book. computational principles of mobile robotics really offers what everybody wants.", "title": "" }, { "docid": "ee5b04d7b62186775a7b6ab77b8bbd60", "text": "Answers submitted to CQA forums are often elaborate, contain spam, are marred by slurs and business promotions. It is difficult for a reader to go through numerous such answers to gauge community opinion. As a result summarization becomes a prioritized task. However, there is a dearth of neural approaches for CQA summarization due to the lack of large scale annotated dataset. We create CQASUMM, the first annotated CQA summarization dataset by filtering the 4.4 million Yahoo! Answers L6 dataset. We sample threads where the best answer can double up as a reference and build hundred word summaries from them. We provide scripts1 to reconstruct the dataset and introduce the new task of Community Question Answering Summarization.\n Multi document summarization(MDS) has been widely studied using news corpora. However documents in CQA have higher variance and contradicting opinion. We compare the popular MDS techniques and evaluate their performance on our CQA corpora. We find that most MDS workflows are built for the entirely factual news corpora, whereas our corpus has a fair share of opinion based instances too. 
We therefore introduce OpinioSumm, a new MDS method which outperforms the best baseline by 4.6% w.r.t. ROUGE-1 score.", "title": "" },
The independence of such regions makes the algorithm suitable for parallel implementation. The separated use of the geometric and photometric criteria leads to reduced memory requirements and a compact storage of the input data. Finally, it allows the efficient creation of large mosaics, without user intervention. We illustrate the performance of the approach on image sequences with prominent 3-D content and moving objects. 2008 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "2f2c36452ab45c4234904d9b11f28eb7", "text": "Bitcoin is a potentially disruptive new crypto-currency based on a decentralized opensource protocol which is gradually gaining popularity. Perhaps the most important question that will affect Bitcoin’s success, is whether or not it will be able to scale to support the high volume of transactions required from a global currency system. We investigate the restrictions on the rate of transaction processing in Bitcoin as a function of both the bandwidth available to nodes and the network delay, both of which lower the efficiency of Bitcoin’s transaction processing. The security analysis done by Bitcoin’s creator Satoshi Nakamoto [12] assumes that block propagation delays are negligible compared to the time between blocks—an assumption that does not hold when the protocol is required to process transactions at high rates. We improve upon the original analysis and remove this assumption. Using our results, we are able to give bounds on the number of transactions per second the protocol can handle securely. Building on previously published measurements by Decker and Wattenhofer [5], we show these bounds are currently more restrictive by an order of magnitude than the bandwidth needed to stream all transactions. We additionally show how currently planned improvements to the protocol, namely the use of transaction hashes in blocks (instead of complete transaction records), will dramatically alleviate these restrictions. Finally, we present an easily implementable modification to the way Bitcoin constructs its main data structure, the blockchain, that immensely improves security from attackers, especially when the network operates at high rates. This improvement allows for further increases in the number of transactions processed per second. We show that with our proposed modification, significant speedups can be gained in confirmation time of transactions as well. The block generation rate can be securely increased to more than one block per second – a 600 fold speedup compared to today’s rate, while still allowing the network to processes many transactions per second.", "title": "" }, { "docid": "2265121606a423d581ca696a9b7cee31", "text": "Heterochromatin protein 1 (HP1) was first described in Drosophila melanogaster as a heterochromatin associated protein with dose-dependent effect on gene silencing. The HP1 family is evolutionarily highly conserved and there are multiple members within the same species. The multi-functionality of HP1 reflects its ability to interact with diverse nuclear proteins, ranging from histones and transcriptional co-repressors to cohesion and DNA replication factors. As its name suggests, HP1 is well-known as a silencing protein found at pericentromeres and telomeres. In contrast to previous views that heterochromatin is transcriptionally inactive; noncoding RNAs transcribed from heterochromatic DNA repeats regulates the assembly and function of heterochromatin ranging from fission yeast to animals. 
Moreover, more recent progress has shed light on the paradoxical properties of HP1 in the nucleus and has revealed, unexpectedly, its existence in the euchromatin. Therefore, HP1 proteins might participate in both transcription repression in heterochromatin and euchromatin.", "title": "" }, { "docid": "6b01a80b6502cb818024e0ac3b00114b", "text": "BACKGROUND\nArithmetical skills are essential to the effective exercise of citizenship in a numerate society. How these skills are acquired, or fail to be acquired, is of great importance not only to individual children but to the organisation of formal education and its role in society.\n\n\nMETHOD\nThe evidence on the normal and abnormal developmental progression of arithmetical abilities is reviewed; in particular, evidence for arithmetical ability arising from innate specific cognitive skills (innate numerosity) vs. general cognitive abilities (the Piagetian view) is compared.\n\n\nRESULTS\nThese include evidence from infancy research, neuropsychological studies of developmental dyscalculia, neuroimaging and genetics. The development of arithmetical abilities can be described in terms of the idea of numerosity -- the number of objects in a set. Early arithmetic is usually thought of as the effects on numerosity of operations on sets such as set union. The child's concept of numerosity appears to be innate, as infants, even in the first week of life, seem to discriminate visual arrays on the basis of numerosity. Development can be seen in terms of an increasingly sophisticated understanding of numerosity and its implications, and in increasing skill in manipulating numerosities. The impairment in the capacity to learn arithmetic -- dyscalculia -- can be interpreted in many cases as a deficit in the concept in the child's concept of numerosity. The neuroanatomical bases of arithmetical development and other outstanding issues are discussed.\n\n\nCONCLUSIONS\nThe evidence broadly supports the idea of an innate specific capacity for acquiring arithmetical skills, but the effects of the content of learning, and the timing of learning in the course of development, requires further investigation.", "title": "" }, { "docid": "3ba011d181a4644c8667b139c63f50ff", "text": "Recent studies have suggested that positron emission tomography (PET) imaging with 68Ga-labelled DOTA-somatostatin analogues (SST) like octreotide and octreotate is useful in diagnosing neuroendocrine tumours (NETs) and has superior value over both CT and planar and single photon emission computed tomography (SPECT) somatostatin receptor scintigraphy (SRS). The aim of the present study was to evaluate the role of 68Ga-DOTA-1-NaI3-octreotide (68Ga-DOTANOC) in patients with SST receptor-expressing tumours and to compare the results of 68Ga-DOTA-D-Phe1-Tyr3-octreotate (68Ga-DOTATATE) in the same patient population. Twenty SRS were included in the study. Patients’ age (n = 20) ranged from 25 to 75 years (mean 55.4 ± 12.7 years). There were eight patients with well-differentiated neuroendocrine tumour (WDNET) grade1, eight patients with WDNET grade 2, one patient with poorly differentiated neuroendocrine carcinoma (PDNEC) grade 3 and one patient with mixed adenoneuroendocrine tumour (MANEC). All patients had two consecutive PET studies with 68Ga-DOTATATE and 68Ga-DOTANOC. All images were evaluated visually and maximum standardized uptake values (SUVmax) were also calculated for quantitative evaluation. 
On visual evaluation both tracers produced equally excellent image quality and similar body distribution. The physiological uptake sites of pituitary and salivary glands showed higher uptake in 68Ga-DOTATATE images. Liver and spleen uptake values were evaluated as equal. Both 68Ga-DOTATATE and 68Ga-DOTANOC were negative in 6 (30 %) patients and positive in 14 (70 %) patients. In 68Ga-DOTANOC images only 116 of 130 (89 %) lesions could be defined and 14 lesions were missed because of lack of any uptake. SUVmax values of lesions were significantly higher on 68Ga-DOTATATE images. Our study demonstrated that the images obtained by 68Ga-DOTATATE and 68Ga-DOTANOC have comparable diagnostic accuracy. However, 68Ga-DOTATATE seems to have a higher lesion uptake and may have a potential advantage.", "title": "" }, { "docid": "0e54be77f69c6afbc83dfabc0b8b4178", "text": "Spinal muscular atrophy (SMA) is a neurodegenerative disease characterized by loss of motor neurons in the anterior horn of the spinal cord and resultant weakness. The most common form of SMA, accounting for 95% of cases, is autosomal recessive proximal SMA associated with mutations in the survival of motor neurons (SMN1) gene. Relentless progress during the past 15 years in the understanding of the molecular genetics and pathophysiology of SMA has resulted in a unique opportunity for rational, effective therapeutic trials. The goal of SMA therapy is to increase the expression levels of the SMN protein in the correct cells at the right time. With this target in sight, investigators can now effectively screen potential therapies in vitro, test them in accurate, reliable animal models, move promising agents forward to clinical trials, and accurately diagnose patients at an early or presymptomatic stage of disease. A major challenge for the SMA community will be to prioritize and develop the most promising therapies in an efficient, timely, and safe manner with the guidance of the appropriate regulatory agencies. This review will take a historical perspective to highlight important milestones on the road to developing effective therapies for SMA.", "title": "" }, { "docid": "6bbbddca9ba258afb25d6e8af9bfec82", "text": "With the ever increasing popularity of electronic commerce, the evaluation of antecedents and of customer satisfaction have become very important for the cyber shopping store (CSS) and for researchers. The various models of customer satisfaction that researchers have provided so far are mostly based on the traditional business channels and thus may not be appropriate for CSSs. This research has employed case and survey methods to study the antecedents of customer satisfaction. Through case methods a research model with hypotheses is developed. And through survey methods, the relationships between antecedents and satisfaction are further examined and analyzed. We find five antecedents of customer satisfaction to be more appropriate for online shopping on the Internet. Among them homepage presentation is a new and unique antecedent which has not existed in traditional marketing.", "title": "" }, { "docid": "df5ef1235844aa1593203f96cd2130bd", "text": "It is generally well acknowledged that humans are capable of having a theory of mind (ToM) of others. 
We present here a model which borrows mechanisms from three dissenting explanations of how ToM develops and functions, and show that our model behaves in accordance with various ToM experiments (Wellman, Cross, & Watson, 2001; Leslie, German, & Polizzi, 2005).", "title": "" }, { "docid": "ed23845ded235d204914bd1140f034c3", "text": "We propose a general framework to learn deep generative models via Variational Gradient Flow (VGrow) on probability spaces. The evolving distribution that asymptotically converges to the target distribution is governed by a vector field, which is the negative gradient of the first variation of the f -divergence between them. We prove that the evolving distribution coincides with the pushforward distribution through the infinitesimal time composition of residual maps that are perturbations of the identity map along the vector field. The vector field depends on the density ratio of the pushforward distribution and the target distribution, which can be consistently learned from a binary classification problem. Connections of our proposed VGrow method with other popular methods, such as VAE, GAN and flow-based methods, have been established in this framework, gaining new insights of deep generative learning. We also evaluated several commonly used divergences, including KullbackLeibler, Jensen-Shannon, Jeffrey divergences as well as our newly discovered “logD” divergence which serves as the objective function of the logD-trick GAN. Experimental results on benchmark datasets demonstrate that VGrow can generate high-fidelity images in a stable and efficient manner, achieving competitive performance with stateof-the-art GANs. ∗Yuling Jiao ([email protected]) †Can Yang ([email protected]) 1 ar X iv :1 90 1. 08 46 9v 2 [ cs .L G ] 7 F eb 2 01 9", "title": "" } ]
scidocsrr
b3a429a245088e0a5defbc505c4091b6
Can Computer Playfulness and Cognitive Absorption Lead to Problematic Technology Usage?
[ { "docid": "6dc4e4949d4f37f884a23ac397624922", "text": "Research indicates that maladaptive patterns of Internet use constitute behavioral addiction. This article explores the research on the social effects of Internet addiction. There are four major sections. The Introduction section overviews the field and introduces definitions, terminology, and assessments. The second section reviews research findings and focuses on several key factors related to Internet addiction, including Internet use and time, identifiable problems, gender differences, psychosocial variables, and computer attitudes. The third section considers the addictive potential of the Internet in terms of the Internet, its users, and the interaction of the two. The fourth section addresses current and projected treatments of Internet addiction, suggests future research agendas, and provides implications for educational psychologists.", "title": "" }, { "docid": "2a617a0388cc6653e4d014fc3019e724", "text": "What kinds of psychological features do people have when they are overly involved in usage of the internet? Internet users in Korea were investigated in terms of internet over-use and related psychological profiles by the level of internet use. We used a modified Young's Internet Addiction Scale, and 13,588 users (7,878 males, 5,710 females), out of 20 million from a major portal site in Korea, participated in this study. Among the sample, 3.5% had been diagnosed as internet addicts (IA), while 18.4% of them were classified as possible internet addicts (PA). The Internet Addiction Scale showed a strong relationship with dysfunctional social behaviors. More IA tried to escape from reality than PA and Non-addicts (NA). When they got stressed out by work or were just depressed, IA showed a high tendency to access the internet. The IA group also reported the highest degree of loneliness, depressed mood, and compulsivity compared to the other groups. The IA group seemed to be more vulnerable to interpersonal dangers than others, showing an unusually close feeling for strangers. Further study is needed to investigate the direct relationship between psychological well-being and internet dependency.", "title": "" } ]
[ { "docid": "9003a12f984d2bf2fd84984a994770f0", "text": "Sulfated polysaccharides and their lower molecular weight oligosaccharide derivatives from marine macroalgae have been shown to possess a variety of biological activities. The present paper will review the recent progress in research on the structural chemistry and the bioactivities of these marine algal biomaterials. In particular, it will provide an update on the structural chemistry of the major sulfated polysaccharides synthesized by seaweeds including the galactans (e.g., agarans and carrageenans), ulvans, and fucans. It will then review the recent findings on the anticoagulant/antithrombotic, antiviral, immuno-inflammatory, antilipidemic and antioxidant activities of sulfated polysaccharides and their potential for therapeutic application.", "title": "" }, { "docid": "6d6e21d332a022cc747325439b7cac74", "text": "We present a computational analysis of the language of drug users when talking about their drug experiences. We introduce a new dataset of over 4,000 descriptions of experiences reported by users of four main drug types, and show that we can predict with an F1-score of up to 88% the drug behind a certain experience. We also perform an analysis of the dominant psycholinguistic processes and dominant emotions associated with each drug type, which sheds light on the characteristics of drug users.", "title": "" }, { "docid": "c00c6539b78ed195224063bcff16fb12", "text": "Information Retrieval (IR) systems assist users in finding information from the myriad of information resources available on the Web. A traditional characteristic of IR systems is that if different users submit the same query, the system would yield the same list of results, regardless of the user. Personalised Information Retrieval (PIR) systems take a step further to better satisfy the user’s specific information needs by providing search results that are not only of relevance to the query but are also of particular relevance to the user who submitted the query. PIR has thereby attracted increasing research and commercial attention as information portals aim at achieving user loyalty by improving their performance in terms of effectiveness and user satisfaction. In order to provide a personalised service, a PIR system maintains information about the users and the history of their interactions with the system. This information is then used to adapt the users’ queries or the results so that information that is more relevant to the users is retrieved and presented. This survey paper features a critical review of PIR systems, with a focus on personalised search. The survey provides an insight into the stages involved in building and evaluating PIR systems, namely: information gathering, information representation, personalisation execution, and system evaluation. Moreover, the survey provides an analysis of PIR systems with respect to the scope of personalisation addressed. The survey proposes a classification of PIR systems into three scopes: individualised systems, community-based systems, and aggregate-level systems. 
Based on the conducted survey, the paper concludes by highlighting challenges and future research directions in the field of PIR.", "title": "" }, { "docid": "a3e730ef71a91e1303d4cd92407fed26", "text": "Purpose – This paper investigates the interplay among the configuration dimensions (network structure, network flow, relationship governance, and service architecture) of LastMile Supply Networks (LMSN) and the underlying mechanisms influencing omnichannel performance. Design/methodology/approach – Based on mixed-method design incorporating a multiple embedded case study, mapping, survey and archival records, this research involved undertaking in-depth withinand cross-case analyses to examine seven LMSNs, employing a configuration approach. Findings – The existing literature in the operations management (OM) field was shown to provide limited understanding of LMSNs within the emerging omnichannel context. Case results suggest that particular configurations have intrinsic capabilities, and that these directly influence omnichannel performance. The study further proposes a taxonomy of LMSNs comprising six forms, with two hybrids, supporting the notion of equifinality in configuration theory. Propositions are developed to further explore interdependencies between configurational attributes, refining the relationships between LMSN types and factors influencing LMSN performance. Practical implications – The findings provide retailers a set of design parameters for the (re)configuration of LMSNs and facilitate performance evaluation using the concept of fit between configurational attributes. The developed model sheds light on the consequential effects when certain configurational attributes are altered, providing design indications. Given the global trend in urbanization, improved LMSN performance would have positive societal impacts in terms of service and resource efficiency. Originality/value – This is one of the first studies in the OM field to critically analyze LMSNs and their behaviors in omnichannel. Additionally, the paper offers several important avenues for future research.", "title": "" }, { "docid": "ea3fd6ece19949b09fd2f5f2de57e519", "text": "Multiple myeloma is the second most common hematologic malignancy. The treatment of this disease has changed considerably over the last two decades with the introduction to the clinical practice of novel agents such as proteasome inhibitors and immunomodulatory drugs. Basic research efforts towards better understanding of normal and missing immune surveillence in myeloma have led to development of new strategies and therapies that require the engagement of the immune system. Many of these treatments are under clinical development and have already started providing encouraging results. We, for the second time in the last two decades, are about to witness another shift of the paradigm in the management of this ailment. This review will summarize the major approaches in myeloma immunotherapies.", "title": "" }, { "docid": "65ddfd636299f556117e53b5deb7c7e5", "text": "BACKGROUND\nMobile phone use is near ubiquitous in teenagers. Paralleling the rise in mobile phone use is an equally rapid decline in the amount of time teenagers are spending asleep at night. Prior research indicates that there might be a relationship between daytime sleepiness and nocturnal mobile phone use in teenagers in a variety of countries. 
As such, the aim of this study was to see if there was an association between mobile phone use, especially at night, and sleepiness in a group of U.S. teenagers.\n\n\nMETHODS\nA questionnaire containing an Epworth Sleepiness Scale (ESS) modified for use in teens and questions about qualitative and quantitative use of the mobile phone was completed by students attending Mountain View High School in Mountain View, California (n = 211).\n\n\nRESULTS\nMultivariate regression analysis indicated that ESS score was significantly associated with being female, feeling a need to be accessible by mobile phone all of the time, and a past attempt to reduce mobile phone use. The number of daily texts or phone calls was not directly associated with ESS. Those individuals who felt they needed to be accessible and those who had attempted to reduce mobile phone use were also ones who stayed up later to use the mobile phone and were awakened more often at night by the mobile phone.\n\n\nCONCLUSIONS\nThe relationship between daytime sleepiness and mobile phone use was not directly related to the volume of texting but may be related to the temporal pattern of mobile phone use.", "title": "" }, { "docid": "66df2a7148d67ffd3aac5fc91e09ee5d", "text": "Tree boosting, which combines weak learners (typically decision trees) to generate a strong learner, is a highly effective and widely used machine learning method. However, the development of a high performance tree boosting model is a time-consuming process that requires numerous trial-and-error experiments. To tackle this issue, we have developed a visual diagnosis tool, BOOSTVis, to help experts quickly analyze and diagnose the training process of tree boosting. In particular, we have designed a temporal confusion matrix visualization, and combined it with a t-SNE projection and a tree visualization. These visualization components work together to provide a comprehensive overview of a tree boosting model, and enable an effective diagnosis of an unsatisfactory training process. Two case studies that were conducted on the Otto Group Product Classification Challenge dataset demonstrate that BOOSTVis can provide informative feedback and guidance to improve understanding and diagnosis of tree boosting algorithms.", "title": "" }, { "docid": "97cb7718c75b266a086441912e4b22c3", "text": "Introduction Teacher education finds itself in a critical stage. The pressure towards more school-based programs which is visible in many countries is a sign that not only teachers, but also parents and politicians, are often dissatisfied with the traditional approaches in teacher education In some countries a major part of preservice teacher education has now become the responsibility of the schools, creating a situation in which to a large degree teacher education takes the form of 'training on the job'. The argument for this tendency is that traditional teacher education programs are said to fail in preparing prospective teachers for the realities of the classroom (Goodlad, 1990). Many teacher educators object that a professional teacher should acquire more than just practical tools for managing classroom situations and that it is their job to present student teachers with a broader view on education and to offer them a proper grounding in psychology, sociology, etcetera. This is what Clandinin (1995) calls \" the sacred theory-practice story \" : teacher education conceived as the translation of theory on good teaching into practice. 
However, many studies have shown that the transfer of theory to practice is meager or even non-existent. Zeichner and Tabachnick (1981), for example, showed that many notions and educational conceptions, developed during preservice teacher education, were \"washed out\" during field experiences. Comparable findings were reported by Cole and Knowles (1993) and Veenman (1984), who also points towards the severe problems teachers experience once they have left preservice teacher education. Lortie (1975) presented us with another early study into the socialization process of teachers, showing the dominant role of practice in shaping teacher development. At Konstanz University in Germany, research has been carried out into the phenomenon of the \"transition shock\" (Müller-Fohrbrodt et al. It showed that, during their induction in the profession, teachers encounter a huge gap between theory and practice. As a consequence, they pass through a quite distinct attitude shift during their first year of teaching, in general creating an adjustment to current practices in the schools and not to recent scientific insights into learning and teaching.", "title": "" }, { "docid": "a73917d842c18ed9c36a13fe9187ea4c", "text": "Brain Magnetic Resonance Image (MRI) plays a non-substitutive role in clinical diagnosis. The symptom of many diseases corresponds to the structural variants of brain. Automatic structure segmentation in brain MRI is of great importance in modern medical research. Some methods were developed for automatic segmenting of brain MRI but failed to achieve desired accuracy. In this paper, we proposed a new patch-based approach for automatic segmentation of brain MRI using convolutional neural network (CNN). Each brain MRI acquired from a small portion of public dataset is firstly divided into patches. All of these patches are then used for training CNN, which is used for automatic segmentation of brain MRI. Experimental results showed that our approach achieved better segmentation accuracy compared with other deep learning methods.", "title": "" }, { "docid": "ec1120018899c6c9fe16240b8e35efac", "text": "Redundant collagen deposition at sites of healing dermal wounds results in hypertrophic scars. Adipose-derived stem cells (ADSCs) exhibit promise in a variety of anti-fibrosis applications by attenuating collagen deposition. The objective of this study was to explore the influence of an intralesional injection of ADSCs on hypertrophic scar formation by using an established rabbit ear model. Twelve New Zealand albino rabbits were equally divided into three groups, and six identical punch defects were made on each ear. On postoperative day 14 when all wounds were completely re-epithelialized, the first group received an intralesional injection of ADSCs on their right ears and Dulbecco’s modified Eagle’s medium (DMEM) on their left ears as an internal control. Rabbits in the second group were injected with conditioned medium of the ADSCs (ADSCs-CM) on their right ears and DMEM on their left ears as an internal control. Right ears of the third group remained untreated, and left ears received DMEM. We quantified scar hypertrophy by measuring the scar elevation index (SEI) on postoperative days 14, 21, 28, and 35 with ultrasonography. Wounds were harvested 35 days later for histomorphometric and gene expression analysis. 
Intralesional injections of ADSCs or ADSCs-CM both led to scars with a far more normal appearance and significantly decreased SEI (44.04 % and 32.48 %, respectively, both P <0.01) in the rabbit ears compared with their internal controls. Furthermore, we confirmed that collagen was organized more regularly and that there was a decreased expression of alpha-smooth muscle actin (α-SMA) and collagen type Ι in the ADSC- and ADSCs-CM-injected scars according to histomorphometric and real-time quantitative polymerase chain reaction analysis. There was no difference between DMEM-injected and untreated scars. An intralesional injection of ADSCs reduces the formation of rabbit ear hypertrophic scars by decreasing the α-SMA and collagen type Ι gene expression and ameliorating collagen deposition and this may result in an effective and innovative anti-scarring therapy.", "title": "" }, { "docid": "8a0cc5438a082ed9afd28ad8ed272034", "text": "Researchers analyzed 23 blockchain implementation projects, each tracked for design decisions and architectural alignment showing benefits, detriments, or no effects from blockchain use. The results provide the basis for a framework that lets engineers, architects, investors, and project leaders evaluate blockchain technology’s suitability for a given application. This analysis also led to an understanding of why some domains are inherently problematic for blockchains. Blockchains can be used to solve some trust-based problems but aren’t always the best or optimal technology. Some problems that can be solved using them can also be solved using simpler methods that don’t necessitate as big an investment.", "title": "" }, { "docid": "eea86b8c7d332edb903c213c5df89a53", "text": "We introduce the syntactic scaffold, an approach to incorporating syntactic information into semantic tasks. Syntactic scaffolds avoid expensive syntactic processing at runtime, only making use of a treebank during training, through a multitask objective. We improve over strong baselines on PropBank semantics, frame semantics, and coreference resolution, achieving competitive performance on all three tasks.", "title": "" }, { "docid": "0a1f6c27cd13735858e7a6686fc5c2c9", "text": "We address the problem of learning hierarchical deep neural network policies for reinforcement learning. In contrast to methods that explicitly restrict or cripple lower layers of a hierarchy to force them to use higher-level modulating signals, each layer in our framework is trained to directly solve the task, but acquires a range of diverse strategies via a maximum entropy reinforcement learning objective. Each layer is also augmented with latent random variables, which are sampled from a prior distribution during the training of that layer. The maximum entropy objective causes these latent variables to be incorporated into the layer’s policy, and the higher level layer can directly control the behavior of the lower layer through this latent space. Furthermore, by constraining the mapping from latent variables to actions to be invertible, higher layers retain full expressivity: neither the higher layers nor the lower layers are constrained in their behavior. 
Our experimental evaluation demonstrates that we can improve on the performance of single-layer policies on standard benchmark tasks simply by adding additional layers, and that our method can solve more complex sparse-reward tasks by learning higher-level policies on top of high-entropy skills optimized for simple low-level objectives.", "title": "" }, { "docid": "fd4cd4edfd9fa8fe463643f02b90b21a", "text": "We propose a generic method for iteratively approximating various second-order gradient steps-Newton, Gauss-Newton, Levenberg-Marquardt, and natural gradient-in linear time per iteration, using special curvature matrix-vector products that can be computed in O(n). Two recent acceleration techniques for on-line learning, matrix momentum and stochastic meta-descent (SMD), implement this approach. Since both were originally derived by very different routes, this offers fresh insight into their operation, resulting in further improvements to SMD.", "title": "" }, { "docid": "5a011a87ce3f37dc6b944d2686fa2f73", "text": "Agents are self-contained objects within a software model that are capable of autonomously interacting with the environment and with other agents. Basing a model around agents (building an agent-based model, or ABM) allows the user to build complex models from the bottom up by specifying agent behaviors and the environment within which they operate. This is often a more natural perspective than the system-level perspective required of other modeling paradigms, and it allows greater flexibility to use agents in novel applications. This flexibility makes them ideal as virtual laboratories and testbeds, particularly in the social sciences where direct experimentation may be infeasible or unethical. ABMs have been applied successfully in a broad variety of areas, including heuristic search methods, social science models, combat modeling, and supply chains. This tutorial provides an introduction to tools and resources for prospective modelers, and illustrates ABM flexibility with a basic war-gaming example.", "title": "" }, { "docid": "39838881287fd15b29c20f18b7e1d1eb", "text": "In the software industry, a challenge firms often face is how to effectively commercialize innovations. An emerging business model increasingly embraced by entrepreneurs, called freemium, combines “free” and “premium” consumption in association with a product or service. In a nutshell, this model involves giving away for free a certain level or type of consumption while making money on premium consumption. We develop a unifying multi-period microeconomic framework with network externalities embedded into consumer learning in order to capture the essence of conventional for-fee models, several key freemium business models such as feature-limited or time-limited, and uniform market seeding models. Under moderate informativeness of word-of-mouth signals, we fully characterize conditions under which firms prefer freemium models, depending on consumer priors on the value of individual software modules, perceptions of crossmodule synergies, and overall value distribution across modules. Within our framework, we show that uniform seeding is always dominated by either freemium models or conventional for-fee models. We further discuss managerial and policy implications based on our analysis. 
Interestingly, we show that freemium, in one form or another, is always preferred from the social welfare perspective, and we provide guidance on when the firms need to be incentivized to align their interests with the society’s. Finally, we discuss how relaxing some of the assumptions of our model regarding costs or informativeness and heterogeneity of word of mouth may reduce the profit gap between seeding and the other models, and potentially lead to seeding becoming the preferred approach for the firm.", "title": "" }, { "docid": "81f82ecbc43653566319c7e04f098aeb", "text": "Social microblogs such as Twitter and Weibo are experiencing an explosive growth with billions of global users sharing their daily observations and thoughts. Beyond public interests (e.g., sports, music), microblogs can provide highly detailed information for those interested in public health, homeland security, and financial analysis. However, the language used in Twitter is heavily informal, ungrammatical, and dynamic. Existing data mining algorithms require extensive manually labeling to build and maintain a supervised system. This paper presents STED, a semi-supervised system that helps users to automatically detect and interactively visualize events of a targeted type from twitter, such as crimes, civil unrests, and disease outbreaks. Our model first applies transfer learning and label propagation to automatically generate labeled data, then learns a customized text classifier based on mini-clustering, and finally applies fast spatial scan statistics to estimate the locations of events. We demonstrate STED’s usage and benefits using twitter data collected from Latin America countries, and show how our system helps to detect and track example events such as civil unrests and crimes.", "title": "" }, { "docid": "fcd0c523e74717c572c288a90c588259", "text": "From analyzing 100 assessments of coping, the authors critiqued strategies and identified best practices for constructing category systems. From current systems, a list of 400 ways of coping was compiled. For constructing lower order categories, the authors concluded that confirmatory factor analysis should replace the 2 most common strategies (exploratory factor analysis and rational sorting). For higher order categories, they recommend that the 3 most common distinctions (problem- vs. emotion-focused, approach vs. avoidance, and cognitive vs. behavioral) no longer be used. Instead, the authors recommend hierarchical systems of action types (e.g., proximity seeking, accommodation). From analysis of 6 such systems, 13 potential core families of coping were identified. Future steps involve deciding how to organize these families, using their functional homogeneity and distinctiveness, and especially their links to adaptive processes.", "title": "" }, { "docid": "387e9609e2fe3c6893b8ce0a1613f98a", "text": "Many fault-tolerant and intrusion-tolerant systems require the ability to execute unsafe programs in a realistic environment without leaving permanent damages. Virtual machine technology meets this requirement perfectly because it provides an execution environment that is both realistic and isolated. In this paper, we introduce an OS level virtual machine architecture for Windows applications called Feather-weight Virtual Machine (FVM), under which virtual machines share as many resources of the host machine as possible while still isolated from one another and from the host machine. 
The key technique behind FVM is namespace virtualization, which isolates virtual machines by renaming resources at the OS system call interface. Through a copy-on-write scheme, FVM allows multiple virtual machines to physically share resources but logically isolate their resources from each other. A main technical challenge in FVM is how to achieve strong isolation among different virtual machines and the host machine, due to numerous namespaces and interprocess communication mechanisms on Windows. Experimental results demonstrate that FVM is more flexible and scalable, requires less system resource, incurs lower start-up and run-time performance overhead than existing hardware-level virtual machine technologies, and thus makes a compelling building block for security and fault-tolerant applications.", "title": "" } ]
scidocsrr
0ea7a1202a3a2df640f7dbf9a0451d2d
Exploitation and exploration in a performance based contextual advertising system
[ { "docid": "341b0588f323d199275e89d8c33d6b47", "text": "We propose novel multi-armed bandit (explore/exploit) schemes to maximize total clicks on a content module published regularly on Yahoo! Intuitively, one can ``explore'' each candidate item by displaying it to a small fraction of user visits to estimate the item's click-through rate (CTR), and then ``exploit'' high CTR items in order to maximize clicks. While bandit methods that seek to find the optimal trade-off between explore and exploit have been studied for decades, existing solutions are not satisfactory for web content publishing applications where dynamic set of items with short lifetimes, delayed feedback and non-stationary reward (CTR) distributions are typical. In this paper, we develop a Bayesian solution and extend several existing schemes to our setting. Through extensive evaluation with nine bandit schemes, we show that our Bayesian solution is uniformly better in several scenarios. We also study the empirical characteristics of our schemes and provide useful insights on the strengths and weaknesses of each. Finally, we validate our results with a ``side-by-side'' comparison of schemes through live experiments conducted on a random sample of real user visits to Yahoo!", "title": "" }, { "docid": "cce513c48e630ab3f072f334d00b67dc", "text": "We consider two algorithms for on-line prediction based on a linear model. The algorithms are the well-known gradient descent (GD) algorithm and a new algorithm, which we call EG. They both maintain a weight vector using simple updates. For the GD algorithm, the update is based on subtracting the gradient of the squared error made on a prediction. The EG algorithm uses the components of the gradient in the exponents of factors that are used in updating the weight vector multiplicatively. We present worst-case loss bounds for EG and compare them to previously known bounds for the GD algorithm. The bounds suggest that the losses of the algorithms are in general incomparable, but EG has a much smaller loss if only few components of the input are relevant for the predictions. We have performed experiments which show that our worst-case upper bounds are quite tight already on simple artificial data. ] 1997 Academic Press", "title": "" } ]
[ { "docid": "7d08501a0123d773f9fe755f1612e57e", "text": "Language-music comparative studies have highlighted the potential for shared resources or neural overlap in auditory short-term memory. However, there is a lack of behavioral methodologies for comparing verbal and musical serial recall. We developed a visual grid response that allowed both musicians and nonmusicians to perform serial recall of letter and tone sequences. The new method was used to compare the phonological similarity effect with the impact of an operationalized musical equivalent-pitch proximity. Over the course of three experiments, we found that short-term memory for tones had several similarities to verbal memory, including limited capacity and a significant effect of pitch proximity in nonmusicians. Despite being vulnerable to phonological similarity when recalling letters, however, musicians showed no effect of pitch proximity, a result that we suggest might reflect strategy differences. Overall, the findings support a limited degree of correspondence in the way that verbal and musical sounds are processed in auditory short-term memory.", "title": "" }, { "docid": "5b3ca1cc607d2e8f0394371f30d9e83a", "text": "We present a machine learning algorithm that takes as input a 2D RGB image and synthesizes a 4D RGBD light field (color and depth of the scene in each ray direction). For training, we introduce the largest public light field dataset, consisting of over 3300 plenoptic camera light fields of scenes containing flowers and plants. Our synthesis pipeline consists of a convolutional neural network (CNN) that estimates scene geometry, a stage that renders a Lambertian light field using that geometry, and a second CNN that predicts occluded rays and non-Lambertian effects. Our algorithm builds on recent view synthesis methods, but is unique in predicting RGBD for each light field ray and improving unsupervised single image depth estimation by enforcing consistency of ray depths that should intersect the same scene point.", "title": "" }, { "docid": "d81cadc01ab599fd34d2ccfa8377de51", "text": "1. The Situation in Cognition The situated cognition movement in the cognitive sciences, like those sciences themselves, is a loose-knit family of approaches to understanding the mind and cognition. While it has both philosophical and psychological antecedents in thought stretching back over the last century (see Gallagher, this volume, Clancey, this volume,), it has developed primarily since the late 1970s as an alternative to, or a modification of, the then predominant paradigms for exploring the mind within the cognitive sciences. For this reason it has been common to characterize situated cognition in terms of what it is not, a cluster of \"anti-isms\". Situated cognition has thus been described as opposed to Platonism, Cartesianism, individualism, representationalism, and even", "title": "" }, { "docid": "aeba4012971d339a9a953a7b86f57eb8", "text": "Bridging the ‘reality gap’ that separates simulated robotics from experiments on hardware could accelerate robotic research through improved data availability. This paper explores domain randomization, a simple technique for training models on simulated images that transfer to real images by randomizing rendering in the simulator. With enough variability in the simulator, the real world may appear to the model as just another variation. We focus on the task of object localization, which is a stepping stone to general robotic manipulation skills. 
We find that it is possible to train a real-world object detector that is accurate to 1.5 cm and robust to distractors and partial occlusions using only data from a simulator with non-realistic random textures. To demonstrate the capabilities of our detectors, we show they can be used to perform grasping in a cluttered environment. To our knowledge, this is the first successful transfer of a deep neural network trained only on simulated RGB images (without pre-training on real images) to the real world for the purpose of robotic control.", "title": "" }, { "docid": "f4b270b09649ba05dd22d681a2e3e3b7", "text": "Advanced analytical techniques are gaining popularity in addressing complex classification type decision problems in many fields including healthcare and medicine. In this exemplary study, using digitized signal data, we developed predictive models employing three machine learning methods to diagnose an asthma patient based solely on the sounds acquired from the chest of the patient in a clinical laboratory. Although, the performances varied slightly, ensemble models (i.e., Random Forest and AdaBoost combined with Random Forest) achieved about 90% accuracy on predicting asthma patients, compared to artificial neural networks models that achieved about 80% predictive accuracy. Our results show that noninvasive, computerized lung sound analysis that rely on low-cost microphones and an embedded real-time microprocessor system would help physicians to make faster and better diagnostic decisions, especially in situations where x-ray and CT-scans are not reachable or not available. This study is a testament to the improving capabilities of analytic techniques in support of better decision making, especially in situations constraint by limited resources.", "title": "" }, { "docid": "5eb65797b9b5e90d5aa3968d5274ae72", "text": "Blockchains enable tamper-proof, ordered logging for transactional data in a decentralized manner over open-access, overlay peer-to-peer networks. In this paper, we propose a decentralized framework of proactive caching in a hierarchical wireless network based on blockchains. We employ the blockchain-based smart contracts to construct an autonomous content caching market. In the market, the cache helpers are able to autonomously adapt their caching strategies according to the market statistics obtained from the blockchain, and the truthfulness of trustless nodes are financially enforced by smart contract terms. Further, we propose an incentive-compatible consensus mechanism based on proof-of-stake to financially encourage the cache helpers to stay active in service. We model the interaction between the cache helpers and the content providers as a Chinese restaurant game. Based on the theoretical analysis regarding the Nash equilibrium of the game, we propose a decentralized strategy-searching algorithm using sequential best response. The simulation results demonstrate both the efficiency and reliability of the proposed equilibrium searching algorithm.", "title": "" }, { "docid": "e4914b41b7d38ff04b0e5a9b88cf1dc6", "text": "In this paper, we investigate the secure nearest neighbor (SNN) problem, in which a client issues an encrypted query point E(q) to a cloud service provider and asks for an encrypted data point in E(D) (the encrypted database) that is closest to the query point, without allowing the server to learn the plaintexts of the data or the query (and its result). 
We show that efficient attacks exist for existing SNN methods [21], [15], even though they were claimed to be secure in standard security models (such as indistinguishability under chosen plaintext or ciphertext attacks). We also establish a relationship between the SNN problem and the order-preserving encryption (OPE) problem from the cryptography field [6], [5], and we show that SNN is at least as hard as OPE. Since it is impossible to construct secure OPE schemes in standard security models [6], [5], our results imply that one cannot expect to find the exact (encrypted) nearest neighbor based on only E(q) and E(D). Given this hardness result, we design new SNN methods by asking the server, given only E(q) and E(D), to return a relevant (encrypted) partition E(G) from E(D) (i.e., G ⊆ D), such that that E(G) is guaranteed to contain the answer for the SNN query. Our methods provide customizable tradeoff between efficiency and communication cost, and they are as secure as the encryption scheme E used to encrypt the query and the database, where E can be any well-established encryption schemes.", "title": "" }, { "docid": "4a7a4db8497b0d13c8411100dab1b207", "text": "A novel and simple resolver-to-dc converter is presented. It is shown that by appropriate processing of the sine and cosine resolver signals, the proposed converter may produce an output voltage proportional to the shaft angle. A dedicated compensation method is applied to produce an almost perfectly linear output. This enables determination of the angle with reasonable accuracy without a processor and/or a look-up table. The tests carried out under various operating conditions are satisfactory and in good agreement with theory. This paper gives the theoretical analysis, the computer simulation, the full circuit details, and experimental results of the proposed scheme.", "title": "" }, { "docid": "f9b99ad1fcf9963cca29e7ddfca20428", "text": "Nested Named Entities (nested NEs), one containing another, are commonly seen in biomedical text, e.g., accounting for 16.7% of all named entities in GENIA corpus. While many works have been done in recognizing non-nested NEs, nested NEs have been largely neglected. In this work, we treat the task as a binary classification problem and solve it using Support Vector Machines. For each token in nested NEs, we use two schemes to set its class label: labeling as the outmost entity or the inner entity. Our preliminary results show that while the outmost labeling tends to work better in recognizing the outmost entities, the inner labeling recognizes the inner NEs better. This result should be useful for recognition of nested NEs.", "title": "" }, { "docid": "90125582272e3f16a34d5d0c885f573a", "text": "RNAs have been shown to undergo transfer between mammalian cells, although the mechanism behind this phenomenon and its overall importance to cell physiology is not well understood. Numerous publications have suggested that RNAs (microRNAs and incomplete mRNAs) undergo transfer via extracellular vesicles (e.g., exosomes). However, in contrast to a diffusion-based transfer mechanism, we find that full-length mRNAs undergo direct cell-cell transfer via cytoplasmic extensions characteristic of membrane nanotubes (mNTs), which connect donor and acceptor cells. By employing a simple coculture experimental model and using single-molecule imaging, we provide quantitative data showing that mRNAs are transferred between cells in contact. 
Examples of mRNAs that undergo transfer include those encoding GFP, mouse β-actin, and human Cyclin D1, BRCA1, MT2A, and HER2. We show that intercellular mRNA transfer occurs in all coculture models tested (e.g., between primary cells, immortalized cells, and in cocultures of immortalized human and murine cells). Rapid mRNA transfer is dependent upon actin but is independent of de novo protein synthesis and is modulated by stress conditions and gene-expression levels. Hence, this work supports the hypothesis that full-length mRNAs undergo transfer between cells through a refined structural connection. Importantly, unlike the transfer of miRNA or RNA fragments, this process of communication transfers genetic information that could potentially alter the acceptor cell proteome. This phenomenon may prove important for the proper development and functioning of tissues as well as for host-parasite or symbiotic interactions.", "title": "" }, { "docid": "a4ddf6920fa7a5c09fa0f62f9b96a2e3", "text": "In this paper, a class of single-phase Z-source (ZS) ac–ac converters is proposed with high-frequency transformer (HFT) isolation. The proposed HFT isolated (HFTI) ZS ac–ac converters possess all the features of their nonisolated counterparts, such as providing wide range of buck-boost output voltage with reversing or maintaining the phase angle, suppressing the in-rush and harmonic currents, and improved reliability. In addition, the proposed converters incorporate HFT for electrical isolation and safety, and therefore can save an external bulky line frequency transformer, for applications such as dynamic voltage restorers, etc. The proposed HFTI ZS converters are obtained from conventional (nonisolated) ZS ac–ac converters by adding only one extra bidirectional switch, and replacing two inductors with an HFT, thus saving one magnetic core. The switching signals for buck and boost modes are presented with safe-commutation strategy to remove the switch voltage spikes. A quasi-ZS-based HFTI ac–ac is used to discuss the operation principle and circuit analysis of the proposed class of HFTI ZS ac–ac converters. Various ZS-based HFTI proposed ac–ac converters are also presented thereafter. Moreover, a laboratory prototype of the proposed converter is constructed and experiments are conducted to produce output voltage of 110 Vrms / 60 Hz, which verify the operation of the proposed converters.", "title": "" }, { "docid": "7e6573b3e080481949a2b45eb6c68a42", "text": "We study the problem of minimizing the sum of a smooth convex function and a convex blockseparable regularizer and propose a new randomized coordinate descent method, which we call ALPHA. Our method at every iteration updates a random subset of coordinates, following an arbitrary distribution. No coordinate descent methods capable to handle an arbitrary sampling have been studied in the literature before for this problem. ALPHA is a remarkably flexible algorithm: in special cases, it reduces to deterministic and randomized methods such as gradient descent, coordinate descent, parallel coordinate descent and distributed coordinate descent – both in nonaccelerated and accelerated variants. The variants with arbitrary (or importance) sampling are new. 
We provide a complexity analysis of ALPHA, from which we deduce as a direct corollary complexity bounds for its many variants, all matching or improving best known bounds.", "title": "" }, { "docid": "d68bf9cd549c6d3fe067f343bd38c439", "text": "Most multiobjective evolutionary algorithms are based on Pareto dominance for measuring the quality of solutions during their search, among them NSGA-II is well-known. A very few algorithms are based on decomposition and implicitly or explicitly try to optimize aggregations of the objectives. MOEA/D is a very recent such an algorithm. One of the major advantages of MOEA/D is that it is very easy to use well-developed single optimization local search within it. This paper compares the performance of MOEA/D and NSGA-II on the multiobjective travelling salesman problem and studies the effect of local search on the performance of MOEA/D.", "title": "" }, { "docid": "5190176eb4e743b8ac356fa97c06aa7c", "text": "This paper presents a flexible control technique of active and reactive power for single phase grid-tied photovoltaic inverter, supplied from PV array, based on quarter cycle phase delay methodology to generate the fictitious quadrature signal in order to emulate the PQ theory of three-phase systems. The investigated scheme is characterized by independent control of active and reactive power owing to the independent PQ reference signals that can satisfy the features and new functions of modern grid-tied inverters fed from renewable energy resources. The study is conducted on 10 kW PV array using PSIM program. The obtained results demonstrate the high capability to provide quick and accurate control of the injected active and reactive power to the main grid. The harmonic spectra of power components and the resultant grid current indicate that the single-phase PQ control scheme guarantees and satisfies the power quality requirements and constrains, which permits application of such scheme on a wide scale integrated with other PV inverters where independent PQ reference signals would be generated locally by energy management unit in case of microgrid, or from remote data center in case of smart grid.", "title": "" }, { "docid": "8c9155ce72bc3ba11bd4680d46ad69b5", "text": "Many theorists assume that the cognitive system is composed of a collection of encapsulated processing components or modules, each dedicated to performing a particular cognitive function. On this view, selective impairments of cognitive tasks following brain damage, as evidenced by double dissociations, are naturally interpreted in terms of the loss of particular processing components. By contrast, the current investigation examines in detail a double dissociation between concrete and abstract work reading after damage to a connectionist network that pronounces words via meaning and yet has no separable components (Plaut & Shallice, 1993). The functional specialization in the network that gives rise to the double dissociation is not transparently related to the network's structure, as modular theories assume. Furthermore, a consideration of the distribution of effects across quantitatively equivalent individual lesions in the network raises specific concerns about the interpretation of single-case studies. 
The findings underscore the necessity of relating neuropsychological data to cognitive theories in the context of specific computational assumptions about how the cognitive system operates normally and after damage.", "title": "" }, { "docid": "aafaffb28d171e2cddadbd9b65539e21", "text": "LCD column drivers have traditionally used nonlinear R-string style digital-to-analog converters (DAC). This paper describes an architecture that uses 840 linear charge redistribution 10/12-bit DACs to implement a 420-output column driver. Each DAC performs its conversion in less than 15 /spl mu/s and draws less than 5 /spl mu/A. This architecture allows 10-bit independent color control in a 17 mm/sup 2/ die for the LCD television market.", "title": "" }, { "docid": "480c066863a97bde11b0acc32b427f4e", "text": "When computer security incidents occur, it's critical that organizations be able to handle them in a timely manner. The speed with which an organization can recognize, analyze, and respond to an incident will affect the damage and lower recovery costs. Organized incident management requires defined, repeatable processes and the ability to learn from incidents that threaten the confidentiality, availability, and integrity of critical systems and data. Some organizations assign responsibility for incident management to a defined group of people or a designated unit, such as a computer security incident response team. This article looks at the development, purpose, and evolution of such specialized teams; the evolving nature of attacks they must deal with; and methods to evaluate the performance of such teams as well as the emergence of information sharing as a core service.", "title": "" }, { "docid": "026a49cd48c7100b5b9f8f7197e71a1f", "text": "In-wheel motors have tremendous potential to create an advanced all-wheel drive system. In this paper, a novel power assisted steering technology and its torque distribution control system were proposed, due to the independent driving characteristics of four-wheel-independent-drive electric vehicle. The first part of this study deals with the full description of the basic theory of differential drive assisted steering system. After that, 4-wheel-drive (4WD) electric vehicle dynamics model as well as driver model were built. Furthermore, the differential drive assisted steering control system, as well as the drive torque distribution and compensation control system, was also presented. Therein, the proportional–integral (PI) feedback control loop was employed to track the reference steering effort by controlling the drive torque distribution between the two sides wheels of the front axle. After that, the direct yaw moment control subsystem and the traction control subsystem were introduced, which were both employed to make the differential drive assisted steering work as well as wished. Finally, the open-loop and closed-loop simulation for validation were performed. The results verified that, the proposed differential drive torque assisted steering system cannot only reduce the steering efforts significantly, as well as ensure a stiffer steering feel at high vehicle speed and improve the returnability of the vehicle, but also keep the lateral stability of the vehicle. 2010 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "b05a72a6fa5e381b341ba8c9107a690c", "text": "Acknowledgments are widely used in scientific articles to express gratitude and credit collaborators. 
Despite suggestions that indexing acknowledgments automatically will give interesting insights, there is currently, to the best of our knowledge, no such system to track acknowledgments and index them. In this paper we introduce AckSeer, a search engine and a repository for automatically extracted acknowledgments in digital libraries. AckSeer is a fully automated system that scans items in digital libraries including conference papers, journals, and books extracting acknowledgment sections and identifying acknowledged entities mentioned within. We describe the architecture of AckSeer and discuss the extraction algorithms that achieve a F1 measure above 83%. We use multiple Named Entity Recognition (NER) tools and propose a method for merging the outcome from different recognizers. The resulting entities are stored in a database then made searchable by adding them to the AckSeer index along with the metadata of the containing paper/book.\n We build AckSeer on top of the documents in CiteSeerx digital library yielding more than 500,000 acknowledgments and more than 4 million mentioned entities.", "title": "" }, { "docid": "2b0969dd0089bd2a2054957477ea4ce1", "text": "A self-signaling action is an action chosen partly to secure good news about one’s traits or abilities, even when the action has no causal impact on these traits and abilities. We discuss some of the odd things that happen when self-signaling is introduced into an otherwise rational conception of action. We employ a signaling game perspective in which the diagnostic signals are an endogenous part of the equilibrium choice. We are interested (1) in pure self-signaling, separate from any desire to be regarded well by others, and (2) purely diagnostic motivation, that is, caring about what an action might reveal about a trait even when that action has no causal impact on it. When diagnostic motivation is strong, the person’s actions exhibit a rigidity characteristic of personal rules. Our model also predicts that a boost in self-image positively affects actions even though it leaves true preferences unchanged — we call this a “moral placebo effect.” 1 The chapter draws on (co-authored) Chapter 3 of Bodner’s doctoral dissertation (Bodner, 1995) and an unpublished MIT working paper (Bodner and Prelec, 1997). The authors thank Bodner’s dissertation advisors France Leclerc and Richard Thaler, workshop discussants Thomas Schelling, Russell Winer, and Mathias Dewatripont, and George Ainslie, Michael Bratman, Juan Carillo, Itzakh Gilboa, George Loewenstein, Al Mela, Matthew Rabin, Duncan Simester and Florian Zettelmeyer for comments on these ideas (with the usual disclaimer). We are grateful to Birger Wernerfelt for drawing attention to Bernheim's work on social conformity. Author addresses: Bodner – Director, Learning Innovations, 13\\4 Shimshon St., Jerusalem, 93501, Israel, [email protected]; Prelec — E56-320, MIT, Sloan School, 38 Memorial Drive, Cambridge, MA 02139, [email protected]. 1 Psychological evidence When we make a choice we reveal something of our inner traits or dispositions, not only to others, but also to ourselves. After the fact, this can be a source of pleasure or pain, depending on whether we were impressed or disappointed by our actions. Before the fact, the anticipation of future pride or remorse can influence what we choose to do. 
In a previous paper (Bodner and Prelec, 1997), we described how the model of a utility maximizing individual could be expanded to include diagnostic utility as a separate motive for action. We review the basic elements of that proposal here. The inspiration comes directly from signaling games in which actions of one person provide an informative signal to others, which in turn affects esteem (Bernheim, 1994). Here, however, actions provide a signal to ourselves, that is, actions are selfsignaling. For example, a person who takes the daily jog in spite of the rain may see that as a gratifying signal of willpower, dedication, or future well being. For someone uncertain about where he or she stands with respect to these dispositions, each new choice can provide a bit of good or bad \"news.” We incorporate the value of such \"news\" into the person's utility function. The notion that a person may draw inferences from an action he enacted partially in order to gain that inference has been posed as a philosophical paradox (e.g. Campbell and Sawden, 1985; Elster, 1985, 1989). A key problem is the following: Suppose that the disposition in question is altruism, and a person interprets a 25¢ donation to a panhandler as evidence of altruism. If the boost in self-esteem makes it worth giving the quarter even when there is no concern for the poor, than clearly, such a donation is not valid evidence of altruism. Logically, giving is valid evidence of high altruism only if a person with low altruism would not have given the quarter. This reasoning motivates our equilibrium approach, in which inferences from actions are an endogenous part of the equilibrium choice. As an empirical matter several studies have demonstrated that diagnostic considerations do indeed affect behavior (Quattrone and Tversky, 1984; Shafir and Tversky, 1992; Bodner, 1995). An elegant experiment by Quattrone and Tversky (1984) both defines the self-signaling phenomenon and demonstrates its existence. Quattrone and Tversky first asked each subject to take a cold pressor pain test in which the subject's arm is submerged in a container of cold water until the subject can no longer tolerate the pain. Subsequently the subject was told that recent medical studies had discovered a certain inborn heart condition, and that people with this condition are “frequently ill, prone to heart-disease, and have shorter-than-average life expectancy.” Subjects were also told that this type could be identified by the effect of exercise on the cold pressor test. Subjects were randomly assigned to one of two conditions in which they were told that the bad type of heart was associated with either increases or with decreases in tolerance to the cold water after exercise. Subjects then repeated the cold pressor test, after riding an Exercycle for one minute. As predicted, the vast majority of subjects showed changes in tolerance on the second cold pressor trial in the direction correlated of “good news”—if told that decreased tolerance is diagnostic of a bad heart they endured the near-freezing water longer (and vice versa). The result shows that people are willing to bear painful consequences for a behavior that is a signal, though not a cause, of a medical diagnosis. An experiment by Shafir and Tversky (1992) on \"Newcomb's paradox\" reinforces the same point. In the philosophical version of the paradox, a person is (hypothetically) presented with two boxes, A and B. 
Box A contains either nothing or some large amount of money deposited by an \"omniscient being.\" Box B contains a small amount of money for sure. The decision-maker doesn’t know what Box A contains choice, and has to choose whether to take the contents of that box (A) or of both boxes (A+B). What makes the problem a paradox is that the person is asked to believe that the omniscient being has already predicted her choice, and on that basis has already either \"punished\" a greedy choice of (A+B) with no deposit in A or \"rewarded\" a choice of (A) with a large deposit. The dominance principle argues in favor of choosing both boxes, because the deposits are fixed at the moment of choice. This is the philosophical statement of the problem. In the actual experiment, Shafir and Tversky presented a variant of Newcomb’s problem at the end of another, longer experiment, in which subjects repeatedly played a Prisoner’s Dilemma game against (virtual) opponents via computer terminals. After finishing these games, a final “bonus” problem appeared, with the two Newcomb boxes, and subjects had to choose whether to take money from one box or from both boxes. The experimental cover story did not mention an omniscient being but instead informed the subjects that \"a program developed at MIT recently was applied during the entire session [of Prisoner’s Dilemma choices] to analyze the pattern of your preference.” Ostensibly, this mighty program could predict choices, one or two boxes, with 85% accuracy, and, of course, if the program predicted a choice of both boxes it would then put nothing in Box A. Although it was evident that the money amounts were already set at the moment of choice, most experimental subjects opted for the single box. It is “as if” they believed that by declining to take the money in Box B, they could change the amount of money already deposited in box A. Although these are relatively recent experiments, their results are consistent with a long stream of psychological research, going back at least to the James-Lange theory of emotions which claimed that people infer their own states from behavior (e.g., they feel afraid if they see themselves running). The notion that people adopt the perspective of an outside observer when interpreting their own actions has been extensively explored in the research on self-perception (Bem, 1972). In a similar vein, there is an extensive literature confirming the existence of “self-handicapping” strategies, where a person might get too little sleep or under-prepare for an examination. In such a case, a successful performance could be attributed to ability while unsuccessful performance could be externalized as due to the lack of proper preparation (e.g. Berglas and Jones, 1978; Berglas and Baumeister, 1993). This broader context of psychological research suggests that we should view the results of Quattrone and Tversky, and Shafir and Tversky not as mere curiosities, applying to only contrived experimental situations, but instead as evidence of a general motivational “short circuit.” Motivation does not require causality, even when the lack of causality is utterly transparent. If anything, these experiments probably underestimate the impact of diagnosticity in realistic decisions, where the absence of causal links between actions and dispositions is less evident. 
Formally, our model distinguishes between outcome utility — the utility of the anticipated causal consequences of choice — and diagnostic utility — the value of the adjusted estimate of one’s disposition, adjusted in light of the choice. Individuals act so as to maximize some combination of the two sources of utility, and (in one version of the model) make correct inferences about what their choices imply about their dispositions. When diagnostic utility is sufficiently important, the individual chooses the same action independent of disposition. We interpret this as a personal rule. We describe other ways in which the behavior of self-signaling individuals is qualitatively different from that of standard economic agents. First, a self-signaling person will be more likely to reveal discrepancies between resolutions and actions when resolutions pertain to actions that are contingent or delayed. Thus she might honestly commit to do some worthy action if the circumstances requiring t", "title": "" } ]
scidocsrr
ae915c34345204fff23600f7737930a7
Treatment planning of the edentulous mandible
[ { "docid": "0ad4432a79ea6b3eefbe940adf55ff7b", "text": "This study reviews the long-term outcome of prostheses and fixtures (implants) in 759 totally edentulous jaws of 700 patients. A total of 4,636 standard fixtures were placed and followed according to the osseointegration method for a maximum of 24 years by the original team at the University of Göteborg. Standardized annual clinical and radiographic examinations were conducted as far as possible. A lifetable approach was applied for statistical analysis. Sufficient numbers of fixtures and prostheses for a detailed statistical analysis were present for observation times up to 15 years. More than 95% of maxillae had continuous prosthesis stability at 5 and 10 years, and at least 92% at 15 years. The figure for mandibles was 99% at all time intervals. Calculated from the time of fixture placement, the estimated survival rates for individual fixtures in the maxilla were 84%, 89%, and 92% at 5 years; 81% and 82% at 10 years; and 78% at 15 years. In the mandible they were 91%, 98%, and 99% at 5 years; 89% and 98% at 10 years; and 86% at 15 years. (The different percentages at 5 and 10 years refer to results for different routine groups of fixtures with 5 to 10, 10 to 15, and 1 to 5 years of observation time, respectively.) The results of this study concur with multicenter and earlier results for the osseointegration method.", "title": "" } ]
[ { "docid": "525f9a7321a7b45111a19f458c9b976a", "text": "This paper provides a literature review on Adaptive Line Enhancer (ALE) methods based on adaptive noise cancellation systems. Such methods have been used in various applications, including communication systems, biomedical engineering, and industrial applications. Developments in ALE in noise cancellation are reviewed, including the principles, adaptive algorithms, and recent modifications on the filter design proposed to increase the convergence rate and reduce the computational complexity for future implementation. The advantages and drawbacks of various adaptive algorithms, such as the Least Mean Square, Recursive Least Square, Affine Projection Algorithm, and their variants, are discussed in this review. Design modifications of filter structures used in ALE are also evaluated. Such filters include Finite Impulse Response, Infinite Impulse Response, lattice, and nonlinear adaptive filters. These structural modifications aim to achieve better adaptive filter performance in ALE systems. Finally, a perspective of future research on ALE systems is presented for further consideration.", "title": "" }, { "docid": "188df015d60168b57f37e39089f3b14e", "text": "Implementation of a nutrition programme for team sports involves application of scientific research together with the social skills necessary to work with a sports medicine and coaching staff. Both field and court team sports are characterized by intermittent activity requiring a heavy reliance on dietary carbohydrate sources to maintain and replenish glycogen. Energy and substrate demands are high during pre-season training and matches, and moderate during training in the competitive season. Dietary planning must include enough carbohydrate on a moderate energy budget, while also meeting protein needs. Strength and power team sports require muscle-building programmes that must be accompanied by adequate nutrition, and simple anthropometric measurements can help the nutrition practitioner monitor and assess body composition periodically. Use of a body mass scale and a urine specific gravity refractometer can help identify athletes prone to dehydration. Sports beverages and caffeine are the most common supplements, while opinion on the practical effectiveness of creatine is divided. Late-maturing adolescent athletes become concerned about gaining size and muscle, and assessment of maturity status can be carried out with anthropometric procedures. An overriding consideration is that an individual approach is needed to meet each athlete's nutritional needs.", "title": "" }, { "docid": "1d3eb22e6f244fbe05d0cc0f7ee37b84", "text": "Robots that use learned perceptual models in the real world must be able to safely handle cases where they are forced to make decisions in scenarios that are unlike any of their training examples. However, state-of-the-art deep learning methods are known to produce erratic or unsafe predictions when faced with novel inputs. Furthermore, recent ensemble, bootstrap and dropout methods for quantifying neural network uncertainty may not efficiently provide accurate uncertainty estimates when queried with inputs that are very different from their training data. Rather than unconditionally trusting the predictions of a neural network for unpredictable real-world data, we use an autoencoder to recognize when a query is novel, and revert to a safe prior behavior. 
With this capability, we can deploy an autonomous deep learning system in arbitrary environments, without concern for whether it has received the appropriate training. We demonstrate our method with a vision-guided robot that can leverage its deep neural network to navigate 50% faster than a safe baseline policy in familiar types of environments, while reverting to the prior behavior in novel environments so that it can safely collect additional training data and continually improve. A video illustrating our approach is available at: http://groups.csail.mit.edu/rrg/videos/safe visual navigation.", "title": "" }, { "docid": "bf08bc98eb9ef7a18163fc310b10bcf6", "text": "An ultra-low voltage, low power, low line sensitivity MOSFET-only sub-threshold voltage reference with no amplifiers is presented. The low sensitivity is realized by the difference between two complementary currents and second-order compensation improves the temperature stability. The bulk-driven technique is used and most of the transistors work in the sub-threshold region, which allow a remarkable reduction in the minimum supply voltage and power consumption. Moreover, a trimming circuit is adopted to compensate the process-related reference voltage variation while the line sensitivity is not affected. The proposed voltage reference has been fabricated in the 0.18 μm 1.8 V CMOS process. The measurement results show that the reference could operate on a 0.45 V supply voltage. For supply voltages ranging from 0.45 to 1.8 V the power consumption is 15.6 nW, and the average temperature coefficient is 59.4 ppm/°C across a temperature range of -40 to 85 °C and a mean line sensitivity of 0.033%. The power supply rejection ratio measured at 100 Hz is -50.3 dB. In addition, the chip area is 0.013 mm2.", "title": "" }, { "docid": "5f5258cec772f97c18a5ccda25f7a617", "text": "While most prognostics approaches focus on accurate computation of the degradation rate and the Remaining Useful Life (RUL) of individual components, it is the rate at which the performance of subsystems and systems degrade that is of greater interest to the operators and maintenance personnel of these systems. Accurate and reliable predictions make it possible to plan the future operations of the system, optimize maintenance scheduling activities, and maximize system life. In system-level prognostics, we are interested in determining when the performance of a system will fall below pre-defined levels of acceptable performance. Our focus in this paper is on developing a comprehensive methodology for system-level prognostics under uncertainty that combines the use of an estimation scheme that tracks system state and degradation parameters, along with a prediction scheme that computes the RUL as a stochastic distribution over the life of the system. Two parallel methods have been developed for prediction: (1) methods based on stochastic simulation and (2) optimization methods, such as first order reliability method (FORM). We compare the computational complexity and the accuracy of the two prediction approaches using a case study of a system with several degrading components.", "title": "" }, { "docid": "80fd067dd6cf2fe85ade3c632e82c04c", "text": "0957-4174/$ see front matter 2009 Elsevier Ltd. A doi:10.1016/j.eswa.2009.03.046 * Corresponding author. Tel.: +98 09126121921. E-mail address: [email protected] (M. 
Sha Recommender systems are powerful tools that allow companies to present personalized offers to their customers and defined as a system which recommends an appropriate product or service after learning the customers’ preferences and desires. Extracting users’ preferences through their buying behavior and history of purchased products is the most important element of such systems. Due to users’ unlimited and unpredictable desires, identifying their preferences is very complicated process. In most researches, less attention has been paid to user’s preferences varieties in different product categories. This may decrease quality of recommended items. In this paper, we introduced a technique of recommendation in the context of online retail store which extracts user preferences in each product category separately and provides more personalized recommendations through employing product taxonomy, attributes of product categories, web usage mining and combination of two well-known filtering methods: collaborative and content-based filtering. Experimental results show that proposed technique improves quality, as compared to similar approaches. 2009 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "ef1f901e0fb01a99728282f743cc1c65", "text": "Matching facial sketches to digital face images has widespread application in law enforcement scenarios. Recent advancements in technology have led to the availability of sketch generation tools, minimizing the requirement of a sketch artist. While these sketches have helped in manual authentication, matching composite sketches with digital mugshot photos automatically show high modality gap. This research aims to address the task of matching a composite face sketch image to digital images by proposing a transfer learning based evolutionary algorithm. A new feature descriptor, Histogram of Image Moments, has also been presented for encoding features across modalities. Moreover, IIITD Composite Face Sketch Database of 150 subjects is presented to fill the gap due to limited availability of databases in this problem domain. Experimental evaluation and analysis on the proposed dataset show the effectiveness of the transfer learning approach for performing cross-modality recognition.", "title": "" }, { "docid": "1cbc333cce4870cc0f465bb76b6e4d3c", "text": "This note attempts to raise awareness within the network research community about the security of the interdomain routing infrastructure. We identify several attack objectives and mechanisms, assuming that one or more BGP routers have been compromised. Then, we review the existing and proposed countermeasures, showing that they are either generally ineffective (route filtering), or probably too heavyweight to deploy (S-BGP). We also review several recent proposals, and conclude by arguing that a significant research effort is urgently needed in the area of routing security.", "title": "" }, { "docid": "04476184ca103b9d8012827615fc84a5", "text": "In order to investigate the local filtering behavior of the Retinex model, we propose a new implementation in which paths are replaced by 2-D pixel sprays, hence the name \"random spray Retinex.\" A peculiar feature of this implementation is the way its parameters can be controlled to perform spatial investigation. The parameters' tuning is accomplished by an unsupervised method based on quantitative measures. This procedure has been validated via user panel tests. Furthermore, the spray approach has faster performances than the path-wise one. 
Tests and results are presented and discussed", "title": "" }, { "docid": "760edd83045a80dbb2231c0ffbef2ea7", "text": "This paper proposes a method to modify a traditional convolutional neural network (CNN) into an interpretable CNN, in order to clarify knowledge representations in high conv-layers of the CNN. In an interpretable CNN, each filter in a high conv-layer represents a specific object part. Our interpretable CNNs use the same training data as ordinary CNNs without a need for any annotations of object parts or textures for supervision. The interpretable CNN automatically assigns each filter in a high conv-layer with an object part during the learning process. We can apply our method to different types of CNNs with various structures. The explicit knowledge representation in an interpretable CNN can help people understand the logic inside a CNN, i.e. what patterns are memorized by the CNN for prediction. Experiments have shown that filters in an interpretable CNN are more semantically meaningful than those in a traditional CNN. The code is available at https://github.com/zqs1022/interpretableCNN.", "title": "" }, { "docid": "8785e51ebe39057012b81c37a6ddc097", "text": "In this paper, we present a set of distributed algorithms for estimating the electro-mechanical oscillation modes of large power system networks using synchrophasors. With the number of phasor measurement units (PMUs) in the North American grid scaling up to the thousands, system operators are gradually inclining toward distributed cyber-physical architectures for executing wide-area monitoring and control operations. Traditional centralized approaches, in fact, are anticipated to become untenable soon due to various factors such as data volume, security, communication overhead, and failure to adhere to real-time deadlines. To address this challenge, we propose three different communication and computational architectures by which estimators located at the control centers of various utility companies can run local optimization algorithms using local PMU data, and thereafter communicate with other estimators to reach a global solution. Both synchronous and asynchronous communications are considered. Each architecture integrates a centralized Prony-based algorithm with several variants of alternating direction method of multipliers (ADMM). We discuss the relative advantages and bottlenecks of each architecture using simulations of IEEE 68-bus and IEEE 145-bus power system, as well as an Exo-GENI-based software defined network.", "title": "" }, { "docid": "2eac0a94204b24132e496639d759f545", "text": "Numerous algorithms have been proposed for transferring knowledge from a label-rich domain (source) to a label-scarce domain (target). Most of them are proposed for closed-set scenario, where the source and the target domain completely share the class of their samples. However, in practice, a target domain can contain samples of classes that are not shared by the source domain. We call such classes the “unknown class” and algorithms that work well in the open set situation are very practical. However, most existing distribution matching methods for domain adaptation do not work well in this setting because unknown target samples should not be aligned with the source. In this paper, we propose a method for an open set domain adaptation scenario, which utilizes adversarial training. This approach allows to extract features that separate unknown target from known target samples. 
During training, we assign two options to the feature generator: aligning target samples with source known ones or rejecting them as unknown target ones. Our method was extensively evaluated and outperformed other methods with a large margin in most settings.", "title": "" }, { "docid": "6aebae4d8ed0af23a38a945b85c3b6ff", "text": "Modern web applications are conglomerations of JavaScript written by multiple authors: application developers routinely incorporate code from third-party libraries, and mashup applications synthesize data and code hosted at different sites. In current browsers, a web application’s developer and user must trust third-party code in libraries not to leak the user’s sensitive information from within applications. Even worse, in the status quo, the only way to implement some mashups is for the user to give her login credentials for one site to the operator of another site. Fundamentally, today’s browser security model trades privacy for flexibility because it lacks a sufficient mechanism for confining untrusted code. We present COWL, a robust JavaScript confinement system for modern web browsers. COWL introduces label-based mandatory access control to browsing contexts in a way that is fully backwardcompatible with legacy web content. We use a series of case-study applications to motivate COWL’s design and demonstrate how COWL allows both the inclusion of untrusted scripts in applications and the building of mashups that combine sensitive information from multiple mutually distrusting origins, all while protecting users’ privacy. Measurements of two COWL implementations, one in Firefox and one in Chromium, demonstrate a virtually imperceptible increase in page-load latency.", "title": "" }, { "docid": "4a9474c0813646708400fc02c344a976", "text": "Over the years, the Web has shrunk the world, allowing individuals to share viewpoints with many more people than they are able to in real life. At the same time, however, it has also enabled anti-social and toxic behavior to occur at an unprecedented scale. Video sharing platforms like YouTube receive uploads from millions of users, covering a wide variety of topics and allowing others to comment and interact in response. Unfortunately, these communities are periodically plagued with aggression and hate attacks. In particular, recent work has showed how these attacks often take place as a result of “raids,” i.e., organized efforts coordinated by ad-hoc mobs from third-party communities. Despite the increasing relevance of this phenomenon, online services often lack effective countermeasures to mitigate it. Unlike well-studied problems like spam and phishing, coordinated aggressive behavior both targets and is perpetrated by humans, making defense mechanisms that look for automated activity unsuitable. Therefore, the de-facto solution is to reactively rely on user reports and human reviews. In this paper, we propose an automated solution to identify videos that are likely to be targeted by coordinated harassers. First, we characterize and model YouTube videos along several axes (metadata, audio transcripts, thumbnails) based on a ground truth dataset of raid victims. Then, we use an ensemble of classifiers to determine the likelihood that a video will be raided with high accuracy (AUC up to 94%). 
Overall, our work paves the way for providing video platforms like YouTube with proactive systems to detect and mitigate coordinated hate attacks.", "title": "" }, { "docid": "5e2c4ebf3c2b4f0e9aabc5eacd2d4b80", "text": "Manually annotating object bounding boxes is central to building computer vision datasets, and it is very time consuming (annotating ILSVRC [53] took 35s for one high-quality box [62]). It involves clicking on imaginary comers of a tight box around the object. This is difficult as these comers are often outside the actual object and several adjustments are required to obtain a tight box. We propose extreme clicking instead: we ask the annotator to click on four physical points on the object: the top, bottom, left- and right-most points. This task is more natural and these points are easy to find. We crowd-source extreme point annotations for PASCAL VOC 2007 and 2012 and show that (1) annotation time is only 7s per box, 5 × faster than the traditional way of drawing boxes [62]: (2) the quality of the boxes is as good as the original ground-truth drawn the traditional way: (3) detectors trained on our annotations are as accurate as those trained on the original ground-truth. Moreover, our extreme clicking strategy not only yields box coordinates, but also four accurate boundary points. We show (4) how to incorporate them into GrabCut to obtain more accurate segmentations than those delivered when initializing it from bounding boxes: (5) semantic segmentations models trained on these segmentations outperform those trained on segmentations derived from bounding boxes.", "title": "" }, { "docid": "0453d395af40160b4f66787bb9ac8e96", "text": "Two aspect of programming languages, recursive definitions and type declarations are analyzed in detail. Church's %-calculus is used as a model of a programming language for purposes of the analysis. The main result on recursion is an analogue to Kleene's first recursion theorem: If A = FA for any %-expressions A and F, then A is an extension of YF in the sense that if E[YF], any expression containing YF, has a normal form then E[YF] = E[A]. Y is Curry's paradoxical combinator. The result is shown to be invariant for many different versions of Y. A system of types and type declarations is developed for the %-calculus and its semantic assumptions are identified. The system is shown to be adequate in the sense that it permits a preprocessor to check formulae prior to evaluation to prevent type errors. It is shown that any formula with a valid assignment of types to all its subexpressions must have a normal form. Thesis Supervisor: John M. Wozencraft Title: Professor of Electrical Engineering", "title": "" }, { "docid": "5809c27155986612b0e4a9ef48b3b930", "text": "Using the same technologies for both work and private life is an intensifying phenomenon. Mostly driven by the availability of consumer IT in the marketplace, individuals—more often than not—are tempted to use privately-owned IT rather than enterprise IT in order to get their job done. However, this dual-use of technologies comes at a price. It intensifies the blurring of the boundaries between work and private life—a development in stark contrast to the widely spread desire of employees to segment more clearly between their two lives. If employees cannot follow their segmentation preference, it is proposed that this misfit will result in work-to-life conflict (WtLC). This paper investigates the relationship between organizational encouragement for dual use and WtLC. 
Via a quantitative survey, we find a significant relationship between the two concepts. In line with boundary theory, the effect is stronger for people that strive for work-life segmentation.", "title": "" }, { "docid": "5cc07ca331deb81681b3f18355c0e586", "text": "BACKGROUND\nHyaluronic acid (HA) formulations are used for aesthetic applications. Different cross-linking technologies result in HA dermal fillers with specific characteristic visco-elastic properties.\n\n\nOBJECTIVE\nBio-integration of three CE-marked HA dermal fillers, a cohesive (monophasic) polydensified, a cohesive (monophasic) monodensified and a non-cohesive (biphasic) filler, was analysed with a follow-up of 114 days after injection. Our aim was to study the tolerability and inflammatory response of these fillers, their patterns of distribution in the dermis, and influence on tissue integrity.\n\n\nMETHODS\nThree HA formulations were injected intradermally into the iliac crest region in 15 subjects. Tissue samples were analysed after 8 and 114 days by histology and immunohistochemistry, and visualized using optical and transmission electron microscopy.\n\n\nRESULTS\nHistological results demonstrated that the tested HA fillers showed specific characteristic bio-integration patterns in the reticular dermis. Observations under the optical and electron microscopes revealed morphological conservation of cutaneous structures. Immunohistochemical results confirmed absence of inflammation, immune response and granuloma.\n\n\nCONCLUSION\nThe three tested dermal fillers show an excellent tolerability and preservation of the dermal cells and matrix components. Their tissue integration was dependent on their visco-elastic properties. The cohesive polydensified filler showed the most homogeneous integration with an optimal spreading within the reticular dermis, which is achieved by filling even the smallest spaces between collagen bundles and elastin fibrils, while preserving the structural integrity of the latter. Absence of adverse reactions confirms safety of the tested HA dermal fillers.", "title": "" }, { "docid": "d646a27556108caebd7ee5691c98d642", "text": "■ Abstract Theory and research on small group performance and decision making is reviewed. Recent trends in group performance research have found that process gains as well as losses are possible, and both are frequently explained by situational and procedural contexts that differentially affect motivation and resource coordination. Research has continued on classic topics (e.g., brainstorming, group goal setting, stress, and group performance) and relatively new areas (e.g., collective induction). Group decision making research has focused on preference combination for continuous response distributions and group information processing. New approaches (e.g., group-level signal detection) and traditional topics (e.g., groupthink) are discussed. New directions, such as nonlinear dynamic systems, evolutionary adaptation, and technological advances, should keep small group research vigorous well into the future.", "title": "" }, { "docid": "9adb3374f58016ee9bec1daf7392a64e", "text": "To develop a less genotype-dependent maize-transformation procedure, we used 10-month-old Type I callus as target tissue for microprojectile bombardment. Twelve transgenic callus lines were obtained from two of the three anther-culture-derived callus cultures representing different gentic backgrounds. Multiple fertile transgenic plants (T0) were regenerated from each transgenic callus line. 
Transgenic leaves treated with the herbicide Basta showed no symptoms, indicating that one of the two introduced genes, bar, was functionally expressing. Data from DNA hybridization analysis confirmed that the introduced genes (bar and uidA) were integrated into the plant genome and that all lines derived from independent transformation events. Transmission of the introduced genes and the functional expression of bar in T1 progeny was also confirmed. Germination of T1 immature embryos in the presence of bialaphos was used as a screen for functional expression of bar; however, leaf painting of T1 plants proved a more accurate predictor of bar expression in plants. This study suggests that maize Type I callus can be transformed efficiently through microprojectile bombardment and that fertile transgenic plants can be recovered. This system should facilitate the direct introduction of agronomically important genes into commercial genotypes.", "title": "" } ]
scidocsrr
6983c21d0a12808f443e462b3ce3de13
Lucid dreaming treatment for nightmares: a pilot study.
[ { "docid": "5bcccfe91c68d12b8bf78017a477c979", "text": "SUMMARY\nThe occurrence of lucid dreaming (dreaming while being conscious that one is dreaming) has been verified for 5 selected subjects who signaled that they knew they were dreaming while continuing to dream during unequivocal REM sleep. The signals consisted of particular dream actions having observable concomitants and were performed in accordance with pre-sleep agreement. The ability of proficient lucid dreamers to signal in this manner makes possible a new approach to dream research--such subjects, while lucid, could carry out diverse dream experiments marking the exact time of particular dream events, allowing derivation of of precise psychophysiological correlations and methodical testing of hypotheses.", "title": "" } ]
[ { "docid": "4a6d231ce704e4acf9320ac3bd5ade14", "text": "Despite recent advances in discourse parsing and causality detection, the automatic recognition of argumentation structure of authentic texts is still a very challenging task. To approach this problem, we collected a small corpus of German microtexts in a text generation experiment, resulting in texts that are authentic but of controlled linguistic and rhetoric complexity. We show that trained annotators can determine the argumentation structure on these microtexts reliably. We experiment with different machine learning approaches for automatic argumentation structure recognition on various levels of granularity of the scheme. Given the complex nature of such a discourse understanding tasks, the first results presented here are promising, but invite for further investigation.", "title": "" }, { "docid": "e4cefd3932ea07682e4eef336dda278b", "text": "Rubinstein-Taybi syndrome (RSTS) is a developmental disorder characterized by a typical face and distal limbs abnormalities, intellectual disability, and a vast number of other features. Two genes are known to cause RSTS, CREBBP in 60% and EP300 in 8-10% of clinically diagnosed cases. Both paralogs act in chromatin remodeling and encode for transcriptional co-activators interacting with >400 proteins. Up to now 26 individuals with an EP300 mutation have been published. Here, we describe the phenotype and genotype of 42 unpublished RSTS patients carrying EP300 mutations and intragenic deletions and offer an update on another 10 patients. We compare the data to 308 individuals with CREBBP mutations. We demonstrate that EP300 mutations cause a phenotype that typically resembles the classical RSTS phenotype due to CREBBP mutations to a great extent, although most facial signs are less marked with the exception of a low-hanging columella. The limb anomalies are more similar to those in CREBBP mutated individuals except for angulation of thumbs and halluces which is very uncommon in EP300 mutated individuals. The intellectual disability is variable but typically less marked whereas the microcephaly is more common. All types of mutations occur but truncating mutations and small rearrangements are most common (86%). Missense mutations in the HAT domain are associated with a classical RSTS phenotype but otherwise no genotype-phenotype correlation is detected. Pre-eclampsia occurs in 12/52 mothers of EP300 mutated individuals versus in 2/59 mothers of CREBBP mutated individuals, making pregnancy with an EP300 mutated fetus the strongest known predictor for pre-eclampsia. © 2016 Wiley Periodicals, Inc.", "title": "" }, { "docid": "d52c31b947ee6edf59a5ef416cbd0564", "text": "Saliency detection for images has been studied for many years, for which a lot of methods have been designed. In saliency detection, background priors, which are often regarded as pseudo-background, are effective clues to find salient objects in images. Although image boundary is commonly used as background priors, it does not work well for images of complex scenes and videos. In this paper, we explore how to identify the background priors for a video and propose a saliency-based method to detect the visual objects by using the background priors. 
For a video, we integrate multiple pairs of scale-invariant feature transform flows from long-range frames, and a bidirectional consistency propagation is conducted to obtain the accurate and sufficient temporal background priors, which are combined with spatial background priors to generate spatiotemporal background priors. Next, a novel dual-graph-based structure using spatiotemporal background priors is put forward in the computation of saliency maps, fully taking advantage of appearance and motion information in videos. Experimental results on different challenging data sets show that the proposed method robustly and accurately detects the video objects in both simple and complex scenes and achieves better performance compared with other the state-of-the-art video saliency models.", "title": "" }, { "docid": "c56daed0cc2320892fad3ac34ce90e09", "text": "In this paper we describe the open source data analytics platform KNIME, focusing particularly on extensions and modules supporting fuzzy sets and fuzzy learning algorithms such as fuzzy clustering algorithms, rule induction methods, and interactive clustering tools. In addition we outline a number of experimental extensions, which are not yet part of the open source release and present two illustrative examples from real world applications to demonstrate the power of the KNIME extensions.", "title": "" }, { "docid": "806ae85b278c98a9107adeb1f55b8808", "text": "The present studies report the effects on neonatal rats of oral exposure to genistein during the period from birth to postnatal day (PND) 21 to generate data for use in assessing human risk following oral ingestion of genistein. Failure to demonstrate significant exposure of the newborn pups via the mothers milk led us to subcutaneously inject genistein into the pups over the period PND 1-7, followed by daily gavage dosing to PND 21. The targeted doses throughout were 4 mg/kg/day genistein (equivalent to the average exposure of infants to total isoflavones in soy milk) and a dose 10 times higher than this (40 mg/kg genistein). The dose used during the injection phase of the experiment was based on plasma determinations of genistein and its major metabolites. Diethylstilbestrol (DES) at 10 micro g/kg was used as a positive control agent for assessment of changes in the sexually dimorphic nucleus of the preoptic area (SDN-POA). Administration of 40 mg/kg genistein increased uterus weights at day 22, advanced the mean day of vaginal opening, and induced permanent estrus in the developing female pups. Progesterone concentrations were also decreased in the mature females. There were no effects in females dosed with 4 mg/kg genistein, the predicted exposure level for infants drinking soy-based infant formulas. There were no consistent effects on male offspring at either dose level of genistein. Although genistein is estrogenic at 40 mg/kg/day, as illustrated by the effects described above, this dose does not have the same repercussions as DES in terms of the organizational effects on the SDN-POA.", "title": "" }, { "docid": "7df7377675ac0dfda5bcd22f2f5ba22b", "text": "Background and Aim. Esthetic concerns in primary teeth have been studied mainly from the point of view of parents. The aim of this study was to study compare the opinions of children aged 5-8 years to have an opinion regarding the changes in appearance of their teeth due to dental caries and the materials used to restore those teeth. Methodology. 
A total of 107 children and both of their parents (n = 321), who were seeking dental treatment, were included in this study. A tool comprising a questionnaire and pictures of carious lesions and their treatment arranged in the form of a presentation was validated and tested on 20 children and their parents. The validated tool was then tested on all participants. Results. Children had acceptable validity statistics for the tool suggesting that they were able to make informed decisions regarding esthetic restorations. There was no difference between the responses of the children and their parents on most points. Zirconia crowns appeared to be the most acceptable full coverage restoration for primary anterior teeth among both children and their parents. Conclusion. Within the limitations of the study it can be concluded that children in their sixth year of life are capable of appreciating the esthetics of the restorations for their anterior teeth.", "title": "" }, { "docid": "7926ab6b5cd5837a9b3f59f8a1b3f5ac", "text": "Recently, very deep convolutional neural networks (CNNs) have been attracting considerable attention in image restoration. However, as the depth grows, the longterm dependency problem is rarely realized for these very deep models, which results in the prior states/layers having little influence on the subsequent ones. Motivated by the fact that human thoughts have persistency, we propose a very deep persistent memory network (MemNet) that introduces a memory block, consisting of a recursive unit and a gate unit, to explicitly mine persistent memory through an adaptive learning process. The recursive unit learns multi-level representations of the current state under different receptive fields. The representations and the outputs from the previous memory blocks are concatenated and sent to the gate unit, which adaptively controls how much of the previous states should be reserved, and decides how much of the current state should be stored. We apply MemNet to three image restoration tasks, i.e., image denosing, super-resolution and JPEG deblocking. Comprehensive experiments demonstrate the necessity of the MemNet and its unanimous superiority on all three tasks over the state of the arts. Code is available at https://github.com/tyshiwo/MemNet.", "title": "" }, { "docid": "bd24772c4f75f90fe51841aeb9632e4f", "text": "Fifty years have passed since the publication of the first regression tree algorithm. New techniques have added capabilities that far surpass those of the early methods. Modern classification trees can partition the data with linear splits on subsets of variables and fit nearest neighbor, kernel density, and other models in the partitions. Regression trees can fit almost every kind of traditional statistical model, including least-squares, quantile, logistic, Poisson, and proportional hazards models, as well as models for longitudinal and multiresponse data. Greater availability and affordability of software (much of which is free) have played a significant role in helping the techniques gain acceptance and popularity in the broader scientific community. This article surveys the developments and briefly reviews the key ideas behind some of the major algorithms.", "title": "" }, { "docid": "17598d7543d81dcf7ceb4cb354fb7c81", "text": "Bitcoin is the first decentralized crypto-currency that is currently by far the most popular one in use. The bitcoin transaction syntax is expressive enough to setup digital contracts whose fund transfer can be enforced automatically. 
In this paper, we design protocols for the bitcoin voting problem, in which there are n voters, each of which wishes to fund exactly one of two candidates A and B. The winning candidate is determined by majority voting, while the privacy of individual vote is preserved. Moreover, the decision is irrevocable in the sense that once the outcome is revealed, the winning candidate is guaranteed to have the funding from all n voters. As in previous works, each voter is incentivized to follow the protocol by being required to put a deposit in the system, which will be used as compensation if he deviates from the protocol. Our solution is similar to previous protocols used for lottery, but needs an additional phase to distribute secret random numbers via zero-knowledge-proofs. Moreover, we have resolved a security issue in previous protocols that could prevent compensation from being paid.", "title": "" }, { "docid": "6897a459e95ac14772de264545970726", "text": "There is a need for a system which provides real-time local environmental data in rural crop fields for the detection and management of fungal diseases. This paper presents the design of an Internet of Things (IoT) system consisting of a device capable of sending real-time environmental data to cloud storage and a machine learning algorithm to predict environmental conditions for fungal detection and prevention. The stored environmental data on conditions such as air temperature, relative air humidity, wind speed, and rain fall is accessed and processed by a remote computer for analysis and management purposes. A machine learning algorithm using Support Vector Machine regression (SVMr) was developed to process the raw data and predict short-term (day-to-day) air temperature, relative air humidity, and wind speed values to assist in predicting the presence and spread of harmful fungal diseases through the local crop field. Together, the environmental data and environmental predictions made easily accessible by this IoT system will ultimately assist crop field managers by facilitating better management and prevention of fungal disease spread.", "title": "" }, { "docid": "704bd445fd9ff34a2d71e8e5b196760c", "text": "Convolutional neural nets (CNNs) have demonstrated remarkable performance in recent history. Such approaches tend to work in a “unidirectional” bottom-up feed-forward fashion. However, biological evidence suggests that feedback plays a crucial role, particularly for detailed spatial understanding tasks. This work introduces “bidirectional” architectures that also reason with top-down feedback: neural units are influenced by both lower and higher-level units. We do so by treating units as latent variables in a global energy function. We call our models convolutional latentvariable models (CLVMs). From a theoretical perspective, CLVMs unify several approaches for recognition, including CNNs, generative deep models (e.g., Boltzmann machines), and discriminative latent-variable models (e.g., DPMs). From a practical perspective, CLVMs are particularly well-suited for multi-task learning. We describe a single architecture that simultaneously achieves state-of-the-art accuracy for tasks spanning both high-level recognition (part detection/localization) and low-level grouping (pixel segmentation). Bidirectional reasoning is particularly helpful for detailed low-level tasks, since they can take advantage of top-down feedback. Our architectures are quite efficient, capable of processing an image in milliseconds. 
We present results on benchmark datasets with both part/keypoint labels and segmentation masks (such as PASCAL and LFW) that demonstrate a significant improvement over prior art, in both speed and accuracy.", "title": "" }, { "docid": "5745ed6c874867ad2de84b040e40d336", "text": "The chemokine (C-X-C motif) ligand 1 (CXCL1) regulates tumor-stromal interactions and tumor invasion. However, the precise role of CXCL1 on gastric tumor growth and patient survival remains unclear. In the current study, protein expressions of CXCL1, vascular endothelial growth factor (VEGF) and phospho-signal transducer and activator of transcription 3 (p-STAT3) in primary tumor tissues from 98 gastric cancer patients were measured by immunohistochemistry (IHC). CXCL1 overexpressed cell lines were constructed using Lipofectamine 2000 reagent or lentiviral vectors. Effects of CXCL1 on VEGF expression and local tumor growth were evaluated in vitro and in vivo. CXCL1 was positively expressed in 41.4% of patients and correlated with VEGF and p-STAT3 expression. Higher CXCL1 expression was associated with advanced tumor stage and poorer prognosis. In vitro studies in AGS and SGC-7901 cells revealed that CXCL1 increased cell migration but had little effect on cell proliferation. CXCL1 activated VEGF signaling in gastric cancer (GC) cells, which was inhibited by STAT3 or chemokine (C-X-C motif) receptor 2 (CXCR2) blockade. CXCL1 also increased p-STAT3 expression in GC cells. In vivo, CXCL1 increased xenograft local tumor growth, phospho-Janus kinase 2 (p-JAK2), p-STAT3 levels, VEGF expression and microvessel density. These results suggested that CXCL1 increased local tumor growth through activation of VEGF signaling which may have mechanistic implications for the observed inferior GC survival. The CXCL1/CXCR2 pathway might be potent to improve anti-angiogenic therapy for gastric cancer.", "title": "" }, { "docid": "4737fe7f718f79c74595de40f8778da2", "text": "In this paper we describe a method of procedurally generating maps using Markov chains. This method learns statistical patterns from human-authored maps, which are assumed to be of high quality. Our method then uses those learned patterns to generate new maps. We present a collection of strategies both for training the Markov chains, and for generating maps from such Markov chains. We then validate our approach using the game Super Mario Bros., by evaluating the quality of the produced maps based on different configurations for training and generation.", "title": "" }, { "docid": "7f711c94920e0bfa8917ad1b5875813c", "text": "With the increasing acceptance of Network Function Virtualization (NFV) and Software Defined Networking (SDN) technologies, a radical transformation is currently occurring inside network providers infrastructures. The trend of Software-based networks foreseen with the 5th Generation of Mobile Network (5G) is drastically changing requirements in terms of how networks are deployed and managed. One of the major changes requires the transaction towards a distributed infrastructure, in which nodes are built with standard commodity hardware. This rapid deployment of datacenters is paving the way towards a different type of environment in which the computational resources are deployed up to the edge of the network, referred to as Multi-access Edge Computing (MEC) nodes. However, MEC nodes do not usually provide enough resources for executing standard virtualization technologies typically used in large datacenters. 
For this reason, software containerization represents a lightweight and viable virtualization alternative for such scenarios. This paper presents an architecture based on the Open Baton Management and Orchestration (MANO) framework combining different infrastructural technologies supporting the deployment of container-based network services even at the edge of the network.", "title": "" }, { "docid": "ba39b85859548caa2d3f1d51a7763482", "text": "A new antenna structure of internal LTE/WWAN laptop computer antenna formed by a coupled-fed loop antenna connected with two branch radiators is presented. The two branch radiators consist of one longer strip and one shorter strip, both contributing multi-resonant modes to enhance the bandwidth of the antenna. The antenna's lower band is formed by a dual-resonant mode mainly contributed by the longer branch strip, while the upper band is formed by three resonant modes contributed respectively by one higher-order resonant mode of the longer branch strip, one resonant mode of the coupled-fed loop antenna alone, and one resonant mode of the shorter branch strip. The antenna's lower and upper bands can therefore cover the desired 698~960 and 1710~2690 MHz bands, respectively. The proposed antenna is suitable to be mounted at the top shielding metal wall of the display ground of the laptop computer and occupies a small volume of 4 × 10 × 75 mm3 above the top shielding metal wall, which makes it promising to be embedded inside the casing of the laptop computer as an internal antenna.", "title": "" }, { "docid": "93a39df6ee080e359f50af46d02cdb71", "text": "Mobile edge computing (MEC) providing information technology and cloud-computing capabilities within the radio access network is an emerging technique in fifth-generation networks. MEC can extend the computational capacity of smart mobile devices (SMDs) and economize SMDs’ energy consumption by migrating the computation-intensive task to the MEC server. In this paper, we consider a multi-mobile-users MEC system, where multiple SMDs ask for computation offloading to a MEC server. In order to minimize the energy consumption on SMDs, we jointly optimize the offloading selection, radio resource allocation, and computational resource allocation coordinately. We formulate the energy consumption minimization problem as a mixed interger nonlinear programming (MINLP) problem, which is subject to specific application latency constraints. In order to solve the problem, we propose a reformulation-linearization-technique-based Branch-and-Bound (RLTBB) method, which can obtain the optimal result or a suboptimal result by setting the solving accuracy. Considering the complexity of RTLBB cannot be guaranteed, we further design a Gini coefficient-based greedy heuristic (GCGH) to solve the MINLP problem in polynomial complexity by degrading the MINLP problem into the convex problem. Many simulation results demonstrate the energy saving enhancements of RLTBB and GCGH.", "title": "" }, { "docid": "a1fed0bcce198ad333b45bfc5e0efa12", "text": "Contemporary games are making significant strides towards offering complex, immersive experiences for players. 
We can now explore sprawling 3D virtual environments populated by beautifully rendered characters and objects with autonomous behavior, engage in highly visceral action-oriented experiences offering a variety of missions with multiple solutions, and interact in ever-expanding online worlds teeming with physically customizable player avatars.", "title": "" }, { "docid": "fa62c54cf22c7d0822c7a4171a3d8bcd", "text": "Interaction with robot systems for specification of manufacturing tasks and motions needs to be simple, to enable wide-spread use of robots in SMEs. In the best case, existing practices from manual work could be used, to smoothly let current employees start using robot technology as a natural part of their work. Our aim is to simplify the robot programming task by allowing the user to simply make technical drawings on a sheet of paper. Craftsman use paper and raw sketches for several situations; to share ideas, to get a better imagination or to remember the customer situation. Currently these sketches have either to be interpreted by the worker when producing the final product by hand, or transferred into CAD file using an according tool. The former means that no automation is included, the latter means extra work and much experience in using the CAD tool. Our approach is to use the digital pen and paper from Anoto as input devices for SME robotic tasks, thereby creating simpler and more user friendly alternatives for programming, parameterization and commanding actions. To this end, the basic technology has been investigated and fully working prototypes have been developed to explore the possibilities and limitation in the context of typical SME applications. Based on the encouraging experimental results, we believe that drawings on digital paper will, among other means of human-robot interaction, play an important role in manufacturing SMEs in the future. Index Terms — CAD, Human machine interfaces, Industrial Robots, Robot programming.", "title": "" }, { "docid": "6f679c5678f1cc5fed0af517005cb6f5", "text": "In today's world of globalization, there is a serious need of incorporating semantics in Education Domain which is very significant with an ultimate goal of providing an efficient, adaptive and personalized learning environment. An attempt towards this goal has been made to develop an Education based Ontology with some capability to describe a semantic web based sharable knowledge. So as a contribution, this paper presents a revisit towards amalgamating Semantics in Education. In this direction, an effort has been made to construct an Education based Ontology using Protege 5.2.0, where a hierarchy of classes and subclasses have been defined along with their properties, relations, and instances. Finally, at the end of this paper an implementation is also presented involving query retrieval using DLquery illustrations.", "title": "" }, { "docid": "f5ce4a13a8d081243151e0b3f0362713", "text": "Despite the growing popularity of digital imaging devices, the problem of accurately estimating the spatial frequency response or optical transfer function (OTF) of these devices has been largely neglected. Traditional methods for estimating OTFs were designed for film cameras and other devices that form continuous images. These traditional techniques do not provide accurate OTF estimates for typical digital image acquisition devices because they do not account for the fixed sampling grids of digital devices . 
This paper describes a simple method for accurately estimating the OTF of a digital image acquisition device. The method extends the traditional knife-edge technique''3 to account for sampling. One of the principal motivations for digital imaging systems is the utility of digital image processing algorithms, many of which require an estimate of the OTF. Algorithms for enhancement, spatial registration, geometric transformations, and other purposes involve restoration—removing the effects of the image acquisition device. Nearly all restoration algorithms (e.g., the", "title": "" } ]
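As background for the knife-edge entry above: a minimal NumPy sketch of the classical knife-edge pipeline (measure the edge spread function, differentiate it to the line spread function, take the Fourier magnitude for the MTF) is given below. It ignores the sampling-aware corrections the passage is actually about, and the windowing and normalization choices are illustrative assumptions.

```python
import numpy as np

def knife_edge_mtf(edge_profile, window=True):
    """Estimate an MTF from a 1-D edge profile (edge spread function, ESF)."""
    esf = np.asarray(edge_profile, dtype=float)
    lsf = np.gradient(esf)                # line spread function = derivative of the ESF
    if window:
        lsf = lsf * np.hanning(len(lsf))  # reduce truncation artifacts before the FFT
    mtf = np.abs(np.fft.rfft(lsf))
    mtf = mtf / mtf[0]                    # normalize so MTF(0) = 1
    freqs = np.fft.rfftfreq(len(lsf), d=1.0)  # spatial frequency in cycles/sample
    return freqs, mtf

# Synthetic blurred step edge as a stand-in for a measured knife-edge profile.
x = np.linspace(-5, 5, 201)
esf = 0.5 * (1.0 + np.tanh(x))
freqs, mtf = knife_edge_mtf(esf)
print(freqs[:3], mtf[:3])
```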
scidocsrr
0c5b9acb058ce6a0f3a8c55bed479885
Hypergraph Models and Algorithms for Data-Pattern-Based Clustering
[ { "docid": "b7a4eec912eb32b3b50f1b19822c44a1", "text": "Mining numerical data is a relatively difficult problem in data mining. Clustering is one of the techniques. We consider a database with numerical attributes, in which each transaction is viewed as a multi-dimensional vector. By studying the clusters formed by these vectors, we can discover certain behaviors hidden in the data. Traditional clustering algorithms find clusters in the full space of the data sets. This results in high dimensional clusters, which are poorly comprehensible to human. One important task in this setting is the ability to discover clusters embedded in the subspaces of a high-dimensional data set. This problem is known as subspace clustering. We follow the basic assumptions of previous work CLIQUE. It is found that the number of subspaces with clustering is very large, and a criterion called the coverage is proposed in CLIQUE for the pruning. In addition to coverage, we identify new useful criteria for this problem and propose an entropybased algorithm called ENCLUS to handle the criteria. Our major contributions are: (1) identify new meaningful criteria of high density and correlation of dimensions for goodness of clustering in subspaces, (2) introduce the use of entropy and provide evidence to support its use, (3) make use of two closure properties based on entropy to prune away uninteresting subspaces efficiently, (4) propose a mechanism to mine non-minimally correlated subspaces which are of interest because of strong clustering, (5) experiments are carried out to show the effectiveness of the proposed method.", "title": "" }, { "docid": "1c5f53fe8d663047a3a8240742ba47e4", "text": "Spatial data mining is the discovery of interesting relationships and characteristics that may exist implicitly in spatial databases. In this paper, we explore whether clustering methods have a role to play in spatial data mining. To this end, we develop a new clustering method called CLAHANS which is based on randomized search. We also develop two spatial data mining algorithms that use CLAHANS. Our analysis and experiments show that with the assistance of CLAHANS, these two algorithms are very effective and can lead to discoveries that are difficult to find with current spatial data mining algorithms. Furthermore, experiments conducted to compare the performance of CLAHANS with that of existing clustering methods show that CLAHANS is the most efficient.", "title": "" } ]
[ { "docid": "4bbb2191088155c823bc152fce0dec89", "text": "Image Segmentation is an important and challenging factor in the field of medical sciences. It is widely used for the detection of tumours. This paper deals with detection of brain tumour from MR images of the brain. The brain is the anterior most part of the nervous system. Tumour is a rapid uncontrolled growth of cells. Magnetic Resonance Imaging (MRI) is the device required to diagnose brain tumour. The normal MR images are not that suitable for fine analysis, so segmentation is an important process required for efficiently analyzing the tumour images. Clustering is suitable for biomedical image segmentation as it uses unsupervised learning. This paper work uses K-Means clustering where the detected tumour shows some abnormality which is then rectified by the use of morphological operators along with basic image processing techniques to meet the goal of separating the tumour cells from the normal cells.", "title": "" }, { "docid": "a7db9f3f1bb5883f6a5a873dd661867b", "text": "Psychologists and sociologists usually interpret happiness scores as cardinal and comparable across respondents, and thus run OLS regressions on happiness and changes in happiness. Economists usually assume only ordinality and have mainly used ordered latent response models, thereby not taking satisfactory account of fixed individual traits. We address this problem by developing a conditional estimator for the fixed-effect ordered logit model. We find that assuming ordinality or cardinality of happiness scores makes little difference, whilst allowing for fixed-effects does change results substantially. We call for more research into the determinants of the personality traits making up these fixed-effects.", "title": "" }, { "docid": "643d75042a38c24b0e4130cb246fc543", "text": "Silicon carbide (SiC) switching power devices (MOSFETs, JFETs) of 1200 V rating are now commercially available, and in conjunction with SiC diodes, they offer substantially reduced switching losses relative to silicon (Si) insulated gate bipolar transistors (IGBTs) paired with fast-recovery diodes. Low-voltage industrial variable-speed drives are a key application for 1200 V devices, and there is great interest in the replacement of the Si IGBTs and diodes that presently dominate in this application with SiC-based devices. However, much of the performance benefit of SiC-based devices is due to their increased switching speeds ( di/dt, dv/ dt), which raises the issues of increased electromagnetic interference (EMI) generation and detrimental effects on the reliability of inverter-fed electrical machines. In this paper, the tradeoff between switching losses and the high-frequency spectral amplitude of the device switching waveforms is quantified experimentally for all-Si, Si-SiC, and all-SiC device combinations. While exploiting the full switching-speed capability of SiC-based devices results in significantly increased EMI generation, the all-SiC combination provides a 70% reduction in switching losses relative to all-Si when operated at comparable dv/dt. 
It is also shown that the loss-EMI tradeoff obtained with the Si-SiC device combination can be significantly improved by driving the IGBT with a modified gate voltage profile.", "title": "" }, { "docid": "d3b24655e01cbb4f5d64006222825361", "text": "A number of leading cognitive architectures that are inspired by the human brain, at various levels of granularity, are reviewed and compared, with special attention paid to the way their internal structures and dynamics map onto neural processes. Four categories of Biologically Inspired Cognitive Architectures (BICAs) are considered, with multiple examples of each category briefly reviewed, and selected examples discussed in more depth: primarily symbolic architectures (e.g. ACT-R), emergentist architectures (e.g. DeSTIN), developmental robotics architectures (e.g. IM-CLEVER), and our central focus, hybrid architectures (e.g. LIDA, CLARION, 4D/RCS, DUAL, MicroPsi, and OpenCog). Given the state of the art in BICA, it is not yet possible to tell whether emulating the brain on the architectural level is going to be enough to allow rough emulation of brain function; and given the state of the art in neuroscience, it is not yet possible to connect BICAs with large-scale brain simulations in a thoroughgoing way. However, it is nonetheless possible to draw reasonably close function connections between various components of various BICAs and various brain regions and dynamics, and as both BICAs and brain simulations mature, these connections should become richer and may extend further into the domain of internal dynamics as well as overall behavior. & 2010 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "545998c2badee9554045c04983b1d11b", "text": "This paper presents a new control approach for nonlinear network-induced time delay systems by combining online reset control, neural networks, and dynamic Bayesian networks. We use feedback linearization to construct a nominal control for the system then use reset control and a neural network to compensate for errors due to the time delay. Finally, we obtain a stochastic model of the Networked Control System (NCS) using a Dynamic Bayesian Network (DBN) and use it to design a predictive control. We apply our control methodology to a nonlinear inverted pendulum and evaluate its performance through numerical simulations. We also test our approach with real-time experiments on a dc motor-load NCS with wireless communication implemented using a Ubiquitous Sensor Network (USN). Both the simulation and experimental results demonstrate the efficacy of our control methodology.", "title": "" }, { "docid": "b0a37782d653fa03843ecdc118a56034", "text": "Non-frontal lip views contain useful information which can be used to enhance the performance of frontal view lipreading. However, the vast majority of recent lipreading works, including the deep learning approaches which significantly outperform traditional approaches, have focused on frontal mouth images. As a consequence, research on joint learning of visual features and speech classification from multiple views is limited. In this work, we present an end-to-end multi-view lipreading system based on Bidirectional Long-Short Memory (BLSTM) networks. To the best of our knowledge, this is the first model which simultaneously learns to extract features directly from the pixels and performs visual speech classification from multiple views and also achieves state-of-the-art performance. 
The model consists of multiple identical streams, one for each view, which extract features directly from different poses of mouth images. The temporal dynamics in each stream/view are modelled by a BLSTM, and the fusion of multiple streams/views takes place via another BLSTM. An absolute average improvement of 3% and 3.8% over the frontal view performance is reported on the OuluVS2 database when the best two (frontal and profile) and three views (frontal, profile, 45°) are combined, respectively. The best three-view model results in a 10.5% absolute improvement over the current multi-view state-of-the-art performance on OuluVS2, without using external databases for training, achieving a maximum classification accuracy of 96.9%.", "title": "" }, { "docid": "3cb0e239ecfc9949afe89fe80a92cfd5", "text": "The purpose of this study is to measure the impact of product perceived quality on purchase intention, along with the level of satisfaction. To meet this purpose, data were collected individually through 122 questionnaires using convenience sampling. Hypothesis testing with statistical software shows that these variables have a significant positive relationship. The practical contribution is that this study can be used as a guideline for management and marketers to improve product quality.", "title": "" }, { "docid": "2dde173faac8d5cbb63aed8d379308fa", "text": "Delineating infarcted tissue in ischemic stroke lesions is crucial to determine the extent of damage and optimal treatment for this life-threatening condition. However, this problem remains challenging due to high variability of ischemic strokes' location and shape. Recently, fully-convolutional neural networks (CNN), in particular those based on U-Net [27], have led to improved performances for this task [7]. In this work, we propose a novel architecture that improves standard U-Net based methods in three important ways. First, instead of combining the available image modalities at the input, each of them is processed in a different path to better exploit their unique information. Moreover, the network is densely-connected (i.e., each layer is connected to all following layers), both within each path and across different paths, similar to HyperDenseNet [11]. This gives our model the freedom to learn the scale at which modalities should be processed and combined. Finally, inspired by the Inception architecture [32], we improve standard U-Net modules by extending inception modules with two convolutional blocks with dilated convolutions of different scale. This helps in handling the variability in lesion sizes. We split the 93 stroke datasets into training and validation sets containing 83 and 9 examples respectively. Our network was trained on an NVidia TITAN XP GPU with 16 GB of RAM, using ADAM as optimizer and a learning rate of 1×10−5 during 200 epochs. Training took around 5 hours and segmentation of a whole volume took between 0.2 and 2 seconds, on average. The performance on the test set obtained by our method is compared to several baselines, to demonstrate the effectiveness of our architecture, and to a state-of-the-art architecture that employs factorized dilated convolutions, i.e., ERFNet [26].", "title": "" }, { "docid": "a3914095f36b87d74b4c737a06eaa2a8", "text": "In this study, the swing-up of a double inverted pendulum is controlled by nonlinear model predictive control (NMPC).
The fast computation algorithm called C/GMRES (continuation/generalized minimal residual) is applied to solve a nonlinear two-point boundary value problem over a receding horizon in real time. The goal is to swing-up and stabilize two pendulums from the downward to upright position. To make the tuning process of the performance index simpler, the terminal cost in the performance index is given by a solution of the algebraic Riccati equation. The simulation results show that C/GMRES can solve the NMPC problem in real time and swingup the double inverted pendulum with a significant reduction in the computational cost compared with Newton’s method.", "title": "" }, { "docid": "d5d160d536b72bd8f40d42bc609640f5", "text": "Weight pruning has been introduced as an efficient model compression technique. Even though pruning removes significant amount of weights in a network, memory requirement reduction was limited since conventional sparse matrix formats require significant amount of memory to store index-related information. Moreover, computations associated with such sparse matrix formats are slow because sequential sparse matrix decoding process does not utilize highly parallel computing systems efficiently. As an attempt to compress index information while keeping the decoding process parallelizable, Viterbi-based pruning was suggested. Decoding non-zero weights, however, is still sequential in Viterbi-based pruning. In this paper, we propose a new sparse matrix format in order to enable a highly parallel decoding process of the entire sparse matrix. The proposed sparse matrix is constructed by combining pruning and weight quantization. For the latest RNN models on PTB and WikiText-2 corpus, LSTM parameter storage requirement is compressed 19× using the proposed sparse matrix format compared to the baseline model. Compressed weight and indices can be reconstructed into a dense matrix fast using Viterbi encoders. Simulation results show that the proposed scheme can feed parameters to processing elements 20 % to 106 % faster than the case where the dense matrix values directly come from DRAM.", "title": "" }, { "docid": "3b167c48f2e658b1001ddbfab02d2729", "text": "We consider the recognition of activities from passive entities by analysing radio-frequency (RF)-channel fluctuation. In particular, we focus on the recognition of activities by active Software-defined-radio (SDR)-based Device-free Activity Recognition (DFAR) systems and investigate the localisation of activities performed, the generalisation of features for alternative environments and the distinction between walking speeds. Furthermore, we conduct case studies for Received Signal Strength (RSS)-based active and continuous signal-based passive systems to exploit the accuracy decrease in these related cases. All systems are compared to an accelerometer-based recognition system.", "title": "" }, { "docid": "4852971924b06e1314b8946078e15b44", "text": "In this work we introduce a graph theoretical method to compare MEPs, which is independent of molecular alignment. It is based on the edit distance of weighted rooted trees, which encode the geometrical and topological information of Negative Molecular Isopotential Surfaces. A meaningful chemical classification of a set of 46 molecules with different functional groups was achieved. 
Structure--activity relationships for the corticosteroid binding affinity (CBG) of 31 steroids by means of hierarchical clustering resulted in a clear partitioning in high, intermediate, and low activity groups, whereas the results from quantitative structure--activity relationships, obtained from a partial least-squares analysis, showed comparable or better cross-validated correlation coefficients than the ones reported for previous methods based solely in the MEP.", "title": "" }, { "docid": "320bde052bb8d325c90df45cb21ac5de", "text": "The power generated by solar photovoltaic (PV) module depends on surrounding irradiance, temperature and shading conditions. Under partial shading conditions (PSC) the power from the PV module can be dramatically reduced and maximum power point tracking (MPPT) control will be affected. This paper presents a hybrid simulation model of PV cell/module and system using Matlab®/Simulink® and Pspice®. The hybrid simulation model includes the solar PV cells and the converter power stage and can be expanded to add MPPT control and other functions. The model is able to simulate both the I-V characteristics curves and the P-V characteristics curves of PV modules under uniform shading conditions (USC) and PSC. The model is used to study different parameters variations effects on the PV array. The developed model is suitable to simulate several homogeneous or/and heterogeneous PV cells or PV panels connected in series and/or in parallel.", "title": "" }, { "docid": "6a4cd21704bfbdf6fb3707db10f221a8", "text": "Learning long term dependencies in recurrent networks is difficult due to vanishing and exploding gradients. To overcome this difficulty, researchers have developed sophisticated optimization techniques and network architectures. In this paper, we propose a simpler solution that use recurrent neural networks composed of rectified linear units. Key to our solution is the use of the identity matrix or its scaled version to initialize the recurrent weight matrix. We find that our solution is comparable to a standard implementation of LSTMs on our four benchmarks: two toy problems involving long-range temporal structures, a large language modeling problem and a benchmark speech recognition problem.", "title": "" }, { "docid": "f267f73e9770184fbe617446ee4782c0", "text": "Juvenile dermatomyositis (JDM) is a rare, potentially life-threatening systemic autoimmune disease primarily affecting muscle and skin. Recent advances in the recognition, standardised assessment and treatment of JDM have been greatly facilitated by large collaborative research networks. Through these networks, a number of immunogenetic risk factors have now been defined, as well as a number of potential pathways identified in the aetio-pathogenesis of JDM. Myositis-associated and myositis-specific autoantibodies are helping to sub-phenotype JDM, defined by clinical features, outcomes and immunogenetic risk factors. Partially validated tools to assess disease activity and damage have assisted in standardising outcomes. Aggressive treatment approaches, including multiple initial therapies, as well as new drugs and biological therapies for refractory disease, offer promise of improved outcomes and less corticosteroid-related toxicity.", "title": "" }, { "docid": "70e88fe5fc43e0815a1efa05e17f7277", "text": "Smoke detection is a crucial task in many video surveillance applications and could have a great impact to raise the level of safety of urban areas. 
Many commercial smoke detection sensors exist but most of them cannot be applied in open space or outdoor scenarios. With this aim, the paper presents a smoke detection system that uses a common CCD camera sensor to detect smoke in images and trigger alarms. First, a proper background model is proposed to reliably extract smoke regions and avoid over-segmentation and false positives in outdoor scenarios where many distractors are present, such as moving trees or light reflexes. A novel Bayesian approach is adopted to detect smoke regions in the scene analyzing image energy by means of the Wavelet Transform coefficients and Color Information. A statistical model of image energy is built, using a temporal Gaussian Mixture, to analyze the energy decay that typically occurs when smoke covers the scene then the detection is strengthen evaluating the color blending between a reference smoke color and the input frame. The proposed system is capable of detecting rapidly smoke events both in night and in day conditions with a reduced number of false alarms hence is particularly suitable for monitoring large outdoor scenarios where common sensors would fail. An extensive experimental campaign both on recorded videos and live cameras evaluates the efficacy and efficiency of the system in many real world scenarios, such as outdoor storages and forests.", "title": "" }, { "docid": "5bb36646f4db3d2efad8e0ee828b3022", "text": "PURPOSE\nWhile modern clinical CT scanners under normal circumstances produce high quality images, severe artifacts degrade the image quality and the diagnostic value if metal prostheses or other metal objects are present in the field of measurement. Standard methods for metal artifact reduction (MAR) replace those parts of the projection data that are affected by metal (the so-called metal trace or metal shadow) by interpolation. However, while sinogram interpolation methods efficiently remove metal artifacts, new artifacts are often introduced, as interpolation cannot completely recover the information from the metal trace. The purpose of this work is to introduce a generalized normalization technique for MAR, allowing for efficient reduction of metal artifacts while adding almost no new ones. The method presented is compared to a standard MAR method, as well as MAR using simple length normalization.\n\n\nMETHODS\nIn the first step, metal is segmented in the image domain by thresholding. A 3D forward projection identifies the metal trace in the original projections. Before interpolation, the projections are normalized based on a 3D forward projection of a prior image. This prior image is obtained, for example, by a multithreshold segmentation of the initial image. The original rawdata are divided by the projection data of the prior image and, after interpolation, denormalized again. Simulations and measurements are performed to compare normalized metal artifact reduction (NMAR) to standard MAR with linear interpolation and MAR based on simple length normalization.\n\n\nRESULTS\nPromising results for clinical spiral cone-beam data are presented in this work. Included are patients with hip prostheses, dental fillings, and spine fixation, which were scanned at pitch values ranging from 0.9 to 3.2. Image quality is improved considerably, particularly for metal implants within bone structures or in their proximity. The improvements are evaluated by comparing profiles through images and sinograms for the different methods and by inspecting ROIs. 
NMAR outperforms both other methods in all cases. It reduces metal artifacts to a minimum, even close to metal regions. Even for patients with dental fillings, which cause most severe artifacts, satisfactory results are obtained with NMAR. In contrast to other methods, NMAR prevents the usual blurring of structures close to metal implants if the metal artifacts are moderate.\n\n\nCONCLUSIONS\nNMAR clearly outperforms the other methods for both moderate and severe artifacts. The proposed method reliably reduces metal artifacts from simulated as well as from clinical CT data. Computationally efficient and inexpensive compared to iterative methods, NMAR can be used as an additional step in any conventional sinogram inpainting-based MAR method.", "title": "" }, { "docid": "1a41bd991241ed1751beda2362465a0d", "text": "Over the last decade, Convolutional Neural Networks (CNN) saw a tremendous surge in performance. However, understanding what a network has learned still proves to be a challenging task. To remedy this unsatisfactory situation, a number of groups have recently proposed different methods to visualize the learned models. In this work we suggest a general taxonomy to classify and compare these methods, subdividing the literature into three main categories and providing researchers with a terminology to base their works on. Furthermore, we introduce the FeatureVis library for MatConvNet: an extendable, easy to use open source library for visualizing CNNs. It contains implementations from each of the three main classes of visualization methods and serves as a useful tool for an enhanced understanding of the features learned by intermediate layers, as well as for the analysis of why a network might fail for certain examples.", "title": "" }, { "docid": "47f1d6df5ec3ff30d747fb1fcbc271a7", "text": "a r t i c l e i n f o Experimental studies routinely show that participants who play a violent game are more aggressive immediately following game play than participants who play a nonviolent game. The underlying assumption is that nonviolent games have no effect on aggression, whereas violent games increase it. The current studies demonstrate that, although violent game exposure increases aggression, nonviolent video game exposure decreases aggressive thoughts and feelings (Exp 1) and aggressive behavior (Exp 2). When participants assessed after a delay were compared to those measured immediately following game play, violent game players showed decreased aggressive thoughts, feelings and behavior, whereas nonviolent game players showed increases in these outcomes. Experiment 3 extended these findings by showing that exposure to nonviolent puzzle-solving games with no expressly prosocial content increases prosocial thoughts, relative to both violent game exposure and, on some measures, a no-game control condition. Implications of these findings for models of media effects are discussed. A major development in mass media over the last 25 years has been the advent and rapid growth of the video game industry. From the earliest arcade-based console games, video games have been immediately and immensely popular, particularly among young people and their subsequent introduction to the home market only served to further elevate their prevalence (Gentile, 2009). Given their popularity, social scientists have been concerned with the potential effects of video games on those who play them, focusing particularly on games with violent content. 
While a large percentage of games have always involved the destruction of enemies, recent advances in technology have enabled games to become steadily more realistic. Coupled with an increase in the number of adult players, these advances have enabled the development of games involving more and more graphic violence. Over the past several years, the majority of best-selling games have involved frequent and explicit acts of violence as a central gameplay theme (Smith, Lachlan, & Tamborini, 2003). A video game is essentially a simulated experience. Virtually every major theory of human aggression, including social learning theory, predicts that repeated simulation of antisocial behavior will produce an increase in antisocial behavior (e.g., aggression) and a decrease in prosocial behavior (e.g., helping) outside the simulated environment (i.e., in \" real life \"). In addition, an increase in the perceived realism of the simulation is posited to increase the strength of negative effects (Gentile & Anderson, 2003). Meta-analyses …", "title": "" }, { "docid": "041772bbad50a5bf537c0097e1331bdd", "text": "As students read expository text, comprehension is improved by pausing to answer questions that reinforce the material. We describe an automatic question generator that uses semantic pattern recognition to create questions of varying depth and type for self-study or tutoring. Throughout, we explore how linguistic considerations inform system design. In the described system, semantic role labels of source sentences are used in a domain-independent manner to generate both questions and answers related to the source sentence. Evaluation results show a 44% reduction in the error rate relative to the best prior systems, averaging over all metrics, and up to 61% reduction in the error rate on grammaticality judgments.", "title": "" } ]
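The question-generation entry above maps semantic role labels of a source sentence to questions and answers. The sketch below illustrates the general idea with a single hand-written template; the role-to-wh-word table, the crude do-support, and the example roles are assumptions for illustration, not the system's actual patterns.

```python
# Illustrative only: turn one sentence's semantic-role analysis into simple questions.
WH_FOR_ROLE = {
    "ARG0": "Who",        # agent
    "ARG1": "What",       # patient / theme
    "ARGM-TMP": "When",   # temporal modifier
    "ARGM-LOC": "Where",  # locative modifier
}

def generate_questions(predicate, roles):
    """roles: dict mapping role labels (e.g. 'ARG0') to the text that fills them."""
    questions = []
    agent = roles.get("ARG0", "")
    for asked_role, wh_word in WH_FOR_ROLE.items():
        if asked_role not in roles:
            continue
        rest = " ".join(text for role, text in roles.items()
                        if role not in (asked_role, "ARG0"))
        if asked_role == "ARG0":
            q = f"{wh_word} {predicate} {rest}?"
        else:
            # Very rough do-support; a real system would use the verb's tense and number.
            q = f"{wh_word} does {agent} {predicate.rstrip('s')} {rest}?"
        questions.append((" ".join(q.split()), roles[asked_role]))
    return questions

# SRL-style analysis of: "The water cycle moves water through the atmosphere every day."
roles = {"ARG0": "the water cycle",
         "ARG1": "water through the atmosphere",
         "ARGM-TMP": "every day"}
for question, answer in generate_questions("moves", roles):
    print(question, "->", answer)
```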
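The recurrent-network entry above (ReLU units with the recurrent weight matrix initialized to the identity or a scaled identity) rests on one initialization trick; a minimal NumPy sketch of that initialization and forward pass follows. The input-weight scale and other hyperparameters are illustrative, not taken from the paper.

```python
import numpy as np

class IRNNCell:
    """ReLU recurrent cell with identity-initialized recurrent weights (an 'IRNN')."""

    def __init__(self, input_size, hidden_size, recurrent_scale=1.0, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(0.0, 0.001, size=(hidden_size, input_size))
        # Key idea: start the recurrent matrix at a (scaled) identity so the hidden
        # state is copied forward by default and gradients do not immediately vanish.
        self.W_rec = recurrent_scale * np.eye(hidden_size)
        self.b = np.zeros(hidden_size)

    def step(self, x, h):
        return np.maximum(0.0, self.W_in @ x + self.W_rec @ h + self.b)

    def run(self, inputs):
        h = np.zeros(self.b.shape)
        for x in inputs:
            h = self.step(x, h)
        return h

# With small inputs, the identity recurrence carries information across many steps.
cell = IRNNCell(input_size=4, hidden_size=8)
rng = np.random.default_rng(1)
seq = [rng.normal(size=4) for _ in range(100)]
print(cell.run(seq)[:4])
```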
scidocsrr
09af52aae82202026c94950754f17f7c
Wireless body sensor networks for health-monitoring applications.
[ { "docid": "a75ab88f3b7f672bc357429793e74635", "text": "To save life, casualty care requires that trauma injuries are accurately and expeditiously assessed in the field. This paper describes the initial bench testing of a wireless wearable pulse oximeter developed based on a small forehead mounted sensor. The battery operated device employs a lightweight optical reflectance sensor and incorporates an annular photodetector to reduce power consumption. The system also has short range wireless communication capabilities to transfer arterial oxygen saturation (SpO2), heart rate (HR), body acceleration, and posture information to a PDA. It has the potential for use in combat casualty care, such as for remote triage, and by first responders, such as firefighters", "title": "" } ]
[ { "docid": "b120095067684a67fe3327d18860e760", "text": "We present a flexible method for fusing information from optical and range sensors based on an accelerated high-dimensional filtering approach. Our system takes as input a sequence of monocular camera images as well as a stream of sparse range measurements as obtained from a laser or other sensor system. In contrast with existing approaches, we do not assume that the depth and color data streams have the same data rates or that the observed scene is fully static. Our method produces a dense, high-resolution depth map of the scene, automatically generating confidence values for every interpolated depth point. We describe how to integrate priors on object motion and appearance and how to achieve an efficient implementation using parallel processing hardware such as GPUs.", "title": "" }, { "docid": "e6cd81dfc8c6c505161e84faaf51fa04", "text": "It was assumed that the degraded image H was of the form H= W*S, where W is the original image, S is the point spread function, and * denotes the operation of convolution. It was also assumed that W, S, and H are discrete probability-frequency functions, not necessarily normalized. That is, the numerical value of a point of W, S, or H is considered as a measure of the frequency of the occurrence of an event at that point. S is usually in normalized form. Units of energy (which may be considered unique events) originating at a point in W are distributed at points in H according to the frequencies indicated by S. H then represents the resulting sums of the effects of the units of energy originating at all points of W. In what follows, each of the three letters has two uses when subscripted. For example, Wi indicates either the ith location in the array W or the value associated with the ith location. The unsubscripted letter refers to the entire array or the value associated with the array as in W = E i Wi. The doublesubscripted Wi j in two dimensions is interpreted similarly to Wi in one dimension. In the approximation formulas, a subscript r appears, which is the number of the iteration.", "title": "" }, { "docid": "0994065c757a88373a4d97e5facfee85", "text": "Scholarly literature suggests digital marketing skills gaps in industry, but these skills gaps are not clearly identified. The research aims to specify any digital marketing skills gaps encountered by professionals working in communication industries. In-depth interviews were undertaken with 20 communication industry professionals. A focus group followed, testing the rigour of the data. We find that a lack of specific technical skills; a need for best practice guidance on evaluation metrics, and a lack of intelligent futureproofing for dynamic technological change and development are skills gaps currently challenging the communication industry. However, the challenge of integrating digital marketing approaches with established marketing practice emerges as the key skills gap. Emerging from the key findings, a Digital Marketer Model was developed, highlighting the key competencies and skills needed by an excellent digital marketer. The research concludes that guidance on best practice, focusing upon evaluation metrics, futureproofing and strategic integration, needs to be developed for the communication industry. The Digital Marketing Model should be subject to further testing in industry and academia. 
Suggestions for further research are discussed.", "title": "" }, { "docid": "d54e33049b3f5170ec8bd09d8f17c05c", "text": "Deep learning algorithms seek to exploit the unknown structure in the input distribution in order to discover good representations, often at multiple levels, with higher-level learned features defined in terms of lower-level features. The objective is to make these higherlevel representations more abstract, with their individual features more invariant to most of the variations that are typically present in the training distribution, while collectively preserving as much as possible of the information in the input. Ideally, we would like these representations to disentangle the unknown factors of variation that underlie the training distribution. Such unsupervised learning of representations can be exploited usefully under the hypothesis that the input distribution P (x) is structurally related to some task of interest, say predicting P (y|x). This paper focusses on why unsupervised pre-training of representations can be useful, and how it can be exploited in the transfer learning scenario, where we care about predictions on examples that are not from the same distribution as the training distribution.", "title": "" }, { "docid": "48b2d263a0f547c5c284c25a9e43828e", "text": "This paper presents hierarchical topic models for integrating sentiment analysis with collaborative filtering. Our goal is to automatically predict future reviews to a given author from previous reviews. For this goal, we focus on differentiating author's preference, while previous sentiment analysis models process these review articles without this difference. Therefore, we propose a Latent Evaluation Topic model (LET) that infer each author's preference by introducing novel latent variables into author and his/her document layer. Because these variables distinguish the variety of words in each article by merging similar word distributions, LET incorporates the difference of writers' preferences into sentiment analysis. Consequently LET can determine the attitude of writers, and predict their reviews based on like-minded writers' reviews in the collaborative filtering approach. Experiments on review articles show that the proposed model can reduce the dimensionality of reviews to the low-dimensional set of these latent variables, and is a significant improvement over standard sentiment analysis models and collaborative filtering algorithms.", "title": "" }, { "docid": "49e875364e2551dda40b682bd37d4ea6", "text": "The short-circuit current calculation of any equipment in the power system is very important for selection of appropriate relay characteristics and circuit breaker for the protection of the system. The power system is undergoing changes because of large scale penetration of renewable energy sources in the conventional system. Major renewable sources which are included in the power system are wind energy and solar energy sources. The wind energy is supplied by wind turbine generators of various types. Type III generators i.e. Doubly Fed Induction Generator (DFIG) is the most common types of generator employed offering different behavior compared to conventionally employed synchronous generators. 
In this paper; the short circuit current contribution of DFIG is calculated analytically and the same is validated by PSCAD/EMTDC software under various wind speeds and by considering certain voltage drops of the generator output.", "title": "" }, { "docid": "370b1775eddfb6241078285872e1a009", "text": "Methods for Named Entity Recognition and Disambiguation (NERD) perform NER and NED in two separate stages. Therefore, NED may be penalized with respect to precision by NER false positives, and suffers in recall from NER false negatives. Conversely, NED does not fully exploit information computed by NER such as types of mentions. This paper presents J-NERD, a new approach to perform NER and NED jointly, by means of a probabilistic graphical model that captures mention spans, mention types, and the mapping of mentions to entities in a knowledge base. We present experiments with different kinds of texts from the CoNLL’03, ACE’05, and ClueWeb’09-FACC1 corpora. J-NERD consistently outperforms state-of-the-art competitors in end-to-end NERD precision, recall, and F1.", "title": "" }, { "docid": "9fefe5e216dec9b11f389c7d62175742", "text": "Physical interaction in robotics is a complex problem that requires not only accurate reproduction of the kinematic trajectories but also of the forces and torques exhibited during the movement. We base our approach on Movement Primitives (MP), as MPs provide a framework for modelling complex movements and introduce useful operations on the movements, such as generalization to novel situations, time scaling, and others. Usually, MPs are trained with imitation learning, where an expert demonstrates the trajectories. However, MPs used in physical interaction either require additional learning approaches, e.g., reinforcement learning, or are based on handcrafted solutions. Our goal is to learn and generate movements for physical interaction that are learned with imitation learning, from a small set of demonstrated trajectories. The Probabilistic Movement Primitives (ProMPs) framework is a recent MP approach that introduces beneficial properties, such as combination and blending of MPs, and represents the correlations present in the movement. The ProMPs provides a variable stiffness controller that reproduces the movement but it requires a dynamics model of the system. Learning such a model is not a trivial task, and, therefore, we introduce the model-free ProMPs, that are learning jointly the movement and the necessary actions from a few demonstrations. We derive a variable stiffness controller analytically. We further extent the ProMPs to include force and torque signals, necessary for physical interaction. We evaluate our approach in simulated and real robot tasks.", "title": "" }, { "docid": "016891dcefdf3668b6359d95617536b3", "text": "While most steps in the modern object detection methods are learnable, the region feature extraction step remains largely handcrafted, featured by RoI pooling methods. This work proposes a general viewpoint that unifies existing region feature extraction methods and a novel method that is end-to-end learnable. The proposed method removes most heuristic choices and outperforms its RoI pooling counterparts. It moves further towards fully learnable object detection.", "title": "" }, { "docid": "8c174dbb8468b1ce6f4be3676d314719", "text": "An estimated 24 million people worldwide have dementia, the majority of whom are thought to have Alzheimer's disease. 
Thus, Alzheimer's disease represents a major public health concern and has been identified as a research priority. Although there are licensed treatments that can alleviate symptoms of Alzheimer's disease, there is a pressing need to improve our understanding of pathogenesis to enable development of disease-modifying treatments. Methods for improving diagnosis are also moving forward, but a better consensus is needed for development of a panel of biological and neuroimaging biomarkers that support clinical diagnosis. There is now strong evidence of potential risk and protective factors for Alzheimer's disease, dementia, and cognitive decline, but further work is needed to understand these better and to establish whether interventions can substantially lower these risks. In this Seminar, we provide an overview of recent evidence regarding the epidemiology, pathogenesis, diagnosis, and treatment of Alzheimer's disease, and discuss potential ways to reduce the risk of developing the disease.", "title": "" }, { "docid": "3f157067ce2d5d6b6b4c9d9faaca267b", "text": "The rise of network forms of organization is a key consequence of the ongoing information revolution. Business organizations are being newly energized by networking, and many professional militaries are experimenting with flatter forms of organization. In this chapter, we explore the impact of networks on terrorist capabilities, and consider how this development may be associated with a move away from emphasis on traditional, episodic efforts at coercion to a new view of terror as a form of protracted warfare. Seen in this light, the recent bombings of U.S. embassies in East Africa, along with the retaliatory American missile strikes, may prove to be the opening shots of a war between a leading state and a terror network. We consider both the likely context and the conduct of such a war, and offer some insights that might inform policies aimed at defending against and countering terrorism.", "title": "" }, { "docid": "a45ac7298f57a1be7bf5a968a3d4f10b", "text": "Recent work has shown that tight concentration of the entire spectrum of singular values of a deep network’s input-output Jacobian around one at initialization can speed up learning by orders of magnitude. Therefore, to guide important design choices, it is important to build a full theoretical understanding of the spectra of Jacobians at initialization. To this end, we leverage powerful tools from free probability theory to provide a detailed analytic understanding of how a deep network’s Jacobian spectrum depends on various hyperparameters including the nonlinearity, the weight and bias distributions, and the depth. For a variety of nonlinearities, our work reveals the emergence of new universal limiting spectral distributions that remain concentrated around one even as the depth goes to infinity.", "title": "" }, { "docid": "5174b54a546002863a50362c70921176", "text": "The intrinsic complexity of the brain can lead one to set aside issues related to its relationships with the body, but the field of embodied cognition emphasizes that understanding brain function at the system level requires one to address the role of the brain-body interface. It has only recently been appreciated that this interface performs huge amounts of computation that does not have to be repeated by the brain, and thus affords the brain great simplifications in its representations. In effect the brain's abstract states can refer to coded representations of the world created by the body. 
But even if the brain can communicate with the world through abstractions, the severe speed limitations in its neural circuitry mean that vast amounts of indexing must be performed during development so that appropriate behavioral responses can be rapidly accessed. One way this could happen would be if the brain used a decomposition whereby behavioral primitives could be quickly accessed and combined. This realization motivates our study of independent sensorimotor task solvers, which we call modules, in directing behavior. The issue we focus on herein is how an embodied agent can learn to calibrate such individual visuomotor modules while pursuing multiple goals. The biologically plausible standard for module programming is that of reinforcement given during exploration of the environment. However this formulation contains a substantial issue when sensorimotor modules are used in combination: The credit for their overall performance must be divided amongst them. We show that this problem can be solved and that diverse task combinations are beneficial in learning and not a complication, as usually assumed. Our simulations show that fast algorithms are available that allot credit correctly and are insensitive to measurement noise.", "title": "" }, { "docid": "ea5a455bca9ff0dbb1996bd97d89dfe5", "text": "Single exon genes (SEG) are archetypical of prokaryotes. Hence, their presence in intron-rich, multi-cellular eukaryotic genomes is perplexing. Consequently, a study on SEG origin and evolution is important. Towards this goal, we took the first initiative of identifying and counting SEG in nine completely sequenced eukaryotic organisms--four of which are unicellular (E. cuniculi, S. cerevisiae, S. pombe, P. falciparum) and five of which are multi-cellular (C. elegans, A. thaliana, D. melanogaster, M. musculus, H. sapiens). This exercise enabled us to compare their proportion in unicellular and multi-cellular genomes. The comparison suggests that the SEG fraction decreases with gene count (r = -0.80) and increases with gene density (r = 0.88) in these genomes. We also examined the distribution patterns of their protein lengths in different genomes.", "title": "" }, { "docid": "95b23060ff9ee6393acc7b8a7f0c0535", "text": "The increased price and the limited supply of rare-earth materials have been recognized as a problem by the international clean energy community. Rare-earth permanent magnets are widely used in electrical motors in hybrid and pure electrical vehicles, which are prized for improving fuel efficiency and reducing carbon dioxide (CO2) emissions. Such motors must have characteristics of high efficiency, compactness, and high torque density, as well as a wide range of operating speeds. So far, these demands have not been achieved without the use of rare-earth permanent magnets. Here, we show that a switched reluctance motor that is competitive with rare-earth permanent-magnet motors can be designed. The developed motor contains no rare-earth permanent magnets, but rather, employs high-silicon steel with low iron loss to improve efficiency. Experiments showed that the developed motor has competitive or better efficiency, torque density, compactness, and range of operating speeds compared with a standard rare-earth permanent-magnet motor. 
Our results demonstrate how a rare-earth-free motor could be developed to be competitive with rare-earth permanent-magnet motors, for use as a more affordable and sustainable alternative, not only in electric and hybrid vehicles, but also in the wide variety of industrial applications.", "title": "" }, { "docid": "e9b438cfe853e98f05b661f9149c0408", "text": "Misinformation and fact-checking are opposite forces in the news environment: the former creates inaccuracies to mislead people, while the latter provides evidence to rebut the former. These news articles are often posted on social media and attract user engagement in the form of comments. In this paper, we investigate linguistic (especially emotional and topical) signals expressed in user comments in the presence of misinformation and fact-checking. We collect and analyze a dataset of 5,303 social media posts with 2,614,374 user comments from Facebook, Twitter, and YouTube, and associate these posts to fact-check articles from Snopes and PolitiFact for veracity rulings (i.e., from true to false). We find that linguistic signals in user comments vary significantly with the veracity of posts, e.g., we observe more misinformation-awareness signals and extensive emoji and swear word usage with falser posts. We further show that these signals can help to detect misinformation. In addition, we find that while there are signals indicating positive effects after fact-checking, there are also signals indicating potential \"backfire\" effects.", "title": "" }, { "docid": "f80458241f0a33aebd8044bf85bd25ec", "text": "Brachial–ankle pulse wave velocity (baPWV) is a promising technique to assess arterial stiffness conveniently. However, it is not known whether baPWV is associated with well-established indices of central arterial stiffness. We determined the relation of baPWV with aortic (carotid-femoral) PWV, leg (femoral-ankle) PWV, and carotid augmentation index (AI) by using both cross-sectional and interventional approaches. First, we studied 409 healthy adults aged 18–76 years. baPWV correlated significantly with aortic PWV (r=0.76), leg PWV (r=0.76), and carotid AI (r=0.52). A stepwise regression analysis revealed that aortic PWV was the primary independent correlate of baPWV, explaining 58% of the total variance in baPWV. Additional 23% of the variance was explained by leg PWV. Second, 13 sedentary healthy men were studied before and after a 16-week moderate aerobic exercise intervention (brisk walking to jogging; 30–45 min/day; 4–5 days/week). Reductions in aortic PWV observed with the exercise intervention were significantly and positively associated with the corresponding changes in baPWV (r=0.74). A stepwise regression analysis revealed that changes in aortic PWV were the only independent correlate of changes in baPWV (β=0.74), explaining 55% of the total variance. These results suggest that baPWV may provide qualitatively similar information to those derived from central arterial stiffness although some portions of baPWV may be determined by peripheral arterial stiffness.", "title": "" }, { "docid": "5455a8fd6e6be03e3a4163665425247d", "text": "The change in spring phenology is recognized to exert a major influence on carbon balance dynamics in temperate ecosystems. Over the past several decades, several studies focused on shifts in spring phenology; however, large uncertainties still exist, and one understudied source could be the method implemented in retrieving satellite-derived spring phenology. 
To account for this potential uncertainty, we conducted a multimethod investigation to quantify changes in vegetation green-up date from 1982 to 2010 over temperate China, and to characterize climatic controls on spring phenology. Over temperate China, the five methods estimated that the vegetation green-up onset date advanced, on average, at a rate of 1.3 ± 0.6 days per decade (ranging from 0.4 to 1.9 days per decade) over the last 29 years. Moreover, the sign of the trends in vegetation green-up date derived from the five methods was broadly consistent spatially and for different vegetation types, but with large differences in the magnitude of the trend. The large intermethod variance was notably observed in arid and semiarid vegetation types. Our results also showed that change in vegetation green-up date is more closely correlated with temperature than with precipitation. However, the temperature sensitivity of spring vegetation green-up date became higher as precipitation increased, implying that precipitation is an important regulator of the response of vegetation spring phenology to change in temperature. This intricate linkage between spring phenology and precipitation must be taken into account in current phenological models which are mostly driven by temperature.", "title": "" }, { "docid": "3b9b49f8c2773497f8e05bff4a594207", "text": "SSD (Single Shot Detector) is one of the state-of-the-art object detection algorithms, and it combines high detection accuracy with real-time speed. However, it is widely recognized that SSD is less accurate in detecting small objects compared to large objects, because it ignores the context from outside the proposal boxes. In this paper, we present CSSD–a shorthand for context-aware single-shot multibox object detector. CSSD is built on top of SSD, with additional layers modeling multi-scale contexts. We describe two variants of CSSD, which differ in their context layers, using dilated convolution layers (DiCSSD) and deconvolution layers (DeCSSD) respectively. The experimental results show that the multi-scale context modeling significantly improves the detection accuracy. In addition, we study the relationship between effective receptive fields (ERFs) and the theoretical receptive fields (TRFs), particularly on a VGGNet. The empirical results further strengthen our conclusion that SSD coupled with context layers achieves better detection results especially for small objects (+3.2% mAP@0.5 on MSCOCO compared to the newest SSD), while maintaining comparable runtime performance.", "title": "" }, { "docid": "feef714b024ad00086a5303a8b74b0a4", "text": "Detecting and recognizing text in natural scene images is a challenging, yet not completely solved task. In recent years several new systems that try to solve at least one of the two sub-tasks (text detection and text recognition) have been proposed. In this paper we present STN-OCR, a step towards semi-supervised neural networks for scene text recognition that can be optimized end-to-end. In contrast to most existing works that consist of multiple deep neural networks and several pre-processing steps we propose to use a single deep neural network that learns to detect and recognize text from natural images in a semi-supervised way. STN-OCR is a network that integrates and jointly learns a spatial transformer network [16], that can learn to detect text regions in an image, and a text recognition network that takes the identified text regions and recognizes their textual content.
We investigate how our model behaves on a range of different tasks (detection and recognition of characters, and lines of text). Experimental results on public benchmark datasets show the ability of our model to handle a variety of different tasks, without substantial changes in its overall network structure.", "title": "" } ]
scidocsrr
144dece26525a57f4c531eb4f1d3760b
Dynamic trees as search trees via Euler tours, applied to the network simplex algorithm
[ { "docid": "5e5780bbd151ccf981fe69d5eb70b067", "text": "We give efficient algorithms for maintaining a minimum spanning forest of a planar graph subject to on-line modifications. The modifications supported include changes in the edge weights, and insertion and deletion of edges and vertices. To implement the algorithms, we develop a data structure called an edge-or&reck dynumic tree, which is a variant of the dynamic tree data structure of Sleator and Tarjan. Using this data structure, our algorithms run in O(logn) time per operation and O(n) space. The algorithms can be used to maintain the connected components of a dynamic planar graph in O(logn) time per operation. *Computer Science Laboratory, Xerox PARC, 3333 Coyote Hill Rd., Palo Alto, CA 94304. This work was done while the author was at the Department of Computer Science, Columbia University, New York, NY 10027. **Department of Computer Science, Columbia University, New York, NY 10027 and Dipartmento di Informatica e Sistemistica, Universitb di Roma, Rome, Italy. ***Department of Computer Science, Brown University, Box 1910, Providence, RI 02912-1910. #Department of Computer Science, Princeton University, Princeton, NJ 08544, and AT&T Bell Laboratories, Murray Hill, New Jersey 07974. ##Department of Computer Science, Stanford University, Stanford, CA 94305. This work was done while the author was at Department of Computer Science, Princeton University, Princeton, NJ 08544. ###IBM Research Division, T. J. Watson Research Center, Yorktown Heights, NY 10598. + Research supported in part by NSF grant CCR-8X-14977, NSF grant DCR-86-05962, ONR Contract N00014-87-H-0467 and Esprit II Basic Research Actions Program of the European Communities Contract No. 3075.", "title": "" } ]
[ { "docid": "1f1c4c69a4c366614f0cc9ecc24365ba", "text": "BACKGROUND\nBurnout is a major issue among medical students. Its general characteristics are loss of interest in study and lack of motivation. A study of the phenomenon must extend beyond the university environment and personality factors to consider whether career choice has a role in the occurrence of burnout.\n\n\nMETHODS\nQuantitative, national survey (n = 733) among medical students, using a 12-item career motivation list compiled from published research results and a pilot study. We measured burnout by the validated Hungarian version of MBI-SS.\n\n\nRESULTS\nThe most significant career choice factor was altruistic motivation, followed by extrinsic motivations: gaining a degree, finding a job, accessing career opportunities. Lack of altruism was found to be a major risk factor, in addition to the traditional risk factors, for cynicism and reduced academic efficacy. Our study confirmed the influence of gender differences on both career choice motivations and burnout.\n\n\nCONCLUSION\nThe structure of career motivation is a major issue in the transformation of the medical profession. Since altruism is a prominent motivation for many women studying medicine, their entry into the profession in increasing numbers may reinforce its traditional character and act against the present trend of deprofessionalization.", "title": "" }, { "docid": "07381e533ec04794a74abc0560d7c8af", "text": "Many applications in several domains such as telecommunications, network security, large-scale sensor networks, require online processing of continuous data flows. They produce very high loads that requires aggregating the processing capacity of many nodes. Current Stream Processing Engines do not scale with the input load due to single-node bottlenecks. Additionally, they are based on static configurations that lead to either under or overprovisioning. In this paper, we present StreamCloud, a scalable and elastic stream processing engine for processing large data stream volumes. StreamCloud uses a novel parallelization technique that splits queries into subqueries that are allocated to independent sets of nodes in a way that minimizes the distribution overhead. Its elastic protocols exhibit low intrusiveness, enabling effective adjustment of resources to the incoming load. Elasticity is combined with dynamic load balancing to minimize the computational resources used. The paper presents the system design, implementation, and a thorough evaluation of the scalability and elasticity of the fully implemented system.", "title": "" }, { "docid": "662ae9d792b3889dbd0450a65259253a", "text": "We present a new parametrization for point features within monocular simultaneous localization and mapping (SLAM) that permits efficient and accurate representation of uncertainty during undelayed initialization and beyond, all within the standard extended Kalman filter (EKF). The key concept is direct parametrization of the inverse depth of features relative to the camera locations from which they were first viewed, which produces measurement equations with a high degree of linearity. Importantly, our parametrization can cope with features over a huge range of depths, even those that are so far from the camera that they present little parallax during motion---maintaining sufficient representative uncertainty that these points retain the opportunity to \"come in'' smoothly from infinity if the camera makes larger movements. 
Feature initialization is undelayed in the sense that even distant features are immediately used to improve camera motion estimates, acting initially as bearing references but not permanently labeled as such. The inverse depth parametrization remains well behaved for features at all stages of SLAM processing, but has the drawback in computational terms that each point is represented by a 6-D state vector as opposed to the standard three of a Euclidean XYZ representation. We show that once the depth estimate of a feature is sufficiently accurate, its representation can safely be converted to the Euclidean XYZ form, and propose a linearity index that allows automatic detection and conversion to maintain maximum efficiency---only low parallax features need be maintained in inverse depth form for long periods. We present a real-time implementation at 30 Hz, where the parametrization is validated in a fully automatic 3-D SLAM system featuring a handheld single camera with no additional sensing. Experiments show robust operation in challenging indoor and outdoor environments with a very large range of scene depths, varied motion, and also real-time 360° loop closing.", "title": "" }, { "docid": "5f21a1348ad836ded2fd3d3264455139", "text": "To date, brain imaging has largely relied on X-ray computed tomography and magnetic resonance angiography with limited spatial resolution and long scanning times. Fluorescence-based brain imaging in the visible and traditional near-infrared regions (400-900 nm) is an alternative but currently requires craniotomy, cranial windows and skull thinning techniques, and the penetration depth is limited to 1-2 mm due to light scattering. Here, we report through-scalp and through-skull fluorescence imaging of mouse cerebral vasculature without craniotomy utilizing the intrinsic photoluminescence of single-walled carbon nanotubes in the 1.3-1.4 micrometre near-infrared window. Reduced photon scattering in this spectral region allows fluorescence imaging reaching a depth of >2 mm in mouse brain with sub-10 micrometre resolution. An imaging rate of ~5.3 frames/s allows for dynamic recording of blood perfusion in the cerebral vessels with sufficient temporal resolution, providing real-time assessment of blood flow anomaly in a mouse middle cerebral artery occlusion stroke model.", "title": "" }, { "docid": "88530d3d70df372b915556eab919a3fe", "text": "The airway mucosa is lined by a continuous epithelium comprised of multiple cell phenotypes, several of which are secretory. Secretions produced by these cells mix with a variety of macromolecules, ions and water to form a respiratory tract fluid that protects the more distal airways and alveoli from injury and infection. The present article highlights the structure of the mucosa, particularly its secretory cells, gives a synopsis of the structure of mucus, and provides new information on the localization of mucin (MUC) genes that determine the peptide sequence of the protein backbone of the glycoproteins, which are a major component of mucus. Airway secretory cells comprise the mucous, serous, Clara and dense-core granulated cells of the surface epithelium, and the mucous and serous acinar cells of the submucosal glands. Several transitional phenotypes may be found, especially during irritation or disease. Respiratory tract mucins constitute a heterogeneous group of high molecular weight, polydisperse richly glycosylated molecules: both secreted and membrane-associated forms of mucin are found.
Several mucin (MUC) genes encoding the protein core of mucin have been identified. We demonstrate the localization of MUC gene expression to a number of distinct cell types and their upregulation both in response to experimentally administered lipopolysaccharide and cystic fibrosis.", "title": "" }, { "docid": "654b7a674977969237301cd874bda5d1", "text": "This paper and its successor examine the gap between ecotourism theory as revealed in the literature and ecotourism practice as indicated by its on-site application. A framework is suggested which, if implemented through appropriate management, can help to achieve a balance between conservation and development through the promotion of synergistic relationships between natural areas, local populations and tourism. The framework can also be used to assess the status of ecotourism at particular sites. ( 1999 Published by Elsevier Science Ltd. All rights reserved.", "title": "" }, { "docid": "c15618df21bce45cbad6766326de3dbd", "text": "The birth of intersexed infants, babies born with genitals that are neither clearly male nor clearly female, has been documented throughout recorded time.' In the late twentieth century, medical technology has advanced to allow scientists to determine chromosomal and hormonal gender, which is typically taken to be the real, natural, biological gender, usually referred to as \"sex.\"2 Nevertheless, physicians who handle the cases of intersexed infants consider several factors beside biological ones in determining, assigning, and announcing the gender of a particular infant. Indeed, biological factors are often preempted in their deliberations by such cultural factors as the \"correct\" length of the penis and capacity of the vagina.", "title": "" }, { "docid": "8433f58b63632abf9074eefdf5fa429f", "text": "We are developing a monopivot centrifugal pump for circulatory assist for a period of more than 2 weeks. The impeller is supported by a pivot bearing at one end and by a passive magnetic bearing at the other. The pivot undergoes concentrated exposure to the phenomena of wear, hemolysis, and thrombus formation. The pivot durability, especially regarding the combination of male/female pivot radii, was examined through rotating wear tests and animal tests. As a result, combinations of similar radii for the male/female pivots were found to provide improved pump durability. In the extreme case, the no-gap combination would result in no thrombus formation.", "title": "" }, { "docid": "74ea9bde4e265dba15cf9911fce51ece", "text": "We consider a system aimed at improving the resolution of a conventional airborne radar, looking in the forward direction, by forming an end-fire synthetic array along the airplane line of flight. The system is designed to operate even in slant (non-horizontal) flight trajectories, and it allows imaging along the line of flight. By using the array theory, we analyze system geometry and ambiguity problems, and analytically evaluate the achievable resolution and the required pulse repetition frequency. 
The processing computational burden is also analyzed, and finally some simulation results are provided.", "title": "" }, { "docid": "98889e4861485fdc04cff54640f4d3ab", "text": "The design, prototype implementation, and demonstration of an ethical governor capable of restricting lethal action of an autonomous system in a manner consistent with the Laws of War and Rules of Engagement is presented.", "title": "" }, { "docid": "c07f30465dc4ed355847d015fee1cadb", "text": "Instant messaging (IM) is a popular Internet application around the world. In China, the competition in the IM market is very intense and there are over 10 IM products available. We examine the intrinsic and extrinsic motivations that affect Chinese users' acceptance of IM based on the theory of planned behavior (TPB), the technology acceptance model (TAM), and the flow theory. Results demonstrate that users' perceived usefulness and perceived enjoyment significantly influence their attitude towards using IM, which in turn impacts their behavioral intention. Furthermore, perceived usefulness, users' concentration, and two components of the theory of planned behavior (TPB): subjective norm and perceived behavioral control, also have significant impact on the behavioral intention. Users' intention determines their actual usage behavior.", "title": "" }, { "docid": "1f6637ecfc9415dd0f827ab6d3149af3", "text": "Impaired renal function due to acute kidney injury (AKI) and/or chronic kidney diseases (CKD) is frequent in cirrhosis. Recurrent episodes of AKI may occur in end-stage cirrhosis. Differential diagnosis between functional (prerenal and hepatorenal syndrome) and acute tubular necrosis (ATN) is crucial. The concept that AKI and CKD represent a continuum rather than distinct entities is now emerging. Not all patients with AKI have a potential for full recovery. Precise evaluation of kidney function and identification of kidney changes in patients with cirrhosis is central in predicting reversibility. This review examines current biomarkers for assessing renal function and identifying the cause and mechanisms of impaired renal function. When CKD is suspected, clearance of exogenous markers is the reference to assess glomerular filtration rate, as creatinine is inaccurate and cystatin C needs further evaluation. Recent biomarkers may help differentiate ATN from hepatorenal syndrome. Neutrophil gelatinase-associated lipocalin has been the most extensively studied biomarker yet; however, there are no clear-cut values that differentiate each of these conditions. Studies comparing ATN and hepatorenal syndrome in cirrhosis do not include a gold standard. Combinations of innovative biomarkers are attractive to identify patients justifying simultaneous liver and kidney transplantation. Accurate biomarkers of underlying CKD are lacking and kidney biopsy is often contraindicated in this population. Urinary microRNAs are attractive although not definitely validated. Efforts should be made to develop biomarkers of kidney fibrosis, a common and irreversible feature of CKD, whatever the cause.
Biomarkers of maladaptive repair leading to irreversible changes and CKD after AKI are also promising.", "title": "" }, { "docid": "c6645086397ba0825f5f283ba5441cbf", "text": "Anomalies have broad patterns corresponding to their causes. In industry, anomalies are typically observed as equipment failures. Anomaly detection aims to detect such failures as anomalies. Although this is usually a binary classification task, the potential existence of unseen (unknown) failures makes this task difficult. Conventional supervised approaches are suitable for detecting seen anomalies but not for unseen anomalies. Although unsupervised neural networks for anomaly detection now detect unseen anomalies well, they cannot utilize anomalous data for detecting seen anomalies even if some data have been made available. Thus, providing an anomaly detector that finds both seen and unseen anomalies well is still a tough problem. In this paper, we introduce a novel probabilistic representation of anomalies to solve this problem. The proposed model defines the normal and anomaly distributions using the analogy between a set and the complementary set. We applied these distributions to an unsupervised variational autoencoder (VAE)-based method and turned it into a supervised VAE-based method. We tested the proposed method with well-known data and real industrial data to show that the proposed method detects seen anomalies better than the conventional unsupervised method without degrading the detection performance for unseen anomalies.", "title": "" }, { "docid": "12cac87e781307224db2c3edf0d217b8", "text": "Fetal ventriculomegaly (VM) refers to the enlargement of the cerebral ventricles in utero. It is associated with the postnatal diagnosis of hydrocephalus. VM is clinically diagnosed on ultrasound and is defined as an atrial diameter greater than 10 mm. Because of the anatomic detail seen with advanced imaging, VM is often further characterized by fetal magnetic resonance imaging (MRI). Fetal VM is a heterogeneous condition with various etiologies and a wide range of neurodevelopmental outcomes. These outcomes are heavily dependent on the presence or absence of associated anomalies and the direct cause of the ventriculomegaly rather than on the absolute degree of VM. In this review article, we discuss diagnosis, work-up, counseling, and management strategies as they relate to fetal VM. We then describe imaging-based research efforts aimed at using prenatal data to predict postnatal outcome. Finally, we review the early experience with fetal therapy such as in utero shunting, as well as the advances in prenatal diagnosis and fetal surgery that may begin to address the limitations of previous therapeutic efforts.", "title": "" }, { "docid": "7ec2bb00153e124e76fa7d6ab39c0b77", "text": "Goal: Sensorimotor-based brain-computer interfaces (BCIs) have achieved successful control of real and virtual devices in up to three dimensions; however, the traditional sensor-based paradigm limits the intuitive use of these systems. Many control signals for state-of-the-art BCIs involve imagining the movement of body parts that have little to do with the output command, revealing a cognitive disconnection between the user's intent and the action of the end effector. Therefore, there is a need to develop techniques that can identify with high spatial resolution the self-modulated neural activity reflective of the actions of a helpful output device.
Methods: We extend previous EEG source imaging (ESI) work to decoding natural hand/wrist manipulations by applying a novel technique to classifying four complex motor imaginations of the right hand: flexion, extension, supination, and pronation. Results: We report an increase of up to 18.6% for individual task classification and 12.7% for overall classification using the proposed ESI approach over the traditional sensor-based method. Conclusion: ESI is able to enhance BCI performance of decoding complex right-hand motor imagery tasks. Significance: This study may lead to the development of BCI systems with naturalistic and intuitive motor imaginations, thus facilitating broad use of noninvasive BCIs.", "title": "" }, { "docid": "f2c846f200d9c59362bf285b2b68e2cd", "text": "A Root Cause Failure Analysis (RCFA) for repeated impeller blade failures in a five stage centrifugal propane compressor is described. The initial failure occurred in June 2007 with a large crack found in one blade on the third impeller and two large pieces released from adjacent blades on the fourth impeller. An RCFA was performed to determine the cause of the failures. The failure mechanism was identified to be high cycle fatigue. Several potential causes related to the design, manufacture, and operation of the compressor were examined. The RCFA concluded that the design and manufacture were sound and there were no conclusive issues with respect to operation. A specific root cause was not identified. In June 2009, a second case of blade cracking occurred with a piece once again released from a single blade on the fourth impeller. Due to the commonality with the previous instance this was identified as a repeat failure. Specifically, both cases had occurred in the same compressor whereas, two compressors operating in identical service in adjacent Liquefied natural Gas (LNG) trains had not encountered the problem. A second RCFA was accordingly launched with the ultimate objective of preventing further repeated failures. Both RCFA teams were established comprising of engineers from the End User (RasGas), the OEM (Elliott Group) and an independent consultancy (Southwest Research Institute). The scope of the current investigation included a detailed metallurgical assessment, impeller modal frequency assessment, steady and unsteady computational fluid dynamics (CFD) assessment, finite element analyses (FEA), fluid structure interaction (FSI) assessment, operating history assessment and a comparison change analysis. By the process of elimination, the most probable causes were found to be associated with: • vane wake excitation of either the impeller blade leading edge modal frequency from severe mistuning and/or unusual response of the 1-diameter cover/blades modal frequency • mist carry over from third side load upstream scrubber • end of curve operation in the compressor rear section INTRODUCTION RasGas currently operates seven LNG trains at Ras Laffan Industrial City, Qatar. Train 3 was commissioned in 2004 with a nameplate LNG production of 4.7 Mtpa which corresponds to a wet sour gas feed of 790 MMscfd (22.37 MMscmd). Trains 4 and 5 were later commissioned in 2005 and 2006 respectively. They were also designed for a production 4.7 Mtpa LNG but have higher wet sour gas feed rates of 850 MMscfd (24.05 MMscmd). Despite these differences, the rated operation of the propane compressor is identical in each train. Figure 1. APCI C3-MR Refrigeration system for Trains 3, 4 and 5 The APCI C3-MR refrigeration cycle (Roberts, et al. 
2002), depicted in Figure 1, is common to all three trains. Propane is circulated in a continuous loop between four compressor inlets and a single discharge. The compressed discharge gas is cooled and condensed in three sea water cooled heat exchangers before being routed to the LLP, LP, MP and HP evaporators. Here, the liquid propane is evaporated by the transfer of heat from the warmer feed and MR gas streams. It finally passes through one of the four suction scrubbers before re-entering the compressor as a gas. Although not shown, each section inlet has a dedicated anti-surge control loop from the de-superheater discharge to the suction scrubber inlet. A cross section of the propane compressor casing and rotor is illustrated in Figure 2. It is a straight through centrifugal unit with a horizontally split casing. Five impellers are mounted upon the 21.3 ft (6.5 m) long shaft. Three side loads add gas upstream of the suction at impellers 2, 3 & 4. The impellers are of two piece construction, with each piece fabricated from AISI 4340 forgings that were heat treated such that the material has sufficient strength and toughness for operation at temperatures down to -50F (-45.5C). The blades are milled to the hub piece and the cover piece was welded to the blades using a robotic metal inert gas (MIG) welding process. The impellers are mounted to the shaft with an interference fit. The thrust disc is mounted to the shaft with a line on line fit and antirotation key. The return channel and side load inlets are all vaned to align the downstream swirl angle. The impeller diffusers are all vaneless. A summary of the relevant compressor design parameters is given in Table 1. The complete compressor string is also depicted in Figure 1. The propane compressor is coupled directly to the HP MR compressor and driven by a GE Frame 7EA gas turbine and ABB 16086 HP (12 MW) helper motor at 3600 rpm rated shaft speed. Table 1. Propane compressor design parameters.", "title": "" }, { "docid": "ccefef1618c7fa637de366e615333c4b", "text": "Context: Systems development normally takes place in a specific organizational context, including organizational culture. Previous research has identified organizational culture as a factor that potentially affects the deployment of systems development methods. Objective: The purpose is to analyze the relationship between organizational culture and the postadoption deployment of agile methods. Method: This study is a theory development exercise. Based on the Competing Values Model of organizational culture, the paper proposes a number of hypotheses about the relationship between organizational culture and the deployment of agile methods. Results: Inspired by the agile methods, thirteen new hypotheses are introduced and discussed. They have interesting implications when contrasted with ad hoc development and with traditional systems development.", "title": "" }, { "docid": "1b2991f84433c96c6f0d61378baebbea", "text": "This article analyzes the topic of leadership from an evolutionary perspective and proposes three conclusions that are not part of mainstream theory. First, leading and following are strategies that evolved for solving social coordination problems in ancestral environments, including in particular the problems of group movement, intragroup peacekeeping, and intergroup competition. Second, the relationship between leaders and followers is inherently ambivalent because of the potential for exploitation of followers by leaders.
Third, modern organizational structures are sometimes inconsistent with aspects of our evolved leadership psychology, which might explain the alienation and frustration of many citizens and employees. The authors draw several implications of this evolutionary analysis for leadership theory, research, and practice.", "title": "" }, { "docid": "e3d1b0383d0f8b2382586be15961a765", "text": "The critical study of political discourse has up until very recently rested solely within the domain of the social sciences. Working within a linguistics framework, Critical Discourse Analysis (CDA), in particular Fairclough (Fairclough 1989, 1995a, 1995b, 2001; Fairclough and Wodak 1997), has been heavily influenced by Foucault. 2 The linguistic theory that CDA and critical linguistics especially (which CDA subsumes) has traditionally drawn upon is Halliday‟s Systemic-Functional Grammar, which is largely concerned with the function of language in the social structure 3 (Fowler et al. 1979; Fowler 1991; Kress and Hodge 1979).", "title": "" }, { "docid": "832c48916e04744188ed71bf3ab1f784", "text": "Internet is commonly accepted as an important aspect in successful tourism promotion as well as destination marketing in this era. The main aim of this study is to explore how online promotion and its influence on destination awareness and loyalty in the tourism industry. This study proposes a structural model of the relationships among online promotion (OP), destination awareness (DA), tourist satisfaction (TS) and destination loyalty (DL). Randomly-selected respondents from the population of international tourists departing from Vietnamese international airports were selected as the questionnaire samples in the study. Initially, the exploratory factor analysis (EFA) was performed to test the validity of constructs, and the confirmatory factor analysis (CFA), using AMOS, was used to test the significance of the proposed hypothesizes model. The results show that the relationships among OP, DA, TS and DL appear significant in this study. The result also indicates that online promotion could improve the destination loyalty. Finally, the academic contribution, implications of the findings for tourism marketers and limitation are also discussed in this study. JEL classification numbers: L11", "title": "" } ]
scidocsrr
f06686b4ea6fdc98f10a76b15d4e1d26
Sensing and Modeling Human Behavior Using Social Media and Mobile Data
[ { "docid": "8fad55f682270afe6434ec595dbbdeb3", "text": "It is becoming harder to find an app on one's smart phone due to the increasing number of apps available and installed on smart phones today. We collect sensory data including app use from smart phones, to perform a comprehensive analysis of the context related to mobile app use, and build prediction models that calculate the probability of an app in the current context. Based on these models, we developed a dynamic home screen application that presents icons for the most probable apps on the main screen of the phone and highlights the most probable one. Our models outperformed other strategies, and, in particular, improved prediction accuracy by 8% over Most Frequently Used from 79.8% to 87.8% (for 9 candidate apps). Also, we found that the dynamic home screen improved accessibility to apps on the phone, compared to the conventional static home screen in terms of accuracy, required touch input and app selection time.", "title": "" } ]
[ { "docid": "3fc784bb6e21cd26a5398973d1252029", "text": "Robots are slowly finding their way into the hands of search and rescue groups. One of the robots contributing to this effort is the Inuktun VGTV-Xtreme series by American Standard Robotics. This capable robot is one of the only robots engineered specifically for the search and rescue domain. This paper describes the adaptation of the VGTV platform from an industrial inspection robot into a capable and versatile search and rescue robot. These adaptations were based on growing requirements established by rescue groups, academic research, and extensive field trials. A narrative description of a successful search of a damaged building during the aftermath of Hurricane Katrina is included to support these claims. Finally, lessons learned from these deployments and guidelines for future robot development is discussed.", "title": "" }, { "docid": "aa0dc468b1b7402e9eb03848af31216e", "text": "This paper discusses the construction of speech databases for research into speech information processing and describes a problem illustrated by the case of emotional speech synthesis. It introduces a project for the processing of expressive speech, and describes the data collection techniques and the subsequent analysis of supra-linguistic, and emotional features signalled in the speech. It presents annotation guidelines for distinguishing speaking-style differences, and argues that the focus of analysis for expressive speech processing applications should be on the speaker relationships (defined herein), rather than on emotions.", "title": "" }, { "docid": "ea1072f2972dbf15ef8c2d38704a0095", "text": "The reliability of the microinverter is a very important feature that will determine the reliability of the ac-module photovoltaic (PV) system. Recently, many topologies and techniques have been proposed to improve its reliability. This paper presents a thorough study for different power decoupling techniques in single-phase microinverters for grid-tie PV applications. These power decoupling techniques are categorized into three groups in terms of the decoupling capacitor locations: 1) PV-side decoupling; 2) dc-link decoupling; and 3) ac-side decoupling. Various techniques and topologies are presented, compared, and scrutinized in scope of the size of decoupling capacitor, efficiency, and control complexity. Also, a systematic performance comparison is presented for potential power decoupling topologies and techniques.", "title": "" }, { "docid": "3bca3446ce76b1f1560e037e4041a1de", "text": "PURPOSE\nThe aim was to describe the demographic and clinical data of 116 consecutive cases of ocular dermoids.\n\n\nMETHODS\nThis was a retrospective case series and a review of clinical records of all the patients diagnosed with ocular dermoids. Both demographic and clinical data were recorded. Statistical analysis was performed with SPSS v. 18. Descriptive statistics are reported.\n\n\nRESULTS\nThe study included 116 consecutive patients with diagnosis consistent with ocular dermoids: corneal 18% (21), dermolipomas 38% (44), and orbital 44% (51). Sixty-five percent (71) were female, and 46% (54) were detected at birth. Secondary manifestations: amblyopia was present in 14% (3), and strabismus was detected in 6.8% (8). The Goldenhar syndrome was the most frequent syndromic entity in 7.5% (12) of the patients. 
Surgical resection was required on 49% (25) of orbital dermoids, 24% (5) of corneal dermoids, and 13% (6) of dermolipomas.\n\n\nCONCLUSIONS\nOrbital dermoids were the most frequent variety, followed by conjunctival and corneal. In contrast to other reports, corneal dermoids were significantly more prevalent in women. Goldenhar syndrome was the most frequent syndromatic entity.", "title": "" }, { "docid": "41d546266db9b3e9ec5071e4926abb8d", "text": "Estimating the shape of transparent and refractive objects is one of the few open problems in 3D reconstruction. Under the assumption that the rays refract only twice when traveling through the object, we present the first approach to simultaneously reconstructing the 3D positions and normals of the object's surface at both refraction locations. Our acquisition setup requires only two cameras and one monitor, which serves as the light source. After acquiring the ray-ray correspondences between each camera and the monitor, we solve an optimization function which enforces a new position-normal consistency constraint. That is, the 3D positions of surface points shall agree with the normals required to refract the rays under Snell's law. Experimental results using both synthetic and real data demonstrate the robustness and accuracy of the proposed approach.", "title": "" }, { "docid": "1beb1c36b24f186de59d6c8ef5348dcd", "text": "We present a new corpus, PersonaBank, consisting of 108 personal stories from weblogs that have been annotated with their STORY INTENTION GRAPHS, a deep representation of the fabula of a story. We describe the topics of the stories and the basis of the STORY INTENTION GRAPH representation, as well as the process of annotating the stories to produce the STORY INTENTION GRAPHs and the challenges of adapting the tool to this new personal narrative domain. We also discuss how the corpus can be used in applications that retell the story using different styles of tellings, co-tellings, or as a content planner.", "title": "" }, { "docid": "4f6b8ea6fb0884bbcf6d4a6a4f658e52", "text": "Ballistocardiography (BCG) enables the recording of heartbeat, respiration, and body movement data from an unconscious human subject. In this paper, we propose a new heartbeat detection algorithm for calculating heart rate (HR) and heart rate variability (HRV) from the BCG signal. The proposed algorithm consists of a moving dispersion calculation method to effectively highlight the respective heartbeat locations and an adaptive heartbeat peak detection method that can set a heartbeat detection window by automatically predicting the next heartbeat location. To evaluate the proposed algorithm, we compared it with other reference algorithms using a filter, waveform analysis and envelope calculation of signal by setting the ECG lead I as the gold standard. Heartbeat detection in BCG should be able to measure sensitively in the regions of lower and higher HR. However, previous detection algorithms are optimized mainly in the region of the typical HR range (60~90 bpm) without considering the lower (40~60 bpm) and higher (90~110 bpm) HR ranges. Therefore, we propose an improved method for a wide HR range of 40~110 bpm. The proposed algorithm detected the heartbeat with greater stability across varying and wider heartbeat intervals compared with other previous algorithms. Our proposed algorithm achieved a relative accuracy of 98.29% with a root mean square error (RMSE) of 1.83 bpm for HR, as well as coverage of 97.63% and relative accuracy of 94.36% for HRV.
We also obtained a root mean square (RMS) value of 1.67 for the separated HR ranges.", "title": "" }, { "docid": "be447131554900aaba025be449944613", "text": "Attackers increasingly take advantage of innocent users who tend to casually open email messages assumed to be benign, carrying malicious documents. Recent targeted attacks aimed at organizations utilize the new Microsoft Word documents (*.docx). Anti-virus software fails to detect new unknown malicious files, including malicious docx files. In this paper, we present ALDOCX, a framework aimed at accurate detection of new unknown malicious docx files that also efficiently enhances the framework's detection capabilities over time. Detection relies upon our new structural feature extraction methodology (SFEM), which is performed statically using meta-features extracted from docx files. Using machine-learning algorithms with SFEM, we created a detection model that successfully detects new unknown malicious docx files. In addition, because it is crucial to maintain the detection model's updatability and incorporate new malicious files created daily, ALDOCX integrates our active-learning (AL) methods, which are designed to efficiently assist anti-virus vendors by better focusing their experts' analytical efforts and enhance detection capability. ALDOCX identifies and acquires new docx files that are most likely malicious, as well as informative benign files. These files are used for enhancing the knowledge stores of both the detection model and the anti-virus software. The evaluation results show that by using ALDOCX and SFEM, we achieved a high detection rate of malicious docx files (94.44% TPR) compared with the anti-virus software (85.9% TPR)—with very low FPR rates (0.19%). ALDOCX's AL methods used only 14% of the labeled docx files, which led to a reduction of 95.5% in security experts' labeling efforts compared with the passive learning and the support vector machine (SVM)-Margin (existing active-learning method). Our AL methods also showed a significant improvement of 91% in number of unknown docx malware acquired, compared with the passive learning and the SVM-Margin, thus providing an improved updating solution for the detection model, as well as the anti-virus software widely used within organizations.", "title": "" }, { "docid": "68720a44720b4d80e661b58079679763", "text": "The value of involving people as 'users' or 'participants' in the design process is increasingly becoming a point of debate. In this paper we describe a new framework, called 'informant design', which advocates efficiency of input from different people: maximizing the value of contributions from various informants and design team members at different stages of the design process. To illustrate how this can be achieved we describe a project that uses children and teachers as informants at different stages to help us design an interactive learning environment for teaching ecology.", "title": "" }, { "docid": "d79f92819d5485f2631897befd686416", "text": "Information visualization is meant to support the analysis and comprehension of (often large) datasets through techniques intended to show/enhance features, patterns, clusters and trends, not always visible even when using a graphical representation. During the development of information visualization techniques the designer has to take into account the users' tasks to choose the graphical metaphor as well as the interactive methods to be provided.
Testing and evaluating the usability of information visualization techniques is still a research question, and methodologies based on real or experimental users often yield significant results. To be comprehensive, however, experiments with users must rely on a set of tasks that covers the situations a real user will face when using the visualization tool. The present work reports and discusses the results of three case studies conducted as Multi-dimensional In-depth Long-term Case studies. The case studies were carried out to investigate MILCs-based usability evaluation methods for visualization tools.", "title": "" }, { "docid": "17ec5256082713e85c819bb0a0dd3453", "text": "Scholarly documents contain multiple figures representing experimental findings. These figures are generated from data which is not reported anywhere else in the paper. We propose a modular architecture for analyzing such figures. Our architecture consists of the following modules: 1. An extractor for figures and associated metadata (figure captions and mentions) from PDF documents; 2. A search engine on the extracted figures and metadata; 3. An image processing module for automated data extraction from the figures; and 4. A natural language processing module to understand the semantics of the figure. We discuss the challenges in each step, report an extractor algorithm to extract vector graphics from scholarly documents and a classification algorithm for figures. Our extractor algorithm improves the state of the art by more than 10% and the classification process is very scalable, yet achieves 85% accuracy. We also describe a semi-automatic system for data extraction from figures which is integrated with our search engine to improve user experience.", "title": "" }, { "docid": "2ae3a8bf304cfce89e8fcd331d1ec733", "text": "Linear Discriminant Analysis (LDA) is among the most optimal dimension reduction methods for classification, which provides a high degree of class separability for numerous applications from science and engineering. However, problems arise with this classical method when one or both of the scatter matrices is singular. Singular scatter matrices are not unusual in many applications, especially for high-dimensional data. For high-dimensional undersampled and oversampled problems, the classical LDA requires modification in order to solve a wider range of problems. In recent work the generalized singular value decomposition (GSVD) has been shown to mitigate the issue of singular scatter matrices, and a new algorithm, LDA/GSVD, has been shown to be very robust for many applications in machine learning. However, the GSVD inherently has a considerable computational overhead. In this paper, we propose fast algorithms based on the QR decomposition and regularization that solve the LDA/GSVD computational bottleneck. In addition, we present fast algorithms for classical LDA and regularized LDA utilizing the framework based on LDA/GSVD and preprocessing by the Cholesky decomposition. Experimental results are presented that demonstrate substantial speedup in all of classical LDA, regularized LDA, and LDA/GSVD algorithms without any sacrifice in classification performance for a wide range of machine learning applications.", "title": "" }, { "docid": "66f76354b6470a49f18300f67e47abd0", "text": "Technologies in museums often support learning goals, providing information about exhibits.
However, museum visitors also desire meaningful experiences and enjoy the social aspects of museum-going, values ignored by most museum technologies. We present ArtLinks, a visualization with three goals: helping visitors make connections to exhibits and other visitors by highlighting those visitors who share their thoughts; encouraging visitors' reflection on the social and liminal aspects of museum-going and their expectations of technology in museums; and doing this with transparency, aligning aesthetically pleasing elements of the design with the goals of connection and reflection. Deploying ArtLinks revealed that people have strong expectations of technology as an information appliance. Despite these expectations, people valued connections to other people, both for their own sake and as a way to support meaningful experience. We also found that several of our design choices in the name of transparency led to unforeseen tradeoffs between the social and the liminal.", "title": "" }, { "docid": "db7edbb1a255e9de8486abbf466f9583", "text": "Nowadays, adopting an optimized irrigation system has become a necessity due to the scarcity of the world's water resources. The system has a distributed wireless network of soil-moisture and temperature sensors. This project focuses on a smart irrigation system which is cost effective. As the technology is growing and changing rapidly, a Wireless Sensor Network (WSN) helps to upgrade the technology, where automation is playing an important role in human life. Automation allows us to control various appliances automatically. A DC motor based vehicle is designed for irrigation purposes. The objectives of this paper were to control the water supply to each plant automatically depending on the values of the temperature and soil moisture sensors. The mechanism is such that soil moisture sensor electrodes are inserted into the soil at each plant. It also monitors the plant growth using various parameters like height and width. Android app.", "title": "" }, { "docid": "99fdab0b77428f98e9486d1cc7430757", "text": "Self-Organizing Maps (SOMs) are a well-known unsupervised neural network approach that is used for clustering and is very efficient in handling large and high-dimensional datasets. As SOMs can be applied to large complex sets, they can be implemented to detect credit card fraud. Online banking and e-commerce have been experiencing rapid growth over the past years and will show tremendous growth even in the future. So, it is very necessary to keep an eye on fraudsters and find ways to reduce the rate of fraud. This paper focuses on Real Time Credit Card Fraud Detection and presents a new and innovative approach to detect the fraud with the help of SOM. Keywords— Self-Organizing Map, Unsupervised Learning, Transaction. Introduction The rapid growth in credit card issuers, online merchants and card users has made them very conscious about online fraud. Card users just want to make safe transactions while purchasing their goods; on the other hand, banks want to differentiate legitimate from fraudulent users. The merchants, who are most affected as they do not have any kind of evidence such as a digital signature, want to sell their goods only to legitimate users to make a profit and want a highly secure system that protects them from heavy losses. Our Self Organizing Map approach can work on large complex datasets and can cluster even previously unseen datasets.
It is an unsupervised neural network that works even in the absence of an external teacher and provides fruitful results in detecting credit card frauds. It is interesting to note that credit card fraud affect owner the least and merchant the most. The existing legislation and card holder protection policies as well as insurance scheme affect most the merchant and customer the least. Card issuer bank also has to pay the administrative cost and infrastructure cost. Studies show that average time lag between the fraudulent transaction dates and charge back notification 1344 Mitali Bansal and Suman can be high as 72 days, thereby giving fraudster sufficient time to cause severe damage. In this paper first, you will see a brief survey of different approaches on credit card fraud detection systems,. In Section 2 we explain the design and architecture of SOM to detect Credit Card Fraud. Section 3, will represent results. Finally, Conclusion are presented in Section 4. A Survey of Credit card fraud Detection Fraud Detection Systems work by trying to identify anomalies in an environment [1]. At the early stage, the research focus lies in using rule based expert systems. The model’s rule constructed through the input of many fraud experts within the bank [2]. But when their processing is encountered, their output become was worst. Because the rule based expert system totally lies on the prior information of the data set that is generally not available easily in the case of credit card frauds. After these many Artificial Neural Network (ANN) is mostly used and solved very complex problems in a very efficient way [3]. Some believe that unsupervised methods are best to detect credit card frauds because these methods work well even in absence of external teacher. While supervised methods are based on prior data knowledge and surely needs an external teacher. Unsupervised method is used [4] [5] to detect some kind of anomalies like fraud. They do not cluster the data but provides a ranking on the list of all segments and by this ranking method they provide how much a segment is anomalous as compare to the whole data sets or other segments [6]. Dempster-Shafer Theory [1] is able to detect anomalous data. They did an experiment to detect infected E-mails by the help of D-S theory. As this theory can also be helpful because in this modern era all the new card information is sent through e-mails by the banks. Some various other approaches have also been used to detect Credit Card Frauds, one of which is ID3 pre pruning method in which decision tree is formed to detect anomalous data [7]. Artificial Neural Networks are other efficient and intelligent methods to detect credit card fraud. A compound method that is based on rule-based systems and ANN is used to detect Credit card fraud by Brause et al. [8]. Our work is based on self-organizing map that is based on unsupervised approach to detect Credit Card Fraud. We focus on to detect anomalous data by making clusters so that legitimate and fraudulent transactions can be differentiated. Collection of data and its pre-processing is also explained by giving example in fraud detection. SYSTEM DESIGN ARCHITECTURE The SOM works well in detecting Credit Card Fraud and all its interesting properties we have already discussed. Here we provide some detailed prototype and working of SOM in fraud detection. 
Credit Card Fraud Detection Using Self Organised Map 1345 Our Approach to detect Credit Card Fraud Using SOM Our approach towards Real time Credit Card Fraud detection is modelled by prototype. It is a multilayered approach as: 1. Initial Selection of data set. 2. Conversion of data from Symbolic to Numerical Data Set. 3. Implementation of SOM. 4. A layer of further review and decision making. This multilayered approach works well in the detection of Credit Card Fraud. As this approach is based on SOM, so finally it will cluster the data into fraudulent and genuine sets. By further review the sets can be analyzed and proper decision can be taken based on those results. The algorithm that is implemented to detect credit card fraud using Self Organizing Map is represented in Figure 1: 1. Initially choose all neurons (weight vectors wi) randomly. 2. For each input vector Ii { 2. 1) Convert all the symbolic input to the Numerical input by applying some mean and standard deviation formulas. 2. 2) Perform the initial authentication process like verification of Pin, Address, expiry date etc. } 3. Choose the learning rate parameter randomly for eg. 0. 5 4. Initially update all neurons for each input vector Ii. 5. Apply the unsupervised approach to distinguish the transaction into fraudulent and non-fraudulent cluster. 5. 1) Perform iteration till a specific cluster is not formed for a input vector. 6. By applying SOM we can divide the transactions into fraudulent (Fk) and genuine vector (Gk). 7. Perform a manually review decision. 8. Get your optimized result. Figure 1: Algorithm to detect Credit Card Fraud Initial Selection of Data Set Input vectors are generally in the form of High Dimensional Real world quantities which will be fed to a neuron matrix. These quantities are generally divided as [9]: 1346 Mitali Bansal and Suman Figure 2: Division of Transactions to form an Input Matrix In Account related quantities we can include like account number, currency of account, account opening date, last date of credit or debit available balance etc. In customer related quantities we can include customer id, customer type like high profile, low profile etc. In transaction related quantities we can have transaction no, location, currency, its timestamp etc. Conversion of Symbolic data into Numeric In credit card fraud detection, all of the data of banking transactions will be in the form of the symbolic, so there is a need to convert that symbolic data into numeric one. For example location, name, customer id etc. Conversion of all this data needs some normal distribution mechanism on the basis of frequency. The normalizing of data is done using Z = (Ni-Mi) / S where Ni is frequency of occurrence of a particular entity, M is mean and S is standard deviation. Then after all this procedure we will arrive at normalized values [9]. Implementation of SOM After getting all the normalized values, we make a input vector matrix. After that randomly weight vector is selected, this is generally termed as Neuron matrix. Dimension of this neuron matrix will be same as input vector matrix. A randomly learning parameter α is also taken. The value of this learning parameter is a small positive value that can be adjusted according to the process. 
The commonly used similarity metric is the Euclidean distance given by equation 1: the winning neuron satisfies j_X(p) = min_j ||X - W_j(p)|| = { sum_i [X_i - W_ij(p)]^2 }^(1/2), (1) where j = 1, 2, ..., m, W is the neuron (weight) matrix and X is the input vector. The main output of SOM is the patterns and clusters it gives as the output vector. The clusters in credit card fraud detection will be in the form of the fraudulent and genuine sets, represented as Fk and Gk respectively. Review and decision making The clustering of input data into fraudulent and genuine sets shows the categories of transactions performed more frequently as well as rarely by each customer. Since, with the help of SOM, relationships as well as hidden patterns are unearthed, we get more accuracy in our results. If the extent of suspicious activity exceeds a certain threshold value, that transaction can be sent for review. So, it reduces overall processing time and complexity. Results The number of transactions taken in Test1, Test2, Test3 and Test4 are 500, 1000, 1500 and 2000 respectively. When compared to the ID3 algorithm, our approach presents a much more efficient result, as shown in figure 3. Conclusion The results show that SOM gives better results in detecting credit card fraud, and all parameters are verified and well represented in plots. The uniqueness of our approach lies in using the normalization and clustering mechanism of SOM for detecting credit card fraud. This helps in detecting hidden patterns of the transactions which cannot be identified by other traditional methods. With an appropriate number of weight neurons and with the help of thousands of iterations the network is trained, and the result is then verified on new transactions. The concept of normalization will help to normalize the values in other fraud cases, and SOM will be helpful in detecting anomalies in credit card fraud cases.", "title": "" }, { "docid": "fa1b427e152ee84b8c38687ab84d1f7c", "text": "We investigate learning to probabilistically bypass computations in a network architecture. Our approach is motivated by AIG [44], where layers are conditionally executed depending on their inputs, and the network is trained against a target bypass rate using a per-layer loss. We propose a per-batch loss function, and describe strategies for handling probabilistic bypass during inference as well as training. Per-batch loss allows the network additional flexibility. In particular, a form of mode collapse becomes plausible, where some layers are nearly always bypassed and some almost never; such a configuration is strongly discouraged by AIG’s per-layer loss. We explore several inference-time strategies, including the natural MAP approach. With data-dependent bypass, we demonstrate improved performance over AIG. With data-independent bypass, as in stochastic depth [18], we observe mode collapse and effectively prune layers. We demonstrate our techniques on ResNet-50 and ResNet-101 [11] for ImageNet [3], where our techniques produce improved accuracy (.15–.41% in precision@1) with substantially less computation (bypassing 25–40% of the layers).", "title": "" }, { "docid": "7e17c1842a70e416f0a90bdcade31a8e", "text": "A novel feeding system using substrate integrated waveguide (SIW) technique for antipodal linearly tapered slot array antenna (ALTSA) is presented in this paper.
After making studies by simulations for a SIW fed ALTSA cell, a 1/spl times/8 ALTSA array fed by SIW feeding system at X-band is fabricated and measured, and the measured results show that this array antenna has a wide bandwidth and good performances.", "title": "" }, { "docid": "0dbca0a2aec1b27542463ff80fc4f59d", "text": "An emerging research area named Learning-to-Rank (LtR) has shown that effective solutions to the ranking problem can leverage machine learning techniques applied to a large set of features capturing the relevance of a candidate document for the user query. Large-scale search systems must however answer user queries very fast, and the computation of the features for candidate documents must comply with strict back-end latency constraints. The number of features cannot thus grow beyond a given limit, and Feature Selection (FS) techniques have to be exploited to find a subset of features that both meets latency requirements and leads to high effectiveness of the trained models. In this paper, we propose three new algorithms for FS specifically designed for the LtR context where hundreds of continuous or categorical features can be involved. We present a comprehensive experimental analysis conducted on publicly available LtR datasets and we show that the proposed strategies outperform a well-known state-of-the-art competitor.", "title": "" }, { "docid": "dacf68b5e159211d6e9bb8983ef8bb3c", "text": "Analog-to-Digital converters plays vital role in medical and signal processing applications. Normally low power ADC's were required for long term and battery operated applications. SAR ADC is best suited for low power, medium resolution and moderate speed applications. This paper presents a 10-bit low power SAR ADC which is simulated in 180nm CMOS technology. Based on literature survey, low power consumption is attained by using Capacitive DAC. Capacitive DAC also incorporate Sample-and-Hold circuit in it. Dynamic latch comparator is used to increase in speed of operation and to get lower power consumption.", "title": "" }, { "docid": "feef714b024ad00086a5303a8b74b0a4", "text": "Detecting and recognizing text in natural scene images is a challenging, yet not completely solved task. In recent years several new systems that try to solve at least one of the two sub-tasks (text detection and text recognition) have been proposed. In this paper we present STN-OCR, a step towards semi-supervised neural networks for scene text recognition that can be optimized end-to-end. In contrast to most existing works that consist of multiple deep neural networks and several pre-processing steps we propose to use a single deep neural network that learns to detect and recognize text from natural images in a semi-supervised way. STN-OCR is a network that integrates and jointly learns a spatial transformer network [16], that can learn to detect text regions in an image, and a text recognition network that takes the identified text regions and recognizes their textual content. We investigate how our model behaves on a range of different tasks (detection and recognition of characters, and lines of text). Experimental results on public benchmark datasets show the ability of our model to handle a variety of different tasks, without substantial changes in its overall network structure.", "title": "" } ]
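The self-organizing-map fraud-detection passages above describe two concrete operations: z-score normalization of the symbolic-turned-numeric transaction fields, Z = (Ni - Mi) / S, and a Euclidean best-matching-unit search over a neuron (weight) matrix followed by a small weight update. A minimal NumPy sketch of those steps is given below; it is an illustration written for this collection rather than code from the cited paper, and the array shapes, learning-rate schedule, neuron count, and function names are all assumptions made for the example.

import numpy as np

def zscore_normalize(X):
    # Column-wise z-score, mirroring the Z = (Ni - Mi) / S step in the passage.
    mean = X.mean(axis=0)
    std = X.std(axis=0) + 1e-12   # guard against zero variance
    return (X - mean) / std

def best_matching_unit(x, W):
    # Index of the neuron whose weight vector is closest to x in Euclidean
    # distance, i.e. the argmin over j of ||x - W_j|| (equation 1 above).
    dists = np.linalg.norm(W - x, axis=1)
    return int(np.argmin(dists))

def train_som(X, n_neurons=16, lr=0.5, epochs=100, seed=0):
    # Minimal, neighborhood-free SOM loop: each input pulls only its
    # best-matching neuron towards itself with a decaying learning rate.
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(n_neurons, X.shape[1]))
    for epoch in range(epochs):
        alpha = lr * (1.0 - epoch / epochs)
        for x in X:
            j = best_matching_unit(x, W)
            W[j] += alpha * (x - W[j])
    return W

if __name__ == "__main__":
    # Toy usage: cluster synthetic "transactions" and inspect cluster sizes;
    # sparsely populated neurons are candidates for manual review.
    rng = np.random.default_rng(1)
    legit = rng.normal(0.0, 1.0, size=(500, 4))
    odd = rng.normal(4.0, 1.0, size=(10, 4))
    X = zscore_normalize(np.vstack([legit, odd]))
    W = train_som(X)
    assignments = np.array([best_matching_unit(x, W) for x in X])
    print("transactions per neuron:", np.bincount(assignments, minlength=len(W)))

One design note: a full SOM would also update the topological neighbours of the winning neuron; the single-winner update above keeps the sketch short while preserving the normalize-then-nearest-neuron flow described in the passage.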
scidocsrr
90ac93734d1255e3fed9569138c05db8
Generalizing the Convolution Operator to Extend CNNs to Irregular Domains
[ { "docid": "be593352763133428b837f1c593f30cf", "text": "Deep Learning’s recent successes have mostly relied on Convolutional Networks, which exploit fundamental statistical properties of images, sounds and video data: the local stationarity and multi-scale compositional structure, that allows expressing long range interactions in terms of shorter, localized interactions. However, there exist other important examples, such as text documents or bioinformatic data, that may lack some or all of these strong statistical regularities. In this paper we consider the general question of how to construct deep architectures with small learning complexity on general non-Euclidean domains, which are typically unknown and need to be estimated from the data. In particular, we develop an extension of Spectral Networks which incorporates a Graph Estimation procedure, that we test on large-scale classification problems, matching or improving over Dropout Networks with far less parameters to estimate.", "title": "" }, { "docid": "645395d46f653358d942742711d50c0b", "text": "Feature descriptors play a crucial role in a wide range of geometry analysis and processing applications, including shape correspondence, retrieval, and segmentation. In this paper, we propose ShapeNet, a generalization of the popular convolutional neural networks (CNN) paradigm to non-Euclidean manifolds. Our construction is based on a local geodesic system of polar coordinates to extract “patches”, which are then passed through a cascade of filters and linear and non-linear operators. The coefficients of the filters and linear combination weights are optimization variables that are learned to minimize a task-specific cost function. We use ShapeNet to learn invariant shape feature descriptors that significantly outperform recent state-of-the-art methods, and show that previous approaches such as heat and wave kernel signatures, optimal spectral descriptors, and intrinsic shape contexts can be obtained as particular configurations of ShapeNet. CR Categories: I.2.6 [Artificial Intelligence]: Learning— Connectionism and neural nets", "title": "" } ]
[ { "docid": "0cd96187b257ee09060768650432fe6d", "text": "Sustainable urban mobility is an important dimension in a Smart City, and one of the key issues for city sustainability. However, innovative and often costly mobility policies and solutions introduced by cities are liable to fail, if not combined with initiatives aimed at increasing the awareness of citizens, and promoting their behavioural change. This paper explores the potential of gamification mechanisms to incentivize voluntary behavioural changes towards sustainable mobility solutions. We present a service-based gamification framework, developed within the STREETLIFE EU Project, which can be used to develop games on top of existing services and systems within a Smart City, and discuss the empirical findings of an experiment conducted in the city of Rovereto on the effectiveness of gamification to promote sustainable urban mobility.", "title": "" }, { "docid": "ee5b46719023b5dbae96997bbf9925b0", "text": "The teaching of reading in different languages should be informed by an effective evidence base. Although most children will eventually become competent, indeed skilled, readers of their languages, the pre-reading (e.g. phonological awareness) and language skills that they bring to school may differ in systematic ways for different language environments. A thorough understanding of potential differences is required if literacy teaching is to be optimized in different languages. Here we propose a theoretical framework based on a psycholinguistic grain size approach to guide the collection of evidence in different countries. We argue that the development of reading depends on children's phonological awareness in all languages studied to date. However, we propose that because languages vary in the consistency with which phonology is represented in orthography, there are developmental differences in the grain size of lexical representations, and accompanying differences in developmental reading strategies across orthographies.", "title": "" }, { "docid": "5387c752db7b4335a125df91372099b3", "text": "We examine how people’s different uses of the Internet predict their later scores on a standard measure of depression, and how their existing social resources moderate these effects. In a longitudinal US survey conducted in 2001 and 2002, almost all respondents reported using the Internet for information, and entertainment and escape; these uses of the Internet had no impact on changes in respondents’ level of depression. Almost all respondents also used the Internet for communicating with friends and family, and they showed lower depression scores six months later. Only about 20 percent of this sample reported using the Internet to meet new people and talk in online groups. Doing so changed their depression scores depending on their initial levels of social support. Those having high or medium levels of social support showed higher depression scores; those with low levels of social support did not experience these increases in depression. 
Our results suggest that individual differences in social resources and people’s choices of how they use the Internet may account for the different outcomes reported in the literature.", "title": "" }, { "docid": "91599bb49aef3e65ee158ced65277d80", "text": "We introduce a general model for a network of quantum sensors, and we use this model to consider the following question: When can entanglement between the sensors, and/or global measurements, enhance the precision with which the network can measure a set of unknown parameters? We rigorously answer this question by presenting precise theorems proving that for a broad class of problems there is, at most, a very limited intrinsic advantage to using entangled states or global measurements. Moreover, for many estimation problems separable states and local measurements are optimal, and can achieve the ultimate quantum limit on the estimation uncertainty. This immediately implies that there are broad conditions under which simultaneous estimation of multiple parameters cannot outperform individual, independent estimations. Our results apply to any situation in which spatially localized sensors are unitarily encoded with independent parameters, such as when estimating multiple linear or nonlinear optical phase shifts in quantum imaging, or when mapping out the spatial profile of an unknown magnetic field. We conclude by showing that entangling the sensors can enhance the estimation precision when the parameters of interest are global properties of the entire network.", "title": "" }, { "docid": "947bb564a2a4207d33ca545d8194add4", "text": "Classical theories of the firm assume access to reliable signals to measure the causal impact of choice variables on profit. For advertising expenditure we show, using twenty-five online field experiments (representing $2.8 million) with major U.S. retailers and brokerages, that this assumption typically does not hold. Statistical evidence from the randomized trials is very weak because individual-level sales are incredibly volatile relative to the per capita cost of a campaign—a “small” impact on a noisy dependent variable can generate positive returns. A concise statistical argument shows that the required sample size for an experiment to generate sufficiently informative confidence intervals is typically in excess of ten million person-weeks. This also implies that heterogeneity bias (or model misspecification) unaccounted for by observational methods only needs to explain a tiny fraction of the variation in sales to severely bias estimates. The weak informational feedback means most firms cannot even approach profit maximization.", "title": "" }, { "docid": "553ec50cb948fb96d96b5481ada71399", "text": "Enormous amount of online information, available in legal domain, has made legal text processing an important area of research. In this paper, we attempt to survey different text summarization techniques that have taken place in the recent past. We put special emphasis on the issue of legal text summarization, as it is one of the most important areas in legal domain. We start with general introduction to text summarization, briefly touch the recent advances in single and multi-document summarization, and then delve into extraction based legal text summarization. We discuss different datasets and metrics used in summarization and compare performances of different approaches, first in general and then focused to legal text. we also mention highlights of different summarization techniques. 
We briefly cover a few software tools used in legal text summarization. We finally conclude with some future research directions.", "title": "" }, { "docid": "577b9ea82dd60b394ad3024452986d96", "text": "Financial fraud is an issue with far reaching consequences in the finance industry, government, corporate sectors, and for ordinary consumers. Increasing dependence on new technologies such as cloud and mobile computing in recent years has compounded the problem. Traditional methods involving manual detection are not only time consuming, expensive and inaccurate, but in the age of big data they are also impractical. Not surprisingly, financial institutions have turned to automated processes using statistical and computational methods. This paper presents a comprehensive review of financial fraud detection research using such data mining methods, with a particular focus on computational intelligence (CI)-based techniques. Over fifty scientific literature, primarily spanning the period 2004-2014, were analysed in this study; literature that reported empirical studies focusing specifically on CI-based financial fraud detection were considered in particular. Research gap was identified as none of the existing review articles addresses the association among fraud types, CIbased detection algorithms and their performance, as reported in the literature. We have presented a comprehensive classification as well as analysis of existing fraud detection literature based on key aspects such as detection algorithm used, fraud type investigated, and performance of the detection methods for specific financial fraud types. Some of the key issues and challenges associated with the current practices and potential future direction of research have also", "title": "" }, { "docid": "338d3b05db192186bb6caf6f36904dd0", "text": "The threat of malicious insiders to organizations is persistent and increasing. We examine 15 real cases of insider threat sabotage of IT systems to identify several key points in the attack time-line, such as when the insider clearly became disgruntled, began attack preparations, and carried out the attack. We also determine when the attack stopped, when it was detected, and when action was taken on the insider. We found that 7 of the insiders we studied clearly became disgruntled more than 28 days prior to attack, but 9 did not carry out malicious acts until less than a day prior to attack. Of the 15 attacks, 8 ended within a day, 12 were detected within a week, and in 10 cases action was taken on the insider within a month. This exercise is a proof-of-concept for future work on larger data sets, and in this paper we detail our study methods and results, discuss challenges we faced, and identify potential new research directions.", "title": "" }, { "docid": "3256b2050c603ca16659384a0e98a22c", "text": "In this paper, we propose a Hough transform-based method to identify low-contrast defects in unevenly illuminated images, and especially focus on the inspection of mura defects in liquid crystal display (LCD) panels. The proposed method works on 1-D gray-level profiles in the horizontal and vertical directions of the surface image. A point distinctly deviated from the ideal line of a profile can be identified as a defect one. A 1-D gray-level profile in the unevenly illuminated image results in a nonstationary line signal. The most commonly used technique for straight line detection in a noisy image is Hough transform (HT). 
The standard HT requires a sufficient number of points lie exactly on the same straight line at a given parameter resolution so that the accumulator will show a distinct peak in the parameter space. It fails to detect a line in a nonstationary signal. In the proposed HT scheme, the points that contribute to the vote do not have to lie on a line. Instead, a distance tolerance to the line sought is first given. Any point with the distance to the line falls within the tolerance will be accumulated by taking the distance as the voting weight. A fast search procedure to tighten the possible ranges of line parameters is also proposed for mura detection in LCD images.", "title": "" }, { "docid": "e775fbbad557e2335268111ab7fc1875", "text": "In recent times the rate at which information is being processed and shared through the internet has tremendously increased. Internet users are in need of systems and tools that will help them manage this information overload. Search engines and recommendation systems have been recently adopted to help solve this problem. The aim of this research is to model a spontaneous research paper recommender system that recommends serendipitous research papers from two large normally mismatched information spaces or domains using BisoNets. Set and graph theory methods were employed to model the problem, whereas text mining methodologies were used to develop nodes and links of the BisoNets. Nodes were constructed from keywords, while links between nodes were established through weighting that was determined from the co-occurrence of corresponding keywords in the same title and domain. Preliminary results from the word clouds indicates that there is no obvious relationship between the two domains. The strongest links in the established information networks can be exploited to display associations that can be discovered between the two matrices. Research paper recommender systems exploit these latent relationships to recommend serendipitous articles when Bisociative Knowledge Discovery techniques and methodologies are utilized appropriately.", "title": "" }, { "docid": "9d849042d1775cf9008678f98f1a3452", "text": "Nonuniform sampling can be utilized to achieve certain desirable results. Periodic nonuniform sampling can decrease the required sampling rate for signals. Random sampling can be used as a digital alias-free signal processing method in analog-to-digital conversion. In this paper, we first present the fractional spectrum estimation of signals that are bandlimited in the fractional Fourier domain based on the general periodic random sampling approach. To show the estimation effect, the unbiasedness, the variance, and the optimal estimation condition are analyzed. The reconstruction of the fractional spectrum from the periodic random samples is also proposed. Second, the effects of sampling jitters and observation errors on the performance of the fractional spectrum estimation are analyzed, where the new defined fractional characteristic function is used to compensate the estimation bias from sampling jitters. Furthermore, we investigate the fractional spectral analysis from two widely used random sampling schemes, i.e., simple random sampling and stratified random sampling. 
Finally, all of the analysis results are applied and verified using a radar signal processing system.", "title": "" }, { "docid": "5ccda95046b0e5d1cfc345011b1e350d", "text": "Considerable emphasis is currently placed on reducing healthcare-associated infection through improving hand hygiene compliance among healthcare professionals. There is also increasing discussion in the lay media of perceived poor hand hygiene compliance among healthcare staff. Our aim was to report the outcomes of a systematic search for peer-reviewed, published studies - especially clinical trials - that focused on hand hygiene compliance among healthcare professionals. Literature published between December 2009, after publication of the World Health Organization (WHO) hand hygiene guidelines, and February 2014, which was indexed in PubMed and CINAHL on the topic of hand hygiene compliance, was searched. Following examination of relevance and methodology of the 57 publications initially retrieved, 16 clinical trials were finally included in the review. The majority of studies were conducted in the USA and Europe. The intensive care unit emerged as the predominant focus of studies followed by facilities for care of the elderly. The category of healthcare worker most often the focus of the research was the nurse, followed by the healthcare assistant and the doctor. The unit of analysis reported for hand hygiene compliance was 'hand hygiene opportunity'; four studies adopted the 'my five moments for hand hygiene' framework, as set out in the WHO guidelines, whereas other papers focused on unique multimodal strategies of varying design. We concluded that adopting a multimodal approach to hand hygiene improvement intervention strategies, whether guided by the WHO framework or by another tested multimodal framework, results in moderate improvements in hand hygiene compliance.", "title": "" }, { "docid": "4fc67f5a4616db0906b943d7f13c856d", "text": "Overview. A blockchain is best understood in the model of state-machine replication [8], where a service maintains some state and clients invoke operations that transform the state and generate outputs. A blockchain emulates a “trusted” computing service through a distributed protocol, run by nodes connected over the Internet. The service represents or creates an asset, in which all nodes have some stake. The nodes share the common goal of running the service but do not necessarily trust each other for more. In a “permissionless” blockchain such as the one underlying the Bitcoin cryptocurrency, anyone can operate a node and participate through spending CPU cycles and demonstrating a “proof-of-work.” On the other hand, blockchains in the “permissioned” model control who participates in validation and in the protocol; these nodes typically have established identities and form a consortium. A report of Swanson compares the two models [9].", "title": "" }, { "docid": "ecbdb56c52a59f26cf8e33fc533d608f", "text": "The ethical nature of transformational leadership has been hotly debated. This debate is demonstrated in the range of descriptors that have been used to label transformational leaders including narcissistic, manipulative, and self-centred, but also ethical, just and effective. Therefore, the purpose of the present research was to address this issue directly by assessing the statistical relationship between perceived leader integrity and transformational leadership using the Perceived Leader Integrity Scale (PLIS) and the Multi-Factor Leadership Questionnaire (MLQ). 
In a national sample of 1354 managers a moderate to strong positive relationship was found between perceived integrity and the demonstration of transformational leadership behaviours. A similar relationship was found between perceived integrity and developmental exchange leadership. A systematic leniency bias was identified when respondents rated subordinates vis-à-vis peer ratings. In support of previous findings, perceived integrity was also found to correlate positively with leader and organisational effectiveness measures.", "title": "" }, { "docid": "27464fdcd9a56975bf381773fd4da76d", "text": "Although evidence with respect to its prevalence is mixed, it is clear that fathers perpetrate a serious proportion of filicide. There also seems to be a consensus that paternal filicide has attracted less research attention than its maternal counterpart and is therefore less well understood. National registries are a very rich source of data, but they generally provide limited information about the perpetrator as psychiatric, psychological and behavioral data are often lacking. This paper presents a fully documented case of a paternal filicide. Noteworthy is that two motives were present: spousal revenge as well as altruism. The choice of the victim was in line with emerging evidence indicating that children with disabilities in general and with autism in particular are frequent victims of filicide-suicide. Finally, a schizoid personality disorder was diagnosed. Although research is quite scarce on that matter, some research outcomes have showed an association between schizoid personality disorder and homicide and violence.", "title": "" }, { "docid": "7eac260700c56178533ec687159ac244", "text": "Chat robot, a computer program that simulates human conversation, or chat, through artificial intelligence an intelligence chat bot will be used to give information or answers to any question asked by user related to bank. It is more like a virtual assistant, people feel like they are talking with real person. They speak the same language we do, can answer questions. In banks, at user care centres and enquiry desks, human is insufficient and usually takes long time to process the single request which results in wastage of time and also reduce quality of user service. The primary goal of this chat bot is user can interact with mentioning their queries in plain English and the chat bot can resolve their queries with appropriate response in return The proposed system would help duplicate the user utility experience with one difference that employee and yet get the queries attended and resolved. It can extend daily life, by providing solutions to help desks, telephone answering systems, user care centers. This paper defines the dataset that we have prepared from FAQs of bank websites, architecture and methodology used for developing such chatbot. Also this paper discusses the comparison of seven ML classification algorithm used for getting the class of input to chat bot.", "title": "" }, { "docid": "9c09cf2c1fd62e7d24f472e03b615017", "text": "Summarization is the process of reducing a text document to create a summary that retains the most important points of the original document. Extractive summarizers work on the given text to extract sentences that best convey the message hidden in the text. Most extractive summarization techniques revolve around the concept of finding keywords and extracting sentences that have more keywords than the rest. 
Keyword extraction usually is done by extracting relevant words having a higher frequency than others, with stress on important ones'. Manual extraction or annotation of keywords is a tedious process brimming with errors involving lots of manual effort and time. In this paper, we proposed an algorithm to extract keyword automatically for text summarization in e-newspaper datasets. The proposed algorithm is compared with the experimental result of articles having the similar title in four different e-Newspapers to check the similarity and consistency in summarized results.", "title": "" }, { "docid": "9cf81f7fc9fdfcf5718aba0a67b89a45", "text": "Many modern games provide environments in which agents perform decision making at several levels of granularity. In the domain of real-time strategy games, an effective agent must make high-level strategic decisions while simultaneously controlling individual units in battle. We advocate reactive planning as a powerful technique for building multi-scale game AI and demonstrate that it enables the specification of complex, real-time agents in a unified agent architecture. We present several idioms used to enable authoring of an agent that concurrently pursues strategic and tactical goals, and an agent for playing the real-time strategy game StarCraft that uses these design patterns.", "title": "" }, { "docid": "ce0f21b03d669b72dd954352e2c35ab1", "text": "In this letter, a new technique is proposed for the design of a compact high-power low-pass rectangular waveguide filter with a wide spurious-free frequency behavior. Specifically, the new filter is intended for the suppression of the fundamental mode over a wide band in much higher power applications than the classical corrugated filter with the same frequency specifications. Moreover, the filter length is dramatically reduced when compared to alternative techniques previously considered.", "title": "" }, { "docid": "9d7a67f2cd12a6fd033ad102fb9c526e", "text": "We begin by pretraining the source task model, fS , using the task loss on the labeled source data. Next, we perform pixel-level adaptation using our image space GAN losses together with semantic consistency and cycle consistency losses. This yeilds learned parameters for the image transformations, GS!T and GT!S , image discriminators, DS and DT , as well as an initial setting of the task model, fT , which is trained using pixel transformed source images and the corresponding source pixel labels. Finally, we perform feature space adpatation in order to update the target semantic model, fT , to have features which are aligned between the source images mapped into target style and the real target images. During this phase, we learn the feature discriminator, Dfeat and use this to guide the representation update to fT . In general, our method could also perform phases 2 and 3 simultaneously, but this would require more GPU memory then available at the time of these experiments.", "title": "" } ]
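Among the passages above, the Hough-transform one describes a concrete algorithmic change: instead of requiring points to lie exactly on a candidate line, every point within a distance tolerance of the line contributes a vote whose weight depends on that distance. The sketch below is an illustrative reading of that idea, not code from the cited paper; in particular the exact weighting is not spelled out in the passage, so the 1 - d/tolerance weight, the grid resolutions, and the synthetic data are assumptions.

import numpy as np

def tolerant_hough(points, rho_res=1.0, theta_res=np.pi / 180, tol=3.0):
    # Accumulate weighted votes over a (rho, theta) grid. For each point and
    # angle, every rho bin within `tol` of the point's exact rho receives a
    # vote weighted by how close the point is to that candidate line.
    pts = np.asarray(points, dtype=float)
    max_rho = np.hypot(np.abs(pts[:, 0]).max(), np.abs(pts[:, 1]).max()) + tol
    rhos = np.arange(-max_rho, max_rho, rho_res)
    thetas = np.arange(0.0, np.pi, theta_res)
    acc = np.zeros((len(rhos), len(thetas)))
    for x, y in pts:
        rho_p = x * np.cos(thetas) + y * np.sin(thetas)
        for j, rp in enumerate(rho_p):
            d = np.abs(rhos - rp)          # point-to-line distances at angle j
            near = d <= tol
            acc[near, j] += 1.0 - d[near] / tol
    return acc, rhos, thetas

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    xs = np.linspace(0.0, 50.0, 60)
    ys = 0.5 * xs + 2.0 + rng.normal(0.0, 1.0, size=xs.shape)   # noisy line
    acc, rhos, thetas = tolerant_hough(np.column_stack([xs, ys]))
    i, j = np.unravel_index(np.argmax(acc), acc.shape)
    print("peak line: rho=%.1f, theta=%.1f deg" % (rhos[i], np.degrees(thetas[j])))

Because the background in the mura-detection setting is a slowly varying (nonstationary) profile, the tolerance lets slightly off-line points still support the dominant trend, which is the property the passage emphasises.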
scidocsrr
d17f2cc0093908c1a716ab0b788169e8
RoarNet: A Robust 3D Object Detection based on RegiOn Approximation Refinement
[ { "docid": "df609125f353505fed31eee302ac1742", "text": "We present a method for 3D object detection and pose estimation from a single image. In contrast to current techniques that only regress the 3D orientation of an object, our method first regresses relatively stable 3D object properties using a deep convolutional neural network and then combines these estimates with geometric constraints provided by a 2D object bounding box to produce a complete 3D bounding box. The first network output estimates the 3D object orientation using a novel hybrid discrete-continuous loss, which significantly outperforms the L2 loss. The second output regresses the 3D object dimensions, which have relatively little variance compared to alternatives and can often be predicted for many object types. These estimates, combined with the geometric constraints on translation imposed by the 2D bounding box, enable us to recover a stable and accurate 3D object pose. We evaluate our method on the challenging KITTI object detection benchmark [2] both on the official metric of 3D orientation estimation and also on the accuracy of the obtained 3D bounding boxes. Although conceptually simple, our method outperforms more complex and computationally expensive approaches that leverage semantic segmentation, instance level segmentation and flat ground priors [4] and sub-category detection [23][24]. Our discrete-continuous loss also produces state of the art results for 3D viewpoint estimation on the Pascal 3D+ dataset[26].", "title": "" }, { "docid": "a214ed60c288762210189f14a8cf8256", "text": "We propose a CNN-based approach for 3D human body pose estimation from single RGB images that addresses the issue of limited generalizability of models trained solely on the starkly limited publicly available 3D pose data. Using only the existing 3D pose data and 2D pose data, we show state-of-the-art performance on established benchmarks through transfer of learned features, while also generalizing to in-the-wild scenes. We further introduce a new training set for human body pose estimation from monocular images of real humans that has the ground truth captured with a multi-camera marker-less motion capture system. It complements existing corpora with greater diversity in pose, human appearance, clothing, occlusion, and viewpoints, and enables an increased scope of augmentation. We also contribute a new benchmark that covers outdoor and indoor scenes, and demonstrate that our 3D pose dataset shows better in-the-wild performance than existing annotated data, which is further improved in conjunction with transfer learning from 2D pose data. All in all, we argue that the use of transfer learning of representations in tandem with algorithmic and data contributions is crucial for general 3D body pose estimation.", "title": "" }, { "docid": "73a62915c29942d2fac0570cac7eb3e0", "text": "In this paper, we present a novel approach, called Deep MANTA (Deep Many-Tasks), for many-task vehicle analysis from a given image. A robust convolutional network is introduced for simultaneous vehicle detection, part localization, visibility characterization and 3D dimension estimation. Its architecture is based on a new coarse-to-fine object proposal that boosts the vehicle detection. Moreover, the Deep MANTA network is able to localize vehicle parts even if these parts are not visible. In the inference, the networks outputs are used by a real time robust pose estimation algorithm for fine orientation estimation and 3D vehicle localization. 
We show in experiments that our method outperforms monocular state-of-the-art approaches on vehicle detection, orientation and 3D location tasks on the very challenging KITTI benchmark.", "title": "" } ]
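The positive passages for this query recover full 3D boxes by combining regressed dimensions and orientation with the geometry of the 2D detection. As background for that constraint, and not as code from any of the cited papers, the sketch below shows the forward relation such methods rely on: given box dimensions, a yaw angle, and a 3D location in camera coordinates, build the 8 box corners and project them with the camera intrinsics; the fitted 3D box is the one whose projected corners agree with the 2D box. The KITTI-style axis convention (y pointing down, box origin at the centre of the bottom face) and the intrinsics values are assumptions for the example.

import numpy as np

def box3d_corners(dims, yaw, location):
    # 8 corners of a 3D box in camera coordinates. dims = (h, w, l);
    # `location` is the centre of the bottom face; the y-axis points down.
    h, w, l = dims
    x = np.array([ l/2,  l/2, -l/2, -l/2,  l/2,  l/2, -l/2, -l/2])
    y = np.array([ 0.0,  0.0,  0.0,  0.0,  -h,   -h,   -h,   -h ])
    z = np.array([ w/2, -w/2, -w/2,  w/2,  w/2, -w/2, -w/2,  w/2])
    R = np.array([[ np.cos(yaw), 0.0, np.sin(yaw)],
                  [ 0.0,         1.0, 0.0        ],
                  [-np.sin(yaw), 0.0, np.cos(yaw)]])   # rotation about y
    return R @ np.vstack([x, y, z]) + np.asarray(location).reshape(3, 1)

def project(points3d, K):
    # Pinhole projection of 3xN camera-frame points with intrinsics K.
    uvw = K @ points3d
    return uvw[:2] / uvw[2:3]

if __name__ == "__main__":
    K = np.array([[721.5, 0.0, 609.6],
                  [0.0, 721.5, 172.9],
                  [0.0,   0.0,   1.0]])   # illustrative KITTI-like intrinsics
    corners = box3d_corners(dims=(1.5, 1.6, 3.9), yaw=0.3, location=(1.0, 1.7, 15.0))
    uv = project(corners, K)
    print("projected 2D extent (xmin, ymin, xmax, ymax):",
          uv[0].min(), uv[1].min(), uv[0].max(), uv[1].max())

In the geometric-constraint formulation described above, the unknown location is the quantity solved for: candidate locations are scored by how tightly the projected corners fit the detected 2D box, given the regressed dimensions and orientation.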
[ { "docid": "2fd7cc65c34551c90a72fc3cb4665336", "text": "Generating natural language requires conveying content in an appropriate style. We explore two related tasks on generating text of varying formality: monolingual formality transfer and formality-sensitive machine translation. We propose to solve these tasks jointly using multi-task learning, and show that our models achieve state-of-the-art performance for formality transfer and are able to perform formality-sensitive translation without being explicitly trained on styleannotated translation examples.", "title": "" }, { "docid": "ee865e3291eff95b5977b54c22b59f19", "text": "Fuzzing is a process where random, almost valid, input streams are automatically generated and fed into computer systems in order to test the robustness of userexposed interfaces. We fuzz the Linux kernel system call interface; unlike previous work that attempts to generically fuzz all of an operating system’s system calls, we explore the effectiveness of using specific domain knowledge and focus on finding bugs and security issues related to a single Linux system call. The perf event open() system call was introduced in 2009 and has grown to be a complex interface with over 40 arguments that interact in subtle ways. By using detailed knowledge of typical perf event usage patterns we develop a custom tool, perf fuzzer, that has found bugs that more generic, system-wide, fuzzers have missed. Numerous crashing bugs have been found, including a local root exploit. Fixes for these bugs have been merged into the main Linux source tree. Testing continues to find new bugs, although they are increasingly hard to isolate, requiring development of new isolation techniques and helper utilities. We describe the development of perf fuzzer, examine the bugs found, and discuss ways that this work can be extended to find more bugs and cover other system calls.", "title": "" }, { "docid": "661d5db6f4a8a12b488d6f486ea5995e", "text": "Reliability and high availability have always been a major concern in distributed systems. Providing highly available and reliable services in cloud computing is essential for maintaining customer confidence and satisfaction and preventing revenue losses. Although various solutions have been proposed for cloud availability and reliability, but there are no comprehensive studies that completely cover all different aspects in the problem. This paper presented a ‘Reference Roadmap’ of reliability and high availability in cloud computing environments. A big picture was proposed which was divided into four steps specifying through four pivotal questions starting with ‘Where?’, ‘Which?’, ‘When?’ and ‘How?’ keywords. The desirable result of having a highly available and reliable cloud system could be gained by answering these questions. Each step of this reference roadmap proposed a specific concern of a special portion of the issue. Two main research gaps were proposed by this reference roadmap.", "title": "" }, { "docid": "cef79010b9772639d42351c960b68c83", "text": "In many real world elections, agents are not required to rank all candidates. We study three of the most common meth ods used to modify voting rules to deal with such partial votes. These methods modify scoring rules (like the Borda count), e limination style rules (like single transferable vote) and rule s based on the tournament graph (like Copeland) respectively. 
We argu e that with an elimination style voting rule like single transfera ble vote, partial voting does not change the situations where strateg ic voting is possible. However, with scoring rules and rules based on the tournament graph, partial voting can increase the situations wher e strategic voting is possible. As a consequence, the computational com plexity of computing a strategic vote can change. For example, with B orda count, the complexity of computing a strategic vote can decr ease or stay the same depending on how we score partial votes.", "title": "" }, { "docid": "8af777a64f8f2127552a05c8ea462416", "text": "This work addresses the issue of fire and smoke detection in a scene within a video surveillance framework. Detection of fire and smoke pixels is at first achieved by means of a motion detection algorithm. In addition, separation of smoke and fire pixels using colour information (within appropriate spaces, specifically chosen in order to enhance specific chromatic features) is performed. In parallel, a pixel selection based on the dynamics of the area is carried out in order to reduce false detection. The output of the three parallel algorithms are eventually fused by means of a MLP.", "title": "" }, { "docid": "fca58dee641af67f9bb62958b5b088f2", "text": "This work explores the possibility of mixing two different fingerprints, pertaining to two different fingers, at the image level in order to generate a new fingerprint. To mix two fingerprints, each fingerprint pattern is decomposed into two different components, viz., the continuous and spiral components. After prealigning the components of each fingerprint, the continuous component of one fingerprint is combined with the spiral component of the other fingerprint. Experiments on the West Virginia University (WVU) and FVC2002 datasets show that mixing fingerprints has several benefits: (a) it can be used to generate virtual identities from two different fingers; (b) it can be used to obscure the information present in an individual's fingerprint image prior to storing it in a central database; and (c) it can be used to generate a cancelable fingerprint template, i.e., the template can be reset if the mixed fingerprint is compromised.", "title": "" }, { "docid": "b14010454fe4b9f9712c13cbf9a5e23b", "text": "In this paper we propose an approach to Part of Speech (PoS) tagging using a combination of Hidden Markov Model and error driven learning. For the NLPAI joint task, we also implement a chunker using Conditional Random Fields (CRFs). The results for the PoS tagging and chunking task are separately reported along with the results of the joint task.", "title": "" }, { "docid": "44ffac24ef4d30a8104a2603bb1cdcb1", "text": "Most object detectors contain two important components: a feature extractor and an object classifier. The feature extractor has rapidly evolved with significant research efforts leading to better deep convolutional architectures. The object classifier, however, has not received much attention and many recent systems (like SPPnet and Fast/Faster R-CNN) use simple multi-layer perceptrons. This paper demonstrates that carefully designing deep networks for object classification is just as important. We experiment with region-wise classifier networks that use shared, region-independent convolutional features. We call them “Networks on Convolutional feature maps” (NoCs). 
We discover that aside from deep feature maps, a deep and convolutional per-region classifier is of particular importance for object detection, whereas latest superior image classification models (such as ResNets and GoogLeNets) do not directly lead to good detection accuracy without using such a per-region classifier. We show by experiments that despite the effective ResNets and Faster R-CNN systems, the design of NoCs is an essential element for the 1st-place winning entries in ImageNet and MS COCO challenges 2015.", "title": "" }, { "docid": "69d8d5b38456b30d3252d95cb43734cf", "text": "Article prepared for a revised edition of the ENCYCLOPEDIA OF ARTIFICIAL INTELLIGENCE, S. Shapiro (editor), to be published by John Wiley, 1992. Final Draft; DO NOT REPRODUCE OR CIRCULATE. This copy is for review only. Please do not cite or copy. Prepared using troff, pic, eqn, tbl and bib under Unix 4.3 BSD.", "title": "" }, { "docid": "e61a0ba24db737d42a730d5738583ffa", "text": "We present a logical formalism for expressing properties of continuous time Markov chains. The semantics for such properties arise as a natural extension of previous work on discrete time Markov chains to continuous time. The major result is that the veriication problem is decidable; this is shown using results in algebraic and transcendental number theory.", "title": "" }, { "docid": "533b8bf523a1fb69d67939607814dc9c", "text": "Docker is an open platform for developers and system administrators to build, ship, and run distributed applications using Docker Engine, a portable, lightweight runtime and packaging tool, and Docker Hub, a cloud service for sharing applications and automating workflows. The main advantage is that, Docker can get code tested and deployed into production as fast as possible. Different applications can be run over Docker containers with language independency. In this paper the performance of these Docker containers are evaluated based on their system performance. That is based on system resource utilization. Different benchmarking tools are used for this. Performance based on file system is evaluated using Bonnie++. Other system resources such as CPU utilization, memory utilization etc. are evaluated based on the benchmarking code (using psutil) developed using python. Detail results obtained from all these tests are also included in this paper. The results include CPU utilization, memory utilization, CPU count, CPU times, Disk partition, network I/O counter etc.", "title": "" }, { "docid": "68b2608c91525f3147f74b41612a9064", "text": "Protective effects of sweet orange (Citrus sinensis) peel and their bioactive compounds on oxidative stress were investigated. According to HPLC-DAD and HPLC-MS/MS analysis, hesperidin (HD), hesperetin (HT), nobiletin (NT), and tangeretin (TT) were present in water extracts of sweet orange peel (WESP). The cytotoxic effect in 0.2mM t-BHP-induced HepG2 cells was inhibited by WESP and their bioactive compounds. The protective effect of WESP and their bioactive compounds in 0.2mM t-BHP-induced HepG2 cells may be associated with positive regulation of GSH levels and antioxidant enzymes, decrease in ROS formation and TBARS generation, increase in the mitochondria membrane potential and Bcl-2/Bax ratio, as well as decrease in caspase-3 activation. 
Overall, WESP displayed a significant cytoprotective effect against oxidative stress, which may be most likely because of the phenolics-related bioactive compounds in WESP, leading to maintenance of the normal redox status of cells.", "title": "" }, { "docid": "dea52c761a9f4d174e9bd410f3f0fa38", "text": "Much computational work has been done on identifying and interpreting the meaning of metaphors, but little work has been done on understanding the motivation behind the use of metaphor. To computationally model discourse and social positioning in metaphor, we need a corpus annotated with metaphors relevant to speaker intentions. This paper reports a corpus study as a first step towards computational work on social and discourse functions of metaphor. We use Amazon Mechanical Turk (MTurk) to annotate data from three web discussion forums covering distinct domains. We then compare these to annotations from our own annotation scheme which distinguish levels of metaphor with the labels: nonliteral, conventionalized, and literal. Our hope is that this work raises questions about what new work needs to be done in order to address the question of how metaphors are used to achieve social goals in interaction.", "title": "" }, { "docid": "a03d0772d8c3e1fd5c954df2b93757e3", "text": "The tumor microenvironment is a complex system, playing an important role in tumor development and progression. Besides cellular stromal components, extracellular matrix fibers, cytokines, and other metabolic mediators are also involved. In this review we outline the potential role of hypoxia, a major feature of most solid tumors, within the tumor microenvironment and how it contributes to immune resistance and immune suppression/tolerance and can be detrimental to antitumor effector cell functions. We also outline how hypoxic stress influences immunosuppressive pathways involving macrophages, myeloid-derived suppressor cells, T regulatory cells, and immune checkpoints and how it may confer tumor resistance. Finally, we discuss how microenvironmental hypoxia poses both obstacles and opportunities for new therapeutic immune interventions.", "title": "" }, { "docid": "e0b8b4e916f5e4799ad2ab95d71b0b26", "text": "Automation plays a very important role in every field of human life. This paper contains the proposal of a fully automated menu ordering system in which the paper based menu is replaced by a user friendly Touchscreen based menu card. The system has PIC microcontroller which is interfaced with the input and output modules. The input module is the touchscreen sensor which is placed on GLCD (Graphical Liquid Crystal Display) to have a graphic image display, which takes the input from the user and provides the same information to the microcontroller. The output module is a Zigbee module which is used for communication between system at the table and system for receiving section. Microcontroller also displays the menu items on the GLCD. At the receiving end the selected items will be displayed on the LCD and by using the conveyer belt the received order will send to the particular table.", "title": "" }, { "docid": "257ffbc75578916dc89a703598ac0447", "text": "Implant surgery in mandibular anterior region may turn from an easy minor surgery into a complicated one for the surgeon, due to inadequate knowledge of the anatomy of the surgical area and/or ignorance toward the required surgical protocol. 
Hence, the purpose of this article is to present an overview on the: (a) Incidence of massive bleeding and its consequences after implant placement in mandibular anterior region. (b) Its etiology, the precautionary measures to be taken to avoid such an incidence in clinical practice and management of such a hemorrhage if at all happens. An inclusion criterion for selection of article was defined, and an electronic Medline search through different database using different keywords and manual search in journals and books was executed. Relevant articles were selected based upon inclusion criteria to form the valid protocols for implant surgery in the anterior mandible. Further, from the selected articles, 21 articles describing case reports were summarized separately in a table to alert the dental surgeons about the morbidity they could come across while operating in this region. If all the required adequate measures for diagnosis and treatment planning are taken and appropriate surgical protocol is followed, mandibular anterior region is no doubt a preferable area for implant placement.", "title": "" }, { "docid": "f3e9858900dd75c86d106856e63f1ab2", "text": "In the near future, new storage-class memory (SCM) technologies -- such as phase-change memory and memristors -- will radically change the nature of long-term storage. These devices will be cheap, non-volatile, byte addressable, and near DRAM density and speed. While SCM offers enormous opportunities, profiting from them will require new storage systems specifically designed for SCM's properties.\n This paper presents Echo, a persistent key-value storage system designed to leverage the advantages and address the challenges of SCM. The goals of Echo include high performance for both small and large data objects, recoverability after failure, and scalability on multicore systems. Echo achieves its goals through the use of a two-level memory design targeted for memory systems containing both DRAM and SCM, exploitation of SCM's byte addressability for fine-grained transactions in non-volatile memory, and the use of snapshot isolation for concurrency, consistency, and versioning. Our evaluation demonstrates that Echo's SCM-centric design achieves the durability guarantees of the best disk-based stores with the performance characteristics approaching the best in-memory key-value stores.", "title": "" }, { "docid": "809392d489af5e1f8e85a9ad8a8ba9e0", "text": "Although a large number of ion channels are now believed to be regulated by phosphoinositides, particularly phosphoinositide 4,5-bisphosphate (PIP2), the mechanisms involved in phosphoinositide regulation are unclear. For the TRP superfamily of ion channels, the role and mechanism of PIP2 modulation has been especially difficult to resolve. Outstanding questions include: is PIP2 the endogenous regulatory lipid; does PIP2 potentiate all TRPs or are some TRPs inhibited by PIP2; where does PIP2 interact with TRP channels; and is the mechanism of modulation conserved among disparate subfamilies? We first addressed whether the PIP2 sensor resides within the primary sequence of the channel itself, or, as recently proposed, within an accessory integral membrane protein called Pirt. Here we show that Pirt does not alter the phosphoinositide sensitivity of TRPV1 in HEK-293 cells, that there is no FRET between TRPV1 and Pirt, and that dissociated dorsal root ganglion neurons from Pirt knock-out mice have an apparent affinity for PIP2 indistinguishable from that of their wild-type littermates. 
We followed by focusing on the role of the C terminus of TRPV1 in sensing PIP2. Here, we show that the distal C-terminal region is not required for PIP2 regulation, as PIP2 activation remains intact in channels in which the distal C-terminal has been truncated. Furthermore, we used a novel in vitro binding assay to demonstrate that the proximal C-terminal region of TRPV1 is sufficient for PIP2 binding. Together, our data suggest that the proximal C-terminal region of TRPV1 can interact directly with PIP2 and may play a key role in PIP2 regulation of the channel.", "title": "" }, { "docid": "b19e77ddb2c2ca5cc18bd8ba5425a698", "text": "In pharmaceutical formulations, phospholipids obtained from plant or animal sources and synthetic phospholipids are used. Natural phospholipids are purified from, e.g., soybeans or egg yolk using non-toxic solvent extraction and chromatographic procedures with low consumption of energy and minimum possible waste. Because of the use of validated purification procedures and sourcing of raw materials with consistent quality, the resulting products differing in phosphatidylcholine content possess an excellent batch to batch reproducibility with respect to phospholipid and fatty acid composition. The natural phospholipids are described in pharmacopeias and relevant regulatory guidance documentation of the Food and Drug Administration (FDA) and European Medicines Agency (EMA). Synthetic phospholipids with specific polar head group, fatty acid composition can be manufactured using various synthesis routes. Synthetic phospholipids with the natural stereochemical configuration are preferably synthesized from glycerophosphocholine (GPC), which is obtained from natural phospholipids, using acylation and enzyme catalyzed reactions. Synthetic phospholipids play compared to natural phospholipid (including hydrogenated phospholipids), as derived from the number of drug products containing synthetic phospholipids, a minor role. Only in a few pharmaceutical products synthetic phospholipids are used. Natural phospholipids are used in oral, dermal, and parenteral products including liposomes. Natural phospholipids instead of synthetic phospholipids should be selected as phospholipid excipients for formulation development, whenever possible, because natural phospholipids are derived from renewable sources and produced with more ecologically friendly processes and are available in larger scale at relatively low costs compared to synthetic phospholipids. Practical applications: For selection of phospholipid excipients for pharmaceutical formulations, natural phospholipids are preferred compared to synthetic phospholipids because they are available at large scale with reproducible quality at lower costs of goods. They are well accepted by regulatory authorities and are produced using less chemicals and solvents at higher yields. In order to avoid scale up problems during pharmaceutical development and production, natural phospholipid excipients instead of synthetic phospholipids should be selected whenever possible.", "title": "" }, { "docid": "d372c1fba12412dac5dc850baf3267b9", "text": "Smart grid is an intelligent power network featured by its two-way flows of electricity and information. With an integrated communication infrastructure, smart grid manages the operation of all connected components to provide reliable and sustainable electricity supplies. Many advanced communication technologies have been identified for their applications in different domains of smart grid networks. 
This paper focuses on wireless communication networking technologies for smart grid neighborhood area networks (NANs). In particular, we aim to offer a comprehensive survey to address various important issues on implementation of smart grid NANs, including network topology, gateway deployment, routing algorithms, and security. We will identify four major challenges for the implementation of NANs, including timeliness management, security assurance, compatibility design, and cognitive spectrum access, based on which the future research directions are suggested.", "title": "" } ]
scidocsrr
244f19e37a8cdaeba09b9581f772e37d
Workload Management in Dynamic IT Service Delivery Organizations
[ { "docid": "254a84aae5d06ae652996535027e282c", "text": "Change management is a process by which IT systems are modified to accommodate considerations such as software fixes, hardware upgrades and performance enhancements. This paper discusses the CHAMPS system, a prototype under development at IBM Research for Change Management with Planning and Scheduling. The CHAMPS system is able to achieve a very high degree of parallelism for a set of tasks by exploiting detailed factual knowledge about the structure of a distributed system from dependency information at runtime. In contrast, today's systems expect an administrator to provide such insights, which is often not the case. Furthermore, the optimization techniques we employ allow the CHAMPS system to come up with a very high quality solution for a mathematically intractable problem in a time which scales nicely with the problem size. We have implemented the CHAMPS system and have applied it in a TPC-W environment that implements an on-line book store application.", "title": "" }, { "docid": "b45f832faf2816d456afa25a3641ffe9", "text": "This book is about feedback control of computing systems. The main idea of feedback control is to use measurements of a system’s outputs, such as response times, throughputs, and utilizations, to achieve externally specified goals. This is done by adjusting the system control inputs, such as parameters that affect buffer sizes, scheduling policies, and concurrency levels. Since the measured outputs are used to determine the control inputs, and the inputs then affect the outputs, the architecture is called feedback or closed loop. Almost any system that is considered automatic has some element of feedback control. In this book we focus on the closed-loop control of computing systems and methods for their analysis and design.", "title": "" } ]
[ { "docid": "7ec12c0bf639c76393954baae196a941", "text": "Honeynets have now become a standard part of security measures within the organization. Their purpose is to protect critical information systems and information; this is complemented by acquisition of information about the network threats, attackers and attacks. It is very important to consider issues affecting the deployment and usage of the honeypots and honeynets. This paper discusses the legal issues of honeynets considering their generations. Paper focuses on legal issues of core elements of honeynets, especially data control, data capture and data collection. Paper also draws attention on the issues pertaining to privacy and liability. The analysis of legal issues is based on EU law and it is supplemented by a review of the research literature, related to legal aspects of honeypots and honeynets.", "title": "" }, { "docid": "376f28143deecc7b95fe45d54dd16bb6", "text": "We investigate the problem of lung nodule malignancy suspiciousness (the likelihood of nodule malignancy) classification using thoracic Computed Tomography (CT) images. Unlike traditional studies primarily relying on cautious nodule segmentation and time-consuming feature extraction, we tackle a more challenging task on directly modeling raw nodule patches and building an end-to-end machinelearning architecture for classifying lung nodule malignancy suspiciousness. We present a Multi-crop Convolutional Neural Network (MC-CNN) to automatically extract nodule salient information by employing a novel multi-crop pooling strategy which crops different regions from convolutional feature maps and then applies max-pooling different times. Extensive experimental results show that the proposed method not only achieves state-of-the-art nodule suspiciousness classification performance, but also effectively characterizes nodule semantic attributes (subtlety and margin) and nodule diameter which are potentially helpful in modeling nodule malignancy. & 2016 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "05f3d2097efffb3e1adcbede16ec41d2", "text": "BACKGROUND\nDialysis patients with uraemic pruritus (UP) have significantly impaired quality of life. To assess the therapeutic effect of UP treatments, a well-validated comprehensive and multidimensional instrument needed to be established.\n\n\nOBJECTIVES\nTo develop and validate a multidimensional scale assessing UP in patients on dialysis: the Uraemic Pruritus in Dialysis Patients (UP-Dial).\n\n\nMETHODS\nThe development and validation of the UP-Dial instrument were conducted in four phases: (i) item generation, (ii) development of a pilot questionnaire, (iii) refinement of the questionnaire with patient recruitment and (iv) psychometric validation. Participants completed the UP-Dial, the visual analogue scale (VAS) of UP, the Dermatology Life Quality Index (DLQI), the Kidney Disease Quality of Life-36 (KDQOL-36), the Pittsburgh Sleep Quality Index (PSQI) and the Beck Depression Inventory (BDI) between 15 May 2012 and 30 November 2015.\n\n\nRESULTS\nThe 27-item pilot UP-Dial was generated, with 168 participants completing the pilot scale. After factor analysis was performed, the final 14-item UP-Dial encompassed three domains: signs and symptoms, psychosocial, and sleep. Face and content validity were satisfied through the item generation process and expert review. Psychometric analysis demonstrated that the UP-Dial had good convergent and discriminant validity. 
The UP-Dial was significantly correlated [Spearman rank coefficient, 95% confidence interval (CI)] with the VAS-UP (0·76, 0·69-0·83), DLQI (0·78, 0·71-0·85), KDQOL-36 (-0·86, -0·91 to -0·81), PSQI (0·85, 0·80-0·89) and BDI (0·70, 0·61-0·79). The UP-Dial revealed excellent internal consistency (Cronbach's α 0·90, 95% CI 0·87-0·92) and reproducibility (intraclass correlation 0·95, 95% CI 0·90-0·98).\n\n\nCONCLUSIONS\nThe UP-Dial is valid and reliable for assessing UP among patients on dialysis. Future research should focus on the cross-cultural adaptation and translation of the scale to other languages.", "title": "" }, { "docid": "305efd1823009fe79c9f8ff52ddb5724", "text": "We explore the problem of classifying images by the object categories they contain in the case of a large number of object categories. To this end we combine three ingredients: (i) shape and appearance representations that support spatial pyramid matching over a region of interest. This generalizes the representation of Lazebnik et al., (2006) from an image to a region of interest (ROI), and from appearance (visual words) alone to appearance and local shape (edge distributions); (ii) automatic selection of the regions of interest in training. This provides a method of inhibiting background clutter and adding invariance to the object instance 's position; and (iii) the use of random forests (and random ferns) as a multi-way classifier. The advantage of such classifiers (over multi-way SVM for example) is the ease of training and testing. Results are reported for classification of the Caltech-101 and Caltech-256 data sets. We compare the performance of the random forest/ferns classifier with a benchmark multi-way SVM classifier. It is shown that selecting the ROI adds about 5% to the performance and, together with the other improvements, the result is about a 10% improvement over the state of the art for Caltech-256.", "title": "" }, { "docid": "1fc965670f71d9870a4eea93d129e285", "text": "The present study investigates the impact of the experience of role playing a violent character in a video game on attitudes towards violent crimes and criminals. People who played the violent game were found to be more acceptable of crimes and criminals compared to people who did not play the violent game. More importantly, interaction effects were found such that people were more acceptable of crimes and criminals outside the game if the criminals were matched with the role they played in the game and the criminal actions were similar to the activities they perpetrated during the game. The results indicate that people’s virtual experience through role-playing games can influence their attitudes and judgments of similar real-life crimes, especially if the crimes are similar to what they conducted while playing games. Theoretical and practical implications are discussed. 2010 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "def650b2d565f88a6404997e9e93d34f", "text": "Quality uncertainty and high search costs for identifying relevant information from an ocean of information may prevent customers from making purchases. Recognizing potential negative impacts of this search cost for quality information and relevant information, firms began to invest in creating a virtual community that enables consumers to share their opinions and experiences to reduce quality uncertainty, and in developing recommendation systems that help customers identify goods in which they might have an interest. 
However, not much is known regarding the effectiveness of these efforts. In this paper, we empirically investigate the impacts of recommendations and consumer feedbacks on sales based on data gathered from Amazon.com. Our results indicate that more recommendations indeed improve sales at Amazon.com; however, consumer ratings are not found to be related to sales. On the other hand, number of consumer reviews is positively associated with sales. We also find that recommendations work better for less-popular books than for more-popular books. This is consistent with the search cost argument: a consumer’s search cost for less-popular books may be higher, and thus they may rely more on recommendations to locate a product of interest.", "title": "" }, { "docid": "e6d309d24e7773d7fc78c3ebeb926ba0", "text": "INTRODUCTION\nLiver disease is the third most common cause of premature mortality in the UK. Liver failure accelerates frailty, resulting in skeletal muscle atrophy, functional decline and an associated risk of liver transplant waiting list mortality. However, there is limited research investigating the impact of exercise on patient outcomes pre and post liver transplantation. The waitlist period for patients listed for liver transplantation provides a unique opportunity to provide and assess interventions such as prehabilitation.\n\n\nMETHODS AND ANALYSIS\nThis study is a phase I observational study evaluating the feasibility of conducting a randomised control trial (RCT) investigating the use of a home-based exercise programme (HBEP) in the management of patients awaiting liver transplantation. Twenty eligible patients will be randomly selected from the Queen Elizabeth University Hospital Birmingham liver transplant waiting list. Participants will be provided with an individually tailored 12-week HBEP, including step targets and resistance exercises. Activity trackers and patient diaries will be provided to support data collection. For the initial 6 weeks, telephone support will be given to discuss compliance with the study intervention, achievement of weekly targets, and to address any queries or concerns regarding the intervention. During weeks 6-12, participants will continue the intervention without telephone support to evaluate longer term adherence to the study intervention. On completing the intervention, all participants will be invited to engage in a focus group to discuss their experiences and the feasibility of an RCT.\n\n\nETHICS AND DISSEMINATION\nThe protocol is approved by the National Research Ethics Service Committee North West - Greater Manchester East and Health Research Authority (REC reference: 17/NW/0120). Recruitment into the study started in April 2017 and ended in July 2017. Follow-up of participants is ongoing and due to finish by the end of 2017. The findings of this study will be disseminated through peer-reviewed publications and international presentations. In addition, the protocol will be placed on the British Liver Trust website for public access.\n\n\nTRIAL REGISTRATION NUMBER\nNCT02949505; Pre-results.", "title": "" }, { "docid": "712a4bdb5b285f3ef52218096ec3a4bf", "text": "We describe the relations between active maintenance of the hand at various positions in a two-dimensional space and the frequency of single cell discharge in motor cortex (n = 185) and area 5 (n = 128) of the rhesus monkey. 
The steady-state discharge rate of 124/185 (67%) motor cortical and 105/128 (82%) area 5 cells varied with the position in which the hand was held in space (“static spatial effect”). The higher prevalence of this effect in area 5 was statistically significant. In both structures, static effects were observed at similar frequencies for cells that possessed as well as for those that lacked passive driving from the limb. The results obtained by a quantitative analysis were similar for neurons of the two cortical areas studied. It was found that of the neurons with a static effect, the steady-state discharge rate of 78/124 (63%) motor cortical and 63/105 (60%) area 5 cells was a linear function of the position of the hand across the two-dimensional space, so that the neuronal “response surface” was adequately described by a plane (R2 ≥ 0.7, p < 0.05, F-test in analysis of variance). The preferred orientations of these response planes differed for different cells. These results indicate that individual cells in these areas do not relate uniquely a particular position of the hand in space. Instead, they seem to encode spatial gradients at certain orientations. A unique relation to position in space could be signalled by the whole population of these neurons, considered as an ensemble. This remains to be elucidated. Finally, the similarity of the quantitative relations observed in motor cortex and area 5 suggests that these structures may process spatial information in a similar way.", "title": "" }, { "docid": "7c19a963cd3ad7119278744e73c1c27a", "text": "This work presents a study of three important issues of the color pixel classification approach to skin segmentation: color representation, color quantization, and classification algorithm. Our analysis of several representative color spaces using the Bayesian classifier with the histogram technique shows that skin segmentation based on color pixel classification is largely unaffected by the choice of the color space. However, segmentation performance degrades when only chrominance channels are used in classification. Furthermore, we find that color quantization can be as low as 64 bins per channel, although higher histogram sizes give better segmentation performance. The Bayesian classifier with the histogram technique and the multilayer perceptron classifier are found to perform better compared to other tested classifiers, including three piecewise linear classifiers, three unimodal Gaussian classifiers, and a Gaussian mixture classifier.", "title": "" }, { "docid": "cdcbbe1e40a36974ac333912940718a7", "text": "Plant growth promoting rhizobacteria (PGPR) are beneficial bacteria which have the ability to colonize the roots and either promote plant growth through direct action or via biological control of plant diseases (Kloepper and Schroth 1978). They are associated with many plant species and are commonly present in varied environments. Strains with PGPR activity, belonging to genera Azoarcus, Azospirillum, Azotobacter, Arthrobacter, Bacillus, Clostridium, Enterobacter, Gluconacetobacter, Pseudomonas, and Serratia, have been reported (Hurek and Reinhold-Hurek 2003). Among these, species of Pseudomonas and Bacillus are the most extensively studied. These bacteria competitively colonize the roots of plant and can act as biofertilizers and/or antagonists (biopesticides) or simultaneously both. 
Diversified populations of aerobic endospore forming bacteria (AEFB), viz., species of Bacillus, occur in agricultural fields and contribute to crop productivity directly or indirectly. Physiological traits, such as multilayered cell wall, stress resistant endospore formation, and secretion of peptide antibiotics, peptide signal molecules, and extracellular enzymes, are ubiquitous to these bacilli and contribute to their survival under adverse environmental conditions for extended periods of time. Multiple species of Bacillus and Paenibacillus are known to promote plant growth. The principal mechanisms of growth promotion include production of growth stimulating phytohormones, solubilization and mobilization of phosphate, siderophore production, antibiosis, i.e., production of antibiotics, inhibition of plant ethylene synthesis, and induction of plant systemic resistance to pathogens (Richardson et al. 2009; Idris et al. 2007; Gutierrez-Manero et al. 2001;", "title": "" }, { "docid": "d51a844fa1ec4a63868611d73c6acfad", "text": "Massive open online courses (MOOCs) attract a large number of student registrations, but recent studies have shown that only a small fraction of these students complete their courses. Student dropouts are thus a major deterrent for the growth and success of MOOCs. We believe that understanding student engagement as a course progresses is essential for minimizing dropout rates. Formally defining student engagement in an online setting is challenging. In this paper, we leverage activity (such as posting in discussion forums, timely submission of assignments, etc.), linguistic features from forum content and structural features from forum interaction to identify two different forms of student engagement (passive and active) in MOOCs. We use probabilistic soft logic (PSL) to model student engagement by capturing domain knowledge about student interactions and performance. We test our models on MOOC data from Coursera and demonstrate that modeling engagement is helpful in predicting student performance.", "title": "" }, { "docid": "bc05c9cafade197494b52cf3f2ff091b", "text": "Modern software systems are increasingly requested to be adaptive to changes in the environment in which they are embedded. Moreover, adaptation often needs to be performed automatically, through self-managed reactions enacted by the application at run time. Off-line, human-driven changes should be requested only if self-adaptation cannot be achieved successfully. To support this kind of autonomic behavior, software systems must be empowered by a rich run-time support that can monitor the relevant phenomena of the surrounding environment to detect changes, analyze the data collected to understand the possible consequences of changes, reason about the ability of the application to continue to provide the required service, and finally react if an adaptation is needed. This paper focuses on non-functional requirements, which constitute an essential component of the quality that modern software systems need to exhibit. Although the proposed approach is quite general, it is mainly exemplified in the paper in the context of service-oriented systems, where the quality of service (QoS) is regulated by contractual obligations between the application provider and its clients. We analyze the case where an application, exported as a service, is built as a composition of other services. 
Non-functional requirements—such as reliability and performance—heavily depend on the environment in which the application is embedded. Thus changes in the environment may ultimately adversely affect QoS satisfaction. We illustrate an approach and support tools that enable a holistic view of the design and run-time management of adaptive software systems. The approach is based on formal (probabilistic) models that are used at design time to reason about dependability of the application in quantitative terms. Models continue to exist at run time to enable continuous verification and detection of changes that require adaptation.", "title": "" }, { "docid": "1baaed4083a1a8315f8d5cd73730c81e", "text": "While perception tasks such as visual object recognition and text understanding play an important role in human intelligence, the subsequent tasks that involve inference, reasoning and planning require an even higher level of intelligence. The past few years have seen major advances in many perception tasks using deep learning models. For higher-level inference, however, probabilistic graphical models with their Bayesian nature are still more powerful and flexible. To achieve integrated intelligence that involves both perception and inference, it is naturally desirable to tightly integrate deep learning and Bayesian models within a principled probabilistic framework, which we call Bayesian deep learning. In this unified framework, the perception of text or images using deep learning can boost the performance of higher-level inference and in return, the feedback from the inference process is able to enhance the perception of text or images. This survey provides a general introduction to Bayesian deep learning and reviews its recent applications on recommender systems, topic models, and control. In this survey, we also discuss the relationship and differences between Bayesian deep learning and other related topics like Bayesian treatment of neural networks.", "title": "" }, { "docid": "85f5833628a4b50084fa50cbe45ebe4d", "text": "We introduce a functional gradient descent trajectory optimization algorithm for robot motion planning in Reproducing Kernel Hilbert Spaces (RKHSs). Functional gradient algorithms are a popular choice for motion planning in complex many-degree-of-freedom robots, since they (in theory) work by directly optimizing within a space of continuous trajectories to avoid obstacles while maintaining geometric properties such as smoothness. However, in practice, implementations such as CHOMP and TrajOpt typically commit to a fixed, finite parametrization of trajectories, often as a sequence of waypoints. Such a parameterization can lose much of the benefit of reasoning in a continuous trajectory space: e.g., it can require taking an inconveniently small step size and large number of iterations to maintain smoothness. Our work generalizes functional gradient trajectory optimization by formulating it as minimization of a cost functional in an RKHS. This generalization lets us represent trajectories as linear combinations of kernel functions. As a result, we are able to take larger steps and achieve a locally optimal trajectory in just a few iterations. Depending on the selection of kernel, we can directly optimize in spaces of trajectories that are inherently smooth in velocity, jerk, curvature, etc., and that have a low-dimensional, adaptively chosen parameterization. 
Our experiments illustrate the effectiveness of the planner for different kernels, including Gaussian RBFs with independent and coupled interactions among robot joints, Laplacian RBFs, and B-splines, as compared to the standard discretized waypoint representation.", "title": "" }, { "docid": "fda37e6103f816d4933a3a9c7dee3089", "text": "This paper introduces a novel approach to estimate the systolic and diastolic blood pressure ratios (SBPR and DBPR) based on the maximum amplitude algorithm (MAA) using a Gaussian mixture regression (GMR). The relevant features, which clearly discriminate the SBPR and DBPR according to the targeted groups, are selected in a feature vector. The selected feature vector is then represented by the Gaussian mixture model. The SBPR and DBPR are subsequently obtained with the help of the GMR and then mapped back to SBP and DBP values that are more accurate than those obtained with the conventional MAA method.", "title": "" }, { "docid": "2ee1f7a56eba17b75217cca609452f20", "text": "We describe the annotation of a new dataset for German Named Entity Recognition (NER). The need for this dataset is motivated by licensing issues and consistency issues of existing datasets. We describe our approach to creating annotation guidelines based on linguistic and semantic considerations, and how we iteratively refined and tested them in the early stages of annotation in order to arrive at the largest publicly available dataset for German NER, consisting of over 31,000 manually annotated sentences (over 591,000 tokens) from German Wikipedia and German online news. We provide a number of statistics on the dataset, which indicate its high quality, and discuss legal aspects of distributing the data as a compilation of citations. The data is released under the permissive CC-BY license, and will be fully available for download in September 2014 after it has been used for the GermEval 2014 shared task on NER. We further provide the full annotation guidelines and links to the annotation tool used for the creation of this resource.", "title": "" }, { "docid": "5fc9fe7bcc50aad948ebb32aefdb2689", "text": "This paper explores the use of set expansion (SE) to improve question answering (QA) when the expected answer is a list of entities belonging to a certain class. Given a small set of seeds, SE algorithms mine textual resources to produce an extended list including additional members of the class represented by the seeds. We explore the hypothesis that a noise-resistant SE algorithm can be used to extend candidate answers produced by a QA system and generate a new list of answers that is better than the original list produced by the QA system. We further introduce a hybrid approach which combines the original answers from the QA system with the output from the SE algorithm. Experimental results for several state-of-the-art QA systems show that the hybrid system performs better than the QA systems alone when tested on list question data from past TREC evaluations.", "title": "" }, { "docid": "ec5aac01866a1e4ca3f4e906990d5d8e", "text": "But, as we look to the horizon of a decade hence, we see no silver bullet. There is no single development, in either technology or in management technique, that by itself promises even one orderof-magnitude improvement in productivity, in reliability, in simplicity. 
In this article, I shall try to show why, by examining both the nature of the software problem and the properties of the bullets proposed.", "title": "" }, { "docid": "960022742172d6d0e883a23c74d800ef", "text": "A novel algorithm to remove rain or snow streaks from a video sequence using temporal correlation and low-rank matrix completion is proposed in this paper. Based on the observation that rain streaks are too small and move too fast to affect the optical flow estimation between consecutive frames, we obtain an initial rain map by subtracting temporally warped frames from a current frame. Then, we decompose the initial rain map into basis vectors based on the sparse representation, and classify those basis vectors into rain streak ones and outliers with a support vector machine. We then refine the rain map by excluding the outliers. Finally, we remove the detected rain streaks by employing a low-rank matrix completion technique. Furthermore, we extend the proposed algorithm to stereo video deraining. Experimental results demonstrate that the proposed algorithm detects and removes rain or snow streaks efficiently, outperforming conventional algorithms.", "title": "" }, { "docid": "cfddb85a8c81cb5e370fe016ea8d4c5b", "text": "Negative (adverse or threatening) events evoke strong and rapid physiological, cognitive, emotional, and social responses. This mobilization of the organism is followed by physiological, cognitive, and behavioral responses that damp down, minimize, and even erase the impact of that event. This pattern of mobilization-minimization appears to be greater for negative events than for neutral or positive events. Theoretical accounts of this response pattern are reviewed. It is concluded that no single theoretical mechanism can explain the mobilization-minimization pattern, but that a family of integrated process models, encompassing different classes of responses, may account for this pattern of parallel but disparately caused effects.", "title": "" } ]
scidocsrr
2a6609f28ccd04f9de7c4e9b02837b33
A Tale of Two Kernels: Towards Ending Kernel Hardening Wars with Split Kernel
[ { "docid": "7c05ef9ac0123a99dd5d47c585be391c", "text": "Memory access bugs, including buffer overflows and uses of freed heap memory, remain a serious problem for programming languages like C and C++. Many memory error detectors exist, but most of them are either slow or detect a limited set of bugs, or both. This paper presents AddressSanitizer, a new memory error detector. Our tool finds out-of-bounds accesses to heap, stack, and global objects, as well as use-after-free bugs. It employs a specialized memory allocator and code instrumentation that is simple enough to be implemented in any compiler, binary translation system, or even in hardware. AddressSanitizer achieves efficiency without sacrificing comprehensiveness. Its average slowdown is just 73% yet it accurately detects bugs at the point of occurrence. It has found over 300 previously unknown bugs in the Chromium browser and many bugs in other software.", "title": "" }, { "docid": "16186ff81d241ecaea28dcf5e78eb106", "text": "Different kinds of people use computers now than several decades ago, but operating systems have not fully kept pace with this change. It is true that we have point-and-click GUIs now instead of command line interfaces, but the expectation of the average user is different from what it used to be, because the user is different. Thirty or 40 years ago, when operating systems began to solidify into their current form, almost all computer users were programmers, scientists, engineers, or similar professionals doing heavy-duty computation, and they cared a great deal about speed. Few teenagers and even fewer grandmothers spent hours a day behind their terminal. Early users expected the computer to crash often; reboots came as naturally as waiting for the neighborhood TV repairman to come replace the picture tube on their home TVs. All that has changed and operating systems need to change with the times.", "title": "" } ]
[ { "docid": "3475d98ae13c4bab3424103f009f3fb1", "text": "According to a small, lightweight, low-cost high performance inertial Measurement Units(IMU), an effective calibration method is implemented to evaluate the performance of Micro-Electro-Mechanical Systems(MEMS) sensors suffering from various errors to get acceptable navigation results. A prototype development board based on FPGA, dual core processor's configuration for INS/GPS integrated navigation system is designed for experimental testing. The significant error sources of IMU such as bias, scale factor, and misalignment are estimated in virtue of static tests, rate tests, thermal tests. Moreover, an effective intelligent calibration method combining with Kalman Filter is proposed to estimate parameters and compensate errors. The proposed approach has been developed and its efficiency is demonstrated by various experimental scenarios with real MEMS data.", "title": "" }, { "docid": "41c317b0e275592ea9009f3035d11a64", "text": "We introduce a distribution based model to learn bilingual word embeddings from monolingual data. It is simple, effective and does not require any parallel data or any seed lexicon. We take advantage of the fact that word embeddings are usually in form of dense real-valued lowdimensional vector and therefore the distribution of them can be accurately estimated. A novel cross-lingual learning objective is proposed which directly matches the distributions of word embeddings in one language with that in the other language. During the joint learning process, we dynamically estimate the distributions of word embeddings in two languages respectively and minimize the dissimilarity between them through standard back propagation algorithm. Our learned bilingual word embeddings allow to group each word and its translations together in the shared vector space. We demonstrate the utility of the learned embeddings on the task of finding word-to-word translations from monolingual corpora. Our model achieved encouraging performance on data in both related languages and substantially different languages.", "title": "" }, { "docid": "363cc184a6cae8b7a81744676e339a80", "text": "Dismissing-avoidant adults are characterized by expressing relatively low levels of attachment-related distress. However, it is unclear whether this reflects a relative absence of covert distress or an attempt to conceal covert distress. Two experiments were conducted to distinguish between these competing explanations. In Experiment 1, participants were instructed to suppression resulted in a decrease in the accessibility of abandonment-related thoughts for dismissing-avoidant adults. Experiment 2 demonstrated that attempts to suppress the attachment system resulted in decreases in physiological arousal for dismissing-avoidant adults. These experiments indicate that dismissing-avoidant adults are capable of suppressing the latent activation of their attachment system and are not simply concealing latent distress. The discussion focuses on development, cognitive, and social factors that may promote detachment.", "title": "" }, { "docid": "329ab44195e7c20e696e5d7edc8b65a8", "text": "In this work, we consider challenges relating to security for Industrial Control Systems (ICS) in the context of ICS security education and research targeted both to academia and industry. We propose to address those challenges through gamified attack training and countermeasure evaluation. 
We tested our proposed ICS security gamification idea in the context of the (to the best of our knowledge) first Capture-The-Flag (CTF) event targeted to ICS security called SWaT Security Showdown (S3). Six teams acted as attackers in a security competition leveraging an ICS testbed, with several academic defense systems attempting to detect the ongoing attacks. The event was conducted in two phases. The online phase (a jeopardy-style CTF) served as a training session. The live phase was structured as an attack-defense CTF. We acted as judges and we assigned points to the attacker teams according to a scoring system that we developed internally based on multiple factors, including realistic attacker models. We conclude the paper with an evaluation and discussion of the S3, including statistics derived from the data collected in each phase of S3.", "title": "" }, { "docid": "6825c5294da2dfe7a26b6ac89ba8f515", "text": "Restoring natural walking for amputees has been increasingly investigated because of demographic evolution, leading to an increased number of amputations, and increasing demand for independence. The energetic disadvantages of passive prostheses are clear, and active prostheses are limited in autonomy. This paper presents the simulation, design and development of an actuated knee-ankle prosthesis based on a variable stiffness actuator with energy transfer from the knee to the ankle. This approach allows a good approximation of the joint torques and the kinematics of the human gait cycle while maintaining compliant joints and reducing energy consumption during level walking. This first prototype consists of a passive knee and an active ankle, which are energetically coupled to reduce the power consumption.", "title": "" }, { "docid": "fed23432144a6929c4f3442b10157771", "text": "Knowledge has widely been acknowledged as one of the most important factors for corporate competitiveness, and we have witnessed an explosion of IS/IT solutions claiming to provide support for knowledge management (KM). A relevant question to ask, though, is how systems and technology intended for information such as the intranet can be able to assist in the managing of knowledge. To understand this, we must examine the relationship between information and knowledge. Building on Polanyi’s theories, I argue that all knowledge is tacit, and what can be articulated and made tangible outside the human mind is merely information. However, information and knowledge affect one another. By adopting a multi-perspective of the intranet where information, awareness, and communication are all considered, this interaction can best be supported and the intranet can become a useful and people-inclusive KM environment. 1. From philosophy to IT Ever since the ancient Greek period, philosophers have discussed what knowledge is. Early thinkers such as Plato and Aristotle were followed by Hobbes and Locke, Kant and Hegel, and into the 20th century by the likes of Wittgenstein, Popper, and Kuhn, to name but a few of the more prominent western philosophers. In recent years, we have witnessed a booming interest in knowledge also from other disciplines; organisation theorists, information system developers, and economists have all been swept away by the knowledge management avalanche. It seems, though, that the interest is particularly strong within the IS/IT community, where new opportunities to develop computer systems are welcomed. A plausible question to ask then is how knowledge relates to information technology (IT). 
Can IT at all be used to handle knowledge, and if so, what sort of knowledge? What sorts of knowledge are there? What is knowledge? It seems we have little choice but to return to these eternal questions, but belonging to the IS/IT community, we should not approach knowledge from a philosophical perspective. As observed by Alavi and Leidner, the knowledge-based theory of the firm was never built on a universal truth of what knowledge really is but on a pragmatic interest in being able to manage organisational knowledge [2]. The discussion in this paper shall therefore be aimed at addressing knowledge from an IS/IT perspective, trying to answer two overarching questions: “What does the relationship between information and knowledge look like?” and “What role does an intranet have in this relationship?” The purpose is to critically review the contemporary KM literature in order to clarify the relationships between information and knowledge that commonly and implicitly are assumed within the IS/IT community. Epistemologically, this paper shall address the difference between tacit and explicit knowledge by accounting for some of the views more commonly found in the KM literature. Some of these views shall also be questioned, and the prevailing assumption that tacit and explicit are two forms of knowledge shall be criticised by returning to Polanyi’s original work. My interest in the tacit side of knowledge, i.e. the aspects of knowledge that are omnipresent, taken for granted, and affecting our understanding without us being aware of it, has strongly influenced the content of this paper. Ontology-wise, knowledge may be seen to exist on different levels, i.e. individual, group, organisation and inter-organisational [23]. Here, my primary interest is on the group and organisational levels. However, these two levels are obviously made up of individuals and we are thus bound to examine the personal aspects of knowledge as well, though be it from a macro perspective. 2. Opposite traditions – and a middle way? When examining the knowledge literature, two separate tracks can be identified: the commodity view and the community view [35]. The commodity view, or the objective approach to knowledge as some absolute and universal truth, has since long been the dominating view within science. Rooted in the positivism of the mid-19th century, the commodity view is still especially strong in the natural sciences. Disciples of this tradition understand knowledge as an artefact that can be handled in discrete units and that people may possess. Knowledge is a thing for which we can gain evidence, and knowledge as such is separated from the knower [33]. Metaphors such as drilling, mining, and harvesting are used to describe how knowledge is being managed. There is also another tradition that can be labelled the community view or the constructivist approach. This tradition can be traced back to Locke and Hume but is in its modern form rooted in the critique of the established quantitative approach to science that emerged primarily amongst social scientists during the 1960’s, and resulted in the publication of books by Garfinkel, Bourdieu, Habermas, Berger and Luckmann, and Glaser and Strauss. These authors argued that reality (and hence also knowledge) should be understood as socially constructed. 
According to this tradition, it is impossible to define knowledge universally; it can only be defined in practice, in the activities of and interactions between individuals. Thus, some understand knowledge to be universal and context-independent while others conceive it as situated and based on individual experiences.", "title": "" }, { "docid": "85c4c0ffb224606af6bc3af5411d31ca", "text": "Sequence-to-sequence models with attention have been successful for a variety of NLP problems, but their speed does not scale well for tasks with long source sequences such as document summarization. We propose a novel coarse-to-fine attention model that hierarchically reads a document, using coarse attention to select top-level chunks of text and fine attention to read the words of the chosen chunks. While the computation for training standard attention models scales linearly with source sequence length, our method scales with the number of top-level chunks and can handle much longer sequences. Empirically, we find that while coarse-to-fine attention models lag behind state-of-the-art baselines, our method achieves the desired behavior of sparsely attending to subsets of the document for generation.", "title": "" }, { "docid": "404fce3f101d0a1d22bc9afdf854b1e0", "text": "The intimate connection between the brain and the heart was enunciated by Claude Bernard over 150 years ago. In our neurovisceral integration model we have tried to build on this pioneering work. In the present paper we further elaborate our model. Specifically we review recent neuroanatomical studies that implicate inhibitory GABAergic pathways from the prefrontal cortex to the amygdala and additional inhibitory pathways between the amygdala and the sympathetic and parasympathetic medullary output neurons that modulate heart rate and thus heart rate variability. We propose that the default response to uncertainty is the threat response and may be related to the well known negativity bias. We next review the evidence on the role of vagally mediated heart rate variability (HRV) in the regulation of physiological, affective, and cognitive processes. Low HRV is a risk factor for pathophysiology and psychopathology. Finally we review recent work on the genetics of HRV and suggest that low HRV may be an endophenotype for a broad range of dysfunctions.", "title": "" }, { "docid": "6ce3156307df03190737ee7c0ae24c75", "text": "Current methods for knowledge graph (KG) representation learning focus solely on the structure of the KG and do not exploit any kind of external information, such as visual and linguistic information corresponding to the KG entities. In this paper, we propose a multimodal translation-based approach that defines the energy of a KG triple as the sum of sub-energy functions that leverage both multimodal (visual and linguistic) and structural KG representations. Next, a ranking-based loss is minimized using a simple neural network architecture. Moreover, we introduce a new large-scale dataset for multimodal KG representation learning. 
We compared the performance of our approach to other baselines on two standard tasks, namely knowledge graph completion and triple classification, using our as well as the WN9-IMG dataset.1 The results demonstrate that our approach outperforms all baselines on both tasks and datasets.", "title": "" }, { "docid": "f153ee3853f40018ed0ae8b289b1efcf", "text": "In this paper, the common mode (CM) EMI noise characteristic of three popular topologies of resonant converter (LLC, CLL and LCL) is analyzed. The comparison of their EMI performance is provided. A state-of-art LLC resonant converter with matrix transformer is used as an example to further illustrate the CM noise problem of resonant converters. The CM noise model of LLC resonant converter is provided. A novel method of shielding is provided for matrix transformer to reduce common mode noise. The CM noise of LLC converter has a significantly reduction with shielding. The loss of shielding is analyzed by finite element analysis (FEA) tool. Then the method to reduce the loss of shielding is discussed. There is very little efficiency sacrifice for LLC converter with shielding according to the experiment result.", "title": "" }, { "docid": "308622daf5f4005045f3d002f5251f8c", "text": "The design of multiple human activity recognition applications in areas such as healthcare, sports and safety relies on wearable sensor technologies. However, when making decisions based on the data acquired by such sensors in practical situations, several factors related to sensor data alignment, data losses, and noise, among other experimental constraints, deteriorate data quality and model accuracy. To tackle these issues, this paper presents a data-driven iterative learning framework to classify human locomotion activities such as walk, stand, lie, and sit, extracted from the Opportunity dataset. Data acquired by twelve 3-axial acceleration sensors and seven inertial measurement units are initially de-noised using a two-stage consecutive filtering approach combining a band-pass Finite Impulse Response (FIR) and a wavelet filter. A series of statistical parameters are extracted from the kinematical features, including the principal components and singular value decomposition of roll, pitch, yaw and the norm of the axial components. The novel interactive learning procedure is then applied in order to minimize the number of samples required to classify human locomotion activities. Only those samples that are most distant from the centroids of data clusters, according to a measure presented in the paper, are selected as candidates for the training dataset. The newly built dataset is then used to train an SVM multi-class classifier. The latter will produce the lowest prediction error. The proposed learning framework ensures a high level of robustness to variations in the quality of input data, while only using a much lower number of training samples and therefore a much shorter training time, which is an important consideration given the large size of the dataset.", "title": "" }, { "docid": "9d2f569d1105bdac64071541eb01c591", "text": "1. Outline the principles of the diagnostic tests used to confirm brain death. . 2. The patient has been certified brain dead and her relatives agree with her previously stated wishes to donate her organs for transplantation. 
Outline the supportive measures which should be instituted to maintain this patient’s organs in an optimal state for subsequent transplantation of the heart, lungs, liver and kidneys.", "title": "" }, { "docid": "01a649c8115810c8318e572742d9bd00", "text": "In this effort we propose a data-driven learning framework for reduced order modeling of fluid dynamics. Designing accurate and efficient reduced order models for nonlinear fluid dynamic problems is challenging for many practical engineering applications. Classical projection-based model reduction methods generate reduced systems by projecting full-order differential operators into low-dimensional subspaces. However, these techniques usually lead to severe instabilities in the presence of highly nonlinear dynamics, which dramatically deteriorates the accuracy of the reduced-order models. In contrast, our new framework exploits linear multistep networks, based on implicit Adams-Moulton schemes, to construct the reduced system. The advantage is that the method optimally approximates the full order model in the low-dimensional space with a given supervised learning task. Moreover, our approach is non-intrusive, such that it can be applied to other complex nonlinear dynamical systems with sophisticated legacy codes. We demonstrate the performance of our method through the numerical simulation of a twodimensional flow past a circular cylinder with Reynolds number Re = 100. The results reveal that the new data-driven model is significantly more accurate than standard projectionbased approaches.", "title": "" }, { "docid": "1f20204533ade658723cc56b429d5792", "text": "ILQUA first participated in TREC QA main task in 2003. This year we have made modifications to the system by removing some components with poor performance and enhanced the system with new methods and new components. The newly built ILQUA is an IE-driven QA system. To answer “Factoid” and “List” questions, we apply our answer extraction methods on NE-tagged passages. The answer extraction methods adopted here are surface text pattern matching, n-gram proximity search and syntactic dependency matching. Surface text pattern matching has been applied in some previous TREC QA systems. However, the patterns used in ILQUA are automatically generated by a supervised learning system and represented in a format of regular expressions which can handle up to 4 question terms. N-gram proximity search and syntactic dependency matching are two steps of one component. N-grams of question terms are matched around every named entity in the candidate passages and a list of named entities are generated as answer candidate. These named entities go through a multi-level syntactic dependency matching until a final answer is generated. To answer “Other” questions, we parse the answer sentences of “Other” questions in 2004 main task and built syntactic patterns combined with semantic features. These patterns are applied to the parsed candidate sentences to extract answers of “Other” questions. The evaluation results showed ILQUA has reached an accuracy of 30.9% for factoid questions. ILQUA is an IE-driven QA system without any pre-compiled knowledge base of facts and it doesn’t get reference from any other external search engine such as Google. The disadvantage of an IE-driven QA system is that there are some types of questions that can’t be answered because the answer in the passages can’t be tagged as appropriate NE types. 
Figure 1 shows the diagram of the ILQUA architecture.", "title": "" }, { "docid": "73333ad599c6bbe353e46d7fd4f51768", "text": "The past 60 years have seen huge advances in many of the scientific, technological and managerial factors that should tend to raise the efficiency of commercial drug research and development (R&D). Yet the number of new drugs approved per billion US dollars spent on R&D has halved roughly every 9 years since 1950, falling around 80-fold in inflation-adjusted terms. There have been many proposed solutions to the problem of declining R&D efficiency. However, their apparent lack of impact so far and the contrast between improving inputs and declining output in terms of the number of new drugs make it sensible to ask whether the underlying problems have been correctly diagnosed. Here, we discuss four factors that we consider to be primary causes, which we call the 'better than the Beatles' problem; the 'cautious regulator' problem; the 'throw money at it' tendency; and the 'basic research–brute force' bias. Our aim is to provoke a more systematic analysis of the causes of the decline in R&D efficiency.", "title": "" }, { "docid": "0c9bbeaa783b2d6270c735f004ecc47f", "text": "This paper pulls together existing theory and evidence to assess whether international financial liberalization, by improving the functioning of domestic financial markets and banks, accelerates economic growth. The analysis suggests that the answer is yes. First, liberalizing restrictions on international portfolio flows tends to enhance stock market liquidity. In turn, enhanced stock market liquidity accelerates economic growth primarily by boosting productivity growth. Second, allowing greater foreign bank presence tends to enhance the efficiency of the domestic banking system. In turn, better-developed banks spur economic growth primarily by accelerating productivity growth. Thus, international financial integration can promote economic development by encouraging improvements in the domestic financial system.", "title": "" }, { "docid": "f4edb4f6bc0d0e9b31242cf860f6692d", "text": "Search on the web is a slow process and it can be a hard task, especially for beginners when they attempt to use a keyword query language. Beginner (inexpert) searchers commonly attempt to find information with ambiguous queries. These ambiguous queries make the search engine return irrelevant results. This work aims to get more relevant pages for the query through query reformulation and expanding the search space. The proposed system has three basic parts: WordNet, the Google search engine and a Genetic Algorithm. Every part has a special task. The system uses WordNet to remove ambiguity from queries by displaying the meaning of every keyword in the user query and selecting the proper meaning for each keyword. The system obtains synonyms for every keyword from WordNet and generates a query list. 
Genetic algorithm is used to create generation for every query in query list. Every query in system is navigated using Google search engine to obtain results from group of documents on the Web. The system has been tested on number of ambiguous queries and it has obtained more relevant URL to user query especially when the query has one keyword. The results are promising and therefore open further research directions.", "title": "" }, { "docid": "29d2a613f7da6b99e35eb890d590f4ca", "text": "Recent work has focused on generating synthetic imagery and augmenting real imagery to increase the size and variability of training data for learning visual tasks in urban scenes. This includes increasing the occurrence of occlusions or varying environmental and weather effects. However, few have addressed modeling the variation in the sensor domain. Unfortunately, varying sensor effects can degrade performance and generalizability of results for visual tasks trained on human annotated datasets. This paper proposes an efficient, automated physicallybased augmentation pipeline to vary sensor effects – specifically, chromatic aberration, blur, exposure, noise, and color cast – across both real and synthetic imagery. In particular, this paper illustrates that augmenting training datasets with the proposed pipeline improves the robustness and generalizability of object detection on a variety of benchmark vehicle datasets.", "title": "" }, { "docid": "5873204bba0bd16262274d4961d3d5f9", "text": "The analysis of the adaptive behaviour of many different kinds of systems such as humans, animals and machines, requires more general ways of assessing their cognitive abilities. This need is strengthened by increasingly more tasks being analysed for and completed by a wider diversity of systems, including swarms and hybrids. The notion of universal test has recently emerged in the context of machine intelligence evaluation as a way to define and use the same cognitive test for a variety of systems, using some principled tasks and adapting the interface to each particular subject. However, how far can universal tests be taken? This paper analyses this question in terms of subjects, environments, space-time resolution, rewards and interfaces. This leads to a number of findings, insights and caveats, according to several levels where universal tests may be progressively more difficult to conceive, implement and administer. One of the most significant contributions is given by the realisation that more universal tests are defined as maximisations of less universal tests for a variety of configurations. This means that universal tests must be necessarily adaptive.", "title": "" } ]
scidocsrr
55bd54d13a2ba4dd6ad7fd7d079f1b86
Logics for resource-bounded agents
[ { "docid": "4285d9b4b9f63f22033ce9a82eec2c76", "text": "To ease large-scale realization of agent applications there is an urgent need for frameworks, methodologies and toolkits that support the effective development of agent systems. Moreover, since one of the main tasks for which agent systems were invented is the integration between heterogeneous software, independently developed agents should be able to interact successfully. In this paper, we present JADE (Java Agent Development Environment), a software framework to build agent systems for the management of networked information resources in compliance with the FIPA specifications for inter-operable intelligent multi-agent systems. The goal of JADE is to simplify development while ensuring standard compliance through a comprehensive set of system services and agents. JADE can then be considered to be an agent middle-ware that implements an efficient agent platform and supports the development of multi-agent systems. It deals with all the aspects that are not peculiar to agent internals and that are independent of the applications, such as message transport, encoding and parsing, or agent life-cycle management. Copyright  2001 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "12866e003093bc7d89d751697f2be93c", "text": "We argue that the right way to understand distributed protocols is by considering how messages change the state of knowledge of a system. We present a hierarchy of knowledge states that a system may be in, and discuss how communication can move the system's state of knowledge of a fact up the hierarchy. Of special interest is the notion of common knowledge. Common knowledge is an essential state of knowledge for reaching agreements and coordinating action. We show that in practical distributed systems, common knowledge is not attainable. We introduce various relaxations of common knowledge that are attainable in many cases of interest. We describe in what sense these notions are appropriate, and discuss their relationship to each other. We conclude with a discussion of the role of knowledge in distributed systems.", "title": "" } ]
[ { "docid": "fdff78b32803eb13904c128d8e011ea8", "text": "The task of identifying when to take a conversational turn is an important function of spoken dialogue systems. The turn-taking system should also ideally be able to handle many types of dialogue, from structured conversation to spontaneous and unstructured discourse. Our goal is to determine how much a generalized model trained on many types of dialogue scenarios would improve on a model trained only for a specific scenario. To achieve this goal we created a large corpus of Wizard-of-Oz conversation data which consisted of several different types of dialogue sessions, and then compared a generalized model with scenario-specific models. For our evaluation we go further than simply reporting conventional metrics, which we show are not informative enough to evaluate turn-taking in a real-time system. Instead, we process results using a performance curve of latency and false cut-in rate, and further improve our model's real-time performance using a finite-state turn-taking machine. Our results show that the generalized model greatly outperformed the individual model for attentive listening scenarios but was worse in job interview scenarios. This implies that a model based on a large corpus is better suited to conversation which is more user-initiated and unstructured. We also propose that our method of evaluation leads to more informative performance metrics in a real-time system.", "title": "" }, { "docid": "f6647e82741dfe023ee5159bd6ac5be9", "text": "3D scene understanding is important for robots to interact with the 3D world in a meaningful way. Most previous works on 3D scene understanding focus on recognizing geometrical or semantic properties of a scene independently. In this work, we introduce Data Associated Recurrent Neural Networks (DA-RNNs), a novel framework for joint 3D scene mapping and semantic labeling. DA-RNNs use a new recurrent neural network architecture for semantic labeling on RGB-D videos. The output of the network is integrated with mapping techniques such as KinectFusion in order to inject semantic information into the reconstructed 3D scene. Experiments conducted on real world and synthetic RGB-D videos demonstrate the superior performance of our method.", "title": "" }, { "docid": "766dd6c18f645d550d98f6e3e86c7b2f", "text": "Licorice root has been used for years to regulate gastrointestinal function in traditional Chinese medicine. This study reveals the gastrointestinal effects of isoliquiritigenin, a flavonoid isolated from the roots of Glycyrrhiza glabra (a kind of Licorice). In vivo, isoliquiritigenin produced a dual dose-related effect on the charcoal meal travel, inhibitory at the low doses, while prokinetic at the high doses. In vitro, isoliquiritigenin showed an atropine-sensitive concentration-dependent spasmogenic effect in isolated rat stomach fundus. However, a spasmolytic effect was observed in isolated rabbit jejunums, guinea pig ileums and atropinized rat stomach fundus, either as noncompetitive inhibition of agonist concentration-response curves, inhibition of high K(+) (80 mM)-induced contractions, or displacement of Ca(2+) concentration-response curves to the right, indicating a calcium antagonist effect. Pretreatment with N(omega)-nitro-L-arginine methyl ester (L-NAME; 30 microM), indomethacin (10 microM), methylene blue (10 microM), tetraethylammonium chloride (0.5 mM), glibenclamide (1 microM), 4-aminopyridine (0.1 mM), or clotrimazole (1 microM) did not inhibit the spasmolytic effect. 
These results indicate that isoliquiritigenin plays a dual role in regulating gastrointestinal motility, both spasmogenic and spasmolytic. The spasmogenic effect may involve the activating of muscarinic receptors, while the spasmolytic effect is predominantly due to blockade of the calcium channels.", "title": "" }, { "docid": "0022121142a2b3a2b627fcb1cfe48ccb", "text": "Graph colouring and its generalizations are useful tools in modelling a wide variety of scheduling and assignment problems. In this paper we review several variants of graph colouring, such as precolouring extension, list colouring, multicolouring, minimum sum colouring, and discuss their applications in scheduling.", "title": "" }, { "docid": "3c118c4f2b418f801faee08050e3a165", "text": "Unsupervised learning from visual data is one of the most difficult challenges in computer vision. It is essential for understanding how visual recognition works. Learning from unsupervised input has an immense practical value, as huge quantities of unlabeled videos can be collected at low cost. Here we address the task of unsupervised learning to detect and segment foreground objects in single images. We achieve our goal by training a student pathway, consisting of a deep neural network that learns to predict, from a single input image, the output of a teacher pathway that performs unsupervised object discovery in video. Our approach is different from the published methods that perform unsupervised discovery in videos or in collections of images at test time. We move the unsupervised discovery phase during the training stage, while at test time we apply the standard feed-forward processing along the student pathway. This has a dual benefit: firstly, it allows, in principle, unlimited generalization possibilities during training, while remaining fast at testing. Secondly, the student not only becomes able to detect in single images significantly better than its unsupervised video discovery teacher, but it also achieves state of the art results on two current benchmarks, YouTube Objects and Object Discovery datasets. At test time, our system is two orders of magnitude faster than other previous methods.", "title": "" }, { "docid": "44cf91a19b11fa62a5859ce236e7dc3f", "text": "We previously reported an ultrasound-guided transversus thoracic abdominis plane (TTP) block, able to target many anterior branches of the intercostal nerve (Th2-6), releasing the pain in the internal mammary area [1–3]. The injection point for this TTP block was located between the transversus thoracic muscle and the internal intercostal muscle, amid the third and fourth left ribs next to the sternum. However, analgesia efficacy in the region of an anterior branch of the sixth intercostal nerve was unstable. We subsequently investigated a more appropriate injection point for an ultrasound-guided TTP block. We selected 10 healthy volunteers for this study. All volunteers received bilateral TTP blocks. Right lateral TTP blocks of all cases involved the injection of 20 mL of 0.375% levobupivacaine into the fascial plane between the transversus thoracic muscle and the internal intercostal muscle at between the third and fourth ribs connecting at the sternum. On the other hand, all left lateral TTP blocks were administered by injection of 20 mL of 0.375% levobupivacaine into the fascial plane between the transversus thoracic muscle and the internal intercostal muscle between the fourth and fifth connecting at the sternum. 
In 20 minutes after the injections, we investigated the spread of local anesthetic on the TTP by an ultrasound machine (Fig. 1) and the analgesic effect by a sense testing. The sense testing is blindly the cold testing. The spread of local anesthetic is detailed in Table 1. As for the analgesic effect of sense testing, both sides gain sensory extinction in the region of multiple anterior branches of inter-", "title": "" }, { "docid": "4645d0d7b1dfae80657f75d3751ef72a", "text": "Machine learning approaches are increasingly successful in image-based diagnosis, disease prognosis, and risk assessment. This paper highlights new research directions and discusses three main challenges related to machine learning in medical imaging: coping with variation in imaging protocols, learning from weak labels, and interpretation and evaluation of results.", "title": "" }, { "docid": "198d352bf0c044ceccddaeb630b3f9c7", "text": "In this letter, we present an original demonstration of an associative learning neural network inspired by the famous Pavlov's dogs experiment. A single nanoparticle organic memory field effect transistor (NOMFET) is used to implement each synapse. We show how the physical properties of this dynamic memristive device can be used to perform low-power write operations for the learning and implement short-term association using temporal coding and spike-timing-dependent plasticity–based learning. An electronic circuit was built to validate the proposed learning scheme with packaged devices, with good reproducibility despite the complex synaptic-like dynamic of the NOMFET in pulse regime.", "title": "" }, { "docid": "bade68b8f95fc0ae5a377a52c8b04b5c", "text": "The majority of deterministic mathematical programming problems have a compact formulation in terms of algebraic equations. Therefore they can easily take advantage of the facilities offered by algebraic modeling languages. These tools allow expressing models by using convenient mathematical notation (algebraic equations) and translate the models into a form understandable by the solvers for mathematical programs. Algebraic modeling languages provide facility for the management of a mathematical model and its data, and access different general-purpose solvers. The use of algebraic modeling languages (AMLs) simplifies the process of building the prototype model and in some cases makes it possible to create and maintain even the production version of the model. As presented in other chapters of this book, stochastic programming (SP) is needed when exogenous parameters of the mathematical programming problem are random. Dealing with stochasticities in planning is not an easy task. In a standard scenario-by-scenario analysis, the system is optimized for each scenario separately. Varying the scenario hypotheses we can observe the different optimal responses of the system and delineate the “strong trends” of the future. Indeed, this scenarioby-scenario approach implicitly assumes perfect foresight. The method provides a first-stage decision, which is valid only for the scenario under consideration. Having as many decisions as there are scenarios leaves the decision-maker without a clear recommendation. In stochastic programming the whole set of scenarios is combined into an event tree, which describes the unfolding of uncertainties over the period of planning. The model takes into account the uncertainties characterizing the scenarios through stochastic programming techniques. 
This adaptive plan is much closer, in spirit, to the way that decision-makers have to deal with uncertain future", "title": "" }, { "docid": "5db19f15ec148746613bdb48a4ca746a", "text": "Wireless power transfer (WPT) system is a practical and promising way for charging electric vehicles due to its security, convenience, and reliability. The requirement for high-power wireless charging is on the rise, but implementing such a WPT system has been a challenge because of the constraints of the power semiconductors and the installation space limitation at the bottom of the vehicle. In this paper, bipolar coils and unipolar coils are integrated into the transmitting side and the receiving side to make the magnetic coupler more compact while delivering high power. The same-side coils are naturally decoupled; therefore, there is no magnetic coupling between the same-side coils. The circuit model of the proposed WPT system using double-sided LCC compensations is presented. Finite-element analysis tool ANSYS MAXWELL is adopted to simulate and design the magnetic coupler. Finally, an experimental setup is constructed to evaluate the proposed WPT system. The proposed WPT system achieved the dc–dc efficiency at 94.07% while delivering 4.73 kW to the load with a vertical air gap of 150 mm.", "title": "" }, { "docid": "1acbb63a43218d216a2e850d9b3d3fa1", "text": "In this paper, we present a novel cell outage management (COM) framework for heterogeneous networks with split control and data planes-a candidate architecture for meeting future capacity, quality-of-service, and energy efficiency demands. In such an architecture, the control and data functionalities are not necessarily handled by the same node. The control base stations (BSs) manage the transmission of control information and user equipment (UE) mobility, whereas the data BSs handle UE data. An implication of this split architecture is that an outage to a BS in one plane has to be compensated by other BSs in the same plane. Our COM framework addresses this challenge by incorporating two distinct cell outage detection (COD) algorithms to cope with the idiosyncrasies of both data and control planes. The COD algorithm for control cells leverages the relatively larger number of UEs in the control cell to gather large-scale minimization-of-drive-test report data and detects an outage by applying machine learning and anomaly detection techniques. To improve outage detection accuracy, we also investigate and compare the performance of two anomaly-detecting algorithms, i.e., k-nearest-neighbor- and local-outlier-factor-based anomaly detectors, within the control COD. On the other hand, for data cell COD, we propose a heuristic Grey-prediction-based approach, which can work with the small number of UE in the data cell, by exploiting the fact that the control BS manages UE-data BS connectivity and by receiving a periodic update of the received signal reference power statistic between the UEs and data BSs in its coverage. The detection accuracy of the heuristic data COD algorithm is further improved by exploiting the Fourier series of the residual error that is inherent to a Grey prediction model. Our COM framework integrates these two COD algorithms with a cell outage compensation (COC) algorithm that can be applied to both planes. 
Our COC solution utilizes an actor-critic-based reinforcement learning algorithm, which optimizes the capacity and coverage of the identified outage zone in a plane, by adjusting the antenna gain and transmission power of the surrounding BSs in that plane. The simulation results show that the proposed framework can detect both data and control cell outage and compensate for the detected outage in a reliable manner.", "title": "" }, { "docid": "6073d07e5e6a05cbaa84ab8cd734bd12", "text": "Microblogging websites, e.g. Twitter and Sina Weibo, have become a popular platform for socializing and sharing information in recent years. Spammers have also discovered this new opportunity to unfairly overpower normal users with unsolicited content, namely social spams. While it is intuitive for everyone to follow legitimate users, recent studies show that both legitimate users and spammers follow spammers for different reasons. Evidence of users seeking for spammers on purpose is also observed. We regard this behavior as a useful information for spammer detection. In this paper, we approach the problem of spammer detection by leveraging the \"carefulness\" of users, which indicates how careful a user is when she is about to follow a potential spammer. We propose a framework to measure the carefulness, and develop a supervised learning algorithm to estimate it based on known spammers and legitimate users. We then illustrate how spammer detection can be improved in the aid of the proposed measure. Evaluation on a real dataset with millions of users and an online testing are performed on Sina Weibo. The results show that our approach indeed capture the carefulness, and it is effective to detect spammers. In addition, we find that the proposed measure is also beneficial for other applications, e.g. link prediction.", "title": "" }, { "docid": "7ea56b976524d77b7234340318f7e8dc", "text": "Market Integration and Market Structure in the European Soft Drinks Industry: Always Coca-Cola? by Catherine Matraves* This paper focuses on the question of European integration, considering whether the geographic level at which competition takes place differs across the two major segments of the soft drinks industry: carbonated soft drinks and mineral water. Our evidence shows firms are competing at the European level in both segments. Interestingly, the European market is being integrated through corporate strategy, defined as increased multinationality, rather than increased trade flows. To interpret these results, this paper uses the new theory of market structure where the essential notion is that in endogenous sunk cost industries such as soft drinks, the traditional inverse structure-size relation may break down, due to the escalation of overhead expenditures.", "title": "" }, { "docid": "129a85f7e611459cf98dc7635b44fc56", "text": "Pain in the oral and craniofacial system represents a major medical and social problem. Indeed, a U.S. Surgeon General’s report on orofacial health concludes that, ‘‘. . .oral health means much more than healthy teeth. It means being free of chronic oral-facial pain conditions. . .’’ [172]. Community-based surveys indicate that many subjects commonly report pain in the orofacial region, with estimates of >39 million, or 22% of Americans older than 18 years of age, in the United States alone [108]. Other population-based surveys conducted in the United Kingdom [111,112], Germany [91], or regional pain care centers in the United States [54] report similar occurrence rates [135]. 
Importantly, chronic widespread body pain, patient sex and age, and psychosocial factors appear to serve as risk factors for chronic orofacial pain [1,2,92,99,138]. In addition to its high degree of prevalence, the reported intensities of various orofacial pain conditions are similar to that observed with many spinal pain disorders (Fig. 1). Moreover, orofacial pain is derived from many unique target tissues, such as the meninges, cornea, tooth pulp, oral/ nasal mucosa, and temporomandibular joint (Fig. 2), and thus has several unique physiologic characteristics compared with the spinal nociceptive system [23]. Given these considerations, it is not surprising that accurate diagnosis and effective management of orofacial pain conditions represents a significant health care problem. Publications in the field of orofacial pain demonstrate a steady increase over the last several decades (Fig. 3). This is a complex literature; a recent bibliometric analysis of orofacial pain articles published in 2004–2005 indicated that 975 articles on orofacial pain were published in 275 journals from authors representing 54 countries [142]. Thus, orofacial pain disorders represent a complex constellation of conditions with an equally diverse literature base. Accordingly, this review will focus on a summary of major research foci on orofacial pain without attempting to provide a comprehensive review of the entire literature.", "title": "" }, { "docid": "d662e37e868f686a31fda14d4676501a", "text": "Gesture recognition has multiple applications in medical and engineering fields. The problem of hand gesture recognition consists of identifying, at any moment, a given gesture performed by the hand. In this work, we propose a new model for hand gesture recognition in real time. The input of this model is the surface electromyography measured by the commercial sensor the Myo armband placed on the forearm. The output is the label of the gesture executed by the user at any time. The proposed model is based on the Λ-nearest neighbor and dynamic time warping algorithms. This model can learn to recognize any gesture of the hand. To evaluate the performance of our model, we measured and compared its accuracy at recognizing 5 classes of gestures to the accuracy of the proprietary system of the Myo armband. As a result of this evaluation, we determined that our model performs better (86% accurate) than the Myo system (83%).", "title": "" }, { "docid": "9f13ba2860e70e0368584bb4c36d01df", "text": "Network log messages (e.g., syslog) are expected to be valuable and useful information to detect unexpected or anomalous behavior in large scale networks. However, because of the huge amount of system log data collected in daily operation, it is not easy to extract pinpoint system failures or to identify their causes. In this paper, we propose a method for extracting the pinpoint failures and identifying their causes from network syslog data. The methodology proposed in this paper relies on causal inference that reconstructs causality of network events from a set of time series of events. Causal inference can filter out accidentally correlated events, thus it outputs more plausible causal events than traditional cross-correlation-based approaches can. We apply our method to 15 months’ worth of network syslog data obtained from a nationwide academic network in Japan. The proposed method significantly reduces the number of pseudo correlated events compared with the traditional methods. 
Also, through three case studies and comparison with trouble ticket data, we demonstrate the effectiveness of the proposed method for practical network operation.", "title": "" }, { "docid": "73a5fee293c2ae98e205fd5093cf8b9c", "text": "Millimeter-wave (MMW) imaging techniques have been used for the detection of concealed weapons and contraband carried on personnel at airports and other secure locations. The combination of frequency-modulated continuous-wave (FMCW) technology and MMW imaging techniques should lead to compact, light-weight, and low-cost systems which are especially suitable for security and detection application. However, the long signal duration time leads to the failure of the conventional stop-and-go approximation of the pulsed system. Therefore, the motion within the signal duration time needs to be taken into account. Analytical threedimensional (3-D) backscattered signal model, without using the stop-and-go approximation, is developed in this paper. Then, a wavenumber domain algorithm, with motion compensation, is presented. In addition, conventional wavenumber domain methods use Stolt interpolation to obtain uniform wavenumber samples and compute the fast Fourier transform (FFT). This paper uses the 3D nonuniform fast Fourier transform (NUFFT) instead of the Stolt interpolation and FFT. The NUFFT-based method is much faster than the Stolt interpolation-based method. Finally, point target simulations are performed to verify the algorithm.", "title": "" }, { "docid": "ebaeacf1c0eeb4a4818b4ac050e60b0c", "text": "Open information extraction (Open IE) systems aim to obtain relation tuples with highly scalable extraction in portable across domain by identifying a variety of relation phrases and their arguments in arbitrary sentences. The first generation of Open IE learns linear chain models based on unlexicalized features such as Part-of-Speech (POS) or shallow tags to label the intermediate words between pair of potential arguments for identifying extractable relations. Open IE currently is developed in the second generation that is able to extract instances of the most frequently observed relation types such as Verb, Noun and Prep, Verb and Prep, and Infinitive with deep linguistic analysis. They expose simple yet principled ways in which verbs express relationships in linguistics such as verb phrase-based extraction or clause-based extraction. They obtain a significantly higher performance over previous systems in the first generation. In this paper, we describe an overview of two Open IE generations including strengths, weaknesses and application areas.", "title": "" }, { "docid": "d4d48e7275191ab29f805ca86e626c04", "text": "This paper addresses the problem of keyword extraction from conversations, with the goal of using these keywords to retrieve, for each short conversation fragment, a small number of potentially relevant documents, which can be recommended to participants. However, even a short fragment contains a variety of words, which are potentially related to several topics; moreover, using an automatic speech recognition (ASR) system introduces errors among them. Therefore, it is difficult to infer precisely the information needs of the conversation participants. 
We first propose an algorithm to extract keywords from the output of an ASR system (or a manual transcript for testing), which makes use of topic modeling techniques and of a submodular reward function which favors diversity in the keyword set, to match the potential diversity of topics and reduce ASR noise. Then, we propose a method to derive multiple topically separated queries from this keyword set, in order to maximize the chances of making at least one relevant recommendation when using these queries to search over the English Wikipedia. The proposed methods are evaluated in terms of relevance with respect to conversation fragments from the Fisher, AMI, and ELEA conversational corpora, rated by several human judges. The scores show that our proposal improves over previous methods that consider only word frequency or topic similarity, and represents a promising solution for a document recommender system to be used in conversations.", "title": "" }, { "docid": "a9ff593d6eea9f28aa1d2b41efddea9b", "text": "A central task in the study of evolution is the reconstruction of a phylogenetic tree from sequences of current-day taxa. A well supported approach to tree reconstruction performs maximum likelihood (ML) analysis. Unfortunately, searching for the maximum likelihood phylogenetic tree is computationally expensive. In this paper, we describe a new algorithm that uses Structural-EM for learning maximum likelihood trees. This algorithm is similar to the standard EM method for estimating branch lengths, except that during iterations of this algorithms the topology is improved as well as the branch length. The algorithm performs iterations of two steps. In the E-Step, we use the current tree topology and branch lengths to compute expected sufficient statistics, which summarize the data. In the M-Step, we search for a topology that maximizes the likelihood with respect to these expected sufficient statistics. As we show, searching for better topologies inside the M-step can be done efficiently, as opposed to standard search over topologies. We prove that each iteration of this procedure increases the likelihood of the topology, and thus the procedure must converge. We evaluate our new algorithm on both synthetic and real sequence data, and show that it is both dramatically faster and finds more plausible trees than standard search for maximum likelihood phylogenies.", "title": "" } ]
scidocsrr
7c0719b2936701c6e4ca5b3ed3cf2d91
Curating and contextualizing Twitter stories to assist with social newsgathering
[ { "docid": "463ef40777aaf14406186d5d4d99ba13", "text": "Social media is already a fixture for reporting for many journalists, especially around breaking news events where non-professionals may already be on the scene to share an eyewitness report, photo, or video of the event. At the same time, the huge amount of content posted in conjunction with such events serves as a challenge to finding interesting and trustworthy sources in the din of the stream. In this paper we develop and investigate new methods for filtering and assessing the verity of sources found through social media by journalists. We take a human centered design approach to developing a system, SRSR (\"Seriously Rapid Source Review\"), informed by journalistic practices and knowledge of information production in events. We then used the system, together with a realistic reporting scenario, to evaluate the filtering and visual cue features that we developed. Our evaluation offers insights into social media information sourcing practices and challenges, and highlights the role technology can play in the solution.", "title": "" } ]
[ { "docid": "7a6a1bf378f5bdfc6c373dc55cf0dabd", "text": "In this paper, we propose and study an Asynchronous parallel Greedy Coordinate Descent (Asy-GCD) algorithm for minimizing a smooth function with bounded constraints. At each iteration, workers asynchronously conduct greedy coordinate descent updates on a block of variables. In the first part of the paper, we analyze the theoretical behavior of Asy-GCD and prove a linear convergence rate. In the second part, we develop an efficient kernel SVM solver based on Asy-GCD in the shared memory multi-core setting. Since our algorithm is fully asynchronous—each core does not need to idle and wait for the other cores—the resulting algorithm enjoys good speedup and outperforms existing multi-core kernel SVM solvers including asynchronous stochastic coordinate descent and multi-core LIBSVM.", "title": "" }, { "docid": "e693e811edb2196baa1fd22b25246eaf", "text": "The chicken is an excellent model organism for studying vertebrate limb development, mainly because of the ease of manipulating the developing limb in vivo. Classical chicken embryology has provided fate maps and elucidated the cell-cell interactions that specify limb pattern. The first defined chemical that can mimic one of these interactions was discovered by experiments on developing chick limbs and, over the last 15 years or so, the role of an increasing number of developmentally important genes has been uncovered. The principles that underlie limb development in chickens are applicable to other vertebrates and there are growing links with clinical genetics. The sequence of the chicken genome, together with other recently assembled chicken genomic resources, will present new opportunities for exploiting the ease of manipulating the limb.", "title": "" }, { "docid": "394d96f18402c7033f27f5ead8219698", "text": "Today, online social networks in the World Wide Web become increasingly interactive and networked. Web 2.0 technologies provide a multitude of platforms, such as blogs, wikis, and forums where for example consumers can disseminate data about products and manufacturers. This data provides an abundance of information on personal experiences and opinions which are extremely relevant for companies and sales organizations. A new approach based on text mining and social network analysis is presented which allows detecting opinion leaders and opinion trends. This allows getting a better understanding of the opinion formation. The overall concept is presented and illustrated by an example.", "title": "" }, { "docid": "6ccad3fd0fea9102d15bd37306f5f562", "text": "This paper reviews deposition, integration, and device fabrication of ferroelectric PbZrxTi1−xO3 (PZT) films for applications in microelectromechanical systems. As examples, a piezoelectric ultrasonic micromotor and pyroelectric infrared detector array are presented. A summary of the published data on the piezoelectric properties of PZT thin films is given. The figures of merit for various applications are discussed. Some considerations and results on operation, reliability, and depolarization of PZT thin films are presented.", "title": "" }, { "docid": "2891ce3327617e9e957488ea21e9a20c", "text": "Recently, remote healthcare systems have received increasing attention in the last decade, explaining why intelligent systems with physiology signal monitoring for e-health care are an emerging area of development. 
Therefore, this study adopts a system which includes continuous collection and evaluation of multiple vital signs, long-term healthcare, and a cellular connection to a medical center in emergency case and it transfers all acquired raw data by the internet in normal case. The proposed system can continuously acquire four different physiological signs, for example, ECG, SpO2, temperature, and blood pressure and further relayed them to an intelligent data analysis scheme to diagnose abnormal pulses for exploring potential chronic diseases. The proposed system also has a friendly web-based interface for medical staff to observe immediate pulse signals for remote treatment. Once abnormal event happened or the request to real-time display vital signs is confirmed, all physiological signs will be immediately transmitted to remote medical server through both cellular networks and internet. Also data can be transmitted to a family member's mobile phone or doctor's phone through GPRS. A prototype of such system has been successfully developed and implemented, which will offer high standard of healthcare with a major reduction in cost for our society.", "title": "" }, { "docid": "b5831795da97befd3241b9d7d085a20f", "text": "Want to learn more about the background and concepts of Internet congestion control? This indispensable text draws a sketch of the future in an easily comprehensible fashion. Special attention is placed on explaining the how and why of congestion control mechanisms complex issues so far hardly understood outside the congestion control research community. A chapter on Internet Traffic Management from the perspective of an Internet Service Provider demonstrates how the theory of congestion control impacts on the practicalities of service delivery.", "title": "" }, { "docid": "ec07bddc8bdc96678eebf49c7ee3752e", "text": "This study aimed to assess the effects of core stability training on lower limbs' muscular asymmetries and imbalances in team sport. Twenty footballers were divided into two groups, either core stability or control group. Before each daily practice, core stability group (n = 10) performed a core stability training programme, while control group (n = 10) did a standard warm-up. The effects of the core stability training programme were assessed by performing isokinetic tests and single-leg countermovement jumps. Significant improvement was found for knee extensors peak torque at 3.14 rad · s(-1) (14%; P < 0.05), knee flexors peak torque at 1.05 and 3.14 rad · s(-1) (19% and 22% with P < 0.01 and P < 0.01, respectively) and peak torque flexors/extensors ratios at 1.05 and 3.14 rad · s(-1) (7.7% and 8.5% with P < 0.05 and P < 0.05, respectively) only in the core stability group. The jump tests showed a significant reduction in the strength asymmetries in core stability group (-71.4%; P = 0.02) while a concurrent increase was seen in the control group (33.3%; P < 0.05). This study provides practical evidence in combining core exercises for optimal lower limbs strength balance development in young soccer players.", "title": "" }, { "docid": "eece6349d77b415115fa6afbbbd85190", "text": "BACKGROUND\nAcute appendicitis is the most common cause of acute abdomen. Approximately 7% of the population will be affected by this condition during full life. 
The development of AIR score may contribute to diagnosis associating easy clinical criteria and two simple laboratory tests.\n\n\nAIM\nTo evaluate the score AIR (Appendicitis Inflammatory Response score) as a tool for the diagnosis and prediction of severity of acute appendicitis.\n\n\nMETHOD\nWere evaluated all patients undergoing surgical appendectomy. From 273 patients, 126 were excluded due to exclusion criteria. All patients were submitted o AIR score.\n\n\nRESULTS\nThe value of the C-reactive protein and the percentage of leukocytes segmented blood count showed a direct relationship with the phase of acute appendicitis.\n\n\nCONCLUSION\nAs for the laboratory criteria, serum C-reactive protein and assessment of the percentage of the polymorphonuclear leukocytes count were important to diagnosis and disease stratification.", "title": "" }, { "docid": "c1956e4c6b732fa6a420d4c69cfbe529", "text": "To improve the safety and comfort of a human-machine system, the machine needs to ‘know,’ in a real time manner, the human operator in the system. The machine’s assistance to the human can be fine tuned if the machine is able to sense the human’s state and intent. Related to this point, this paper discusses issues of human trust in automation, automation surprises, responsibility and authority. Examples are given of a driver assistance system for advanced automobile.", "title": "" }, { "docid": "3f5f8e75af4cc24e260f654f8834a76c", "text": "The Balanced Scorecard (BSC) methodology focuses on major critical issues of modern business organisations: the effective measurement of corporate performance and the evaluation of the successful implementation of corporate strategy. Despite the increased adoption of the BSC methodology by numerous business organisations during the last decade, limited case studies concern non-profit organisations (e.g. public sector, educational institutions, healthcare organisations, etc.). The main aim of this study is to present the development of a performance measurement system for public health care organisations, in the context of BSC methodology. The proposed approach considers the distinguished characteristics of the aforementioned sector (e.g. lack of competition, social character of organisations, etc.). The proposed measurement system contains the most important financial performance indicators, as well as non-financial performance indicators that are able to examine the quality of the provided services, the satisfaction of internal and external customers, the selfimprovement system of the organisation and the ability of the organisation to adapt and change. These indicators play the role of Key Performance Indicators (KPIs), in the context of BSC methodology. The presented analysis is based on a MCDA approach, where the UTASTAR method is used in order to aggregate the marginal performance of KPIs. This approach is able to take into account the preferences of the management of the organisation regarding the achievement of the defined strategic objectives. The main results of the proposed approach refer to the evaluation of the overall scores for each one of the main dimensions of the BSC methodology (i.e. financial, customer, internal business process, and innovation-learning). These results are able to help the organisation to evaluate and revise its strategy, and generally to adopt modern management approaches in every day practise. & 2011 Elsevier Ltd. 
All rights reserved.", "title": "" }, { "docid": "a1df80a201943ad386a7836c7ba3ff94", "text": "This paper estimates the effect of air pollution on child hospitalizations for asthma using naturally occurring seasonal variations in pollution within zip codes. Of the pollutants considered, carbon monoxide (CO) has a significant effect on asthma for children ages 1-18: if 1998 pollution levels were at their 1992 levels, there would be a 5-14% increase in asthma admissions. Also, households respond to information about pollution with avoidance behavior, suggesting it is important to account for these endogenous responses when measuring the effect of pollution on health. Finally, the effect of pollution is greater for children of lower socio-economic status (SES), indicating that pollution is one potential mechanism by which SES affects health.", "title": "" }, { "docid": "78829447a6cbf0aa020ef098a275a16d", "text": "Black soldier fly (BSF), Hermetia illucens (L.) is widely used in bio-recycling of human food waste and manure of livestock. Eggs of BSF were commonly collected by egg-trapping technique for mass rearing. To find an efficient lure for BSF egg-trapping, this study compared the number of egg batch trapped by different lures, including fruit, food waste, chicken manure, pig manure, and dairy manure. The result showed that fruit wastes are the most efficient on trapping BSF eggs. To test the effects of fruit species, number of egg batch trapped by three different fruit species, papaya, banana, and pineapple were compared, and no difference were found among fruit species. Environmental factors including temperature, relative humidity, and light intensity were measured and compared in different study sites to examine their effects on egg-trapping. The results showed no differences on temperature, relative humidity, and overall light intensity between sites, but the stability of light environment differed between sites. BSF tend to lay more eggs in site with stable light environment.", "title": "" }, { "docid": "057621c670a9b7253ba829210c530dca", "text": "Actual challenges in production are individualization and short product lifecycles. To achieve this, the product development and the production planning must be accelerated. In some cases specialized production machines are engineered for automating production processes for a single product. Regarding the engineering of specialized production machines, there is often a sequential process starting with the mechanics, proceeding with the electrics and ending with the automation design. To accelerate this engineering process the different domains have to be parallelized as far as possible (Schlögl, 2008). Thereby the different domains start detailing in parallel after the definition of a common concept. The system integration follows the detailing with the objective to verify the system including the PLC-code. Regarding production machines, the system integration is done either by commissioning of the real machine or by validating the PLCcode against a model of the machine, so called virtual commissioning.", "title": "" }, { "docid": "ca4aa2c6f4096bbffaa2e3e1dd06fbe8", "text": "Hybrid unmanned aircraft, that combine hover capability with a wing for fast and efficient forward flight, have attracted a lot of attention in recent years. Many different designs are proposed, but one of the most promising is the tailsitter concept. However, tailsitters are difficult to control across the entire flight envelope, which often includes stalled flight. 
Additionally, their wing surface makes them susceptible to wind gusts. In this paper, we propose incremental nonlinear dynamic inversion control for the attitude and position control. The result is a single, continuous controller, that is able to track the acceleration of the vehicle across the flight envelope. The proposed controller is implemented on the Cyclone hybrid UAV. Multiple outdoor experiments are performed, showing that unmodeled forces and moments are effectively compensated by the incremental control structure, and that accelerations can be tracked across the flight envelope. Finally, we provide a comprehensive procedure for the implementation of the controller on other types of hybrid UAVs.", "title": "" }, { "docid": "eaf30f31b332869bc45ff1288c41da71", "text": "Search Engines: Information Retrieval In Practice is writen by Bruce Croft in English language. Release on 2009-02-16, this book has 552 page count that consist of helpful information with easy reading experience. The book was publish by Addison-Wesley, it is one of best subjects book genre that gave you everything love about reading. You can find Search Engines: Information Retrieval In Practice book with ISBN 0136072240.", "title": "" }, { "docid": "dce75562a7e8b02364d39fd7eb407748", "text": "The ability to predict future user activity is invaluable when it comes to content recommendation and personalization. For instance, knowing when users will return to an online music service and what they will listen to increases user satisfaction and therefore user retention.\n We present a model based on Long-Short Term Memory to estimate when a user will return to a site and what their future listening behavior will be. In doing so, we aim to solve the problem of Just-In-Time recommendation, that is, to recommend the right items at the right time. We use tools from survival analysis for return time prediction and exponential families for future activity analysis. We show that the resulting multitask problem can be solved accurately, when applied to two real-world datasets.", "title": "" }, { "docid": "b59c843d687a1dbed0ef1b891c314424", "text": "Linear spectral unmixing is a popular tool in remotely sensed hyperspectral data interpretation. It aims at estimating the fractional abundances of pure spectral signatures (also called as endmembers) in each mixed pixel collected by an imaging spectrometer. In many situations, the identification of the end-member signatures in the original data set may be challenging due to insufficient spatial resolution, mixtures happening at different scales, and unavailability of completely pure spectral signatures in the scene. However, the unmixing problem can also be approached in semisupervised fashion, i.e., by assuming that the observed image signatures can be expressed in the form of linear combinations of a number of pure spectral signatures known in advance (e.g., spectra collected on the ground by a field spectroradiometer). Unmixing then amounts to finding the optimal subset of signatures in a (potentially very large) spectral library that can best model each mixed pixel in the scene. In practice, this is a combinatorial problem which calls for efficient linear sparse regression (SR) techniques based on sparsity-inducing regularizers, since the number of endmembers participating in a mixed pixel is usually very small compared with the (ever-growing) dimensionality (and availability) of spectral libraries. 
Linear SR is an area of very active research, with strong links to compressed sensing, basis pursuit (BP), BP denoising, and matching pursuit. In this paper, we study the linear spectral unmixing problem under the light of recent theoretical results published in those referred to areas. Furthermore, we provide a comparison of several available and new linear SR algorithms, with the ultimate goal of analyzing their potential in solving the spectral unmixing problem by resorting to available spectral libraries. Our experimental results, conducted using both simulated and real hyperspectral data sets collected by the NASA Jet Propulsion Laboratory's Airborne Visible Infrared Imaging Spectrometer and spectral libraries publicly available from the U.S. Geological Survey, indicate the potential of SR techniques in the task of accurately characterizing the mixed pixels using the library spectra. This opens new perspectives for spectral unmixing, since the abundance estimation process no longer depends on the availability of pure spectral signatures in the input data nor on the capacity of a certain endmember extraction algorithm to identify such pure signatures.", "title": "" }, { "docid": "956ffd90cc922e77632b8f9f79f42a98", "text": "Energy efficient actuators with adjustable stiffness: a review on AwAS, AwAS-II and CompACT VSA changing stiffness based on lever mechanism Amir jafari Nikos Tsagarakis Darwin G Caldwell Article information: To cite this document: Amir jafari Nikos Tsagarakis Darwin G Caldwell , (2015),\"Energy efficient actuators with adjustable stiffness: a review on AwAS, AwAS-II and CompACT VSA changing stiffness based on lever mechanism\", Industrial Robot: An International Journal, Vol. 42 Iss 3 pp. Permanent link to this document: http://dx.doi.org/10.1108/IR-12-2014-0433", "title": "" }, { "docid": "589396a7c9dae0567f0bcd4d83461a6f", "text": "The risk of inadequate hand hygiene in food handling settings is exacerbated when water is limited or unavailable, thereby making washing with soap and water difficult. The SaniTwice method involves application of excess alcohol-based hand sanitizer (ABHS), hand \"washing\" for 15 s, and thorough cleaning with paper towels while hands are still wet, followed by a standard application of ABHS. This study investigated the effectiveness of the SaniTwice methodology as an alternative to hand washing for cleaning and removal of microorganisms. On hands moderately soiled with beef broth containing Escherichia coli (ATCC 11229), washing with a nonantimicrobial hand washing product achieved a 2.86 (±0.64)-log reduction in microbial contamination compared with the baseline, whereas the SaniTwice method with 62 % ethanol (EtOH) gel, 62 % EtOH foam, and 70 % EtOH advanced formula gel achieved reductions of 2.64 ± 0.89, 3.64 ± 0.57, and 4.61 ± 0.33 log units, respectively. When hands were heavily soiled from handling raw hamburger containing E. coli, washing with nonantimicrobial hand washing product and antimicrobial hand washing product achieved reductions of 2.65 ± 0.33 and 2.69 ± 0.32 log units, respectively, whereas SaniTwice with 62 % EtOH foam, 70 % EtOH gel, and 70 % EtOH advanced formula gel achieved reductions of 2.87 ± 0.42, 2.99 ± 0.51, and 3.92 ± 0.65 log units, respectively. These results clearly demonstrate that the in vivo antibacterial efficacy of the SaniTwice regimen with various ABHS is equivalent to or exceeds that of the standard hand washing approach as specified in the U.S. Food and Drug Administration Food Code. 
Implementation of the SaniTwice regimen in food handling settings with limited water availability should significantly reduce the risk of foodborne infections resulting from inadequate hand hygiene.", "title": "" }, { "docid": "cd55fc3fafe2618f743a845d89c3a796", "text": "According to the notation proposed by the International Federation for the Theory of Mechanisms and Machines IFToMM (Ionescu, 2003); a parallel manipulator is a mechanism where the motion of the end-effector, namely the moving or movable platform, is controlled by means of at least two kinematic chains. If each kinematic chain, also known popularly as limb or leg, has a single active joint, then the mechanism is called a fully-parallel mechanism, in which clearly the nominal degree of freedom equates the number of limbs. Tire-testing machines (Gough & Whitehall, 1962) and flight simulators (Stewart, 1965), appear to be the first transcendental applications of these complex mechanisms. Parallel manipulators, and in general mechanisms with parallel kinematic architectures, due to benefits --over their serial counterparts-such as higher stiffness and accuracy, have found interesting applications such as walking machines, pointing devices, multi-axis machine tools, micro manipulators, and so on. The pioneering contributions of Gough and Stewart, mainly the theoretical paper of Stewart (1965), influenced strongly the development of parallel manipulators giving birth to an intensive research field. In that way, recently several parallel mechanisms for industrial purposes have been constructed using the, now, classical hexapod as a base mechanism: Octahedral Hexapod HOH-600 (Ingersoll), HEXAPODE CMW 300 (CMW), Cosmo Center PM-600 (Okuma), F-200i (FANUC) and so on. On the other hand one cannot ignore that this kind of parallel kinematic structures have a limited and complex-shaped workspace. Furthermore, their rotation and position capabilities are highly coupled and therefore the control and calibration of them are rather complicated. It is well known that many industrial applications do not require the six degrees of freedom of a parallel manipulator. Thus in order to simplify the kinematics, mechanical assembly and control of parallel manipulators, an interesting trend is the development of the so called defective parallel manipulators, in other words, spatial parallel manipulators with fewer than six degrees of freedom. Special mention deserves the Delta robot, invented by Clavel (1991); which proved that parallel robotic manipulators are an excellent option for industrial applications where the accuracy and stiffness are fundamental characteristics. Consider for instance that the Adept Quattro robot, an application of the Delta robot, developed by Francois Pierrot in collaboration with Fatronik (Int. patent appl. WO/2006/087399), has a", "title": "" } ]
scidocsrr
eeb25d53134c4cc77a78e8cb6d6fabbe
An Intelligent Secure and Privacy-Preserving Parking Scheme Through Vehicular Communications
[ { "docid": "fd61461d5033bca2fd5a2be9bfc917b7", "text": "Vehicular networks are very likely to be deployed in the coming years and thus become the most relevant form of mobile ad hoc networks. In this paper, we address the security of these networks. We provide a detailed threat analysis and devise an appropriate security architecture. We also describe some major design decisions still to be made, which in some cases have more than mere technical implications. We provide a set of security protocols, we show that they protect privacy and we analyze their robustness and efficiency.", "title": "" } ]
[ { "docid": "90b248a3b141fc55eb2e55d274794953", "text": "The aerodynamic admittance function (AAF) has been widely invoked to relate wind pressures on building surfaces to the oncoming wind velocity. In current practice, strip and quasi-steady theories are generally employed in formulating wind effects in the along-wind direction. These theories permit the representation of the wind pressures on building surfaces in terms of the oncoming wind velocity field. Synthesis of the wind velocity field leads to a generalized wind load that employs the AAF. This paper reviews the development of the current AAF in use. It is followed by a new definition of the AAF, which is based on the base bending moment. It is shown that the new AAF is numerically equivalent to the currently used AAF for buildings with linear mode shape and it can be derived experimentally via high frequency base balance. New AAFs for square and rectangular building models were obtained and compared with theoretically derived expressions. Some discrepancies between experimentally and theoretically derived AAFs in the high frequency range were noted.", "title": "" }, { "docid": "97c0dc54f51ebcfe041f18028a15c621", "text": "Mobile learning or “m-learning” is the process of learning when learners are not at a fixed location or time and can exploit the advantage of learning opportunities using mobile technologies. Nowadays, speech recognition is being used in many mobile applications.!Speech recognition helps people to interact with the device as if were they talking to another person. This technology helps people to learn anything using computers by promoting self-study over extended periods of time. The objective of this study focuses on designing and developing a mobile application for the Arabic recognition of spoken Quranic verses. The application is suitable for Android-based devices. The application is called Say Quran and is available on Google Play Store. Moreover, this paper presents the results of a preliminary study to gather feedback from students regarding the developed application.", "title": "" }, { "docid": "1b9f54b275252818f730858654dc4348", "text": "We will demonstrate a conversational products recommendation agent. This system shows how we combine research in personalized recommendation systems with research in dialogue systems to build a virtual sales agent. Based on new deep learning technologies we developed, the virtual agent is capable of learning how to interact with users, how to answer user questions, what is the next question to ask, and what to recommend when chatting with a human user. Normally a descent conversational agent for a particular domain requires tens of thousands of hand labeled conversational data or hand written rules. This is a major barrier when launching a conversation agent for a new domain. We will explore and demonstrate the effectiveness of the learning solution even when there is no hand written rules or hand labeled training data.", "title": "" }, { "docid": "fdc1beef8614e0c85e784597532a1ce4", "text": "This article presents the hardware design and software algorithms of RoboSimian, a statically stable quadrupedal robot capable of both dexterous manipulation and versatile mobility in difficult terrain. The robot has generalized limbs and hands capable of mobility and manipulation, along with almost fully hemispherical 3D sensing with passive stereo cameras. The system is semi-autonomous, enabling low-bandwidth, high latency control operated from a standard laptop. 
Because limbs are used for mobility and manipulation, a single unified mobile manipulation planner is used to generate autonomous behaviors, including walking, sitting, climbing, grasping, and manipulating. The remote operator interface is optimized to designate, parameterize, sequence, and preview behaviors, which are then executed by the robot. RoboSimian placed fifth in the DARPA Robotics Challenge (DRC) Trials, demonstrating its ability to perform disaster recovery tasks in degraded human environments.", "title": "" }, { "docid": "6300f94dbfa58583e15741e5c86aa372", "text": "In this paper, we study the problem of retrieving a ranked list of top-N items to a target user in recommender systems. We first develop a novel preference model by distinguishing different rating patterns of users, and then apply it to existing collaborative filtering (CF) algorithms. Our preference model, which is inspired by a voting method, is well-suited for representing qualitative user preferences. In particular, it can be easily implemented with less than 100 lines of codes on top of existing CF algorithms such as user-based, item-based, and matrix-factorizationbased algorithms. When our preference model is combined to three kinds of CF algorithms, experimental results demonstrate that the preference model can improve the accuracy of all existing CF algorithms such as ATOP and NDCG@25 by 3%–24% and 6%–98%, respectively.", "title": "" }, { "docid": "eb29f281b0237bea84ae26829f5545bd", "text": "Using formal concept analysis, we propose a method for engineering ontology from MongoDB to effectively represent unstructured data. Our method consists of three main phases: (1) generating formal context from a MongoDB, (2) applying formal concept analysis to derive a concept lattice from that formal context, and (3) converting the obtained concept lattice to the first prototype of an ontology. We apply our method on NorthWind database and demonstrate how the proposed mapping rules can be used for learning an ontology from such database. At the end, we discuss about suggestions by which we can improve and generalize the method for more complex database examples.", "title": "" }, { "docid": "51f2ba8b460be1c9902fb265b2632232", "text": "Pairs trading is an important and challenging research area in computational finance, in which pairs of stocks are bought and sold in pair combinations for arbitrage opportunities. Traditional methods that solve this set of problems mostly rely on statistical methods such as regression. In contrast to the statistical approaches, recent advances in computational intelligence (CI) are leading to promising opportunities for solving problems in the financial applications more effectively. In this paper, we present a novel methodology for pairs trading using genetic algorithms (GA). Our results showed that the GA-based models are able to significantly outperform the benchmark and our proposed method is capable of generating robust models to tackle the dynamic characteristics in the financial application studied. Based upon the promising results obtained, we expect this GA-based method to advance the research in computational intelligence for finance and provide an effective solution to pairs trading for investment in practice.", "title": "" }, { "docid": "a8ddaed8209d09998159014307233874", "text": "Traditional image-based 3D reconstruction methods use multiple images to extract 3D geometry. 
However, it is not always possible to obtain such images, for example when reconstructing destroyed structures using existing photographs or paintings with proper perspective (figure 1), and reconstructing objects without actually visiting the site using images from the web or postcards (figure 2). Even when multiple images are possible, parts of the scene appear in only one image due to occlusions and/or lack of features to match between images. Methods for 3D reconstruction from a single image do exist (e.g. [1] and [2]). We present a new method that is more accurate and more flexible so that it can model a wider variety of sites and structures than existing methods. Using this approach, we reconstructed in 3D many destroyed structures using old photographs and paintings. Sites all over the world have been reconstructed from tourist pictures, web pages, and postcards.", "title": "" }, { "docid": "c6bfdc5c039de4e25bb5a72ec2350223", "text": "Free-energy-based reinforcement learning (FERL) can handle Markov decision processes (MDPs) with high-dimensional state spaces by approximating the state-action value function with the negative equilibrium free energy of a restricted Boltzmann machine (RBM). In this study, we extend the FERL framework to handle partially observable MDPs (POMDPs) by incorporating a recurrent neural network that learns a memory representation sufficient for predicting future observations and rewards. We demonstrate that the proposed method successfully solves POMDPs with high-dimensional observations without any prior knowledge of the environmental hidden states and dynamics. After learning, task structures are implicitly represented in the distributed activation patterns of hidden nodes of the RBM.", "title": "" }, { "docid": "5e6175d56150485d559d0c1a963e12b8", "text": "High-resolution depth map can be inferred from a lowresolution one with the guidance of an additional highresolution texture map of the same scene. Recently, deep neural networks with large receptive fields are shown to benefit applications such as image completion. Our insight is that super resolution is similar to image completion, where only parts of the depth values are precisely known. In this paper, we present a joint convolutional neural pyramid model with large receptive fields for joint depth map super-resolution. Our model consists of three sub-networks, two convolutional neural pyramids concatenated by a normal convolutional neural network. The convolutional neural pyramids extract information from large receptive fields of the depth map and guidance map, while the convolutional neural network effectively transfers useful structures of the guidance image to the depth image. Experimental results show that our model outperforms existing state-of-the-art algorithms not only on data pairs of RGB/depth images, but also on other data pairs like color/saliency and color-scribbles/colorized images.", "title": "" }, { "docid": "e70425a0b9d14ff4223f3553de52c046", "text": "CUDA is a new general-purpose C language interface to GPU developed by NVIDIA. It makes full use of parallel of GPU and has been widely used now. 3D model reconstruction is a traditional and common technique which has been widely used in engineering experiments, CAD and computer graphics. In this paper, we present an algorithm of CUDA-based Poisson surface reconstruction. 
Our algorithm makes full use of parallel of GPU and runs entirely on GPU and is ten times faster than previous CPU algorithm.", "title": "" }, { "docid": "d05c6ec4bfb24f283e7f8baa08985e70", "text": "This paper describes a recently developed architecture for a Hardware-in-the-Loop simulator for Unmanned Aerial Vehicles. The principal idea is to use the advanced modeling capabilities of Simulink rather than hard-coded software as the flight dynamics simulating engine. By harnessing Simulink’s ability to precisely model virtually any dynamical system or phenomena this newly developed simulator facilitates the development, validation and verification steps of flight control algorithms. Although the presented architecture is used in conjunction with a particular commercial autopilot, the same approach can be easily implemented on a flight platform with a different autopilot. The paper shows the implementation of the flight modeling simulation component in Simulink supported with an interfacing software to a commercial autopilot. This offers the academic community numerous advantages for hardware-in-the-loop simulation of flight dynamics and control tasks. The developed setup has been rigorously tested under a wide variety of conditions. Results from hardware-in-the-loop and real flight tests are presented and compared to validate its adequacy and assess its usefulness as a rapid prototyping tool.", "title": "" }, { "docid": "ce99ce3fb3860e140164e7971291f0fa", "text": "We describe the development and psychometric characteristics of the Generalized Workplace Harassment Questionnaire (GWHQ), a 29-item instrument developed to assess harassing experiences at work in five conceptual domains: verbal aggression, disrespect, isolation/exclusion, threats/bribes, and physical aggression. Over 1700 current and former university employees completed the GWHQ at three time points. Factor analytic results at each wave of data suggested a five-factor solution that did not correspond to the original five conceptual factors. We suggest a revised scoring scheme for the GWHQ utilizing four of the empirically extracted factors: covert hostility, verbal hostility, manipulation, and physical hostility. Covert hostility was the most frequently experienced type of harassment, followed by verbal hostility, manipulation, and physical hostility. Verbal hostility, covert hostility, and manipulation were found to be significant predictors of psychological distress.", "title": "" }, { "docid": "5f806baa9987146a642fbce106f43291", "text": "Biofouling is generally undesirable for many applications. An overview of the medical, marine and industrial fields susceptible to fouling is presented. Two types of fouling include biofouling from organism colonization and inorganic fouling from non-living particles. Nature offers many solutions to control fouling through various physical and chemical control mechanisms. Examples include low drag, low adhesion, wettability (water repellency and attraction), microtexture, grooming, sloughing, various miscellaneous behaviours and chemical secretions. A survey of nature's flora and fauna was taken in order to discover new antifouling methods that could be mimicked for engineering applications. Antifouling methods currently employed, ranging from coatings to cleaning techniques, are described. 
New antifouling methods will presumably incorporate a combination of physical and chemical controls.", "title": "" }, { "docid": "337a738d386fa66725fe9be620365d5f", "text": "Change in a software is crucial to incorporate defect correction and continuous evolution of requirements and technology. Thus, development of quality models to predict the change proneness attribute of a software is important to effectively utilize and plan the finite resources during maintenance and testing phase of a software. In the current scenario, a variety of techniques like the statistical techniques, the Machine Learning (ML) techniques and the Search-based techniques (SBT) are available to develop models to predict software quality attributes. In this work, we assess the performance of ten machine learning and search-based techniques using data collected from three open source software. We first develop a change prediction model using one data set and then we perform inter-project validation using two other data sets in order to obtain unbiased and generalized results. The results of the study indicate comparable performance of SBT with other employed statistical and ML techniques. This study also supports inter project validation as we successfully applied the model created using the training data of one project on other similar projects and yield good results.", "title": "" }, { "docid": "c41c38377b1a824e1d021794802c7aed", "text": "This paper presents an optimization methodology that includes three important components necessary for a systematic approach to naval ship concept design. These are: • An efficient and effective search of design space for non-dominated designs • Well-defined and quantitative measures of objective attributes • An effective format to describe the design space and to present non-dominated concepts for rational selection by the customer A Multiple-Objective Genetic Optimization (MOGO) is used to search design parameter space and identify non-dominated design concepts based on life cycle cost and mission effectiveness. A nondominated frontier and selected generations of feasible designs are used to present results to the customer for selection of preferred alternatives. A naval ship design application is presented.", "title": "" }, { "docid": "261f146b67fd8e13d1ad8c9f6f5a8845", "text": "Vision based automatic lane tracking system requires information such as lane markings, road curvature and leading vehicle be detected before capturing the next image frame. Placing a camera on the vehicle dashboard and capturing the forward view results in a perspective view of the road image. The perspective view of the captured image somehow distorts the actual shape of the road, which involves the width, height, and depth. Respectively, these parameters represent the x, y and z components. As such, the image needs to go through a pre-processing stage to remedy the distortion using a transformation technique known as an inverse perspective mapping (IPM). This paper outlines the procedures involved.", "title": "" }, { "docid": "3c95e090ab4e57f2fd21543226ad55ae", "text": "Increase in the area and neuron number of the cerebral cortex over evolutionary time systematically changes its computational properties. One of the fundamental developmental mechanisms generating the cortex is a conserved rostrocaudal gradient in duration of neuron production, coupled with distinct asymmetries in the patterns of axon extension and synaptogenesis on the same axis. 
A small set of conserved sensorimotor areas with well-defined thalamic input anchors the rostrocaudal axis. These core mechanisms organize the cortex into two contrasting topographic zones, while systematically amplifying hierarchical organization on the rostrocaudal axis in larger brains. Recent work has shown that variation in 'cognitive control' in multiple species correlates best with absolute brain size, and this may be the behavioral outcome of this progressive organizational change.", "title": "" }, { "docid": "172aaf47ee3f89818abba35a463ecc76", "text": "I examined the relationship of recalled and diary recorded frequency of penile-vaginal intercourse (FSI), noncoital partnered sexual activity, and masturbation to measured waist and hip circumference in 120 healthy adults aged 19-38. Slimmer waist (in men and in the sexes combined) and slimmer hips (in men and women) were associated with greater FSI. Slimmer waist and hips were associated with rated importance of intercourse for men. Noncoital partnered sexual activity had a less consistent association with slimness. Slimmer waist and hips were associated with less masturbation (in men and in the sexes combined). I discuss the results in terms of differences between different sexual behaviors, attractiveness, emotional relatedness, physical sensitivity, sexual dysfunction, sociobiology, psychopharmacological aspects of excess fat and carbohydrate consumption, and implications for sex therapy.", "title": "" } ]
scidocsrr
b049d9544a7cee820b8df4f4b4fe1adc
Compact CPW-Fed Tri-Band Printed Antenna With Meandering Split-Ring Slot for WLAN/WiMAX Applications
[ { "docid": "237a88ea092d56c6511bb84604e6a7c7", "text": "A simple, low-cost, and compact printed dual-band fork-shaped monopole antenna for Bluetooth and ultrawideband (UWB) applications is proposed. Dual-band operation covering 2.4-2.484 GHz (Bluetooth) and 3.1-10.6 GHz (UWB) frequency bands are obtained by using a fork-shaped radiating patch and a rectangular ground patch. The proposed antenna is fed by a 50-Ω microstrip line and fabricated on a low-cost FR4 substrate having dimensions 42 (<i>L</i><sub>sub</sub>) × 24 (<i>W</i><sub>sub</sub>) × 1.6 (<i>H</i>) mm<sup>3</sup>. The antenna structure is fabricated and tested. Measured <i>S</i><sub>11</sub> is ≤ -10 dB over 2.3-2.5 and 3.1-12 GHz. The antenna shows acceptable gain flatness with nearly omnidirectional radiation patterns over both Bluetooth and UWB bands.", "title": "" }, { "docid": "7bc8be5766eeb11b15ea0aa1d91f4969", "text": "A coplanar waveguide (CPW)-fed planar monopole antenna with triple-band operation for WiMAX and WLAN applications is presented. The antenna, which occupies a small size of 25(L) × 25(W) × 0.8(H) mm3, is simply composed of a pentagonal radiating patch with two bent slots. By carefully selecting the positions and lengths of these slots, good dual stopband rejection characteristic of the antenna can be obtained so that three operating bands covering 2.14-2.85, 3.29-4.08, and 5.02-6.09 GHz can be achieved. The measured results also demonstrate that the proposed antenna has good omnidirectional radiation patterns with appreciable gain across the operating bands and is thus suitable to be integrated within the portable devices for WiMAX/WLAN applications.", "title": "" } ]
[ { "docid": "0b6f3498022abdf0407221faba72dcf1", "text": "A broadband coplanar waveguide (CPW) to coplanar strip (CPS) transmission line transition directly integrated with an RF microelectromechanical systems reconfigurable multiband antenna is presented in this paper. This transition design exhibits very good performance up to 55 GHz, and uses a minimum number of dissimilar transmission line sections and wire bonds, achieving a low-loss and low-cost balancing solution to feed planar antenna designs. The transition design methodology that was followed is described and measurement results are presented.", "title": "" }, { "docid": "c31dddbca92e13e84e08cca310329151", "text": "For the first time, automated Hex solvers have surpassed humans in their ability to solve Hex positions: they can now solve many 9×9 Hex openings. We summarize the methods that attained this milestone, and examine the future of Hex solvers.", "title": "" }, { "docid": "65ed76ddd6f7fd0aea717d2e2643dd16", "text": "In semi-supervised learning, a number of labeled examples are usually required for training an initial weakly useful predictor which is in turn used for exploiting the unlabeled examples. However, in many real-world applications there may exist very few labeled training examples, which makes the weakly useful predictor difficult to generate, and therefore these semisupervised learning methods cannot be applied. This paper proposes a method working under a two-view setting. By taking advantages of the correlations between the views using canonical component analysis, the proposed method can perform semi-supervised learning with only one labeled training example. Experiments and an application to content-based image retrieval validate the effectiveness of the proposed method.", "title": "" }, { "docid": "26b67fe7ee89c941d313187672b1d514", "text": "Since permanent magnet linear synchronous motor (PMLSM) has a bright future in electromagnetic launch (EML), moving-magnet PMLSM with multisegment primary is a potential choice. To overcome the end effect in the junctions of armature units, three different ring windings are proposed for the multisegment primary of PMLSM: slotted ring windings, slotless ring windings, and quasi-sinusoidal ring windings. They are designed for various demands of EML, regarding the load levels and force fluctuations. Auxiliary iron yokes are designed to reduce the mover weights, and also help restrain the end effect. PMLSM with slotted ring windings has a higher thrust for heavy load EML. PMLSM with slotless ring windings eliminates the cogging effect, while PMLSM with quasi-sinusoidal ring windings has very low thrust ripple; they aim to launch the light aircraft and run smooth. Structure designs of these motors are introduced; motor models and parameter optimizations are accomplished by finite-element method (FEM). Then, performance advantages of the proposed motors are investigated by comparisons of common PMLSMs. At last, the prototypes are manufactured and tested to validate the feasibilities of ring winding motors with auxiliary iron yokes. The results prove that the proposed motors can effectively satisfy the requirements of EML.", "title": "" }, { "docid": "1336b193e4884a024f21a384b265eac6", "text": "In this proposal, we introduce Bayesian Abductive Logic Programs (BALP), a probabilistic logic that adapts Bayesian Logic Programs (BLPs) for abductive reasoning. Like BLPs, BALPs also combine first-order logic and Bayes nets. 
However, unlike BLPs, which use deduction to construct Bayes nets, BALPs employ logical abduction. As a result, BALPs are more suited for problems like plan/activity recognition that require abductive reasoning. In order to demonstrate the efficacy of BALPs, we apply it to two abductive reasoning tasks – plan recognition and natural language understanding.", "title": "" }, { "docid": "529929af902100d25e08fe00d17e8c1a", "text": "Engagement is the holy grail of learning whether it is in a classroom setting or an online learning platform. Studies have shown that engagement of the student while learning can benefit students as well as the teacher if the engagement level of the student is known. It is difficult to keep track of the engagement of each student in a face-to-face learning happening in a large classroom. It is even more difficult in an online learning platform where, the user is accessing the material at different instances. Automatic analysis of the engagement of students can help to better understand the state of the student in a classroom setting as well as online learning platforms and is more scalable. In this paper we propose a framework that uses Temporal Convolutional Network (TCN) to understand the intensity of engagement of students attending video material from Massive Open Online Courses (MOOCs). The input to the TCN network is the statistical features computed on 10 second segments of the video from the gaze, head pose and action unit intensities available in OpenFace library. The ability of the TCN architecture to capture long term dependencies gives it the ability to outperform other sequential models like LSTMs. On the given test set in the EmotiW 2018 sub challenge-\"Engagement in the Wild\", the proposed approach with Dilated-TCN achieved an average mean square error of 0.079.", "title": "" }, { "docid": "ee61181cb9625868526eb608db0c58b4", "text": "The primary focus of machine learning has traditionally been on learning from data assumed to be sufficient and representative of the underlying fixed, yet unknown, distribution. Such restrictions on the problem domain paved the way for development of elegant algorithms with theoretically provable performance guarantees. As is often the case, however, real-world problems rarely fit neatly into such restricted models. For instance class distributions are often skewed, resulting in the “class imbalance” problem. Data drawn from non-stationary distributions is also common in real-world applications, resulting in the “concept drift” or “non-stationary learning” problem which is often associated with streaming data scenarios. Recently, these problems have independently experienced increased research attention, however, the combined problem of addressing all of the above mentioned issues has enjoyed relatively little research. If the ultimate goal of intelligent machine learning algorithms is to be able to address a wide spectrum of real-world scenarios, then the need for a general framework for learning from, and adapting to, a non-stationary environment that may introduce imbalanced data can be hardly overstated. In this paper, we first present an overview of each of these challenging areas, followed by a comprehensive review of recent research for developing such a general framework.", "title": "" }, { "docid": "54a1257346f9a1ead514bb8077b0e7ca", "text": "Recent years has witnessed growing interest in hyperspectral image (HSI) processing. 
In practice, however, HSIs always suffer from huge data size and mass of redundant information, which hinder their application in many cases. HSI compression is a straightforward way of relieving these problems. However, most of the conventional image encoding algorithms mainly focus on the spatial dimensions, and they need not consider the redundancy in the spectral dimension. In this paper, we propose a novel HSI compression and reconstruction algorithm via patch-based low-rank tensor decomposition (PLTD). Instead of processing the HSI separately by spectral channel or by pixel, we represent each local patch of the HSI as a third-order tensor. Then, the similar tensor patches are grouped by clustering to form a fourth-order tensor per cluster. Since the grouped tensor is assumed to be redundant, each cluster can be approximately decomposed to a coefficient tensor and three dictionary matrices, which leads to a low-rank tensor representation of both the spatial and spectral modes. The reconstructed HSI can then be simply obtained by the product of the coefficient tensor and dictionary matrices per cluster. In this way, the proposed PLTD algorithm simultaneously removes the redundancy in both the spatial and spectral domains in a unified framework. The extensive experimental results on various public HSI datasets demonstrate that the proposed method outperforms the traditional image compression approaches and other tensor-based methods.", "title": "" }, { "docid": "5785108e48e62ce2758a7b18559a697e", "text": "The objective of this article is to create a better understanding of the intersection of the academic fields of entrepreneurship and strategic management, based on an aggregation of the extant literature in these two fields. The article structures and synthesizes the existing scholarly works in the two fields, thereby generating new knowledge. The results can be used to further enhance fruitful integration of these two overlapping but separate academic fields. The article attempts to integrate the two fields by first identifying apparent interrelations, and then by concentrating in more detail on some important intersections, including strategic management in small and medium-sized enterprises and start-ups, acknowledging the central role of the entrepreneur. The content and process sides of strategic management are discussed as well as their important connecting link, the business plan. To conclude, implications and future research directions for the two fields are proposed.", "title": "" }, { "docid": "efde28bc545de68dbb44f85b198d85ff", "text": "Blockchain technology is regarded as highly disruptive, but there is a lack of formalization and standardization of terminology. Not only because there are several (sometimes propriety) implementation platforms, but also because the academic literature so far is predominantly written from either a purely technical or an economic application perspective. The result of the confusion is an offspring of blockchain solutions, types, roadmaps and interpretations. For blockchain to be accepted as a technology standard in established industries, it is pivotal that ordinary internet users and business executives have a basic yet fundamental understanding of the workings and impact of blockchain. This conceptual paper provides a theoretical contribution and guidance on what blockchain actually is by taking an ontological approach. 
Enterprise Ontology is used to make a clear distinction between the datalogical, infological and essential level of blockchain transactions and smart contracts.", "title": "" }, { "docid": "5275184686a8453a1922cec7a236b66d", "text": "Children’s sense of relatedness is vital to their academic motivation from 3rd to 6th grade. Children’s (n 641) reports of relatedness predicted changes in classroom engagement over the school year and contributed over and above the effects of perceived control. Regression and cumulative risk analyses revealed that relatedness to parents, teachers, and peers each uniquely contributed to students’ engagement, especially emotional engagement. Girls reported higher relatedness than boys, but relatedness to teachers was a more salient predictor of engagement for boys. Feelings of relatedness to teachers dropped from 5th to 6th grade, but the effects of relatedness on engagement were stronger for 6th graders. Discussion examines theoretical, empirical, and practical implications of relatedness as a key predictor of children’s academic motivation and performance.", "title": "" }, { "docid": "c75967795041ef900236d71328dd7936", "text": "In order to investigate the strategies used to plan and control multijoint arm trajectories, two-degrees-of-freedom arm movements performed by normal adult humans were recorded. Only the shoulder and elbow joints were active. When a subject was told simply to move his hand from one visual target to another, the path of the hand was roughly straight, and the hand speed profile of their straight trajectories was bell-shaped. When the subject was required to produce curved hand trajectories, the path usually had a segmented appearance, as if the subject was trying to approximate a curve with low curvature elements. Hand speed profiles associated with curved trajectories contained speed valleys or inflections which were temporally associated with the local maxima in the trajectory curvature. The mean duration of curved movements was longer than the mean for straight movements. These results are discussed in terms of trajectory control theories which have originated in the fields of mechanical manipulator control and biological motor control. Three explanations for the results are offered.", "title": "" }, { "docid": "06c4281aad5e95cac1f4525cbb90e5c7", "text": "Offering training programs to their employees is one of the necessary tasks that managers must comply with. Training is done mainly to provide upto-date knowledge or to convey to staff the objectives, history, corporate name, functions of the organization’s areas, processes, laws, norms or policies that must be fulfilled. Although there are a lot of methods, models or tools that are useful for this purpose, many companies face with some common problems like employee’s motivation and high costs in terms of money and time. In an effort to solve this problem, new trends have emerged in the last few years, in particular strategies related to games, such as serious games and gamification, whose success has been demonstrated by numerous researchers. According to the above, we present a systematic literature review of the different approaches that have used games or their elements, using the procedure suggested by Cooper, on this matter, ending with about the positive and negative findings.", "title": "" }, { "docid": "24d55c65807e4a90fb0dffb23fc2f7bc", "text": "This paper presents a comprehensive study of deep correlation features on image style classification. 
Inspired by that, correlation between feature maps can effectively describe image texture, and we design various correlations and transform them into style vectors, and investigate classification performance brought by different variants. In addition to intralayer correlation, interlayer correlation is proposed as well, and its effectiveness is verified. After showing the effectiveness of deep correlation features, we further propose a learning framework to automatically learn correlations between feature maps. Through extensive experiments on image style classification and artist classification, we demonstrate that the proposed learnt deep correlation features outperform several variants of convolutional neural network features by a large margin, and achieve the state-of-the-art performance.", "title": "" }, { "docid": "283d3f1ff0ca4f9c0a2a6f4beb1f7771", "text": "As a proof-of-concept for the vision “SSD as SQL Engine” (SaS in short), we demonstrate that SQLite [4], a popular mobile database engine, in its entirety can run inside a real SSD development platform. By turning storage device into database engine, SaS allows applications to directly interact with full SQL database server running inside storage device. In SaS, the SQL language itself, not the traditional dummy block interface, will be provided as new interface between applications and storage device. In addition, since SaS plays the role of the uni ed platform of database computing node and storage node, the host and the storage need not be segregated any more as separate physical computing components.", "title": "" }, { "docid": "62d39d41523bca97939fa6a2cf736b55", "text": "We consider criteria for variational representations of non-Gaussian latent variables, and derive variational EM algorithms in general form. We establish a general equivalence among convex bounding methods, evidence based methods, and ensemble learning/Variational Bayes methods, which has previously been demonstrated only for particular cases.", "title": "" }, { "docid": "328aad76b94b34bf49719b98ae391cfe", "text": "We discuss methods for statistically analyzing the output from stochastic discrete-event or Monte Carlo simulations. Terminating and steady-state simulations are considered.", "title": "" }, { "docid": "a9a22c9c57e9ba8c3deefbea689258d5", "text": "Functional neuroimaging studies have shown that romantic love and maternal love are mediated by regions specific to each, as well as overlapping regions in the brain's reward system. Nothing is known yet regarding the neural underpinnings of unconditional love. The main goal of this functional magnetic resonance imaging study was to identify the brain regions supporting this form of love. Participants were scanned during a control condition and an experimental condition. In the control condition, participants were instructed to simply look at a series of pictures depicting individuals with intellectual disabilities. In the experimental condition, participants were instructed to feel unconditional love towards the individuals depicted in a series of similar pictures. Significant loci of activation were found, in the experimental condition compared with the control condition, in the middle insula, superior parietal lobule, right periaqueductal gray, right globus pallidus (medial), right caudate nucleus (dorsal head), left ventral tegmental area and left rostro-dorsal anterior cingulate cortex. 
These results suggest that unconditional love is mediated by a distinct neural network relative to that mediating other emotions. This network contains cerebral structures known to be involved in romantic love or maternal love. Some of these structures represent key components of the brain's reward system.", "title": "" }, { "docid": "c5f749c36b3d8af93c96bee59f78efe5", "text": "INTRODUCTION\nMolecular diagnostics is a key component of laboratory medicine. Here, the authors review key triggers of ever-increasing automation in nucleic acid amplification testing (NAAT) with a focus on specific automated Polymerase Chain Reaction (PCR) testing and platforms such as the recently launched cobas® 6800 and cobas® 8800 Systems. The benefits of such automation for different stakeholders including patients, clinicians, laboratory personnel, hospital administrators, payers, and manufacturers are described. Areas Covered: The authors describe how molecular diagnostics has achieved total laboratory automation over time, rivaling clinical chemistry to significantly improve testing efficiency. Finally, the authors discuss how advances in automation decrease the development time for new tests enabling clinicians to more readily provide test results. Expert Commentary: The advancements described enable complete diagnostic solutions whereby specific test results can be combined with relevant patient data sets to allow healthcare providers to deliver comprehensive clinical recommendations in multiple fields ranging from infectious disease to outbreak management and blood safety solutions.", "title": "" }, { "docid": "87eb54a981fca96475b73b3dfa99b224", "text": "Cost-Sensitive Learning is a type of learning in data mining that takes the misclassification costs (and possibly other types of cost) into consideration. The goal of this type of learning is to minimize the total cost. The key difference between cost-sensitive learning and cost-insensitive learning is that cost-sensitive learning treats the different misclassifications differently. Costinsensitive learning does not take the misclassification costs into consideration. The goal of this type of learning is to pursue a high accuracy of classifying examples into a set of known classes.", "title": "" } ]
scidocsrr
a7bbea069feaed269fc9caf24cc3c6a0
Architectural support for SWAR text processing with parallel bit streams: the inductive doubling principle
[ { "docid": "8fde46517d705da12fb43ce110a27a0f", "text": "Parabix (parallel bit streams for XML) is an open-source XML parser that employs the SIMD (single-instruction multiple-data) capabilities of modern-day commodity processors to deliver dramatic performance improvements over traditional byte-at-a-time parsing technology. Byte-oriented character data is first transformed to a set of 8 parallel bit streams, each stream comprising one bit per character code unit. Character validation, transcoding and lexical item stream formation are all then carried out in parallel using bitwise logic and shifting operations. Byte-at-a-time scanning loops in the parser are replaced by bit scan loops that can advance by as many as 64 positions with a single instruction.\n A performance study comparing Parabix with the open-source Expat and Xerces parsers is carried out using the PAPI toolkit. Total CPU cycle counts, level 2 data cache misses and branch mispredictions are measured and compared for each parser. The performance of Parabix is further studied with a breakdown of the cycle counts across the core components of the parser. Prospects for further performance improvements are also outlined, with a particular emphasis on leveraging the intraregister parallelism of SIMD processing to enable intrachip parallelism on multicore architectures.", "title": "" } ]
[ { "docid": "bf1ba6901d6c64a341ba1491c6c2c3c9", "text": "The present research proposes schema congruity as a theoretical basis for examining the effectiveness and consequences of product anthropomorphism. Results of two studies suggest that the ability of consumers to anthropomorphize a product and their consequent evaluation of that product depend on the extent to which that product is endowed with characteristics congruent with the proposed human schema. Furthermore, consumers’ perception of the product as human mediates the influence of feature type on product evaluation. Results of a third study, however, show that the affective tag attached to the specific human schema moderates the evaluation but not the successful anthropomorphizing of theproduct.", "title": "" }, { "docid": "7b99f2b0c903797c5ed33496f69481fc", "text": "Dance imagery is a consciously created mental representation of an experience, either real or imaginary, that may affect the dancer and her or his movement. In this study, imagery research in dance was reviewed in order to: 1. describe the themes and ideas that the current literature has attempted to illuminate and 2. discover the extent to which this literature fits the Revised Applied Model of Deliberate Imagery Use. A systematic search was performed, and 43 articles from 24 journals were found to fit the inclusion criteria. The articles were reviewed, analyzed, and categorized. The findings from the articles were then reported using the Revised Applied Model as a framework. Detailed descriptions of Who, What, When and Where, Why, How, and Imagery Ability were provided, along with comparisons to the field of sports imagery. Limitations within the field, such as the use of non-dance-specific and study-specific measurements, make comparisons and clear conclusions difficult to formulate. Future research can address these problems through the creation of dance-specific measurements, higher participant rates, and consistent methodologies between studies.", "title": "" }, { "docid": "7f3686b783273c4df7c4fb41fe7ccefd", "text": "Data from service and manufacturing sectors is increasing sharply and lifts up a growing enthusiasm for the notion of Big Data. This paper investigates representative Big Data applications from typical services like finance & economics, healthcare, Supply Chain Management (SCM), and manufacturing sector. Current technologies from key aspects of storage technology, data processing technology, data visualization technique, Big Data analytics, as well as models and algorithms are reviewed. This paper then provides a discussion from analyzing current movements on the Big Data for SCM in service and manufacturing world-wide including North America, Europe, and Asia Pacific region. Current challenges, opportunities, and future perspectives such as data collection methods, data transmission, data storage, processing technologies for Big Data, Big Data-enabled decision-making models, as well as Big Data interpretation and application are highlighted. Observations and insights from this paper could be referred by academia and practitioners when implementing Big Data analytics in the service and manufacturing sectors. 2016 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "345e46da9fc01a100f10165e82d9ca65", "text": "We present a new theoretical framework for analyzing and learning artificial neural networks. Our approach simultaneously and adaptively learns both the structure of the network as well as its weights. 
The methodology is based upon and accompanied by strong data-dependent theoretical learning guarantees, so that the final network architecture provably adapts to the complexity of any given problem.", "title": "" }, { "docid": "a4f0b524f79db389c72abd27d36f8944", "text": "In order to summarize the status of rescue robotics, this chapter will cover the basic characteristics of disasters and their impact on robotic design, describe the robots actually used in disasters to date, promising robot designs (e.g., snakes, legged locomotion) and concepts (e.g., robot teams or swarms, sensor networks), methods of evaluation in benchmarks for rescue robotics, and conclude with a discussion of the fundamental problems and open issues facing rescue robotics, and their evolution from an interesting idea to widespread adoption. The Chapter will concentrate on the rescue phase, not recovery, with the understanding that capabilities for rescue can be applied to, and extended for, the recovery phase. The use of robots in the prevention and preparedness phases of disaster management are outside the scope of this chapter.", "title": "" }, { "docid": "78a2bf1c2edec7ec9eb48f8b07dc9e04", "text": "The performance of the most commonly used metal antennas close to the human body is one of the limiting factors of the performance of bio-sensors and wireless body area networks (WBAN). Due to the high dielectric and conductivity contrast with respect to most parts of the human body (blood, skin, …), the range of most of the wireless sensors operating in RF and microwave frequencies is limited to 1–2 cm when attached to the body. In this paper, we introduce the very novel idea of liquid antennas, that is based on engineering the properties of liquids. This approach allows for the improvement of the range by a factor of 5–10 in a very easy-to-realize way, just modifying the salinity of the aqueous solution of the antenna. A similar methodology can be extended to the development of liquid RF electronics for implantable devices and wearable real-time bio-signal monitoring, since it can potentially lead to very flexible antenna and electronic configurations.", "title": "" }, { "docid": "518dc6882c6e13352c7b41f23dfd2fad", "text": "The Diagnostic and Statistical Manual of Mental Disorders (DSM) is considered to be the gold standard manual for assessing the psychiatric diseases and is currently in its fourth version (DSM-IV), while a fifth (DSM-V) has just been released in May 2013. The DSM-V Anxiety Work Group has put forward recommendations to modify the criteria for diagnosing specific phobias. In this manuscript, we propose to consider the inclusion of nomophobia in the DSM-V, and we make a comprehensive overview of the existing literature, discussing the clinical relevance of this pathology, its epidemiological features, the available psychometric scales, and the proposed treatment. Even though nomophobia has not been included in the DSM-V, much more attention is paid to the psychopathological effects of the new media, and the interest in this topic will increase in the near future, together with the attention and caution not to hypercodify as pathological normal behaviors.", "title": "" }, { "docid": "917ab22adee174259bef5171fe6f14fb", "text": "The manner in which quadrupeds change their locomotive patterns—walking, trotting, and galloping—with changing speed is poorly understood. 
In this paper, we provide evidence for interlimb coordination during gait transitions using a quadruped robot for which coordination between the legs can be self-organized through a simple “central pattern generator” (CPG) model. We demonstrate spontaneous gait transitions between energy-efficient patterns by changing only the parameter related to speed. Interlimb coordination was achieved with the use of local load sensing only without any preprogrammed patterns. Our model exploits physical communication through the body, suggesting that knowledge of physical communication is required to understand the leg coordination mechanism in legged animals and to establish design principles for legged robots that can reproduce flexible and efficient locomotion.", "title": "" }, { "docid": "43f5d21de3421564a7d5ecd6c074ea0a", "text": "Epithelial-mesenchymal transition (EMT) is an important process in embryonic development, fibrosis, and cancer metastasis. During cancer progression, the activation of EMT permits cancer cells to acquire migratory, invasive, and stem-like properties. A growing body of evidence supports the critical link between EMT and cancer stemness. However, contradictory results have indicated that the inhibition of EMT also promotes cancer stemness, and that mesenchymal-epithelial transition, the reverse process of EMT, is associated with the tumor-initiating ability required for metastatic colonization. The concept of 'intermediate-state EMT' provides a possible explanation for this conflicting evidence. In addition, recent studies have indicated that the appearance of 'hybrid' epithelial-mesenchymal cells is favorable for the establishment of metastasis. In summary, dynamic changes or plasticity between the epithelial and the mesenchymal states rather than a fixed phenotype is more likely to occur in tumors in the clinical setting. Further studies aimed at validating and consolidating the concept of intermediate-state EMT and hybrid tumors are needed for the establishment of a comprehensive profile of cancer metastasis.", "title": "" }, { "docid": "a1f29ac1db0745a61baf6995459c02e7", "text": "Adolescence is a developmental period characterized by suboptimal decisions and actions that give rise to an increased incidence of unintentional injuries and violence, alcohol and drug abuse, unintended pregnancy and sexually transmitted diseases. Traditional neurobiological and cognitive explanations for adolescent behavior have failed to account for the nonlinear changes in behavior observed during adolescence, relative to childhood and adulthood. This review provides a biologically plausible conceptualization of the neural mechanisms underlying these nonlinear changes in behavior, as a heightened responsiveness to incentives while impulse control is still relatively immature during this period. Recent human imaging and animal studies provide a biological basis for this view, suggesting differential development of limbic reward systems relative to top-down control systems during adolescence relative to childhood and adulthood. This developmental pattern may be exacerbated in those adolescents with a predisposition toward risk-taking, increasing the risk for poor outcomes.", "title": "" }, { "docid": "f437862098dac160f3a3578baeb565a2", "text": "Techniques for modeling and simulating channel conditions play an essential role in understanding network protocol and application behavior. 
In [11], we demonstrated that inaccurate modeling using a traditional analytical model yielded significant errors in error control protocol parameters choices. In this paper, we demonstrate that time-varying effects on wireless channels result in wireless traces which exhibit non-stationary behavior over small window sizes. We then present an algorithm that divides traces into stationary components in order to provide analytical channel models that, relative to traditional approaches, more accurately represent characteristics such as burstiness, statistical distribution of errors, and packet loss processes. Our algorithm also generates artificial traces with the same statistical characteristics as actual collected network traces. For validation, we develop a channel model for the circuit-switched data service in GSM and show that it: (1) more closely approximates GSM channel characteristics than a traditional Gilbert model and (2) generates artificial traces that closely match collected traces' statistics. Using these traces in a simulator environment enables future protocol and application testing under different controlled and repeatable conditions.", "title": "" }, { "docid": "c3c36535a6dbe74165c0e8b798ac820f", "text": "Multiplier, being a very vital part in the design of microprocessor, graphical systems, multimedia systems, DSP system etc. It is very important to have an efficient design in terms of performance, area, speed of the multiplier, and for the same Booth's multiplication algorithm provides a very fundamental platform for all the new advances made for high end multipliers meant for faster multiplication with higher performance. The algorithm provides an efficient encoding of the bits during the first steps of the multiplication process. In pursuit of the same, Radix 4 booths encoding has increased the performance of the multiplier by reducing the number of partial products generated. Radix 4 Booths algorithm produces both positive and negative partial products and implementing the negative partial product nullifies the advances made in different units to some extent if not fully. Most of the research work focuses on the reduction of the number of partial products generated and making efficient implementation of the algorithm. There is very little work done on disposal of the negative partial products generated. The presented work in the paper addresses the issue of disposal of the negative partial products efficiently by computing the 2's complement avoiding the additional adder for adding 1 and generation of long carry chain, hence. The proposed mechanism also continues to support the concept of reducing the partial product and in persuasion of the same it is able to reduce the number of partial product and also improved further from n/2 +1 partial products achieved via modified booths algorithm to n/2. Also, while implementing the proposed mechanism using Verilog HDL, a mode selection capability is provided, enabling the same hardware to act as multiplier and as a simple two's complement calculator using the proposed mechanism. The proposed technique has added advantage in terms of its independentness of the number of bits to be multiplied. It is tested and verified with varied test vectors of different number bit sets. 
Xilinx synthesis tool is used for synthesis and the multiplier mechanism has a maximum operating frequency of 14.59 MHz and a delay of 7.013 ns.", "title": "" }, { "docid": "db907780a2022761d2595a8ad5d03401", "text": "This letter is concerned with the stability analysis of neural networks (NNs) with time-varying interval delay. The relationship between the time-varying delay and its lower and upper bounds is taken into account when estimating the upper bound of the derivative of Lyapunov functional. As a result, some improved delay/interval-dependent stability criteria for NNs with time-varying interval delay are proposed. Numerical examples are given to demonstrate the effectiveness and the merits of the proposed method.", "title": "" }, { "docid": "8fc560987781afbb25f47eb560176e2c", "text": "Liposomes are microparticulate lipoidal vesicles which are under extensive investigation as drug carriers for improving the delivery of therapeutic agents. Due to new developments in liposome technology, several liposomebased drug formulations are currently in clinical trial, and recently some of them have been approved for clinical use. Reformulation of drugs in liposomes has provided an opportunity to enhance the therapeutic indices of various agents mainly through alteration in their biodistribution. This review discusses the potential applications of liposomes in drug delivery with examples of formulations approved for clinical use, and the problems associated with further exploitation of this drug delivery system. © 1997 Elsevier Science B.V.", "title": "" }, { "docid": "c1aa687c4a48cfbe037fe87ed4062dab", "text": "This paper deals with the modelling and control of a single sided linear switched reluctance actuator. This study provide a presentation of modelling and proposes a study on open and closed loop controls for the studied motor. From the proposed model, its dynamic behavior is described and discussed in detail. In addition, a simpler controller based on PID regulator is employed to upgrade the dynamic behavior of the motor. The simulation results in closed loop show a significant improvement in dynamic response compared with open loop. In fact, this simple type of controller offers the possibility to improve the dynamic response for sliding door application.", "title": "" }, { "docid": "be82da372c061ef3029273bfc91a9e0a", "text": "Search and rescue missions and surveillance require finding targets in a large area. These tasks often use unmanned aerial vehicles (UAVs) with cameras to detect and move towards a target. However, common UAV approaches make two simplifying assumptions. First, they assume that observations made from different heights are deterministically correct. In practice, observations are noisy, with the noise increasing as the height used for observations increases. Second, they assume that a motion command executes correctly, which may not happen due to wind and other environmental factors. To address these, we propose a sequential algorithm that determines actions in real time based on observations, using partially observable Markov decision processes (POMDPs). Our formulation handles both observations and motion uncertainty and errors. We run offline simulations and learn a policy. This policy is run on a UAV to find the target efficiently. We employ a novel compact formulation to represent the coordinates of the drone relative to the target coordinates. 
Our POMDP policy finds the target up to 3.4 times faster when compared to a heuristic policy.", "title": "" }, { "docid": "4239f9110973888c7eded81037c056b3", "text": "The role of epistasis in the genetic architecture of quantitative traits is controversial, despite the biological plausibility that nonlinear molecular interactions underpin the genotype–phenotype map. This controversy arises because most genetic variation for quantitative traits is additive. However, additive variance is consistent with pervasive epistasis. In this Review, I discuss experimental designs to detect the contribution of epistasis to quantitative trait phenotypes in model organisms. These studies indicate that epistasis is common, and that additivity can be an emergent property of underlying genetic interaction networks. Epistasis causes hidden quantitative genetic variation in natural populations and could be responsible for the small additive effects, missing heritability and the lack of replication that are typically observed for human complex traits.", "title": "" }, { "docid": "5565f51ad8e1aaee43f44917befad58a", "text": "We explore the application of deep residual learning and dilated convolutions to the keyword spotting task, using the recently-released Google Speech Commands Dataset as our benchmark. Our best residual network (ResNet) implementation significantly outperforms Google's previous convolutional neural networks in terms of accuracy. By varying model depth and width, we can achieve compact models that also outperform previous small-footprint variants. To our knowledge, we are the first to examine these approaches for keyword spotting, and our results establish an open-source state-of-the-art reference to support the development of future speech-based interfaces.", "title": "" }, { "docid": "1b47dffdff3825ad44a0430311e2420b", "text": "The present paper describes the SSM algorithm of protein structure comparison in three dimensions, which includes an original procedure of matching graphs built on the protein's secondary-structure elements, followed by an iterative three-dimensional alignment of protein backbone Calpha atoms. The SSM results are compared with those obtained from other protein comparison servers, and the advantages and disadvantages of different scores that are used for structure recognition are discussed. A new score, balancing the r.m.s.d. and alignment length Nalign, is proposed. It is found that different servers agree reasonably well on the new score, while showing considerable differences in r.m.s.d. and Nalign.", "title": "" }, { "docid": "9c16f3ccaab4e668578e3eda7d452ebd", "text": "Speech is a common and effective way of communication between humans, and modern consumer devices such as smartphones and home hubs are equipped with deep learning based accurate automatic speech recognition to enable natural interaction between humans and machines. Recently, researchers have demonstrated powerful attacks against machine learning models that can fool them to produce incorrect results. However, nearly all previous research in adversarial attacks has focused on image recognition and object detection models. In this short paper, we present a first of its kind demonstration of adversarial attacks against speech classification model. Our algorithm performs targeted attacks with 87% success by adding small background noise without having to know the underlying model parameter and architecture. 
Our attack only changes the least significant bits of a subset of audio clip samples, and the noise does not change the human listener’s perception of the audio clip in 89% of cases, as evaluated in our human study.", "title": "" } ]
scidocsrr
940cd05eb09f3aa85e0a63e79bcb338c
Proactive Coping and its Relation to the Five-Factor Model of Personality
[ { "docid": "281bcb92dfaae0dc541ef0b7b8db2d72", "text": "In 3 studies, the authors investigated the functional role of psychological resilience and positive emotions in the stress process. Studies 1a and 1b explored naturally occurring daily stressors. Study 2 examined data from a sample of recently bereaved widows. Across studies, multilevel random coefficient modeling analyses revealed that the occurrence of daily positive emotions serves to moderate stress reactivity and mediate stress recovery. Findings also indicated that differences in psychological resilience accounted for meaningful variation in daily emotional responses to stress. Higher levels of trait resilience predicted a weaker association between positive and negative emotions, particularly on days characterized by heightened stress. Finally, findings indicated that over time, the experience of positive emotions functions to assist high-resilient individuals in their ability to recover effectively from daily stress. Implications for research into protective factors that serve to inhibit the scope, severity, and diffusion of daily stressors in later adulthood are discussed.", "title": "" }, { "docid": "6c29473469f392079fa8406419190116", "text": "The five-factor model of personality is a hierarchical organization of personality traits in terms of five basic dimensions: Extraversion, Agreeableness, Conscientiousness, Neuroticism, and Openness to Experience. Research using both natural language adjectives and theoretically based personality questionnaires supports the comprehensiveness of the model and its applicability across observers and cultures. This article summarizes the history of the model and its supporting evidence; discusses conceptions of the nature of the factors; and outlines an agenda for theorizing about the origins and operation of the factors. We argue that the model should prove useful both for individual assessment and for the elucidation of a number of topics of interest to personality psychologists.", "title": "" } ]
[ { "docid": "0d81a7af3c94e054841e12d4364b448c", "text": "Internet of Things (IoT) is characterized by heterogeneous technologies, which concur to the provisioning of innovative services in various application domains. In this scenario, the satisfaction of security and privacy requirements plays a fundamental role. Such requirements include data confidentiality and authentication, access control within the IoT network, privacy and trust among users and things, and the enforcement of security and privacy policies. Traditional security countermeasures cannot be directly applied to IoT technologies due to the different standards and communication stacks involved. Moreover, the high number of interconnected devices raises scalability issues; therefore a flexible infrastructure is needed that is able to deal with security threats in such a dynamic environment. In this survey we present the main research challenges and the existing solutions in the field of IoT security, identifying open issues and suggesting some hints for future research. During the last decade, the Internet of Things (IoT) approached our lives silently and gradually, thanks to the availability of wireless communication systems (e.g., RFID, WiFi, 4G, IEEE 802.15.x), which have been increasingly employed as technology drivers for crucial smart monitoring and control applications [1–3]. Nowadays, the concept of IoT is many-folded: it embraces many different technologies, services, and standards, and it is widely perceived as the cornerstone of the ICT market in the next ten years, at least [4–6]. From a logical viewpoint, an IoT system can be depicted as a collection of smart devices that interact on a collaborative basis to fulfill a common goal. At the technological floor, IoT deployments may adopt different processing and communication architectures, technologies, and design methodologies, based on their target. For instance, the same IoT system could leverage the capabilities of a wireless sensor network (WSN) that collects the environmental information in a given area and a set of smartphones on top of which monitoring applications run. In the middle, a standardized or proprietary middleware could be employed to ease the access to virtualized resources and services. The middleware, in turn, might be implemented using cloud technologies, centralized overlays, or peer-to-peer systems [7]. Of course, this high level of heterogeneity, coupled with the wide scale of IoT systems, is expected to magnify the security threats of the current Internet, which is being increasingly used to let humans, machines, and robots interact, in any combination. In more detail, traditional security countermeasures and privacy enforcement cannot be directly applied to IoT technologies due to …", "title": "" }, { "docid": "cc379f31d87bce8ec46829f227458059", "text": "In this paper we exemplify how information visualization supports speculative thinking, hypotheses testing, and preliminary interpretation processes as part of literary research. While InfoVis has become a buzz topic in the digital humanities, skepticism remains about how effectively it integrates into and expands on traditional humanities research approaches. From an InfoVis perspective, we lack case studies that show the specific design challenges that make literary studies and humanities research at large a unique application area for information visualization. 
We examine these questions through our case study of the Speculative W@nderverse, a visualization tool that was designed to enable the analysis and exploration of an untapped literary collection consisting of thousands of science fiction short stories. We present the results of two empirical studies that involved general-interest readers and literary scholars who used the evolving visualization prototype as part of their research for over a year. Our findings suggest a design space for visualizing literary collections that is defined by (1) their academic and public relevance, (2) the tension between qualitative vs. quantitative methods of interpretation, (3) result-vs. process-driven approaches to InfoVis, and (4) the unique material and visual qualities of cultural collections. Through the Speculative W@nderverse we demonstrate how visualization can bridge these sometimes contradictory perspectives by cultivating curiosity and providing entry points into literary collections while, at the same time, supporting multiple aspects of humanities research processes.", "title": "" }, { "docid": "8c0f20061bd09b328748d256d5ece7cc", "text": "Recognition is graduating from labs to real-world applications. While it is encouraging to see its potential being tapped, it brings forth a fundamental challenge to the vision researcher: scalability. How can we learn a model for any concept that exhaustively covers all its appearance variations, while requiring minimal or no human supervision for compiling the vocabulary of visual variance, gathering the training images and annotations, and learning the models? In this paper, we introduce a fully-automated approach for learning extensive models for a wide range of variations (e.g. actions, interactions, attributes and beyond) within any concept. Our approach leverages vast resources of online books to discover the vocabulary of variance, and intertwines the data collection and modeling steps to alleviate the need for explicit human supervision in training the models. Our approach organizes the visual knowledge about a concept in a convenient and useful way, enabling a variety of applications across vision and NLP. Our online system has been queried by users to learn models for several interesting concepts including breakfast, Gandhi, beautiful, etc. To date, our system has models available for over 50, 000 variations within 150 concepts, and has annotated more than 10 million images with bounding boxes.", "title": "" }, { "docid": "357e03d12dc50cf5ce27cadd50ac99fa", "text": "This paper presents a linear solution for reconstructing the 3D trajectory of a moving point from its correspondence in a collection of 2D perspective images, given the 3D spatial pose and time of capture of the cameras that produced each image. Triangulation-based solutions do not apply, as multiple views of the point may not exist at each instant in time. A geometric analysis of the problem is presented and a criterion, called reconstructibility, is defined to precisely characterize the cases when reconstruction is possible, and how accurate it can be. We apply the linear reconstruction algorithm to reconstruct the time evolving 3D structure of several real-world scenes, given a collection of non-coincidental 2D images.", "title": "" }, { "docid": "b5beb47957acfaa6ab44a5a65b729793", "text": "In developing technology for indoor localization, we have recently begun exploring commercially available state of the art localization technologies. 
The DecaWave DW1000 is a new ultra-wideband transceiver that advertises high-precision indoor pairwise ranging between modules with errors as low as 10 cm. We are currently exploring this technology to automate obtaining anchor ground-truth locations for other indoor localization systems. Anchor positioning is a constrained version of indoor localization, with minimal time constraints and static devices. However, as we intend to include the DW1000 hardware on our own localization system, this provides an opportunity for gathering performance data for a commercially-enabled localization system deployed by a third party for comparison purposes. We do not claim the ranging hardware as our original work, but we do provide a hardware implementation, an infrastructure for converting pairwise measurements to locations, and the front-end for viewing the results.", "title": "" }, { "docid": "01ea3bf8f7694f76b486265edbdeb834", "text": "We deepen and extend resource-level theorizing about sustainable competitive advantage by developing a formal model of resource development in competitive markets. Our model incorporates three important barriers to imitation: time compression diseconomies, causal ambiguity and the magnitude of fixed investments. Time compression diseconomies are derived from a micro-model of resource development with diminishing returns to effort. We characterize two dimensions of sustainability: whether a resource is imitable and how long imitation takes. We identify conditions under which competitive advantage does not lead to superior performance and show that an imitator can sometimes benefit from increases in causal ambiguity. Despite recent criticisms, we reaffirm the usefulness of a resource level of analysis, especially when the focus is on resources developed through internal projects with identifiable stopping times.", "title": "" }, { "docid": "74808d33cffabf89e7f6c4f97565f486", "text": "Multimedia data security is becoming important with the continuous increase of digital communication on the internet. Without privacy of data there is no meaning in communicating using extremely high-end technologies. Data encryption is a suitable method to protect data, whereas steganography is the process of hiding secret information inside some carrier. This paper focuses on the utilization of digital video/images as a cover to hide data and, to provide additional security, encryption is combined with steganography. In the proposed method, the message image is encrypted with ECC and the encrypted image is hidden using LSB within the cover video. It gives a high level of authentication, security and resistance against extraction by an attacker. As ECC offers better security with smaller key sizes, it results in faster computation, lower power consumption, as well as memory and bandwidth savings.", "title": "" }, { "docid": "08804b3859d70c6212bba05c7e792f9a", "text": "Both linear mixed models (LMMs) and sparse regression models are widely used in genetics applications, including, recently, polygenic modeling in genome-wide association studies. These two approaches make very different assumptions, so are expected to perform well in different situations. However, in practice, for a given dataset one typically does not know which assumptions will be more accurate. Motivated by this, we consider a hybrid of the two, which we refer to as a \"Bayesian sparse linear mixed model\" (BSLMM) that includes both these models as special cases. 
We address several key computational and statistical issues that arise when applying BSLMM, including appropriate prior specification for the hyper-parameters and a novel Markov chain Monte Carlo algorithm for posterior inference. We apply BSLMM and compare it with other methods for two polygenic modeling applications: estimating the proportion of variance in phenotypes explained (PVE) by available genotypes, and phenotype (or breeding value) prediction. For PVE estimation, we demonstrate that BSLMM combines the advantages of both standard LMMs and sparse regression modeling. For phenotype prediction it considerably outperforms either of the other two methods, as well as several other large-scale regression methods previously suggested for this problem. Software implementing our method is freely available from http://stephenslab.uchicago.edu/software.html.", "title": "" }, { "docid": "b4ed57258b85ab4d81d5071fc7ad2cc9", "text": "We present LEAR (Lexical Entailment AttractRepel), a novel post-processing method that transforms any input word vector space to emphasise the asymmetric relation of lexical entailment (LE), also known as the IS-A or hyponymy-hypernymy relation. By injecting external linguistic constraints (e.g., WordNet links) into the initial vector space, the LE specialisation procedure brings true hyponymyhypernymy pairs closer together in the transformed Euclidean space. The proposed asymmetric distance measure adjusts the norms of word vectors to reflect the actual WordNetstyle hierarchy of concepts. Simultaneously, a joint objective enforces semantic similarity using the symmetric cosine distance, yielding a vector space specialised for both lexical relations at once. LEAR specialisation achieves state-of-the-art performance in the tasks of hypernymy directionality, hypernymy detection, and graded lexical entailment, demonstrating the effectiveness and robustness of the proposed asymmetric specialisation model.", "title": "" }, { "docid": "455bad2a024c2e15a1aec6b8472e2ef4", "text": "In this contribution we present a probabilistic fusion framework for implementing a sensor independent measurement fusion. All interfaces are using probabilistic descriptions of measurement and existence uncertainties. We introduce several extensions to already existing algorithms: the support for association of multiple measurements to the same object is introduced, which reduces the effects of split segments in the data preprocessing step of high-resolution sensors like laser scanners. Furthermore, we present an approach for integrating explicit object birth models. We also developed extensions to speed up the algorithm which lead to real-time performance with fragmented data. We show the application of the framework in an automotive multi-target multi-sensor environment by fusing laser scanner and video. The algorithms were evaluated using real-world data in our research vehicle.", "title": "" }, { "docid": "d0ffe432e19d9039a95aed4146b55b61", "text": "While dynamic malware analysis methods generally provide better precision than purely static methods, they have the key drawback that they can only detect malicious behavior if it is executed during analysis. This requires inputs that trigger the malicious behavior to be applied during execution. All current methods, such as hard-coded tests, random fuzzing and concolic testing, can provide good coverage but are inefficient because they are unaware of the specific capabilities of the dynamic analysis tool. 
In this work, we introduce IntelliDroid, a generic Android input generator that can be configured to produce inputs specific to a dynamic analysis tool, for the analysis of any Android application. Furthermore, IntelliDroid is capable of determining the precise order that the inputs must be injected, and injects them at what we call the device-framework interface such that system fidelity is preserved. This enables it to be paired with full-system dynamic analysis tools such as TaintDroid. Our experiments demonstrate that IntelliDroid requires an average of 72 inputs and only needs to execute an average of 5% of the application to detect malicious behavior. When evaluated on 75 instances of malicious behavior, IntelliDroid successfully identifies the behavior, extracts path constraints, and executes the malicious code in all but 5 cases. On average, IntelliDroid performs these tasks in 138.4 seconds per application.", "title": "" }, { "docid": "8ccd1dfb75523c296508453b5a557384", "text": "It has long been considered a significant problem to improve the visual quality of lossy image and video compression. Recent advances in computing power together with the availability of large training data sets have increased interest in the application of deep learning CNNs to address image recognition and image processing tasks. Here, we present a powerful CNN tailored to the specific task of semantic image understanding to achieve higher visual quality in lossy compression. A modest increase in complexity is incorporated into the encoder which allows a standard, off-the-shelf JPEG decoder to be used. While JPEG encoding may be optimized for generic images, the process is ultimately unaware of the specific content of the image to be compressed. Our technique makes JPEG content-aware by designing and training a model to identify multiple semantic regions in a given image. Unlike object detection techniques, our model does not require labeling of object positions and is able to identify objects in a single pass. We present a new CNN architecture directed specifically to image compression, which generates a map that highlights semantically-salient regions so that they can be encoded at higher quality as compared to background regions. By adding a complete set of features for every class, and then taking a threshold over the sum of all feature activations, we generate a map that highlights semantically-salient regions so that they can be encoded at a better quality compared to background regions. Experiments are presented on the Kodak PhotoCD dataset and the MIT Saliency Benchmark dataset, in which our algorithm achieves higher visual quality for the same compressed size while preserving PSNR.", "title": "" }, { "docid": "ec2eb33d3bf01df406409a31cc0a0e1f", "text": "Brain graphs provide a relatively simple and increasingly popular way of modeling the human brain connectome, using graph theory to abstractly define a nervous system as a set of nodes (denoting anatomical regions or recording electrodes) and interconnecting edges (denoting structural or functional connections). Topological and geometrical properties of these graphs can be measured and compared to random graphs and to graphs derived from other neuroscience data or other (nonneural) complex systems. Both structural and functional human brain graphs have consistently demonstrated key topological properties such as small-worldness, modularity, and heterogeneous degree distributions. 
Brain graphs are also physically embedded so as to nearly minimize wiring cost, a key geometric property. Here we offer a conceptual review and methodological guide to graphical analysis of human neuroimaging data, with an emphasis on some of the key assumptions, issues, and trade-offs facing the investigator.", "title": "" }, { "docid": "4b9df4116960cd3e3300d87e4f97e1e9", "text": "Large data collections required for the training of neural networks often contain sensitive information such as the medical histories of patients, and the privacy of the training data must be preserved. In this paper, we introduce a dropout technique that provides an elegant Bayesian interpretation to dropout, and show that the intrinsic noise added, with the primary goal of regularization, can be exploited to obtain a degree of differential privacy. The iterative nature of training neural networks presents a challenge for privacy-preserving estimation since multiple iterations increase the amount of noise added. We overcome this by using a relaxed notion of differential privacy, called concentrated differential privacy, which provides tighter estimates on the overall privacy loss. We demonstrate the accuracy of our privacy-preserving dropout algorithm on benchmark datasets.", "title": "" }, { "docid": "bc11f3de3037b0098a6c313d879ae696", "text": "The study of polygon meshes is a large sub-field of computer graphics and geometric modeling. Different representations of polygon meshes are used for different applications and goals. The variety of operations performed on meshes may include boolean logic, smoothing, simplification, and many others. 2.3.1 What is a mesh? A mesh is a collection of polygonal facets targeting to constitute an appropriate approximation of a real 3D object. It possesses three different combinatorial elements: vertices, edges and facets. From another viewpoint, a mesh can also be completely described by two kinds of information. The geometry information gives essentially the positions (coordinates) of all its vertices, while the connectivity information provides the adjacency relations between the different elements. 2.3.2 An example of 3D meshes As we can see in the Fig. 2.3, the facets usually consist of triangles, quadrilaterals or other simple convex polygons, since this simplifies rendering, but may also be composed of more general concave polygons, or polygons with holes. The degree of a facet is the number of its component edges, and the valence of a vertex is defined as the number of its incident edges. 2.3.3 Classification of structures Polygon meshes may be represented in a variety of structures, using different methods to store the vertex, edge and face data. In general they include/", "title": "" }, { "docid": "befbfb5b083cddb7fb43ebaa8df244c1", "text": "The aim of this study was to adapt and validate the Spanish version of the Sport Motivation Scale-II (S-SMS-II) in adolescent athletes. The sample included 766 Spanish adolescents (263 females and 503 males; average age = 13.71 ± 1.30 years old). The methodological steps established by the International Test Commission were followed. Four measurement models were compared employing the maximum likelihood estimation (with six, five, three, and two factors). Then, factorial invariance analyses were conducted and the effect sizes were calculated. Finally, the reliability was calculated using Cronbach's alpha, omega, and average variance extracted coefficients. 
The five-factor S-SMS-II showed the best indices of fit (Cronbach's alpha .64 to .74; goodness of fit index .971, root mean square error of approximation .044, comparative fit index .966). Factorial invariance was also verified across gender and between sport-federated athletes and non-federated athletes. The proposed S-SMS-II is discussed according to previous validated versions (English, Portuguese, and Chinese).", "title": "" }, { "docid": "7f067f869481f06e865880e1d529adc8", "text": "Distributed Denial of Service (DDoS) is defined as an attack in which multiple compromised systems are made to attack a single target to make the services unavailable for legitimate users. It is an attack designed to render a computer or network incapable of providing normal services. A DDoS attack uses many compromised intermediate systems, known as botnets, which are remotely controlled by an attacker to launch these attacks. A DDoS attack basically results in a situation where an entity cannot perform an action for which it is authenticated. This usually means that a legitimate node on the network is unable to reach another node or their performance is degraded. The high interruption and severance caused by DDoS is really posing an immense threat to the entire internet world today. Any compromise to computing, communication and server resources such as sockets, CPU, memory, disk/database bandwidth, I/O bandwidth, router processing etc. for a collaborative environment would surely endanger the entire application. It becomes necessary for researchers and developers to understand the behaviour of DDoS attacks because they affect the target network with little or no advance warning. Hence developing advanced intrusion detection and prevention systems for preventing, detecting, and responding to DDoS attacks is a critical need for cyberspace. Our rigorous survey study presented in this paper describes a platform for the study of the evolution of DDoS attacks and their defense mechanisms.", "title": "" }, { "docid": "093b6b75b34799a1920e27ef8f02595d", "text": "Logistic Regression is a well-known classification method that has been used widely in many applications of data mining, machine learning, computer vision, and bioinformatics. Sparse logistic regression embeds feature selection in the classification framework using the l1-norm regularization, and is attractive in many applications involving high-dimensional data. In this paper, we propose Lassplore for solving large-scale sparse logistic regression. Specifically, we formulate the problem as the l1-ball constrained smooth convex optimization, and propose to solve the problem using Nesterov's method, an optimal first-order black-box method for smooth convex optimization. One of the critical issues in the use of Nesterov's method is the estimation of the step size at each of the optimization iterations. Previous approaches either apply a constant step size, which assumes that the Lipschitz gradient is known in advance, or require a sequence of decreasing step sizes, which leads to slow convergence in practice. In this paper, we propose an adaptive line search scheme which allows the step size to be tuned adaptively and meanwhile guarantees the optimal convergence rate. 
Empirical comparisons with several state-of-the-art algorithms demonstrate the efficiency of the proposed Lassplore algorithm for large-scale problems.", "title": "" }, { "docid": "88968e939e9586666c83c13d4f640717", "text": "The economics of two-sided markets or multi-sided platforms has emerged over the past decade as one of the most active areas of research in economics and strategy. The literature has constantly struggled, however, with a lack of agreement on a proper definition: for instance, some existing definitions imply that retail firms such as grocers, supermarkets and department stores are multi-sided platforms (MSPs). We propose a definition which provides a more precise notion of MSPs by requiring that they enable direct interactions between the multiple customer types which are affiliated to them. Several important implications of this new definition are derived. First, cross-group network effects are neither necessary nor sufficient for an organization to be a MSP. Second, our definition emphasizes the difference between MSPs and alternative forms of intermediation such as “re-sellers” which take control over the interactions between the various sides, or input suppliers which have only one customer group affiliated as opposed to multiple. We discuss a number of examples that illustrate the insights that can be derived by applying our definition. Third, we point to the economic considerations that determine where firms choose to position themselves on the continuum between MSPs and resellers, or MSPs and input suppliers. 1 Britta Kelley provided excellent research assistance. We are grateful to Elizabeth Altman, Tom Eisenmann and Marc Rysman for comments on an earlier draft. 2 Harvard University, [email protected]. 3 National University of Singapore, [email protected].", "title": "" }, { "docid": "4239773a9ef4636f4dd8e084b658a6bc", "text": "Alternative splicing and alternative polyadenylation (APA) of pre-mRNAs greatly contribute to transcriptome diversity, coding capacity of a genome and gene regulatory mechanisms in eukaryotes. Second-generation sequencing technologies have been extensively used to analyse transcriptomes. However, a major limitation of short-read data is that it is difficult to accurately predict full-length splice isoforms. Here we sequenced the sorghum transcriptome using Pacific Biosciences single-molecule real-time long-read isoform sequencing and developed a pipeline called TAPIS (Transcriptome Analysis Pipeline for Isoform Sequencing) to identify full-length splice isoforms and APA sites. Our analysis reveals transcriptome-wide full-length isoforms at an unprecedented scale with over 11,000 novel splice isoforms. Additionally, we uncover APA of ∼11,000 expressed genes and more than 2,100 novel genes. These results greatly enhance sorghum gene annotations and aid in studying gene regulation in this important bioenergy crop. The TAPIS pipeline will serve as a useful tool to analyse Iso-Seq data from any organism.", "title": "" } ]
scidocsrr
19cb7e29919bb9336b151b313d42c4ef
Approximate fair bandwidth allocation: A method for simple and flexible traffic management
[ { "docid": "740daa67e29636ac58d6f3fa48bd51ba", "text": "Status of Memo This memo provides information for the Internet community. It does not specify an Internet standard of any kind. Distribution of this memo is unlimited. Abstract This memo presents two recommendations to the Internet community concerning measures to improve and preserve Internet performance. It presents a strong recommendation for testing, standardization, and widespread deployment of active queue management in routers, to improve the performance of today's Internet. It also urges a concerted effort of research, measurement, and ultimate deployment of router mechanisms to protect the Internet from flows that are not sufficiently responsive to congestion notification.", "title": "" } ]
[ { "docid": "c629d3588203af2e328fb116c836bb8c", "text": "The purpose of this study was to clinically and radiologically compare the utility, osteoconductivity, and absorbability of hydroxyapatite (HAp) and beta-tricalcium phosphate (TCP) spacers in medial open-wedge high tibial osteotomy (HTO). Thirty-eight patients underwent medial open-wedge HTO with a locking plate. In the first 19 knees, a HAp spacer was implanted in the opening space (HAp group). In the remaining 19 knees, a TCP spacer was implanted in the same manner (TCP group). All patients underwent clinical and radiological examinations before surgery and at 18 months after surgery. Concerning the background factors, there were no statistical differences between the two groups. Post-operatively, the knee score significantly improved in each group. Concerning the post-operative knee alignment and clinical outcome, there was no statistical difference in each parameter between the two groups. Regarding the osteoconductivity, the modified van Hemert’s score of the TCP group was significantly higher (p = 0.0009) than that of the HAp group in the most medial osteotomy zone. The absorption rate was significantly greater in the TCP group than in the HAp group (p = 0.00039). The present study demonstrated that a TCP spacer was significantly superior to a HAp spacer concerning osteoconductivity and absorbability at 18 months after medial open-wedge HTO. Retrospective comparative study, Level III.", "title": "" }, { "docid": "85221954ced857c449acab8ee5cf801e", "text": "IMSI Catchers are used in mobile networks to identify and eavesdrop on phones. When, the number of vendors increased and prices dropped, the device became available to much larger audiences. Self-made devices based on open source software are available for about US$ 1,500.\n In this paper, we identify and describe multiple methods of detecting artifacts in the mobile network produced by such devices. We present two independent novel implementations of an IMSI Catcher Catcher (ICC) to detect this threat against everyone's privacy. The first one employs a network of stationary (sICC) measurement units installed in a geographical area and constantly scanning all frequency bands for cell announcements and fingerprinting the cell network parameters. These rooftop-mounted devices can cover large areas. The second implementation is an app for standard consumer grade mobile phones (mICC), without the need to root or jailbreak them. Its core principle is based upon geographical network topology correlation, facilitating the ubiquitous built-in GPS receiver in today's phones and a network cell capabilities fingerprinting technique. The latter works for the vicinity of the phone by first learning the cell landscape and than matching it against the learned data. We implemented and evaluated both solutions for digital self-defense and deployed several of the stationary units for a long term field-test. Finally, we describe how to detect recently published denial of service attacks.", "title": "" }, { "docid": "a34e153e5027a1483fd25c3ff3e1ea0c", "text": "In this paper, we study how to initialize the convolutional neural network (CNN) model for training on a small dataset. Specially, we try to extract discriminative filters from the pre-trained model for a target task. On the basis of relative entropy and linear reconstruction, two methods, Minimum Entropy Loss (MEL) and Minimum Reconstruction Error (MRE), are proposed. 
The CNN models initialized by the proposed MEL and MRE methods are able to converge fast and achieve better accuracy. We evaluate MEL and MRE on the CIFAR10, CIFAR100, SVHN, and STL-10 public datasets. The consistent performances demonstrate the advantages of the proposed methods.", "title": "" }, { "docid": "a645943a02f5d71b146afe705fb6f49f", "text": "Along with the developments in the field of information technologies, the data in the electronic environment is increasing. Data mining methods are needed to obtain useful information for users in electronic environment. One of these methods, clustering methods, aims to group data according to common properties. This grouping is often based on the distance between the data. Clustering methods are divided into hierarchical and non-hierarchical methods according to the fragmentation technique of clusters. The success of both types of clustering methods varies according to the data set applied. In this study, both types of methods were tested on different type of data sets. Selected methods compared according to five different evaluation metrics. The results of the analysis are presented comparatively at the end of the study and which methods are more convenient for data set is explained.", "title": "" }, { "docid": "5168f7f952d937460d250c44b43f43c0", "text": "This letter presents the design of a coplanar waveguide (CPW) circularly polarized antenna for the central frequency 900 MHz, it comes in handy for radio frequency identification (RFID) short-range reading applications within the band of 902-928 MHz where the axial ratio of proposed antenna model is less than 3 dB. The proposed design has an axial-ratio bandwidth of 36 MHz (4%) and impedance bandwidth of 256 MHz (28.5%).", "title": "" }, { "docid": "17dce24f26d7cc196e56a889255f92a8", "text": "As known, to finish this book, you may not need to get it at once in a day. Doing the activities along the day may make you feel so bored. If you try to force reading, you may prefer to do other entertaining activities. But, one of concepts we want you to have this book is that it will not make you feel bored. Feeling bored when reading will be only unless you don't like the book. computational principles of mobile robotics really offers what everybody wants.", "title": "" }, { "docid": "ae7117416b4a07d2b15668c2c8ac46e3", "text": "We present OntoWiki, a tool providing support for agile, distributed knowledge engineering scenarios. OntoWiki facilitates the visual presentation of a knowledge base as an information map, with different views on instance data. It enables intuitive authoring of semantic content, with an inline editing mode for editing RDF content, similar to WYSIWYG for text documents. It fosters social collaboration aspects by keeping track of changes, allowing comments and discussion on every single part of a knowledge base, enabling to rate and measure the popularity of content and honoring the activity of users. OntoWiki enhances the browsing and retrieval by offering semantic enhanced search strategies. All these techniques are applied with the ultimate goal of decreasing the entrance barrier for projects and domain experts to collaborate using semantic technologies. In the spirit of the Web 2.0 OntoWiki implements an ”architecture of participation” that allows users to add value to the application as they use it. 
It is available as open-source software and a demonstration platform can be accessed at http://3ba.se.", "title": "" }, { "docid": "0289858bb9002e00d753e1ed2da8b204", "text": "This paper presents a motion planning method for mobile manipulators for which the base locomotion is less precise than the manipulator control. In such a case, it is advisable to move the base to discrete poses from which the manipulator can be deployed to cover a prescribed trajectory. The proposed method finds base poses that not only cover the trajectory but also meet constraints on a measure of manipulability. We propose a variant of the conventional manipulability measure that is suited to the trajectory control of the end effector of the mobile manipulator along an arbitrary curve in three space. Results with implementation on a mobile manipulator are discussed.", "title": "" }, { "docid": "19621b0ab08cb0abed04b859331d8092", "text": "The objective of designing a strategy for an institution is to create more value and achieve its vision, with clear and coherent strategies, identifying the conditions in which they are currently, the sector in which they work and the different types of competences that generate, as well as the market in general where they perform, to create this type of conditions requires the availability of strategic information to verify the current conditions, to define the strategic line to follow according to internal and external factors, and in this way decide which methods to use to implement the development of a strategy in the organization. This research project was developed in an institution of higher education where the strategic processes were analyzed from different perspectives i.e. financial, customers, internal processes, and training and learning using business intelligence tools, such as Excel Power BI, Power Pivot, Power Query and a relational database for data repository; which helped having agile and effective information for the creation of the balanced scorecard, involving all levels of the organization and academic units; operating key performance indicators (KPI’s), for operational and strategic decisions. The results were obtained in form of boards of indicators designed to be visualized in the final view of the software constructed with previously described software tools. Keywords—Business intelligence; balanced scorecard; key performance indicators; BI Tools", "title": "" }, { "docid": "0e5111addf4a6d5f0cad92707d6b7173", "text": "We present a novel model based stereo system, which accurately extracts the 3D shape and pose of faces from multiple images taken simultaneously. Extracting the 3D shape from images is important in areas such as pose-invariant face recognition and image manipulation. The method is based on a 3D morphable face model learned from a database of facial scans. The use of a strong face prior allows us to extract high precision surfaces from stereo data of faces, where traditional correlation based stereo methods fail because of the mostly textureless input images. The method uses two or more uncalibrated images of arbitrary baseline, estimating calibration and shape simultaneously. Results using two and three input images are presented. We replace the lighting and albedo estimation of a monocular method with the use of stereo information, making the system more accurate and robust. We evaluate the method using ground truth data and the standard PIE image dataset. 
A comparison with the state of the art monocular system shows that the new method has a significantly higher accuracy.", "title": "" }, { "docid": "85012f6ad9aa8f3e80a9c971076b4eb9", "text": "The article aims to introduce an integrated web-based interactive data platform for molecular dynamic simulations using the datasets generated by different life science communities from Armenia. The suggested platform, consisting of data repository and workflow management services, is vital for current and future scientific discoveries in the life science domain. We focus on interactive data visualization workflow service as a key to perform more in-depth analyzes of research data outputs, helping to understand the problems efficiently and to consolidate the data into one collective illustration platform. The functionalities of the integrated data platform is presented as an advanced integrated environment to capture, analyze, process and visualize the scientific data.", "title": "" }, { "docid": "8d208bb5318dcbc5d941df24906e121f", "text": "Applications based on eye-blink detection have increased, as a result of which it is essential for eye-blink detection to be robust and non-intrusive irrespective of the changes in the user's facial pose. However, most previous studies on camera-based blink detection have the disadvantage that their performances were affected by the facial pose. They also focused on blink detection using only frontal facial images. To overcome these disadvantages, we developed a new method for blink detection, which maintains its accuracy despite changes in the facial pose of the subject. This research is novel in the following four ways. First, the face and eye regions are detected by using both the AdaBoost face detector and a Lucas-Kanade-Tomasi (LKT)-based method, in order to achieve robustness to facial pose. Secondly, the determination of the state of the eye (being open or closed), needed for blink detection, is based on two features: the ratio of height to width of the eye region in a still image, and the cumulative difference of the number of black pixels of the eye region using an adaptive threshold in successive images. These two features are robustly extracted irrespective of the lighting variations by using illumination normalization. Thirdly, the accuracy of determining the eye state - open or closed - is increased by combining the above two features on the basis of the support vector machine (SVM). Finally, the SVM classifier for determining the eye state is adaptively selected according to the facial rotation. Experimental results using various databases showed that the blink detection by the proposed method is robust to various facial poses.", "title": "" }, { "docid": "ae9469b80390e5e2e8062222423fc2cd", "text": "Social media such as those residing in the popular photo sharing websites is attracting increasing attention in recent years. As a type of user-generated data, wisdom of the crowd is embedded inside such social media. In particular, millions of users upload to Flickr their photos, many associated with temporal and geographical information. In this paper, we investigate how to rank the trajectory patterns mined from the uploaded photos with geotags and timestamps. The main objective is to reveal the collective wisdom recorded in the seemingly isolated photos and the individual travel sequences reflected by the geo-tagged photos. 
Instead of focusing on mining frequent trajectory patterns from geo-tagged social media, we put more effort into ranking the mined trajectory patterns and diversifying the ranking results. Through leveraging the relationships among users, locations and trajectories, we rank the trajectory patterns. We then use an exemplar-based algorithm to diversify the results in order to discover the representative trajectory patterns. We have evaluated the proposed framework on 12 different cities using a Flickr dataset and demonstrated its effectiveness.", "title": "" }, { "docid": "ec26505d813ed98ac3f840ea54358873", "text": "In this paper we address the cardinality estimation problem, which is an important subproblem in query optimization. Query optimization is the part of every relational DBMS responsible for finding the best way of executing a given query. These ways are called plans. The execution time of different plans may differ by several orders of magnitude, so the query optimizer has a great influence on the whole DBMS performance. We consider the cost-based query optimization approach as the most popular one. It was observed that cost-based optimization quality depends much on cardinality estimation quality. The cardinality of a plan node is the number of tuples returned by it. In this paper we propose a novel cardinality estimation approach with the use of machine learning methods. The main point of the approach is using query execution statistics of previously executed queries to improve cardinality estimations. We called this approach adaptive cardinality estimation to reflect this point. The approach is general, flexible, and easy to implement. The experimental evaluation shows that this approach significantly increases the quality of cardinality estimation, and therefore increases the DBMS performance for some queries by several times or even by several dozens of times.", "title": "" }, { "docid": "73e616ebf26c6af34edb0d60a0ce1773", "text": "While recent deep neural networks have achieved promising performance on object recognition, they rely implicitly on the visual contents of the whole image. In this paper, we train deep neural networks on the foreground (object) and background (context) regions of images respectively. Considering human recognition in the same situations, networks trained on the pure background without objects achieve highly reasonable recognition performance that beats humans by a large margin if only given context. However, humans still outperform networks when only the pure object is available, which indicates that networks and human beings have different mechanisms in understanding an image. Furthermore, we straightforwardly combine multiple trained networks to explore different visual cues learned by different networks. Experiments show that useful visual hints can be explicitly learned separately and then combined to achieve higher performance, which verifies the advantages of the proposed framework.", "title": "" }, { "docid": "0952701dd63326f8a78eb5bc9a62223f", "text": "The self-organizing map (SOM) is an automatic data-analysis method. It is widely applied to clustering problems and data exploration in industry, finance, natural sciences, and linguistics. The most extensive applications, exemplified in this paper, can be found in the management of massive textual databases and in bioinformatics. The SOM is related to classical vector quantization (VQ), which is used extensively in digital signal processing and transmission. 
Like in VQ, the SOM represents a distribution of input data items using a finite set of models. In the SOM, however, these models are automatically associated with the nodes of a regular (usually two-dimensional) grid in an orderly fashion such that more similar models become automatically associated with nodes that are adjacent in the grid, whereas less similar models are situated farther away from each other in the grid. This organization, a kind of similarity diagram of the models, makes it possible to obtain an insight into the topographic relationships of data, especially of high-dimensional data items. If the data items belong to certain predetermined classes, the models (and the nodes) can be calibrated according to these classes. An unknown input item is then classified according to that node, the model of which is most similar with it in some metric used in the construction of the SOM. A new finding introduced in this paper is that an input item can even more accurately be represented by a linear mixture of a few best-matching models. This becomes possible by a least-squares fitting procedure where the coefficients in the linear mixture of models are constrained to nonnegative values.", "title": "" }, { "docid": "83fda0277ebcdb6aeae216a38553db9c", "text": "Variational inference is a scalable technique for approximate Bayesian inference. Deriving variational inference algorithms requires tedious model-specific calculations; this makes it di cult for non-experts to use. We propose an automatic variational inference algorithm, automatic di erentiation variational inference ( ); we implement it in Stan (code available), a probabilistic programming system. In the user provides a Bayesian model and a dataset, nothing else. We make no conjugacy assumptions and support a broad class of models. The algorithm automatically determines an appropriate variational family and optimizes the variational objective. We compare to sampling across hierarchical generalized linear models, nonconjugate matrix factorization, and a mixture model. We train the mixture model on a quarter million images. With we can use variational inference on any model we write in Stan.", "title": "" }, { "docid": "3550dbe913466a675b621d476baba219", "text": "Successful implementing and managing of change is urgently necessary for each adult educational organization. During the process, leading of the staff is becoming a key condition and the most significant factor. Beside certain personal traits of the leader, change management demands also certain leadership knowledges, skills, versatilities and behaviour which may even border on changing the organizational culture. The paper finds the significance of certain values and of organizational climate and above all the significance of leadership style which a leader will adjust to the staff and to the circumstances. The author presents a multiple qualitative case study of managing change in three adult educational organizations. The paper finds that factors of successful leading of change exist which represent an adequate approach to leading the staff during the introduction of changes in educational organizations. 
Its originality/value is in providing information on the important relationship between culture, leadership styles and leader’s behaviour as preconditions for successful implementing and managing of strategic change.", "title": "" }, { "docid": "e67d09b3bf155c5191ad241006e011ad", "text": "An effective approach for energy conservation in wireless sensor networks is scheduling sleep intervals for extraneous nodes, while the remaining nodes stay active to provide continuous service. For the sensor network to operate successfully, the active nodes must maintain both sensing coverage and network connectivity. Furthermore, the network must be able to configure itself to any feasible degrees of coverage and connectivity in order to support different applications and environments with diverse requirements. This paper presents the design and analysis of novel protocols that can dynamically configure a network to achieve guaranteed degrees of coverage and connectivity. This work differs from existing connectivity or coverage maintenance protocols in several key ways: 1) We present a Coverage Configuration Protocol (CCP) that can provide different degrees of coverage requested by applications. This flexibility allows the network to self-configure for a wide range of applications and (possibly dynamic) environments. 2) We provide a geometric analysis of the relationship between coverage and connectivity. This analysis yields key insights for treating coverage and connectivity in a unified framework: this is in sharp contrast to several existing approaches that address the two problems in isolation. 3) Finally, we integrate CCP with SPAN to provide both coverage and connectivity guarantees. We demonstrate the capability of our protocols to provide guaranteed coverage and connectivity configurations, through both geometric analysis and extensive simulations.", "title": "" } ]
scidocsrr
99cdb216e60bc17be1564c374d39ccd8
Comparing Performances of Big Data Stream Processing Platforms with RAM3S
[ { "docid": "f35d164bd1b19f984b10468c41f149e3", "text": "Recent technological advancements have led to a deluge of data from distinctive domains (e.g., health care and scientific sensors, user-generated data, Internet and financial companies, and supply chain systems) over the past two decades. The term big data was coined to capture the meaning of this emerging trend. In addition to its sheer volume, big data also exhibits other unique characteristics as compared with traditional data. For instance, big data is commonly unstructured and require more real-time analysis. This development calls for new system architectures for data acquisition, transmission, storage, and large-scale data processing mechanisms. In this paper, we present a literature survey and system tutorial for big data analytics platforms, aiming to provide an overall picture for nonexpert readers and instill a do-it-yourself spirit for advanced audiences to customize their own big-data solutions. First, we present the definition of big data and discuss big data challenges. Next, we present a systematic framework to decompose big data systems into four sequential modules, namely data generation, data acquisition, data storage, and data analytics. These four modules form a big data value chain. Following that, we present a detailed survey of numerous approaches and mechanisms from research and industry communities. In addition, we present the prevalent Hadoop framework for addressing big data challenges. Finally, we outline several evaluation benchmarks and potential research directions for big data systems.", "title": "" } ]
[ { "docid": "11a4536e40dde47e024d4fe7541b368c", "text": "Building Information Modeling (BIM) provides an integrated 3D environment to manage large-scale engineering projects. The Architecture, Engineering and Construction (AEC) industry explores 4D visualizations over these datasets for virtual construction planning. However, existing solutions lack adequate visual mechanisms to inspect the underlying schedule and make inconsistencies readily apparent. The goal of this paper is to apply best practices of information visualization to improve 4D analysis of construction plans. We first present a review of previous work that identifies common use cases and limitations. We then consulted with AEC professionals to specify the main design requirements for such applications. These guided the development of CasCADe, a novel 4D visualization system where task sequencing and spatio-temporal simultaneity are immediately apparent. This unique framework enables the combination of diverse analytical features to create an information-rich analysis environment. We also describe how engineering collaborators used CasCADe to review the real-world construction plans of an Oil & Gas process plant. The system made evident schedule uncertainties, identified work-space conflicts and helped analyze other constructability issues. The results and contributions of this paper suggest new avenues for future research in information visualization for the AEC industry.", "title": "" }, { "docid": "baeddccc34585796fec12659912a757e", "text": "Recurrent neural networks (RNNs) have shown success for many sequence-modeling tasks, but learning long-term dependencies from data remains difficult. This is often attributed to the vanishing gradient problem, which shows that gradient components relating a loss at time t to time t− τ tend to decay exponentially with τ . Long short-term memory (LSTM) and gated recurrent units (GRUs), the most widely-used RNN architectures, attempt to remedy this problem by making the decay’s base closer to 1. NARX RNNs1 take an orthogonal approach: by including direct connections, or delays, from the past, NARX RNNs make the decay’s exponent closer to 0. However, as introduced, NARX RNNs reduce the decay’s exponent only by a factor of nd, the number of delays, and simultaneously increase computation by this same factor. We introduce a new variant of NARX RNNs, called MIxed hiSTory RNNs, which addresses these drawbacks. We show that for τ ≤ 2nd−1, MIST RNNs reduce the decay’s worst-case exponent from τ/nd to log τ , while maintaining computational complexity that is similar to LSTM and GRUs. We compare MIST RNNs to simple RNNs, LSTM, and GRUs across 4 diverse tasks. MIST RNNs outperform all other methods in 2 cases, and in all cases are competitive.", "title": "" }, { "docid": "4cd7f19d0413f9bab1a2cda5a5b7a9a4", "text": "Web-based learning plays a vital role in the modern education system, where different technologies are being emerged to enhance this E-learning process. Therefore virtual and online laboratories are gaining popularity due to its easy implementation and accessibility worldwide. These types of virtual labs are useful where the setup of the actual laboratory is complicated due to several factors such as high machinery or hardware cost. This paper presents a very efficient method of building a model using JavaScript Web Graphics Library with HTML5 enabled and having controllable features inbuilt. 
This type of program is free from any web browser plug-ins or application and also server independent. Proprietary software has always been a bottleneck in the development of such platforms. This approach rules out this issue and can easily applicable. Here the framework has been discussed and neatly elaborated with an example of a simplified robot configuration.", "title": "" }, { "docid": "9e310ac4876eee037e0d5c2a248f6f45", "text": "The self-balancing two-wheel chair (SBC) is an unconventional type of personal transportation vehicle. It has unstable dynamics and therefore requires a special control to stabilize and prevent it from falling and to ensure the possibility of speed control and steering by the rider. This paper discusses the dynamic modeling and controller design for the system. The model of SBC is based on analysis of the motions of the inverted pendulum on a mobile base complemented with equations of the wheel motion and motor dynamics. The proposed control design involves a multi-loop PID control. Experimental verification and prototype implementation are discussed.", "title": "" }, { "docid": "5233286436f0ecfde8e0e647e89b288f", "text": "Each employee’s performance is important in an organization. A way to motivate it is through the application of reinforcement theory which is developed by B. F. Skinner. One of the most commonly used methods is positive reinforcement in which one’s behavior is strengthened or increased based on consequences. This paper aims to review the impact of positive reinforcement on the performances of employees in organizations. It can be applied by utilizing extrinsic reward or intrinsic reward. Extrinsic rewards include salary, bonus and fringe benefit while intrinsic rewards are praise, encouragement and empowerment. By applying positive reinforcement in these factors, desired positive behaviors are encouraged and negative behaviors are eliminated. Financial and non-financial incentives have a positive relationship with the efficiency and effectiveness of staffs.", "title": "" }, { "docid": "6038975e7868b235f2b665ffbd249b68", "text": "Existing person re-identification benchmarks and methods mainly focus on matching cropped pedestrian images between queries and candidates. However, it is different from real-world scenarios where the annotations of pedestrian bounding boxes are unavailable and the target person needs to be searched from a gallery of whole scene images. To close the gap, we propose a new deep learning framework for person search. Instead of breaking it down into two separate tasks&#x2014;pedestrian detection and person re-identification, we jointly handle both aspects in a single convolutional neural network. An Online Instance Matching (OIM) loss function is proposed to train the network effectively, which is scalable to datasets with numerous identities. To validate our approach, we collect and annotate a large-scale benchmark dataset for person search. It contains 18,184 images, 8,432 identities, and 96,143 pedestrian bounding boxes. Experiments show that our framework outperforms other separate approaches, and the proposed OIM loss function converges much faster and better than the conventional Softmax loss.", "title": "" }, { "docid": "301aee8363dffd7ae4c7ac2945a55842", "text": "This work studies the usage of the Deep Neural Network (DNN) Bottleneck (BN) features together with the traditional MFCC features in the task of i-vector-based speaker recognition. 
We decouple the sufficient statistics extraction by using separate GMM models for frame alignment, and for statistics normalization and we analyze the usage of BN and MFCC features (and their concatenation) in the two stages. We also show the effect of using full-covariance GMM models, and, as a contrast, we compare the result to the recent DNN-alignment approach. On the NIST SRE2010, telephone condition, we show 60% relative gain over the traditional MFCC baseline for EER (and similar for the NIST DCF metrics), resulting in 0.94% EER.", "title": "" }, { "docid": "9b30a07edc14ed2d1132421d8f372cd2", "text": "Even when the role of a conversational agent is well known users persist in confronting them with Out-of-Domain input. This often results in inappropriate feedback, leaving the user unsatisfied. In this paper we explore the automatic creation/enrichment of conversational agents’ knowledge bases by taking advantage of natural language interactions present in the Web, such as movies subtitles. Thus, we introduce Filipe, a chatbot that answers users’ request by taking advantage of a corpus of turns obtained from movies subtitles (the Subtle corpus). Filipe is based on Say Something Smart, a tool responsible for indexing a corpus of turns and selecting the most appropriate answer, which we fully describe in this paper. Moreover, we show how this corpus of turns can help an existing conversational agent to answer Out-of-Domain interactions. A preliminary evaluation is also presented.", "title": "" }, { "docid": "b7c4d8b946ea6905a2f0da10e6dc9de6", "text": "We develop a broadband channel estimation algorithm for millimeter wave (mmWave) multiple input multiple output (MIMO) systems with few-bit analog-to-digital converters (ADCs). Our methodology exploits the joint sparsity of the mmWave MIMO channel in the angle and delay domains. We formulate the estimation problem as a noisy quantized compressed-sensing problem and solve it using efficient approximate message passing (AMP) algorithms. In particular, we model the angle-delay coefficients using a Bernoulli–Gaussian-mixture distribution with unknown parameters and use the expectation-maximization forms of the generalized AMP and vector AMP algorithms to simultaneously learn the distributional parameters and compute approximately minimum mean-squared error (MSE) estimates of the channel coefficients. We design a training sequence that allows fast, fast Fourier transform based implementation of these algorithms while minimizing peak-to-average power ratio at the transmitter, making our methods scale efficiently to large numbers of antenna elements and delays. We present the results of a detailed simulation study that compares our algorithms to several benchmarks. Our study investigates the effect of SNR, training length, training type, ADC resolution, and runtime on channel estimation MSE, mutual information, and achievable rate. It shows that, in a mmWave MIMO system, the methods we propose to exploit joint angle-delay sparsity allow 1-bit ADCs to perform comparably to infinite-bit ADCs at low SNR, and 4-bit ADCs to perform comparably to infinite-bit ADCs at medium SNR.", "title": "" }, { "docid": "bd06f693359bba90de59454f32581c9c", "text": "Digital business ecosystems are becoming an increasingly popular concept as an open environment for modeling and building interoperable system integration. Business organizations have realized the importance of using standards as a cost-effective method for accelerating business process integration. 
Small and medium size enterprise (SME) participation in global trade is increasing, however, digital transactions are still at a low level. Cloud integration is expected to offer a cost-effective business model to form an interoperable digital supply chain. By observing the integration models, we can identify the large potential of cloud services to accelerate integration. An industrial case study is conducted. This paper investigates and contributes new knowledge on a how top-down approach by using a digital business ecosystem framework enables business managers to define new user requirements and functionalities for system integration. Through analysis, we identify the current cap of integration design. Using the cloud clustering framework, we identify how the design affects cloud integration services.", "title": "" }, { "docid": "84c95e15ddff06200624822cc12fa51f", "text": "A growing body of research has recently been conducted on semantic textual similarity using a variety of neural network models. While recent research focuses on word-based representation for phrases, sentences and even paragraphs, this study considers an alternative approach based on character n-grams. We generate embeddings for character n-grams using a continuous-bag-of-n-grams neural network model. Three different sentence representations based on n-gram embeddings are considered. Results are reported for experiments with bigram, trigram and 4-gram embeddings on the STS Core dataset for SemEval-2016 Task 1.", "title": "" }, { "docid": "0a170051e72b58081ad27e71a3545bcf", "text": "Relational learning is becoming increasingly important in many areas of application. Here, we present a novel approach to relational learning based on the factorization of a three-way tensor. We show that unlike other tensor approaches, our method is able to perform collective learning via the latent components of the model and provide an efficient algorithm to compute the factorization. We substantiate our theoretical considerations regarding the collective learning capabilities of our model by the means of experiments on both a new dataset and a dataset commonly used in entity resolution. Furthermore, we show on common benchmark datasets that our approach achieves better or on-par results, if compared to current state-of-the-art relational learning solutions, while it is significantly faster to compute.", "title": "" }, { "docid": "60ec8f06cdd4bf7cb27565c6d576ff40", "text": "2.5D chips with TSV and interposer are becoming the most popular packaging method with great increased flexibility and integrated functionality. However, great challenges have been posed in the failure analysis process to precisely locate the failure point of each interconnection in ultra-small size. The electro-optic sampling (EOS) based pulsed Time-domain reflectometry (TDR) is a powerful tool for the 2.5D/3D package diagnostics with greatly increased I/O speed and density. The timing of peaks in the reflected waveform accurately reveals the faulty location. In this work, 2.5D chip with known open failure location has been analyzed by a EOS based TDR system.", "title": "" }, { "docid": "5ad696a08b236e200a96589780b2b06c", "text": "The need for increasing flexibility of industrial automation system products leads to the trend of shifting functional behavior from hardware solutions to software components. 
This trend causes an increasing complexity of software components and the need for comprehensive and automated testing approaches to ensure a required (high) quality level. Nevertheless, key tasks in software testing include identifying appropriate test cases that typically require a high effort for (a) test case generation/construction and (b) test case modification in case of requirements changes. Semi-automated derivation of test cases based on models, like UML, can support test case generation. In this paper we introduce an automated test case generation approach for industrial automation applications where the test cases are specified by UML state chart diagrams. In addition we present a prototype application of the presented approach for a sorting machine. Major results showed that state charts (a) can support efficient test case generation and (b) enable automated generation of test cases and code for industrial automation systems.", "title": "" }, { "docid": "e3853e259c3ae6739dcae3143e2074a8", "text": "A new reference collection of patent documents for training and testing automated categorization systems is established and described in detail. This collection is tailored for automating the attribution of international patent classification codes to patent applications and is made publicly available for future research work. We report the results of applying a variety of machine learning algorithms to the automated categorization of English-language patent documents. This procedure involves a complex hierarchical taxonomy, within which we classify documents into 114 classes and 451 subclasses. Several measures of categorization success are described and evaluated. We investigate how best to resolve the training problems related to the attribution of multiple classification codes to each patent document.", "title": "" }, { "docid": "7edd1ae4ec4bac9ed91e5e14326a694e", "text": "These days, educational institutions and organizations are generating huge amount of data, more than the people can read in their lifetime. It is not possible for a person to learn, understand, decode, and interpret to find valuable information. Data mining is one of the most popular method which can be used to identify hidden patterns from large databases. User can extract historical, hidden details, and previously unknown information, from large repositories by applying required mining techniques. There are two algorithms which can be used to classify and predict, such as supervised learning and unsupervised learning. Classification is a technique which performs an induction on current data (existing data) and predicts future class. The main objective of classification is to make an unknown class to known class by consulting its neighbor class. therefore it is called as supervised learning, it builds the classifier by consulting with the known class labels such as k-nearest neighbor algorithm (k-NN), Naïve Bayes (NB), support vector machine (SVM), decision tree. Clustering is an unsupervised learning that builds a model to group similar objects into categories without consulting a class label. The main objective of clustering is find the distance between objects like nearby and faraway based on their similarities and dissimilarities it groups the objects and detects outliers. In this paper Weka tool is used to analyze by applying preprocessing, classification on institutional academic result of under graduate students of computer science & engineering. 
Keywords— Weka, classifier, supervised learning,", "title": "" }, { "docid": "7c13ebe2897fc4870a152159cda62025", "text": "Tuberculosis (TB) remains a major health threat, killing nearly 2 million individuals around this globe, annually. The only vaccine, developed almost a century ago, provides limited protection only during childhood. After decades without the introduction of new antibiotics, several candidates are currently undergoing clinical investigation. Curing TB requires prolonged combination of chemotherapy with several drugs. Moreover, monitoring the success of therapy is questionable owing to the lack of reliable biomarkers. To substantially improve the situation, a detailed understanding of the cross-talk between human host and the pathogen Mycobacterium tuberculosis (Mtb) is vital. Principally, the enormous success of Mtb is based on three capacities: first, reprogramming of macrophages after primary infection/phagocytosis to prevent its own destruction; second, initiating the formation of well-organized granulomas, comprising different immune cells to create a confined environment for the host-pathogen standoff; third, the capability to shut down its own central metabolism, terminate replication, and thereby transit into a stage of dormancy rendering itself extremely resistant to host defense and drug treatment. Here, we review the molecular mechanisms underlying these processes, draw conclusions in a working model of mycobacterial dormancy, and highlight gaps in our understanding to be addressed in future research.", "title": "" }, { "docid": "36c11c29f6605f7c234e68ecba2a717a", "text": "BACKGROUND\nThe main purpose of this study was to identify factors that influence healthcare quality in the Iranian context.\n\n\nMETHODS\nExploratory in-depth individual and focus group interviews were conducted with 222 healthcare stakeholders including healthcare providers, managers, policy-makers, and payers to identify factors affecting the quality of healthcare services provided in Iranian healthcare organisations.\n\n\nRESULTS\nQuality in healthcare is a production of cooperation between the patient and the healthcare provider in a supportive environment. Personal factors of the provider and the patient, and factors pertaining to the healthcare organisation, healthcare system, and the broader environment affect healthcare service quality. Healthcare quality can be improved by supportive visionary leadership, proper planning, education and training, availability of resources, effective management of resources, employees and processes, and collaboration and cooperation among providers.\n\n\nCONCLUSION\nThis article contributes to healthcare theory and practice by developing a conceptual framework that provides policy-makers and managers a practical understanding of factors that affect healthcare service quality.", "title": "" }, { "docid": "a433ebaeeb5dc5b68976b3ecb770c0cd", "text": "1 Abstract: The importance of the inspection process has been magnified by the requirements of the modern manufacturing environment. In electronics mass-production manufacturing facilities, an attempt is often made to achieve 100 % quality assurance of all parts, subassemblies, and finished goods. A variety of approaches for automated visual inspection of printed circuits have been reported over the last two decades. In this survey, algorithms and techniques for the automated inspection of printed circuit boards are examined. 
A classification tree for these algorithms is presented and the algorithms are grouped according to this classification. This survey concentrates mainly on image analysis and fault detection strategies; these also include the state-of-the-art techniques. A summary of the commercial PCB inspection systems is also presented. 2 Introduction: Many important applications of vision are found in the manufacturing and defense industries. In particular, the areas in manufacturing where vision plays a major role are inspection, measurements, and some assembly tasks. The order among these topics closely reflects the manufacturing needs. In most mass-production manufacturing facilities, an attempt is made to achieve 100% quality assurance of all parts, subassemblies, and finished products. One of the most difficult tasks in this process is that of inspecting for visual appearance-an inspection that seeks to identify both functional and cosmetic defects. With the advances in computers (including high speed, large memory and low cost) image processing, pattern recognition, and artificial intelligence have resulted in better and cheaper equipment for industrial image analysis. This development has made the electronics industry active in applying automated visual inspection to manufacturing/fabricating processes that include printed circuit boards, IC chips, photomasks, etc. Nello [1] gives a summary of the machine vision inspection applications in electronics industry.", "title": "" }, { "docid": "9f5998ebc2457c330c29a10772d8ee87", "text": "Fuzzy hashing is a known technique that has been adopted to speed up malware analysis processes. However, hashing has not been fully implemented for malware detection because it can easily be evaded by applying a simple obfuscation technique such as packing. This challenge has limited the usage of hashing to triaging of the samples based on the percentage of similarity between the known and unknown. In this paper, we explore the different ways fuzzy hashing can be used to detect similarities in a file by investigating particular hashes of interest. Each hashing method produces independent but related interesting results which are presented herein. We further investigate combination techniques that can be used to improve the detection rates in hashing methods. Two such evidence combination theory based methods are applied in this work in order to propose a novel way of combining the results achieved from different hashing algorithms. This study focuses on file and section Ssdeep hashing, PeHash and Imphash techniques to calculate the similarity of the Portable Executable files. Our results show that the detection rates are improved when evidence combination techniques are used.", "title": "" } ]
scidocsrr
617ac6cf9494e8982b2e47b5604425da
NEMO: Neuro-Evolution with Multiobjective Optimization of Deep Neural Network for Speed and Accuracy
[ { "docid": "048b124d585c523905b1a61b68fcc09e", "text": "Driver’s status is crucial because one of the main reasons for motor vehicular accidents is related to driver’s inattention or drowsiness. Drowsiness detector on a car can reduce numerous accidents. Accidents occur because of a single moment of negligence, thus driver monitoring system which works in real-time is necessary. This detector should be deployable to an embedded device and perform at high accuracy. In this paper, a novel approach towards real-time drowsiness detection based on deep learning which can be implemented on a low cost embedded board and performs with a high accuracy is proposed. Main contribution of our paper is compression of heavy baseline model to a light weight model deployable to an embedded board. Moreover, minimized network structure was designed based on facial landmark input to recognize whether driver is drowsy or not. The proposed model achieved an accuracy of 89.5% on 3-class classification and speed of 14.9 frames per second (FPS) on Jetson TK1.", "title": "" }, { "docid": "c10dd691e79d211ab02f2239198af45c", "text": "Neural networks are powerful and flexible models that work well for many difficult learning tasks in image, speech and natural language understanding. Despite their success, neural networks are still hard to design. In this paper, we use a recurrent network to generate the model descriptions of neural networks and train this RNN with reinforcement learning to maximize the expected accuracy of the generated architectures on a validation set. On the CIFAR-10 dataset, our method, starting from scratch, can design a novel network architecture that rivals the best human-invented architecture in terms of test set accuracy. Our CIFAR-10 model achieves a test error rate of 3.84, which is only 0.1 percent worse and 1.2x faster than the current state-of-the-art model. On the Penn Treebank dataset, our model can compose a novel recurrent cell that outperforms the widely-used LSTM cell, and other state-of-the-art baselines. Our cell achieves a test set perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than the previous state-ofthe-art.", "title": "" } ]
[ { "docid": "41eab64d00f1a4aaea5c5899074d91ca", "text": "Informally described design patterns are useful for communicating proven solutions for recurring design problems to developers, but they cannot be used as compliance points against which solutions that claim to conform to the patterns are checked. Pattern specification languages that utilize mathematical notation provide the needed formality, but often at the expense of usability. We present a rigorous and practical technique for specifying pattern solutions expressed in the unified modeling language (UML). The specification technique paves the way for the development of tools that support rigorous application of design patterns to UML design models. The technique has been used to create specifications of solutions for several popular design patterns. We illustrate the use of the technique by specifying observer and visitor pattern solutions.", "title": "" }, { "docid": "345a59aac1e89df5402197cca90ca464", "text": "Tony Velkov,* Philip E. Thompson, Roger L. Nation, and Jian Li* School of Medicine, Deakin University, Pigdons Road, Geelong 3217, Victoria, Australia, Medicinal Chemistry and Drug Action and Facility for Anti-infective Drug Development and Innovation, Drug Delivery, Disposition and Dynamics, Monash Institute of Pharmaceutical Sciences, Monash University, 381 Royal Parade, Parkville 3052, Victoria, Australia", "title": "" }, { "docid": "37b22de12284d38f6488de74f436ccc8", "text": "Entity disambiguation is an important step in many information retrieval applications. This paper proposes new research for entity disambiguation with the focus of name disambiguation in digital libraries. In particular, pairwise similarity is first learned for publications that share the same author name string (ANS) and then a novel Hierarchical Agglomerative Clustering approach with Adaptive Stopping Criterion (HACASC) is proposed to adaptively cluster a set of publications that share a same ANS to individual clusters of publications with different author identities. The HACASC approach utilizes a mixture of kernel ridge regressions to intelligently determine the threshold in clustering. This obtains more appropriate clustering granularity than non-adaptive stopping criterion. We conduct a large scale empirical study with a dataset of more than 2 million publication record pairs to demonstrate the advantage of the proposed HACASC approach.", "title": "" }, { "docid": "a35a564a2f0e16a21e0ef5e26601eab9", "text": "The social media revolution has created a dynamic shift in the digital marketing landscape. The voice of influence is moving from traditional marketers towards consumers through online social interactions. In this study, we focus on two types of online social interactions, namely, electronic word of mouth (eWOM) and observational learning (OL), and explore how they influence consumer purchase decisions. We also examine how receiver characteristics, consumer expertise and consumer involvement, moderate consumer purchase decision process. Analyzing panel data collected from a popular online beauty forum, we found that consumer purchase decisions are influenced by their online social interactions with others and that action-based OL information is more influential than opinion-based eWOM. 
Further, our results show that both consumer expertise and consumer involvement play an important moderating role, albeit in opposite direction: Whereas consumer expertise exerts a negative moderating effect, consumer involvement is found to have a positive moderating effect. The study makes important contributions to research and practice.", "title": "" }, { "docid": "e5a936bbd9e6dc0189b7cc18268f0f87", "text": "A new method of obtaining amplitude modulation (AM) for determining target location with spinning reticles is presented. The method is based on the use of graded transmission capabilities. The AM spinning reticles previously presented were functions of three parameters: amplitude vs angle, amplitude vs radius, and phase. This paper presents these parameters along with their capabilities and limitations and shows that multiple parameters can be integrated into a single reticle. It is also shown that AM parameters can be combined with FM parameters in a single reticle. Also, a general equation is developed that relates the AM parameters to a reticle transmission equation.", "title": "" }, { "docid": "f2478e4b1156e112f84adbc24a649d04", "text": "Community Question Answering (cQA) provides new interesting research directions to the traditional Question Answering (QA) field, e.g., the exploitation of the interaction between users and the structure of related posts. In this context, we organized SemEval2015 Task 3 on Answer Selection in cQA, which included two subtasks: (a) classifying answers as good, bad, or potentially relevant with respect to the question, and (b) answering a YES/NO question with yes, no, or unsure, based on the list of all answers. We set subtask A for Arabic and English on two relatively different cQA domains, i.e., the Qatar Living website for English, and a Quran-related website for Arabic. We used crowdsourcing on Amazon Mechanical Turk to label a large English training dataset, which we released to the research community. Thirteen teams participated in the challenge with a total of 61 submissions: 24 primary and 37 contrastive. The best systems achieved an official score (macro-averaged F1) of 57.19 and 63.7 for the English subtasks A and B, and 78.55 for the Arabic subtask A.", "title": "" }, { "docid": "0c8517bab8a8fa34f25a72cf6c971b25", "text": "Automotive radar sensors are key components for driver assistant systems. In order to handle complex traffic scenarios an advanced separability is required with respect to object angle, distance and velocity. In this contribution a highly integrated automotive radar sensor enabling chirp sequence modulation will be presented and discussed. Furthermore, the development of a target simulator which is essential for the characterization of such radar sensors will be introduced including measurements demonstrating the performance of our system.", "title": "" }, { "docid": "288f8a2dab0c32f85c313f5a145e47a5", "text": "Neural networks have a smooth initial inductive bias, such that small changes in input do not lead to large changes in output. However, in reinforcement learning domains with sparse rewards, value functions have non-smooth structure with a characteristic asymmetric discontinuity whenever rewards arrive. We propose a mechanism that learns an interpolation between a direct value estimate and a projected value estimate computed from the encountered reward and the previous estimate. This reduces the need to learn about discontinuities, and thus improves the value function approximation. 
Furthermore, as the interpolation is learned and state-dependent, our method can deal with heterogeneous observability. We demonstrate that this one change leads to significant improvements on multiple Atari games, when applied to the state-of-the-art A3C algorithm. 1 Motivation: The central problem of reinforcement learning is value function approximation: how to accurately estimate the total future reward from a given state. Recent successes have used deep neural networks to approximate the value function, resulting in state-of-the-art performance in a variety of challenging domains [9]. Neural networks are most effective when the desired target function is smooth. However, value functions are, by their very nature, discontinuous functions with sharp variations over time. In this paper we introduce a representation of value that matches the natural temporal structure of value functions. A value function represents the expected sum of future discounted rewards. If non-zero rewards occur infrequently but reliably, then an accurate prediction of the cumulative discounted reward rises as such rewarding moments approach and drops immediately after. This is depicted schematically with the dashed black line in Figure 1. [Figure 1 caption: After the same amount of training, our proposed method (red) produces much more accurate estimates of the true value function (dashed black), compared to the baseline (blue). The main plot shows discounted future returns as a function of the step in a sequence of states; the inset plot shows the RMSE when training on this data, as a function of network updates. See section 4 for details.] The true value function is quite smooth, except immediately after receiving a reward when there is a sharp drop. This is a pervasive scenario because many domains associate positive or negative reinforcements to salient events (like picking up an object, hitting a wall, or reaching a goal position). The problem is that the agent’s observations tend to be smooth in time, so learning an accurate value estimate near those sharp drops puts strain on the function approximator – especially when employing differentiable function approximators such as neural networks that naturally make smooth maps from observations to outputs. To address this problem, we incorporate the temporal structure of cumulative discounted rewards into the value function itself. The main idea is that, by default, the value function can respect the reward sequence. If no reward is observed, then the next value smoothly matches the previous value, but becomes a little larger due to the discount. If a reward is observed, it should be subtracted out from the previous value: in other words a reward that was expected has now been consumed. The natural value approximator (NVA) combines the previous value with the observed rewards and discounts, which makes this sequence of values easy to represent by a smooth function approximator such as a neural network. Natural value approximators may also be helpful in partially observed environments. Consider a situation in which an agent stands on a hill top. The goal is to predict, at each step, how many steps it will take until the agent has crossed a valley to another hill top in the distance. There is fog in the valley, which means that if the agent’s state is a single observation from the valley it will not be able to accurately predict how many steps remain. 
In contrast, the value estimate from the initial hill top may be much better, because the observation is richer. This case is depicted schematically in Figure 2. Natural value approximators may be effective in these situations, since they represent the current value in terms of previous value estimates. 2 Problem definition: We consider the typical scenario studied in reinforcement learning, in which an agent interacts with an environment at discrete time intervals: at each time step t the agent selects an action as a function of the current state, which results in a transition to the next state and a reward. The goal of the agent is to maximize the discounted sum of rewards collected in the long run from a set of initial states [12]. The interaction between the agent and the environment is modelled as a Markov Decision Process (MDP). An MDP is a tuple (S, A, R, γ, P) where S is a state space, A is an action space, R : S × A × S → D(R) is a reward function that defines a distribution over the reals for each combination of state, action, and subsequent state, P : S × A → D(S) defines a distribution over subsequent states for each state and action, and γ_t ∈ [0, 1] is a scalar, possibly time-dependent, discount factor. One common goal is to make accurate predictions under a behaviour policy π : S → D(A) of the value v_π(s) ≡ E[R_1 + γ_1 R_2 + γ_1 γ_2 R_3 + . . . | S_0 = s]. (1) The expectation is over the random variables A_t ∼ π(S_t), S_{t+1} ∼ P(S_t, A_t), and R_{t+1} ∼ R(S_t, A_t, S_{t+1}), ∀t ∈ N. For instance, the agent can repeatedly use these predictions to improve its policy. The values satisfy the recursive Bellman equation [2] v_π(s) = E[R_{t+1} + γ_{t+1} v_π(S_{t+1}) | S_t = s]. We consider the common setting where the MDP is not known, and so the predictions must be learned from samples. The predictions made by an approximate value function v(s; θ), where θ are parameters that are learned. The approximation of the true value function can be formed by temporal difference (TD) learning [10], where the estimate at time t is updated towards Z_t ≡ R_{t+1} + γ_{t+1} v(S_{t+1}; θ) or Z^n_t ≡ Σ_{i=1..n} (Π_{k=1..i−1} γ_{t+k}) R_{t+i} + (Π_{k=1..n} γ_{t+k}) v(S_{t+n}; θ), (2) where Z^n_t is the n-step bootstrap target, and the TD-error is δ^n_t ≡ Z^n_t − v(S_t; θ). 3 Proposed solution: Natural value approximators. The conventional approach to value function approximation produces a value estimate from features associated with the current state. In states where the value approximation is poor, it can be better to rely more on a combination of the observed sequence of rewards and older but more reliable value estimates that are projected forward in time. Combining these estimates can potentially be more accurate than using one alone. These ideas lead to an algorithm that produces three estimates of the value at time t. The first estimate, V_t ≡ v(S_t; θ), is a conventional value function estimate at time t. The second estimate, G^p_t ≡ (G^β_{t−1} − R_t) / γ_t if γ_t > 0 and t > 0, (3) is a projected value estimate computed from the previous value estimate, the observed reward, and the observed discount for time t. The third estimate, G^β_t ≡ β_t G^p_t + (1 − β_t) V_t = (1 − β_t) V_t + β_t (G^β_{t−1} − R_t) / γ_t, (4) is a convex combination of the first two estimates, formed by a time-dependent blending coefficient β_t. This coefficient is a learned function of state β(·; θ) : S → [0, 1], over the same parameters θ, and we denote β_t ≡ β(S_t; θ). We call G^β_t the natural value estimate at time t and we call the overall approach natural value approximators (NVA). 
Ideally, the natural value estimate will become more accurate than either of its constituents from training. The value is learned by minimizing the sum of two losses. The first loss captures the difference between the conventional value estimate V_t and the target Z_t, weighted by how much it is used in the natural value estimate, J_V ≡ E[ [[1 − β_t]] ([[Z_t]] − V_t)^2 ], (5) where we introduce the stop-gradient identity function [[x]] = x that is defined to have a zero gradient everywhere, that is, gradients are not back-propagated through this function. The second loss captures the difference between the natural value estimate and the target, but it provides gradients only through the coefficient β_t, J_β ≡ E[ ([[Z_t]] − (β_t [[G^p_t]] + (1 − β_t) [[V_t]]))^2 ]. (6) These two losses are summed into a joint loss, J = J_V + c_β J_β, (7) where c_β is a scalar trade-off parameter. When conventional stochastic gradient descent is applied to minimize this loss, the parameters of V_t are adapted with the first loss and parameters of β_t are adapted with the second loss. When bootstrapping on future values, the most accurate value estimate is best, so using G^β_t instead of V_t leads to refined prediction targets Z^β_t ≡ R_{t+1} + γ_{t+1} G^β_{t+1} or Z^{β,n}_t ≡ Σ_{i=1..n} (Π_{k=1..i−1} γ_{t+k}) R_{t+i} + (Π_{k=1..n} γ_{t+k}) G^β_{t+n}. (8) 4 Illustrative Examples: We now provide some examples of situations where natural value approximations are useful. In both examples, the value function is difficult to estimate well uniformly in all states we might care about, and the accuracy can be improved by using the natural value estimate G^β_t instead of the direct value estimate V_t. (Note the mixed recursion in the definition: G^β depends on G^p, and vice-versa.) Sparse rewards: Figure 1 shows an example of value function approximation. To separate concerns, this is a supervised learning setup (regression) with the true value targets provided (dashed black line). Each point 0 ≤ t ≤ 100 on the horizontal axis corresponds to one state S_t in a single sequence. The shape of the target values stems from a handful of reward events, and discounting with γ = 0.9. We mimic observations that smoothly vary across time by 4 equally spaced radial basis functions, so S_t ∈ R^4. The approximators v(s) and β(s) are two small neural networks with one hidden layer of 32 ReLU units each, and a single linear or sigmoid output unit, respectively. The input", "title": "" }, { "docid": "837a68575b84782a252f8bd49ad654a0", "text": "We explore contemporary, data-driven techniques for solving math word problems over recent large-scale datasets. We show that well-tuned neural equation classifiers can outperform more sophisticated models such as sequence to sequence and self-attention across these datasets. Our error analysis indicates that, while fully data driven models show some promise, semantic and world knowledge is necessary for further advances.", "title": "" }, { "docid": "f4d6cd6f6cd453077e162b64ae485c62", "text": "Effects of Music Therapy on Prosocial Behavior of Students with Autism and Developmental Disabilities by Catherine L. de Mers Dr. Matt Tincani, Examination Committee Chair Assistant Professor of Special Education University of Nevada, Las Vegas This research study employed a multiple baseline across participants design to investigate the effects of music therapy intervention on hitting, screaming, and asking of three children with autism and/or developmental disabilities. 
Behaviors were observed and recorded during 10-minute free-play sessions both during baseline and immediately after music therapy sessions during intervention. Interobserver agreement and procedural fidelity data were collected. Music therapy sessions were modeled on literature pertaining to music therapy with children with autism. In addition, social validity surveys were collected to answer research questions pertaining to the social validity of music therapy as an intervention. Findings indicate that music therapy produced moderate and gradual effects on hitting, screaming, and asking. Hitting and screaming decreased following intervention, while asking increased. Intervention effects were maintained three weeks following", "title": "" }, { "docid": "5956e9399cfe817aa1ddec5553883bef", "text": "Most existing zero-shot learning methods consider the problem as a visual semantic embedding one. Given the demonstrated capability of Generative Adversarial Networks (GANs) to generate images, we instead leverage GANs to imagine unseen categories from text descriptions and hence recognize novel classes with no examples being seen. Specifically, we propose a simple yet effective generative model that takes as input noisy text descriptions about an unseen class (e.g. Wikipedia articles) and generates synthesized visual features for this class. With added pseudo data, zero-shot learning is naturally converted to a traditional classification problem. Additionally, to preserve the inter-class discrimination of the generated features, a visual pivot regularization is proposed as an explicit supervision. Unlike previous methods using complex engineered regularizers, our approach can suppress the noise well without additional regularization. Empirically, we show that our method consistently outperforms the state of the art on the largest available benchmarks on Text-based Zero-shot Learning.", "title": "" }, { "docid": "09b77e632fb0e5dfd7702905e51fc706", "text": "Most natural videos contain numerous events. For example, in a video of a “man playing a piano”, the video might also contain “another man dancing” or “a crowd clapping”. We introduce the task of dense-captioning events, which involves both detecting and describing events in a video. We propose a new model that is able to identify all events in a single pass of the video while simultaneously describing the detected events with natural language. Our model introduces a variant of an existing proposal module that is designed to capture both short as well as long events that span minutes. To capture the dependencies between the events in a video, our model introduces a new captioning module that uses contextual information from past and future events to jointly describe all events. We also introduce ActivityNet Captions, a large-scale benchmark for dense-captioning events. ActivityNet Captions contains 20k videos amounting to 849 video hours with 100k total descriptions, each with its unique start and end time. Finally, we report performances of our model for dense-captioning events, video retrieval and localization.", "title": "" }, { "docid": "93a9df00671b032986148106d7e90f70", "text": "Vulnerabilities in applications and their widespread exploitation through successful attacks are common these days. Testing applications for preventing vulnerabilities is an important step to address this issue. In recent years, a number of security testing approaches have been proposed. 
However, there is no comparative study of these work that might help security practitioners select an appropriate approach for their needs. Moreover, there is no comparison with respect to automation capabilities of these approaches. In this work, we identify seven criteria to analyze program security testing work. These are vulnerability coverage, source of test cases, test generation method, level of testing, granularity of test cases, testing automation, and target applications. We compare and contrast prominent security testing approaches available in the literature based on these criteria. In particular, we focus on work that address four most common but dangerous vulnerabilities namely buffer overflow, SQL injection, format string bug, and cross site scripting. Moreover, we investigate automation features available in these work across a security testing process. We believe that our findings will provide practical information for security practitioners in choosing the most appropriate tools.", "title": "" }, { "docid": "952735cb937248c837e0b0244cd9dbb1", "text": "Recently, the desired very high throughput of 5G wireless networks drives millimeter-wave (mm-wave) communication into practical applications. A phased array technique is required to increase the effective antenna aperture at mm-wave frequency. Integrated solutions of beamforming/beam steering are extremely attractive for practical implementations. After a discussion on the basic principles of radio beam steering, we review and explore the recent advanced integration techniques of silicon-based electronic integrated circuits (EICs), photonic integrated circuits (PICs), and antenna-on-chip (AoC). For EIC, the latest advanced designs of on-chip true time delay (TTD) are explored. Even with such advances, the fundamental loss of a silicon-based EIC still exists, which can be solved by advanced PIC solutions with ultra-broad bandwidth and low loss. Advanced PIC designs for mm-wave beam steering are then reviewed with emphasis on an optical TTD. Different from the mature silicon-based EIC, the photonic integration technology for PIC is still under development. In this paper, we review and explore the potential photonic integration platforms and discuss how a monolithic integration based on photonic membranes fits the photonic mm-wave beam steering application, especially for the ease of EIC and PIC integration on a single chip. To combine EIC, for its accurate and mature fabrication techniques, with PIC, for its ultra-broad bandwidth and low loss, a hierarchical mm-wave beam steering chip with large-array delays realized in PIC and sub-array delays realized in EIC can be a future-proof solution. Moreover, the antenna units can be further integrated on such a chip using AoC techniques. Among the mentioned techniques, the integration trends on device and system levels are discussed extensively.", "title": "" }, { "docid": "b15ed1584eb030fba1ab3c882983dbf0", "text": "The need for automated grading tools for essay writing and open-ended assignments has received increasing attention due to the unprecedented scale of Massive Online Courses (MOOCs) and the fact that more and more students are relying on computers to complete and submit their school work. In this paper, we propose an efficient memory networks-powered automated grading model. The idea of our model stems from the philosophy that with enough graded samples for each score in the rubric, such samples can be used to grade future work that is found to be similar. 
For each possible score in the rubric, a student response graded with the same score is collected. These selected responses represent the grading criteria specified in the rubric and are stored in the memory component. Our model learns to predict a score for an ungraded response by computing the relevance between the ungraded response and each selected response in memory. The evaluation was conducted on the Kaggle Automated Student Assessment Prize (ASAP) dataset. The results show that our model achieves state-of-the-art performance in 7 out of 8 essay sets.", "title": "" }, { "docid": "44f257275a36308ce088881fafc92d7c", "text": "Frauds related to the ATM (Automatic Teller Machine) are increasing day by day which is a serious issue. ATM security is used to provide protection against these frauds. Though security is provided for ATM machine, cases of robberies are increasing. Previous technologies provide security within machines for secure transaction, but machine is not neatly protected. The ATM machines are not safe since security provided traditionally were either by using RFID reader or by using security guard outside the ATM. This security is not sufficient because RFID card can be stolen and can be misused for robbery as well as watchman can be blackmailed by the thief. So there is a need to propose new technology which can overcome this problem. This paper proposes a system which aims to design real-time monitoring and controlling system. The system is implemented using Raspberry Pi and fingerprint module which make the system more secure, cost effective and stand alone. For controlling purpose, Embedded Web Server (EWS) is designed using Raspberry Pi which serves web page on which video footage of ATM center is seen and controlled. So the proposed system removes the drawback of manual controlling camera module and door also this system is stand alone and cost effective.", "title": "" }, { "docid": "8920b9fbfe010af17e664c0b62c8e0a2", "text": "The field of machine learning is an interesting and relatively new area of research in artificial intelligence. In this paper, a special type of reinforcement learning, Q-Learning, was applied to the popular mobile game Flappy Bird. The QLearning algorithm was tested on two different environments. The original version and a simplified version. The maximum score achieved on the original version and simplified version were 169 and 28,851, respectively. The trade-off between runtime and accuracy was investigated. Using appropriate settings, the Q-Learning algorithm was proven to be successful with a relatively quick convergence time.", "title": "" }, { "docid": "1dc07b02a70821fdbaa9911755d1e4b0", "text": "The AROMA project is exploring the kind of awareness that people effortless are able to maintain about other beings who are located physically close. We are designing technology that attempts to mediate a similar kind of awareness among people who are geographically dispersed but want to stay better in touch. AROMA technology can be thought of as a stand-alone communication device or -more likely -an augmentation of existing technologies such as the telephone or full-blown media spaces. Our approach differs from other recent designs for awareness (a) by choosing pure abstract representations on the display site, (b) by possibly remapping the signal across media between capture and display, and, finally, (c) by explicitly extending the application domain to include more than the working life, to embrace social interaction in general. 
We are building a series of prototypes to learn if abstract representation of activity data does indeed convey a sense of remote presence and does so in a sufficiently subdued manner to allow the user to concentrate on his or her main activity. We have done some initial testing of the technical feasibility of our designs. What still remains is an extensive effort of designing a symbolic language of remote presence, done in parallel with studies of how people will connect and communicate through such a language as they live with the AROMA system.", "title": "" }, { "docid": "8d61cbb3df2ea134fa1252d5eff29597", "text": "Recovering 3D full-body human pose is a challenging problem with many applications. It has been successfully addressed by motion capture systems with body worn markers and multiple cameras. In this paper, we address the more challenging case of not only using a single camera but also not leveraging markers: going directly from 2D appearance to 3D geometry. Deep learning approaches have shown remarkable abilities to discriminatively learn 2D appearance features. The missing piece is how to integrate 2D, 3D, and temporal information to recover 3D geometry and account for the uncertainties arising from the discriminative model. We introduce a novel approach that treats 2D joint locations as latent variables whose uncertainty distributions are given by a deep fully convolutional neural network. The unknown 3D poses are modeled by a sparse representation and the 3D parameter estimates are realized via an Expectation-Maximization algorithm, where it is shown that the 2D joint location uncertainties can be conveniently marginalized out during inference. Extensive evaluation on benchmark datasets shows that the proposed approach achieves greater accuracy over state-of-the-art baselines. Notably, the proposed approach does not require synchronized 2D-3D data for training and is applicable to “in-the-wild” images, which is demonstrated with the MPII dataset.", "title": "" }, { "docid": "ba1368e4acc52395a8e9c5d479d4fe8f", "text": "This talk will present an overview of our recent research on distributional reinforcement learning. Our starting point is our recent ICML paper, in which we argued for the fundamental importance of the value distribution: the distribution of random returns received by a reinforcement learning agent. This is in contrast to the common approach, which models the expectation of this return, or value. Back then, we were able to design a new algorithm that learns the value distribution through a TD-like bootstrap process and achieved state-of-the-art performance on games from the Arcade Learning Environment (ALE). However, this left open the question as to why the distributional approach should perform better at all. We’ve since delved deeper into what makes distributional RL work: first by improving the original using quantile regression, which directly minimizes the Wasserstein metric; and second by unearthing surprising connections between the original C51 algorithm and the distant cousin of the Wasserstein metric, the Cramer distance.", "title": "" } ]
scidocsrr
725400ce7c5aebb6a73a49362a5ec61f
Credibility Assessment in the News: Do we need to read?
[ { "docid": "a31ca7f2c2fce4a4f26d420f4aa91a91", "text": "Transition-based dependency parsers usually use transition systems that monotonically extend partial parse states until they identify a complete parse tree. Honnibal et al. (2013) showed that greedy onebest parsing accuracy can be improved by adding additional non-monotonic transitions that permit the parser to “repair” earlier parsing mistakes by “over-writing” earlier parsing decisions. This increases the size of the set of complete parse trees that each partial parse state can derive, enabling such a parser to escape the “garden paths” that can trap monotonic greedy transition-based dependency parsers. We describe a new set of non-monotonic transitions that permits a partial parse state to derive a larger set of completed parse trees than previous work, which allows our parser to escape from a larger set of garden paths. A parser with our new nonmonotonic transition system has 91.85% directed attachment accuracy, an improvement of 0.6% over a comparable parser using the standard monotonic arc-eager transitions.", "title": "" }, { "docid": "ee665e5a3d032a4e9b4e95cddac0f95c", "text": "On p. 219, we describe the data we collected from BuzzSumo as “the total number of times each article was shared on Facebook” (emph. added). In fact, the BuzzSumo data are the number of engagements with each article, defined as the sum of shares, comments, and other interactions such as “likes.” All references to counts of Facebook shares in the paper and the online appendix based on the BuzzSumo data should be replaced with references to counts of Facebook engagements. None of the tables or figures in either the paper or the online appendix are affected by this change, nor does the change affect the results based on our custom survey. None of the substantive conclusions of the paper are affected with one exception discussed below, where our substantive conclusion is strengthened. Examples of cases where the text should be changed:", "title": "" } ]
[ { "docid": "7ecba9c479a754ad55664bf8208643e0", "text": "One of the important problems that our society facing is people with disabilities which are finding hard to cope up with the fast growing technology. About nine billion people in the world are deaf and dumb. Communications between deaf-dumb and a normal person have always been a challenging task. Generally deaf and dumb people use sign language for communication, Sign language is an expressive and natural way for communication between normal and dumb people. Some peoples are easily able to get the information from their motions. The remaining is not able to understand their way of conveying the message. In order to overcome the complexity, the artificial mouth is introduced for the dumb people. So, we need a translator to understand what they speak and communicate with us. Hence makes the communication between normal person and disabled people easier. This work aims to lower the barrier of disabled persons in communication. The main aim of this proposed work is to develop a cost effective system which can give voice to voiceless people with the help of Sign language. In the proposed work, the captured images are processed through MATLAB in PC and converted into speech through speaker and text in LCD by interfacing with Arduino. Keyword : Disabled people, Sign language, Image Processing, Arduino, LCD display, Speaker.", "title": "" }, { "docid": "e87a52f3e4f3c08838a2eff7501a12e5", "text": "A coordinated approach to digital forensic readiness (DFR) in a large organisation requires the management and monitoring of a wide variety of resources, both human and technical. The resources involved in DFR in large organisations typically include staff from multiple departments and business units, as well as network infrastructure and computing platforms. The state of DFR within large organisations may therefore be adversely affected if the myriad human and technical resources involved are not managed in an optimal manner. This paper contributes to DFR by proposing the novel concept of a digital forensic readiness management system (DFRMS). The purpose of a DFRMS is to assist large organisations in achieving an optimal level of management for DFR. In addition to this, we offer an architecture for a DFRMS. This architecture is based on requirements for DFR that we ascertained from an exhaustive review of the DFR literature. We describe the architecture in detail and show that it meets the requirements set out in the DFR literature. The merits and disadvantages of the architecture are also discussed. Finally, we describe and explain an early prototype of a DFRMS.", "title": "" }, { "docid": "fc9ddeeae99a4289d5b955c9ba90c682", "text": "In recent years there have been growing calls for forging greater connections between education and cognitive neuroscience.As a consequence great hopes for the application of empirical research on the human brain to educational problems have been raised. In this article we contend that the expectation that results from cognitive neuroscience research will have a direct and immediate impact on educational practice are shortsighted and unrealistic. 
Instead, we argue that an infrastructure needs to be created, principally through interdisciplinary training, funding and research programs that allow for bidirectional collaborations between cognitive neuroscientists, educators and educational researchers to grow.We outline several pathways for scaffolding such a basis for the emerging field of ‘Mind, Brain and Education’ to flourish as well as the obstacles that are likely to be encountered along the path.", "title": "" }, { "docid": "77e30fedf56545ba22ae9f1ef17b4dc9", "text": "Most of current self-checkout systems rely on barcodes, RFID tags, or QR codes attached on items to distinguish products. This paper proposes an Intelligent Self-Checkout System (ISCOS) embedded with a single camera to detect multiple products without any labels in real-time performance. In addition, deep learning skill is applied to implement product detection, and data mining techniques construct the image database employed as training dataset. Product information gathered from a number of markets in Taiwan is utilized to make recommendation to customers. The bounding boxes are annotated by background subtraction with a fixed camera to avoid time-consuming process for each image. The contribution of this work is to combine deep learning and data mining approaches to real-time multi-object detection in image-based checkout system.", "title": "" }, { "docid": "3c907a3e7ff704348e78239b2b54b917", "text": "Real-time traffic surveillance is essential in today’s intelligent transportation systems and will surely play a vital role in tomorrow’s smart cities. The work detailed in this paper reports on the development and implementation of a novel smart wireless sensor for traffic monitoring. Computationally efficient and reliable algorithms for vehicle detection, speed and length estimation, classification, and time-synchronization were fully developed, integrated, and evaluated. Comprehensive system evaluation and extensive data analysis were performed to tune and validate the system for a reliable and robust operation. Several field studies conducted on highway and urban roads for different scenarios and under various traffic conditions resulted in 99.98% detection accuracy, 97.11% speed estimation accuracy, and 97% length-based vehicle classification accuracy. The developed system is portable, reliable, and cost-effective. The system can also be used for short-term or long-term installment on surface of highway, roadway, and roadside. Implementation cost of a single node including enclosure is US $50.", "title": "" }, { "docid": "9f348ac8bae993ddf225f47dfa20182b", "text": "BACKGROUND\nTreatment of giant melanocytic nevi (GMN) remains a multidisciplinary challenge. We present analysis of diagnostics, treatment, and follow- up in children with GMN to establish obligatory procedures in these patients.\n\n\nMATERIAL/METHODS\nIn 24 children with GMN, we analyzed: localization, main nevus diameter, satellite nevi, brain MRI, catecholamines concentrations in 24-h urine collection, surgery stages number, and histological examinations. The t test was used to compare catecholamines concentrations in patient subgroups.\n\n\nRESULTS\nNine children had \"bathing trunk\" nevus, 7 had main nevus on the back, 6 on head/neck, and 2 on neck/shoulder and neck/thorax. Brain MRI revealed neurocutaneous melanosis (NCM) in 7/24 children (29.2%), symptomatic in 1. 
Among urine catecholamines levels from 20 patients (33 samples), dopamine concentration was elevated in 28/33, noradrenaline in 15, adrenaline in 11, and vanillylmandelic acid in 4. In 6 NCM children, all catecholamines concentrations were higher than in patients without NCM (statistically insignificant). In all patients, histological examination of excised nevi revealed compound nevus, with neurofibromatic component in 15 and melanoma in 2. They remain without recurrence/metastases at 8- and 3-year-follow-up. There were 4/7 NCM patients with more than 1 follow-up MRI; in 1 a new melanin deposit was found and in 3 there was no progression.\n\n\nCONCLUSIONS\nEarly excision with histological examination speeds the diagnosis of melanoma. Brain MRI is necessary to confirm/rule-out NCM. High urine dopamine concentration in GMN children, especially with NCM, is an unpublished finding that can indicate patients with more serious neurological disease. Treatment of GMN children should be tailored individually for each case with respect to all medical/psychological aspects.", "title": "" }, { "docid": "e8bbbc1864090b0246735868faa0e11f", "text": "A pre-trained deep convolutional neural network (DCNN) is the feed-forward computation perspective which is widely used for the embedded vision systems. In the DCNN, the 2D convolutional operation occupies more than 90% of the computation time. Since the 2D convolutional operation performs massive multiply-accumulation (MAC) operations, conventional realizations could not implement a fully parallel DCNN. The RNS decomposes an integer into a tuple of L integers by residues of moduli set. Since no pair of modulus have a common factor with any other, the conventional RNS decomposes the MAC unit into circuits with different sizes. It means that the RNS could not utilize resources of an FPGA with uniform size. In this paper, we propose the nested RNS (NRNS), which recursively decompose the RNS. It can decompose the MAC unit into circuits with small sizes. In the DCNN using the NRNS, a 48-bit MAC unit is decomposed into 4-bit ones realized by look-up tables of the FPGA. In the system, we also use binary to NRNS converters and NRNS to binary converters. The binary to NRNS converter is realized by on-chip BRAMs, while the NRNS to binary one is realized by DSP blocks and BRAMs. Thus, a balanced usage of FPGA resources leads to a high clock frequency with less hardware. The ImageNet DCNN using the NRNS is implemented on a Xilinx Virtex VC707 evaluation board. As for the performance per area GOPS (Giga operations per second) per a slice, the proposed one is 5.86 times better than the existing best realization.", "title": "" }, { "docid": "1ebb827b9baf3307bc20de78538d23e7", "text": "0747-5632/$ see front matter 2013 Elsevier Ltd. All rights reserved. http://dx.doi.org/10.1016/j.chb.2013.07.003 ⇑ Corresponding author. Address: University of North Texas, College of Business, 1155 Union Circle #311160, Denton, TX 76203-5017, USA. E-mail addresses: [email protected] (M. Salehan), arash.negah [email protected] (A. Negahban). 1 These authors contributed equally to the work. Mohammad Salehan 1,⇑, Arash Negahban 1", "title": "" }, { "docid": "d17f6ed783c0ec33e4c74171db82392b", "text": "Caffeic acid phenethyl ester, derived from natural propolis, has been reported to have anti-cancer properties. Voltage-gated sodium channels are upregulated in many cancers where they promote metastatic cell behaviours, including invasiveness. 
We found that micromolar concentrations of caffeic acid phenethyl ester blocked voltage-gated sodium channel activity in several invasive cell lines from different cancers, including breast (MDA-MB-231 and MDA-MB-468), colon (SW620) and non-small cell lung cancer (H460). In the MDA-MB-231 cell line, which was adopted as a 'model', long-term (48 h) treatment with 18 μM caffeic acid phenethyl ester reduced the peak current density by 91% and shifted steady-state inactivation to more hyperpolarized potentials and slowed recovery from inactivation. The effects of long-term treatment were also dose-dependent, 1 μM caffeic acid phenethyl ester reducing current density by only 65%. The effects of caffeic acid phenethyl ester on metastatic cell behaviours were tested on the MDA-MB-231 cell line at a working concentration (1 μM) that did not affect proliferative activity. Lateral motility and Matrigel invasion were reduced by up to 14% and 51%, respectively. Co-treatment of caffeic acid phenethyl ester with tetrodotoxin suggested that the voltage-gated sodium channel inhibition played a significant intermediary role in these effects. We conclude, first, that caffeic acid phenethyl ester does possess anti-metastatic properties. Second, the voltage-gated sodium channels, commonly expressed in strongly metastatic cancers, are a novel target for caffeic acid phenethyl ester. Third, more generally, ion channel inhibition can be a significant mode of action of nutraceutical compounds.", "title": "" }, { "docid": "b29f2d688e541463b80006fac19eaf20", "text": "Autonomous navigation has become an increasingly popular machine learning application. Recent advances in deep learning have also brought huge improvements to autonomous navigation. However, prior outdoor autonomous navigation methods depended on various expensive sensors or expensive and sometimes erroneously labeled real data. In this paper, we propose an autonomous navigation method that does not require expensive labeled real images and uses only a relatively inexpensive monocular camera. Our proposed method is based on (1) domain adaptation with an adversarial learning framework and (2) exploiting synthetic data from a simulator. To the best of the authors’ knowledge, this is the first work to apply domain adaptation with adversarial networks to autonomous navigation. We present empirical results on navigation in outdoor courses using an unmanned aerial vehicle. The performance of our method is comparable to that of a supervised model with labeled real data, although our method does not require any label information for the real data. Our proposal includes a theoretical analysis that supports the applicability of our approach.", "title": "" }, { "docid": "be283056a8db3ab5b2481f3dc1f6526d", "text": "Numerous groups have applied a variety of deep learning techniques to computer vision problems in highway perception scenarios. In this paper, we presented a number of empirical evaluations of recent deep learning advances. Computer vision, combined with deep learning, has the potential to bring about a relatively inexpensive, robust solution to autonomous driving. To prepare deep learning for industry uptake and practical applications, neural networks will require large data sets that represent all possible driving environments and scenarios. We collect a large data set of highway data and apply deep learning and computer vision algorithms to problems such as car and lane detection. 
We show how existing convolutional neural networks (CNNs) can be used to perform lane and vehicle detection while running at frame rates required for a real-time system. Our results lend credence to the hypothesis that deep learning holds promise for autonomous driving.", "title": "" }, { "docid": "460b8f82e5c378c7d866d92339e14572", "text": "When the number of projections does not satisfy the Shannon/Nyquist sampling requirement, streaking artifacts are inevitable in x-ray computed tomography (CT) images reconstructed using filtered backprojection algorithms. In this letter, the spatial-temporal correlations in dynamic CT imaging have been exploited to sparsify dynamic CT image sequences and the newly proposed compressed sensing (CS) reconstruction method is applied to reconstruct the target image sequences. A prior image reconstructed from the union of interleaved dynamical data sets is utilized to constrain the CS image reconstruction for the individual time frames. This method is referred to as prior image constrained compressed sensing (PICCS). In vivo experimental animal studies were conducted to validate the PICCS algorithm, and the results indicate that PICCS enables accurate reconstruction of dynamic CT images using about 20 view angles, which corresponds to an under-sampling factor of 32. This undersampling factor implies a potential radiation dose reduction by a factor of 32 in myocardial CT perfusion imaging.", "title": "" }, { "docid": "cbc6bd586889561cc38696f758ad97d2", "text": "Introducing a new hobby for other people may inspire them to join with you. Reading, as one of mutual hobby, is considered as the very easy hobby to do. But, many people are not interested in this hobby. Why? Boring is the reason of why. However, this feel actually can deal with the book and time of you reading. Yeah, one that we will refer to break the boredom in reading is choosing design of experiments statistical principles of research design and analysis as the reading material.", "title": "" }, { "docid": "443637fcc9f9efcf1026bb64aa0a9c97", "text": "Given the unprecedented availability of data and computing resources, there is widespread renewed interest in applying data-driven machine learning methods to problems for which the development of conventional engineering solutions is challenged by modeling or algorithmic deficiencies. This tutorial-style paper starts by addressing the questions of why and when such techniques can be useful. It then provides a high-level introduction to the basics of supervised and unsupervised learning. For both supervised and unsupervised learning, exemplifying applications to communication networks are discussed by distinguishing tasks carried out at the edge and at the cloud segments of the network at different layers of the protocol stack, with an emphasis on the physical layer.", "title": "" }, { "docid": "3b03af1736709e536a4a58363102bc60", "text": "Music transcription, as an essential component in music signal processing, contributes to wide applications in musicology, accelerates the development of commercial music industry, facilitates the music education as well as benefits extensive music lovers. However, the work relies on a lot of manual work due to heavy requirements on knowledge and experience. This project mainly examines two deep learning methods, DNN and LSTM, to automatize music transcription. We transform the audio files into spectrograms using constant Q transform and extract features from the spectrograms. 
Deep learning methods have the advantage of learning complex features in music transcription. The promising results verify that deep learning methods are capable of learning specific musical properties, including notes and rhythms. Keywords—automatic music transcription; deep learning; deep neural network (DNN); long shortterm memory networks (LSTM)", "title": "" }, { "docid": "3c4712f1c54f3d9d8d4297d9ab0b619f", "text": "In this paper, we introduce Cellular Automata-a dynamic evolution model to intuitively detect the salient object. First, we construct a background-based map using color and space contrast with the clustered boundary seeds. Then, a novel propagation mechanism dependent on Cellular Automata is proposed to exploit the intrinsic relevance of similar regions through interactions with neighbors. Impact factor matrix and coherence matrix are constructed to balance the influential power towards each cell's next state. The saliency values of all cells will be renovated simultaneously according to the proposed updating rule. It's surprising to find out that parallel evolution can improve all the existing methods to a similar level regardless of their original results. Finally, we present an integration algorithm in the Bayesian framework to take advantage of multiple saliency maps. Extensive experiments on six public datasets demonstrate that the proposed algorithm outperforms state-of-the-art methods.", "title": "" }, { "docid": "fdd998012aa9b76ba9fe4477796ddebb", "text": "Low-power wireless devices must keep their radio transceivers off as much as possible to reach a low power consumption, but must wake up often enough to be able to receive communication from their neighbors. This report describes the ContikiMAC radio duty cycling mechanism, the default radio duty cycling mechanism in Contiki 2.5, which uses a power efficient wake-up mechanism with a set of timing constraints to allow device to keep their transceivers off. With ContikiMAC, nodes can participate in network communication yet keep their radios turned off for roughly 99% of the time. This report describes the ContikiMAC mechanism, measures the energy consumption of individual ContikiMAC operations, and evaluates the efficiency of the fast sleep and phase-lock optimizations.", "title": "" }, { "docid": "df69a701bca12d3163857a9932ef51e2", "text": "Students often have their own individual laptop computers in university classes, and researchers debate the potential benefits and drawbacks of laptop use. In the presented research, we used a combination of surveys and in-class observations to study how students use their laptops in an unmonitored and unrestricted class setting—a large lecture-based university class with nearly 3000 enrolled students. By analyzing computer use over the duration of long (165 minute) classes, we demonstrate how computer use changes over time. The observations and studentreports provided similar descriptions of laptop activities. Note taking was the most common use for the computers, followed by the use of social media web sites. Overall, the data show that students engaged in off-task computer activities for nearly two-thirds of the time. 
An analysis of the frequency of the various laptop activities over time showed that engagement in individual activities varied significantly over the duration of the class.", "title": "" }, { "docid": "b513d1cbf3b2f649afcea4d0ab6784ac", "text": "RoboSimian is a quadruped robot inspired by an ape-like morphology, with four symmetric limbs that provide a large dexterous workspace and high torque output capabilities. Advantages of using RoboSimian for rough terrain locomotion include (1) its large, stable base of support, and (2) existence of redundant kinematic solutions, toward avoiding collisions with complex terrain obstacles. However, these same advantages provide significant challenges in experimental implementation of walking gaits. Specifically: (1) a wide support base results in high variability of required body pose and foothold heights, in particular when compared with planning for humanoid robots, (2) the long limbs on RoboSimian have a strong proclivity for self-collision and terrain collision, requiring particular care in trajectory planning, and (3) having rear limbs outside the field of view requires adequate perception with respect to a world map. In our results, we present a tractable means of planning statically stable and collision-free gaits, which combines practical heuristics for kinematics with traditional randomized (RRT) search algorithms. In planning experiments, our method outperforms other tested methodologies. Finally, real-world testing indicates that perception limitations provide the greatest challenge in real-world implementation.", "title": "" }, { "docid": "04d110e130c5d7dc56c2d8e63857e9aa", "text": "OBJECTIVE\nThis study aimed to assess weight bias among professionals who specialize in treating eating disorders and identify to what extent their weight biases are associated with attitudes about treating obese patients.\n\n\nMETHOD\nParticipants were 329 professionals treating eating disorders, recruited through professional organizations that specialize in eating disorders. Participants completed anonymous, online self-report questionnaires, assessing their explicit weight bias, perceived causes of obesity, attitudes toward treating obese patients, perceptions of treatment compliance and success of obese patients, and perceptions of weight bias among other practitioners.\n\n\nRESULTS\nNegative weight stereotypes were present among some professionals treating eating disorders. Although professionals felt confident (289; 88%) and prepared (276; 84%) to provide treatment to obese patients, the majority (184; 56%) had observed other professionals in their field making negative comments about obese patients, 42% (138) believed that practitioners who treat eating disorders often have negative stereotypes about obese patients, 35% (115) indicated that practitioners feel uncomfortable caring for obese patients, and 29% (95) reported that their colleagues have negative attitudes toward obese patients. Compared to professionals with less weight bias, professionals with stronger weight bias were more likely to attribute obesity to behavioral causes, expressed more negative attitudes and frustrations about treating obese patients, and perceived poorer treatment outcomes for these patients.\n\n\nDISCUSSION\nSimilar to other health disciplines, professionals treating eating disorders are not immune to weight bias. 
This has important implications for provision of clinical treatment with obese individuals and efforts to reduce weight bias in the eating disorders field.", "title": "" } ]
scidocsrr
d15b94152661b013e935f44373d6bc23
The Good, The Bad and the Ugly: A Meta-analytic Review of Positive and Negative Effects of Violent Video Games
[ { "docid": "a52fce0b7419d745a85a2bba27b34378", "text": "Playing action video games enhances several different aspects of visual processing; however, the mechanisms underlying this improvement remain unclear. Here we show that playing action video games can alter fundamental characteristics of the visual system, such as the spatial resolution of visual processing across the visual field. To determine the spatial resolution of visual processing, we measured the smallest distance a distractor could be from a target without compromising target identification. This approach exploits the fact that visual processing is hindered as distractors are brought close to the target, a phenomenon known as crowding. Compared with nonplayers, action-video-game players could tolerate smaller target-distractor distances. Thus, the spatial resolution of visual processing is enhanced in this population. Critically, similar effects were observed in non-video-game players who were trained on an action video game; this result verifies a causative relationship between video-game play and augmented spatial resolution.", "title": "" } ]
[ { "docid": "bbeebb29c7220009c8d138dc46e8a6dd", "text": "Let’s begin with a problem that many of you have seen before. It’s a common question in technical interviews. You’re given as input an array A of length n, with the promise that it has a majority element — a value that is repeated in strictly more than n/2 of the array’s entries. Your task is to find the majority element. In algorithm design, the usual “holy grail” is a linear-time algorithm. For this problem, your post-CS161 toolbox already contains a subroutine that gives a linear-time solution — just compute the median of A. (Note: it must be the majority element.) So let’s be more ambitious: can we compute the majority element with a single left-to-right pass through the array? If you haven’t seen it before, here’s the solution:", "title": "" }, { "docid": "45dbc5a3adacd0cc1374f456fb421ee9", "text": "The purpose of this article is to discuss current techniques used with poly-l-lactic acid to safely and effectively address changes observed in the aging face. Several important points deserve mention. First, this unique agent is not a filler but a stimulator of the host's own collagen, which then acts to volumize tissue in a gradual, progressive, and predictable manner. The technical differences between the use of biostimulatory agents and replacement fillers are simple and straightforward, but are critically important to the safe and successful use of these products and will be reviewed in detail. Second, in addition to gains in technical insights that have improved our understanding of how to use the product to best advantage, where to use the product to best advantage in facial filling has also improved with ever-evolving insights into the changes observed in the aging face. Finally, it is important to recognize that a patient's final outcome, and the amount of product and work it will take to get there, is a reflection of the quality of tissues with which they start. This is, of course, an issue of patient selection and not product selection.", "title": "" }, { "docid": "dd741d612ee466aecbb03f5e1be89b90", "text": "To date, many of the methods for information extraction of biological information from scientific articles are restricted to the abstract of the article. However, full text articles in electronic version, which offer larger sources of data, are currently available. Several questions arise as to whether the effort of scanning full text articles is worthy, or whether the information that can be extracted from the different sections of an article can be relevant. In this work we addressed those questions showing that the keyword content of the different sections of a standard scientific article (abstract, introduction, methods, results, and discussion) is very heterogeneous. Although the abstract contains the best ratio of keywords per total of words, other sections of the article may be a better source of biologically relevant data.", "title": "" }, { "docid": "7f368ea27e9aa7035c8da7626c409740", "text": "The GANs are generative models whose random samples realistically reflect natural images. It also can generate samples with specific attributes by concatenating a condition vector into the input, yet research on this field is not well studied. We propose novel methods of conditioning generative adversarial networks (GANs) that achieve state-of-the-art results on MNIST and CIFAR-10. 
We mainly introduce two models: an information retrieving model that extracts conditional information from the samples, and a spatial bilinear pooling model that forms bilinear features derived from the spatial cross product of an image and a condition vector. These methods significantly enhance log-likelihood of test data under the conditional distributions compared to the methods of concatenation.", "title": "" }, { "docid": "0d6a28cc55d52365986382f43c28c42c", "text": "Predictive analytics embraces an extensive range of techniques including statistical modeling, machine learning, and data mining and is applied in business intelligence, public health, disaster management and response, and many other fields. To date, visualization has been broadly used to support tasks in the predictive analytics pipeline. Primary uses have been in data cleaning, exploratory analysis, and diagnostics. For example, scatterplots and bar charts are used to illustrate class distributions and responses. More recently, extensive visual analytics systems for feature selection, incremental learning, and various prediction tasks have been proposed to support the growing use of complex models, agent-specific optimization, and comprehensive model comparison and result exploration. Such work is being driven by advances in interactive machine learning and the desire of end-users to understand and engage with the modeling process. In this state-of-the-art report, we catalogue recent advances in the visualization community for supporting predictive analytics. First, we define the scope of predictive analytics discussed in this article and describe how visual analytics can support predictive analytics tasks in a predictive visual analytics (PVA) pipeline. We then survey the literature and categorize the research with respect to the proposed PVA pipeline. Systems and techniques are evaluated in terms of their supported interactions, and interactions specific to predictive analytics are discussed. We end this report with a discussion of challenges and opportunities for future research in predictive visual analytics.", "title": "" }, { "docid": "a91add591aacaa333e109d77576ba463", "text": "It has become essential to scrutinize and evaluate software development methodologies, mainly because of their increasing number and variety. Evaluation is required to gain a better understanding of the features, strengths, and weaknesses of the methodologies. The results of such evaluations can be leveraged to identify the methodology most appropriate for a specific context. Moreover, methodology improvement and evolution can be accelerated using these results. However, despite extensive research, there is still a need for a feature/criterion set that is general enough to allow methodologies to be evaluated regardless of their types. We propose a general evaluation framework which addresses this requirement. In order to improve the applicability of the proposed framework, all the features – general and specific – are arranged in a hierarchy along with their corresponding criteria. Providing different levels of abstraction enables users to choose the suitable criteria based on the context. 
Major evaluation frameworks for object-oriented, agent-oriented, and aspect-oriented methodologies have been studied and assessed against the proposed framework to demonstrate its reliability and validity.", "title": "" }, { "docid": "8c79eb51cfbc9872a818cf6467648693", "text": "A compact frequency-reconfigurable slot antenna for LTE (2.3 GHz), AMT-fixed service (4.5 GHz), and WLAN (5.8 GHz) applications is proposed in this letter. A U-shaped slot with short ends and an L-shaped slot with open ends are etched in the ground plane to realize dual-band operation. By inserting two p-i-n diodes inside the slots, easy reconfigurability of three frequency bands over a frequency ratio of 2.62:1 can be achieved. In order to reduce the cross polarization of the antenna, another L-shaped slot is introduced symmetrically. Compared to the conventional reconfigurable slot antenna, the size of the antenna is reduced by 32.5%. Simulated and measured results show that the antenna can switch between two single-band modes (2.3 and 5.8 GHz) and two dual-band modes (2.3/4.5 and 4.5/5.8 GHz). Also, stable radiation patterns are obtained.", "title": "" }, { "docid": "94631c7be7b2a992d006cd642dcc502c", "text": "This paper describes nagging, a technique for parallelizing search in a heterogeneous distributed computing environment. Nagging exploits the speedup anomaly often observed when parallelizing problems by playing multiple reformulations of the problem or portions of the problem against each other. Nagging is both fault tolerant and robust to long message latencies. In this paper, we show how nagging can be used to parallelize several different algorithms drawn from the artificial intelligence literature, and describe how nagging can be combined with partitioning, the more traditional search parallelization strategy. We present a theoretical analysis of the advantage of nagging with respect to partitioning, and give empirical results obtained on a cluster of 64 processors that demonstrate nagging’s effectiveness and scalability as applied to A* search, α β minimax game tree search, and the Davis-Putnam algorithm.", "title": "" }, { "docid": "0e5eb8191cea7d3a59f192aa32a214c4", "text": "Recent neural models have shown significant progress on the problem of generating short descriptive texts conditioned on a small number of database records. In this work, we suggest a slightly more difficult data-to-text generation task, and investigate how effective current approaches are on this task. In particular, we introduce a new, large-scale corpus of data records paired with descriptive documents, propose a series of extractive evaluation methods for analyzing performance, and obtain baseline results using current neural generation methods. Experiments show that these models produce fluent text, but fail to convincingly approximate humangenerated documents. Moreover, even templated baselines exceed the performance of these neural models on some metrics, though copyand reconstructionbased extensions lead to noticeable improvements.", "title": "" }, { "docid": "54b094c7747c8ac0b1fbd1f93e78fd8e", "text": "It is essential for the marine navigator conducting maneuvers of his ship at sea to know future positions of himself and target ships in a specific time span to effectively solve collision situations. This article presents an algorithm of ship movement trajectory prediction, which, through data fusion, takes into account measurements of the ship's current position from a number of doubled autonomous devices. 
This increases the reliability and accuracy of prediction. The algorithm has been implemented in NAVDEC, a navigation decision support system and practically used on board ships.", "title": "" }, { "docid": "0fd61e297560ebb8bcf1aafdf011ae67", "text": "Research is fundamental to the advancement of medicine and critical to identifying the most optimal therapies unique to particular societies. This is easily observed through the dynamics associated with pharmacology, surgical technique and the medical equipment used today versus short years ago. Advancements in knowledge synthesis and reporting guidelines enhance the quality, scope and applicability of results; thus, improving health science and clinical practice and advancing health policy. While advancements are critical to the progression of optimal health care, the high cost associated with these endeavors cannot be ignored. Research fundamentally needs to be evaluated to identify the most efficient methods of evaluation. The primary objective of this paper is to look at a specific research methodology when applied to the area of clinical research, especially extracorporeal circulation and its prognosis for the future.", "title": "" }, { "docid": "e1c04d30c7b8f71d9c9b19cb2bb36a33", "text": "This Guide has been written to provide guidance for individuals involved in curriculum design who wish to develop research skills and foster the attributes in medical undergraduates that help develop research. The Guide will provoke debate on an important subject, and although written specifically with undergraduate medical education in mind, we hope that it will be of interest to all those involved with other health professionals' education. Initially, the Guide describes why research skills and its related attributes are important to those pursuing a medical career. It also explores the reasons why research skills and an ethos of research should be instilled into professionals of the future. The Guide also tries to define what these skills and attributes should be for medical students and lays out the case for providing opportunities to develop research expertise in the undergraduate curriculum. Potential methods to encourage the development of research-related attributes are explored as are some suggestions as to how research skills could be taught and assessed within already busy curricula. This publication also discusses the real and potential barriers to developing research skills in undergraduate students, and suggests strategies to overcome or circumvent these. Whilst we anticipate that this Guide will appeal to all levels of expertise in terms of student research, we hope that, through the use of case studies, we will provide practical advice to those currently developing this area within their curriculum.", "title": "" }, { "docid": "8863a617cee49b578a3902d12841053b", "text": "N Engl J Med 2009;361:1475-85. Copyright © 2009 Massachusetts Medical Society. DNA damage has emerged as a major culprit in cancer and many diseases related to aging. The stability of the genome is supported by an intricate machinery of repair, damage tolerance, and checkpoint pathways that counteracts DNA damage. In addition, DNA damage and other stresses can trigger a highly conserved, anticancer, antiaging survival response that suppresses metabolism and growth and boosts defenses that maintain the integrity of the cell. Induction of the survival response may allow interventions that improve health and extend the life span. 
Recently, the first candidate for such interventions, rapamycin (also known as sirolimus), has been identified.1 Compromised repair systems in tumors also offer opportunities for intervention, making it possible to attack malignant cells in which maintenance of the genome has been weakened. Time-dependent accumulation of damage in cells and organs is associated with gradual functional decline and aging.2 The molecular basis of this phenomenon is unclear,3-5 whereas in cancer, DNA alterations are the major culprit. In this review, I present evidence that cancer and diseases of aging are two sides of the DNAdamage problem. An examination of the importance of DNA damage and the systems of genome maintenance in relation to aging is followed by an account of the derailment of genome guardian mechanisms in cancer and of how this cancerspecific phenomenon can be exploited for treatment.", "title": "" }, { "docid": "e9750bf1287847b6587ad28b19e78751", "text": "Biomedical engineering handles the organization and functioning of medical devices in the hospital. This is a strategic function of the hospital for its balance, development, and growth. This is a major focus in internal and external reports of the hospital. It's based on piloting of medical devices needs and the procedures of biomedical teams’ intervention. Multi-year projects of capital and operating expenditure in medical devices are planned as coherently as possible with the hospital's financial budgets. An information system is an essential tool for monitoring medical devices engineering and relationship with medical services.", "title": "" }, { "docid": "1203f22bfdfc9ecd211dbd79a2043a6a", "text": "After a short introduction to classic cryptography we explain thoroughly how quantum cryptography works. We present then an elegant experimental realization based on a self-balanced interferometer with Faraday mirrors. This phase-coding setup needs no alignment of the interferometer nor polarization control, and therefore considerably facilitates the experiment. Moreover it features excellent fringe visibility. Next, we estimate the practical limits of quantum cryptography. The importance of the detector noise is illustrated and means of reducing it are presented. With present-day technologies maximum distances of about 70 kmwith bit rates of 100 Hzare achievable. PACS: 03.67.Dd; 85.60; 42.25; 33.55.A Cryptography is the art of hiding information in a string of bits meaningless to any unauthorized party. To achieve this goal, one uses encryption: a message is combined according to an algorithm with some additional secret information – the key – to produce a cryptogram. In the traditional terminology, Alice is the party encrypting and transmitting the message, Bob the one receiving it, and Eve the malevolent eavesdropper. For a crypto-system to be considered secure, it should be impossible to unlock the cryptogram without Bob’s key. In practice, this demand is often softened, and one requires only that the system is sufficiently difficult to crack. The idea is that the message should remain protected as long as the information it contains is valuable. There are two main classes of crypto-systems, the publickey and the secret-key crypto-systems: Public key systems are based on so-called one-way functions: given a certainx, it is easy to computef(x), but difficult to do the inverse, i.e. compute x from f(x). “Difficult” means that the task shall take a time that grows exponentially with the number of bits of the input. 
The RSA (Rivest, Shamir, Adleman) crypto-system for example is based on the factorizing of large integers. Anyone can compute 137 ×53 in a few seconds, but it may take a while to find the prime factors of 28 907. To transmit a message Bob chooses a private key (based on two large prime numbers) and computes from it a public key (based on the product of these numbers) which he discloses publicly. Now Alice can encrypt her message using this public key and transmit it to Bob, who decrypts it with the private key. Public key systems are very convenient and became very popular over the last 20 years, however, they suffer from two potential major flaws. To date, nobody knows for sure whether or not factorizing is indeed difficult. For known algorithms, the time for calculation increases exponentially with the number of input bits, and one can easily improve the safety of RSA by choosing a longer key. However, a fast algorithm for factorization would immediately annihilate the security of the RSA system. Although it has not been published yet, there is no guarantee that such an algorithm does not exist. Second, problems that are difficult for a classical computer could become easy for a quantum computer. With the recent developments in the theory of quantum computation, there are reasons to fear that building these machines will eventually become possible. If one of these two possibilities came true, RSA would become obsolete. One would then have no choice, but to turn to secret-key cryptosystems. Very convenient and broadly used are crypto-systems based on a public algorithm and a relatively short secret key. The DES (Data Encryption Standard, 1977) for example uses a 56-bit key and the same algorithm for coding and decoding. The secrecy of the cryptogram, however, depends again on the calculating power and the time of the eavesdropper. The only crypto-system providing proven, perfect secrecy is the “one-time pad” proposed by Vernam in 1935. With this scheme, a message is encrypted using a random key of equal length, by simply “adding” each bit of the message to the orresponding bit of the key. The scrambled text can then be sent to Bob, who decrypts the message by “subtracting” the same key. The bits of the ciphertext are as random as those of the key and consequently do not contain any information. Although perfectly secure, the problem with this system is that it is essential for Alice and Bob to share a common secret key, at least as long as the message they want to exchange, and use it only for a single encryption. This key must be transmitted by some trusted means or personal meeting, which turns out to be complex and expensive.", "title": "" }, { "docid": "1dcc48994fada1b46f7b294e08f2ed5d", "text": "This paper presents an application-specific integrated processor for an angular estimation system that works with 9-D inertial measurement units. The application-specific instruction-set processor (ASIP) was implemented on field-programmable gate array and interfaced with a gyro-plus-accelerometer 6-D sensor and with a magnetic compass. Output data were recorded on a personal computer and also used to perform a live demo. During system modeling and design, it was chosen to represent angular position data with a quaternion and to use an extended Kalman filter as sensor fusion algorithm. For this purpose, a novel two-stage filter was designed: The first stage uses accelerometer data, and the second one uses magnetic compass data for angular position correction. 
This allows flexibility, less computational requirements, and robustness to magnetic field anomalies. The final goal of this work is to realize an upgraded application-specified integrated circuit that controls the microelectromechanical systems (MEMS) sensor and integrates the ASIP. This will allow the MEMS sensor gyro plus accelerometer and the angular estimation system to be contained in a single package; this system might optionally work with an external magnetic compass.", "title": "" }, { "docid": "222c51f079c785bb2aa64d2937e50ff0", "text": "Security and privacy in cloud computing are critical components for various organizations that depend on the cloud in their daily operations. Customers' data and the organizations' proprietary information have been subject to various attacks in the past. In this paper, we develop a set of Moving Target Defense (MTD) strategies that randomize the location of the Virtual Machines (VMs) to harden the cloud against a class of Multi-Armed Bandit (MAB) policy-based attacks. These attack policies capture the behavior of adversaries that seek to explore the allocation of VMs in the cloud and exploit the ones that provide the highest rewards (e.g., access to critical datasets, ability to observe credit card transactions, etc). We assess through simulation experiments the performance of our MTD strategies, showing that they can make MAB policy-based attacks no more effective than random attack policies. Additionally, we show the effects of critical parameters – such as discount factors, the time between randomizing the locations of the VMs and variance in the rewards obtained – on the performance of our defenses. We validate our results through simulations and a real OpenStack system implementation in our lab to assess migration times and down times under different system loads.", "title": "" }, { "docid": "cf999fc9b1a604dadfc720cf1bbfafdc", "text": "The characteristics of the extracellular polymeric substances (EPS) extracted with nine different extraction protocols from four different types of anaerobic granular sludge were studied. The efficiency of four physical (sonication, heating, cationic exchange resin (CER), and CER associated with sonication) and four chemical (ethylenediaminetetraacetic acid, ethanol, formaldehyde combined with heating, or NaOH) EPS extraction methods was compared to a control extraction protocols (i.e., centrifugation). The nucleic acid content and the protein/polysaccharide ratio of the EPS extracted show that the extraction does not induce abnormal cellular lysis. Chemical extraction protocols give the highest EPS extraction yields (calculated by the mass ratio between sludges and EPS dry weight (DW)). Infrared analyses as well as an extraction yield over 100% or organic carbon content over 1 g g−1 of DW revealed, nevertheless, a carry-over of the chemical extractants into the EPS extracts. The EPS of the anaerobic granular sludges investigated are predominantly composed of humic-like substances, proteins, and polysaccharides. The EPS content in each biochemical compound varies depending on the sludge type and extraction technique used. 
Some extraction techniques lead to a slightly preferential extraction of some EPS compounds, e.g., CER gives a higher protein yield.", "title": "" }, { "docid": "22719028c913aa4d0407352caf185d7a", "text": "Although the fact that genetic predisposition and environmental exposures interact to shape development and function of the human brain and, ultimately, the risk of psychiatric disorders has drawn wide interest, the corresponding molecular mechanisms have not yet been elucidated. We found that a functional polymorphism altering chromatin interaction between the transcription start site and long-range enhancers in the FK506 binding protein 5 (FKBP5) gene, an important regulator of the stress hormone system, increased the risk of developing stress-related psychiatric disorders in adulthood by allele-specific, childhood trauma–dependent DNA demethylation in functional glucocorticoid response elements of FKBP5. This demethylation was linked to increased stress-dependent gene transcription followed by a long-term dysregulation of the stress hormone system and a global effect on the function of immune cells and brain areas associated with stress regulation. This identification of molecular mechanisms of genotype-directed long-term environmental reactivity will be useful for designing more effective treatment strategies for stress-related disorders.", "title": "" }, { "docid": "44bd4ef644a18dc58a672eb91c873a98", "text": "Reactive oxygen species (ROS) contain one or more unpaired electrons and are formed as intermediates in a variety of normal biochemical reactions. However, when generated in excess amounts or not appropriately controlled, ROS initiate extensive cellular damage and tissue injury. ROS have been implicated in the progression of cancer, cardiovascular disease and neurodegenerative and neuroinflammatory disorders, such as multiple sclerosis (MS). In the last decade there has been a major interest in the involvement of ROS in MS pathogenesis and evidence is emerging that free radicals play a key role in various processes underlying MS pathology. To counteract ROS-mediated damage, the central nervous system is equipped with an intrinsic defense mechanism consisting of endogenous antioxidant enzymes. Here, we provide a comprehensive overview on the (sub)cellular origin of ROS during neuroinflammation as well as the detrimental effects of ROS in processing underlying MS lesion development and persistence. In addition, we will discuss clinical and experimental studies highlighting the therapeutic potential of antioxidant protection in the pathogenesis of MS.", "title": "" } ]
scidocsrr
03838238c643a539dbd5bda0dc913947
To Centralize or to Distribute: That Is the Question: A Comparison of Advanced Microgrid Management Systems
[ { "docid": "13ae30bc5bcb0714fe752fbe9c7e5de8", "text": "The increasing interest in integrating intermittent renewable energy sources into microgrids presents major challenges from the viewpoints of reliable operation and control. In this paper, the major issues and challenges in microgrid control are discussed, and a review of state-of-the-art control strategies and trends is presented; a general overview of the main control principles (e.g., droop control, model predictive control, multi-agent systems) is also included. The paper classifies microgrid control strategies into three levels: primary, secondary, and tertiary, where primary and secondary levels are associated with the operation of the microgrid itself, and tertiary level pertains to the coordinated operation of the microgrid and the host grid. Each control level is discussed in detail in view of the relevant existing technical literature.", "title": "" }, { "docid": "e3d1282b2ed8c9724cf64251df7e14df", "text": "This paper describes and evaluates the feasibility of control strategies to be adopted for the operation of a microgrid when it becomes isolated. Normally, the microgrid operates in interconnected mode with the medium voltage network; however, scheduled or forced isolation can take place. In such conditions, the microgrid must have the ability to operate stably and autonomously. An evaluation of the need of storage devices and load shedding strategies is included in this paper.", "title": "" }, { "docid": "3be99b1ef554fde94742021e4782a2aa", "text": "This is the second part of a two-part paper that has arisen from the work of the IEEE Power Engineering Society's Multi-Agent Systems (MAS) Working Group. Part I of this paper examined the potential value of MAS technology to the power industry, described fundamental concepts and approaches within the field of multi-agent systems that are appropriate to power engineering applications, and presented a comprehensive review of the power engineering applications for which MAS are being investigated. It also defined the technical issues which must be addressed in order to accelerate and facilitate the uptake of the technology within the power and energy sector. Part II of this paper explores the decisions inherent in engineering multi-agent systems for applications in the power and energy sector and offers guidance and recommendations on how MAS can be designed and implemented. Given the significant and growing interest in this field, it is imperative that the power engineering community considers the standards, tools, supporting technologies, and design methodologies available to those wishing to implement a MAS solution for a power engineering problem. This paper describes the various options available and makes recommendations on best practice. It also describes the problem of interoperability between different multi-agent systems and proposes how this may be tackled.", "title": "" } ]
[ { "docid": "38c9cee29ef1ba82e45556d87de1ff24", "text": "This paper presents a detailed characterization of the Hokuyo URG-04LX 2D laser range finder. While the sensor specifications only provide a rough estimation of the sensor accuracy, the present work analyzes issues such as time drift effects and dependencies on distance, target properties (color, brightness and material) as well as incidence angle. Since the sensor is intended to be used for measurements of a tubelike environment on an inspection robot, the characterization is extended by investigating the influence of the sensor orientation and dependency on lighting conditions. The sensor characteristics are compared to those of the Sick LMS 200 which is commonly used in robotic applications when size and weight are not critical constraints. The results show that the sensor accuracy is strongly depending on the target properties (color, brightness, material) and that it is consequently difficult to establish a calibration model. The paper also identifies cases for which the sensor returns faulty measurements, mainly when the surface has low reflectivity (dark surfaces, foam) or for high incidence angles on shiny surfaces. On the other hand, the repeatability of the sensor seems to be competitive with the LMS 200.", "title": "" }, { "docid": "525f9a7321a7b45111a19f458c9b976a", "text": "This paper provides a literature review on Adaptive Line Enhancer (ALE) methods based on adaptive noise cancellation systems. Such methods have been used in various applications, including communication systems, biomedical engineering, and industrial applications. Developments in ALE in noise cancellation are reviewed, including the principles, adaptive algorithms, and recent modifications on the filter design proposed to increase the convergence rate and reduce the computational complexity for future implementation. The advantages and drawbacks of various adaptive algorithms, such as the Least Mean Square, Recursive Least Square, Affine Projection Algorithm, and their variants, are discussed in this review. Design modifications of filter structures used in ALE are also evaluated. Such filters include Finite Impulse Response, Infinite Impulse Response, lattice, and nonlinear adaptive filters. These structural modifications aim to achieve better adaptive filter performance in ALE systems. Finally, a perspective of future research on ALE systems is presented for further consideration.", "title": "" }, { "docid": "8c1d51dd52bc14e8952d9e319eaacf16", "text": "This paper presents an approach to text recognition in natural scene images. Unlike most existing works which assume that texts are horizontal and frontal parallel to the image plane, our method is able to recognize perspective texts of arbitrary orientations. For individual character recognition, we adopt a bag-of-key points approach, in which Scale Invariant Feature Transform (SIFT) descriptors are extracted densely and quantized using a pre-trained vocabulary. Following [1, 2], the context information is utilized through lexicons. We formulate word recognition as finding the optimal alignment between the set of characters and the list of lexicon words. Furthermore, we introduce a new dataset called StreetViewText-Perspective, which contains texts in street images with a great variety of viewpoints. 
Experimental results on public datasets and the proposed dataset show that our method significantly outperforms the state-of-the-art on perspective texts of arbitrary orientations.", "title": "" }, { "docid": "7fd7aa4b2c721a06e3d21a2e5fe608e5", "text": "Self-organization can be approached in terms of developmental processes occurring within and between component systems of temperament. Within-system organization involves progressive shaping of cortical representations by subcortical motivational systems. As cortical representations develop, they feed back to provide motivational systems with enhanced detection and guidance capabilities. These reciprocal influences may amplify the underlying motivational functions and promote excessive impulsivity or anxiety. However, these processes also depend upon interactions arising between motivational and attentional systems. We discuss these between-system effects by considering the regulation of approach motivation by reactive attentional processes related to fear and by more voluntary processes related to effortful control. It is suggested than anxious and impulsive psychopathology may reflect limitations in these dual means of control, which can take the form of overregulation as well as underregulation.", "title": "" }, { "docid": "7b63daa48a700194f04293542c83bb20", "text": "BACKGROUND\nPresent treatment strategies for rheumatoid arthritis include use of disease-modifying antirheumatic drugs, but a minority of patients achieve a good response. We aimed to test the hypothesis that an improved outcome can be achieved by employing a strategy of intensive outpatient management of patients with rheumatoid arthritis--for sustained, tight control of disease activity--compared with routine outpatient care.\n\n\nMETHODS\nWe designed a single-blind, randomised controlled trial in two teaching hospitals. We screened 183 patients for inclusion. 111 were randomly allocated either intensive management or routine care. Primary outcome measures were mean fall in disease activity score and proportion of patients with a good response (defined as a disease activity score <2.4 and a fall in this score from baseline by >1.2). Analysis was by intention-to-treat.\n\n\nFINDINGS\nOne patient withdrew after randomisation and seven dropped out during the study. Mean fall in disease activity score was greater in the intensive group than in the routine group (-3.5 vs -1.9, difference 1.6 [95% CI 1.1-2.1], p<0.0001). Compared with routine care, patients treated intensively were more likely to have a good response (definition, 45/55 [82%] vs 24/55 [44%], odds ratio 5.8 [95% CI 2.4-13.9], p<0.0001) or be in remission (disease activity score <1.6; 36/55 [65%] vs 9/55 [16%], 9.7 [3.9-23.9], p<0.0001). Three patients assigned routine care and one allocated intensive management died during the study; none was judged attributable to treatment.\n\n\nINTERPRETATION\nA strategy of intensive outpatient management of rheumatoid arthritis substantially improves disease activity, radiographic disease progression, physical function, and quality of life at no additional cost.", "title": "" }, { "docid": "03c588f89216ee5b0b6392730fe2159f", "text": "In this paper, a three-port converter with three active full bridges, two series-resonant tanks, and a three-winding transformer is proposed. It uses a single power conversion stage with high-frequency link to control power flow between batteries, load, and a renewable source such as fuel cell. 
The converter has capabilities of bidirectional power flow in the battery and the load port. Use of series-resonance aids in high switching frequency operation with realizable component values when compared to existing three-port converter with only inductors. The converter has high efficiency due to soft-switching operation in all three bridges. Steady-state analysis of the converter is presented to determine the power flow equations, tank currents, and soft-switching region. Dynamic analysis is performed to design a closed-loop controller that will regulate the load-side port voltage and source-side port current. Design procedure for the three-port converter is explained and experimental results of a laboratory prototype are presented.", "title": "" }, { "docid": "a1e881c993ad507e16e55c952c6a47dc", "text": "Nowadays, most of the information available on the web is in Natural Language. Extracting such knowledge from Natural Language text is an essential work and a very remarkable research topic in the Semantic Web field. The logic programming language Prolog, based on the definite-clause formalism, is a useful tool for implementing a Natural Language Processing (NLP) systems. However, web-based services for NLP have also been developed recently, and they represent an important alternative to be considered. In this paper we present the comparison between two different approaches in NLP, for the automatic creation of an OWL ontology supporting the semantic annotation of text. The first one is a pure Prolog approach, based on grammar and logic analysis rules. The second one is based on Watson Relationship Extraction service of IBM Cloud platform Bluemix. We evaluate the two approaches in terms of performance, the quality of NLP result, OWL completeness and richness.", "title": "" }, { "docid": "18011cbde7d1a16da234c1e886371a6c", "text": "The increased prevalence of cardiovascular disease among the aging population has prompted greater interest in the field of smart home monitoring and unobtrusive cardiac measurements. This paper introduces the design of a capacitive electrocardiogram (ECG) sensor that measures heart rate with no conscious effort from the user. The sensor consists of two active electrodes and an analog processing circuit that is low cost and customizable to the surfaces of common household objects. Prototype testing was performed in a home laboratory by embedding the sensor into a couch, walker, office and dining chairs. The sensor produced highly accurate heart rate measurements (<; 2.3% error) via either direct skin contact or through one and two layers of clothing. The sensor requires no gel dielectric and no grounding electrode, making it particularly suited to the “zero-effort” nature of an autonomous smart home environment. Motion artifacts caused by deviations in body contact with the electrodes were identified as the largest source of unreliability in continuous ECG measurements and will be a primary focus in the next phase of this project.", "title": "" }, { "docid": "3bea5eeea1e3b74917ea25c98b169289", "text": "Dissociation as a clinical psychiatric condition has been defined primarily in terms of the fragmentation and splitting of the mind, and perception of the self and the body. Its clinical manifestations include altered perceptions and behavior, including derealization, depersonalization, distortions of perception of time, space, and body, and conversion hysteria. 
Using examples of animal models, and the clinical features of the whiplash syndrome, we have developed a model of dissociation linked to the phenomenon of freeze/immobility. Also employing current concepts of the psychobiology of posttraumatic stress disorder (PTSD), we propose a model of PTSD linked to cyclical autonomic dysfunction, triggered and maintained by the laboratory model of kindling, and perpetuated by increasingly profound dorsal vagal tone and endorphinergic reward systems. These physiologic events in turn contribute to the clinical state of dissociation. The resulting autonomic dysregulation is presented as the substrate for a diverse group of chronic diseases of unknown origin.", "title": "" }, { "docid": "d46434bbbf73460bf422ebe4bd65b590", "text": "We present an efficient block-diagonal approximation to the Gauss-Newton matrix for feedforward neural networks. Our resulting algorithm is competitive against state-of-the-art first-order optimisation methods, with sometimes significant improvement in optimisation performance. Unlike first-order methods, for which hyperparameter tuning of the optimisation parameters is often a laborious process, our approach can provide good performance even when used with default settings. A side result of our work is that for piecewise linear transfer functions, the network objective function can have no differentiable local maxima, which may partially explain why such transfer functions facilitate effective optimisation.", "title": "" }, { "docid": "8ab5ae25073b869ea28fc25df3cfdf5f", "text": "We present the TurkuNLP entry to the BioNLP Shared Task 2016 Bacteria Biotopes event extraction (BB3-event) subtask. We propose a deep learningbased approach to event extraction using a combination of several Long Short-Term Memory (LSTM) networks over syntactic dependency graphs. Features for the proposed neural network are generated based on the shortest path connecting the two candidate entities in the dependency graph. We further detail how this network can be efficiently trained to have good generalization performance even when only a very limited number of training examples are available and part-of-speech (POS) and dependency type feature representations must be learned from scratch. Our method ranked second among the entries to the shared task, achieving an F-score of 52.1% with 62.3% precision and 44.8% recall.", "title": "" }, { "docid": "19d35c0f4e3f0b90d0b6e4d925a188e4", "text": "This paper presents a new approach to the computer aided diagnosis (CAD) of diabetic retinopathy (DR)—a common and severe complication of long-term diabetes which damages the retina and cause blindness. Since microaneurysms are regarded as the first signs of DR, there has been extensive research on effective detection and localization of these abnormalities in retinal images. In contrast to existing algorithms, a new approach based on multi-scale correlation filtering (MSCF) and dynamic thresholding is developed. This consists of two levels, microaneurysm candidate detection (coarse level) and true microaneurysm classification (fine level). The approach was evaluated based on two public datasets—ROC (retinopathy on-line challenge, http://roc.healthcare.uiowa.edu) and DIARETDB1 (standard diabetic retinopathy database, http://www.it.lut.fi/project/imageret/diaretdb1). We conclude our method to be effective and efficient. & 2010 Elsevier Ltd. 
All rights reserved.", "title": "" }, { "docid": "f20bbbd06561f9cde0f1d538667635e2", "text": "Artificial neural networks are finding many uses in the medical diagnosis application. The goal of this paper is to evaluate artificial neural network in disease diagnosis. Two cases are studied. The first one is acute nephritis disease; data is the disease symptoms. The second is the heart disease; data is on cardiac Single Proton Emission Computed Tomography (SPECT) images. Each patient classified into two categories: infected and non-infected. Classification is an important tool in medical diagnosis decision support. Feed-forward back propagation neural network is used as a classifier to distinguish between infected or non-infected person in both cases. The results of applying the artificial neural networks methodology to acute nephritis diagnosis based upon selected symptoms show abilities of the network to learn the patterns corresponding to symptoms of the person. In this study, the data were obtained from UCI machine learning repository in order to diagnosed diseases. The data is separated into inputs and targets. The targets for the neural network will be identified with 1's as infected and will be identified with 0's as non-infected. In the diagnosis of acute nephritis disease; the percent correctly classified in the simulation sample by the feed-forward back propagation network is 99 percent while in the diagnosis of heart disease; the percent correctly classified in the simulation sample by the feed-forward back propagation network is 95 percent.", "title": "" }, { "docid": "0757280353e6e1bd73b3d1cd11f6b031", "text": "OBJECTIVE\nTo investigate seasonal patterns in mood and behavior and estimate the prevalence of seasonal affective disorder (SAD) and subsyndromal seasonal affective disorder (S-SAD) in the Icelandic population.\n\n\nPARTICIPANTS AND SETTING\nA random sample generated from the Icelandic National Register, consisting of 1000 men and women aged 17 to 67 years from all parts of Iceland. It represents 6.4 per million of the Icelandic population in this age group.\n\n\nDESIGN\nThe Seasonal Pattern Assessment Questionnaire, an instrument for investigating mood and behavioral changes with the seasons, was mailed to a random sample of the Icelandic population. The data were compared with results obtained with similar methods in populations in the United States.\n\n\nMAIN OUTCOME MEASURES\nSeasonality score and prevalence rates of seasonal affective disorder and subsyndromal seasonal affective disorder.\n\n\nRESULTS\nThe prevalence of SAD and S-SAD were estimated at 3.8% and 7.5%, respectively, which is significantly lower than prevalence rates obtained with the same method on the east coast of the United States (chi 2 = 9.29 and 7.3; P < .01). The standardized rate ratios for Iceland compared with the United States were 0.49 and 0.63 for SAD and S-SAD, respectively. No case of summer SAD was found.\n\n\nCONCLUSIONS\nSeasonal affective disorder and S-SAD are more common in younger individuals and among women. The weight gained by patients during the winter does not seem to result in chronic obesity. The prevalence of SAD and S-SAD was lower in Iceland than on the East Coast of the United States, in spite of Iceland's more northern latitude. These results are unexpected since the prevalence of these disorders has been found to increase in more northern latitudes. The Icelandic population has remained remarkably isolated during the past 1000 years. 
It is conceivable that persons with a predisposition to SAD have been at a disadvantage and that there may have been a population selection toward increased tolerance of winter darkness.", "title": "" }, { "docid": "124c649cc8dc2d04e28043257ed8ddd4", "text": "TECSAR satellite is part of a spaceborne synthetic-aperture-radar (SAR) satellite technology demonstration program. The purpose of this program is to develop and evaluate the technologies required to achieve high-resolution images combined with large-area coverage. These requirements can be fulfilled by designing a satellite with multimode operation. The TECSAR satellite is developed by the MBT Space Division, Israel Aerospace Industries, acting as a prime contractor, which develops the satellite bus, and by ELTA Systems Ltd., which develops the SAR payload. This paper reviews the TECSAR radar system design, which enables to perform a variety of operational modes. It also describes the unique hardware components: deployable parabolic mesh antenna, multitube transmitter, and data-link transmission unit. The unique mosaic mode is presented. It is shown that this mode is the spot version of the scan mode.", "title": "" }, { "docid": "62c515d4b96f123b585a92a5aa919792", "text": "OBJECTIVE\nTo investigate the characteristics of the laryngeal mucosal microvascular network in suspected laryngeal cancer patients, using narrow band imaging, and to evaluate the value of narrow band imaging endoscopy in the early diagnosis of laryngeal precancerous and cancerous lesions.\n\n\nPATIENTS AND METHODS\nEighty-five consecutive patients with suspected precancerous or cancerous laryngeal lesions were enrolled in the study. Endoscopic narrow band imaging findings were classified into five types (I to V) according to the features of the mucosal intraepithelial papillary capillary loops assessed.\n\n\nRESULTS\nA total of 104 lesions (45 malignancies and 59 nonmalignancies) was detected under white light and narrow band imaging modes. The sensitivity and specificity of narrow band imaging in detecting malignant lesions were 88.9 and 93.2 per cent, respectively. The intraepithelial papillary capillary loop classification, as determined by narrow band imaging, was closely associated with the laryngeal lesions' histological findings. Type I to IV lesions were considered nonmalignant and type V lesions malignant. For type Va lesions, the sensitivity and specificity of narrow band imaging in detecting severe dysplasia or carcinoma in situ were 100 and 79.5 per cent, respectively. In patients with type Vb and Vc lesions, the sensitivity and specificity of narrow band imaging in detecting invasive carcinoma were 83.8 and 100 per cent, respectively.\n\n\nCONCLUSION\nNarrow band imaging is a promising approach enabling in vivo differentiation of nonmalignant from malignant laryngeal lesions by evaluating the morphology of mucosal capillaries. These results suggest endoscopic narrow band imaging may be useful in the early detection of laryngeal cancer and precancerous lesions.", "title": "" }, { "docid": "dfcb51bd990cce7fb7abfe8802dc0c4e", "text": "In this paper, we describe the machine learning approach we used in the context of the Automatic Cephalometric X-Ray Landmark Detection Challenge. Our solution is based on the use of ensembles of Extremely Randomized Trees combined with simple pixel-based multi-resolution features. 
By carefully tuning method parameters with cross-validation, our approach could reach detection rates ≥ 90% at an accuracy of 2.5mm for 8 landmarks. Our experiments show however a high variability between the different landmarks, with some landmarks detected at a much lower rate than others.", "title": "" }, { "docid": "6228498fed5b26c0def578251aa1c749", "text": "Observation-Level Interaction (OLI) is a sensemaking technique relying upon the interactive semantic exploration of data. By manipulating data items within a visualization, users provide feedback to an underlying mathematical model that projects multidimensional data into a meaningful two-dimensional representation. In this work, we propose, implement, and evaluate an OLI model which explicitly defines clusters within this data projection. These clusters provide targets against which data values can be manipulated. The result is a cooperative framework in which the layout of the data affects the clusters, while user-driven interactions with the clusters affect the layout of the data points. Additionally, this model addresses the OLI \"with respect to what\" problem by providing a clear set of clusters against which interaction targets are judged and computed.", "title": "" } ]
scidocsrr
c92e25f5d839b9fe1b8e7685305320fc
A novel paradigm for calculating Ramsey number via Artificial Bee Colony Algorithm
[ { "docid": "828c54f29339e86107f1930ae2a5e77f", "text": "Artificial bee colony (ABC) algorithm is an optimization algorithm based on a particular intelligent behaviour of honeybee swarms. This work compares the performance of ABC algorithm with that of differential evolution (DE), particle swarm optimization (PSO) and evolutionary algorithm (EA) for multi-dimensional numeric problems. The simulation results show that the performance of ABC algorithm is comparable to those of the mentioned algorithms and can be efficiently employed to solve engineering problems with high dimensionality. # 2007 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "35ab98f6e5b594261e52a21740c70336", "text": "Artificial Bee Colony (ABC) algorithm which is one of the most recently introduced optimization algorithms, simulates the intelligent foraging behavior of a honey bee swarm. Clustering analysis, used in many disciplines and applications, is an important tool and a descriptive task seeking to identify homogeneous groups of objects based on the values of their attributes. In this work, ABC is used for data clustering on benchmark problems and the performance of ABC algorithm is compared with Particle Swarm Optimization (PSO) algorithm and other nine classification techniques from the literature. Thirteen of typical test data sets from the UCI Machine Learning Repository are used to demonstrate the results of the techniques. The simulation results indicate that ABC algorithm can efficiently be used for multivariate data clustering. © 2009 Elsevier B.V. All rights reserved.", "title": "" } ]
[ { "docid": "4847eb4451c597d4656cf48c242cf252", "text": "Despite the independent evolution of multicellularity in plants and animals, the basic organization of their stem cell niches is remarkably similar. Here, we report the genome-wide regulatory potential of WUSCHEL, the key transcription factor for stem cell maintenance in the shoot apical meristem of the reference plant Arabidopsis thaliana. WUSCHEL acts by directly binding to at least two distinct DNA motifs in more than 100 target promoters and preferentially affects the expression of genes with roles in hormone signaling, metabolism, and development. Striking examples are the direct transcriptional repression of CLAVATA1, which is part of a negative feedback regulation of WUSCHEL, and the immediate regulation of transcriptional repressors of the TOPLESS family, which are involved in auxin signaling. Our results shed light on the complex transcriptional programs required for the maintenance of a dynamic and essential stem cell niche.", "title": "" }, { "docid": "9c0d65ee42ccfaa291b576568bad59e0", "text": "BACKGROUND\nThe WHO International Classification of Diseases, 11th version (ICD-11), has proposed two related diagnoses following exposure to traumatic events; Posttraumatic Stress Disorder (PTSD) and Complex PTSD (CPTSD). We set out to explore whether the newly developed ICD-11 Trauma Questionnaire (ICD-TQ) can distinguish between classes of individuals according to the PTSD and CPTSD symptom profiles as per ICD-11 proposals based on latent class analysis. We also hypothesized that the CPTSD class would report more frequent and a greater number of different types of childhood trauma as well as higher levels of functional impairment. Methods Participants in this study were a sample of individuals who were referred for psychological therapy to a National Health Service (NHS) trauma centre in Scotland (N=193). Participants completed the ICD-TQ as well as measures of life events and functioning.\n\n\nRESULTS\nOverall, results indicate that using the newly developed ICD-TQ, two subgroups of treatment-seeking individuals could be empirically distinguished based on different patterns of symptom endorsement; a small group high in PTSD symptoms only and a larger group high in CPTSD symptoms. In addition, CPTSD was more strongly associated with more frequent and a greater accumulation of different types of childhood traumatic experiences and poorer functional impairment.\n\n\nLIMITATIONS\nSample predominantly consisted of people who had experienced childhood psychological trauma or been multiply traumatised in childhood and adulthood.\n\n\nCONCLUSIONS\nCPTSD is highly prevalent in treatment seeking populations who have been multiply traumatised in childhood and adulthood and appropriate interventions should now be developed to aid recovery from this debilitating condition.", "title": "" }, { "docid": "ac52504a90be9cd685a10f73603d3776", "text": "Unsupervised domain adaption aims to learn a powerful classifier for the target domain given a labeled source data set and an unlabeled target data set. To alleviate the effect of ‘domain shift’, the major challenge in domain adaptation, studies have attempted to align the distributions of the two domains. Recent research has suggested that generative adversarial network (GAN) has the capability of implicitly capturing data distribution. In this paper, we thus propose a simple but effective model for unsupervised domain adaption leveraging adversarial learning. 
The same encoder is shared between the source and target domains which is expected to extract domain-invariant representations with the help of an adversarial discriminator. With the labeled source data, we introduce the center loss to increase the discriminative power of feature learned. We further align the conditional distribution of the two domains to enforce the discrimination of the features in the target domain. Unlike previous studies where the source features are extracted with a fixed pre-trained encoder, our method jointly learns feature representations of two domains. Moreover, by sharing the encoder, the model does not need to know the source of images during testing and hence is more widely applicable. We evaluate the proposed method on several unsupervised domain adaption benchmarks and achieve superior or comparable performance to state-of-the-art results.", "title": "" }, { "docid": "1aac7dedc18b437966b31cf04f1b7efc", "text": "Massive open online courses (MOOCs) continue to appear across the higher education landscape, originating from many institutions in the USA and around the world. MOOCs typically have low completion rates, at least when compared with traditional courses, as this course delivery model is very different from traditional, fee-based models, such as college courses. This research examined MOOC student demographic data, intended behaviours and course interactions to better understand variables that are indicative of MOOC completion. The results lead to ideas regarding how these variables can be used to support MOOC students through the application of learning analytics tools and systems.", "title": "" }, { "docid": "575d8fed62c2afa1429d16444b6b173c", "text": "Research into learning and teaching in higher education over the last 25 years has provided a variety of concepts, methods, and findings that are of both theoretical interest and practical relevance. It has revealed the relationships between students’ approaches to studying, their conceptions of learning, and their perceptions of their academic context. It has revealed the relationships between teachers’ approaches to teaching, their conceptions of teaching, and their perceptions of the teaching environment. And it has provided a range of tools that can be exploited for developing our understanding of learning and teaching in particular contexts and for assessing and enhancing the student experience on specific courses and programs.", "title": "" }, { "docid": "12d4c8ff1072fece3fea7eeac43c3fc5", "text": "Multi-agent path finding (MAPF) is well-studied in artificial intelligence, robotics, theoretical computer science and operations research. We discuss issues that arise when generalizing MAPF methods to real-world scenarios and four research directions that address them. We emphasize the importance of addressing these issues as opposed to developing faster methods for the standard formulation of the MAPF problem.", "title": "" }, { "docid": "e94f453a3301ca86bed19162ad1cb6e1", "text": "Linux scheduling is based on the time-sharing technique already introduced in the section \"CPU's Time Sharing\" in Chapter 5, Timing Measurements: several processes are allowed to run \"concurrently,\" which means that the CPU time is roughly divided into \"slices,\" one for each runnable process.[1] Of course, a single processor can run only one process at any given instant. If a currently running process is not terminated when its time slice or quantum expires, a process switch may take place. 
Time-sharing relies on timer interrupts and is thus transparent to processes. No additional code needs to be inserted in the programs in order to ensure CPU time-sharing.", "title": "" }, { "docid": "4d69284c25e1a9a503dd1c12fde23faa", "text": "Human pose estimation has been actively studied for decades. While traditional approaches rely on 2d data like images or videos, the development of Time-of-Flight cameras and other depth sensors created new opportunities to advance the field. We give an overview of recent approaches that perform human motion analysis which includes depth-based and skeleton-based activity recognition, head pose estimation, facial feature detection, facial performance capture, hand pose estimation and hand gesture recognition. While the focus is on approaches using depth data, we also discuss traditional image based methods to provide a broad overview of recent developments in these areas.", "title": "" }, { "docid": "2c266af949495f7cd32b8abdf1a04803", "text": "Humans rely on eye gaze and hand manipulations extensively in their everyday activities. Most often, users gaze at an object to perceive it and then use their hands to manipulate it. We propose applying a multimodal, gaze plus free-space gesture approach to enable rapid, precise and expressive touch-free interactions. We show the input methods are highly complementary, mitigating issues of imprecision and limited expressivity in gaze-alone systems, and issues of targeting speed in gesture-alone systems. We extend an existing interaction taxonomy that naturally divides the gaze+gesture interaction space, which we then populate with a series of example interaction techniques to illustrate the character and utility of each method. We contextualize these interaction techniques in three example scenarios. In our user study, we pit our approach against five contemporary approaches; results show that gaze+gesture can outperform systems using gaze or gesture alone, and in general, approach the performance of "gold standard" input systems, such as the mouse and trackpad.", "title": "" }, { "docid": "ceb42399b7cd30b15d27c30d7c4b57b6", "text": "In this letter, the performance of non-orthogonal multiple access (NOMA) is investigated from an information-theoretic perspective. The relationships among the capacity region of broadcast channels and two rate regions achieved by NOMA and time-division multiple access (TDMA) are illustrated first. Then, the performance of NOMA is evaluated by considering TDMA as the benchmark, where both the sum rate and the individual user rates are used as the criteria. In a wireless downlink scenario with user pairing, the developed analytical results show that NOMA can outperform TDMA not only for the sum rate but also for each user's individual rate, particularly when the difference between the users' channels is large. I. INTRODUCTION Because of its superior spectral efficiency, non-orthogonal multiple access (NOMA) has been recognized as a promising technique to be used in the fifth generation (5G) networks [1] – [4]. NOMA utilizes the power domain for achieving multiple access, i.e., different users are served at different power levels. Unlike conventional orthogonal MA, such as time-division multiple access (TDMA), NOMA faces strong co-channel interference between different users, and successive interference cancellation (SIC) is used by the NOMA users with better channel conditions for interference management.
The concept of NOMA is essentially a special case of superposition coding developed for broadcast channels (BC). Cover first found the capacity region of a degraded discrete memoryless BC by using superposition coding [5]. Then, the capacity region of the Gaussian BC with single-antenna terminals was established in [6]. Moreover, the capacity region of the multiple-input multiple-output (MIMO) Gaussian BC was found in [7], by applying dirty paper coding (DPC) instead of superposition coding. This paper mainly focuses on the single-antenna scenario. Specifically, consider a Gaussian BC with a single-antenna transmitter and two single-antenna receivers, where each receiver is corrupted by additive Gaussian noise with unit variance. Denote the ordered channel gains from the transmitter to the two receivers by hw and hb, i.e., |hw| < |hb|. For a given channel pair (hw, hb), the capacity region is given by [6] C ≜ ⋃_{a1+a2=1, a1, a2 ≥ 0} { (R1, R2) : R1, R2 ≥ 0, R1 ≤ log2(1 + a1x/(1 + a2x)), R2 ≤ log2(1 + a2y) }", "title": "" }, { "docid": "8b5b4950177030e7664d57724acd52a3", "text": "With the fast development of industrial Internet of things (IIoT), a large amount of data is being generated continuously by different sources. Storing all the raw data in the IIoT devices locally is unwise considering that the end devices' energy and storage spaces are strictly limited. In addition, the devices are unreliable and vulnerable to many threats because the networks may be deployed in remote and unattended areas. In this paper, we discuss the emerging challenges in the aspects of data processing, secure data storage, efficient data retrieval and dynamic data collection in IIoT. Then, we design a flexible and economical framework to solve the problems above by integrating the fog computing and cloud computing. Based on the time latency requirements, the collected data are processed and stored by the edge server or the cloud server. Specifically, all the raw data are first preprocessed by the edge server and then the time-sensitive data (e.g., control information) are used and stored locally. The non-time-sensitive data (e.g., monitored data) are transmitted to the cloud server to support data retrieval and mining in the future. A series of experiments and simulation are conducted to evaluate the performance of our scheme. The results illustrate that the proposed framework can greatly improve the efficiency and security of data storage and retrieval in IIoT.", "title": "" }, { "docid": "dc9a92313c58b5e688a3502b994e6d3a", "text": "This paper explores the application of Activity-Based Costing and Activity-Based Management in e-commerce. The proposed application may lead to better firm performance of many companies in offering their products and services over the Internet. A case study of a fictitious Business-to-Customer (B2C) company is used to illustrate the proposed structured implementation procedure and effects of an Activity-Based Costing analysis. The analysis is performed by using matrixes in order to trace overhead. The Activity-Based Costing analysis is then used to demonstrate operational and strategic Activity-Based Management in e-commerce.", "title": "" }, { "docid": "e3566963e4307c15086a54afe7661f32", "text": "Next-generation wireless networks must support ultra-reliable, low-latency communication and intelligently manage a massive number of Internet of Things (IoT) devices in real-time, within a highly dynamic environment.
This need for stringent communication quality-of-service (QoS) requirements as well as mobile edge and core intelligence can only be realized by integrating fundamental notions of artificial intelligence (AI) and machine learning across the wireless infrastructure and end-user devices. In this context, this paper provides a comprehensive tutorial that introduces the main concepts of machine learning, in general, and artificial neural networks (ANNs), in particular, and their potential applications in wireless communications. For this purpose, we present a comprehensive overview on a number of key types of neural networks that include feed-forward, recurrent, spiking, and deep neural networks. For each type of neural network, we present the basic architecture and training procedure, as well as the associated challenges and opportunities. Then, we provide an in-depth overview on the variety of wireless communication problems that can be addressed using ANNs, ranging from communication using unmanned aerial vehicles to virtual reality and edge caching.For each individual application, we present the main motivation for using ANNs along with the associated challenges while also providing a detailed example for a use case scenario and outlining future works that can be addressed using ANNs. In a nutshell, this article constitutes one of the first holistic tutorials on the development of machine learning techniques tailored to the needs of future wireless networks. This research was supported by the U.S. National Science Foundation under Grants CNS-1460316 and IIS-1633363. ar X iv :1 71 0. 02 91 3v 1 [ cs .I T ] 9 O ct 2 01 7", "title": "" }, { "docid": "ea8685f27096f3e3e589ea8af90e78f5", "text": "Acoustic data transmission is a technique to embed the data in a sound wave imperceptibly and to detect it at the receiver. This letter proposes a novel acoustic data transmission system designed based on the modulated complex lapped transform (MCLT). In the proposed system, data is embedded in an audio file by modifying the phases of the original MCLT coefficients. The data can be transmitted by playing the embedded audio and extracting it from the received audio. By embedding the data in the MCLT domain, the perceived quality of the resulting audio could be kept almost similar as the original audio. The system can transmit data at several hundreds of bits per second (bps), which is sufficient to deliver some useful short messages.", "title": "" }, { "docid": "a0f8af71421d484cbebb550a0bf59a6d", "text": "researchers and practitioners doing work in these three related areas. Risk management, fraud detection, and intrusion detection all involve monitoring the behavior of populations of users (or their accounts) to estimate, plan for, avoid, or detect risk. In his paper, Til Schuermann (Oliver, Wyman, and Company) categorizes risk into market risk, credit risk, and operating risk (or fraud). Similarly, Barry Glasgow (Metropolitan Life Insurance Co.) discusses inherent risk versus fraud. This workshop focused primarily on what might loosely be termed “improper behavior,” which includes fraud, intrusion, delinquency, and account defaulting. However, Glasgow does discuss the estimation of “inherent risk,” which is the bread and butter of insurance firms. Problems of predicting, preventing, and detecting improper behavior share characteristics that complicate the application of existing AI and machine-learning technologies. 
In particular, these problems often have or require more than one of the following that complicate the technical problem of automatically learning predictive models: large volumes of (historical) data, highly skewed distributions (“improper behavior” occurs far less frequently than “proper behavior”), changing distributions (behaviors change over time), widely varying error costs (in certain contexts, false positive errors are far more costly than false negatives), costs that change over time, adaptation of undesirable behavior to detection techniques, changing patterns of legitimate behavior, the trad■ The 1997 AAAI Workshop on AI Approaches to Fraud Detection and Risk Management brought together over 50 researchers and practitioners to discuss problems of fraud detection, computer intrusion detection, and risk scoring. This article presents highlights, including discussions of problematic issues that are common to these application domains, and proposed solutions that apply a variety of AI techniques.", "title": "" }, { "docid": "c21a1a07918d86dab06d84e0e4e7dc05", "text": "Big data potential value across business sectors has received tremendous attention from the practitioner and academia world. The huge amount of data collected in different forms in organizations promises to radically transform the business landscape globally. The impact of big data, which is spreading across all business sectors, has potential to create new opportunities for growth. With organizations now able to store huge diverse amounts of data from different sources and forms, big data is expected to deliver tremendous value across business sectors. This paper focuses on building a business case for big data adoption in organizations. This paper discusses some of the opportunities and potential benefits associated with big data adoption across various business sectors globally. The discussion is important for making a business case for big data investment in organizations, which is major challenge for its adoption globally. The paper uses the IT strategic grid to understand the current and future potential benefits of big data for different business sectors. The results of the study suggest that there is no one-size-fits-all to big data adoption potential benefits in organizations.", "title": "" }, { "docid": "636851f2fc41fbeb488d27c813d175dc", "text": "We propose DropMax, a stochastic version of softmax classifier which at each iteration drops non-target classes according to dropout probabilities adaptively decided for each instance. Specifically, we overlay binary masking variables over class output probabilities, which are input-adaptively learned via variational inference. This stochastic regularization has an effect of building an ensemble classifier out of exponentially many classifiers with different decision boundaries. Moreover, the learning of dropout rates for non-target classes on each instance allows the classifier to focus more on classification against the most confusing classes. We validate our model on multiple public datasets for classification, on which it obtains significantly improved accuracy over the regular softmax classifier and other baselines. Further analysis of the learned dropout probabilities shows that our model indeed selects confusing classes more often when it performs classification.", "title": "" }, { "docid": "6cfdad2bb361713616dd2971026758a7", "text": "We consider the problem of controlling a system with unknown, stochastic dynamics to achieve a complex, time-sensitive task. 
An example of this problem is controlling a noisy aerial vehicle with partially known dynamics to visit a pre-specified set of regions in any order while avoiding hazardous areas. In particular, we are interested in tasks which can be described by signal temporal logic (STL) specifications. STL is a rich logic that can be used to describe tasks involving bounds on physical parameters, continuous time bounds, and logical relationships over time and states. STL is equipped with a continuous measure called the robustness degree that measures how strongly a given sample path exhibits an STL property [4, 3]. This measure enables the use of continuous optimization problems to solve learning [7, 6] or formal synthesis problems [9] involving STL.", "title": "" }, { "docid": "a58130841813814dacd7330d04efe735", "text": "Under-reporting of food intake is one of the fundamental obstacles preventing the collection of accurate habitual dietary intake data. The prevalence of under-reporting in large nutritional surveys ranges from 18 to 54% of the whole sample, but can be as high as 70% in particular subgroups. This wide variation between studies is partly due to different criteria used to identify under-reporters and also to non-uniformity of under-reporting across populations. The most consistent differences found are between men and women and between groups differing in body mass index. Women are more likely to under-report than men, and under-reporting is more common among overweight and obese individuals. Other associated characteristics, for which there is less consistent evidence, include age, smoking habits, level of education, social class, physical activity and dietary restraint. Determining whether under-reporting is specific to macronutrients or food is problematic, as most methods identify only low energy intakes. Studies that have attempted to measure under-reporting specific to macronutrients express nutrients as percentage of energy and have tended to find carbohydrate under-reported and protein over-reported. However, care must be taken when interpreting these results, especially when data are expressed as percentages. A logical conclusion is that food items with a negative health image (e.g. cakes, sweets, confectionery) are more likely to be under-reported, whereas those with a positive health image are more likely to be over-reported (e.g. fruits and vegetables). This also suggests that dietary fat is likely to be under-reported. However, it is necessary to distinguish between under-reporting and genuine under-eating for the duration of data collection. The key to understanding this problem, but one that has been widely neglected, concerns the processes that cause people to under-report their food intakes. The little work that has been done has simply confirmed the complexity of this issue. The importance of obtaining accurate estimates of habitual dietary intakes so as to assess health correlates of food consumption can be contrasted with the poor quality of data collected. This phenomenon should be considered a priority research area. Moreover, misreporting is not simply a nutritionist's problem, but requires a multidisciplinary approach (including psychology, sociology and physiology) to advance the understanding of under-reporting in dietary intake studies.", "title": "" }, { "docid": "80e0a6c270bb146a1a45994d27340639", "text": "BACKGROUND\nThe promotion of active and healthy ageing is becoming increasingly important as the population ages. 
Physical activity (PA) significantly reduces all-cause mortality and contributes to the prevention of many chronic illnesses. However, the proportion of people globally who are active enough to gain these health benefits is low and decreases with age. Social support (SS) is a social determinant of health that may improve PA in older adults, but the association has not been systematically reviewed. This review had three aims: 1) Systematically review and summarise studies examining the association between SS, or loneliness, and PA in older adults; 2) clarify if specific types of SS are positively associated with PA; and 3) investigate whether the association between SS and PA differs between PA domains.\n\n\nMETHODS\nQuantitative studies examining a relationship between SS, or loneliness, and PA levels in healthy, older adults over 60 were identified using MEDLINE, PSYCInfo, SportDiscus, CINAHL and PubMed, and through reference lists of included studies. Quality of these studies was rated.\n\n\nRESULTS\nThis review included 27 papers, of which 22 were cross sectional studies, three were prospective/longitudinal and two were intervention studies. Overall, the study quality was moderate. Four articles examined the relation of PA with general SS, 17 with SS specific to PA (SSPA), and six with loneliness. The results suggest that there is a positive association between SSPA and PA levels in older adults, especially when it comes from family members. No clear associations were identified between general SS, SSPA from friends, or loneliness and PA levels. When measured separately, leisure time PA (LTPA) was associated with SS in a greater percentage of studies than when a number of PA domains were measured together.\n\n\nCONCLUSIONS\nThe evidence surrounding the relationship between SS, or loneliness, and PA in older adults suggests that people with greater SS for PA are more likely to do LTPA, especially when the SS comes from family members. However, high variability in measurement methods used to assess both SS and PA in included studies made it difficult to compare studies.", "title": "" } ]
scidocsrr
d7dcdb0f375f3cd055764fb1951a7241
AND: Autoregressive Novelty Detectors
[ { "docid": "5d80ce0bffd5bc2016aac657669a98de", "text": "Information and Communication Technology (ICT) has a great impact on social wellbeing, economic growth and national security in todays world. Generally, ICT includes computers, mobile communication devices and networks. ICT is also embraced by a group of people with malicious intent, also known as network intruders, cyber criminals, etc. Confronting these detrimental cyber activities is one of the international priorities and important research area. Anomaly detection is an important data analysis task which is useful for identifying the network intrusions. This paper presents an in-depth analysis of four major categories of anomaly detection techniques which include classification, statistical, information theory and clustering. The paper also discusses research challenges with the datasets used for network intrusion detection. & 2015 Published by Elsevier Ltd.", "title": "" }, { "docid": "a7456ecf7af7e447cdde61f371128965", "text": "For most deep learning practitioners, sequence modeling is synonymous with recurrent networks. Yet recent results indicate that convolutional architectures can outperform recurrent networks on tasks such as audio synthesis and machine translation. Given a new sequence modeling task or dataset, which architecture should one use? We conduct a systematic evaluation of generic convolutional and recurrent architectures for sequence modeling. The models are evaluated across a broad range of standard tasks that are commonly used to benchmark recurrent networks. Our results indicate that a simple convolutional architecture outperforms canonical recurrent networks such as LSTMs across a diverse range of tasks and datasets, while demonstrating longer effective memory. We conclude that the common association between sequence modeling and recurrent networks should be reconsidered, and convolutional networks should be regarded as a natural starting point for sequence modeling tasks. To assist related work, we have made code available at http://github.com/locuslab/TCN.", "title": "" } ]
[ { "docid": "bba81ac392b87a123a1e2f025bffd30c", "text": "This paper presents a new multi-objective deep reinforcement learning (MODRL) framework based on deep Q-networks. We propose the use of linear and non-linear methods to develop the MODRL framework that includes both single-policy and multi-policy strategies. The experimental results on two benchmark problems including the two-objective deep sea treasure environment and the three-objective mountain car problem indicate that the proposed framework is able to converge to the optimal Pareto solutions effectively. The proposed framework is generic, which allows implementation of different deep reinforcement learning algorithms in different complex environments. This therefore overcomes many difficulties involved with standard multi-objective reinforcement learning (MORL) methods existing in the current literature. The framework creates a platform as a testbed environment to develop methods for solving various problems associated with the current MORL. Details of the framework implementation can be referred to http://www.deakin.edu.au/~thanhthi/drl.htm.", "title": "" }, { "docid": "b7961e6b82ca38e65fcfefcb5309bd46", "text": "IMPORTANCE\nCryolipolysis is the noninvasive reduction of fat with localized cutaneous cooling. Since initial introduction, over 650,000 cryolipolysis treatment cycles have been performed worldwide. We present a previously unreported, rare adverse effect following cryolipolysis: paradoxical adipose hyperplasia.\n\n\nOBSERVATIONS\nA man in his 40s underwent a single cycle of cryolipolysis to his abdomen. Three months following his treatment, a gradual enlargement of the treatment area was noted. This enlargement was a large, well-demarcated subcutaneous mass, slightly tender to palpation. Imaging studies revealed accumulation of adipose tissue with normal signal intensity within the treatment area.\n\n\nCONCLUSIONS AND RELEVANCE\nParadoxical adipose hyperplasia is a rare, previously unreported adverse effect of cryolipolysis with an incidence of 0.0051%. No single unifying risk factor has been identified. The phenomenon seems to be more common in male patients undergoing cryolipolysis. At this time, there is no evidence of spontaneous resolution. Further studies are needed to characterize the pathogenesis and histologic findings of this rare adverse event.", "title": "" }, { "docid": "88a8ea1de5ad5cb8883890c1e30b3491", "text": "Service robots will have to accomplish more and more complex, open-ended tasks and regularly acquire new skills. In this work, we propose a new approach to the problem of generating plans for such household robots. Instead composing them from atomic actions — the common approach in robot planning — we propose to transform task descriptions on web sites like ehow.com into executable robot plans. We present methods for automatically converting the instructions from natural language into a formal, logic-based representation, for resolving the word senses using the WordNet database and the Cyc ontology, and for exporting the generated plans into the mobile robot's plan language RPL. We discuss the problem of inferring information that is missing in these descriptions and the problem of grounding the abstract task descriptions in the perception and action system, and we propose techniques for solving them. The whole system works autonomously without human interaction. 
It has successfully been tested with a set of about 150 natural language directives, of which up to 80% could be correctly transformed.", "title": "" }, { "docid": "d62c2e7ca3040900d04f83ef4f99de4f", "text": "Manual classification of brain tumor is time devastating and bestows ambiguous results. Automatic image classification is emergent thriving research area in medical field. In the proposed methodology, features are extracted from raw images which are then fed to ANFIS (Artificial neural fuzzy inference system).ANFIS being neuro-fuzzy system harness power of both hence it proves to be a sophisticated framework for multiobject classification. A comprehensive feature set and fuzzy rules are selected to classify an abnormal image to the corresponding tumor type. This proposed technique is fast in execution, efficient in classification and easy in implementation.", "title": "" }, { "docid": "9adf653a332e07b8aa055b62449e1475", "text": "False-belief task have mainly been associated with the explanatory notion of the theory of mind and the theory-theory. However, it has often been pointed out that this kind of highlevel reasoning is computational and time expensive. During the last decades, the idea of embodied intelligence, i.e. complex behavior caused by sensorimotor contingencies, has emerged in both the fields of neuroscience, psychology and artificial intelligence. Viewed from this perspective, the failing in a false-belief test can be the result of the impairment to recognize and track others’ sensorimotor contingencies and affordances. Thus, social cognition is explained in terms of lowlevel signals instead of high-level reasoning. In this work, we present a generative model for optimal action selection which simultaneously can be employed to make predictions of others’ actions. As we base the decision making on a hidden state representation of sensorimotor signals, this model is in line with the ideas of embodied intelligence. We demonstrate how the tracking of others’ hidden states can give rise to correct falsebelief inferences, while a lack thereof leads to failing. With this work, we want to emphasize the importance of sensorimotor contingencies in social cognition, which might be a key to artificial, socially intelligent systems.", "title": "" }, { "docid": "a4a5c6cbec237c2cd6fb3abcf6b4a184", "text": "Developing automatic diagnostic tools for the early detection of skin cancer lesions in dermoscopic images can help to reduce melanoma-induced mortality. Image segmentation is a key step in the automated skin lesion diagnosis pipeline. In this paper, a fast and fully-automatic algorithm for skin lesion segmentation in dermoscopic images is presented. Delaunay Triangulation is used to extract a binary mask of the lesion region, without the need of any training stage. A quantitative experimental evaluation has been conducted on a publicly available database, by taking into account six well-known state-of-the-art segmentation methods for comparison. The results of the experimental analysis demonstrate that the proposed approach is highly accurate when dealing with benign lesions, while the segmentation accuracy significantly decreases when melanoma images are processed. 
This behavior led us to consider geometrical and color features extracted from the binary masks generated by our algorithm for classification, achieving promising results for melanoma detection.", "title": "" }, { "docid": "1debcbf981ae6115efcc4a853cd32bab", "text": "Vision and language understanding has emerged as a subject undergoing intense study in Artificial Intelligence. Among many tasks in this line of research, visual question answering (VQA) has been one of the most successful ones, where the goal is to learn a model that understands visual content at region-level details and finds their associations with pairs of questions and answers in the natural language form. Despite the rapid progress in the past few years, most existing work in VQA have focused primarily on images. In this paper, we focus on extending VQA to the video domain and contribute to the literature in three important ways. First, we propose three new tasks designed specifically for video VQA, which require spatio-temporal reasoning from videos to answer questions correctly. Next, we introduce a new large-scale dataset for video VQA named TGIF-QA that extends existing VQA work with our new tasks. Finally, we propose a dual-LSTM based approach with both spatial and temporal attention, and show its effectiveness over conventional VQA techniques through empirical evaluations.", "title": "" }, { "docid": "2c39eafa87d34806dd1897335fdfe41c", "text": "One of the issues facing credit card fraud detection systems is that a significant percentage of transactions labeled as fraudulent are in fact legitimate. These &quot;false alarms&quot; delay the detection of fraudulent transactions and can cause unnecessary concerns for customers. In this study, over 1 million unique credit card transactions from 11 months of data from a large Canadian bank were analyzed. A meta-classifier model was applied to the transactions after being analyzed by the Bank&apos;s existing neural network based fraud detection algorithm. This meta-classifier model consists of 3 base classifiers constructed using the decision tree, naïve Bayesian, and k-nearest neighbour algorithms. The naïve Bayesian algorithm was also used as the meta-level algorithm to combine the base classifier predictions to produce the final classifier. 
Results from the research show that when a meta-classifier was deployed in series with the Bank&apos;s existing fraud detection algorithm improvements of up to 28% to their existing system can be achieved.", "title": "" }, { "docid": "88229017a9d4df8dfc44e996a116cbad", "text": "BACKGROUND\nThe Society of Thoracic Surgeons (STS)/American College of Cardiology Transcatheter Valve Therapy (TVT) Registry captures all procedures with Food and Drug Administration-approved transcatheter valve devices performed in the United States, and is mandated as a condition of reimbursement by the Centers for Medicaid & Medicare Services.\n\n\nOBJECTIVES\nThis annual report focuses on patient characteristics, trends, and outcomes of transcatheter aortic and mitral valve catheter-based valve procedures in the United States.\n\n\nMETHODS\nWe reviewed data for all patients receiving commercially approved devices from 2012 through December 31, 2015, that are entered in the TVT Registry.\n\n\nRESULTS\nThe 54,782 patients with transcatheter aortic valve replacement demonstrated decreases in expected risk of 30-day operative mortality (STS Predicted Risk of Mortality [PROM]) of 7% to 6% and transcatheter aortic valve replacement PROM (TVT PROM) of 4% to 3% (both p < 0.0001) from 2012 to 2015. Observed in-hospital mortality decreased from 5.7% to 2.9%, and 1-year mortality decreased from 25.8% to 21.6%. However, 30-day post-procedure pacemaker insertion increased from 8.8% in 2013 to 12.0% in 2015. The 2,556 patients who underwent transcatheter mitral leaflet clip in 2015 were similar to patients from 2013 to 2014, with hospital mortality of 2% and with mitral regurgitation reduced to grade ≤2 in 87% of patients (p < 0.0001). The 349 patients who underwent mitral valve-in-valve and mitral valve-in-ring procedures were high risk, with an STS PROM for mitral valve replacement of 11%. The observed hospital mortality was 7.2%, and 30-day post-procedure mortality was 8.5%.\n\n\nCONCLUSIONS\nThe TVT Registry is an innovative registry that that monitors quality, patient safety and trends for these rapidly evolving new technologies.", "title": "" }, { "docid": "b1dd6c2db60cae5405c07c3757ed6696", "text": "In this paper, we present the Smartbin system that identifies fullness of litter bin. The system is designed to collect data and to deliver the data through wireless mesh network. The system also employs duty cycle technique to reduce power consumption and to maximize operational time. The Smartbin system was tested in an outdoor environment. Through the testbed, we collected data and applied sense-making methods to obtain litter bin utilization and litter bin daily seasonality information. With such information, litter bin providers and cleaning contractors are able to make better decision to increase productivity.", "title": "" }, { "docid": "34623fb38c81af8efaf8e7073e4c43bc", "text": "The k-means problem consists of finding k centers in R that minimize the sum of the squared distances of all points in an input set P from R to their closest respective center. Awasthi et. al. recently showed that there exists a constant ε′ > 0 such that it is NP-hard to approximate the k-means objective within a factor of 1 + ε′. We establish that the constant ε′ is at least 0.0013. For a given set of points P ⊂ R, the k-means problem consists of finding a partition of P into k clusters (C1, . . . , Ck) with corresponding centers (c1, . . . 
, ck) that minimize the sum of the squared distances of all points in P to their corresponding center, i.e. the quantity arg min_{(C1,...,Ck),(c1,...,ck)} ∑_{i=1}^{k} ∑_{p ∈ Ci} ||p - ci||^2", "title": "" }, { "docid": "45bf73a93f0014820864d1805f257bfc", "text": "SEPIC topology based bidirectional DC-DC Converter is proposed for interfacing energy storage elements such as batteries & super capacitors with various power systems. This proposed bidirectional DC-DC converter acts as a buck boost where it changes its output voltage according to its duty cycle. An important factor is used to increase the voltage conversion ratio as well as it achieves high efficiency. In the proposed SEPIC based BDC converter is used to increase the voltage proposal of this is low voltage at the input side is converted into a very high level at the output side to drive the HVDC smart grid. In this project PIC microcontroller is used to give faster response than the existing system. The proposed scheme ensures that the voltage on the both sides of the converter is always matched thereby the conduction losses can be reduced to improve efficiency. MATLAB/Simulink software is utilized for simulation. The obtained experimental results show the functionality and feasibility of the proposed converter.", "title": "" }, { "docid": "efddb60143c59ee9e459e1048a09787c", "text": "The aim of this paper is to determine the possibilities of using commercial off the shelf FPGA based Software Defined Radio Systems to develop a system capable of detecting and locating small drones.", "title": "" }, { "docid": "7b4567b9f32795b267f2fb2d39bbee51", "text": "BACKGROUND\nWearable and mobile devices that capture multimodal data have the potential to identify risk factors for high stress and poor mental health and to provide information to improve health and well-being.\n\n\nOBJECTIVE\nWe developed new tools that provide objective physiological and behavioral measures using wearable sensors and mobile phones, together with methods that improve their data integrity. The aim of this study was to examine, using machine learning, how accurately these measures could identify conditions of self-reported high stress and poor mental health and which of the underlying modalities and measures were most accurate in identifying those conditions.\n\n\nMETHODS\nWe designed and conducted the 1-month SNAPSHOT study that investigated how daily behaviors and social networks influence self-reported stress, mood, and other health or well-being-related factors. We collected over 145,000 hours of data from 201 college students (age: 18-25 years, male:female=1.8:1) at one university, all recruited within self-identified social groups. Each student filled out standardized pre- and postquestionnaires on stress and mental health; during the month, each student completed twice-daily electronic diaries (e-diaries), wore two wrist-based sensors that recorded continuous physical activity and autonomic physiology, and installed an app on their mobile phone that recorded phone usage and geolocation patterns. We developed tools to make data collection more efficient, including data-check systems for sensor and mobile phone data and an e-diary administrative module for study investigators to locate possible errors in the e-diaries and communicate with participants to correct their entries promptly, which reduced the time taken to clean e-diary data by 69%. 
We constructed features and applied machine learning to the multimodal data to identify factors associated with self-reported poststudy stress and mental health, including behaviors that can be possibly modified by the individual to improve these measures.\n\n\nRESULTS\nWe identified the physiological sensor, phone, mobility, and modifiable behavior features that were best predictors for stress and mental health classification. In general, wearable sensor features showed better classification performance than mobile phone or modifiable behavior features. Wearable sensor features, including skin conductance and temperature, reached 78.3% (148/189) accuracy for classifying students into high or low stress groups and 87% (41/47) accuracy for classifying high or low mental health groups. Modifiable behavior features, including number of naps, studying duration, calls, mobility patterns, and phone-screen-on time, reached 73.5% (139/189) accuracy for stress classification and 79% (37/47) accuracy for mental health classification.\n\n\nCONCLUSIONS\nNew semiautomated tools improved the efficiency of long-term ambulatory data collection from wearable and mobile devices. Applying machine learning to the resulting data revealed a set of both objective features and modifiable behavioral features that could classify self-reported high or low stress and mental health groups in a college student population better than previous studies and showed new insights into digital phenotyping.", "title": "" }, { "docid": "ff1ed09b9952f9d0b67d6f6bb1cd507a", "text": "Microblogging websites have emerged to the center of information production and diffusion, on which people can get useful information from other users’ microblog posts. In the era of Big Data, we are overwhelmed by the large amount of microblog posts. To make good use of these informative data, an effective search tool is required specialized for microblog posts. However, it is not trivial to do microblog search due to the following reasons: 1) microblog posts are noisy and time-sensitive rendering general information retrieval models ineffective. 2) Conventional IR models are not designed to consider microblog-specific features. In this paper, we propose to utilize learning to rank model for microblog search. We combine content-based, microblog-specific and temporal features into learning to rank models, which are found to model microblog posts effectively. To study the performance of learning to rank models, we evaluate our models using tweet data set provided by TERC 2011 and TREC 2012 microblogs track with the comparison of three stateof-the-art information retrieval baselines, vector space model, language model, BM25 model. Extensive experimental studies demonstrate the effectiveness of learning to rank models and the usefulness to integrate microblog-specific and temporal information for microblog search task.", "title": "" }, { "docid": "b18ca3607462ba54ec86055dfd4683fe", "text": "Electric power transmission lines face increased threats from malicious attacks and natural disasters. This underscores the need to develop new techniques to ensure safe and reliable transmission of electric power. This paper deals with the development of an online monitoring technique based on mechanical state estimation to determine the sag levels of overhead transmission lines in real time and hence determine if these lines are in normal physical condition or have been damaged or downed. 
A computational algorithm based on least squares state estimation is applied to the physical transmission line equations to determine the conductor sag levels from measurements of tension, temperature, and other transmission line conductor parameters. The estimated conductor sag levels are used to generate warning signals of vertical clearance violations in the energy management system. These warning signals are displayed to the operator to make appropriate decisions to maintain the line within the prescribed clearance limits and prevent potential cascading failures.", "title": "" }, { "docid": "c7fd5a26da59fab4e66e0cb3e93530d6", "text": "Switching audio amplifiers are widely used in HBridge topology thanks to their high efficiency; however low audio performances in single ended power stage topology is a strong weakness leading to not be used for headset applications. This paper explains the importance of efficient error correction in Single Ended Class-D audio amplifier. A hysteresis control for Class-D amplifier with a variable window is also presented. The analyses are verified by simulations and measurements. The proposed solution was fabricated in 0.13µm CMOS technology with an active area of 0.2mm2. It could be used in single ended output configuration fully compatible with common headset connectors. The proposed Class-D amplifier achieves a harmonic distortion of 0.01% and a power supply rejection of 70dB with a quite low static current consumption.", "title": "" }, { "docid": "dcf9cba8bf8e2cc3f175e63e235f6b81", "text": "Convolutional Neural Networks (CNNs) exhibit remarkable performance in various machine learning tasks. As sensor-equipped internet of things (IoT) devices permeate into every aspect of modern life, it is increasingly important to run CNN inference, a computationally intensive application, on resource constrained devices. We present a technique for fast and energy-efficient CNN inference on mobile SoC platforms, which are projected to be a major player in the IoT space. We propose techniques for efficient parallelization of CNN inference targeting mobile GPUs, and explore the underlying tradeoffs. Experiments with running Squeezenet on three different mobile devices confirm the effectiveness of our approach. For further study, please refer to the project repository available on our GitHub page: https://github.com/mtmd/Mobile ConvNet.", "title": "" }, { "docid": "8a20feb22ce8797fa77b5d160919789c", "text": "We proposed the concept of hardware software co-simulation for image processing using Xilinx system generator. Recent advances in synthesis tools for SIMULINK suggest a feasible high-level approach to algorithm implementation for embedded DSP systems. An efficient FPGA based hardware design for enhancement of color and grey scale images in image and video processing. The top model – based visual development process of SIMULINK facilitates host side simulation and validation, as well as synthesis of target specific code, furthermore, legacy code written in MATLAB or ANCI C can be reuse in custom blocks. However, the code generated for DSP platforms is often not very efficient. We are implemented the Image processing applications on FPGA it can be easily design.", "title": "" } ]
scidocsrr
7b388588d67297cec35614d2702025c2
SEMAFOR 1.0: A Probabilistic Frame-Semantic Parser
[ { "docid": "33b2c5abe122a66b73840506aa3b443e", "text": "Semantic role labeling, the computational identification and labeling of arguments in text, has become a leading task in computational linguistics today. Although the issues for this task have been studied for decades, the availability of large resources and the development of statistical machine learning methods have heightened the amount of effort in this field. This special issue presents selected and representative work in the field. This overview describes linguistic background of the problem, the movement from linguistic theories to computational practice, the major resources that are being used, an overview of steps taken in computational systems, and a description of the key issues and results in semantic role labeling (as revealed in several international evaluations). We assess weaknesses in semantic role labeling and identify important challenges facing the field. Overall, the opportunities and the potential for useful further research in semantic role labeling are considerable.", "title": "" } ]
[ { "docid": "55772e55adb83d4fd383ddebcf564a71", "text": "The generation of multi-functional drug delivery systems, namely solid dosage forms loaded with nano-sized carriers, remains little explored and is still a challenge for formulators. For the first time, the coupling of two important technologies, 3D printing and nanotechnology, to produce innovative solid dosage forms containing drug-loaded nanocapsules was evaluated here. Drug delivery devices were prepared by fused deposition modelling (FDM) from poly(ε-caprolactone) (PCL) and Eudragit® RL100 (ERL) filaments with or without a channelling agent (mannitol). They were soaked in deflazacort-loaded nanocapsules (particle size: 138nm) to produce 3D printed tablets (printlets) loaded with them, as observed by SEM. Drug loading was improved by the presence of the channelling agent and a linear correlation was obtained between the soaking time and the drug loading (r2=0.9739). Moreover, drug release profiles were dependent on the polymeric material of tablets and the presence of the channelling agent. In particular, tablets prepared with a partially hollow core (50% infill) had a higher drug loading (0.27% w/w) and faster drug release rate. This study represents an original approach to convert nanocapsules suspensions into solid dosage forms as well as an efficient 3D printing method to produce novel drug delivery systems, as personalised nanomedicines.", "title": "" }, { "docid": "0a0f826f1a8fa52d61892632fd403502", "text": "We show that sequence information can be encoded into highdimensional fixed-width vectors using permutations of coordinates. Computational models of language often represent words with high-dimensional semantic vectors compiled from word-use statistics. A word’s semantic vector usually encodes the contexts in which the word appears in a large body of text but ignores word order. However, word order often signals a word’s grammatical role in a sentence and thus tells of the word’s meaning. Jones and Mewhort (2007) show that word order can be included in the semantic vectors using holographic reduced representation and convolution. We show here that the order information can be captured also by permuting of vector coordinates, thus providing a general and computationally light alternative to convolution.", "title": "" }, { "docid": "6a2d1dfb61a4e37c8554900e0d366f51", "text": "Attention Deficit/Hyperactivity Disorder (ADHD) is a neurobehavioral disorder which leads to the difficulty on focusing, paying attention and controlling normal behavior. Globally, the prevalence of ADHD is estimated to be 6.5%. Medicine has been widely used for the treatment of ADHD symptoms, but the patient may have a chance to suffer from the side effects of drug, such as vomit, rash, urticarial, cardiac arrthymia and insomnia. In this paper, we propose the alternative medicine system based on the brain-computer interface (BCI) technology called neurofeedback. The proposed neurofeedback system simultaneously employs two important signals, i.e. electroencephalogram (EEG) and hemoencephalogram (HEG), which can quickly reveal the brain functional network. The treatment criteria are that, for EEG signals, the patient needs to maintain the beta activities (13-30 Hz) while reducing the alpha activities (7-13 Hz). Simultaneously, HEG signals need to be maintained continuously increasing to some setting thresholds of the brain blood oxygenation levels. 
Time-frequency selective multilayer perceptron (MLP) is employed to capture the mentioned phenomena in real-time. The experimental results show that the proposed system yields the sensitivity of 98.16% and the specificity of 95.57%. Furthermore, from the resulting weights of the proposed MLP, we can also conclude that HEG signals yield the most impact to our neurofeedback treatment followed by the alpha, beta, and theta activities, respectively.", "title": "" }, { "docid": "eba769c6246b44d8ed7e5f08aac17731", "text": "One hundred men, living in three villages in a remote region of the Eastern Highlands of Papua New Guinea were asked to judge the attractiveness of photographs of women who had undergone micrograft surgery to reduce their waist-to-hip ratios (WHRs). Micrograft surgery involves harvesting adipose tissue from the waist and reshaping the buttocks to produce a low WHR and an \"hourglass\" female figure. Men consistently chose postoperative photographs as being more attractive than preoperative photographs of the same women. Some women gained, and some lost weight, postoperatively, with resultant changes in body mass index (BMI). However, changes in BMI were not related to men's judgments of attractiveness. These results show that the hourglass female figure is rated as attractive by men living in a remote, indigenous community, and that when controlling for BMI, WHR plays a crucial role in their attractiveness judgments.", "title": "" }, { "docid": "1924730db532936166d07c6bab058800", "text": "The rising popularity of digital table surfaces has spawned considerable interest in new interaction techniques. Most interactions fall into one of two modalities: 1) direct touch and multi-touch (by hand and by tangibles) directly on the surface, and 2) hand gestures above the surface. The limitation is that these two modalities ignore the rich interaction space between them. To move beyond this limitation, we first contribute a unification of these discrete interaction modalities called the continuous interaction space. The idea is that many interaction techniques can be developed that go beyond these two modalities, where they can leverage the space between them. That is, we believe that the underlying system should treat the space on and above the surface as a continuum, where a person can use touch, gestures, and tangibles anywhere in the space and naturally move between them. Our second contribution illustrates this, where we introduce a variety of interaction categories that exploit the space between these modalities. For example, with our Extended Continuous Gestures category, a person can start an interaction with a direct touch and drag, then naturally lift off the surface and continue their drag with a hand gesture over the surface. For each interaction category, we implement an example (or use prior work) that illustrates how that technique can be applied. In summary, our primary contribution is to broaden the design space of interaction techniques for digital surfaces, where we populate the continuous interaction space both with concepts and examples that emerge from considering this space as a continuum.", "title": "" }, { "docid": "3f1d69e8a2fdfc69e451679255782d70", "text": "This tutorial gives a broad view of modern approaches for scaling up machine learning and data mining methods on parallel/distributed platforms. 
Demand for scaling up machine learning is task-specific: for some tasks it is driven by the enormous dataset sizes, for others by model complexity or by the requirement for real-time prediction. Selecting a task-appropriate parallelization platform and algorithm requires understanding their benefits, trade-offs and constraints. This tutorial focuses on providing an integrated overview of state-of-the-art platforms and algorithm choices. These span a range of hardware options (from FPGAs and GPUs to multi-core systems and commodity clusters), programming frameworks (including CUDA, MPI, MapReduce, and DryadLINQ), and learning settings (e.g., semi-supervised and online learning). The tutorial is example-driven, covering a number of popular algorithms (e.g., boosted trees, spectral clustering, belief propagation) and diverse applications (e.g., recommender systems and object recognition in vision).\n The tutorial is based on (but not limited to) the material from our upcoming Cambridge U. Press edited book which is currently in production.\n Visit the tutorial website at http://hunch.net/~large_scale_survey/", "title": "" }, { "docid": "2732b8453269834e481428f054ff4992", "text": "Otsu reference proposed a criterion for maximizing the between-class variance of pixel intensity to perform picture thresholding. However, Otsu’s method for image segmentation is very time-consuming because of the inefficient formulation of the between-class variance. In this paper, a faster version of Otsu’s method is proposed for improving the efficiency of computation for the optimal thresholds of an image. First, a criterion for maximizing a modified between-class variance that is equivalent to the criterion of maximizing the usual between-class variance is proposed for image segmentation. Next, in accordance with the new criterion, a recursive algorithm is designed to efficiently find the optimal threshold. This procedure yields the same set of thresholds as the original method. In addition, the modified between-class variance can be pre-computed and stored in a look-up table. Our analysis of the new criterion clearly shows that it takes less computation to compute both the cumulative probability (zeroth order moment) and the mean (first order moment) of a class, and that determining the modified between-class variance by accessing a look-up table is quicker than that by performing mathematical arithmetic operations. For example, the experimental results of a five-level threshold selection show that our proposed method can reduce down the processing time from more than one hour by the conventional Otsu’s method to less than 107 seconds.", "title": "" }, { "docid": "44ea81d223e3c60c7b4fd1192ca3c4ba", "text": "Existing classification and rule learning algorithms in machine learning mainly use heuristic/greedy search to find a subset of regularities (e.g., a decision tree or a set of rules) in data for classification. In the past few years, extensive research was done in the database community on learning rules using exhaustive search under the name of association rule mining. The objective there is to find all rules in data that satisfy the user-specified minimum support and minimum confidence. Although the whole set of rules may not be used directly for accurate classification, effective and efficient classifiers have been built using the rules. This paper aims to improve such an exhaustive search based classification system CBA (Classification Based on Associations). 
The main strength of this system is that it is able to use the most accurate rules for classification. However, it also has weaknesses. This paper proposes two new techniques to deal with these weaknesses. This results in remarkably accurate classifiers. Experiments on a set of 34 benchmark datasets show that on average the new techniques reduce the error of CBA by 17% and is superior to CBA on 26 of the 34 datasets. They reduce the error of the decision tree classifier C4.5 by 19%, and improve performance on 29 datasets. Similar good results are also achieved against the existing classification systems, RIPPER, LB and a Naïve-Bayes", "title": "" }, { "docid": "b40ef74fd41676d51d0870578e483b27", "text": "In this paper, we propose a simple but effective image prior-dark channel prior to remove haze from a single input image. The dark channel prior is a kind of statistics of outdoor haze-free images. It is based on a key observation-most local patches in outdoor haze-free images contain some pixels whose intensity is very low in at least one color channel. Using this prior with the haze imaging model, we can directly estimate the thickness of the haze and recover a high-quality haze-free image. Results on a variety of hazy images demonstrate the power of the proposed prior. Moreover, a high-quality depth map can also be obtained as a byproduct of haze removal.", "title": "" }, { "docid": "fbe0c6e8cbaf6c419990c1a7093fe2a9", "text": "Deep learning is quickly becoming the leading methodology for medical image analysis. Given a large medical archive, where each image is associated with a diagnosis, efficient pathology detectors or classifiers can be trained with virtually no expert knowledge about the target pathologies. However, deep learning algorithms, including the popular ConvNets, are black boxes: little is known about the local patterns analyzed by ConvNets to make a decision at the image level. A solution is proposed in this paper to create heatmaps showing which pixels in images play a role in the image-level predictions. In other words, a ConvNet trained for image-level classification can be used to detect lesions as well. A generalization of the backpropagation method is proposed in order to train ConvNets that produce high-quality heatmaps. The proposed solution is applied to diabetic retinopathy (DR) screening in a dataset of almost 90,000 fundus photographs from the 2015 Kaggle Diabetic Retinopathy competition and a private dataset of almost 110,000 photographs (e-ophtha). For the task of detecting referable DR, very good detection performance was achieved: Az=0.954 in Kaggle's dataset and Az=0.949 in e-ophtha. Performance was also evaluated at the image level and at the lesion level in the DiaretDB1 dataset, where four types of lesions are manually segmented: microaneurysms, hemorrhages, exudates and cotton-wool spots. For the task of detecting images containing these four lesion types, the proposed detector, which was trained to detect referable DR, outperforms recent algorithms trained to detect those lesions specifically, with pixel-level supervision. At the lesion level, the proposed detector outperforms heatmap generation algorithms for ConvNets. This detector is part of the Messidor® system for mobile eye pathology screening. 
Because it does not rely on expert knowledge or manual segmentation for detecting relevant patterns, the proposed solution is a promising image mining tool, which has the potential to discover new biomarkers in images.", "title": "" }, { "docid": "0e803e853422328aeef59e426410df48", "text": "We present WatchWriter, a finger operated keyboard that supports both touch and gesture typing with statistical decoding on a smartwatch. Just like on modern smartphones, users type one letter per tap or one word per gesture stroke on WatchWriter but in a much smaller spatial scale. WatchWriter demonstrates that human motor control adaptability, coupled with modern statistical decoding and error correction technologies developed for smartphones, can enable a surprisingly effective typing performance despite the small watch size. In a user performance experiment entirely run on a smartwatch, 36 participants reached a speed of 22-24 WPM with near zero error rate.", "title": "" }, { "docid": "121a388391c12de1329e74fdeebdaf10", "text": "In this paper, we present the first longitudinal measurement study of the underground ecosystem fueling credential theft and assess the risk it poses to millions of users. Over the course of March, 2016--March, 2017, we identify 788,000 potential victims of off-the-shelf keyloggers; 12.4 million potential victims of phishing kits; and 1.9 billion usernames and passwords exposed via data breaches and traded on blackmarket forums. Using this dataset, we explore to what degree the stolen passwords---which originate from thousands of online services---enable an attacker to obtain a victim's valid email credentials---and thus complete control of their online identity due to transitive trust. Drawing upon Google as a case study, we find 7--25% of exposed passwords match a victim's Google account. For these accounts, we show how hardening authentication mechanisms to include additional risk signals such as a user's historical geolocations and device profiles helps to mitigate the risk of hijacking. Beyond these risk metrics, we delve into the global reach of the miscreants involved in credential theft and the blackhat tools they rely on. We observe a remarkable lack of external pressure on bad actors, with phishing kit playbooks and keylogger capabilities remaining largely unchanged since the mid-2000s.", "title": "" }, { "docid": "b3cb053d44a90a2a9a9332ac920f0e90", "text": "This study develops a crowdfunding sponsor typology based on sponsors’ motivations for participating in a project. Using a two by two crowdfunding motivation framework, we analyzed six relevant funding motivations—interest, playfulness, philanthropy, reward, relationship, and recognition—and identified four types of crowdfunding sponsors: angelic backer, reward hunter, avid fan, and tasteful hermit. They are profiled in terms of the antecedents and consequences of funding motivations. Angelic backers are similar in some ways to traditional charitable donors while reward hunters are analogous to market investors; thus they differ in their approach to crowdfunding. Avid fans comprise the most passionate sponsor group, and they are similar to members of a brand community. Tasteful hermits support their projects as actively as avid fans, but they have lower extrinsic and others-oriented motivations. The results show that these sponsor types reflect the nature of crowdfunding as a new form of co-creation in the E-commerce context. 2016 Elsevier B.V. 
All rights reserved.", "title": "" }, { "docid": "25d913188ee5790d5b3a9f5fb8b68dda", "text": "RPL, the routing protocol proposed by IETF for IPv6/6LoWPAN Low Power and Lossy Networks has significant complexity. Another protocol called LOADng, a lightweight variant of AODV, emerges as an alternative solution. In this paper, we compare the performance of the two protocols in a Home Automation scenario with heterogenous traffic patterns including a mix of multipoint-to-point and point-to-multipoint routes in realistic dense non-uniform network topologies. We use Contiki OS and Cooja simulator to evaluate the behavior of the ContikiRPL implementation and a basic non-optimized implementation of LOADng. Unlike previous studies, our results show that RPL provides shorter delays, less control overhead, and requires less memory than LOADng. Nevertheless, enhancing LOADng with more efficient flooding and a better route storage algorithm may improve its performance.", "title": "" }, { "docid": "370767f85718121dc3975f383bf99d8b", "text": "A combinatorial classification and a phylogenetic analysis of the ten 12/8 time, seven-stroke bell rhythm timelines in African and Afro-American music are presented. New methods for rhythm classification are proposed based on measures of rhythmic oddity and off-beatness. These combinatorial classifications reveal several new uniqueness properties of the Bembé bell pattern that may explain its widespread popularity and preference among the other patterns in this class. A new distance measure called the swap-distance is introduced to measure the non-similarity of two rhythms that have the same number of strokes. A swap in a sequence of notes and rests of equal duration is the location interchange of a note and a rest that are adjacent in the sequence. The swap distance between two rhythms is defined as the minimum number of swaps required to transform one rhythm to the other. A phylogenetic analysis using Splits Graphs with the swap distance shows that each of the ten bell patterns can be derived from one of two “canonical” patterns with at most four swap operations, or from one with at most five swap operations. Furthermore, the phylogenetic analysis suggests that for these ten bell patterns there are no “ancestral” rhythms not contained in this set.", "title": "" }, { "docid": "774394b64cf9a98f481b343866f648a6", "text": "The aim of this study was to evaluate the anatomy of the central myelin portion and the central myelin-peripheral myelin transitional zone of the trigeminal, facial, glossopharyngeal and vagus nerves from fresh cadavers. The aim was also to investigate the relationship between the length and volume of the central myelin portion of these nerves with the incidences of the corresponding cranial dysfunctional syndromes caused by their compression to provide some more insights for a better understanding of mechanisms. The trigeminal, facial, glossopharyngeal and vagus nerves from six fresh cadavers were examined. The length of these nerves from the brainstem to the foramen that they exit were measured. Longitudinal sections were stained and photographed to make measurements. The diameters of the nerves where they exit/enter from/to brainstem, the diameters where the transitional zone begins, the distances to the most distal part of transitional zone from brainstem and depths of the transitional zones were measured. Most importantly, the volume of the central myelin portion of the nerves was calculated. 
Correlation between length and volume of the central myelin portion of these nerves and the incidences of the corresponding hyperactive dysfunctional syndromes as reported in the literature were studied. The distance of the most distal part of the transitional zone from the brainstem was 4.19 ± 0.81 mm for the trigeminal nerve, 2.86 ± 1.19 mm for the facial nerve, 1.51 ± 0.39 mm for the glossopharyngeal nerve, and 1.63 ± 1.15 mm for the vagus nerve. The volume of central myelin portion was 24.54 ± 9.82 mm3 in trigeminal nerve; 4.43 ± 2.55 mm3 in facial nerve; 1.55 ± 1.08 mm3 in glossopharyngeal nerve; 2.56 ± 1.32 mm3 in vagus nerve. Correlations (p < 0.001) have been found between the length or volume of central myelin portions of the trigeminal, facial, glossopharyngeal and vagus nerves and incidences of the corresponding diseases. At present it is rather well-established that primary trigeminal neuralgia, hemifacial spasm and vago-glossopharyngeal neuralgia have as one of the main causes a vascular compression. The strong correlations found between the lengths and volumes of the central myelin portions of the nerves and the incidences of the corresponding diseases is a plea for the role played by this anatomical region in the mechanism of these diseases.", "title": "" }, { "docid": "83de0252b28e4dcedefc239aaaee79e5", "text": "Recently, there has been immense interest in using unmanned aerial vehicles (UAVs) for civilian operations such as package delivery, aerial surveillance, and disaster response. As a result, UAV traffic management systems are needed to support potentially thousands of UAVs flying simultaneously in the air space, in order to ensure their liveness and safety requirements are met. Currently, the analysis of large multi-agent systems cannot tractably provide these guarantees if the agents’ set of maneuvers are unrestricted. In this paper, we propose to have platoons of UAVs flying on air highways in order to impose the air space structure that allows for tractable analysis and intuitive monitoring. For the air highway placement problem, we use the flexible and efficient fast marching method to solve the Eikonal equation, which produces a sequence of air highways that minimizes the cost of flying from an origin to any destination. Within the platoons that travel on the air highways, we model each vehicle as a hybrid system with modes corresponding to its role in the platoon. Using Hamilton-Jacobi reachability, we propose several liveness controllers and a safety controller that guarantee the success and safety of all mode transitions. For a single altitude range, our approach guarantees safety for one safety breach per vehicle; in the unlikely event of multiple safety breaches, safety can be guaranteed over multiple altitude ranges. We demonstrate the satisfaction of liveness and safety requirements through simulations of three common scenarios.", "title": "" }, { "docid": "06f27036cd261647c7670bdf854f5fb4", "text": "OBJECTIVE\nTo determine the formation and dissolution of calcium fluoride on the enamel surface after application of two fluoride gel-saliva mixtures.\n\n\nMETHOD AND MATERIALS\nFrom each of 80 bovine incisors, two enamel specimens were prepared and subjected to two different treatment procedures. In group 1, 80 specimens were treated with a mixture of an amine fluoride gel (1.25% F-; pH 5.2; 5 minutes) and human saliva. In group 2, 80 enamel blocks were subjected to a mixture of sodium fluoride gel (1.25% F; pH 5.5; 5 minutes) and human saliva. 
Subsequent to fluoride treatment, 40 specimens from each group were stored in human saliva and sterile water, respectively. Ten specimens were removed after each of 1 hour, 24 hours, 2 days, and 5 days and analyzed according to potassium hydroxide-soluble fluoride.\n\n\nRESULTS\nApplication of amine fluoride gel resulted in a higher amount of potassium hydroxide-soluble fluoride than did sodium fluoride gel 1 hour after application. Saliva exerted an inhibitory effect according to the dissolution rate of calcium fluoride. However, after 5 days, more than 90% of the precipitated calcium fluoride was dissolved in the amine fluoride group, and almost all potassium hydroxide-soluble fluoride was lost in the sodium fluoride group. Calcium fluoride apparently dissolves rapidly, even at almost neutral pH.\n\n\nCONCLUSION\nConsidering the limitations of an in vitro study, it is concluded that highly concentrated fluoride gels should be applied at an adequate frequency to reestablish a calcium fluoride-like layer.", "title": "" }, { "docid": "c2db241a94d9fec15af613d593730dea", "text": "This study investigated the influence of Cloisite-15A nanoclay on the physical, performance, and mechanical properties of bitumen binder. Cloisite-15A was blended in the bitumen in variegated percentages from 1% to 9% with increment of 2%. The blended bitumen was characterized using penetration, softening point, and dynamic viscosity using rotational viscometer, and compared with unmodified bitumen equally penetration grade 60/70. The rheological parameters were investigated using Dynamic Shear Rheometer (DSR), and mechanical properties were investigated by using Marshall Stability test. The results indicated an increase in softening point, dynamic viscosity and decrease in binder penetration. Rheological properties of bitumen increase complex modulus, decrease phase angle and improve rutting resistances as well. There was significant improvement in Marshall Stability, rather marginal improvement in flow value. The best improvement in the modified binder was obtained with 5% Cloisite-15A nanoclay. Keywords—Cloisite-15A, complex shear modulus, phase angle, rutting resistance.", "title": "" } ]
scidocsrr
530f3888d99b1b7dd8a7446b3dfabb97
Requirements and languages for the semantic representation of manufacturing systems
[ { "docid": "2464b1f28815b6f502f06ce6b45ef8ed", "text": "In this paper we review and compare the main methodologies, tools and languages for building ontologies that have been reported in the literature, as well as the main relationships among them. Ontology technology is nowadays mature enough: many methodologies, tools and languages are already available. The future work in this field should be driven towards the creation of a common integrated workbench for ontology developers to facilitate ontology development, exchange, evaluation, evolution and management, to provide methodological support for these tasks, and translations to and from different ontology languages. This workbench should not be created from scratch, but instead integrating the technology components that are currently available. 2002 Elsevier Science B.V. All rights reserved.", "title": "" } ]
[ { "docid": "204df6c32bde81851ebdb0a0b4d18b93", "text": "Language experience systematically constrains perception of speech contrasts that deviate phonologically and/or phonetically from those of the listener’s native language. These effects are most dramatic in adults, but begin to emerge in infancy and undergo further development through at least early childhood. The central question addressed here is: How do nonnative speech perception findings bear on phonological and phonetic aspects of second language (L2) perceptual learning? A frequent assumption has been that nonnative speech perception can also account for the relative difficulties that late learners have with specific L2 segments and contrasts. However, evaluation of this assumption must take into account the fact that models of nonnative speech perception such as the Perceptual Assimilation Model (PAM) have focused primarily on naïve listeners, whereas models of L2 speech acquisition such as the Speech Learning Model (SLM) have focused on experienced listeners. This chapter probes the assumption that L2 perceptual learning is determined by nonnative speech perception principles, by considering the commonalities and complementarities between inexperienced listeners and those learning an L2, as viewed from PAM and SLM. Among the issues examined are how language learning may affect perception of phonetic vs. phonological information, how monolingual vs. multiple language experience may impact perception, and what these may imply for attunement of speech perception to changes in the listener’s language environment. Commonalities and complementarities 3", "title": "" }, { "docid": "702df543119d648be859233bfa2b5d03", "text": "We review more than 200 applications of neural networks in image processing and discuss the present and possible future role of neural networks, especially feed-forward neural networks, Kohonen feature maps and Hop1eld neural networks. The various applications are categorised into a novel two-dimensional taxonomy for image processing algorithms. One dimension speci1es the type of task performed by the algorithm: preprocessing, data reduction=feature extraction, segmentation, object recognition, image understanding and optimisation. The other dimension captures the abstraction level of the input data processed by the algorithm: pixel-level, local feature-level, structure-level, object-level,ion level of the input data processed by the algorithm: pixel-level, local feature-level, structure-level, object-level, object-set-level and scene characterisation. Each of the six types of tasks poses speci1c constraints to a neural-based approach. These speci1c conditions are discussed in detail. A synthesis is made of unresolved problems related to the application of pattern recognition techniques in image processing and speci1cally to the application of neural networks. Finally, we present an outlook into the future application of neural networks and relate them to novel developments. ? 2002 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.", "title": "" }, { "docid": "8230003e8be37867e0e4fc7320e24448", "text": "This document was approved as policy of the American Psychological Association (APA) by the APA Council of Representatives in August, 2002. This document was drafted by a joint Task Force of APA Divisions 17 (Counseling Psychology) and 45 (The Society for the Psychological Study of Ethnic Minority Issues). 
These guidelines have been in the process of development for 22 years, so many individuals and groups require acknowledgement. The Divisions 17/45 writing team for the present document included Nadya Fouad, PhD, Co–Chair, Patricia Arredondo, EdD, Co– Chair, Michael D'Andrea, EdD and Allen Ivey, EdD. These guidelines build on work related to multicultural counseling competencies by Division 17 (Sue et al., 1982) and the Association of Multicultural Counseling and Development (Arredondo et al., 1996; Sue, Arredondo, & McDavis, 1992). The Task Force acknowledges Allen Ivey, EdD, Thomas Parham, PhD, and Derald Wing Sue, PhD for their leadership related to the work on competencies. The Divisions 17/45 writing team for these guidelines was assisted in reviewing the relevant literature by Rod Goodyear, PhD, Jeffrey S. Mio, PhD, Ruperto (Toti) Perez, PhD, William Parham, PhD, and Derald Wing Sue, PhD. Additional writing contributions came from Gail Hackett, PhD, Jeanne Manese, PhD, Louise Douce, PhD, James Croteau, PhD, Janet Helms, PhD, Sally Horwatt, PhD, Kathleen Boggs, PhD, Gerald Stone, PhD, and Kathleen Bieschke, PhD. Editorial contributions were provided by Nancy Downing Hansen, PhD, Patricia Perez, Tiffany Rice, and Dan Rosen. The Task Force is grateful for the active support and contributions of a series of presidents of APA Divisions 17, 35, and 45, including Rosie Bingham, PhD, Jean Carter, PhD, Lisa Porche Burke, PhD, Gerald Stone, PhD, Joseph Trimble, PhD, Melba Vasquez, PhD, and Jan Yoder, PhD. Other individuals who contributed through their advocacy include Guillermo Bernal, PhD, Robert Carter, PhD, J. Manuel Casas, PhD, Don Pope–Davis, PhD, Linda Forrest, PhD, Margaret Jensen, PhD, Teresa LaFromboise, PhD, Joseph G. Ponterotto, PhD, and Ena Vazquez Nuttall, EdD.", "title": "" }, { "docid": "1314f4c6bafefd229f2a8b192ba881f7", "text": "Face recognition is an area that has attracted a l ot of interest. Much of the research in this field was conducted using visible images. With visible cameras the recognition is prone to errors due to illumination changes. To avoid the problems encountered in the visible spectrum many authors ha ve proposed the use of infrared. In this paper we give an overview of the state of the art in face recognition using infrared images. Emphasis is given to more recent works. A growing fi eld n this area is multimodal fusion; work conducted in this field is also presented in th is paper and publicly available Infrared face image databases are introduced.", "title": "" }, { "docid": "e90755afe850d597ad7b3f4b7e590b66", "text": "Privacy is considered to be a fundamental human right (Movius and Krup, 2009). Around the world this has led to a large amount of legislation in the area of privacy. Nearly all national governments have imposed local privacy legislation. In the United States several states have imposed their own privacy legislation. In order to maintain a manageable scope this paper only addresses European Union wide and federal United States laws. In addition several US industry (self) regulations are also considered. Privacy regulations in emerging technologies are surrounded by uncertainty. This paper aims to clarify the uncertainty relating to privacy regulations with respect to Cloud Computing and to identify the main open issues that need to be addressed for further research. 
This paper is based on existing literature and a series of interviews and questionnaires with various Cloud Service Providers (CSPs) that have been performed for the first author’s MSc thesis (Ruiter, 2009). The interviews and questionnaires resulted in data on privacy and security procedures from ten CSPs and while this number is by no means large enough to make any definite conclusions the results are, in our opinion, interesting enough to publish in this paper. The remainder of the paper is organized as follows: the next section gives some basic background on Cloud Computing. Section 3 provides", "title": "" }, { "docid": "2e3cee13657129d26ec236f9d2641e6c", "text": "Due to the prevalence of social media websites, one challenge facing computer vision researchers is to devise methods to process and search for persons of interest among the billions of shared photos on these websites. Facebook revealed in a 2013 white paper that its users have uploaded more than 250 billion photos, and are uploading 350 million new photos each day. Due to this humongous amount of data, large-scale face search for mining web images is both important and challenging. Despite significant progress in face recognition, searching a large collection of unconstrained face images has not been adequately addressed. To address this challenge, we propose a face search system which combines a fast search procedure, coupled with a state-of-the-art commercial off the shelf (COTS) matcher, in a cascaded framework. Given a probe face, we first filter the large gallery of photos to find the top-k most similar faces using deep features generated from a convolutional neural network. The k retrieved candidates are re-ranked by combining similarities from deep features and the COTS matcher. We evaluate the proposed face search system on a gallery containing 80 million web-downloaded face images. Experimental results demonstrate that the deep features are competitive with state-of-the-art methods on unconstrained face recognition benchmarks (LFW and IJB-A). More specifically, on the LFW database, we achieve 98.23% accuracy under the standard protocol and a verification rate of 87.65% at FAR of 0.1% under the BLUFR protocol. For the IJB-A benchmark, our accuracies are as follows: TAR of 51.4% at FAR of 0.1% (verification); Rank 1 retrieval of 82.0% (closed-set search); FNIR of 61.7% at FPIR of 1% (open-set search). Further, the proposed face search system offers an excellent trade-off between accuracy and scalability on datasets consisting of millions of images. Additionally, in an experiment involving searching for face images of the Tsarnaev brothers, convicted of the Boston Marathon bombing, the proposed cascade face search system could find the younger brother’s (Dzhokhar Tsarnaev) photo at rank 1 in 1 second on a 5M gallery and at rank 8 in 7 seconds", "title": "" }, { "docid": "3fce18c6e1f909b91f95667a563aa194", "text": "In this paper, we describe an approach to content-based retrieval of medical images from a database, and provide a preliminary demonstration of our approach as applied to retrieval of digital mammograms. Content-based image retrieval (CBIR) refers to the retrieval of images from a database using information derived from the images themselves, rather than solely from accompanying text indices. In the medical-imaging context, the ultimate aim of CBIR is to provide radiologists with a diagnostic aid in the form of a display of relevant past cases, along with proven pathology and other suitable information. 
CBIR may also be useful as a training tool for medical students and residents. The goal of information retrieval is to recall from a database information that is relevant to the user's query. The most challenging aspect of CBIR is the definition of relevance (similarity), which is used to guide the retrieval machine. In this paper, we pursue a new approach, in which similarity is learned from training examples provided by human observers. Specifically, we explore the use of neural networks and support vector machines to predict the user's notion of similarity. Within this framework we propose using a hierarchal learning approach, which consists of a cascade of a binary classifier and a regression module to optimize retrieval effectiveness and efficiency. We also explore how to incorporate online human interaction to achieve relevance feedback in this learning framework. Our experiments are based on a database consisting of 76 mammograms, all of which contain clustered microcalcifications (MCs). Our goal is to retrieve mammogram images containing similar MC clusters to that in a query. The performance of the retrieval system is evaluated using precision-recall curves computed using a cross-validation procedure. Our experimental results demonstrate that: 1) the learning framework can accurately predict the perceptual similarity reported by human observers, thereby serving as a basis for CBIR; 2) the learning-based framework can significantly outperform a simple distance-based similarity metric; 3) the use of the hierarchical two-stage network can improve retrieval performance; and 4) relevance feedback can be effectively incorporated into this learning framework to achieve improvement in retrieval precision based on online interaction with users; and 5) the retrieved images by the network can have predicting value for the disease condition of the query.", "title": "" }, { "docid": "a91a57326a2d961e24d13b844a3556cf", "text": "This paper describes an interactive and adaptive streaming architecture that exploits temporal concatenation of H.264/AVC video bit-streams to dynamically adapt to both user commands and network conditions. The architecture has been designed to improve the viewing experience when accessing video content through individual and potentially bandwidth constrained connections. On the one hand, the user commands typically gives the client the opportunity to select interactively a preferred version among the multiple video clips that are made available to render the scene, e.g. using different view angles, or zoomed-in and slowmotion factors. On the other hand, the adaptation to the network bandwidth ensures effective management of the client buffer, which appears to be fundamental to reduce the client-server interaction latency, while maximizing video quality and preventing buffer underflow. In addition to user interaction and network adaptation, the deployment of fully autonomous infrastructures for interactive content distribution also requires the development of automatic versioning methods. Hence, the paper also surveys a number of approaches proposed for this purpose in surveillance and sport event contexts. Both objective metrics and subjective experiments are exploited to assess our system.", "title": "" }, { "docid": "1e1706e1bd58a562a43cc7719f433f4f", "text": "In this paper, we present the use of D-higraphs to perform HAZOP studies. 
D-higraphs is a formalism that includes in a single model the functional as well as the structural (ontological) components of any given system. A tool to perform a semi-automatic guided HAZOP study on a process plant is presented. The diagnostic system uses an expert system to predict the behavior modeled using D-higraphs. This work is applied to the study of an industrial case and its results are compared with other similar approaches proposed in previous studies. The analysis shows that the proposed methodology fits its purpose enabling causal reasoning that explains causes and consequences derived from deviations, it also fills some of the gaps and drawbacks existing in previous reported HAZOP assistant tools.", "title": "" }, { "docid": "d3a8457c4c65652855e734556652c6be", "text": "We consider a supervised learning problem in which data are revealed sequentially and the goal is to determine what will next be revealed. In the context of this problem, algorithms based on association rules have a distinct advantage over classical statistical and machine learning methods; however, there has not previously been a theoretical foundation established for using association rules in supervised learning. We present two simple algorithms that incorporate association rules, and provide generalization guarantees on these algorithms based on algorithmic stability analysis from statistical learning theory. We include a discussion of the strict minimum support threshold often used in association rule mining, and introduce an “adjusted confidence” measure that provides a weaker minimum support condition that has advantages over the strict minimum support. The paper brings together ideas from statistical learning theory, association rule mining and Bayesian analysis.", "title": "" }, { "docid": "a88c0d45ca7859c050e5e76379f171e6", "text": "Cancer and other chronic diseases have constituted (and will do so at an increasing pace) a significant portion of healthcare costs in the United States in recent years. Although prior research has shown that diagnostic and treatment recommendations might be altered based on the severity of comorbidities, chronic diseases are still being investigated in isolation from one another in most cases. To illustrate the significance of concurrent chronic diseases in the course of treatment, this study uses SEER’s cancer data to create two comorbid data sets: one for breast and female genital cancers and another for prostate and urinal cancers. Several popular machine learning techniques are then applied to the resultant data sets to build predictive models. Comparison of the results shows that having more information about comorbid conditions of patients can improve models’ predictive power, which in turn, can help practitioners make better diagnostic and treatment decisions. Therefore, proper identification, recording, and use of patients’ comorbidity status can potentially lower treatment costs and ease the healthcare related economic challenges.", "title": "" }, { "docid": "5227c1679d83168eeb4d82d9a94a3a0f", "text": "Driver decisions and behaviors regarding the surrounding traffic are critical to traffic safety. It is important for an intelligent vehicle to understand driver behavior and assist in driving tasks according to their status. In this paper, the consumer range camera Kinect is used to monitor drivers and identify driving tasks in a real vehicle. Specifically, seven common tasks performed by multiple drivers during driving are identified in this paper. 
The tasks include normal driving, left-, right-, and rear-mirror checking, mobile phone answering, texting using a mobile phone with one or both hands, and the setup of in-vehicle video devices. The first four tasks are considered safe driving tasks, while the other three tasks are regarded as dangerous and distracting tasks. The driver behavior signals collected from the Kinect consist of a color and depth image of the driver inside the vehicle cabin. In addition, 3-D head rotation angles and the upper body (hand and arm at both sides) joint positions are recorded. Then, the importance of these features for behavior recognition is evaluated using random forests and maximal information coefficient methods. Next, a feedforward neural network (FFNN) is used to identify the seven tasks. Finally, the model performance for task recognition is evaluated with different features (body only, head only, and combined). The final detection result for the seven driving tasks among five participants achieved an average of greater than 80% accuracy, and the FFNN tasks detector is proved to be an efficient model that can be implemented for real-time driver distraction and dangerous behavior recognition.", "title": "" }, { "docid": "222b853f23cbcea9794c83c1471273b8", "text": "Automatic summarisation is a popular approach to reduce a document to its main arguments. Recent research in the area has focused on neural approaches to summarisation, which can be very data-hungry. However, few large datasets exist and none for the traditionally popular domain of scientific publications, which opens up challenging research avenues centered on encoding large, complex documents. In this paper, we introduce a new dataset for summarisation of computer science publications by exploiting a large resource of author provided summaries and show straightforward ways of extending it further. We develop models on the dataset making use of both neural sentence encoding and traditionally used summarisation features and show that models which encode sentences as well as their local and global context perform best, significantly outperforming well-established baseline methods.", "title": "" }, { "docid": "84f1cdf2729e206bf56d336e0c09d9d9", "text": "Deep generative models have demonstrated great performance in image synthesis. However, results deteriorate in case of spatial deformations, since they generate images of objects directly, rather than modeling the intricate interplay of their inherent shape and appearance. We present a conditional U-Net [30] for shape-guided image generation, conditioned on the output of a variational autoencoder for appearance. The approach is trained end-to-end on images, without requiring samples of the same object with varying pose or appearance. Experiments show that the model enables conditional image generation and transfer. Therefore, either shape or appearance can be retained from a query image, while freely altering the other. Moreover, appearance can be sampled due to its stochastic latent representation, while preserving shape. 
In quantitative and qualitative experiments on COCO [20], DeepFashion [21, 23], shoes [43], Market-1501 [47] and handbags [49], the approach demonstrates significant improvements over the state-of-the-art.", "title": "" }, { "docid": "ccd5f02b97643b3c724608a4e4a67fdb", "text": "Modular robotic systems that integrate distally with commercially available endoscopic equipment have the potential to improve the standard-of-care in therapeutic endoscopy by granting clinicians capabilities not present in commercial tools, such as precision dexterity and feedback sensing. With the desire to integrate both sensing and actuation distally for closed-loop position control in fully deployable, endoscope-based robotic modules, commercial sensor and actuator options that acquiesce to the strict form-factor requirements are sparse or nonexistent. Herein, we describe a proprioceptive angle sensor for potential closed-loop position control applications in distal robotic modules. Fabricated monolithically using printed-circuit MEMS, the sensor employs a kinematic linkage and the principle of light intensity modulation to sense the angle of articulation with a high degree of fidelity. Onboard temperature and environmental irradiance measurements, coupled with linear regression techniques, provide robust angle measurements that are insensitive to environmental disturbances. The sensor is capable of measuring ±45 degrees of articulation with an RMS error of 0.98 degrees. An ex vivo demonstration shows that the sensor can give real-time proprioceptive feedback when coupled with an actuator module, opening up the possibility of fully distal closed-loop control.", "title": "" }, { "docid": "17797efad4f13f961ed300316eb16b6b", "text": "Cellular senescence, which has been linked to age-related diseases, occurs during normal aging or as a result of pathological cell stress. Due to their incapacity to proliferate, senescent cells cannot contribute to normal tissue maintenance and tissue repair. Instead, senescent cells disturb the microenvironment by secreting a plethora of bioactive factors that may lead to inflammation, regenerative dysfunction and tumor progression. Recent understanding of stimuli and pathways that induce and maintain cellular senescence offers the possibility to selectively eliminate senescent cells. This novel strategy, which so far has not been tested in humans, has been coined senotherapy or senolysis. In mice, senotherapy proved to be effective in models of accelerated aging and also during normal chronological aging. Senotherapy prolonged lifespan, rejuvenated the function of bone marrow, muscle and skin progenitor cells, improved vasomotor function and slowed down atherosclerosis progression. While initial studies used genetic approaches for the killing of senescent cells, recent approaches showed similar effects with senolytic drugs. These observations open up exciting possibilities with a great potential for clinical development. However, before the integration of senotherapy into patient care can be considered, we need further research to improve our insight into the safety and efficacy of this strategy during short- and long-term use.", "title": "" }, { "docid": "9f037fd53e6547b689f88fc1c1bed10a", "text": "We study feature selection as a means to optimize the baseline clickbait detector employed at the Clickbait Challenge 2017 [6]. The challenge’s task is to score the “clickbaitiness” of a given Twitter tweet on a scale from 0 (no clickbait) to 1 (strong clickbait).
Unlike most other approaches submitted to the challenge, the baseline approach is based on manual feature engineering and does not compete out of the box with many of the deep learning-based approaches. We show that scaling up feature selection efforts to heuristically identify better-performing feature subsets catapults the performance of the baseline classifier to second rank overall, beating 12 other competing approaches and improving over the baseline performance by 20%. This demonstrates that traditional classification approaches can still keep up with deep learning on this task.", "title": "" }, { "docid": "81fc9abd3e2ad86feff7bd713cff5915", "text": "With the advance of the Internet, e-commerce systems have become extremely important and convenient to human being. More and more products are sold on the web, and more and more people are purchasing products online. As a result, an increasing number of customers post product reviews at merchant websites and express their opinions and experiences in any network space such as Internet forums, discussion groups, and blogs. So there is a large amount of data records related to products on the Web, which are useful for both manufacturers and customers. Mining product reviews becomes a hot research topic, and prior researches mostly base on product features to analyze the opinions. So mining product features is the first step to further reviews processing. In this paper, we present how to mine product features. The proposed extraction approach is different from the previous methods because we only mine the features of the product in opinion sentences which the customers have expressed their positive or negative experiences on. In order to find opinion sentence, a SentiWordNet-based algorithm is proposed. There are three steps to perform our task: (1) identifying opinion sentences in each review which is positive or negative via SentiWordNet; (2) mining product features that have been commented on by customers from opinion sentences; (3) pruning feature to remove those incorrect features. Compared to previous work, our experimental result achieves higher precision and recall.", "title": "" }, { "docid": "b4cb716b235ece6ee647fc17b6bb13b6", "text": "Prof. Jay W. Forrester pioneered industrial Dynamics. It enabled the management scientists to understand well enough the dynamics of change in economics/business systems. Four basic foundations on which System Dynamics rest were discussed. The thought process prevailing and their shortcomings are pointed out and the success story of System Dynamics was explained with the help of Production-Distribution model. System Dynamics graduated to Learning Organisations. Senge with his concept of integrating five distinct disciplines of Systems Thinking, Personal Mastery, Mental Models, Shared Vision and Team Learning succeeded in bringing forth the System Dynamics to the reach of large number of practitioners and teachers of management. However, Systems Thinking part of the Learning Organisation fails to reach out because it lacks the architecture needed to support it. Richmond provided the much-needed architectural support. It enables the mapping language to be economical, consistent and relate to the dynamic behaviour of the system. Progression from Industrial Dynamics to Systems Thinking has been slow due to different postures taken by the professionals. 
It is suggested that Systems Thinking has a lot to adopt from different disciplines and should celebrate synergies and avail cross-fertilisation or opportunities. Systems Thinking is transparent and can seamlessly leverage the way the business is performed. ★ A. K. Rao is Member of Faculty at Administrative Staff College of India, Bellavista, Hyderabad 500 082, India. E-mail: [email protected] and A.Subash Babu is Professor in Industrial Engineering and Operations Research at Indian Institute of Technology, Bombay 400 076, India E-mail: [email protected] Industrial Dynamics to Systems Thinking A.K.Rao & A. Subash Babu Introduction: In the year 1958, the first words penned down by the pioneer of System Dynamics (then Industrial Dynamics) Jay W. Forrester were “Management is on the verge of a major breakthrough in understanding how industrial company success depends on the interaction between the flows of information, materials, manpower and capital equipment”. The article titled “Industrial Dynamics: A Major Breakthrough for Decision Makers” in Harvard Business Review attracted attention of management scientists. Several controversies arose when further articles appeared subsequently. Today, 40 years since the first article in the field of System Dynamics appeared in print, the progress when evaluated evokes mixed response. If it were a major breakthrough for decisionmakers, then why did it not proliferate into the curriculum of business schools as common as that of Principles of Management or Business Statistics or any other standard subjects of study? The purpose of this article is to critically review three seminal works in the field of System Dynamics: Industrial Dynamics by Jay W. Forrester (1960), Fifth Discipline: The Art and Practice of Learning Organisations by Peter Senge (1990) and Systems Thinking by Barry Richmond (1997) and to understand the pitfalls in reaching out to the large body of academia and practising managers. Forrester in his work raised a few fundamental issues way back in early 60’s that most of the corporate managers are able to comprehend only now. He clearly answered the question on what is the next frontier of our knowledge. The great advances and opportunities in the future he predicted would appear in the field of management and economics. The shift from technical to the social front was evidenced in the way global competition and the rules of the game changed. The test of leadership is to show the way to economic development and stability. The leading question therefore is whether we understand well enough the dynamics of change in economic/business systems to pioneer this new frontier? Forrester offered the much-needed solution: System Dynamics. The foundations for a body of knowledge called system dynamics were the concepts of servomechanism, controlled experiments, and digital computing and better understanding of control theory. Servomechanism of information feedback theory was evolved during the World War II. Till then, time delays, amplification effects and the structure of the system were taken for granted. The realisation that interaction between components is more crucial to the system behaviour than the components themselves are of recent origin. The thesis out of information-feedback study led to the conclusion that information-feedback system is all pervasive in the nature. It exists whenever the environment changes, and leads to a decision that results in action, which in-turn affects the environment. 
This leads us to an axiom that everything that we do as an individual, as an organisation, as an industry, as a nation, or even as a society irrespective of the divisibility of the unit is done in the context of information-feedback system. This is the bedrock philosophy of system dynamics. The second foundation is the realisation of the importance of the experimental approach to understanding of system dynamics. The standard acceptable format of research study of going from general analytical solution to the particular special case was reversed to the empirical approach. In this format a number of particular situations were studied and from these generalisations were inferred. This is the basis for learning. The activity basis for learning is experience. Some of these generalisations were given a name by Senge (1990) as Nature’s Templates. The third foundation for progress of system dynamics was digital computing machines. By 1945, systems of twenty variables were difficult to handle. By 1955, the digital computer appeared, opening the way to the simulation of systems far beyond the capability of analogue machines. Models of 2000 and more variables with out any restrictions on representing non-linear phenomena could easily be simulated on a digital computer at costs within the reach of the academia and the research organisations. The simulation of information feedback models of important managerial and economic questions is an area demanding high efficiency. A cost reduction factor of ten thousand or more in computation infrastructure placed one in a completely different environment than that existed a few years ago. The fourth foundation was better appreciation of policy and decision. There is an orderly basis that prescribes most of our present managerial decisions. These decisions are not entirely adhoc but are strongly conditioned by the environment. This being so, policies governing decisions can be laid down and their effect on economic/business behaviour can be studied. Forrester’s Postulates and Applications : The idea that economic and industrial systems could be depicted through linear analysis was the major stumbling block to begin thinking dynamically. Most of the policy analysis goes on to define the problem on hand as narrowly as possible in the name of attaining the objective of being specific and crisp. On one hand, it enables the mathematics of such analysis tractable but unfortunately, it ignores the fact that almost every factor in the economic or industrial systems is non-linear. Much of the important behaviour of the system is the direct manifestation of nonlinear characteristic of the system components. Social systems are assumed to be inherently stable and that they constantly seek to achieve the equilibrium status. While it is the system’s tendency to r each the equ ilibrium in its inanimate consideration, the players in the system keep working towards disturbing the equilibrium conditions. Perfect market is the stated goal of the simple economic system with its most important components the supply and the demand trying to equal each other in the long run. But during this period, the players in the market disturb the initial conditions by several means such as inducing technology, introducing substitutes, differentiating the products etc. which makes that the seemingly achievable perfect market an impossible dream. Therefore, the notion of sustainable competitive advantage is only fleeting in nature. 
The analysis used for solving the market problems with an assumption of stable systems in thus not valid. There appears ample evidence that much of our industrial and economic systems exhibit behaviours characterised by instability. Mathematical economics and management science have often been more closely allied to formal mathematics than to economics or management. The difference of orientation is glaringly evident on comparison of business literature with publications on management science. Another evidence of the bias towards mathematical rather than managerial motivation is seen in preoccupation in optimum solutions. In the linear analysis, the first action that is performed is to define the objective function. Thus specifying the purpose of a model of an economic system being its ability to predict specific future action. Further, it is used to validate the model. Models are required to predict the character and the nature of the system in question so that redesign could take place in congruence with the desired state. This is entirely different and more useful than the objective functions, which provide the events such as specific future times of peaks or valleys such as in a sales curve. It is a belief that a model must be limited to considering those variables, which have generally accepted definitions and must have objective value, attached to them. Many undefined concepts are known to be of crucial importance to business systems, which are known as soft variables. Linear models are not capable of capturing these details in the traditional methodology of problem solving. If the subjective matters are considered to be of crucial importance to the business system behaviour, it must be conceded that they must some how be incorporated in the model. Therefore, it is necessary to provide legi", "title": "" }, { "docid": "6416eb9235954730b8788b7b744d9e5b", "text": "This paper presents a machine learning based handover management scheme for LTE to improve the Quality of Experience (QoE) of the user in the presence of obstacles. We show that, in this scenario, a state-of-the-art handover algorithm is unable to select the appropriate target cell for handover, since it always selects the target cell with the strongest signal without taking into account the perceived QoE of the user after the handover. In contrast, our scheme learns from past experience how the QoE of the user is affected when the handover was done to a certain eNB. Our performance evaluation shows that the proposed scheme substantially improves the number of completed downloads and the average download time compared to state-of-the-art. Furthermore, its performance is close to an optimal approach in the coverage region affected by an obstacle.", "title": "" } ]
scidocsrr
913cbf1c706a47094aabf3fc2f764150
The Impacts of Social Media on Bitcoin Performance
[ { "docid": "c02d207ed8606165e078de53a03bf608", "text": "School of Business, University of Maryland (e-mail: mtrusov@rhsmith. umd.edu). Anand V. Bodapati is Associate Professor of Marketing (e-mail: [email protected]), and Randolph E. Bucklin is Peter W. Mullin Professor (e-mail: [email protected]), Anderson School of Management, University of California, Los Angeles. The authors are grateful to Christophe Van den Bulte and Dawn Iacobucci for their insightful and thoughtful comments on this work. John Hauser served as associate editor for this article. MICHAEL TRUSOV, ANAND V. BODAPATI, and RANDOLPH E. BUCKLIN*", "title": "" } ]
[ { "docid": "7e40c98b9760e1f47a0140afae567b7f", "text": "Low-level saliency cues or priors do not produce good enough saliency detection results especially when the salient object presents in a low-contrast background with confusing visual appearance. This issue raises a serious problem for conventional approaches. In this paper, we tackle this problem by proposing a multi-context deep learning framework for salient object detection. We employ deep Convolutional Neural Networks to model saliency of objects in images. Global context and local context are both taken into account, and are jointly modeled in a unified multi-context deep learning framework. To provide a better initialization for training the deep neural networks, we investigate different pre-training strategies, and a task-specific pre-training scheme is designed to make the multi-context modeling suited for saliency detection. Furthermore, recently proposed contemporary deep models in the ImageNet Image Classification Challenge are tested, and their effectiveness in saliency detection are investigated. Our approach is extensively evaluated on five public datasets, and experimental results show significant and consistent improvements over the state-of-the-art methods.", "title": "" }, { "docid": "b78f1e6a5e93c1ad394b1cade293829f", "text": "This paper presents a novel approach for creation of topographical function and object markers used within watershed segmentation. Typically, marker-driven watershed segmentation extracts seeds indicating the presence of objects or background at specific image locations. The marker locations are then set to be regional minima within the topological surface (typically, the gradient of the original input image), and the watershed algorithm is applied. In contrast, our approach uses two classifiers, one trained to produce markers, the other trained to produce object boundaries. As a result of using machine-learned pixel classification, the proposed algorithm is directly applicable to both single channel and multichannel image data. Additionally, rather than flooding the gradient image, we use the inverted probability map produced by the second aforementioned classifier as input to the watershed algorithm. Experimental results demonstrate the superior performance of the classification-driven watershed segmentation algorithm for the tasks of 1) image-based granulometry and 2) remote sensing", "title": "" }, { "docid": "fb31ead676acdd048d699ddfb4ddd17a", "text": "Software defects prediction aims to reduce software testing efforts by guiding the testers through the defect classification of software systems. Defect predictors are widely used in many organizations to predict software defects in order to save time, improve quality, testing and for better planning of the resources to meet the timelines. The application of statistical software testing defect prediction model in a real life setting is extremely difficult because it requires more number of data variables and metrics and also historical defect data to predict the next releases or new similar type of projects. This paper explains our statistical model, how it will accurately predict the defects for upcoming software releases or projects. We have used 20 past release data points of software project, 5 parameters and build a model by applying descriptive statistics, correlation and multiple linear regression models with 95% confidence intervals (CI). 
In this appropriate multiple linear regression model, the R-square value was 0.91 and its standard error was 5.90%. The software testing defect prediction model is now being used to predict defects at various testing projects and operational releases. We have found 90.76% precision between actual and predicted defects.", "title": "" }, { "docid": "8e654ace264f8062caee76b0a306738c", "text": "We present a fully fledged practical working application for a rule-based NLG system that is able to create non-trivial, human-sounding narrative from structured data, in any language (e.g., English, German, Arabic and Finnish) and for any topic.", "title": "" }, { "docid": "06672f6316878c80258ad53988a7e953", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use. Please contact the publisher regarding any further use of this work. Publisher contact information may be obtained at http://www.jstor.org/journals/astata.html. Each copy of any part of a JSTOR transmission must contain the same copyright notice that appears on the screen or printed page of such transmission.", "title": "" }, { "docid": "fe57e844c12f7392bdd29a2e2396fc50", "text": "With the help of modern information communication technology, mobile banking as a new type of financial services carrier can provide efficient and effective financial services for clients. Compared with Internet banking, mobile banking is more secure and user friendly. The implementation of wireless communication technologies may result in more complicated information security problems. Based on the principles of information security, this paper presents issues of information security of mobile banking and discusses security protection measures such as encryption technology, identity authentication, digital signature, and WPKI technology.", "title": "" }, { "docid": "64ba4467dc4495c6828f2322e8f415f2", "text": "Due to the advancement of microoptoelectromechanical systems and microelectromechanical systems (MEMS) technologies, novel display architectures have emerged. One of the most successful and well-known examples is the Digital Micromirror Device from Texas Instruments, a 2-D array of bistable MEMS mirrors, which function as spatial light modulators for the projection display. This concept of employing an array of modulators is also seen in the grating light valve and the interferometric modulator display, where the modulation mechanism is based on optical diffraction and interference, respectively. Along with this trend comes the laser scanning display, which requires a single scanning device with a large scan angle and a high scan frequency. A special example in this category is the retinal scanning display, which is a head-up wearable module that laser-scans the image directly onto the retina.
MEMS technologies are also found in other display-related research, such as stereoscopic (3-D) displays and plastic thin-film displays.", "title": "" }, { "docid": "10f3cafc05b3fb3b235df34aebbe0e23", "text": "To cope with monolithic controller replicas and the current unbalance situation in multiphase converters, a pseudo-ramp current balance technique is proposed to achieve time-multiplexing current balance in voltage-mode multiphase DC-DC buck converter. With only one modulation controller, silicon area and power consumption caused by the replicas of controller can be reduced significantly. Current balance accuracy can be further enhanced since the mismatches between different controllers caused by process, voltage, and temperature variations are removed. Moreover, the offset cancellation control embedded in the current matching unit is used to eliminate intrinsic offset voltage existing at the operational transconductance amplifier for improved current balance. An explicit model, which contains both voltage and current balance loops with non-ideal effects, is derived for analyzing system stability. Experimental results show that current difference between each phase can be decreased by over 83% under both heavy and light load conditions.", "title": "" }, { "docid": "358faa358eb07b8c724efcdb72334dc7", "text": "We present a novel simple technique for rapidly creating and presenting interactive immersive 3D exploration experiences of 2D pictures and images of natural and artificial landscapes. Various application domains, ranging from virtual exploration of works of art to street navigation systems, can benefit from the approach. The method, dubbed PEEP, is motivated by the perceptual characteristics of the human visual system in interpreting perspective cues and detecting relative angles between lines. It applies to the common perspective images with zero or one vanishing points, and does not require the extraction of a precise geometric description of the scene. Taking as input a single image without other information, an automatic analysis technique fits a simple but perceptually consistent parametric 3D representation of the viewed space, which is used to drive an indirect constrained exploration method capable to provide the illusion of 3D exploration with realistic monocular (perspective and motion parallax) and binocular (stereo) depth cues. The effectiveness of the method is demonstrated on a variety of casual pictures and exploration configurations, including mobile devices.", "title": "" }, { "docid": "c0440776fdd2adab39e9a9ba9dd56741", "text": "Corynebacterium glutamicum is an important industrial metabolite producer that is difficult to genetically engineer. Although the Streptococcus pyogenes (Sp) CRISPR-Cas9 system has been adapted for genome editing of multiple bacteria, it cannot be introduced into C. glutamicum. Here we report a Francisella novicida (Fn) CRISPR-Cpf1-based genome-editing method for C. glutamicum. CRISPR-Cpf1, combined with single-stranded DNA (ssDNA) recombineering, precisely introduces small changes into the bacterial genome at efficiencies of 86-100%. Large gene deletions and insertions are also obtained using an all-in-one plasmid consisting of FnCpf1, CRISPR RNA, and homologous arms. The two CRISPR-Cpf1-assisted systems enable N iterative rounds of genome editing in 3N+4 or 3N+2 days. A proof-of-concept, codon saturation mutagenesis at G149 of γ-glutamyl kinase relieves L-proline inhibition using Cpf1-assisted ssDNA recombineering. 
Thus, CRISPR-Cpf1-based genome editing provides a highly efficient tool for genetic engineering of Corynebacterium and other bacteria that cannot utilize the Sp CRISPR-Cas9 system.", "title": "" }, { "docid": "9a6ce56536585e54d3e15613b2fa1197", "text": "This paper discusses the Urdu script characteristics, Urdu Nastaleeq, and a simple but novel and robust technique to recognize printed Urdu script without a lexicon. Urdu, being a member of the Arabic script family, is cursive and complex in its nature; the main complexity of Urdu compound/connected text is not its connections but the forms/shapes the characters take when placed at the initial, middle, or end of a word. The character recognition technique presented here uses the inherent complexity of the Urdu script to solve the problem. A word is scanned and analyzed for the level of its complexity; the point where the level of complexity changes is marked for a character, segmented, and fed to neural networks. A prototype of the system has been tested on Urdu text and currently achieves 93.4% accuracy on average. Keywords— Cursive Script, OCR, Urdu.", "title": "" }, { "docid": "de63a161a9539931f834908477fb5ad1", "text": "Network function virtualization introduces additional complexity for network management through the use of virtualization environments. The amount of managed data and the operational complexity increase, which makes service assurance and failure recovery harder to realize. In response to this challenge, the paper proposes a distributed management function, called virtualized network management function (vNMF), to detect failures related to virtualized services. vNMF detects the failures by monitoring physical-layer statistics that are processed with a self-organizing map algorithm. Experimental results show that memory leaks and network congestion failures can be successfully detected and that the accuracy of failure detection can be significantly improved compared to common k-means clustering.", "title": "" }, { "docid": "5c40b6fadf2f8f4b39c7adf1e894e600", "text": "Monitoring the flow of traffic along network paths is essential for SDN programming and troubleshooting. For example, traffic engineering requires measuring the ingress-egress traffic matrix; debugging a congested link requires determining the set of sources sending traffic through that link; and locating a faulty device might involve detecting how far along a path the traffic makes progress. Past path-based monitoring systems operate by diverting packets to collectors that perform \"after-the-fact\" analysis, at the expense of large data-collection overhead. In this paper, we show how to do more efficient \"during-the-fact\" analysis. We introduce a query language that allows each SDN application to specify queries independently of the forwarding state or the queries of other applications. The queries use a regular-expression-based path language that includes SQL-like \"groupby\" constructs for count aggregation. We track the packet trajectory directly on the data plane by converting the regular expressions into an automaton, and tagging the automaton state (i.e., the path prefix) in each packet as it progresses through the network. The SDN policies that implement the path queries can be combined with arbitrary packet-forwarding policies supplied by other elements of the SDN platform.
A preliminary evaluation of our prototype shows that our \"during-the-fact\" strategy reduces data-collection overhead over \"after-the-fact\" strategies.", "title": "" }, { "docid": "0499618380bc33d376160a770683e807", "text": "As multicore and manycore processor architectures are emerging and the core counts per chip continue to increase, it is important to evaluate and understand the performance and scalability of Parallel Discrete Event Simulation (PDES) on these platforms. Most existing architectures are still limited to a modest number of cores, feature simple designs and do not exhibit heterogeneity, making it impossible to perform comprehensive analysis and evaluations of PDES on these platforms. Instead, in this paper we evaluate PDES using a full-system cycle-accurate simulator of a multicore processor and memory subsystem. With this approach, it is possible to flexibly configure the simulator and perform exploration of the impact of architecture design choices on the performance of PDES. In particular, we answer the following four questions with respect to PDES performance and scalability: (1) For the same total chip area, what is the best design point in terms of the number of cores and the size of the on-chip cache? (2) What is the impact of using in-order vs. out-of-order cores? (3) What is the impact of a heterogeneous system with a mix of in-order and out-of-order cores? (4) What is the impact of object partitioning on PDES performance in heterogeneous systems? To answer these questions, we use MARSSx86 simulator for evaluating performance, and rely on Cacti and McPAT tools to derive the area and latency estimates for cores and caches.", "title": "" }, { "docid": "5a601e08824185bafeb94ac432b6e92e", "text": "Transforming a natural language (NL) question into a corresponding logical form (LF) is central to the knowledge-based question answering (KB-QA) task. Unlike most previous methods that achieve this goal based on mappings between lexicalized phrases and logical predicates, this paper goes one step further and proposes a novel embedding-based approach that maps NL-questions into LFs for KBQA by leveraging semantic associations between lexical representations and KBproperties in the latent space. Experimental results demonstrate that our proposed method outperforms three KB-QA baseline methods on two publicly released QA data sets.", "title": "" }, { "docid": "e58882a41c4335caf957105df192edc5", "text": "Credit card fraud is a serious problem in financial services. Billions of dollars are lost due to credit card fraud every year. There is a lack of research studies on analyzing real-world credit card data owing to confidentiality issues. In this paper, machine learning algorithms are used to detect credit card fraud. Standard models are first used. Then, hybrid methods which use AdaBoost and majority voting methods are applied. To evaluate the model efficacy, a publicly available credit card data set is used. Then, a real-world credit card data set from a financial institution is analyzed. In addition, noise is added to the data samples to further assess the robustness of the algorithms. 
The experimental results positively indicate that the majority voting method achieves good accuracy rates in detecting fraud cases in credit cards.", "title": "" }, { "docid": "3d5bbe4dcdc3ad787e57583f7b621e36", "text": "A miniaturized antenna employing a negative index metamaterial with modified split-ring resonator (SRR) and capacitance-loaded strip (CLS) unit cells is presented for Ultra wideband (UWB) microwave imaging applications. Four left-handed (LH) metamaterial (MTM) unit cells are located along one axis of the antenna as the radiating element. Each left-handed metamaterial unit cell combines a modified split-ring resonator (SRR) with a capacitance-loaded strip (CLS) to obtain a design architecture that simultaneously exhibits both negative permittivity and negative permeability, which ensures a stable negative refractive index to improve the antenna performance for microwave imaging. The antenna structure, with dimension of 16 × 21 × 1.6 mm³, is printed on a low dielectric FR4 material with a slotted ground plane and a microstrip feed. The measured reflection coefficient demonstrates that this antenna attains 114.5% bandwidth covering the frequency band of 3.4-12.5 GHz for a voltage standing wave ratio of less than 2 with a maximum gain of 5.16 dBi at 10.15 GHz. There is a stable harmony between the simulated and measured results that indicate improved nearly omni-directional radiation characteristics within the operational frequency band. The stable surface current distribution, negative refractive index characteristic, considerable gain and radiation properties make this proposed negative index metamaterial antenna optimal for UWB microwave imaging applications.", "title": "" }, { "docid": "406e06e00799733c517aff88c9c85e0b", "text": "Matrix rank minimization problem is in general NP-hard. The nuclear norm is used to substitute the rank function in many recent studies. Nevertheless, the nuclear norm approximation adds all singular values together and the approximation error may depend heavily on the magnitudes of singular values. This might restrict its capability in dealing with many practical problems. In this paper, an arctangent function is used as a tighter approximation to the rank function. We use it on the challenging subspace clustering problem. For this nonconvex minimization problem, we develop an effective optimization procedure based on a type of augmented Lagrange multipliers (ALM) method. Extensive experiments on face clustering and motion segmentation show that the proposed method is effective for rank approximation.", "title": "" }, { "docid": "cef4c47b512eb4be7dcadcee35f0b2ca", "text": "This paper presents a project that allows the Baxter humanoid robot to play chess against human players autonomously. The complete solution uses three main subsystems: computer vision based on a single camera embedded in Baxter's arm to perceive the game state, an open-source chess engine to compute the next move, and a mechatronics subsystem with a 7-DOF arm to manipulate the pieces. Baxter can play chess successfully in unconstrained environments by dynamically responding to changes in the environment. 
This implementation demonstrates Baxter's capabilities of vision-based adaptive control and small-scale manipulation, which can be applicable to numerous applications, while also contributing to the computer vision chess analysis literature.", "title": "" }, { "docid": "986a0b910a4674b3c4bf92a668780dd6", "text": "One of the most important attributes of the polymerase chain reaction (PCR) is its exquisite sensitivity. However, the high sensitivity of PCR also renders it prone to falsepositive results because of, for example, exogenous contamination. Good laboratory practice and specific anti-contamination strategies are essential to minimize the chance of contamination. Some of these strategies, for example, physical separation of the areas for the handling samples and PCR products, may need to be taken into consideration during the establishment of a laboratory. In this chapter, different strategies for the detection, avoidance, and elimination of PCR contamination will be discussed.", "title": "" } ]
scidocsrr
e1640b20b57f2db83b41db76947416dc
Data Mining in the Dark: Darknet Intelligence Automation
[ { "docid": "22bdd2c36ef72da312eb992b17302fbe", "text": "In this paper, we present an operational system for cyber threat intelligence gathering from various social platforms on the Internet particularly sites on the darknet and deepnet. We focus our attention to collecting information from hacker forum discussions and marketplaces offering products and services focusing on malicious hacking. We have developed an operational system for obtaining information from these sites for the purposes of identifying emerging cyber threats. Currently, this system collects on average 305 high-quality cyber threat warnings each week. These threat warnings include information on newly developed malware and exploits that have not yet been deployed in a cyber-attack. This provides a significant service to cyber-defenders. The system is significantly augmented through the use of various data mining and machine learning techniques. With the use of machine learning models, we are able to recall 92% of products in marketplaces and 80% of discussions on forums relating to malicious hacking with high precision. We perform preliminary analysis on the data collected, demonstrating its application to aid a security expert for better threat analysis.", "title": "" }, { "docid": "6d31ee4b0ad91e6500c5b8c7e3eaa0ca", "text": "A host of tools and techniques are now available for data mining on the Internet. The explosion in social media usage and people reporting brings a new range of problems related to trust and credibility. Traditional media monitoring systems have now reached such sophistication that real time situation monitoring is possible. The challenge though is deciding what reports to believe, how to index them and how to process the data. Vested interests allow groups to exploit both social media and traditional media reports for propaganda purposes. The importance of collecting reports from all sides in a conflict and of balancing claims and counter-claims becomes more important as ease of publishing increases. Today the challenge is no longer accessing open source information but in the tagging, indexing, archiving and analysis of the information. This requires the development of general-purpose and domain specific knowledge bases. Intelligence tools are needed which allow an analyst to rapidly access relevant data covering an evolving situation, ranking sources covering both facts and opinions.", "title": "" } ]
[ { "docid": "a854ee8cf82c4bd107e93ed0e70ee543", "text": "Although the memorial benefits of testing are well established empirically, the mechanisms underlying this benefit are not well understood. The authors evaluated the mediator shift hypothesis, which states that test-restudy practice is beneficial for memory because retrieval failures during practice allow individuals to evaluate the effectiveness of mediators and to shift from less effective to more effective mediators. Across a series of experiments, participants used a keyword encoding strategy to learn word pairs with test-restudy practice or restudy only. Robust testing effects were obtained in all experiments, and results supported predictions of the mediator shift hypothesis. First, a greater proportion of keyword shifts occurred during test-restudy practice versus restudy practice. Second, a greater proportion of keyword shifts occurred after retrieval failure trials versus retrieval success trials during test-restudy practice. Third, a greater proportion of keywords were recalled on a final keyword recall test after test-restudy versus restudy practice.", "title": "" }, { "docid": "bc6877a5a83531a794ac1c8f7a4c7362", "text": "A number of times when using cross-validation (CV) while trying to do classification/probability estimation we have observed surprisingly low AUC's on real data with very few positive examples. AUC is the area under the ROC and measures the ranking ability and corresponds to the probability that a positive example receives a higher model score than a negative example. Intuition seems to suggest that no reasonable methodology should ever result in a model with an AUC significantly below 0.5. The focus of this paper is not on the estimator properties of CV (bias/variance/significance), but rather on the properties of the 'holdout' predictions based on which the CV performance of a model is calculated. We show that CV creates predictions that have an 'inverse' ranking with AUC well below 0.25 using features that were initially entirely unpredictive and models that can only perform monotonic transformations. In the extreme, combining CV with bagging (repeated averaging of out-of-sample predictions) generates 'holdout' predictions with perfectly opposite rankings on random data. While this would raise immediate suspicion upon inspection, we would like to caution the data mining community against using CV for stacking or in currently popular ensemble methods. They can reverse the predictions by assigning negative weights and produce in the end a model that appears to have close to perfect predictability while in reality the data was random.", "title": "" }, { "docid": "a33486dfec199cd51e885d6163082a96", "text": "In this study, the aim is to examine the most popular eSport applications at a global scale. In this context, the App Store and Google Play Store application platforms which have the highest number of users at a global scale were focused on. For this reason, the eSport applications included in these two platforms constituted the sampling of the present study. A data collection form was developed by the researcher of the study in order to collect the data in the study. This form included the number of the countries, the popularity ratings of the application, the name of the application, the type of it, the age limit, the rating of the likes, the company that developed it, the version and the first appearance date. 
The study was conducted with the Qualitative Research Method, and the Case Study design was made use of in this process; and the Descriptive Analysis Method was used to analyze the data. As a result of the study, it was determined that the most popular eSport applications at a global scale were football, which ranked the first, basketball, billiards, badminton, skateboarding, golf and dart. It was also determined that the popularity of the mobile eSport applications changed according to countries and according to being free or paid. It was determined that the popularity of these applications differed according to the individuals using the App Store and Google Play Store application markets. As a result, it is possible to claim that mobile eSport applications have a wide usage area at a global scale and are accepted widely. In addition, it was observed that the interest in eSport applications was similar to that in traditional sports. However, in the present study, a certain date was set, and the interest in mobile eSport applications was analyzed according to this specific date. In future studies, different dates and different fields like educational sciences may be set to analyze the interest in mobile eSport applications. In this way, findings may be obtained on the change of the interest in mobile eSport applications according to time. The findings of the present study and similar studies may have the quality of guiding researchers and system/software developers in terms of showing the present status of the topic and revealing the relevant needs.", "title": "" }, { "docid": "7394f3000da8af0d4a2b33fed4f05264", "text": "We often base our decisions on uncertain data - for instance, when consulting the weather forecast before deciding what to wear. Due to their uncertainty, such forecasts can differ by provider. To make an informed decision, many people compare several forecasts, which is a time-consuming and cumbersome task. To facilitate comparison, we identified three aggregation mechanisms for forecasts: manual comparison and two mechanisms of computational aggregation. In a survey, we compared the mechanisms using different representations. We then developed a weather application to evaluate the most promising candidates in a real-world study. Our results show that aggregation increases users' confidence in uncertain data, independent of the type of representation. Further, we find that for daily events, users prefer to use computationally aggregated forecasts. However, for high-stakes events, they prefer manual comparison. We discuss how our findings inform the design of improved interfaces for comparison of uncertain data, including non-weather purposes.", "title": "" }, { "docid": "2216f853543186e73b1149bb5a0de297", "text": "Scaffolds have been utilized in tissue regeneration to facilitate the formation and maturation of new tissues or organs where a balance between temporary mechanical support and mass transport (degradation and cell growth) is ideally achieved. Polymers have been widely chosen as tissue scaffolding material having a good combination of biodegradability, biocompatibility, and porous structure. Metals that can degrade in physiological environment, namely, biodegradable metals, are proposed as potential materials for hard tissue scaffolding where biodegradable polymers are often considered as having poor mechanical properties. 
Biodegradable metal scaffolds have shown interesting mechanical properties that are close to those of human bone, with tailored degradation behaviour. Current promising fabrication techniques for making scaffolds, such as the computation-aided solid free-form method, can be easily applied to metals. With further optimization in topologically ordered porosity design exploiting material properties and fabrication techniques, porous biodegradable metals could be the potential materials for making hard tissue scaffolds.", "title": "" }, { "docid": "501f9cb511e820c881c389171487f0b4", "text": "An omnidirectional circularly polarized (CP) antenna array is proposed. The antenna array is composed of four identical CP antenna elements and one parallel strip-line feeding network. Each of the CP antenna elements comprises a dipole and a zero-phase-shift (ZPS) line loop. The in-phase fed dipole and the ZPS line loop generate vertically and horizontally polarized omnidirectional radiation, respectively. Furthermore, the vertically polarized dipole is positioned in the center of the horizontally polarized ZPS line loop. The size of the loop is designed such that a 90° phase difference is realized between the two orthogonal components because of the spatial difference and, therefore, generates CP omnidirectional radiation. A 1 × 4 antenna array at 900 MHz is prototyped and targeted to ultra-high frequency (UHF) radio frequency identification (RFID) applications. The measurement results show that the antenna array achieves a 10-dB return loss over a frequency range of 900-935 MHz and 3-dB axial-ratio (AR) from 890 to 930 MHz. At the frequency of 915 MHz, the measured maximum AR of 1.53 dB, maximum gain of 5.4 dBic, and an omnidirectionality of ±1 dB are achieved.", "title": "" }, { "docid": "58d19a5460ce1f830f7a5e2cb1c5ebca", "text": "In multi-source sequence-to-sequence tasks, the attention mechanism can be modeled in several ways. This topic has been thoroughly studied on recurrent architectures. In this paper, we extend the previous work to the encoder-decoder attention in the Transformer architecture. We propose four different input combination strategies for the encoder-decoder attention: serial, parallel, flat, and hierarchical. We evaluate our methods on tasks of multimodal translation and translation with multiple source languages. The experiments show that the models are able to use multiple sources and improve over single source baselines.", "title": "" }, { "docid": "54bdabea83e86d21213801c990c60f4d", "text": "A method of depicting crew climate using a group diagram based on behavioral ratings is described. Behavioral ratings were made of twelve three-person professional airline cockpit crews in full-mission simulations. These crews had been part of an earlier study in which captains had been grouped into three personality types, based on pencil and paper pre-tests. We found that low error rates were related to group climate variables as well as positive captain behaviors.", "title": "" }, { "docid": "b5babae9b9bcae4f87f5fe02459936de", "text": "The study evaluated the effects of formocresol (FC), ferric sulphate (FS), calcium hydroxide (Ca[OH](2)), and mineral trioxide aggregate (MTA) as pulp dressing agents in pulpotomized primary molars. Sixteen children, each with at least four primary molars requiring pulpotomy, were selected. Eighty selected teeth were divided into four groups and treated with one of the pulpotomy agents.
The children were recalled for clinical and radiographic examination every 6 months during 2 years of follow-up. Eleven children with 56 teeth arrived for clinical and radiographic follow-up evaluation at 24 months. The follow-up evaluations revealed that the success rate was 76.9% for FC, 73.3% for FS, 46.1% for Ca(OH)(2), and 66.6% for MTA. In conclusion, Ca(OH)(2) is less appropriate for primary teeth pulpotomies than the other pulpotomy agents. FC and FS appeared to be superior to the other agents. However, there was no statistically significant difference between the groups.", "title": "" }, { "docid": "19b8acf4e5c68842a02e3250c346d09b", "text": "A dual-band dual-polarized microstrip antenna array for an advanced multi-function radio function concept (AMRFC) radar application operating at S and X-bands is proposed. Two stacked planar arrays with three different thin substrates (RT/Duroid 5880 substrates with εr=2.2 and three different thicknesses of 0.253 mm, 0.508 mm and 0.762 mm) are integrated to provide simultaneous operation at S band (3~3.3 GHz) and X band (9~11 GHz). To allow similar scan ranges for both bands, the S-band elements are selected as perforated patches to enable the placement of the X-band elements within them. Square patches are used as the radiating elements for the X-band. Good agreement exists between the simulated and the measured results. The measured impedance bandwidth (VSWR≤2) of the prototype array reaches 9.5% and 25% for the S- and X-bands, respectively. The measured isolation between the two orthogonal polarizations for both bands is better than 15 dB. The measured cross-polarization level is ≤ −21 dB for the S-band and ≤ −20 dB for the X-band.", "title": "" }, { "docid": "fe903498e0c3345d7e5ebc8bf3407c2f", "text": "This paper describes a general continuous-time framework for visual-inertial simultaneous localization and mapping and calibration. We show how to use a spline parameterization that closely matches the torque-minimal motion of the sensor. Compared to traditional discrete-time solutions, the continuous-time formulation is particularly useful for solving problems with high-frame-rate sensors and multiple unsynchronized devices. We demonstrate the applicability of the method for multi-sensor visual-inertial SLAM and calibration by accurately establishing the relative pose and internal parameters of multiple unsynchronized devices. We also show the advantages of the approach through evaluation and uniform treatment of both global and rolling shutter cameras within visual and visual-inertial SLAM systems.", "title": "" }, { "docid": "07a6de40826f4c5bab4a8b8c51aba080", "text": "Prior studies on alternative work schedules have focused primarily on the main effects of compressed work weeks and shift work on individual outcomes. This study explores the combined effects of alternative and preferred work schedules on nurses' satisfaction with their work schedules, perceived patient care quality, and interferences with their personal lives.", "title": "" }, { "docid": "62ff5888ad0c8065097603da8ff79cd6", "text": "Modern Internet systems often combine different applications (e.g., DNS, web, and database), span different administrative domains, and function in the context of network mechanisms like tunnels, VPNs, NATs, and overlays. Diagnosing these complex systems is a daunting challenge.
Although many diagnostic tools exist, they are typically designed for a specific layer (e.g., traceroute) or application, and there is currently no tool for reconstructing a comprehensive view of service behavior. In this paper we propose X-Trace, a tracing framework that provides such a comprehensive view for systems that adopt it. We have implemented X-Trace in several protocols and software systems, and we discuss how it works in three deployed scenarios: DNS resolution, a three-tiered photo-hosting website, and a service accessed through an overlay network.", "title": "" }, { "docid": "3910a3317ea9ff4ea6c621e562b1accc", "text": "Compaction of agricultural soils is a concern for many agricultural soil scientists and farmers since soil compaction, due to heavy field traffic, has resulted in yield reduction of most agronomic crops throughout the world. Soil compaction is a physical form of soil degradation that alters soil structure, limits water and air infiltration, and reduces root penetration in the soil. Consequences of soil compaction are still underestimated. A complete understanding of processes involved in soil compaction is necessary to meet the future global challenge of food security. We review here the advances in understanding, quantification, and prediction of the effects of soil compaction. We found the following major points: (1) When a soil is exposed to a vehicular traffic load, soil water contents, soil texture and structure, and soil organic matter are the three main factors which determine the degree of compactness in that soil. (2) Soil compaction has direct effects on soil physical properties such as bulk density, strength, and porosity; therefore, these parameters can be used to quantify the soil compactness. (3) Modified soil physical properties due to soil compaction can alter elements mobility and change nitrogen and carbon cycles in favour of more emissions of greenhouse gases under wet conditions. (4) Severe soil compaction induces root deformation, stunted shoot growth, late germination, low germination rate, and high mortality rate. (5) Soil compaction decreases soil biodiversity by decreasing microbial biomass, enzymatic activity, soil fauna, and ground flora. (6) Boussinesq equations and finite element method models, that predict the effects of the soil compaction, are restricted to elastic domain and do not consider existence of preferential paths of stress propagation and localization of deformation in compacted soils. (7) Recent advances in physics of granular media and soil mechanics relevant to soil compaction should be used to progress in modelling soil compaction.", "title": "" }, { "docid": "263c04402cfe80649b1d3f4a8578e99b", "text": "This paper presents M3Express (Modular-Mobile-Multirobot), a new design for a low-cost modular robot. The robot is self-mobile, with three independently driven wheels that also serve as connectors. The new connectors can be automatically operated, and are based on stationary magnets coupled to mechanically actuated ferromagnetic yoke pieces. Extensive use is made of plastic castings, laser cut plastic sheets, and low-cost motors and electronic components. Modules interface with a host PC via Bluetooth® radio. An off-board camera, along with a set of modules and a control PC form a convenient, low-cost system for rapidly developing and testing control algorithms for modular reconfigurable robots. 
Experimental results demonstrate mechanical docking, connector strength, and accuracy of dead reckoning locomotion.", "title": "" }, { "docid": "06755f8680ee8b43e0b3d512b4435de4", "text": "Stacked autoencoders (SAEs), as part of the deep learning (DL) framework, have been recently proposed for feature extraction in hyperspectral remote sensing. With the help of hidden nodes in deep layers, a high-level abstraction is achieved for data reduction whilst maintaining the key information of the data. As hidden nodes in SAEs have to deal simultaneously with hundreds of features from hypercubes as inputs, this increases the complexity of the process and leads to limited abstraction and performance. As such, segmented SAE (S-SAE) is proposed by confronting the original features into smaller data segments, which are separately processed by different smaller SAEs. This has resulted in reduced complexity but improved efficacy of data abstraction and accuracy of data classification.", "title": "" }, { "docid": "cc9f566eb8ef891d76c1c4eee7e22d47", "text": "In this study, a hybrid artificial intelligent (AI) system integrating neural network and expert system is proposed to support foreign exchange (forex) trading decisions. In this system, a neural network is used to predict the forex price in terms of quantitative data, while an expert system is used to handle qualitative factor and to provide forex trading decision suggestions for traders incorporating experts' knowledge and the neural network's results. The effectiveness of the proposed hybrid AI system is illustrated by simulation experiments", "title": "" }, { "docid": "3b5340113d583b138834119614046151", "text": "This paper presents the recent advancements in the control of multiple-degree-of-freedom hydraulic robotic manipulators. A literature review is performed on their control, covering both free-space and constrained motions of serial and parallel manipulators. Stability-guaranteed control system design is the primary requirement for all control systems. Thus, this paper pays special attention to such systems. An objective evaluation of the effectiveness of different methods and the state of the art in a given field is one of the cornerstones of scientific research and progress. For this purpose, the maximum position tracking error <inline-formula><tex-math notation=\"LaTeX\">$|e|_{\\rm max}$</tex-math></inline-formula> and a performance indicator <inline-formula><tex-math notation=\"LaTeX\">$\\rho$ </tex-math></inline-formula> (the ratio of <inline-formula><tex-math notation=\"LaTeX\">$|e|_{\\rm max}$</tex-math> </inline-formula> with respect to the maximum velocity) are used to evaluate and benchmark different free-space control methods in the literature. These indicators showed that stability-guaranteed nonlinear model based control designs have resulted in the most advanced control performance. In addition to stable closed-loop control, lack of energy efficiency is another significant challenge in hydraulic robotic systems. This paper pays special attention to these challenges in hydraulic robotic systems and discusses their reciprocal contradiction. Potential solutions to improve the system energy efficiency without control performance deterioration are discussed. 
Finally, for hydraulic robotic systems, open problems are defined and future trends are projected.", "title": "" }, { "docid": "3ea021309fd2e729ffced7657e3a6038", "text": "Physiological and pharmacological research undertaken on sloths during the past 30 years is comprehensively reviewed. This includes the numerous studies carried out upon the respiratory and cardiovascular systems, anesthesia, blood chemistry, neuromuscular responses, the brain and spinal cord, vision, sleeping and waking, water balance and kidney function and reproduction. Similarities and differences between the physiology of sloths and that of other mammals are discussed in detail.", "title": "" }, { "docid": "637e73416c1a6412eeeae63e1c73c2c3", "text": "Disgust, an emotion related to avoiding harmful substances, has been linked to moral judgments in many behavioral studies. However, the fact that participants report feelings of disgust when thinking about feces and a heinous crime does not necessarily indicate that the same mechanisms mediate these reactions. Humans might instead have separate neural and physiological systems guiding aversive behaviors and judgments across different domains. The present interdisciplinary study used functional magnetic resonance imaging (n = 50) and behavioral assessment to investigate the biological homology of pathogen-related and moral disgust. We provide evidence that pathogen-related and sociomoral acts entrain many common as well as unique brain networks. We also investigated whether morality itself is composed of distinct neural and behavioral subdomains. We provide evidence that, despite their tendency to elicit similar ratings of moral wrongness, incestuous and nonsexual immoral acts entrain dramatically separate, while still overlapping, brain networks. These results (i) provide support for the view that the biological response of disgust is intimately tied to immorality, (ii) demonstrate that there are at least three separate domains of disgust, and (iii) suggest strongly that morality, like disgust, is not a unified psychological or neurological phenomenon.", "title": "" } ]
scidocsrr
6e3a1a74ece7e0c49866c42f870f1d8d
Data Integration: The Current Status and the Way Forward
[ { "docid": "d95cd76008dd65d5d7f00c82bad013d3", "text": "Though data analysis tools continue to improve, analysts still expend an inordinate amount of time and effort manipulating data and assessing data quality issues. Such \"data wrangling\" regularly involves reformatting data values or layout, correcting erroneous or missing values, and integrating multiple data sources. These transforms are often difficult to specify and difficult to reuse across analysis tasks, teams, and tools. In response, we introduce Wrangler, an interactive system for creating data transformations. Wrangler combines direct manipulation of visualized data with automatic inference of relevant transforms, enabling analysts to iteratively explore the space of applicable operations and preview their effects. Wrangler leverages semantic data types (e.g., geographic locations, dates, classification codes) to aid validation and type conversion. Interactive histories support review, refinement, and annotation of transformation scripts. User study results show that Wrangler significantly reduces specification time and promotes the use of robust, auditable transforms instead of manual editing.", "title": "" }, { "docid": "c6abeae6e9287f04b472595a47e974ad", "text": "Data curation is the act of discovering a data source(s) of interest, cleaning and transforming the new data, semantically integrating it with other local data sources, and deduplicating the resulting composite. There has been much research on the various components of curation (especially data integration and deduplication). However, there has been little work on collecting all of the curation components into an integrated end-to-end system. In addition, most of the previous work will not scale to the sizes of problems that we are finding in the field. For example, one web aggregator requires the curation of 80,000 URLs and a second biotech company has the problem of curating 8000 spreadsheets. At this scale, data curation cannot be a manual (human) effort, but must entail machine learning approaches with a human assist only when necessary. This paper describes Data Tamer, an end-to-end curation system we have built at M.I.T. Brandeis, and Qatar Computing Research Institute (QCRI). It expects as input a sequence of data sources to add to a composite being constructed over time. A new source is subjected to machine learning algorithms to perform attribute identification, grouping of attributes into tables, transformation of incoming data and deduplication. When necessary, a human can be asked for guidance. Also, Data Tamer includes a data visualization component so a human can examine a data source at will and specify manual transformations. We have run Data Tamer on three real world enterprise curation problems, and it has been shown to lower curation cost by about 90%, relative to the currently deployed production software. This article is published under a Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0/), which permits distribution and reproduction in any medium as well allowing derivative works, provided that you attribute the original work to the author(s) and CIDR 2013. 6th Biennial Conference on Innovative Data Systems Research (CIDR ’13) January 6-9, 2013, Asilomar, California, USA.", "title": "" } ]
[ { "docid": "0f3cad05c9c267f11c4cebd634a12c59", "text": "The recent, exponential rise in adoption of the most disparate Internet of Things (IoT) devices and technologies has reached also Agriculture and Food (Agri-Food) supply chains, drumming up substantial research and innovation interest towards developing reliable, auditable and transparent traceability systems. Current IoT-based traceability and provenance systems for Agri-Food supply chains are built on top of centralized infrastructures and this leaves room for unsolved issues and major concerns, including data integrity, tampering and single points of failure. Blockchains, the distributed ledger technology underpinning cryptocurrencies such as Bitcoin, represent a new and innovative technological approach to realizing decentralized trustless systems. Indeed, the inherent properties of this digital technology provide fault-tolerance, immutability, transparency and full traceability of the stored transaction records, as well as coherent digital representations of physical assets and autonomous transaction executions. This paper presents AgriBlockIoT, a fully decentralized, blockchain-based traceability solution for Agri-Food supply chain management, able to seamless integrate IoT devices producing and consuming digital data along the chain. To effectively assess AgriBlockIoT, first, we defined a classical use-case within the given vertical domain, namely from-farm-to-fork. Then, we developed and deployed such use-case, achieving traceability using two different blockchain implementations, namely Ethereum and Hyperledger Sawtooth. Finally, we evaluated and compared the performance of both the deployments, in terms of latency, CPU, and network usage, also highlighting their main pros and cons.", "title": "" }, { "docid": "6858c559b78c6f2b5000c22e2fef892b", "text": "Graph clustering is one of the key techniques for understanding the structures present in graphs. Besides cluster detection, identifying hubs and outliers is also a key task, since they have important roles to play in graph data mining. The structural clustering algorithm SCAN, proposed by Xu et al., is successfully used in many application because it not only detects densely connected nodes as clusters but also identifies sparsely connected nodes as hubs or outliers. However, it is difficult to apply SCAN to large-scale graphs due to its high time complexity. This is because it evaluates the density for all adjacent nodes included in the given graphs. In this paper, we propose a novel graph clustering algorithm named SCAN++. In order to reduce time complexity, we introduce new data structure of directly two-hop-away reachable node set (DTAR). DTAR is the set of two-hop-away nodes from a given node that are likely to be in the same cluster as the given node. SCAN++ employs two approaches for efficient clustering by using DTARs without sacrificing clustering quality. First, it reduces the number of the density evaluations by computing the density only for the adjacent nodes such as indicated by DTARs. Second, by sharing a part of the density evaluations for DTARs, it offers efficient density evaluations of adjacent nodes. As a result, SCAN++ detects exactly the same clusters, hubs, and outliers from large-scale graphs as SCAN with much shorter computation time. 
Extensive experiments on both real-world and synthetic graphs demonstrate the performance superiority of SCAN++ over existing approaches.", "title": "" }, { "docid": "86ededf9b452bbc51117f5a117247b51", "text": "An approach to high field control, particularly in the areas near the high voltage (HV) and ground terminals of an outdoor insulator, is proposed using a nonlinear grading material; Zinc Oxide (ZnO) microvaristors compounded with other polymeric materials to obtain the required properties and allow easy application. The electrical properties of the microvaristor compounds are characterised by a nonlinear field-dependent conductivity. This paper describes the principles of the proposed field-control solution and demonstrates the effectiveness of the proposed approach in controlling the electric field along insulator profiles. A case study is carried out for a typical 11 kV polymeric insulator design to highlight the merits of the grading approach. Analysis of electric potential and field distributions on the insulator surface is described under dry clean and uniformly contaminated surface conditions for both standard and microvaristor-graded insulators. The grading and optimisation principles to allow better performance are investigated to improve the performance of the insulator both under steady state operation and under surge conditions. Furthermore, the dissipated power and associated heat are derived to examine surface heating and losses in the grading regions and for the complete insulator. Preliminary tests on inhouse prototype insulators have confirmed better flashover performance of the proposed graded insulator with a 21 % increase in flashover voltage.", "title": "" }, { "docid": "831b153045d9afc8f92336b3ba8019c6", "text": "The progress in the field of electronics and technology as well as the processing of signals coupled with advance in the use of computer technology has given the opportunity to record and analyze the bio-electric signals from the human body in real time that requires dealing with many challenges according to the nature of the signal and its frequency. This could be up to 1 kHz, in addition to the need to transfer data from more than one channel at the same time. Moreover, another challenge is a high sensitivity and low noise measurements of the acquired bio-electric signals which may be tens of micro volts in amplitude. For these reasons, a low power wireless Electromyography (EMG) data transfer system is designed in order to meet these challenging demands. In this work, we are able to develop an EMG analogue signal processing hardware, along with computer based supporting software. In the development of the EMG analogue signal processing hardware, many important issues have been addressed. Some of these issues include noise and artifact problems, as well as the bias DC current. The computer based software enables the user to analyze the collected EMG data and plot them on graphs for visual decision making. The work accomplished in this study enables users to use the surface EMG device for recording EMG signals for various purposes in movement analysis in medical diagnosis, rehabilitation sports medicine and ergonomics. Results revealed that the proposed system transmit and receive the signal without any losing in the information of signals.", "title": "" }, { "docid": "835b7a2b3d9c457a962e6b432665c7ce", "text": "In this paper we investigate the feasibility of using synthetic data to augment face datasets. 
In particular, we propose a novel generative adversarial network (GAN) that can disentangle identity-related attributes from non-identity-related attributes. This is done by training an embedding network that maps discrete identity labels to an identity latent space that follows a simple prior distribution, and training a GAN conditioned on samples from that distribution. Our proposed GAN allows us to augment face datasets by generating both synthetic images of subjects in the training set and synthetic images of new subjects not in the training set. By using recent advances in GAN training, we show that the synthetic images generated by our model are photo-realistic, and that training with augmented datasets can indeed increase the accuracy of face recognition models as compared with models trained with real images alone.", "title": "" }, { "docid": "495be81dda82d3e4d90a34b6716acf39", "text": "Botnets such as Conficker and Torpig utilize high entropy domains for fluxing and evasion. Bots may query a large number of domains, some of which may fail. In this paper, we present techniques where the failed domain queries (NXDOMAIN) may be utilized for: (i) Speeding up the present detection strategies which rely only on successful DNS domains. (ii) Detecting Command and Control (C&C) server addresses through features such as temporal correlation and information entropy of both successful and failed domains. We apply our technique to a Tier-1 ISP dataset obtained from South Asia, and a campus DNS trace, and thus validate our methods by detecting Conficker botnet IPs and other anomalies with a false positive rate as low as 0.02%. Our technique can be applied at the edge of an autonomous system for real-time detection.", "title": "" }, { "docid": "6fdeeea1714d484c596468aea053848f", "text": "Standard slow start does not work well under large bandwidthdelay product (BDP) networks. We find two causes of this problem in existing three popular operating systems, Linux, FreeBSD and Windows XP. The first cause is that because of the exponential increase of cwnd during standard slow start, heavy packet losses occur. Recovering from heavy packet losses puts extremely high load on end systems which renders the end systems completely unresponsive for a long time, resulting in a long blackout period of no transmission. This problem commonly occurs with the three operating systems. The second cause is that some of proprietary protocol optimizations applied for slow start by these operating systems to relieve the system load happen to slow down the loss recovery followed by slow start. To remedy this problem, we propose a new slow start algorithm, called Hybrid Start (HyStart) that finds a “safe” exit point of slow start at which slow start can finish and safely move to congestion avoidance without causing any heavy packet losses. HyStart uses ACK trains and RTT delay samples to detect whether (1) the forward path is congested or (2) the current size of congestion window has reached the available capacity of the forward path. HyStart is a plug-in to the TCP sender and does not require any change in TCP receivers. We implemented HyStart for TCP-NewReno and TCP-SACK in Linux and compare its performance with five different slow start schemes with the TCP receivers of the three different operating systems in the Internet and also in the lab testbeds. Our results indicate that HyStart works consistently well under diverse network environments including asymmetric links and high and low BDP networks. 
Especially with different operating system receivers (Windows XP and FreeBSD), HyStart improves the start-up throughput of TCP more than 2 to 3 times.", "title": "" }, { "docid": "4e85039497c60f8241d598628790f543", "text": "Knowledge management (KM) is a dominant theme in the behavior of contemporary organizations. While KM has been extensively studied in developed economies, it is much less well understood in developing economies, notably those that are characterized by different social and cultural traditions to the mainstream of Western societies. This is notably the case in China. This chapter develops and tests a theoretical model that explains the impact of leadership style and interpersonal trust on the intention of information and knowledge workers in China to share their knowledge with their peers. All the hypotheses are supported, showing that both initiating structure and consideration have a significant effect on employees’ intention to share knowledge through trust building: 28.2% of the variance in employees’ intention to share knowledge is explained. The authors discuss the theoretical contributions of the chapter, identify future research opportunities, and highlight the implications for practicing managers. DOI: 10.4018/978-1-60566-920-5.ch009", "title": "" }, { "docid": "da45568bf2ec4bfe32f927eb54e78816", "text": "We explore controller input mappings for games using a deformable prototype that combines deformation gestures with standard button input. In study one, we tested discrete gestures using three simple games. We categorized the control schemes as binary (button only), action, and navigation, the latter two named based on the game mechanics mapped to the gestures. We found that the binary scheme performed the best, but gesture-based control schemes are stimulating and appealing. Results also suggest that the deformation gestures are best mapped to simple and natural tasks. In study two, we tested continuous gestures in a 3D racing game using the same control scheme categorization. Results were mostly consistent with study one but showed an improvement in performance and preference for the action control scheme.", "title": "" }, { "docid": "0df2ca944dcdf79369ef5a7424bf3ffe", "text": "This article first presents two theories representing distinct approaches to the field of stress research: Selye's theory of `systemic stress' based in physiology and psychobiology, and the `psychological stress' model developed by Lazarus. In the second part, the concept of coping is described. Coping theories may be classified according to two independent parameters: traitoriented versus state-oriented, and microanalytic versus macroanalytic approaches. The multitude of theoretical conceptions is based on the macroanalytic, trait-oriented approach. Examples of this approach that are presented in this article are `repression–sensitization,' `monitoringblunting,' and the `model of coping modes.' The article closes with a brief outline of future perspectives in stress and coping research.", "title": "" }, { "docid": "375766c4ae473312c73e0487ab57acc8", "text": "There are three reasons why the asymmetric crooked nose is one of the greatest challenges in rhinoplasty surgery. First, the complexity of the problem is not appreciated by the patient nor understood by the surgeon. Patients often see the obvious deviation of the nose, but not the distinct differences between the right and left sides. 
Surgeons fail to understand and to emphasize to the patient that each component of the nose is asymmetric. Second, these deformities can be improved, but rarely made flawless. For this reason, patients are told that the result will be all \"-er words,\" better, straighter, cuter, but no \"t-words,\" there is no perfect nor straight. Most surgeons fail to realize that these cases represent asymmetric noses on asymmetric faces with the variable of ipsilateral and contralateral deviations. Third, these cases demand a wide range of sophisticated surgical techniques, some of which have a minimal margin of error. This article offers an in-depth look at analysis, preoperative planning, and surgical techniques available for dealing with the asymmetric crooked nose.", "title": "" }, { "docid": "5e6175d56150485d559d0c1a963e12b8", "text": "High-resolution depth map can be inferred from a lowresolution one with the guidance of an additional highresolution texture map of the same scene. Recently, deep neural networks with large receptive fields are shown to benefit applications such as image completion. Our insight is that super resolution is similar to image completion, where only parts of the depth values are precisely known. In this paper, we present a joint convolutional neural pyramid model with large receptive fields for joint depth map super-resolution. Our model consists of three sub-networks, two convolutional neural pyramids concatenated by a normal convolutional neural network. The convolutional neural pyramids extract information from large receptive fields of the depth map and guidance map, while the convolutional neural network effectively transfers useful structures of the guidance image to the depth image. Experimental results show that our model outperforms existing state-of-the-art algorithms not only on data pairs of RGB/depth images, but also on other data pairs like color/saliency and color-scribbles/colorized images.", "title": "" }, { "docid": "571a4de4ac93b26d55252dab86e2a0d3", "text": "Amnestic mild cognitive impairment (MCI) is a degenerative neurological disorder at the early stage of Alzheimer’s disease (AD). This work is a pilot study aimed at developing a simple scalp-EEG-based method for screening and monitoring MCI and AD. Specifically, the use of graphical analysis of inter-channel coherence of resting EEG for the detection of MCI and AD at early stages is explored. Resting EEG records from 48 age-matched subjects (mean age 75.7 years)—15 normal controls (NC), 16 with early-stage MCI, and 17 with early-stage AD—are examined. Network graphs are constructed using pairwise inter-channel coherence measures for delta–theta, alpha, beta, and gamma band frequencies. Network features are computed and used in a support vector machine model to discriminate among the three groups. Leave-one-out cross-validation discrimination accuracies of 93.6% for MCI vs. NC (p < 0.0003), 93.8% for AD vs. NC (p < 0.0003), and 97.0% for MCI vs. AD (p < 0.0003) are achieved. 
These results suggest the potential for graphical analysis of resting EEG inter-channel coherence as an efficacious method for noninvasive screening for MCI and early AD.", "title": "" }, { "docid": "97b212bb8fde4859e368941a4e84ba90", "text": "What appears to be a simple pattern of results—distributed-study opportunities usually produce better memory than massed-study opportunities—turns out to be quite complicated. Many ‘‘impostor’’ effects such as rehearsal borrowing, strategy changes during study, recency effects, and item skipping complicate the interpretation of spacing experiments. We suggest some best practices for future experiments that diverge from the typical spacing experiments in the literature. Next, we outline the major theories that have been advanced to account for spacing studies while highlighting the critical experimental evidence that a theory of spacing must explain. We then propose a tentative verbal theory based on the SAM/REM model that utilizes contextual variability and study-phase retrieval to explain the major findings, as well as predict some novel results. Next, we outline the major phenomena supporting testing as superior to restudy on long-term retention tests, and review theories of the testing phenomenon, along with some possible boundary conditions. Finally, we suggest some ways that spacing and testing can be integrated into the classroom, and ask to what extent educators already capitalize on these phenomena. Along the way, we present several new experiments that shed light on various facets of the spacing and testing effects.", "title": "" }, { "docid": "af0df66f001ffd9601ac3c89edf6af0f", "text": "State-of-the-art speech recognition systems rely on fixed, handcrafted features such as mel-filterbanks to preprocess the waveform before the training pipeline. In this paper, we study end-to-end systems trained directly from the raw waveform, building on two alternatives for trainable replacements of mel-filterbanks that use a convolutional architecture. The first one is inspired by gammatone filterbanks (Hoshen et al., 2015; Sainath et al., 2015), and the second one by the scattering transform (Zeghidour et al., 2017). We propose two modifications to these architectures and systematically compare them to mel-filterbanks, on the Wall Street Journal dataset. The first modification is the addition of an instance normalization layer, which greatly improves on the gammatone-based trainable filterbanks and speeds up the training of the scattering-based filterbanks. The second one relates to the low-pass filter used in these approaches. These modifications consistently improve performances for both approaches, and remove the need for a careful initialization in scattering-based trainable filterbanks. In particular, we show a consistent improvement in word error rate of the trainable filterbanks relatively to comparable mel-filterbanks. It is the first time end-to-end models trained from the raw signal significantly outperform mel-filterbanks on a large vocabulary task under clean recording conditions.", "title": "" }, { "docid": "a2f4005c681554cc422b11a6f5087793", "text": "Emerged as salient in the recent home appliance consumer market is a new generation of home cleaning robot featuring the capability of Simultaneous Localization and Mapping (SLAM). SLAM allows a cleaning robot not only to self-optimize its work paths for efficiency but also to self-recover from kidnappings for user convenience. 
By kidnapping, we mean that a robot is displaced, in the middle of cleaning, without its SLAM aware of where it moves to. This paper presents a vision-based kidnap recovery with SLAM for home cleaning robots, the first of its kind, using a wheel drop switch and an upward-looking camera for low-cost applications. In particular, a camera with a wide-angle lens is adopted for a kidnapped robot to be able to recover its pose on a global map with only a single image. First, the kidnapping situation is effectively detected based on a wheel drop switch. Then, for an efficient kidnap recovery, a coarse-to-fine approach to matching the image features detected with those associated with a large number of robot poses or nodes, built as a map in graph representation, is adopted. The pose ambiguity, e.g., due to symmetry is taken care of, if any. The final robot pose is obtained with high accuracy from the fine level of the coarse-to-fine hierarchy by fusing poses estimated from a chosen set of matching nodes. The proposed method was implemented as an embedded system with an ARM11 processor on a real commercial home cleaning robot and tested extensively. Experimental results show that the proposed method works well even in the situation in which the cleaning robot is suddenly kidnapped during the map building process.", "title": "" }, { "docid": "b5b7bef8ec2d38bb2821dc380a3a49bf", "text": "Maternal uniparental disomy (UPD) 7 is found in approximately 5% of patients with Silver-Russell syndrome. By a descriptive and comparative clinical analysis of all published cases (more than 60 to date) their phenotype is updated and compared with the clinical findings in patients with Silver-Russell syndrome (SRS) of either unexplained etiology or epimutations of the imprinting center region 1 (ICR1) on 11p15. The higher frequency of relative macrocephaly and high forehead/frontal bossing makes the face of patients with epimutations of the ICR1 on 11p15 more distinctive than the face of cases with SRS of unexplained etiology or maternal UPD 7. Because of the distinct micrognathia in the latter, their triangular facial gestalt is more pronounced than in the other groups. However, solely by clinical findings patients with maternal UPD 7 cannot be discriminated unambiguously from patients with epimutations of the ICR1 on 11p15 or SRS of unexplained etiology. Therefore, both loss of methylation of the ICR1 on 11p15 and maternal UPD 7 should be investigated for if SRS is suspected.", "title": "" }, { "docid": "cf8cdd70dde3f55ed097972be1d2fde7", "text": "BACKGROUND\nText-based patient medical records are a vital resource in medical research. In order to preserve patient confidentiality, however, the U.S. Health Insurance Portability and Accountability Act (HIPAA) requires that protected health information (PHI) be removed from medical records before they can be disseminated. 
Manual de-identification of large medical record databases is prohibitively expensive, time-consuming and prone to error, necessitating automatic methods for large-scale, automated de-identification.\n\n\nMETHODS\nWe describe an automated Perl-based de-identification software package that is generally usable on most free-text medical records, e.g., nursing notes, discharge summaries, X-ray reports, etc. The software uses lexical look-up tables, regular expressions, and simple heuristics to locate both HIPAA PHI, and an extended PHI set that includes doctors' names and years of dates. To develop the de-identification approach, we assembled a gold standard corpus of re-identified nursing notes with real PHI replaced by realistic surrogate information. This corpus consists of 2,434 nursing notes containing 334,000 words and a total of 1,779 instances of PHI taken from 163 randomly selected patient records. This gold standard corpus was used to refine the algorithm and measure its sensitivity. To test the algorithm on data not used in its development, we constructed a second test corpus of 1,836 nursing notes containing 296,400 words. The algorithm's false negative rate was evaluated using this test corpus.\n\n\nRESULTS\nPerformance evaluation of the de-identification software on the development corpus yielded an overall recall of 0.967, precision value of 0.749, and fallout value of approximately 0.002. On the test corpus, a total of 90 instances of false negatives were found, or 27 per 100,000 word count, with an estimated recall of 0.943. Only one full date and one age over 89 were missed. No patient names were missed in either corpus.\n\n\nCONCLUSION\nWe have developed a pattern-matching de-identification system based on dictionary look-ups, regular expressions, and heuristics. Evaluation based on two different sets of nursing notes collected from a U.S. hospital suggests that, in terms of recall, the software out-performs a single human de-identifier (0.81) and performs at least as well as a consensus of two human de-identifiers (0.94). The system is currently tuned to de-identify PHI in nursing notes and discharge summaries but is sufficiently generalized and can be customized to handle text files of any format. Although the accuracy of the algorithm is high, it is probably insufficient to be used to publicly disseminate medical data. The open-source de-identification software and the gold standard re-identified corpus of medical records have therefore been made available to researchers via the PhysioNet website to encourage improvements in the algorithm.", "title": "" }, { "docid": "1b647a09085a41e66f8c1e3031793fed", "text": "In this paper we apply distributional semantic information to document-level machine translation. We train monolingual and bilingual word vector models on large corpora and we evaluate them first in a cross-lingual lexical substitution task and then on the final translation task. For translation, we incorporate the semantic information in a statistical document-level decoder (Docent), by enforcing translation choices that are semantically similar to the context. As expected, the bilingual word vector models are more appropriate for the purpose of translation. The final document-level translator incorporating the semantic model outperforms the basic Docent (without semantics) and also performs slightly over a standard sentencelevel SMT system in terms of ULC (the average of a set of standard automatic evaluation metrics for MT). 
Finally, we also present some manual analysis of the translations of some concrete documents.", "title": "" }, { "docid": "7f2403a849690fb12a184ec67b0a2872", "text": "Deep reinforcement learning achieves superhuman performance in a range of video game environments, but requires that a designer manually specify a reward function. It is often easier to provide demonstrations of a target behavior than to design a reward function describing that behavior. Inverse reinforcement learning (IRL) algorithms can infer a reward from demonstrations in low-dimensional continuous control environments, but there has been little work on applying IRL to high-dimensional video games. In our CNN-AIRL baseline, we modify the state-of-the-art adversarial IRL (AIRL) algorithm to use CNNs for the generator and discriminator. To stabilize training, we normalize the reward and increase the size of the discriminator training dataset. We additionally learn a low-dimensional state representation using a novel autoencoder architecture tuned for video game environments. This embedding is used as input to the reward network, improving the sample efficiency of expert demonstrations. Our method achieves high-level performance on the simple Catcher video game, substantially outperforming the CNN-AIRL baseline. We also score points on the Enduro Atari racing game, but do not match expert performance, highlighting the need for further work.", "title": "" } ]
scidocsrr
9901f05894b9deb977fd2f8ab00096ad
Analysis of the antecedents of knowledge sharing and its implication for SMEs internationalization
[ { "docid": "d5464818af641aae509549f586c5526d", "text": "The learning and knowledge that we have, is, at the most, but little compared with that of which we are ignorant. Plato Knowledge management (KM) is a vital and complex topic of current interest to so many in business, government and the community in general, that there is an urgent need to expand the role of empirical research to inform knowledge management practice. However, one of the most striking aspects of knowledge management is the diversity of the field and the lack of universally accepted definitions of the term itself and its derivatives, knowledge and management. As a consequence of the multidisciplinary nature of KM, the terms inevitably hold a difference in meaning and emphasis for different people. The initial chapter of this book addresses the challenges brought about by these differences. This chapter begins with a critical assessment of some diverse frameworks for knowledge management that have been appearing in the international academic literature of many disciplines for some time. Then follows a description of ways that these have led to some holistic and integrated frameworks currently being developed by KM researchers in Australia.", "title": "" }, { "docid": "5e04372f08336da5b8ab4d41d69d3533", "text": "Purpose – This research aims at investigating the role of certain factors in organizational culture in the success of knowledge sharing. Such factors as interpersonal trust, communication between staff, information systems, rewards and organization structure play an important role in defining the relationships between staff and in turn, providing possibilities to break obstacles to knowledge sharing. This research is intended to contribute in helping businesses understand the essential role of organizational culture in nourishing knowledge and spreading it in order to become leaders in utilizing their know-how and enjoying prosperity thereafter. Design/methodology/approach – The conclusions of this study are based on interpreting the results of a survey and a number of interviews with staff from various organizations in Bahrain from the public and private sectors. Findings – The research findings indicate that trust, communication, information systems, rewards and organization structure are positively related to knowledge sharing in organizations. Research limitations/implications – The authors believe that further research is required to address governmental sector institutions, where organizational politics dominate a role in hoarding knowledge, through such methods as case studies and observation. Originality/value – Previous research indicated that the Bahraini society is influenced by traditions of household, tribe, and especially religion of the Arab and Islamic world. These factors define people’s beliefs and behaviours, and thus exercise strong influence in the performance of business organizations. This study is motivated by the desire to explore the role of the national organizational culture on knowledge sharing, which may be different from previous studies conducted abroad.", "title": "" } ]
[ { "docid": "72e1c5690f20c47a63ebbb1dd3fc7f2c", "text": "In edge-cloud computing, a set of edge servers are deployed near the mobile devices such that these devices can offload jobs to the servers with low latency. One fundamental and critical problem in edge-cloud systems is how to dispatch and schedule the jobs so that the job response time (defined as the interval between the release of a job and the arrival of the computation result at its device) is minimized. In this paper, we propose a general model for this problem, where the jobs are generated in arbitrary order and times at the mobile devices and offloaded to servers with both upload and download delays. Our goal is to minimize the total weighted response time over all the jobs. The weight is set based on how latency sensitive the job is. We derive the first online job dispatching and scheduling algorithm in edge-clouds, called OnDisc, which is scalable in the speed augmentation model; that is, OnDisc is (1 + ε)-speed O(1/ε)-competitive for any constant ε ∊ (0,1). Moreover, OnDisc can be easily implemented in distributed systems. Extensive simulations on a real-world data-trace from Google show that OnDisc can reduce the total weighted response time dramatically compared with heuristic algorithms.", "title": "" }, { "docid": "affc663476dc4d5299de5f89f67e5f5a", "text": "Many machine learning algorithms, such as K Nearest Neighbor (KNN), heavily rely on the distance metric for the input data patterns. Distance Metric learning is to learn a distance metric for the input space of data from a given collection of pair of similar/dissimilar points that preserves the distance relation among the training data. In recent years, many studies have demonstrated, both empirically and theoretically, that a learned metric can significantly improve the performance in classification, clustering and retrieval tasks. This paper surveys the field of distance metric learning from a principle perspective, and includes a broad selection of recent work. In particular, distance metric learning is reviewed under different learning conditions: supervised learning versus unsupervised learning, learning in a global sense versus in a local sense; and the distance matrix based on linear kernel versus nonlinear kernel. In addition, this paper discusses a number of techniques that is central to distance metric learning, including convex programming, positive semi-definite programming, kernel learning, dimension reduction, K Nearest Neighbor, large margin classification, and graph-based approaches.", "title": "" }, { "docid": "20a90ed3aa2b428b19e85aceddadce90", "text": "Deep learning has been a groundbreaking technology in various fields as well as in communications systems. In spite of the notable advancements of deep neural network (DNN) based technologies in recent years, the high computational complexity has been a major obstacle to apply DNN in practical communications systems which require real-time operation. In this sense, challenges regarding practical implementation must be addressed before the proliferation of DNN-based intelligent communications becomes a reality. To the best of the authors’ knowledge, for the first time, this article presents an efficient learning architecture and design strategies including link level verification through digital circuit implementations using hardware description language (HDL) to mitigate this challenge and to deduce feasibility and potential of DNN for communications systems. 
In particular, DNN is applied for an encoder and a decoder to enable flexible adaptation with respect to the system environments without needing any domain specific information. Extensive investigations and interdisciplinary design considerations including the DNN-based autoencoder structure, learning framework, and low-complexity digital circuit implementations for real-time operation are taken into account by the authors which ascertains the use of DNN-based communications in practice.", "title": "" }, { "docid": "6e848928859248e0597124cee0560e43", "text": "The scaling of microchip technologies has enabled large scale systems-on-chip (SoC). Network-on-chip (NoC) research addresses global communication in SoC, involving (i) a move from computation-centric to communication-centric design and (ii) the implementation of scalable communication structures. This survey presents a perspective on existing NoC research. We define the following abstractions: system, network adapter, network, and link to explain and structure the fundamental concepts. First, research relating to the actual network design is reviewed. Then system level design and modeling are discussed. We also evaluate performance analysis techniques. The research shows that NoC constitutes a unification of current trends of intrachip communication rather than an explicit new alternative.", "title": "" }, { "docid": "be43b90cce9638b0af1c3143b6d65221", "text": "Reasoning on provenance information and property propagation is of significant importance in e-science since it helps scientists manage derived metadata in order to understand the source of an object, reproduce results of processes and facilitate quality control of results and processes. In this paper we introduce a simple, yet powerful reasoning mechanism based on property propagation along the transitive part-of and derivation chains, in order to trace the provenance of an object and to carry useful inferences. We apply our reasoning in semantic repositories using the CIDOC-CRM conceptual schema and its extension CRMdig, which has been develop for representing the digital and empirical provenance of digi-", "title": "" }, { "docid": "ea544ffc7eeee772388541d0d01812a7", "text": "Despite the fact that MRI has evolved to become the standard method for diagnosis and monitoring of patients with brain tumours, conventional MRI sequences have two key limitations: the inability to show the full extent of the tumour and the inability to differentiate neoplastic tissue from nonspecific, treatment-related changes after surgery, radiotherapy, chemotherapy or immunotherapy. In the past decade, PET involving the use of radiolabelled amino acids has developed into an important diagnostic tool to overcome some of the shortcomings of conventional MRI. The Response Assessment in Neuro-Oncology working group — an international effort to develop new standardized response criteria for clinical trials in brain tumours — has recommended the additional use of amino acid PET imaging for brain tumour management. Concurrently, a number of advanced MRI techniques such as magnetic resonance spectroscopic imaging and perfusion weighted imaging are under clinical evaluation to target the same diagnostic problems. 
This Review summarizes the clinical role of amino acid PET in relation to advanced MRI techniques for differential diagnosis of brain tumours; delineation of tumour extent for treatment planning and biopsy guidance; post-treatment differentiation between tumour progression or recurrence versus treatment-related changes; and monitoring response to therapy. An outlook for future developments in PET and MRI techniques is also presented.", "title": "" }, { "docid": "3ba65ec924fff2d246197bb2302fb86e", "text": "Guidelines for evaluating the levels of evidence based on quantitative research are well established. However, the same cannot be said for the evaluation of qualitative research. This article discusses a process members of an evidence-based clinical practice guideline development team with the Association of Women's Health, Obstetric and Neonatal Nurses used to create a scoring system to determine the strength of qualitative research evidence. A brief history of evidence-based clinical practice guideline development is provided, followed by discussion of the development of the Nursing Management of the Second Stage of Labor evidence-based clinical practice guideline. The development of the qualitative scoring system is explicated, and implications for nursing are proposed.", "title": "" }, { "docid": "46ff38a51f766cd5849a537cc0632660", "text": "BACKGROUND\nLinear IgA bullous dermatosis (LABD) is an acquired autoimmune sub-epidermal vesiculobullous disease characterized by continuous linear IgA deposit on the basement membrane zone, as visualized on direct immunofluorescence microscopy. LABD can affect both adults and children. The disease is very uncommon, with a still unknown incidence in the South American population.\n\n\nMATERIALS AND METHODS\nAll confirmed cases of LABD by histological and immunofluorescence in our hospital were studied.\n\n\nRESULTS\nThe confirmed cases were three females and two males, aged from 8 to 87 years. Precipitant events associated with LABD were drug consumption (non-steroid inflammatory agents in two cases) and ulcerative colitis (one case). Most of our patients were treated with dapsone, resulting in remission.\n\n\nDISCUSSION\nOur series confirms the heterogeneous clinical features of this uncommon disease in concordance with a larger series of patients reported in the literature.", "title": "" }, { "docid": "7970ec4bd6e17d70913d88e07a39f82d", "text": "This thesis deals with Chinese characters (Hanzi): their key characteristics and how they could be used as a kind of knowledge resource in the (Chinese) NLP. Part 1 deals with basic issues. In Chapter 1, the motivation and the reasons for reconsidering the writing system will be presented, and a short introduction to Chinese and its writing system will be given in Chapter 2. Part 2 provides a critical review of the current, ongoing debate about Chinese characters. Chapter 3 outlines some important linguistic insights from the vantage point of indigenous scriptological and Western linguistic traditions, as well as a new theoretical framework in contemporary studies of Chinese characters. The focus of Chapter 4 concerns the search for appropriate mathematical descriptions with regard to the systematic knowledge information hidden in characters. The subject matter of mathematical formalization of the shape structure of Chinese characters is depicted as well. Part 3 illustrates the representation issues. 
Chapter 5 addresses the design and construction of the HanziNet, an enriched conceptual network of Chinese characters. Topics that are covered in this chapter include the ideas, architecture, methods and ontology design. In Part 4, a case study based on the above mentioned ideas will be launched. Chapter 6 presents an experiment exploring the character-triggered semantic class of Chinese unknown words. Finally, Chapter 7 summarizes the major findings of this thesis. Next, it depicts some potential avenues in the future, and assesses the theoretical implications of these findings for computational linguistic theory.", "title": "" }, { "docid": "09085fc15308a96cd9441bb0e23e6c1a", "text": "Convolutional neural networks (CNNs) are able to model local stationary structures in natural images in a multi-scale fashion, when learning all model parameters with supervision. While excellent performance was achieved for image classification when large amounts of labeled visual data are available, their success for unsupervised tasks such as image retrieval has been moderate so far.Our paper focuses on this latter setting and explores several methods for learning patch descriptors without supervision with application to matching and instance-level retrieval. To that effect, we propose a new family of patch representations, based on the recently introduced convolutional kernel networks. We show that our descriptor, named Patch-CKN, performs better than SIFT as well as other convolutional networks learned by artificially introducing supervision and is significantly faster to train. To demonstrate its effectiveness, we perform an extensive evaluation on standard benchmarks for patch and image retrieval where we obtain state-of-the-art results. We also introduce a new dataset called RomePatches, which allows to simultaneously study descriptor performance for patch and image retrieval.", "title": "" }, { "docid": "a017ab9f310f9f36f88bf488ac833f05", "text": "Wireless data communication technology has eliminated wired connections for data transfer to portable devices. Wireless power technology offers the possibility of eliminating the remaining wired connection: the power cord. For ventricular assist devices (VADs), wireless power technology will eliminate the complications and infections caused by the percutaneous wired power connection. Integrating wireless power technology into VADs will enable VAD implants to become a more viable option for heart failure patients (of which there are 80 000 in the United States each year) than heart transplants. Previous transcutaneous energy transfer systems (TETS) have attempted to wirelessly power VADs ; however, TETS-based technologies are limited in range to a few millimeters, do not tolerate angular misalignment, and suffer from poor efficiency. The free-range resonant electrical delivery (FREE-D) wireless power system aims to use magnetically coupled resonators to efficiently transfer power across a distance to a VAD implanted in the human body, and to provide robustness to geometric changes. Multiple resonator configurations are implemented to improve the range and efficiency of wireless power transmission to both a commercially available axial pump and a VentrAssist centrifugal pump [3]. An adaptive frequency tuning method allows for maximum power transfer efficiency for nearly any angular orientation over a range of separation distances. 
Additionally, laboratory results show the continuous operation of both pumps using the FREE-D system with a wireless power transfer efficiency upwards of 90%.", "title": "" }, { "docid": "819f5df03cebf534a51eb133cd44cb0d", "text": "Although DBP (di-n-butyl phthalate) is commonly encountered as an artificially-synthesized plasticizer with potential to impair fertility, we confirm that it can also be biosynthesized as microbial secondary metabolites from naturally occurring filamentous fungi strains cultured either in an artificial medium or natural water. Using the excreted crude enzyme from the fungi for catalyzing a variety of substrates, we found that the fungal generation of DBP was largely through shikimic acid pathway, which was assembled by phthalic acid with butyl alcohol through esterification. The DBP production ability of the fungi was primarily influenced by fungal spore density and incubation temperature. This study indicates an important alternative natural waterborne source of DBP in addition to artificial synthesis, which implied fungal contribution must be highlighted for future source control and risk management of DBP.", "title": "" }, { "docid": "225b834e820b616e0ccfed7259499fd6", "text": "Introduction: Actinic cheilitis (AC) is a lesion potentially malignant that affects the lips after prolonged exposure to solar ultraviolet (UV) radiation. The present study aimed to assess and describe the proliferative cell activity, using silver-stained nucleolar organizer region (AgNOR) quantification proteins, and to investigate the potential associations between AgNORs and the clinical aspects of AC lesions. Materials and methods: Cases diagnosed with AC were selected and reviewed from Center of Histopathological Diagnosis of the Institute of Biological Sciences, Passo Fundo University, Brazil. Clinical data including clinical presentation of the patients affected with AC were collected. The AgNOR techniques were performed in all recovered cases. The different microscopic areas of interest were printed with magnification of *1000, and in each case, 200 epithelial cell nuclei were randomly selected. The mean quantity in each nucleus for NORs was recorded. One-way analysis of variance was used for statistical analysis. Results: A total of 22 cases of AC were diagnosed. The patients were aged between 46 and 75 years (mean age: 55 years). Most of the patients affected were males presenting asymptomatic white plaque lesions in the lower lip. The mean value quantified for AgNORs was 2.4 ± 0.63, ranging between 1.49 and 3.82. No statistically significant difference was observed associating the quantity of AgNORs with the clinical aspects collected from the patients (p > 0.05). Conclusion: The present study reports the lack of association between the proliferative cell activity and the clinical aspects observed in patients affected by AC through the quantification of AgNORs. Clinical significance: Knowing the potential relation between the clinical aspects of AC and the proliferative cell activity quantified by AgNORs could play a significant role toward the early diagnosis of malignant lesions in the clinical practice. Keywords: Actinic cheilitis, Proliferative cell activity, Silver-stained nucleolar organizer regions.", "title": "" }, { "docid": "be41d072e3897506fad111549e7bf862", "text": "Handing unbalanced data and noise are two important issues in the field of machine learning. 
This paper proposes a complete framework for the fuzzy relevance vector machine, obtained by weighting the error penalty terms in the Bayesian inference process of the relevance vector machine (RVM). Both of the above problems can be handled within this framework by choosing different kinds of fuzzy membership functions. Experiments on both synthetic and real-world data demonstrate that the fuzzy relevance vector machine (FRVM) is effective in dealing with unbalanced data and in reducing the effects of noise and outliers.", "title": "" },
- you are your blockchain address.", "title": "" }, { "docid": "762d6e9a8f0061e3a2f1b1c0eeba2802", "text": "A new prior is proposed for representation learning, which can be combined with other priors in order to help disentangling abstract factors from each other. It is inspired by the phenomenon of consciousness seen as the formation of a low-dimensional combination of a few concepts constituting a conscious thought, i.e., consciousness as awareness at a particular time instant. This provides a powerful constraint on the representation in that such low-dimensional thought vectors can correspond to statements about reality which are true, highly probable, or very useful for taking decisions. The fact that a few elements of the current state can be combined into such a predictive or useful statement is a strong constraint and deviates considerably from the maximum likelihood approaches to modelling data and how states unfold in the future based on an agent's actions. Instead of making predictions in the sensory (e.g. pixel) space, the consciousness prior allows the agent to make predictions in the abstract space, with only a few dimensions of that space being involved in each of these predictions. The consciousness prior also makes it natural to map conscious states to natural language utterances or to express classical AI knowledge in the form of facts and rules, although the conscious states may be richer than what can be expressed easily in the form of a sentence, a fact or a rule.", "title": "" }, { "docid": "57e2adea74edb5eaf5b2af00ab3c625e", "text": "Although scholars agree that moral emotions are critical for deterring unethical and antisocial behavior, there is disagreement about how 2 prototypical moral emotions--guilt and shame--should be defined, differentiated, and measured. We addressed these issues by developing a new assessment--the Guilt and Shame Proneness scale (GASP)--that measures individual differences in the propensity to experience guilt and shame across a range of personal transgressions. The GASP contains 2 guilt subscales that assess negative behavior-evaluations and repair action tendencies following private transgressions and 2 shame subscales that assess negative self-evaluations (NSEs) and withdrawal action tendencies following publically exposed transgressions. Both guilt subscales were highly correlated with one another and negatively correlated with unethical decision making. Although both shame subscales were associated with relatively poor psychological functioning (e.g., neuroticism, personal distress, low self-esteem), they were only weakly correlated with one another, and their relationships with unethical decision making diverged. Whereas shame-NSE constrained unethical decision making, shame-withdraw did not. Our findings suggest that differentiating the tendency to make NSEs following publically exposed transgressions from the tendency to hide or withdraw from public view is critically important for understanding and measuring dispositional shame proneness. The GASP's ability to distinguish these 2 classes of responses represents an important advantage of the scale over existing assessments. 
Although further validation research is required, the present studies are promising in that they suggest the GASP has the potential to be an important measurement tool for detecting individuals susceptible to corruption and unethical behavior.", "title": "" }, { "docid": "1d3b2a5906d7db650db042db9ececed1", "text": "Music consists of precisely patterned sequences of both movement and sound that engage the mind in a multitude of experiences. We move in response to music and we move in order to make music. Because of the intimate coupling between perception and action, music provides a panoramic window through which we can examine the neural organization of complex behaviors that are at the core of human nature. Although the cognitive neuroscience of music is still in its infancy, a considerable behavioral and neuroimaging literature has amassed that pertains to neural mechanisms that underlie musical experience. Here we review neuroimaging studies of explicit sequence learning and temporal production—findings that ultimately lay the groundwork for understanding how more complex musical sequences are represented and produced by the brain. These studies are also brought into an existing framework concerning the interaction of attention and time-keeping mechanisms in perceiving complex patterns of information that are distributed in time, such as those that occur in music.", "title": "" } ]
scidocsrr
42557afb223c11fb89eb19dc57f28634
AVID: Adversarial Visual Irregularity Detection
[ { "docid": "54d3d5707e50b979688f7f030770611d", "text": "In this article, we describe an automatic differentiation module of PyTorch — a library designed to enable rapid research on machine learning models. It builds upon a few projects, most notably Lua Torch, Chainer, and HIPS Autograd [4], and provides a high performance environment with easy access to automatic differentiation of models executed on different devices (CPU and GPU). To make prototyping easier, PyTorch does not follow the symbolic approach used in many other deep learning frameworks, but focuses on differentiation of purely imperative programs, with a focus on extensibility and low overhead. Note that this preprint is a draft of certain sections from an upcoming paper covering all PyTorch features.", "title": "" }, { "docid": "6470b7d1532012e938063d971f3ead29", "text": "As society continues to accumulate more and more data, demand for machine learning algorithms that can learn from data with limited human intervention only increases. Semi-supervised learning (SSL) methods, which extend supervised learning algorithms by enabling them to use unlabeled data, play an important role in addressing this challenge. In this thesis, a framework unifying the traditional assumptions and approaches to SSL is defined. A synthesis of SSL literature then places a range of contemporary approaches into this common framework. Our focus is on methods which use generative adversarial networks (GANs) to perform SSL. We analyse in detail one particular GAN-based SSL approach. This is shown to be closely related to two preceding approaches. Through synthetic experiments we provide an intuitive understanding and motivate the formulation of our focus approach. We then theoretically analyse potential alternative formulations of its loss function. This analysis motivates a number of research questions that centre on possible improvements to, and experiments to better understand the focus model. While we find support for our hypotheses, our conclusion more broadly is that the focus method is not especially robust.", "title": "" } ]
[ { "docid": "de016ffaace938c937722f8a47cc0275", "text": "Conventional traffic light detection methods often suffers from false positives in urban environment because of the complex backgrounds. To overcome such limitation, this paper proposes a method that combines a conventional approach, which is fast but weak to false positives, and a DNN, which is not suitable for detecting small objects but a very powerful classifier. Experiments on real data showed promising results.", "title": "" }, { "docid": "39ee9e4c7dad30d875d70e0a41a37034", "text": "The aim of the present study is to investigate the effect of daily injection of ginger Zingiber officinale extract on the physiological parameters, as well as the histological structure of the l iver of adult rats. Adult male rats were divided into four groups; (G1, G2, G3, and Control groups). The first group received 500 ml/kg b. wt/day of aqueous extract of Zingiber officinale i.p. for four weeks, G2 received 500 ml/kg b wt/day of aqueous extract of Zingiber officinale for three weeks and then received carbon tetrachloride CCl4 0.1ml/150 g b. wt. for one week, G3 received 500 ml/kg body weight/day of aqueous extract of ginger Zingiber officinale i .p. for three weeks and then received CCl4 for one week combined with ginger). The control group (C) received a 500 ml/kg B WT/day of saline water i.p. for four weeks. The results indicated a significant decrease in the total protein and increase in the albumin/globulin ratio in the third group compared with first and second group. Also, the results reported a significant decrease in the body weight in the third and the fourth groups compared with the first and the second groups. A significant decrease in the globulin levels in the third and the fourth groups were detected compared with the first and the second groups. The obtained results showed that treating rats with ginger improved the histopathological changes induced in the liver by CCl4. The study suggests that ginger extract can be used as antioxidant, free radical scavenging and protective action against carbon tetrachloride oxidative damage in the l iver.", "title": "" }, { "docid": "10f2726026dbe1deac859715f57b15b6", "text": "Monte-Carlo Tree Search, especially UCT and its POMDP version POMCP, have demonstrated excellent performance on many problems. However, to efficiently scale to large domains one should also exploit hierarchical structure if present. In such hierarchical domains, finding rewarded states typically requires to search deeply; covering enough such informative states very far from the root becomes computationally expensive in flat non-hierarchical search approaches. We propose novel, scalable MCTS methods which integrate a task hierarchy into the MCTS framework, specifically leading to hierarchical versions of both, UCT and POMCP. The new method does not need to estimate probabilistic models of each subtask, it instead computes subtask policies purely sample-based. 
We evaluate the hierarchical MCTS methods on various settings such as a hierarchical MDP, a Bayesian model-based hierarchical RL problem, and a large hierarchical POMDP.", "title": "" },
Promising future research directions in this area are identified.", "title": "" }, { "docid": "e4493c56867bfe62b7a96b33fb171fad", "text": "In the field of agricultural information, the automatic identification and diagnosis of maize leaf diseases is highly desired. To improve the identification accuracy of maize leaf diseases and reduce the number of network parameters, the improved GoogLeNet and Cifar10 models based on deep learning are proposed for leaf disease recognition in this paper. Two improved models that are used to train and test nine kinds of maize leaf images are obtained by adjusting the parameters, changing the pooling combinations, adding dropout operations and rectified linear unit functions, and reducing the number of classifiers. In addition, the number of parameters of the improved models is significantly smaller than that of the VGG and AlexNet structures. During the recognition of eight kinds of maize leaf diseases, the GoogLeNet model achieves a top - 1 average identification accuracy of 98.9%, and the Cifar10 model achieves an average accuracy of 98.8%. The improved methods are possibly improved the accuracy of maize leaf disease, and reduced the convergence iterations, which can effectively improve the model training and recognition efficiency.", "title": "" }, { "docid": "71ee8396220ce8f3d9c4c6aca650fa42", "text": "In order to increase our ability to use measurement to support software development practise we need to do more analysis of code. However, empirical studies of code are expensive and their results are difficult to compare. We describe the Qualitas Corpus, a large curated collection of open source Java systems. The corpus reduces the cost of performing large empirical studies of code and supports comparison of measurements of the same artifacts. We discuss its design, organisation, and issues associated with its development.", "title": "" }, { "docid": "23c2ea4422ec6057beb8fa0be12e57b3", "text": "This study applied logistic regression to model urban growth in the Atlanta Metropolitan Area of Georgia in a GIS environment and to discover the relationship between urban growth and the driving forces. Historical land use/cover data of Atlanta were extracted from the 1987 and 1997 Landsat TM images. Multi-resolution calibration of a series of logistic regression models was conducted from 50 m to 300 m at intervals of 25 m. A fractal analysis pointed to 225 m as the optimal resolution of modeling. The following two groups of factors were found to affect urban growth in different degrees as indicated by odd ratios: (1) population density, distances to nearest urban clusters, activity centers and roads, and high/low density urban uses (all with odds ratios < 1); and (2) distance to the CBD, number of urban cells within a 7 · 7 cell window, bare land, crop/grass land, forest, and UTM northing coordinate (all with odds ratios > 1). A map of urban growth probability was calculated and used to predict future urban patterns. Relative operating characteristic (ROC) value of 0.85 indicates that the probability map is valid. It was concluded that despite logistic regression’s lack of temporal dynamics, it was spatially explicit and suitable for multi-scale analysis, and most importantly, allowed much deeper understanding of the forces driving the growth and the formation of the urban spatial pattern. 2006 Elsevier Ltd. 
All rights reserved.", "title": "" }, { "docid": "3a9bba31f77f4026490d7a0faf4aeaa4", "text": "We explore several different document representation models and two query expansion models for the task of recommending blogs to a user in response to a query. Blog relevance ranking differs from traditional document ranking in ad-hoc information retrieval in several ways: (1) the unit of output (the blog) is composed of a collection of documents (the blog posts) rather than a single document, (2) the query represents an ongoing – and typically multifaceted – interest in the topic rather than a passing ad-hoc information need and (3) due to the propensity of spam, splogs, and tangential comments, the blogosphere is particularly challenging to use as a source for high-quality query expansion terms. We address these differences at the document representation level, by comparing retrieval models that view either the blog or its constituent posts as the atomic units of retrieval, and at the query expansion level, by making novel use of the links and anchor text in Wikipedia to expand a user’s initial query. We develop two complementary models of blog retrieval that perform at comparable levels of precision and recall. We also show consistent and significant improvement across all models using our Wikipedia expansion strategy.", "title": "" }, { "docid": "6974bf94292b51fc4efd699c28c90003", "text": "We just released an Open Source receiver that is able to decode IEEE 802.11a/g/p Orthogonal Frequency Division Multiplexing (OFDM) frames in software. This is the first Software Defined Radio (SDR) based OFDM receiver supporting channel bandwidths up to 20MHz that is not relying on additional FPGA code. Our receiver comprises all layers from the physical up to decoding the MAC packet and extracting the payload of IEEE 802.11a/g/p frames. In our demonstration, visitors can interact live with the receiver while it is decoding frames that are sent over the air. The impact of moving the antennas and changing the settings are displayed live in time and frequency domain. Furthermore, the decoded frames are fed to Wireshark where the WiFi traffic can be further investigated. It is possible to access and visualize the data in every decoding step from the raw samples, the autocorrelation used for frame detection, the subcarriers before and after equalization, up to the decoded MAC packets. The receiver is completely Open Source and represents one step towards experimental research with SDR.", "title": "" }, { "docid": "07db8f037ff720c8b8b242879c14531f", "text": "PURPOSE\nMatriptase-2 (also known as TMPRSS6) is a critical regulator of the iron-regulatory hormone hepcidin in the liver; matriptase-2 cleaves membrane-bound hemojuvelin and consequently alters bone morphogenetic protein (BMP) signaling. Hemojuvelin and hepcidin are expressed in the retina and play a critical role in retinal iron homeostasis. However, no information on the expression and function of matriptase-2 in the retina is available. The purpose of the present study was to examine the retinal expression of matriptase-2 and its role in retinal iron homeostasis.\n\n\nMETHODS\nRT-PCR, quantitative PCR (qPCR), and immunofluorescence were used to analyze the expression of matriptase-2 and other iron-regulatory proteins in the mouse retina. Polarized localization of matriptase-2 in the RPE was evaluated using markers for the apical and basolateral membranes. 
Morphometric analysis of retinas from wild-type and matriptase-2 knockout (Tmprss6(msk/msk) ) mice was also performed. Retinal iron status in Tmprss6(msk/msk) mice was evaluated by comparing the expression levels of ferritin and transferrin receptor 1 between wild-type and knockout mice. BMP signaling was monitored by the phosphorylation status of Smads1/5/8 and expression levels of Id1 while interleukin-6 signaling was monitored by the phosphorylation status of STAT3.\n\n\nRESULTS\nMatriptase-2 is expressed in the mouse retina with expression detectable in all retinal cell types. Expression of matriptase-2 is restricted to the apical membrane in the RPE where hemojuvelin, the substrate for matriptase-2, is also present. There is no marked difference in retinal morphology between wild-type mice and Tmprss6(msk/msk) mice, except minor differences in specific retinal layers. The knockout mouse retina is iron-deficient, demonstrable by downregulation of the iron-storage protein ferritin and upregulation of transferrin receptor 1 involved in iron uptake. Hepcidin is upregulated in Tmprss6(msk/msk) mouse retinas, particularly in the neural retina. BMP signaling is downregulated while interleukin-6 signaling is upregulated in Tmprss6(msk/msk) mouse retinas, suggesting that the upregulaton of hepcidin in knockout mouse retinas occurs through interleukin-6 signaling and not through BMP signaling.\n\n\nCONCLUSIONS\nThe iron-regulatory serine protease matriptase-2 is expressed in the retina, and absence of this enzyme leads to iron deficiency and increased expression of hemojuvelin and hepcidin in the retina. The upregulation of hepcidin expression in Tmprss6(msk/msk) mouse retinas does not occur via BMP signaling but likely via the proinflammatory cytokine interleukin-6. We conclude that matriptase-2 is a critical participant in retinal iron homeostasis.", "title": "" }, { "docid": "e9af5e2bfc36dd709ae6feefc4c38976", "text": "Due to object detection's close relationship with video analysis and image understanding, it has attracted much research attention in recent years. Traditional object detection methods are built on handcrafted features and shallow trainable architectures. Their performance easily stagnates by constructing complex ensembles that combine multiple low-level image features with high-level context from object detectors and scene classifiers. With the rapid development in deep learning, more powerful tools, which are able to learn semantic, high-level, deeper features, are introduced to address the problems existing in traditional architectures. These models behave differently in network architecture, training strategy, and optimization function. In this paper, we provide a review of deep learning-based object detection frameworks. Our review begins with a brief introduction on the history of deep learning and its representative tool, namely, the convolutional neural network. Then, we focus on typical generic object detection architectures along with some modifications and useful tricks to improve detection performance further. As distinct specific detection tasks exhibit different characteristics, we also briefly survey several specific tasks, including salient object detection, face detection, and pedestrian detection. Experimental analyses are also provided to compare various methods and draw some meaningful conclusions. 
Finally, several promising directions and tasks are provided to serve as guidelines for future work in both object detection and relevant neural network-based learning systems.", "title": "" }, { "docid": "f34b41a7f0dd902119197550b9bcf111", "text": "Tachyzoites, bradyzoites (in tissue cysts), and sporozoites (in oocysts) are the three infectious stages of Toxoplasma gondii. The prepatent period (time to shedding of oocysts after primary infection) varies with the stage of T. gondii ingested by the cat. The prepatent period (pp) after ingesting bradyzoites is short (3-10 days) while it is long (18 days or longer) after ingesting oocysts or tachyzoites, irrespective of the dose. The conversion of bradyzoites to tachyzoites and tachyzoites to bradyzoites is biologically important in the life cycle of T. gondii. In the present paper, the pp was used to study in vivo conversion of tachyzoites to bradyzoites using two isolates, VEG and TgCkAr23. T. gondii organisms were obtained from the peritoneal exudates (pex) of mice inoculated intraperitoneally (i.p.) with these isolates and administered to cats orally by pouring in the mouth or by a stomach tube. In total, 94 of 151 cats shed oocysts after ingesting pex. The pp after ingesting pex was short (5-10 days) in 50 cats, intermediate (11-17) in 30 cats, and long (18 or higher) in 14 cats. The strain of T. gondii (VEG, TgCKAr23) or the stage (bradyzoite, tachyzoite, and sporozoite) used to initiate infection in mice did not affect the results. In addition, six of eight cats fed mice infected 1-4 days earlier shed oocysts with a short pp; the mice had been inoculated i.p. with bradyzoites of the VEG strain and their whole carcasses were fed to cats 1, 2, 3, or 4 days post-infection. Results indicate that bradyzoites may be formed in the peritoneal cavities of mice inoculated intraperitoneally with T. gondii and some bradyzoites might give rise directly to bradyzoites without converting to tachyzoites.", "title": "" }, { "docid": "343f45efbdbf654c421b99927c076c5d", "text": "As software engineering educators, it is important for us to realize the increasing domain-specificity of software, and incorporate these changes in our design of teaching material. Bioinformatics software is an example of immensely complex and critical scientific software and this domain provides an excellent illustration of the role of computing in the life sciences. To study bioinformatics from a software engineering standpoint, we conducted an exploratory survey of bioinformatics developers. The survey had a range of questions about people, processes and products. We learned that practices like extreme programming, requirements engineering and documentation. As software engineering educators, we realized that the survey results had important implications for the education of bioinformatics professionals. We also investigated the current status of software engineering education in bioinformatics, by examining the curricula of more than fifty bioinformatics programs and the contents of over fifteen textbooks. We observed that there was no mention of the role and importance of software engineering practices essential for creating dependable software systems. Based on our findings and existing literature we present a set of recommendations for improving software engineering education in bioinformatics.", "title": "" }, { "docid": "cc980260540d9e9ae8e7219ff9424762", "text": "The persuasive design of e-commerce websites has been shown to support people with online purchases. 
Therefore, it is important to understand how persuasive applications are used and assimilated into e-commerce website designs. This paper demonstrates how the persuasive features of the Persuasive Systems Design (PSD) model can be used to support the extraction and evaluation of persuasive features in such e-commerce websites, thus explaining in practical terms how feature implementation can enhance website persuasiveness. To support a deeper understanding of persuasive e-commerce website design, this research uses the PSD model to identify the distinct persuasive features currently assimilated in ten successful e-commerce websites. The results revealed extensive use of persuasive features, particularly features related to dialogue support, credibility support, and primary task support, while highlighting weaknesses in the implementation of social support features. In conclusion, we suggest possible ways of enhancing persuasive feature implementation via appropriate contextual examples and explanations.", "title": "" },
In contrast to conventional assessment methods, in which manual analysis of impressions is carried out by trained clinicians, an automated scoring system faces several challenges. As an important pre-processing step prior to analysis, such computerized systems need to extract and recognize the individual shapes drawn by subjects on a sheet of paper. The aim of this study is to apply deep learning methods to recognize visual structures of interest produced by subjects. Experiments on figures of the Bender Gestalt Test (BGT), a screening test for visuo-spatial and visuo-constructive disorders, produced by 120 subjects, demonstrate that deep feature representations bring significant improvements over classical approaches. The study is intended to be extended to discriminating coherent visual structures between the produced figures and the expected prototypes.", "title": "" } ]
scidocsrr
931d129c91a8a84ef68653fc27a5f21d
Named entity recognition in query
[ { "docid": "419c721c2d0a269c65fae59c1bdb273c", "text": "Previous work on understanding user web search behavior has focused on how people search and what they are searching for, but not why they are searching. In this paper, we describe a framework for understanding the underlying goals of user searches, and our experience in using the framework to manually classify queries from a web search engine. Our analysis suggests that so-called navigational\" searches are less prevalent than generally believed while a previously unexplored \"resource-seeking\" goal may account for a large fraction of web searches. We also illustrate how this knowledge of user search goals might be used to improve future web search engines.", "title": "" } ]
[ { "docid": "758a922ccba0fc70574af94de5a4c2d9", "text": "We study unsupervised learning by developing a generative model built from progressively learned deep convolutional neural networks. The resulting generator is additionally a discriminator, capable of \"introspection\" in a sense — being able to self-evaluate the difference between its generated samples and the given training data. Through repeated discriminative learning, desirable properties of modern discriminative classifiers are directly inherited by the generator. Specifically, our model learns a sequence of CNN classifiers using a synthesis-by-classification algorithm. In the experiments, we observe encouraging results on a number of applications including texture modeling, artistic style transferring, face modeling, and unsupervised feature learning.", "title": "" }, { "docid": "36e3489f2d144be867fa4f2ff05324d4", "text": "Sentiment classification of Twitter data has been successfully applied in finding predictions in a variety of domains. However, using sentiment classification to predict stock market variables is still challenging and ongoing research. The main objective of this study is to compare the overall accuracy of two machine learning techniques (logistic regression and neural network) with respect to providing a positive, negative and neutral sentiment for stock-related tweets. Both classifiers are compared using Bigram term frequency (TF) and Unigram term frequency - inverse document term frequency (TF-IDF) weighting schemes. Classifiers are trained using a dataset that contains 42,000 automatically annotated tweets. The training dataset forms positive, negative and neutral tweets covering four technology-related stocks (Twitter, Google, Facebook, and Tesla) collected using Twitter Search API. Classifiers give the same results in terms of overall accuracy (58%). However, empirical experiments show that using Unigram TF-IDF outperforms TF.", "title": "" }, { "docid": "d0c8e58e06037d065944fc59b0bd7a74", "text": "We propose a new discrete choice model that generalizes the random utility model (RUM). We show that this model, called the Generalized Stochastic Preference (GSP) model can explain several choice phenomena that can’t be represented by a RUM. In particular, the model can easily (and also exactly) replicate some well known examples that are not RUM, as well as controlled choice experiments carried out since 1980’s that possess strong regularity violations. One of such regularity violation is the decoy effect in which the probability of choosing a product increases when a similar, but inferior product is added to the choice set. An appealing feature of the GSP is that it is non-parametric and therefore it has very high flexibility. The model has also a simple description and interpretation: it builds upon the well known representation of RUM as a stochastic preference, by allowing some additional consumer types to be non-rational.", "title": "" }, { "docid": "33b405dbbe291f6ba004fa6192501861", "text": "A quasi-static analysis of an open-ended coaxial line terminated by a semi-infinite medium on ground plane is presented in this paper. The analysis is based on a vtiriation formulation of the problem. A comparison of results obtained by this method with the experimental and the other theoretical approaches shows an excellent agreement. 
This analysis is expected to be helpful in the inverse problem of calculating the permittivity of materials in vivo for a given input impedance of the coaxial line.", "title": "" },
In two separate sessions, training-related, anthropometric, physiological, foot strike pattern and spatio-temporal variables were recorded. Significant differences (p<0.05) between groups (ES = 0.55-3.16) and correlations with performance were obtained (r = 0.34-0.92) in training-related (experience and running distance per week), anthropometric (mass, body mass index and sum of 6 skinfolds), physiological (VO2max, RCT and running economy), foot strike pattern and spatio-temporal variables (contact time, step rate and length). At standardized submaximal speeds (11, 13 and 15 km·h-1), no significant differences between groups were observed in step rate and length, neither in contact time when foot strike pattern was taken into account. In conclusion, apart from training-related, anthropometric and physiological variables, foot strike pattern and step length were the only biomechanical variables sensitive to half-marathon performance, which are essential to achieve high running speeds. However, when foot strike pattern and running speeds were controlled (submaximal test), the spatio-temporal variables were similar. This indicates that foot strike pattern and running speed are responsible for spatio-temporal differences among runners of different performance level.", "title": "" }, { "docid": "0946b5cb25e69f86b074ba6d736cd50f", "text": "Increase of malware and advanced cyber-attacks are now becoming a serious problem. Unknown malware which has not determined by security vendors is often used in these attacks, and it is becoming difficult to protect terminals from their infection. Therefore, a countermeasure for after infection is required. There are some malware infection detection methods which focus on the traffic data comes from malware. However, it is difficult to perfectly detect infection only using traffic data because it imitates benign traffic. In this paper, we propose malware process detection method based on process behavior in possible infected terminals. In proposal, we investigated stepwise application of Deep Neural Networks to classify malware process. First, we train the Recurrent Neural Network (RNN) to extract features of process behavior. Second, we train the Convolutional Neural Network (CNN) to classify feature images which are generated by the extracted features from the trained RNN. The evaluation result in several image size by comparing the AUC of obtained ROC curves and we obtained AUC= 0:96 in best case.", "title": "" }, { "docid": "4874f55e577bea77deed2750a9a73b30", "text": "Best practice exemplars suggest that digital platforms play a critical role in managing supply chain activities and partnerships that generate perjormance gains for firms. However, there is Umited academic investigation on how and why information technology can create performance gains for firms in a supply chain management (SCM) context. Grant's (1996) theoretical notion of higher-order capabilities and a hierarchy of capabilities has been used in recent information systems research by Barua et al. (2004). Sambamurthy et al. (2003), and Mithas et al. (2004) to reframe the conversation from the direct performance impacts of IT resources and investments to how and why IT shapes higher-order proeess capabilities that ereate performance gains for firms. 
We draw on the emerging IT-enabled organizational capabilities perspective to suggest that firms that develop IT infrastructure integration for SCM and leverage it to create a higher-order supply chain integration capability generate significant and sustainable performance gains. A research model is developed to investigate the hierarchy of IT-related capabilities and their impact on firm performance. Data were collected from 110 supply chain and logistics managers in manufacturing and retail organizations. Our results suggest that integrated IT infrastructures enable firms to develop the higher-order capability of supply chain process integration. This capability enables firms to unbundle information flows from physical flows, and to share information with their supply chain partners to create information-based approaches for superior demand planning, for the staging and movement of physical products, and for streamlining voluminous and complex financial work processes. Furthermore, IT-enabled supply chain integration capability results in significant and sustained firm performance gains, especially in operational excellence and revenue growth. Managerial implications of these findings are discussed.", "title": "" },
These are facts, not opinions, and science must be governed by data. There is no place for the \"moralistic fallacy\" that reality must conform to our social, political, or ethical desires.", "title": "" },
In order to overcome the difficulty of interconnecting multiple device elements with sufficient step-height coverage for contact metallization, a novel scheme involving the etching of sloped-sidewalls has been developed. The devices have current-voltage (I-V) characteristics approaching those of broad-area reference LEDs fabricated from the same wafer, and give comparable (3-mW) light output in the forward direction to the reference LEDs, despite much lower active area. The external efficiencies of the micro-LED arrays improve as the dimensions of the individual elements are scaled down. This is attributed to scattering at the etched sidewalls of in-plane propagating photons into the forward direction.", "title": "" }, { "docid": "f0f88be4a2b7619f6fb5cdcca1741d1f", "text": "BACKGROUND\nThere is no evidence from randomized trials to support a strategy of lowering systolic blood pressure below 135 to 140 mm Hg in persons with type 2 diabetes mellitus. We investigated whether therapy targeting normal systolic pressure (i.e., <120 mm Hg) reduces major cardiovascular events in participants with type 2 diabetes at high risk for cardiovascular events.\n\n\nMETHODS\nA total of 4733 participants with type 2 diabetes were randomly assigned to intensive therapy, targeting a systolic pressure of less than 120 mm Hg, or standard therapy, targeting a systolic pressure of less than 140 mm Hg. The primary composite outcome was nonfatal myocardial infarction, nonfatal stroke, or death from cardiovascular causes. The mean follow-up was 4.7 years.\n\n\nRESULTS\nAfter 1 year, the mean systolic blood pressure was 119.3 mm Hg in the intensive-therapy group and 133.5 mm Hg in the standard-therapy group. The annual rate of the primary outcome was 1.87% in the intensive-therapy group and 2.09% in the standard-therapy group (hazard ratio with intensive therapy, 0.88; 95% confidence interval [CI], 0.73 to 1.06; P=0.20). The annual rates of death from any cause were 1.28% and 1.19% in the two groups, respectively (hazard ratio, 1.07; 95% CI, 0.85 to 1.35; P=0.55). The annual rates of stroke, a prespecified secondary outcome, were 0.32% and 0.53% in the two groups, respectively (hazard ratio, 0.59; 95% CI, 0.39 to 0.89; P=0.01). Serious adverse events attributed to antihypertensive treatment occurred in 77 of the 2362 participants in the intensive-therapy group (3.3%) and 30 of the 2371 participants in the standard-therapy group (1.3%) (P<0.001).\n\n\nCONCLUSIONS\nIn patients with type 2 diabetes at high risk for cardiovascular events, targeting a systolic blood pressure of less than 120 mm Hg, as compared with less than 140 mm Hg, did not reduce the rate of a composite outcome of fatal and nonfatal major cardiovascular events. (ClinicalTrials.gov number, NCT00000620.)", "title": "" }, { "docid": "66127055aff890d3f3f9d40bd1875980", "text": "A simple, but comprehensive model of heat transfer and solidification of the continuous casting of steel slabs is described, including phenomena in the mold and spray regions. The model includes a one-dimensional (1-D) transient finite-difference calculation of heat conduction within the solidifying steel shell coupled with two-dimensional (2-D) steady-state heat conduction within the mold wall. The model features a detailed treatment of the interfacial gap between the shell and mold, including mass and momentum balances on the solid and liquid interfacial slag layers, and the effect of oscillation marks. 
The model predicts the shell thickness, temperature distributions in the mold and shell, thickness of the resolidified and liquid powder layers, heat-flux profiles down the wide and narrow faces, mold water temperature rise, ideal taper of the mold walls, and other related phenomena. The important effect of the nonuniform distribution of superheat is incorporated using the results from previous threedimensional (3-D) turbulent fluid-flow calculations within the liquid pool. The FORTRAN program CONID has a user-friendly interface and executes in less than 1 minute on a personal computer. Calibration of the model with several different experimental measurements on operating slab casters is presented along with several example applications. In particular, the model demonstrates that the increase in heat flux throughout the mold at higher casting speeds is caused by two combined effects: a thinner interfacial gap near the top of the mold and a thinner shell toward the bottom. This modeling tool can be applied to a wide range of practical problems in continuous casters.", "title": "" }, { "docid": "5491c265a1eb7166bb174097b49d258e", "text": "The importance of service quality for business performance has been recognized in the literature through the direct effect on customer satisfaction and the indirect effect on customer loyalty. The main objective of the study was to measure hotels' service quality performance from the customer perspective. To do so, a performance-only measurement scale (SERVPERF) was administered to customers stayed in three, four and five star hotels in Aqaba and Petra. Although the importance of service quality and service quality measurement has been recognized, there has been limited research that has addressed the structure and antecedents of the concept for the hotel industry. The clarification of the dimensions is important for managers in the hotel industry as it identifies the bundles of service attributes consumers find important. The results of the study demonstrate that SERVPERF is a reliable and valid tool to measure service quality in the hotel industry. The instrument consists of five dimensions, namely \"tangibles\", \"responsiveness\", \"empathy\", \"assurance\" and \"reliability\". Hotel customers are expecting more improved services from the hotels in all service quality dimensions. However, hotel customers have the lowest perception scores on empathy and tangibles. In the light of the results, possible managerial implications are discussed and future research subjects are recommended.", "title": "" }, { "docid": "e2de8284e14cb3abbd6e3fbcfb5bc091", "text": "In this paper, novel 2 one-dimensional (1D) Haar-like filtering techniques are proposed as a new and low calculation cost feature extraction method suitable for 3D acceleration signals based human activity recognition. Proposed filtering method is a simple difference filter with variable filter parameters. Our method holds a strong adaptability to various classification problems which no previously studied features (mean, standard deviation, etc.) possessed. In our experiment on human activity recognition, the proposed method achieved both the highest recognition accuracy of 93.91% while reducing calculation cost to 21.22% compared to previous method.", "title": "" }, { "docid": "9415adaa3ec2f7873a23cc2017a2f1ee", "text": "In this paper we introduce a new unsupervised reinforcement learning method for discovering the set of intrinsic options available to an agent. 
This set is learned by maximizing the number of different states an agent can reliably reach, as measured by the mutual information between the set of options and option termination states. To this end, we instantiate two policy gradient based algorithms, one that creates an explicit embedding space of options and one that represents options implicitly. The algorithms also provide an explicit measure of empowerment in a given state that can be used by an empowerment maximizing agent. The algorithm scales well with function approximation and we demonstrate the applicability of the algorithm on a range of tasks.", "title": "" }, { "docid": "bf00f7d7cdcbdc3e9d082bf92eec075c", "text": "Network software is a critical component of any distributed system. Because of its complexity, network software is commonly layered into a hierarchy of protocols, or more generally, into a protocol graph. Typical protocol graphs—including those standardized in the ISO and TCP/IP network architectures—share three important properties; the protocol graph is simple, the nodes of the graph (protocols) encapsulate complex functionality, and the topology of the graph is relatively static. This paper describes a new way to organize network software that differs from conventional architectures in all three of these properties. In our approach, the protocol graph is complex, individual protocols encapsulate a single function, and the topology of the graph is dynamic. The main contribution of this paper is to describe the ideas behind our new architecture, illustrate the advantages of using the architecture, and demonstrate that the architecture results in efficient network software.", "title": "" } ]
scidocsrr
6dc8bd3bc0c04c92fc132f2697cdf226
Combining control-flow integrity and static analysis for efficient and validated data sandboxing
[ { "docid": "83c81ecb870e84d4e8ab490da6caeae2", "text": "We introduceprogram shepherding, a method for monitoring control flow transfers during program execution to enforce a security policy. Shepherding ensures that malicious code masquerading as data is never executed, thwarting a large class of security attacks. Shepherding can also enforce entry points as the only way to execute shared library code. Furthermore, shepherding guarantees that sandboxing checks around any type of program operation will never be bypassed. We have implemented these capabilities efficiently in a runtime system with minimal or no performance penalties. This system operates on unmodified native binaries, requires no special hardware or operating system support, and runs on existing IA-32 machines.", "title": "" } ]
[ { "docid": "d945ae2fe20af58c2ca4812c797d361d", "text": "Triple-negative breast cancers (TNBC) are genetically characterized by aberrations in TP53 and a low rate of activating point mutations in common oncogenes, rendering it challenging in applying targeted therapies. We performed whole-exome sequencing (WES) and RNA sequencing (RNA-seq) to identify somatic genetic alterations in mouse models of TNBCs driven by loss of Trp53 alone or in combination with Brca1 Amplifications or translocations that resulted in elevated oncoprotein expression or oncoprotein-containing fusions, respectively, as well as frameshift mutations of tumor suppressors were identified in approximately 50% of the tumors evaluated. Although the spectrum of sporadic genetic alterations was diverse, the majority had in common the ability to activate the MAPK/PI3K pathways. Importantly, we demonstrated that approved or experimental drugs efficiently induce tumor regression specifically in tumors harboring somatic aberrations of the drug target. Our study suggests that the combination of WES and RNA-seq on human TNBC will lead to the identification of actionable therapeutic targets for precision medicine-guided TNBC treatment.Significance: Using combined WES and RNA-seq analyses, we identified sporadic oncogenic events in TNBC mouse models that share the capacity to activate the MAPK and/or PI3K pathways. Our data support a treatment tailored to the genetics of individual tumors that parallels the approaches being investigated in the ongoing NCI-MATCH, My Pathway Trial, and ESMART clinical trials. Cancer Discov; 8(3); 354-69. ©2017 AACR.See related commentary by Natrajan et al., p. 272See related article by Matissek et al., p. 336This article is highlighted in the In This Issue feature, p. 253.", "title": "" }, { "docid": "e2ce393fade02f0dfd20b9aca25afd0f", "text": "This paper presents a comparative lightning performance study conducted on a 275 kV double circuit shielded transmission line using two software programs, TFlash and Sigma-Slp. The line performance was investigated by using both a single stroke and a statistical performance analysis and considering cases of shielding failure and backflashover. A sensitivity analysis was carried out to determine the relationship between the flashover rate and the parameters influencing it. To improve the lightning performance of the line, metal oxide surge arresters were introduced using different phase and line locations. Optimised arrester arrangements are proposed.", "title": "" }, { "docid": "42b810b7ecd48590661cc5a538bec427", "text": "Most algorithms that rely on deep learning-based approaches to generate 3D point sets can only produce clouds containing fixed number of points. Furthermore, they typically require large networks parameterized by many weights, which makes them hard to train. In this paper, we propose an auto-encoder architecture that can both encode and decode clouds of arbitrary size and demonstrate its effectiveness at upsampling sparse point clouds. Interestingly, we can do so using less than half as many parameters as state-of-the-art architectures while still delivering better performance. We will make our code base fully available.", "title": "" }, { "docid": "ca41837dd01a66259854c03b820a46ff", "text": "We present a supervised sequence to sequence transduction model with a hard attention mechanism which combines the more traditional statistical alignment methods with the power of recurrent neural networks. 
We evaluate the model on the task of morphological inflection generation and show that it provides state of the art results in various setups compared to the previous neural and non-neural approaches. Eventually we present an analysis of the learned representations for both hard and soft attention models, shedding light on the features such models extract in order to solve the task.", "title": "" }, { "docid": "05d8383eb6b1c6434f75849859c35fd0", "text": "This paper proposes a robust approach for image based floor detection and segmentation from sequence of images or video. In contrast to many previous approaches, which uses a priori knowledge of the surroundings, our method uses combination of modified sparse optical flow and planar homography for ground plane detection which is then combined with graph based segmentation for extraction of floor from images. We also propose a probabilistic framework which makes our method adaptive to the changes in the surroundings. We tested our algorithm on several common indoor environment scenarios and were able to extract floor even under challenging circumstances. We obtained extremely satisfactory results in various practical scenarios such as where the floor and non floor areas are of same color, in presence of textured flooring, and where illumination changes are steep.", "title": "" }, { "docid": "f91ba4b37a2a9d80e5db5ace34e6e50a", "text": "Bearing currents and shaft voltages of an induction motor are measured under hardand soft-switching inverter excitation. The objective is to investigate whether the soft-switching technologies can provide solutions for reducing the bearing currents and shaft voltages. Two of the prevailing soft-switching inverters, the resonant dc-link inverter and the quasi-resonant dc-link inverter, are tested. The results are compared with those obtained using the conventional hard-switching inverter. To ensure objective comparisons between the softand hard-switching inverters, all inverters were configured identically and drove the same induction motor under the same operating conditions when the test data were collected. An insightful explanation of the experimental results is also provided to help understand the mechanisms of bearing currents and shaft voltages produced in the inverter drives. Consistency between the bearing current theory and the experimental results has been demonstrated. Conclusions are then drawn regarding the effectiveness of the soft-switching technologies as a solution to the bearing current and shaft voltage problems.", "title": "" }, { "docid": "3eaba817610278c4b1a82036ccfb6cc4", "text": "We propose to use thought-provoking children's questions (TPCQs), namely Highlights BrainPlay questions, to drive artificial intelligence research. These questions are designed to stimulate thought and learning in children , and they can be used to do the same thing in AI systems. We introduce the TPCQ task, which consists of taking a TPCQ question as input and producing as output both (1) answers to the question and (2) learned generalizations. We discuss how BrainPlay questions stimulate learning. We analyze 244 BrainPlay questions, and we report statistics on question type, question class, answer cardinality, answer class, types of knowledge needed, and types of reasoning needed. We find that BrainPlay questions span many aspects of intelligence. 
We envision an AI system based on the society of mind (Minsky 1986; Minsky 2006) consisting of a multilevel architecture with diverse resources that run in parallel to jointly answer and learn from questions. Because the answers to BrainPlay questions and the generalizations learned from them are often highly open-ended, we suggest using human judges for evaluation.", "title": "" }, { "docid": "b4b20c33b7f683cfead2fede8088f09b", "text": "Bus protection is typically a station-wide protection function, as it uses the majority of the high voltage (HV) electrical signals available in a substation. All current measurements that define the bus zone of protection are needed. Voltages may be included in bus protection relays, as the number of voltages is relatively low, so little additional investment is not needed to integrate them into the protection system. This paper presents a new Distributed Bus Protection System that represents a step forward in the concept of a Smart Substation solution. This Distributed Bus Protection System has been conceived not only as a protection system, but as a platform that incorporates the data collection from the HV equipment in an IEC 61850 process bus scheme. This new bus protection system is still a distributed bus protection solution. As opposed to dedicated bay units, this system uses IEC 61850 process interface units (that combine both merging units and contact I/O) for data collection. The main advantage then, is that as the bus protection is deployed, it is also deploying the platform to do data collection for other protection, control, and monitoring functions needed in the substation, such as line, transformer, and feeder. By installing the data collection pieces, this provides for the simplification of engineering tasks, and substantial savings in wiring, number of components, cabinets, installation, and commissioning. In this way the new bus protection system is the gateway to process bus, as opposed to an addon to a process bus system. The paper analyzes and describes the new Bus Protection System as a new conceptual design for a Smart Substation, highlighting the advantages in a vision that comprises not only a single element, but the entire installation. Keyword: Current Transformer, Digital Fault Recorder, Fiber Optic Cable, International Electro Technical Commission, Process Interface Units", "title": "" }, { "docid": "ca6001c3ed273b4f23565f4d40ddeb29", "text": "Learning semantic representations and tree structures of bilingual phrases is beneficial for statistical machine translation. In this paper, we propose a new neural network model called Bilingual Correspondence Recursive Autoencoder (BCorrRAE) to model bilingual phrases in translation. We incorporate word alignments into BCorrRAE to allow it freely access bilingual constraints at different levels. BCorrRAE minimizes a joint objective on the combination of a recursive autoencoder reconstruction error, a structural alignment consistency error and a crosslingual reconstruction error so as to not only generate alignment-consistent phrase structures, but also capture different levels of semantic relations within bilingual phrases. In order to examine the effectiveness of BCorrRAE, we incorporate both semantic and structural similarity features built on bilingual phrase representations and tree structures learned by BCorrRAE into a state-of-the-art SMT system. 
Experiments on NIST Chinese-English test sets show that our model achieves a substantial improvement of up to 1.55 BLEU points over the baseline.", "title": "" }, { "docid": "f698b77df48a5fac4df7ba81b4444dd5", "text": "Discontinuous-conduction mode (DCM) operation is usually employed in DC-DC converters for small inductor on printed circuit board (PCB) and high efficiency at light load. However, it is normally difficult for synchronous converter to realize the DCM operation, especially in high frequency applications, which requires a high speed and high precision comparator to detect the zero crossing point at cost of extra power losses. In this paper, a novel zero current detector (ZCD) circuit with an adaptive delay control loop for high frequency synchronous buck converter is presented. Compared to the conventional ZCD, proposed technique is proven to offer 8.5% efficiency enhancement when performed in a buck converter at the switching frequency of 4MHz and showed less sensitivity to the transistor mismatch of the sensor circuit.", "title": "" }, { "docid": "5bebef3a6ca0d595b6b3232e18f8789f", "text": "The usability of a software product has recently become a key software quality factor. The International Organization for Standardization (ISO) has developed a variety of models to specify and measure software usability but these individual models do not support all usability aspects. Furthermore, they are not yet well integrated into current software engineering practices and lack tool support. The aim of this research is to survey the actual representation (meanings and interpretations) of usability in ISO standards, indicate some of existing limitations and address them by proposing an enhanced, normative model for the evaluation of software usability.", "title": "" }, { "docid": "bac623d79d39991032fc46cc215b9fdd", "text": "The convergence of mobile computing and cloud computing enables new mobile applications that are both resource-intensive and interactive. For these applications, end-to-end network bandwidth and latency matter greatly when cloud resources are used to augment the computational power and battery life of a mobile device. This dissertation designs and implements a new architectural element called a cloudlet, that arises from the convergence of mobile computing and cloud computing. Cloudlets represent the middle tier of a 3-tier hierarchy, mobile device — cloudlet — cloud, to achieve the right balance between cloud consolidation and network responsiveness. We first present quantitative evidence that shows cloud location can affect the performance of mobile applications and cloud consolidation. We then describe an architectural solution using cloudlets that are a seamless extension of todays cloud computing infrastructure. Finally, we define minimal functionalities that cloudlets must offer above/beyond standard cloud computing, and address corresponding technical challenges.", "title": "" }, { "docid": "0b71458d700565bec9b91318023243df", "text": "The Humor Styles Questionnaire (HSQ; Martin et al., 2003) is one of the most frequently used questionnaires in humor research and has been adapted to several languages. The HSQ measures four humor styles (affiliative, self-enhancing, aggressive, and self-defeating), which should be adaptive or potentially maladaptive to psychosocial well-being. The present study analyzes the internal consistency, factorial validity, and factorial invariance of the HSQ on the basis of several German-speaking samples combined (total N = 1,101). 
Separate analyses were conducted for gender (male/female), age groups (16-24, 25-35, >36 years old), and countries (Germany/Switzerland). Internal consistencies were good for the overall sample and the demographic subgroups (.80-.89), with lower values obtained for the aggressive scale (.66-.73). Principal components and confirmatory factor analyses mostly supported the four-factor structure of the HSQ. Weak factorial invariance was found across gender and age groups, while strong factorial invariance was supported across countries. Two subsamples also provided self-ratings on ten styles of humorous conduct (n = 344) and of eight comic styles (n = 285). The four HSQ scales showed small to large correlations to the styles of humorous conduct (-.54 to .65) and small to medium correlations to the comic styles (-.27 to .42). The HSQ shared on average 27.5-35.0% of the variance with the styles of humorous conduct and 13.0-15.0% of the variance with the comic styles. Thus-despite similar labels-these styles of humorous conduct and comic styles differed from the HSQ humor styles.", "title": "" }, { "docid": "e677799d3bee1b25e74dc6c547c1b6c2", "text": "Street View serves millions of Google users daily with panoramic imagery captured in hundreds of cities in 20 countries across four continents. A team of Google researchers describes the technical challenges involved in capturing, processing, and serving street-level imagery on a global scale.", "title": "" }, { "docid": "fdaf0a7bc6dfa30d0c3ed3a96950d8c8", "text": "In this article we exploit the discrete-time dynamics of a single neuron with self-connection to systematically design simple signal filters. Due to hysteresis effects and transient dynamics, this single neuron behaves as an adjustable low-pass filter for specific parameter configurations. Extending this neuro-module by two more recurrent neurons leads to versatile highand band-pass filters. The approach presented here helps to understand how the dynamical properties of recurrent neural networks can be used for filter design. Furthermore, it gives guidance to a new way of implementing sensory preprocessing for acoustic signal recognition in autonomous robots.", "title": "" }, { "docid": "2af0ef7c117ace38f44a52379c639e78", "text": "Examination of a child with genital or anal disease may give rise to suspicion of sexual abuse. Dermatologic, traumatic, infectious, and congenital disorders may be confused with sexual abuse. Seven children referred to us are representative of such confusion.", "title": "" }, { "docid": "52017fa7d6cf2e6a18304b121225fc6f", "text": "In comparison to dense matrices multiplication, sparse matrices multiplication real performance for CPU is roughly 5–100 times lower when expressed in GFLOPs. For sparse matrices, microprocessors spend most of the time on comparing matrices indices rather than performing floating-point multiply and add operations. For 16-bit integer operations, like indices comparisons, computational power of the FPGA significantly surpasses that of CPU. Consequently, this paper presents a novel theoretical study how matrices sparsity factor influences the indices comparison to floating-point operation workload ratio. As a result, a novel FPGAs architecture for sparse matrix-matrix multiplication is presented for which indices comparison and floating-point operations are separated. We also verified our idea in practice, and the initial implementations results are very promising. 
To further decrease hardware resources required by the floating-point multiplier, a reduced width multiplication is proposed in the case when IEEE-754 standard compliance is not required.", "title": "" }, { "docid": "6341eaeb32d0e25660de6be6d3943e81", "text": "Theorists have speculated that primary psychopathy (or Factor 1 affective-interpersonal features) is prominently heritable whereas secondary psychopathy (or Factor 2 social deviance) is more environmentally determined. We tested this differential heritability hypothesis using a large adolescent twin sample. Trait-based proxies of primary and secondary psychopathic tendencies were assessed using Multidimensional Personality Questionnaire (MPQ) estimates of Fearless Dominance and Impulsive Antisociality, respectively. The environmental contexts of family, school, peers, and stressful life events were assessed using multiple raters and methods. Consistent with prior research, MPQ Impulsive Antisociality was robustly associated with each environmental risk factor, and these associations were significantly greater than those for MPQ Fearless Dominance. However, MPQ Fearless Dominance and Impulsive Antisociality exhibited similar heritability, and genetic effects mediated the associations between MPQ Impulsive Antisociality and the environmental measures. Results were largely consistent across male and female twins. We conclude that gene-environment correlations rather than main effects of genes and environments account for the differential environmental correlates of primary and secondary psychopathy.", "title": "" }, { "docid": "47ef46ef69a23e393d8503154f110a81", "text": "Question answering (Q&A) communities have been gaining popularity in the past few years. The success of such sites depends mainly on the contribution of a small number of expert users who provide a significant portion of the helpful answers, and so identifying users that have the potential of becoming strong contributers is an important task for owners of such communities.\n We present a study of the popular Q&A website StackOverflow (SO), in which users ask and answer questions about software development, algorithms, math and other technical topics. The dataset includes information on 3.5 million questions and 6.9 million answers created by 1.3 million users in the years 2008--2012. Participation in activities on the site (such as asking and answering questions) earns users reputation, which is an indicator of the value of that user to the site.\n We describe an analysis of the SO reputation system, and the participation patterns of high and low reputation users. The contributions of very high reputation users to the site indicate that they are the primary source of answers, and especially of high quality answers. Interestingly, we find that while the majority of questions on the site are asked by low reputation users, on average a high reputation user asks more questions than a user with low reputation. We consider a number of graph analysis methods for detecting influential and anomalous users in the underlying user interaction network, and find they are effective in detecting extreme behaviors such as those of spam users. 
Lastly, we show an application of our analysis: by considering user contributions over first months of activity on the site, we predict who will become influential long-term contributors.", "title": "" }, { "docid": "028be19d9b8baab4f5982688e41bfec8", "text": "The activation function for neurons is a prominent element in the deep learning architecture for obtaining high performance. Inspired by neuroscience findings, we introduce and define two types of neurons with different activation functions for artificial neural networks: excitatory and inhibitory neurons, which can be adaptively selected by selflearning. Based on the definition of neurons, in the paper we not only unify the mainstream activation functions, but also discuss the complementariness among these types of neurons. In addition, through the cooperation of excitatory and inhibitory neurons, we present a compositional activation function that leads to new state-of-the-art performance comparing to rectifier linear units. Finally, we hope that our framework not only gives a basic unified framework of the existing activation neurons to provide guidance for future design, but also contributes neurobiological explanations which can be treated as a window to bridge the gap between biology and computer science.", "title": "" } ]
scidocsrr
f97d72f8e43ed080e21db780ff110aa4
Tropical rat mites (Ornithonyssus bacoti) - serious ectoparasites.
[ { "docid": "5d7d7a49b254e08c95e40a3bed0aa10e", "text": "Five mentally handicapped individuals living in a home for disabled persons in Southern Germany were seen in our outpatient department with pruritic, red papules predominantly located in groups on the upper extremities, neck, upper trunk and face. Over several weeks 40 inhabitants and 5 caretakers were affected by the same rash. Inspection of their home and the sheds nearby disclosed infestation with rat populations and mites. Finally the diagnosis of tropical rat mite dermatitis was made by the identification of the arthropod Ornithonyssus bacoti or so-called tropical rat mite. The patients were treated with topical corticosteroids and antihistamines. After elimination of the rats and disinfection of the rooms by a professional exterminator no new cases of rat mite dermatitis occurred. The tropical rat mite is an external parasite occurring on rats, mice, gerbils, hamsters and various other small mammals. When the principal animal host is not available, human beings can become the victim of mite infestation.", "title": "" } ]
[ { "docid": "447e62529ed6b1b428e6edd78aabb637", "text": "Dexterity robotic hands can (Cummings, 1996) greatly enhance the functionality of humanoid robots, but the making of such hands with not only human-like appearance but also the capability of performing the natural movement of social robots is a challenging problem. The first challenge is to create the hand’s articulated structure and the second challenge is to actuate it to move like a human hand. A robotic hand for humanoid robot should look and behave human like. At the same time, it also needs to be light and cheap for widely used purposes. We start with studying the biomechanical features of a human hand and propose a simplified mechanical model of robotic hands, which can achieve the important local motions of the hand. Then, we use 3D modeling techniques to create a single interlocked hand model that integrates pin and ball joints to our hand model. Compared to other robotic hands, our design saves the time required for assembling and adjusting, which makes our robotic hand ready-to-use right after the 3D printing is completed. Finally, the actuation of the hand is realized by cables and motors. Based on this approach, we have designed a cost-effective, 3D printable, compact, and lightweight robotic hand. Our robotic hand weighs 150 g, has 15 joints, which are similar to a real human hand, and 6 Degree of Freedom (DOFs). It is actuated by only six small size actuators. The wrist connecting part is also integrated into the hand model and could be customized for different robots such as Nadine robot (Magnenat Thalmann et al., 2017). The compact servo bed can be hidden inside the Nadine robot’s sleeve and the whole robotic hand platform will not cause extra load to her arm as the total weight (150 g robotic hand and 162 g artificial skin) is almost the same as her previous unarticulated robotic hand which is 348 g. The paper also shows our test results with and without silicon artificial hand skin, and on Nadine robot.", "title": "" }, { "docid": "7d0dfce24bd539cb790c0c25348d075d", "text": "When learning from positive and unlabelled data, it is a strong assumption that the positive observations are randomly sampled from the distribution of X conditional on Y = 1, where X stands for the feature and Y the label. Most existing algorithms are optimally designed under the assumption. However, for many realworld applications, the observed positive examples are dependent on the conditional probability P (Y = 1|X) and should be sampled biasedly. In this paper, we assume that a positive example with a higher P (Y = 1|X) is more likely to be labelled and propose a probabilistic-gap based PU learning algorithms. Speci€cally, by treating the unlabelled data as noisy negative examples, we could automatically label a group positive and negative examples whose labels are identical to the ones assigned by a Bayesian optimal classi€er with a consistency guarantee. Œe relabelled examples have a biased domain, which is remedied by the kernel mean matching technique. Œe proposed algorithm is model-free and thus do not have any parameters to tune. Experimental results demonstrate that our method works well on both generated and real-world datasets. ∗UBTECH Sydney Arti€cial Intelligence Centre and the School of Information Technologies, Faculty of Engineering and Information Technologies, Œe University of Sydney, Darlington, NSW 2008, Australia, [email protected]; [email protected]; [email protected]. 
†Faculty of Information Technology, Monash University, Clayton, VIC 3800, Australia, geo‚[email protected]. arXiv:1808.02180v1 [cs.LG] 7 Aug 2018", "title": "" }, { "docid": "af0178d0bb154c3995732e63b94842ca", "text": "Cyborg intelligence is an emerging kind of intelligence paradigm. It aims to deeply integrate machine intelligence with biological intelligence by connecting machines and living beings via neural interfaces, enhancing strength by combining the biological cognition capability with the machine computational capability. Cyborg intelligence is considered to be a new way to augment living beings with machine intelligence. In this paper, we build rat cyborgs to demonstrate how they can expedite the maze escape task with integration of machine intelligence. We compare the performance of maze solving by computer, by individual rats, and by computer-aided rats (i.e. rat cyborgs). They were asked to find their way from a constant entrance to a constant exit in fourteen diverse mazes. Performance of maze solving was measured by steps, coverage rates, and time spent. The experimental results with six rats and their intelligence-augmented rat cyborgs show that rat cyborgs have the best performance in escaping from mazes. These results provide a proof-of-principle demonstration for cyborg intelligence. In addition, our novel cyborg intelligent system (rat cyborg) has great potential in various applications, such as search and rescue in complex terrains.", "title": "" }, { "docid": "b4ac5df370c0df5fdb3150afffd9158b", "text": "The aggregation of many independent estimates can outperform the most accurate individual judgement 1–3 . This centenarian finding 1,2 , popularly known as the 'wisdom of crowds' 3 , has been applied to problems ranging from the diagnosis of cancer 4 to financial forecasting 5 . It is widely believed that social influence undermines collective wisdom by reducing the diversity of opinions within the crowd. Here, we show that if a large crowd is structured in small independent groups, deliberation and social influence within groups improve the crowd’s collective accuracy. We asked a live crowd (N = 5,180) to respond to general-knowledge questions (for example, \"What is the height of the Eiffel Tower?\"). Participants first answered individually, then deliberated and made consensus decisions in groups of five, and finally provided revised individual estimates. We found that averaging consensus decisions was substantially more accurate than aggregating the initial independent opinions. Remarkably, combining as few as four consensus choices outperformed the wisdom of thousands of individuals. The collective wisdom of crowds often provides better answers to problems than individual judgements. Here, a large experiment that split a crowd into many small deliberative groups produced better estimates than the average of all answers in the crowd.", "title": "" }, { "docid": "7fe0c40d6f62d24b4fb565d3341c1422", "text": "Instead of a standard support vector machine (SVM) that classifies points by assigning them to one of two disjoint half-spaces, points are classified by assigning them to the closest of two parallel planes (in input or feature space) that are pushed apart as far as possible.
This formulation, which can also be interpreted as regularized least squares and considered in the much more general context of regularized networks [8, 9], leads to an extremely fast and simple algorithm for generating a linear or nonlinear classifier that merely requires the solution of a single system of linear equations. In contrast, standard SVMs solve a quadratic or a linear program that require considerably longer computational time. Computational results on publicly available datasets indicate that the proposed proximal SVM classifier has comparable test set correctness to that of standard SVM classifiers, but with considerably faster computational time that can be an order of magnitude faster. The linear proximal SVM can easily handle large datasets as indicated by the classification of a 2 million point 10-attribute set in 20.8 seconds. All computational results are based on 6 lines of MATLAB code.", "title": "" }, { "docid": "f01a1679095a163894660cb0748334d3", "text": "We present a novel approach for event extraction and abstraction from movie descriptions. Our event frame consists of ‘who”, “did what” “to whom”, “where”, and “when”. We formulate our problem using a recurrent neural network, enhanced with structural features extracted from syntactic parser, and trained using curriculum learning by progressively increasing the difficulty of the sentences. Our model serves as an intermediate step towards question answering systems, visual storytelling, and story completion tasks. We evaluate our approach on MovieQA dataset.", "title": "" }, { "docid": "130efef512294d14094a900693efebfd", "text": "Metaphor comprehension involves an interaction between the meaning of the topic and the vehicle terms of the metaphor. Meaning is represented by vectors in a high-dimensional semantic space. Predication modifies the topic vector by merging it with selected features of the vehicle vector. The resulting metaphor vector can be evaluated by comparing it with known landmarks in the semantic space. Thus, metaphorical prediction is treated in the present model in exactly the same way as literal predication. Some experimental results concerning metaphor comprehension are simulated within this framework, such as the nonreversibility of metaphors, priming of metaphors with literal statements, and priming of literal statements with metaphors.", "title": "" }, { "docid": "c8e23bc60783125d5bf489cddd3e8290", "text": "An efficient probabilistic algorithm for the concurrent mapping and localization problem that arises in mobile robotics is presented. The algorithm addresses the problem in which a team of robots builds a map on-line while simultaneously accommodating errors in the robots’ odometry. At the core of the algorithm is a technique that combines fast maximum likelihood map growing with a Monte Carlo localizer that uses particle representations. The combination of both yields an on-line algorithm that can cope with large odometric errors typically found when mapping environments with cycles. The algorithm can be implemented in a distributed manner on multiple robot platforms, enabling a team of robots to cooperatively generate a single map of their environment. Finally, an extension is described for acquiring three-dimensional maps, which capture the structure and visual appearance of indoor environments in three dimensions. 
KEY WORDS—mobile robotics, map acquisition, localization, robotic exploration, multi-robot systems, threedimensional modeling", "title": "" }, { "docid": "b69f7c0db77c3012ae5e550b23a313fb", "text": "Speckle noise is an inherent property of medical ultrasound imaging, and it generally tends to reduce the image resolution and contrast, thereby reducing the diagnostic value of this imaging modality. As a result, speckle noise reduction is an important prerequisite, whenever ultrasound imaging is used for tissue characterization. Among the many methods that have been proposed to perform this task, there exists a class of approaches that use a multiplicative model of speckled image formation and take advantage of the logarithmical transformation in order to convert multiplicative speckle noise into additive noise. The common assumption made in a dominant number of such studies is that the samples of the additive noise are mutually uncorrelated and obey a Gaussian distribution. The present study shows conceptually and experimentally that this assumption is oversimplified and unnatural. Moreover, it may lead to inadequate performance of the speckle reduction methods. The study introduces a simple preprocessing procedure, which modifies the acquired radio-frequency images (without affecting the anatomical information they contain), so that the noise in the log-transformation domain becomes very close in its behavior to a white Gaussian noise. As a result, the preprocessing allows filtering methods based on assuming the noise to be white and Gaussian, to perform in nearly optimal conditions. The study evaluates performances of three different, nonlinear filters - wavelet denoising, total variation filtering, and anisotropic diffusion - and demonstrates that, in all these cases, the proposed preprocessing significantly improves the quality of resultant images. Our numerical tests include a series of computer-simulated and in vivo experiments.", "title": "" }, { "docid": "84f2072f32d2a29d372eef0f4622ddce", "text": "This paper presents a new methodology for synthesis of broadband equivalent circuits for multi-port high speed interconnect systems from numerically obtained and/or measured frequency-domain and time-domain response data. The equivalent circuit synthesis is based on the rational function fitting of admittance matrix, which combines the frequency-domain vector fitting process, VECTFIT with its time-domain analog, TDVF to yield a robust and versatile fitting algorithm. The generated rational fit is directly converted into a SPICE-compatible circuit after passivity enforcement. The accuracy of the resulting algorithm is demonstrated through its application to the fitting of the admittance matrix of a power/ground plane structure", "title": "" }, { "docid": "e36e0c8659b8bae3acf0f178fce362c3", "text": "Clinical data describing the phenotypes and treatment of patients represents an underused data source that has much greater research potential than is currently realized. Mining of electronic health records (EHRs) has the potential for establishing new patient-stratification principles and for revealing unknown disease correlations. Integrating EHR data with genetic data will also give a finer understanding of genotype–phenotype relationships. However, a broad range of ethical, legal and technical reasons currently hinder the systematic deposition of these data in EHRs and their mining. 
Here, we consider the potential for furthering medical research and clinical care using EHR data and the challenges that must be overcome before this is a reality.", "title": "" }, { "docid": "56c5ec77f7b39692d8b0d5da0e14f82a", "text": "Using tweets extracted from Twitter during the Australian 2010-2011 floods, social network analysis techniques were used to generate and analyse the online networks that emerged at that time. The aim was to develop an understanding of the online communities for the Queensland, New South Wales and Victorian floods in order to identify active players and their effectiveness in disseminating critical information. A secondary goal was to identify important online resources disseminated by these communities. Important and effective players during the Queensland floods were found to be: local authorities (mainly the Queensland Police Services), political personalities (Queensland Premier, Prime Minister, Opposition Leader, Member of Parliament), social media volunteers, traditional media reporters, and people from not-for-profit, humanitarian, and community associations. A range of important resources were identified during the Queensland flood; however, they appeared to be of a more general information nature rather than vital information and updates on the disaster. Unlike Queensland, there was no evidence of Twitter activity from the part of local authorities and the government in the New South Wales and Victorian floods. Furthermore, the level of Twitter activity during the NSW floods was almost nil. Most of the active players during the NSW and Victorian floods were volunteers who were active during the Queensland floods. Given the positive results obtained by the active involvement of the local authorities and government officials in Queensland, and the increasing adoption of Twitter in other parts of the world for emergency situations, it seems reasonable to push for greater adoption of Twitter from local and federal authorities Australia-wide during periods of mass emergencies.", "title": "" }, { "docid": "9d37baf5ce33826a59cc7bd0fd7955c0", "text": "A digital image analysis method previously used to evaluate leaf color changes due to nutritional changes was modified to measure the severity of several foliar fungal diseases. Images captured with a flatbed scanner or digital camera were analyzed with a freely available software package, Scion Image, to measure changes in leaf color caused by fungal sporulation or tissue damage. High correlations were observed between the percent diseased leaf area estimated by Scion Image analysis and the percent diseased leaf area from leaf drawings. These drawings of various foliar diseases came from a disease key previously developed to aid in visual estimation of disease severity. For leaves of Nicotiana benthamiana inoculated with different spore concentrations of the anthracnose fungus Colletotrichum destructivum, a high correlation was found between the percent diseased tissue measured by Scion Image analysis and the number of leaf spots. The method was adapted to quantify percent diseased leaf area ranging from 0 to 90% for anthracnose of lily-of-the-valley, apple scab, powdery mildew of phlox and rust of golden rod. In some cases, the brightness and contrast of the images were adjusted and other modifications were made, but these were standardized for each disease. 
Detached leaves were used with the flatbed scanner, but a method using attached leaves with a digital camera was also developed to make serial measurements of individual leaves to quantify symptom progression. This was successfully applied to monitor anthracnose on N. benthamiana leaves. Digital image analysis using Scion Image software is a useful tool for quantifying a wide variety of fungal interactions with plant leaves.", "title": "" }, { "docid": "d46434bbbf73460bf422ebe4bd65b590", "text": "We present an efficient block-diagonal approximation to the Gauss-Newton matrix for feedforward neural networks. Our resulting algorithm is competitive against state-of-the-art first-order optimisation methods, with sometimes significant improvement in optimisation performance. Unlike first-order methods, for which hyperparameter tuning of the optimisation parameters is often a laborious process, our approach can provide good performance even when used with default settings. A side result of our work is that for piecewise linear transfer functions, the network objective function can have no differentiable local maxima, which may partially explain why such transfer functions facilitate effective optimisation.", "title": "" }, { "docid": "7830c4737197e84a247349f2e586424e", "text": "This paper describes VPL, a Virtual Programming Lab module for Moodle, developed at the University of Las Palmas of Gran Canaria (ULPGC) and released for free uses under GNU/GPL license. For the students, it is a simple development environment with auto evaluation capabilities. For the instructors, it is a students' work management system, with features to facilitate the preparation of assignments, manage the submissions, check for plagiarism, and do assessments with the aid of powerful and flexible assessment tools based on program testing, all of that being independent of the programming language used for the assignments and taken into account critical security issues.", "title": "" }, { "docid": "1241bc6b7d3522fe9e285ae843976524", "text": "In many new high performance designs, the leakage component of power consumption is comparable to the switching component. Reports indicate that 40% or even higher percentage of the total power consumption is due to the leakage of transistors. This percentage will increase with technology scaling unless effective techniques are introduced to bring leakage under control. This article focuses on circuit optimization and design automation techniques to accomplish this goal. The first part of the article provides an overview of basic physics and process scaling trends that have resulted in a significant increase in the leakage currents in CMOS circuits. This part also distinguishes between the standby and active components of the leakage current. The second part of the article describes a number of circuit optimization techniques for controlling the standby leakage current, including power gating and body bias control. The third part of the article presents techniques for active leakage control, including use of multiple-threshold cells, long channel devices, input vector design, transistor stacking to switching noise, and sizing with simultaneous threshold and supply voltage assignment.", "title": "" }, { "docid": "51cd0219f96b4ae6984df37ed439bbaa", "text": "This paper introduces an unsupervised framework to extract semantically rich features for video representation. 
Inspired by how the human visual system groups objects based on motion cues, we propose a deep convolutional neural network that disentangles motion, foreground and background information. The proposed architecture consists of a 3D convolutional feature encoder for blocks of 16 frames, which is trained for reconstruction tasks over the first and last frames of the sequence. A preliminary supervised experiment was conducted to verify the feasibility of proposed method by training the model with a fraction of videos from the UCF-101 dataset taking as ground truth the bounding boxes around the activity regions. Qualitative results indicate that the network can successfully segment foreground and background in videos as well as update the foreground appearance based on disentangled motion features. The benefits of these learned features are shown in a discriminative classification task, where initializing the network with the proposed pretraining method outperforms both random initialization and autoencoder pretraining. Our model and source code are publicly available at https: //allenovo.github.io/cvprw17_webpage/ .", "title": "" }, { "docid": "ad9a94a4deafceedccdd5f4164cde293", "text": "In this paper, we investigate the application of machine learning techniques and word embeddings to the task of Recognizing Textual Entailment (RTE) in Social Media. We look at a manually labeled dataset (Lendvai et al., 2016) consisting of user generated short texts posted on Twitter (tweets) and related to four recent media events (the Charlie Hebdo shooting, the Ottawa shooting, the Sydney Siege, and the German Wings crash) and test to what extent neural techniques and embeddings are able to distinguish between tweets that entail or contradict each other or that claim unrelated things. We obtain comparable results to the state of the art in a train-test setting, but we show that, due to the noisy aspect of the data, results plummet in an evaluation strategy crafted to better simulate a real-life train-test scenario.", "title": "" }, { "docid": "896fe681f79ef025a6058a51dd4f19c0", "text": "Semantic parsing is the construction of a complete, formal, symbolic meaning representation of a sentence. While it is crucial to natural language understanding, the problem of semantic parsing has received relatively little attention from the machine learning community. Recent work on natural language understanding has mainly focused on shallow semantic analysis, such as word-sense disambiguation and semantic role labeling. Semantic parsing, on the other hand, involves deep semantic analysis in which word senses, semantic roles and other components are combined to produce useful meaning representations for a particular application domain (e.g. database query). Prior research in machine learning for semantic parsing is mainly based on inductive logic programming or deterministic parsing, which lack some of the robustness that characterizes statistical learning. Existing statistical approaches to semantic parsing, however, are mostly concerned with relatively simple application domains in which a meaning representation is no more than a single semantic frame. In this proposal, we present a novel statistical approach to semantic parsing, WASP, which can handle meaning representations with a nested structure. The WASP algorithm learns a semantic parser given a set of sentences annotated with their correct meaning representations. 
The parsing model is based on the synchronous context-free grammar, where each rule maps a natural-language substring to its meaning representation. The main innovation of the algorithm is its use of state-of-the-art statistical machine translation techniques. A statistical word alignment model is used for lexical acquisition, and the parsing model itself can be seen as an instance of a syntax-based translation model. In initial evaluation on several real-world data sets, we show that WASP performs favorably in terms of both accuracy and coverage compared to existing learning methods requiring similar amount of supervision, and shows better robustness to variations in task complexity and word order. In future work, we intend to pursue several directions in developing accurate semantic parsers for a variety of application domains. This will involve exploiting prior knowledge about the natural-language syntax and the application domain. We also plan to construct a syntax-aware word-based alignment model for lexical acquisition. Finally, we will generalize the learning algorithm to handle contextdependent sentences and accept noisy training data.", "title": "" }, { "docid": "6a455fd9c86feb287a3c5a103bb681de", "text": "This paper presents two approaches to semantic search by incorporating Linked Data annotations of documents into a Generalized Vector Space Model. One model exploits taxonomic relationships among entities in documents and queries, while the other model computes term weights based on semantic relationships within a document. We publish an evaluation dataset with annotated documents and queries as well as user-rated relevance assessments. The evaluation on this dataset shows significant improvements of both models over traditional keyword based search.", "title": "" } ]
scidocsrr
d44ed5c436ff5cec861c3e49d122fab2
Design space exploration of FPGA accelerators for convolutional neural networks
[ { "docid": "5c8c391a10f32069849d743abc5e8210", "text": "We present a massively parallel coprocessor for accelerating Convolutional Neural Networks (CNNs), a class of important machine learning algorithms. The coprocessor functional units, consisting of parallel 2D convolution primitives and programmable units performing sub-sampling and non-linear functions specific to CNNs, implement a “meta-operator” to which a CNN may be compiled to. The coprocessor is serviced by distributed off-chip memory banks with large data bandwidth. As a key feature, we use low precision data and further increase the effective memory bandwidth by packing multiple words in every memory operation, and leverage the algorithm’s simple data access patterns to use off-chip memory as a scratchpad for intermediate data, critical for CNNs. A CNN is mapped to the coprocessor hardware primitives with instructions to transfer data between the memory and coprocessor. We have implemented a prototype of the CNN coprocessor on an off-the-shelf PCI FPGA card with a single Xilinx Virtex5 LX330T FPGA and 4 DDR2 memory banks totaling 1GB. The coprocessor prototype can process at the rate of 3.4 billion multiply accumulates per second (GMACs) for CNN forward propagation, a speed that is 31x faster than a software implementation on a 2.2 GHz AMD Opteron processor. For a complete face recognition application with the CNN on the coprocessor and the rest of the image processing tasks on the host, the prototype is 6-10x faster, depending on the host-coprocessor bandwidth.", "title": "" } ]
[ { "docid": "0939a703cb2eeb9396c4e681f95e1e4d", "text": "Learning-based methods for visual segmentation have made progress on particular types of segmentation tasks, but are limited by the necessary supervision, the narrow definitions of fixed tasks, and the lack of control during inference for correcting errors. To remedy the rigidity and annotation burden of standard approaches, we address the problem of few-shot segmentation: given few image and few pixel supervision, segment any images accordingly. We propose guided networks, which extract a latent task representation from any amount of supervision, and optimize our architecture end-to-end for fast, accurate few-shot segmentation. Our method can switch tasks without further optimization and quickly update when given more guidance. We report the first results for segmentation from one pixel per concept and show real-time interactive video segmentation. Our unified approach propagates pixel annotations across space for interactive segmentation, across time for video segmentation, and across scenes for semantic segmentation. Our guided segmentor is state-of-the-art in accuracy for the amount of annotation and time. See http://github.com/shelhamer/revolver for code, models, and more details.", "title": "" }, { "docid": "8f29a231b801a018a6d18befc0d06d0b", "text": "The paper introduces a deep learningbased Twitter hate-speech text classification system. The classifier assigns each tweet to one of four predefined categories: racism, sexism, both (racism and sexism) and non-hate-speech. Four Convolutional Neural Network models were trained on resp. character 4-grams, word vectors based on semantic information built using word2vec, randomly generated word vectors, and word vectors combined with character n-grams. The feature set was down-sized in the networks by maxpooling, and a softmax function used to classify tweets. Tested by 10-fold crossvalidation, the model based on word2vec embeddings performed best, with higher precision than recall, and a 78.3% F-score.", "title": "" }, { "docid": "9b60816097ccdff7b1eec177aac0b9b8", "text": "We introduce a neural network that represents sentences by composing their words according to induced binary parse trees. We use Tree-LSTM as our composition function, applied along a tree structure found by a fully differentiable natural language chart parser. Our model simultaneously optimises both the composition function and the parser, thus eliminating the need for externally-provided parse trees which are normally required for Tree-LSTM. It can therefore be seen as a tree-based RNN that is unsupervised with respect to the parse trees. As it is fully differentiable, our model is easily trained with an off-the-shelf gradient descent method and backpropagation. We demonstrate that it achieves better performance compared to various supervised Tree-LSTM architectures on a textual entailment task and a reverse dictionary task.", "title": "" }, { "docid": "2e812c0a44832721fcbd7272f9f6a465", "text": "Previous research has shown that people differ in their implicit theories about the essential characteristics of intelligence and emotions. Some people believe these characteristics to be predetermined and immutable (entity theorists), whereas others believe that these characteristics can be changed through learning and behavior training (incremental theorists). 
The present study provides evidence that in healthy adults (N = 688), implicit beliefs about emotions and emotional intelligence (EI) may influence performance on the ability-based Mayer-Salovey-Caruso Emotional Intelligence Test (MSCEIT). Adults in our sample with incremental theories about emotions and EI scored higher on the MSCEIT than entity theorists, with implicit theories about EI showing a stronger relationship to scores than theories about emotions. Although our participants perceived both emotion and EI as malleable, they viewed emotions as more malleable than EI. Women and young adults in general were more likely to be incremental theorists than men and older adults. Furthermore, we found that emotion and EI theories mediated the relationship of gender and age with ability EI. Our findings suggest that people's implicit theories about EI may influence their emotional abilities, which may have important consequences for personal and professional EI training.", "title": "" }, { "docid": "5ea42460dc2bdd2ebc2037e35e01dca9", "text": "Mobile edge clouds (MECs) are small cloud-like infrastructures deployed in close proximity to users, allowing users to have seamless and low-latency access to cloud services. When users move across different locations, their service applications often need to be migrated to follow the user so that the benefit of MEC is maintained. In this paper, we propose a layered framework for migrating running applications that are encapsulated either in virtual machines (VMs) or containers. We evaluate the migration performance of various real applications under the proposed framework.", "title": "" }, { "docid": "a9052b10f9750d58eb33b9e5d564ee6e", "text": "Cyber Physical Systems (CPS) play a significant role in shaping smart manufacturing systems. CPS integrate computation with physical processes where behaviors are represented in both cyber and physical parts of the system. In order to understand CPS in the context of smart manufacturing, an overview of CPS technologies, components, and relevant standards is presented. A detailed technical review of the existing engineering tools and practices from major control vendors has been conducted. Furthermore, potential research areas have been identified in order to enhance the tools' functionalities and capabilities in supporting the CPS development process.", "title": "" }, { "docid": "a8f27679e13572d00d5eae3496cec014", "text": "Today, the world is moving toward an aging society, and elderly people are at high risk of dementia or depression. In recent years, with the rapid development of internet of things (IoT) techniques, it has become feasible to build a system that combines IoT and cloud techniques for detecting and preventing dementia or depression in the elderly. This paper proposes an IoT-based elderly behavioral difference warning system for early warning of depression and dementia. The proposed system is composed of wearable smart glasses, a BLE-based indoor trilateration positioning system, and a cloud-based service platform. As a result, the proposed system can not only reduce human and medical costs, but also improve the cure rate of depression or delay the deterioration of dementia.", "title": "" }, { "docid": "2e4ac47cdc063d76089c17f30a379765", "text": "Determination of the type and origin of the body fluids found at a crime scene can give important insights into crime scene reconstruction by supporting a link between sample donors and actual criminal acts.
For more than a century, numerous types of body fluid identification methods have been developed, such as chemical tests, immunological tests, protein catalytic activity tests, spectroscopic methods and microscopy. However, these conventional body fluid identification methods are mostly presumptive, and are carried out for only one body fluid at a time. Therefore, the use of a molecular genetics-based approach using RNA profiling or DNA methylation detection has been recently proposed to supplant conventional body fluid identification methods. Several RNA markers and tDMRs (tissue-specific differentially methylated regions) which are specific to forensically relevant body fluids have been identified, and their specificities and sensitivities have been tested using various samples. In this review, we provide an overview of the present knowledge and the most recent developments in forensic body fluid identification and discuss its possible practical application to forensic casework.", "title": "" }, { "docid": "05b4df16c35a89ee2a5b9ac482e0a297", "text": "Intensity-based classification of MR images has proven problematic, even when advanced techniques are used. Intrascan and interscan intensity inhomogeneities are a common source of difficulty. While reported methods have had some success in correcting intrascan inhomogeneities, such methods require supervision for the individual scan. This paper describes a new method called adaptive segmentation that uses knowledge of tissue intensity properties and intensity inhomogeneities to correct and segment MR images. Use of the expectation-maximization (EM) algorithm leads to a method that allows for more accurate segmentation of tissue types as well as better visualization of magnetic resonance imaging (MRI) data, and has proven to be effective in a study that includes more than 1000 brain scans. Implementation and results are described for segmenting the brain in the following types of images: axial (dual-echo spin-echo), coronal [three dimensional Fourier transform (3-DFT) gradient-echo T1-weighted], all using a conventional head coil, and a sagittal section acquired using a surface coil. The accuracy of adaptive segmentation was found to be comparable with manual segmentation, and closer to manual segmentation than supervised multivariate classification while segmenting gray and white matter.", "title": "" }, { "docid": "e2c9c7c26436f0f7ef0067660b5f10b8", "text": "The naive Bayesian classifier (NBC) is a simple yet very efficient classification technique in machine learning. However, the impractical conditional independence assumption of NBC greatly degrades its performance. There are two primary ways to improve NBC's performance. One is to relax the conditional independence assumption in NBC. This method improves NBC's accuracy by searching for additional conditional dependencies among the attributes of the samples within a given scope, and it usually involves very complex search algorithms. The other is to change the representation of the samples by creating new attributes from the original attributes, and to construct the NBC from these new attributes while keeping the conditional independence assumption. The key problem of this method is to guarantee strong conditional independencies among the new attributes. In this paper, a new means of constructing the attribute set, which maps the original attributes to new attributes according to information geometry and the Fisher score, is presented, and the FS-NBC is then constructed on the new attributes.
The conditional dependence relations among the new attributes are discussed theoretically. We prove that these new attributes are conditionally independent of each other under certain conditions. The experimental results show that our method improves the performance of NBC considerably.", "title": "" }, { "docid": "4816f221d67922009a308058139aa56b", "text": "In this paper we study quantum computation from a complexity theoretic viewpoint. Our first result is the existence of an efficient universal quantum Turing machine in Deutsch's model of a quantum Turing machine (QTM) [Proc. Roy. Soc. London Ser. A, 400 (1985), pp. 97–117]. This construction is substantially more complicated than the corresponding construction for classical Turing machines (TMs); in fact, even simple primitives such as looping, branching, and composition are not straightforward in the context of quantum Turing machines. We establish how these familiar primitives can be implemented and introduce some new, purely quantum mechanical primitives, such as changing the computational basis and carrying out an arbitrary unitary transformation of polynomially bounded dimension. We also consider the precision to which the transition amplitudes of a quantum Turing machine need to be specified. We prove that O(log T) bits of precision suffice to support a T-step computation. This justifies the claim that the quantum Turing machine model should be regarded as a discrete model of computation and not an analog one. We give the first formal evidence that quantum Turing machines violate the modern (complexity theoretic) formulation of the Church–Turing thesis. We show the existence of a problem, relative to an oracle, that can be solved in polynomial time on a quantum Turing machine, but requires superpolynomial time on a bounded-error probabilistic Turing machine, and thus is not in the class BPP. The class BQP of languages that are efficiently decidable (with small error-probability) on a quantum Turing machine satisfies BPP ⊆ BQP ⊆ P^#P. Therefore, there is no possibility of giving a mathematical proof that quantum Turing machines are more powerful than classical probabilistic Turing machines (in the unrelativized setting) unless there is a major breakthrough in complexity theory.", "title": "" }, { "docid": "a0d34b1c003b7e88c2871deaaba761ed", "text": "Sentence simplification aims to make sentences easier to read and understand. Most recent approaches draw on insights from machine translation to learn simplification rewrites from monolingual corpora of complex and simple sentences. We address the simplification problem with an encoder-decoder model coupled with a deep reinforcement learning framework. Our model, which we call DRESS (as shorthand for Deep REinforcement Sentence Simplification), explores the space of possible simplifications while learning to optimize a reward function that encourages outputs which are simple, fluent, and preserve the meaning of the input. Experiments on three datasets demonstrate that our model outperforms competitive simplification systems.", "title": "" }, { "docid": "df1ea45a4b20042abd99418ff6d1f44e", "text": "This paper combines wavelet transforms with basic detection theory to develop a new unsupervised method for robustly detecting and localizing spikes in noisy neural recordings. The method does not require the construction of templates or the supervised setting of thresholds.
We present extensive Monte Carlo simulations, based on actual extracellular recordings, to show that this technique surpasses other commonly used methods in a wide variety of recording conditions. We further demonstrate that the false positives produced by our method resemble actual spikes more closely than those of other techniques such as amplitude thresholding. Moreover, the simplicity of the method allows for nearly real-time execution.", "title": "" }, { "docid": "da816b4a0aea96feceefe22a67c45be4", "text": "Representation and learning of commonsense knowledge is one of the foundational problems in the quest to enable deep language understanding. This issue is particularly challenging for understanding causal and correlational relationships between events. While this topic has received a lot of interest in the NLP community, research has been hindered by the lack of a proper evaluation framework. This paper attempts to address this problem with a new framework for evaluating story understanding and script learning: the ‘Story Cloze Test’. This test requires a system to choose the correct ending to a four-sentence story. We created a new corpus of 50k five-sentence commonsense stories, ROCStories, to enable this evaluation. This corpus is unique in two ways: (1) it captures a rich set of causal and temporal commonsense relations between daily events, and (2) it is a high quality collection of everyday life stories that can also be used for story generation. Experimental evaluation shows that a host of baselines and state-of-the-art models based on shallow language understanding struggle to achieve a high score on the Story Cloze Test. We discuss these implications for script and story learning, and offer suggestions for deeper language understanding.", "title": "" }, { "docid": "3e727d70f141f52fb9c432afa3747ceb", "text": "In this paper, we propose an improvement of Adversarial Transformation Networks (ATN) [1] to generate adversarial examples, which can fool both white-box and black-box models with state-of-the-art performance and won second place in the non-targeted attack task in CAAD 2018. We first introduce the overall architecture of our method, then present our improvement of the loss functions for generating adversarial examples that satisfy the L∞ norm restriction in the non-targeted attack problem. We then illustrate how a robustness-enhancing module makes our adversarial examples more robust and improves their transferability. Finally, we show how our method attacks an ensemble of models.", "title": "" }, { "docid": "a0d1d59fc987d90e500b3963ac11b2ad", "text": "The purpose of this paper is to present the applicability of THOMAS, an architecture specially designed to model agent-based virtual organizations, in the development of a multiagent system for managing and planning routes for clients in a mall. In order to build virtual organizations, THOMAS offers mechanisms to take into account their structure, behaviour, dynamics, norms and environment. Moreover, one of the primary characteristics of the THOMAS architecture is the use of agents with reasoning and planning capabilities. These agents can perform a dynamic reorganization when they detect changes in the environment. The proposed architecture is composed of a set of related modules that are appropriate for developing systems in highly volatile environments similar to the one presented in this study.
This paper presents THOMAS as well as the results obtained after having applied the system to a case study. © 2011 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "fd171b73ea88d9b862149e1c1d72aea8", "text": "Localization of people and devices is one of the main building blocks of context-aware systems, since the user's position represents the core information for detecting the user's activities, device activations, proximity to points of interest, etc. While for outdoor scenarios the Global Positioning System (GPS) constitutes a reliable and easily available technology, for indoor scenarios GPS is largely unavailable. In this paper we present a range-based indoor localization system that exploits the Received Signal Strength (RSS) of Bluetooth Low Energy (BLE) beacon packets broadcast by anchor nodes and received by a BLE-enabled device. The method used to infer the user's position is based on stigmergy. We exploit the stigmergic marking process to create an on-line probability map identifying the user's position in the indoor environment.", "title": "" }, { "docid": "b959bce5ea9db71d677586eb1b6f023e", "text": "We consider autonomous racing of two cars and present an approach to formulate the decision making as a non-cooperative non-zero-sum game. The game is formulated by restricting both players to fulfill static track constraints as well as collision constraints which depend on the combined actions of the two players. At the same time the players try to maximize their own progress. In the case where the action space of the players is finite, the racing game can be reformulated as a bimatrix game. For this bimatrix game, we show that the actions obtained by a sequential maximization approach where only the follower considers the action of the leader are identical to a Stackelberg and a Nash equilibrium in pure strategies. Furthermore, we propose a game promoting blocking, by additionally rewarding the leading car for staying ahead at the end of the horizon. We show that this changes the Stackelberg equilibrium, but has a minor influence on the Nash equilibria. For an online implementation, we propose to play the games in a moving horizon fashion, and we present two methods for guaranteeing feasibility of the resulting coupled repeated games. Finally, we study the performance of the proposed approaches in simulation for a set-up that replicates the miniature race car tested at the Automatic Control Laboratory of ETH Zürich. The simulation study shows that the presented games can successfully model different racing behaviors and generate interesting racing situations.", "title": "" }, { "docid": "516ef94fad7f7e5801bf1ef637ffb136", "text": "With parallelizable attention networks, the neural Transformer is very fast to train. However, due to the auto-regressive architecture and self-attention in the decoder, the decoding procedure becomes slow. To alleviate this issue, we propose an average attention network as an alternative to the self-attention network in the decoder of the neural Transformer. The average attention network consists of two layers, with an average layer that models dependencies on previous positions and a gating layer that is stacked over the average layer to enhance the expressiveness of the proposed attention network. We apply this network on the decoder part of the neural Transformer to replace the original target-side self-attention model.
With masking tricks and dynamic programming, our model enables the neural Transformer to decode sentences over four times faster than its original version with almost no loss in training time and translation performance. We conduct a series of experiments on WMT17 translation tasks, where, on 6 different language pairs, we obtain robust and consistent speed-ups in decoding.", "title": "" }, { "docid": "bed29a89354c1dfcebbdde38d1addd1d", "text": "Eosinophilic skin diseases, commonly termed eosinophilic dermatoses, refer to a broad spectrum of skin diseases characterized by eosinophil infiltration and/or degranulation in skin lesions, with or without blood eosinophilia. The majority of eosinophilic dermatoses lie in the allergy-related group, including allergic drug eruption, urticaria, allergic contact dermatitis, atopic dermatitis, and eczema. Parasitic infestations, arthropod bites, and autoimmune blistering skin diseases, such as bullous pemphigoid, are also common. Besides these, there are several rare types of eosinophilic dermatoses of unknown origin, in which eosinophil infiltration is a central component and affects specific tissue layers or adnexal structures of the skin, such as the dermis, subcutaneous fat, fascia, follicles, and cutaneous vessels. Some typical examples are eosinophilic cellulitis, granuloma faciale, eosinophilic pustular folliculitis, recurrent cutaneous eosinophilic vasculitis, and eosinophilic fasciitis. Although tissue eosinophilia is a common feature shared by these disorders, their clinical and pathological properties differ dramatically. Among these rare entities, eosinophilic pustular folliculitis may be associated with human immunodeficiency virus (HIV) infection or malignancies, and some other diseases, like eosinophilic fasciitis and eosinophilic cellulitis, may be associated with an underlying hematological disorder, while others are considered idiopathic. However, for most of these rare eosinophilic dermatoses, the causes and the pathogenic mechanisms remain largely unknown, and systematic, high-quality clinical investigations are needed to advance strategies for clinical diagnosis and treatment. Here, we present a comprehensive review of the etiology, pathogenesis, clinical features, and management of these rare entities, with an emphasis on recent advances and current consensus.", "title": "" } ]
scidocsrr