| query_id (string, 32 chars) | query (string, 5–4.91k chars) | positive_passages (list, 1–22 items) | negative_passages (list, 9–100 items) | subset (string, 7 classes) |
|---|---|---|---|---|
ed6273210338b7e1259db03a0b7f8533
|
Fraud detection in international calls using fuzzy logic
|
[
{
"docid": "c5d5dfaa7af58dcd7c0ddc412e08bec2",
"text": "Telecommunications fraud is a problem that affects operators all around the world. Operators know that fraud cannot be completely eradicated. The solution to deal with this problem is to minimize the damages and cut down losses by detecting fraud situations as early as possible. Computer systems were developed or acquired, and experts were trained to detect these situations. Still, the operators have the need to evolve this process, in order to detect fraud earlier and also get a better understanding of the fraud attacks they suffer. In this paper the fraud problem is analyzed and a new approach to the problem is designed. This new approach, based on the profiling and KDD (Knowledge Discovery in Data) techniques, supported in a MAS (Multiagent System), does not replace the existing fraud detection systems; it uses them and their results to provide operators new fraud detection methods and new knowledge.",
"title": ""
},
{
"docid": "2b97e03fa089cdee0bf504dd85e5e4bb",
"text": "One of the most severe threats to revenue and quality of service in telecom providers is fraud. The advent of new technologies has provided fraudsters new techniques to commit fraud. SIM box fraud is one of such fraud that has emerged with the use of VOIP technologies. In this work, a total of nine features found to be useful in identifying SIM box fraud subscriber are derived from the attributes of the Customer Database Record (CDR). Artificial Neural Networks (ANN) has shown promising solutions in classification problems due to their generalization capabilities. Therefore, supervised learning method was applied using Multi layer perceptron (MLP) as a classifier. Dataset obtained from real mobile communication company was used for the experiments. ANN had shown classification accuracy of 98.71 %.",
"title": ""
},
{
"docid": "1a13a0d13e0925e327c9b151b3e5b32d",
"text": "The topic of this thesis is fraud detection in mobile communications networks by means of user profiling and classification techniques. The goal is to first identify relevant user groups based on call data and then to assign a user to a relevant group. Fraud may be defined as a dishonest or illegal use of services, with the intention to avoid service charges. Fraud detection is an important application, since network operators lose a relevant portion of their revenue to fraud. Whereas the intentions of the mobile phone users cannot be observed, it is assumed that the intentions are reflected in the call data. The call data is subsequently used in describing behavioral patterns of users. Neural networks and probabilistic models are employed in learning these usage patterns from call data. These models are used either to detect abrupt changes in established usage patterns or to recognize typical usage patterns of fraud. The methods are shown to be effective in detecting fraudulent behavior by empirically testing the methods with data from real mobile communications networks. © All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior permission of the author.",
"title": ""
}
] |
[
{
"docid": "b91b42da0e7ffe838bf9d7ab0bd54bea",
"text": "When creating line drawings, artists frequently depict intended curves using multiple, tightly clustered, or overdrawn, strokes. Given such sketches, human observers can readily envision these intended, aggregate, curves, and mentally assemble the artist's envisioned 2D imagery. Algorithmic stroke consolidation---replacement of overdrawn stroke clusters by corresponding aggregate curves---can benefit a range of sketch processing and sketch-based modeling applications which are designed to operate on consolidated, intended curves. We propose StrokeAggregator, a novel stroke consolidation method that significantly improves on the state of the art, and produces aggregate curve drawings validated to be consistent with viewer expectations. Our framework clusters strokes into groups that jointly define intended aggregate curves by leveraging principles derived from human perception research and observation of artistic practices. We employ these principles within a coarse-to-fine clustering method that starts with an initial clustering based on pairwise stroke compatibility analysis, and then refines it by analyzing interactions both within and in-between clusters of strokes. We facilitate this analysis by computing a common 1D parameterization for groups of strokes via common aggregate curve fitting. We demonstrate our method on a large range of line drawings, and validate its ability to generate consolidated drawings that are consistent with viewer perception via qualitative user evaluation, and comparisons to manually consolidated drawings and algorithmic alternatives.",
"title": ""
},
{
"docid": "534a3885c710bc9a65fa2d66e2937dd4",
"text": "This paper examines the concept of culture, and the potential impact of intercultural dynamics of software development. Many of the difficulties confronting today's global software development (GSD) environment have little to do with technical issues; rather, they are \"human\" issues that occur when extensive collaboration and communication among developers with distinct cultural backgrounds are required. Although project managers are reporting that intercultural factors are impacting software practices and artifacts and deserve more detailed study, little analytical research has been conducted in this area other than anecdotal testimonials by software professionals. This paper presents an introductory analysis of the effect that intercultural factors have on global software development. The paper first establishes a framework for intercultural variations by introducing several models commonly used to define culture. Cross-cultural issues that often arise in software development are then identified. The paper continues by explaining the importance of taking intercultural issues seriously and proposes some ideas for future research in the area",
"title": ""
},
{
"docid": "7a0cec9d0e1f865a639db4f65626b5c2",
"text": "Over the past century, academic performance has become the gatekeeper to institutions of higher education, shaping career paths and individual life trajectories. Accordingly, much psychological research has focused on identifying predictors of academic performance, with intelligence and effort emerging as core determinants. In this article, we propose expanding on the traditional set of predictors by adding a third agency: intellectual curiosity. A series of path models based on a meta-analytically derived correlation matrix showed that (a) intelligence is the single most powerful predictor of academic performance; (b) the effects of intelligence on academic performance are not mediated by personality traits; (c) intelligence, Conscientiousness (as marker of effort), and Typical Intellectual Engagement (as marker of intellectual curiosity) are direct, correlated predictors of academic performance; and (d) the additive predictive effect of the personality traits of intellectual curiosity and effort rival that the influence of intelligence. Our results highlight that a \"hungry mind\" is a core determinant of individual differences in academic achievement.",
"title": ""
},
{
"docid": "66c132250df2d08fa707f86035bfd073",
"text": "Morphing, fusion and stitching of digital photographs from multiple sources is a common problem in the recent era. While images may depict visual normalcy despite a splicing operation, there are domains in which a consistency or an anomaly check can be performed to detect a covert digital stitching process. Most digital and low-end mobile cameras have certain intrinsic sensor aberrations such as purple fringing (PF), seen in image regions where there are contrast variations and shadowing. This paper proposes an approach based on Fuzzy clustering to first identify regions which contain Purple Fringing and is then used as a forensic tool to detect splicing operations. The accuracy of the Fuzzy clustering approach is comparable with the state-of-the-art PF detection methods and has been shown to penetrate standard interpolation and stitching operations performed using ADOBE PHOTOSHOP.",
"title": ""
},
{
"docid": "07ec8379b9a51faed0b050d7b1d85922",
"text": "In this paper we propose a Deep Neural Network (D NN) based Speech Enhancement (SE) system that is designed to maximize an approximation of the Short-Time Objective Intelligibility (STOI) measure. We formalize an approximate-STOI cost function and derive analytical expressions for the gradients required for DNN training and show that these gradients have desirable properties when used together with gradient based optimization techniques. We show through simulation experiments that the proposed SE system achieves large improvements in estimated speech intelligibility, when tested on matched and unmatched natural noise types, at multiple signal-to-noise ratios. Furthermore, we show that the SE system, when trained using an approximate-STOI cost function performs on par with a system trained with a mean square error cost applied to short-time temporal envelopes. Finally, we show that the proposed SE system performs on par with a traditional DNN based Short- Time Spectral Amplitude (STSA) SE system in terms of estimated speech intelligibility. These results are important because they suggest that traditional DNN based STSA SE systems might be optimal in terms of estimated speech intelligibility.",
"title": ""
},
{
"docid": "8d4007b4d769c2d90ae07b5fdaee8688",
"text": "In this project, we implement the semi-supervised Recursive Autoencoders (RAE), and achieve the result comparable with result in [1] on the Movie Review Polarity dataset1. We achieve 76.08% accuracy, which is slightly lower than [1] ’s result 76.8%, with less vector length. Experiments show that the model can learn sentiment and build reasonable structure from sentence.We find longer word vector and adjustment of words’ meaning vector is beneficial, while normalization of transfer function brings some improvement. We also find normalization of the input word vector may be beneficial for training.",
"title": ""
},
{
"docid": "2e32d668383eaaed096aa2e34a10d8e9",
"text": "Splicing and copy-move are two well known methods of passive image forgery. In this paper, splicing and copy-move forgery detection are performed simultaneously on the same database CASIA v1.0 and CASIA v2.0. Initially, a suspicious image is taken and features are extracted through BDCT and enhanced threshold method. The proposed technique decides whether the given image is manipulated or not. If it is manipulated then support vector machine (SVM) classify that the given image is gone through splicing forgery or copy-move forgery. For copy-move detection, ZM-polar (Zernike Moment) is used to locate the duplicated regions in image. Experimental results depict the performance of the proposed method.",
"title": ""
},
{
"docid": "e0836eb305f54283ced106528e5102a0",
"text": "Face attributes are interesting due to their detailed description of human faces. Unlike prior researches working on attribute prediction, we address an inverse and more challenging problem called face attribute manipulation which aims at modifying a face image according to a given attribute value. Instead of manipulating the whole image, we propose to learn the corresponding residual image defined as the difference between images before and after the manipulation. In this way, the manipulation can be operated efficiently with modest pixel modification. The framework of our approach is based on the Generative Adversarial Network. It consists of two image transformation networks and a discriminative network. The transformation networks are responsible for the attribute manipulation and its dual operation and the discriminative network is used to distinguish the generated images from real images. We also apply dual learning to allow transformation networks to learn from each other. Experiments show that residual images can be effectively learned and used for attribute manipulations. The generated images remain most of the details in attribute-irrelevant areas.",
"title": ""
},
{
"docid": "1783f837b61013391f3ff4f03ac6742e",
"text": "Nowadays, many methods have been applied for data transmission of MWD system. Magnetic induction is one of the alternative technique. In this paper, detailed discussion on magnetic induction communication system is provided. The optimal coil configuration is obtained by theoretical analysis and software simulations. Based on this coil arrangement, communication characteristics of path loss and bit error rate are derived.",
"title": ""
},
{
"docid": "fd5f3a14f731b4af60c86d7bac95e997",
"text": "(Document Summary) Direct selling as a type of non-store retailing continues to increase internationally and in Australia in its use and popularity. One non-store retailing method, multilevel marketing or network marketing, has recently incurred a degree of consumer suspicion and negative perceptions. A study was developed to investigate consumer perceptions and concerns in New South Wales and Victoria. Consumers were surveyed to determine their perception of direct selling and its relationship to consumer purchasing decisions. Responses indicate consumers had a negative perceptions towards network marketing, while holding a low positive view of direct selling. There appears to be no influence of network marketing on consumer purchase decisions. Direct selling, as a method of non-store retailing, has continued to increase in popularity in Australia and internationally. This study investigated network marketing as a type of direct selling in Australia, by examining consumers' perceptions. The results indicate that Australian consumers were generally negative and suspicious towards network marketing in Australia.",
"title": ""
},
{
"docid": "f16676f00cd50173d75bd61936ec200c",
"text": "Training of the neural autoregressive density estimator (NADE) can be viewed as doing one step of probabilistic inference on missing values in data. We propose a new model that extends this inference scheme to multiple steps, arguing that it is easier to learn to improve a reconstruction in k steps rather than to learn to reconstruct in a single inference step. The proposed model is an unsupervised building block for deep learning that combines the desirable properties of NADE and multi-prediction training: (1) Its test likelihood can be computed analytically, (2) it is easy to generate independent samples from it, and (3) it uses an inference engine that is a superset of variational inference for Boltzmann machines. The proposed NADE-k is competitive with the state-of-the-art in density estimation on the two datasets tested.",
"title": ""
},
{
"docid": "0759d6bd8c46a5ea5ce16c3675e07784",
"text": "Because context has a robust influence on the processing of subsequent words, the idea that readers and listeners predict upcoming words has attracted research attention, but prediction has fallen in and out of favor as a likely factor in normal comprehension. We note that the common sense of this word includes both benefits for confirmed predictions and costs for disconfirmed predictions. The N400 component of the event-related potential (ERP) reliably indexes the benefits of semantic context. Evidence that the N400 is sensitive to the other half of prediction--a cost for failure--is largely absent from the literature. This raises the possibility that \"prediction\" is not a good description of what comprehenders do. However, it need not be the case that the benefits and costs of prediction are evident in a single ERP component. Research outside of language processing indicates that late positive components of the ERP are very sensitive to disconfirmed predictions. We review late positive components elicited by words that are potentially more or less predictable from preceding sentence context. This survey suggests that late positive responses to unexpected words are fairly common, but that these consist of two distinct components with different scalp topographies, one associated with semantically incongruent words and one associated with congruent words. We conclude with a discussion of the possible cognitive correlates of these distinct late positivities and their relationships with more thoroughly characterized ERP components, namely the P300, P600 response to syntactic errors, and the \"old/new effect\" in studies of recognition memory.",
"title": ""
},
{
"docid": "7f6738aeccf7bc0e490d62e3030fdaf3",
"text": "Customer churn prediction is becoming an increasingly important business analytics problem for telecom operators. In order to increase the efficiency of customer retention campaigns, churn prediction models need to be accurate as well as compact and interpretable. Although a myriad of techniques for churn prediction has been examined, there has been little attention for the use of Bayesian Network classifiers. This paper investigates the predictive power of a number of Bayesian Network algorithms, ranging from the Naive Bayes classifier to General Bayesian Network classifiers. Furthermore, a feature selection method based on the concept of the Markov Blanket, which is genuinely related to Bayesian Networks, is tested. The performance of the classifiers is evaluated with both the Area under the Receiver Operating Characteristic Curve and the recently introduced Maximum Profit criterion. The Maximum Profit criterion performs an intelligent optimization by targeting this fraction of the customer base which would maximize the profit generated by a retention campaign. The results of the experiments are rigorously tested and indicate that most of the analyzed techniques have a comparable performance. Some methods, however, are more preferred since they lead to compact networks, which enhances the interpretability and comprehensibility of the churn prediction models.",
"title": ""
},
{
"docid": "5a1a40a965d05d0eb898d9ff5595618c",
"text": "BACKGROUND\nKeratosis pilaris is a common skin disorder of childhood that often improves with age. Less common variants of keratosis pilaris include keratosis pilaris atrophicans and atrophodermia vermiculata.\n\n\nOBSERVATIONS\nIn this case series from dermatology practices in the United States, Canada, Israel, and Australia, the clinical characteristics of 27 patients with keratosis pilaris rubra are described. Marked erythema with follicular prominence was noted in all patients, most commonly affecting the lateral aspects of the cheeks and the proximal arms and legs, with both more marked erythema and widespread extent of disease than in keratosis pilaris. The mean age at onset was 5 years (range, birth to 12 years). Sixty-three percent of patients were male. No patients had atrophy or scarring from their lesions. Various treatments were used, with minimal or no improvement in most cases.\n\n\nCONCLUSIONS\nKeratosis pilaris rubra is a variant of keratosis pilaris, with more prominent erythema and with more widespread areas of skin involvement in some cases, but without the atrophy or hyperpigmentation noted in certain keratosis pilaris variants. It seems to be a relatively common but uncommonly reported condition.",
"title": ""
},
{
"docid": "1a99b71b6c3c33d97c235a4d72013034",
"text": "Crowdfunding systems are social media websites that allow people to donate small amounts of money that add up to fund valuable larger projects. These websites are structured around projects: finite campaigns with welldefined goals, end dates, and completion criteria. We use a dataset from an existing crowdfunding website — the school charity Donors Choose — to understand the value of completing projects. We find that completing a project is an important act that leads to larger donations (over twice as large), greater likelihood of returning to donate again, and few projects that expire close but not complete. A conservative estimate suggests that this completion bias led to over $15 million in increased donations to Donors Choose, representing approximately 16% of the total donations for the period under study. This bias suggests that structuring many types of collaborative work as a series of projects might increase contribution significantly. Many social media creators find it rather difficult to motivate users to actively participate and contribute their time, energy, or money to make a site valuable to others. The value in social media largely derives from interactions between and among people who are working together to achieve common goals. To encourage people to participate and contribute, social media creators regularly look for different ways of structuring participation. Some use a blog-type format, such as Facebook, Twitter, or Tumblr. Some use a collaborative document format like Wikipedia. And some use a project-based format. A project is a well-defined set of tasks that needs to be accomplished. Projects usually have a well-defined end goal — something that needs to be accomplished for the project to be considered a success — and an end date — a day by which the project needs to be completed. Much work in society is structured around projects; for example, Hollywood makes movies by organizing each movie’s production as a project, hiring a new crew for each movie. Construction companies organize their work as a sequence of projects. And projects are common in knowledge-work based businesses (?). Copyright c © 2013, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Another important place we see project-based organization is in crowdfunding websites. Crowdfunding is a relatively new phenomenon that merges modern social web technologies with project-based fundraising. It is a new form of social media that publicizes projects that need money, and allows the crowd to each make a small contribution toward the larger project. By aggregating many small donations, crowdfunding websites can fund large and interesting projects of all kinds. Kickstarter, IndieGoGo, Spot.Us, and Donors Choose are examples of crowdfunding websites targeted at specific types of projects (creative, entrepreneurial, journalism, and classroom projects respectively). Crowdfunding is becoming an increasingly popular tool for enabling project-based work. Kickstarter, for example, has raised over $400 million for over 35,000 creative projects, and Donors Choose has raised over $90 million for over 200,000 classroom projects. Additionally, crowdfunding websites represent potential new business models for a number of industries, including some struggling to find viable revenue streams: Sellaband has proven successful in helping musicians fund the creation and distribution of their music; and Spot.Us enables journalists to fund and publish investigative news. 
In this paper, I seek to understand why crowdfunding systems that are organized around projects are successful. Using a dataset from Donors Choose, a crowdfunding charity that funds classroom projects for K–12 school teachers, I find that completing a project is a powerful motivator that helps projects succeed in the presence of a crowd: donations that complete a project are over twice as large as normal donations. People who make these donations are more likely to return and donate in the future, and their future donations are larger. And few projects get close to completion but fail. Together, these results suggest that completing the funding for a project is an important act for the crowd, and structuring the fundraising around completable projects helps enable success. This also has implications for other types of collaborative technologies.",
"title": ""
},
{
"docid": "36165cb8c6690863ed98c490ba889a9e",
"text": "This paper presents a new low-cost digital control solution that maximizes the AC/DC flyback power supply efficiency. This intelligent digital approach achieves the combined benefits of high performance, low cost and high reliability in a single controller. It introduces unique multiple PWM and PFM operational modes adaptively based on the power supply load changes. While the multi-mode PWM/PFM control significantly improves the light-load efficiency and thus the overall average efficiency, it does not bring compromise to other system performance, such as audible noise, voltage ripples or regulations. It also seamlessly integrated an improved quasi-resonant switching scheme that enables valley-mode turn on in every switching cycle without causing modification to the main PWM/PFM control schemes. A digital integrated circuit (IC) that implements this solution, namely iW1696, has been fabricated and introduced to the industry recently. In addition to outlining the approach, this paper provides experimental results obtained on a 3-W (5V/550mA) cell phone charger that is built with the iW1696.",
"title": ""
},
{
"docid": "6de3aca18d6c68f0250c8090ee042a4e",
"text": "JavaScript is widely used by web developers and the complexity of JavaScript programs has increased over the last year. Therefore, the need for program analysis for JavaScript is evident. Points-to analysis for JavaScript is to determine the set of objects to which a reference variable or an object property may point. Points-to analysis for JavaScript is a basis for further program analyses for JavaScript. It has a wide range of applications in code optimization and software engineering tools. However, points-to analysis for JavaScript has not yet been developed.\n JavaScript has dynamic features such as the runtime modification of objects through addition of properties or updating of methods. We propose a points-to analysis for JavaScript which precisely handles the dynamic features of JavaScript. Our work is the first attempt to analyze the points-to behavior of JavaScript. We evaluate the analysis on a set of JavaScript programs. We also apply the analysis to a code optimization technique to show that the analysis can be practically useful.",
"title": ""
},
{
"docid": "759b5a86bc70147842a106cf20b3a0cd",
"text": "This article reviews recent advances in convex optimization algorithms for big data, which aim to reduce the computational, storage, and communications bottlenecks. We provide an overview of this emerging field, describe contemporary approximation techniques such as first-order methods and randomization for scalability, and survey the important role of parallel and distributed computation. The new big data algorithms are based on surprisingly simple principles and attain staggering accelerations even on classical problems.",
"title": ""
},
{
"docid": "abc2d0757184f5c50e4f2b3a6dabb56c",
"text": "This paper describes the hardware implementation of the RANdom Sample Consensus (RANSAC) algorithm for featured-based image registration applications. The Multiple-Input Signature Register (MISR) and the index register are used to achieve the random sampling effect. The systolic array architecture is adopted to implement the forward elimination step in the Gaussian elimination. The computational complexity in the forward elimination is reduced by sharing the coefficient matrix. As a result, the area of the hardware cost is reduced by more than 50%. The proposed architecture is realized using Verilog and achieves real-time calculation on 30 fps 1024 * 1024 video stream on 100 MHz clock.",
"title": ""
},
{
"docid": "00280615cb28a6f16bde541af2bc356d",
"text": "Querying with an example image is a simple and intuitive interface to retrieve information from a visual database. Most of the research in image retrieval has focused on the task of instance-level image retrieval, where the goal is to retrieve images that contain the same object instance as the query image. In this work we move beyond instance-level retrieval and consider the task of semantic image retrieval in complex scenes, where the goal is to retrieve images that share the same semantics as the query image. We show that, despite its subjective nature, the task of semantically ranking visual scenes is consistently implemented across a pool of human annotators. We also show that a similarity based on human-annotated region-level captions is highly correlated with the human ranking and constitutes a good computable surrogate. Following this observation, we learn a visual embedding of the images where the similarity in the visual space is correlated with their semantic similarity surrogate. We further extend our model to learn a joint embedding of visual and textual cues that allows one to query the database using a text modifier in addition to the query image, adapting the results to the modifier. Finally, our model can ground the ranking decisions by showing regions that contributed the most to the similarity between pairs of images, providing a visual explanation of the similarity.",
"title": ""
}
] |
scidocsrr
|
fdbfd2d1733f3292f83a75e06e16f6c4
|
DeepFam: deep learning based alignment-free method for protein family modeling and prediction
|
[
{
"docid": "a5306ca9a50e82e07d487d1ac7603074",
"text": "Many modern visual recognition algorithms incorporate a step of spatial ‘pooling’, where the outputs of several nearby feature detectors are combined into a local or global ‘bag of features’, in a way that preserves task-related information while removing irrelevant details. Pooling is used to achieve invariance to image transformations, more compact representations, and better robustness to noise and clutter. Several papers have shown that the details of the pooling operation can greatly influence the performance, but studies have so far been purely empirical. In this paper, we show that the reasons underlying the performance of various pooling methods are obscured by several confounding factors, such as the link between the sample cardinality in a spatial pool and the resolution at which low-level features have been extracted. We provide a detailed theoretical analysis of max pooling and average pooling, and give extensive empirical comparisons for object recognition tasks.",
"title": ""
}
] |
[
{
"docid": "a338df86cf504d246000c42512473f93",
"text": "Natural Language Processing (NLP) has emerged with a wide scope of research in the area. The Burmese language, also called the Myanmar Language is a resource scarce, tonal, analytical, syllable-timed and principally monosyllabic language with Subject-Object-Verb (SOV) ordering. NLP of Burmese language is also challenged by the fact that it has no white spaces and word boundaries. Keeping these facts in view, the current paper is a first formal attempt to present a bibliography of research works pertinent to NLP tasks in Burmese language. Instead of presenting mere catalogue, the current work is also specifically elaborated by annotations as well as classifications of NLP task research works in NLP related categories. The paper presents the state-of-the-art of Burmese NLP tasks. Both annotations and classifications of NLP tasks of Burmese language are useful to the scientific community as it shows where the field of research in Burmese NLP is going. In fact, to the best of author’s knowledge, this is first work of its kind worldwide for any language. For a period spanning more than 25 years, the paper discusses Burmese language Word Identification, Segmentation, Disambiguation, Collation, Semantic Parsing and Tokenization followed by Part-Of-Speech (POS) Tagging, Machine Translation Systems (MTS), Text Keying/Input, Recognition and Text Display Methods. Burmese language WordNet, Search Engine and influence of other languages on Burmese language are also discussed.",
"title": ""
},
{
"docid": "1202e46fcc6c2f88b81fcf153ed4fd7d",
"text": "Recently, several high dimensional classification methods have been proposed to automatically discriminate between patients with Alzheimer's disease (AD) or mild cognitive impairment (MCI) and elderly controls (CN) based on T1-weighted MRI. However, these methods were assessed on different populations, making it difficult to compare their performance. In this paper, we evaluated the performance of ten approaches (five voxel-based methods, three methods based on cortical thickness and two methods based on the hippocampus) using 509 subjects from the ADNI database. Three classification experiments were performed: CN vs AD, CN vs MCIc (MCI who had converted to AD within 18 months, MCI converters - MCIc) and MCIc vs MCInc (MCI who had not converted to AD within 18 months, MCI non-converters - MCInc). Data from 81 CN, 67 MCInc, 39 MCIc and 69 AD were used for training and hyperparameters optimization. The remaining independent samples of 81 CN, 67 MCInc, 37 MCIc and 68 AD were used to obtain an unbiased estimate of the performance of the methods. For AD vs CN, whole-brain methods (voxel-based or cortical thickness-based) achieved high accuracies (up to 81% sensitivity and 95% specificity). For the detection of prodromal AD (CN vs MCIc), the sensitivity was substantially lower. For the prediction of conversion, no classifier obtained significantly better results than chance. We also compared the results obtained using the DARTEL registration to that using SPM5 unified segmentation. DARTEL significantly improved six out of 20 classification experiments and led to lower results in only two cases. Overall, the use of feature selection did not improve the performance but substantially increased the computation times.",
"title": ""
},
{
"docid": "fbfb6b7cb2dc3e774197c470c55a928b",
"text": "The integrated modular avionics (IMA) architectures have ushered in a new wave of thought regarding avionics integration. IMA architectures utilize shared, configurable computing, communication, and I/O resources. These architectures allow avionics system integrators to benefit from increased system scalability, as well as from a form of platform management that reduces the workload for aircraft-level avionics integration activities. In order to realize these architectural benefits, the avionics suppliers must engage in new philosophies for sharing a set of system-level resources that are managed a level higher than each individual avionics system. The mechanisms for configuring and managing these shared intersystem resources are integral to managing the increased level of avionics integration that is inherent to the IMA architectures. This paper provides guidance for developing the methodology and tools to efficiently manage the set of shared intersystem resources. This guidance is based upon the author's experience in developing the Genesis IMA architecture at Smiths Aerospace. The Genesis IMA architecture was implemented on the Boeing 787 Dreamliner as the common core system (CCS)",
"title": ""
},
{
"docid": "8e31b1f0ed3055332136d8161149e9ed",
"text": "Data collection has become easy due to the rapid development of both mobile devices and wireless networks. In each second, numerous data are generated by user devices and collected through wireless networks. These data, carrying user and network related information, are invaluable for network management. However, they were seldom employed to improve network performance in existing research work. In this article we propose a bandwidth allocation algorithm to increase the throughput of cellular network users by exploring user and network data collected from user devices. With the aid of these data, users can be categorized into clusters and share bandwidth to improve the resource utilization of the network. Simulation results indicate that the proposed scheme is able to rationally form clusters among mobile users and thus significantly increase the throughput and bandwidth efficiency of the network.",
"title": ""
},
{
"docid": "ab6371d4c57d9cf453826833f32677c5",
"text": "In this paper, we consider two inter-dependent deep networks, where one network taps into the other, to perform two challenging cognitive vision tasks - scene classification and object recognition jointly. Recently, convolutional neural networks have shown promising results in each of these tasks. However, as scene and objects are interrelated, the performance of both of these recognition tasks can be further improved by exploiting dependencies between scene and object deep networks. The advantages of considering the inter-dependency between these networks are the following: 1. improvement of accuracy in both scene and object classification, and 2. significant reduction of computational cost in object detection. In order to formulate our framework, we employ two convolutional neural networks (CNNs), scene-CNN and object-CNN. We utilize scene-CNN to generate object proposals which indicate the probable object locations in an image. Object proposals found in the process are semantically relevant to the object. More importantly, the number of object proposals is fewer in amount when compared to other existing methods which reduces the computational cost significantly. Thereafter, in scene classification, we train three hidden layers in order to combine the global (image as a whole) and local features (object information in an image). Features extracted from CNN architecture along with the features processed from object-CNN are combined to perform efficient classification. We perform rigorous experiments on five datasets to demonstrate that our proposed framework outperforms other state-of-the-art methods in classifying scenes as well as recognizing objects.",
"title": ""
},
{
"docid": "4d12a4269e4969148f6d5331f5d8afdd",
"text": "Money laundering has become of increasing concern to law makers in recent years, principally because of its associations with terrorism. Recent legislative changes in the United Kingdom mean that auditors risk becoming state law enforcement agents in the private sector. We examine this legislation from the perspective of the changing nature of the relationship between auditors and the state, and the surveillant assemblage within which this is located. Auditors are statutorily obliged to file Suspicious Activity Reports (SARs) into an online database, ELMER, but without much guidance regarding how suspicion is determined. Criminal rather than civil or regulatory sanctions apply to auditors’ instances of non-compliance. This paper evaluates the surveillance implications of the legislation for auditors through lenses developed in the accounting and sociological literature by Brivot andGendron, Neu andHeincke, Deleuze and Guattari, and Haggerty and Ericson. It finds that auditors are generating information flows which are subsequently reassembled into discrete and virtual ‘data doubles’ to be captured and utilised by authorised third parties for unknown purposes. The paper proposes that the surveillant assemblage has extended into the space of the auditor-client relationship, but this extension remains inhibited as a result of auditors’ relatively weak level of engagement in providing SARs, thereby pointing to a degree of resistance in professional service firms regarding the deployment of regulation that compromises the foundations of this",
"title": ""
},
{
"docid": "3ba2477beb6a42bfe2e0c45d9b48b471",
"text": "The presence and functional role of inositol trisphosphate receptors (IP3R) was investigated by electrophysiology and immunohistochemistry in hair cells from the frog semicircular canal. Intracellular recordings were performed from single fibres of the posterior canal in the isolated, intact frog labyrinth, at rest and during rotation, in the presence of IP3 receptor inhibitors and drugs known to produce Ca2+ release from the internal stores or to increase IP3 production. Hair cell immunolabelling for IP3 receptor was performed by standard procedures. The drug 2-aminoethoxydiphenyl borate (2APB), an IP3 receptor inhibitor, produced a marked decrease of mEPSP and spike frequency at low concentration (0.1 mm), without affecting mEPSP size or time course. At high concentration (1 mm), 2APB is reported to block the sarcoplasmic-endoplasmic reticulum Ca2+-ATPase (SERCA pump) and increase [Ca2+]i; at the labyrinthine cytoneural junction, it greatly enhanced the resting and mechanically evoked sensory discharge frequency. The selective agonist of group I metabotropic glutamate receptors (RS)-3,5-dihydroxyphenylglycine (DHPG, 0.6 mm), produced a transient increase in resting mEPSP and spike frequency at the cytoneural junction, with no effects on mEPSP shape or amplitude. Pretreatment with cyclopiazonic acid (CPA, 0.1 mm), a SERCA pump inhibitor, prevented the facilitatory effect of both 2APB and DHPG, suggesting a link between Ca2+ release from intracellular stores and quantal emission. Consistently, diffuse immunoreactivity for IP3 receptors was observed in posterior canal hair cells. Our results indicate the presence and a possibly relevant functional role of IP3-sensitive stores in controlling [Ca2+]i and modulating the vestibular discharge.",
"title": ""
},
{
"docid": "e539580686a1cce0e190845e13315ff5",
"text": "In IPv6 network, before configuring any address, a node must perform Duplicate Address Detection (DAD) to ensure the address is unique on link. However, original DAD is unreliable and vulnerable. In this article, a pull model DAD is designed, which achieves improvements both in reliability and security through changing the solicitation model. Comparing with SEcure Neighbor Discovery (SEND), this proposal has advantage in lightweight overhead and flexibility of address generation. Through evaluation, it is found to be feasible and cost effective.",
"title": ""
},
{
"docid": "e841b5790d69c58982cb2ff5725f96eb",
"text": "Copyright and moral rights to this thesis/research project are retained by the author and/or other copyright owners. The work is supplied on the understanding that any use for commercial gain is strictly forbidden. A copy may be downloaded for personal, non-commercial, research or study without prior permission and without charge. Any use of the thesis/research project for private study or research must be properly acknowledged with reference to the work’s full bibliographic details.",
"title": ""
},
{
"docid": "5ef6299494314f804d67c9baba736718",
"text": "The ARM CoreSight Program Trace Macrocell (PTM) has been widely deployed in recent ARM processors for real-time debugging and tracing of software. Using PTM, the external debugger can extract execution behaviors of applications running on an ARM processor. Recently, some researchers have been using this feature for other purposes, such as fault-tolerant computation and security monitoring. This motivated us to develop an external security monitor that can detect control hijacking attacks, of which the goal is to maliciously manipulate the control flow of victim applications at an attacker’s disposal. This article focuses on detecting a special type of attack called code reuse attacks (CRA), which use a recently introduced technique that allows attackers to perform arbitrary computation without injecting their code by reusing only existing code fragments. Our external monitor is attached to the outside of the host system via the system bus and ARM CoreSight PTM, and is fed with execution traces of a victim application running on the host. As a majority of CRAs violates the normal execution behaviors of a program, our monitor constantly watches and analyzes the execution traces of the victim application and detects a symptom of attacks when the execution behaviors violate certain rules that normal applications are known to adhere. We present two different implementations for this purpose: a hardware-based solution in which all CRA detection components are implemented in hardware, and a hardware/software mixed solution that can be employed in a more resource-constrained environment where the deployment of full hardware-level CRA detection is burdensome.",
"title": ""
},
{
"docid": "150ad4c49d10be14bf2f1a653a245498",
"text": "Code quality metrics are widely used to identify design flaws (e.g., code smells) as well as to act as fitness functions for refactoring recommenders. Both these applications imply a strong assumption: quality metrics are able to assess code quality as perceived by developers. Indeed, code smell detectors and refactoring recommenders should be able to identify design flaws/recommend refactorings that are meaningful from the developer's point-of-view. While such an assumption might look reasonable, there is limited empirical evidence supporting it. We aim at bridging this gap by empirically investigating whether quality metrics are able to capture code quality improvement as perceived by developers. While previous studies surveyed developers to investigate whether metrics align with their perception of code quality, we mine commits in which developers clearly state in the commit message their aim of improving one of four quality attributes: cohesion, coupling, code readability, and code complexity. Then, we use state-of-the-art metrics to assess the change brought by each of those commits to the specific quality attribute it targets. We found that, more often than not the considered quality metrics were not able to capture the quality improvement as perceived by developers (e.g., the developer states \"improved the cohesion of class C\", but no quality metric captures such an improvement).",
"title": ""
},
{
"docid": "51970f8396d52df4986337531c0a10a4",
"text": "The task of visually grounded dialog involves learning goal-oriented cooperative dialog between autonomous agents who exchange information about a scene through several rounds of questions and answers. We posit that requiring agents to adhere to rules of human language while also maximizing information exchange is an ill-posed problem, and observe that humans do not stray from a common language because they are social creatures and have to communicate with many people everyday, and it is far easier to stick to a common language even at the cost of some efficiency loss. Using this as inspiration, we propose and evaluate a multi-agent dialog framework where each agent interacts with, and learns from, multiple agents, and show that this results in more relevant and coherent dialog (as judged by human evaluators) without sacrificing task performance (as judged by quantitative metrics).",
"title": ""
},
{
"docid": "bfa5d103730825ee82f7efdc8c135004",
"text": "The 'default network' is defined as a set of areas, encompassing posterior-cingulate/precuneus, anterior cingulate/mesiofrontal cortex and temporo-parietal junctions, that show more activity at rest than during attention-demanding tasks. Recent studies have shown that it is possible to reliably identify this network in the absence of any task, by resting state functional magnetic resonance imaging connectivity analyses in healthy volunteers. However, the functional significance of these spontaneous brain activity fluctuations remains unclear. The aim of this study was to test if the integrity of this resting-state connectivity pattern in the default network would differ in different pathological alterations of consciousness. Fourteen non-communicative brain-damaged patients and 14 healthy controls participated in the study. Connectivity was investigated using probabilistic independent component analysis, and an automated template-matching component selection approach. Connectivity in all default network areas was found to be negatively correlated with the degree of clinical consciousness impairment, ranging from healthy controls and locked-in syndrome to minimally conscious, vegetative then coma patients. Furthermore, precuneus connectivity was found to be significantly stronger in minimally conscious patients as compared with unconscious patients. Locked-in syndrome patient's default network connectivity was not significantly different from controls. Our results show that default network connectivity is decreased in severely brain-damaged patients, in proportion to their degree of consciousness impairment. Future prospective studies in a larger patient population are needed in order to evaluate the prognostic value of the presented methodology.",
"title": ""
},
{
"docid": "78e2bf9977034d738d5058618519b86e",
"text": "Graphical user interfaces are difficult to implement because of the essential concurrency among multiple interaction devices, such as mice, buttons, and keyboards. Squeak is a user interface implementation language that exploits this concurrency rather than hiding it, helping the programmer to express interactions using multiple devices. We present the motivation, design and semantics of squeak. The language is based on concurrent programming constructs but can be compiled into a conventional sequential language; our implementation generates C code. We discuss how squeak programs can be integrated into a graphics system written in a conventional language to implement large but regular user interfaces, and close with a description of the formal semantics.",
"title": ""
},
{
"docid": "de4b2f6ff87b254a68ecd4a7b5318d66",
"text": "Many scholars see entrepreneurs as action-oriented individuals who use rules of thumb and other mental heuristics to make decisions, but who do little systematic planning and analysis. We argue that what distinguishes successful from unsuccessful entrepreneurs is precisely that the former vary their decisionmaking styles, sometimes relying on heuristics and sometimes relying on systematic analysis. In our proposed framework, successful entrepreneurs assess their level of expertise and the level of ambiguity in a particular decision context and then tailor their decision-making process to reduce risk.",
"title": ""
},
{
"docid": "c159f32bda951cf15a886ff27b4aef8c",
"text": "We consider the problem of using image queries to retrieve videos from a database. Our focus is on large-scale applications, where it is infeasible to index each database video frame independently. Our main contribution is a framework based on Bloom filters, which can be used to index long video segments, enabling efficient image-to-video comparisons. Using this framework, we investigate several retrieval architectures, by considering different types of aggregation and different functions to encode visual information – these play a crucial role in achieving high performance. Extensive experiments show that the proposed technique improves mean average precision by 24% on a public dataset, while being 4× faster, compared to the previous state-of-the-art.",
"title": ""
},
{
"docid": "fdefc782f0438f4451c91d3b96e27b0b",
"text": "Abstract: The roles played by learning and memorization represent an important topic in deep learning research. Recent work on this subject has shown that the optimization behavior of DNNs trained on shuffled labels is qualitatively different from DNNs trained with real labels. Here, we propose a novel permutation approach that can differentiate memorization from learning in deep neural networks (DNNs) trained as usual (i.e., using the real labels to guide the learning, rather than shuffled labels). The evaluation of weather the DNN has learned and/or memorized, happens in a separate step where we compare the predictive performance of a shallow classifier trained with the features learned by the DNN, against multiple instances of the same classifier, trained on the same input, but using shuffled labels as outputs. By evaluating these shallow classifiers in validation sets that share structure with the training set, we are able to tell apart learning from memorization. Application of our permutation approach to multi-layer perceptrons and convolutional neural networks trained on image data corroborated many findings from other groups. Most importantly, our illustrations also uncovered interesting dynamic patterns about how DNNs memorize over increasing numbers of training epochs, and support the surprising result that DNNs are still able to learn, rather than only memorize, when trained with pure Gaussian noise as input.",
"title": ""
},
{
"docid": "196ee1f209a29d56d4c9a6922c6cbb6e",
"text": "This article reviews the growing body of research on electronic commerce from the perspective of economic analysis. It begins by constructing a new framework for understanding electronic commerce research, then identifies the range of applicable theory and current research in the context of the new conceptual model. It goes on to assess the state-of-the-art of knowledge about electronic commerce phenomena in terms of the levels of analysis here proposed. And finally, it charts the directions along which useful work in this area might be developed. This survey and framework are intended to induce researchers in the field of information systems, the authors’ reference discipline, and other areas in schools of business and management to recognize that research on electronic commerce is business-school research, broadly defined. As such, developments in this research area in the next several years will occur across multiple business-school disciplines, and there will be a growing impetus for greater interdisciplinary communication and interaction.",
"title": ""
},
{
"docid": "5309af9cf135b8eb3c2ff633ea0bd192",
"text": "Diameter at breast height has been estimated from mobile laser scanning using a new set of methods. A 2D laser scanner was mounted facing forward, tilted nine degrees downwards, on a car. The trajectory was recorded using inertial navigation and visual SLAM (simultaneous localization and mapping). The laser scanner data, the trajectory and the orientation were used to calculate a 3D point cloud. Clusters representing trees were extracted line-wise to reduce the effects of uncertainty in the positioning system. The intensity of the laser echoes was used to filter out unreliable echoes only grazing a stem. The movement was used to obtain measurements from a larger part of the stem, and multiple lines from different views were used for the circle fit. Two trigonometric methods and two circle fit methods were tested. The best results with bias 2.3% (6 mm) and root mean squared error 14% (37 mm) were acquired with the circle fit on multiple 2D projected clusters. The method was evaluated compared to field data at five test areas with approximately 300 caliper-measured trees within a 10-m working range. The results show that this method is viable for stem measurements from a moving vehicle, for example a forest harvester.",
"title": ""
},
{
"docid": "6acb7aa3228dd128266438d0ae3ed22a",
"text": "Purpose: of this paper is to introduce the reader to the characteristics of PDCA tool and Six Sigma (DMAIC, DFSS) techniques and EFQM Excellence Model (RADAR matrix), which are possible to use for the continuous quality improvement of products, processes and services in organizations. Design/methodology/approach: We compared the main characteristics of the presented methodologies aiming to show the main prerequisites, differences, strengths and limits in their application. Findings: Depending on the purpose every organization will have to find a proper way and a combination of methodologies in its implementation process. The PDCA cycle is a well known fundamental concept of continuousimprovement processes, RADAR matrix provides a structured approach assessing the organizational performance, DMAIC is a systematic, and fact based approach providing framework of results-oriented project management, DFSS is a systematic approach to new products or processes design focusing on prevent activities. Research limitations/implications: This paper provides general information and observations on four presented methodologies. Further research could be done towards more detailed study of characteristics and positive effects of these methodologies. Practical implications: The paper presents condensed presentation of main characteristics, strengths and limitations of presented methodologies. Our findings could be used as solid information for management decisions about the introduction of various quality programmes. Originality/value: We compared four methodologies and showed their main characteristics and differences. We showed that some methodologies are more simple and therefore easily to understand and introduce (e.g. PDCA cycle). On the contrary Six Sigma and EFQM Excellence model are more complex and demanding methodologies and therefore need more time and resources for their proper implementation.",
"title": ""
}
] |
scidocsrr
|
ac3a6a404ce0424b5bf6df7df64aee65
|
Face Recognition Under Varying Illumination
|
[
{
"docid": "4f58172c8101b67b9cd544b25d09f2e2",
"text": "For years, researchers in face recognition area have been representing and recognizing faces based on subspace discriminant analysis or statistical learning. Nevertheless, these approaches are always suffering from the generalizability problem. This paper proposes a novel non-statistics based face representation approach, local Gabor binary pattern histogram sequence (LGBPHS), in which training procedure is unnecessary to construct the face model, so that the generalizability problem is naturally avoided. In this approach, a face image is modeled as a \"histogram sequence\" by concatenating the histograms of all the local regions of all the local Gabor magnitude binary pattern maps. For recognition, histogram intersection is used to measure the similarity of different LGBPHSs and the nearest neighborhood is exploited for final classification. Additionally, we have further proposed to assign different weights for each histogram piece when measuring two LGBPHSes. Our experimental results on AR and FERET face database show the validity of the proposed approach especially for partially occluded face images, and more impressively, we have achieved the best result on FERET face database.",
"title": ""
}
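As a rough illustration of the representation and matching described above, the sketch below builds plain LBP region histograms and compares them with histogram intersection. It deliberately omits the multi-scale Gabor magnitude maps of the full LGBPHS pipeline, and the 64x64 random arrays stand in for aligned face crops; none of the parameter choices come from the paper.

```python
import numpy as np

def lbp_codes(img):
    """8-neighbour LBP code for each interior pixel of a 2D grayscale array."""
    h, w = img.shape
    centre = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(centre, dtype=np.int32)
    for bit, (dy, dx) in enumerate(shifts):
        neighbour = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neighbour >= centre).astype(np.int32) << bit
    return codes

def region_histograms(img, grid=(4, 4)):
    """Concatenate per-region LBP histograms (the 'histogram sequence' idea)."""
    codes = lbp_codes(img)
    rh, rw = codes.shape[0] // grid[0], codes.shape[1] // grid[1]
    hists = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            patch = codes[i * rh:(i + 1) * rh, j * rw:(j + 1) * rw]
            hist, _ = np.histogram(patch, bins=256, range=(0, 256))
            hists.append(hist / max(hist.sum(), 1))
    return np.concatenate(hists)

def histogram_intersection(h1, h2):
    return np.minimum(h1, h2).sum()

# Toy usage with random arrays standing in for aligned face crops.
face_a, face_b = np.random.rand(64, 64), np.random.rand(64, 64)
print(histogram_intersection(region_histograms(face_a), region_histograms(face_b)))
```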
] |
[
{
"docid": "e07e56bbfb8657d52c84a6d67e750972",
"text": "The Internet of Things (IoT) is emerging as a significant development in information technology, with the potential to increase convenience and efficiency in daily life. While the number of IoT service users has increased dramatically, little is understood about what motivates the continued use of such services. The primary objective of this study is to develop and refine a conceptual framework from the perspective of network externalities and privacy to provide a theoretical understanding of the motivations that drive continued use of IoT services. The proposed model was empirically evaluated using survey data collected from 508 users concerning their perceptions of IoT services. The results indicate network externalities play a significant role in influencing consumers' perception of usage benefits and thus adoption, whereas privacy concerns have a relatively weak effect on adoption. Implications for IS researchers and practice",
"title": ""
},
{
"docid": "6844473b57606198066406c540f642a4",
"text": "Physical activity (PA) during physical education is important for health purposes and for developing physical fitness and movement skills. To examine PA levels and how PA was influenced by environmental and instructor-related characteristics, we assessed children’s activity during 368 lessons taught by 105 physical education specialists in 42 randomly selected schools in Hong Kong. Trained observers used SOFIT in randomly selected classes, grades 4–6, during three climatic seasons. Results indicated children’s PA levels met the U.S. Healthy People 2010 objective of 50% engagement time and were higher than comparable U.S. populations. Multiple regression analyses revealed that temperature, teacher behavior, and two lesson characteristics (subject matter and mode of delivery) were significantly associated with the PA levels. Most of these factors are modifiable, and changes could improve the quantity and intensity of children’s PA.",
"title": ""
},
{
"docid": "7e40c98b9760e1f47a0140afae567b7f",
"text": "Low-level saliency cues or priors do not produce good enough saliency detection results especially when the salient object presents in a low-contrast background with confusing visual appearance. This issue raises a serious problem for conventional approaches. In this paper, we tackle this problem by proposing a multi-context deep learning framework for salient object detection. We employ deep Convolutional Neural Networks to model saliency of objects in images. Global context and local context are both taken into account, and are jointly modeled in a unified multi-context deep learning framework. To provide a better initialization for training the deep neural networks, we investigate different pre-training strategies, and a task-specific pre-training scheme is designed to make the multi-context modeling suited for saliency detection. Furthermore, recently proposed contemporary deep models in the ImageNet Image Classification Challenge are tested, and their effectiveness in saliency detection are investigated. Our approach is extensively evaluated on five public datasets, and experimental results show significant and consistent improvements over the state-of-the-art methods.",
"title": ""
},
{
"docid": "38fd6a2b2ea49fda599a70ec7e803cde",
"text": "The role of trace elements in biological systems has been described in several animals. However, the knowledge in fish is mainly limited to iron, copper, manganese, zinc and selenium as components of body fluids, cofactors in enzymatic reactions, structural units of non-enzymatic macromolecules, etc. Investigations in fish are comparatively complicated as both dietary intake and waterborne mineral uptake have to be considered in determining the mineral budgets. The importance of trace minerals as essential ingredients in diets, although in small quantities, is also evident in fish.",
"title": ""
},
{
"docid": "322161b4a43b56e4770d239fe4d2c4c0",
"text": "Graph pattern matching has become a routine process in emerging applications such as social networks. In practice a data graph is typically large, and is frequently updated with small changes. It is often prohibitively expensive to recompute matches from scratch via batch algorithms when the graph is updated. With this comes the need for incremental algorithms that compute changes to the matches in response to updates, to minimize unnecessary recomputation. This paper investigates incremental algorithms for graph pattern matching defined in terms of graph simulation, bounded simulation and subgraph isomorphism. (1) For simulation, we provide incremental algorithms for unit updates and certain graph patterns. These algorithms are optimal: in linear time in the size of the changes in the input and output, which characterizes the cost that is inherent to the problem itself. For general patterns we show that the incremental matching problem is unbounded, i.e., its cost is not determined by the size of the changes alone. (2) For bounded simulation, we show that the problem is unbounded even for unit updates and path patterns. (3) For subgraph isomorphism, we show that the problem is intractable and unbounded for unit updates and path patterns. (4) For multiple updates, we develop an incremental algorithm for each of simulation, bounded simulation and subgraph isomorphism. We experimentally verify that these incremental algorithms significantly outperform their batch counterparts in response to small changes, using real-life data and synthetic data.",
"title": ""
},
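For readers unfamiliar with the matching semantics discussed above, the following sketch computes the maximal graph-simulation relation by naive fixpoint refinement. The dictionary-based graph encoding and node labels are illustrative assumptions; the paper's actual contribution is the incremental maintenance of such a relation under updates, which this batch sketch does not show.

```python
def graph_simulation(pattern, data, p_label, d_label):
    """pattern/data: node -> set of successors; returns the maximal simulation relation."""
    sim = {u: {v for v in data if d_label[v] == p_label[u]} for u in pattern}
    changed = True
    while changed:
        changed = False
        for u, succs in pattern.items():
            for u2 in succs:
                # v can simulate u only if some successor of v simulates u2.
                keep = {v for v in sim[u] if data[v] & sim[u2]}
                if keep != sim[u]:
                    sim[u], changed = keep, True
    return sim

# Toy example: pattern edge A -> B matched against a small labelled data graph.
pattern = {"p1": {"p2"}, "p2": set()}
p_label = {"p1": "A", "p2": "B"}
data = {1: {2}, 2: set(), 3: {2}, 4: set()}
d_label = {1: "A", 2: "B", 3: "A", 4: "B"}
print(graph_simulation(pattern, data, p_label, d_label))
```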
{
"docid": "51a885077a141ee6e58ce4551a4c3c93",
"text": "We have developed a mobile augmented reality application with historical photographs and information about a historical street. We follow a design science research methodology and use an extended version of the technology acceptance model (TAM) to study the acceptance of this application. A prototype has been developed in accordance with general principles for usability design, and two surveys have been conducted. A web survey with 200 participants that watched a short video demonstration of the application to validate the adapted acceptance model, and a street survey, where 42 participants got the opportunity to try the application in a live setting before answering a similar questionnaire and provide more concrete feedback. The results show that both perceived usefulness and perceived enjoyment has a direct impact on the intention to use mobile augmented reality applications with historical pictures and information. Further a number of practical recommendations for the development and deployment of such systems are provided.",
"title": ""
},
{
"docid": "ff7b8957aeedc0805f972bf5bd6923f0",
"text": "This study was designed to test the Fundamental Difference Hypothesis (Bley-Vroman, 1988), which states that, whereas children are known to learn language almost completely through (implicit) domain-specific mechanisms, adults have largely lost the ability to learn a language without reflecting on its structure and have to use alternative mechanisms, drawing especially on their problem-solving capacities, to learn a second language. The hypothesis implies that only adults with a high level of verbal analytical ability will reach near-native competence in their second language, but that this ability will not be a significant predictor of success for childhood second language acquisition. A study with 57 adult Hungarian-speaking immigrants confirmed the hypothesis in the sense that very few adult immigrants scored within the range of child arrivals on a grammaticality judgment test, and that the few who did had high levels of verbal analytical ability; this ability was not a significant predictor for childhood arrivals. This study replicates the findings of Johnson and Newport (1989) and provides an explanation for the apparent exceptions in their study. These findings lead to a reconceptualization of the Critical Period Hypothesis: If the scope of this hypothesis is lim-",
"title": ""
},
{
"docid": "d29ca3ca682433a9ea6172622d12316c",
"text": "The phenomenon of a phantom limb is a common experience after a limb has been amputated or its sensory roots have been destroyed. A complete break of the spinal cord also often leads to a phantom body below the level of the break. Furthermore, a phantom of the breast, the penis, or of other innervated body parts is reported after surgical removal of the structure. A substantial number of children who are born without a limb feel a phantom of the missing part, suggesting that the neural network, or 'neuromatrix', that subserves body sensation has a genetically determined substrate that is modified by sensory experience.",
"title": ""
},
{
"docid": "b1e431f48c52a267c7674b5526d9ee23",
"text": "Publish/subscribe is a distributed interaction paradigm well adapted to the deployment of scalable and loosely coupled systems.\n Apache Kafka and RabbitMQ are two popular open-source and commercially-supported pub/sub systems that have been around for almost a decade and have seen wide adoption. Given the popularity of these two systems and the fact that both are branded as pub/sub systems, two frequently asked questions in the relevant online forums are: how do they compare against each other and which one to use?\n In this paper, we frame the arguments in a holistic approach by establishing a common comparison framework based on the core functionalities of pub/sub systems. Using this framework, we then venture into a qualitative and quantitative (i.e. empirical) comparison of the common features of the two systems. Additionally, we also highlight the distinct features that each of these systems has. After enumerating a set of use cases that are best suited for RabbitMQ or Kafka, we try to guide the reader through a determination table to choose the best architecture given his/her particular set of requirements.",
"title": ""
},
{
"docid": "d7aec74465931a52e9cda65de38b1fb7",
"text": "As the use of mobile devices becomes increasingly ubiquitous, the need for systematically testing applications (apps) that run on these devices grows more and more. However, testing mobile apps is particularly expensive and tedious, often requiring substantial manual effort. While researchers have made much progress in automated testing of mobile apps during recent years, a key problem that remains largely untracked is the classic oracle problem, i.e., to determine the correctness of test executions. This paper presents a novel approach to automatically generate test cases, that include test oracles, for mobile apps. The foundation for our approach is a comprehensive study that we conducted of real defects in mobile apps. Our key insight, from this study, is that there is a class of features that we term user-interaction features, which is implicated in a significant fraction of bugs and for which oracles can be constructed - in an application agnostic manner -- based on our common understanding of how apps behave. We present an extensible framework that supports such domain specific, yet application agnostic, test oracles, and allows generation of test sequences that leverage these oracles. Our tool embodies our approach for generating test cases that include oracles. Experimental results using 6 Android apps show the effectiveness of our tool in finding potentially serious bugs, while generating compact test suites for user-interaction features.",
"title": ""
},
{
"docid": "c4df97f3db23c91f0ce02411d2e1e999",
"text": "One important challenge for probabilistic logics is reasoning with very large knowledge bases (KBs) of imperfect information, such as those produced by modern web-scale information extraction systems. One scalability problem shared by many probabilistic logics is that answering queries involves “grounding” the query—i.e., mapping it to a propositional representation—and the size of a “grounding” grows with database size. To address this bottleneck, we present a first-order probabilistic language called ProPPR in which approximate “local groundings” can be constructed in time independent of database size. Technically, ProPPR is an extension to stochastic logic programs that is biased towards short derivations; it is also closely related to an earlier relational learning algorithm called the path ranking algorithm. We show that the problem of constructing proofs for this logic is related to computation of personalized PageRank on a linearized version of the proof space, and based on this connection, we develop a provably-correct approximate grounding scheme, based on the PageRank–Nibble algorithm. Building on this, we develop a fast and easily-parallelized weight-learning algorithm for ProPPR. In our experiments, we show that learning for ProPPR is orders of magnitude faster than learning for Markov logic networks; that allowing mutual recursion (joint learning) in KB inference leads to improvements in performance; and that ProPPR can learn weights for a mutually recursive program with hundreds of clauses defining scores of interrelated predicates over a KB containing one million entities.",
"title": ""
},
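ProPPR's local grounding is described above as a PageRank-Nibble-style computation of approximate personalized PageRank. The sketch below shows the generic residual-push approximation on a toy adjacency-list graph; the graph, seed, and thresholds are illustrative assumptions rather than ProPPR's actual proof-graph machinery.

```python
def approx_ppr(graph, seed, alpha=0.15, eps=1e-4):
    """Residual-push approximation of personalized PageRank from `seed`.

    graph: node -> list of out-neighbours. Returns a sparse dict of PPR mass."""
    p, r, frontier = {}, {seed: 1.0}, [seed]
    while frontier:
        u = frontier.pop()
        deg = max(len(graph[u]), 1)
        if r.get(u, 0.0) < eps * deg:
            continue                       # residual already below the push threshold
        ru = r.pop(u)
        p[u] = p.get(u, 0.0) + alpha * ru  # keep the teleport share at u
        share = (1.0 - alpha) * ru / deg   # spread the rest over u's neighbours
        for v in graph[u]:
            r[v] = r.get(v, 0.0) + share
            if r[v] >= eps * max(len(graph[v]), 1):
                frontier.append(v)
    return p

toy_graph = {"q": ["a", "b"], "a": ["q"], "b": ["q", "c"], "c": ["b"]}
print(approx_ppr(toy_graph, seed="q"))
```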
{
"docid": "eb9459d0eb18f0e49b3843a6036289f9",
"text": "Experimental research has had a long tradition in psychology and education. When psychology emerged as an infant science during the 1900s, it modeled its research methods on the established paradigms of the physical sciences, which for centuries relied on experimentation to derive principals and laws. Subsequent reliance on experimental approaches was strengthened by behavioral approaches to psychology and education that predominated during the first half of this century. Thus, usage of experimentation in educational technology over the past 40 years has been influenced by developments in theory and research practices within its parent disciplines. In this chapter, we examine practices, issues, and trends related to the application of experimental research methods in educational technology. The purpose is to provide readers with sufficient background to understand and evaluate experimental designs encountered in the literature and to identify designs that will effectively address questions of interest in their own research. In an introductory section, we define experimental research, differentiate it from alternative approaches, and identify important concepts in its use (e.g., internal vs. external validity). We also suggest procedures for conducting experimental studies and publishing them in educational technology research journals. Next, we analyze uses of experimental methods by instructional researchers, extending the analyses of three decades ago by Clark and Snow (1975). In the concluding section, we turn to issues in using experimental research in educational technology, to include balancing internal and external validity, using multiple outcome measures to assess learning processes and products, using item responses vs. aggregate scores as dependent variables, reporting effect size as a complement to statistical significance, and media replications vs. media comparisons.",
"title": ""
},
{
"docid": "ab75cb747666f6b115a94f1dfb627d63",
"text": "Over the last years, Enterprise Social Networks (ESN) have gained increasing attention both in academia and practice, resulting in a large number of publications dealing with ESN. Among them is a large number of case studies describing the benefits of ESN in each individual case. Based on the different research objects they focus, various benefits are described. However, an overview of the benefits achieved by using ESN is missing and will, thus, be elaborated in this article (research question 1). Further, we cluster the identified benefits to more generic categories and finally classify them to the capabilities of traditional IT as presented by Davenport and Short (1990) to determine if new capabilities of IT arise using ESN (research question 2). To address our research questions, we perform a qualitative content analysis on 37 ESN case studies. As a result, we identify 99 individual benefits, classify them to the capabilities of traditional IT, and define a new IT capability named Social Capital. Our results can, e.g., be used to align and expand current ESN success measurement approaches.",
"title": ""
},
{
"docid": "512ecda05fae6cb333c89833c489dbff",
"text": "This review examines protein complexes in the Brookhaven Protein Databank to gain a better understanding of the principles governing the interactions involved in protein-protein recognition. The factors that influence the formation of protein-protein complexes are explored in four different types of protein-protein complexes--homodimeric proteins, heterodimeric proteins, enzyme-inhibitor complexes, and antibody-protein complexes. The comparison between the complexes highlights differences that reflect their biological roles.",
"title": ""
},
{
"docid": "4fcea2e99877dedc419893313c1baea4",
"text": "A cardiac circumstance affected through irregular electrical action of the heart is called an arrhythmia. A noninvasive method called Electrocardiogram (ECG) is used to diagnosis arrhythmias or irregularities of the heart. The difficulty encountered by doctors in the analysis of heartbeat irregularities id due to the non-stationary of ECG signal, the existence of noise and the abnormality of the heartbeat. The computer-assisted study of ECG signal supports doctors to diagnoses diseases of cardiovascular. The major limitations of all the ECG signal analysis of arrhythmia detection are because to the non-stationary behavior of the ECG signals and unobserved information existent in the ECG signals. In addition, detection based on Extreme learning machine (ELM) has become a common technique in machine learning. However, it easily suffers from overfitting. This paper proposes a hybrid classification technique using Bayesian and Extreme Learning Machine (B-ELM) technique for heartbeat recognition of arrhythmia detection AD. The proposed technique is capable of detecting arrhythmia classes with a maximum accuracy of (98.09%) and less computational time about 2.5s.",
"title": ""
},
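To make the classifier family concrete, here is a minimal extreme learning machine sketch: a fixed random hidden layer followed by closed-form, ridge-regularized output weights. The Bayesian extension (B-ELM) and the ECG-derived features used in the paper are not reproduced; the synthetic two-class data is a placeholder.

```python
import numpy as np

class ELMClassifier:
    def __init__(self, n_hidden=100, reg=1e-2, seed=0):
        self.n_hidden, self.reg, self.rng = n_hidden, reg, np.random.default_rng(seed)

    def fit(self, X, y):
        n_features = X.shape[1]
        self.W = self.rng.normal(size=(n_features, self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)                 # random hidden activations
        Y = np.eye(int(y.max()) + 1)[y]                  # one-hot targets
        # Ridge-regularized least squares for the output weights.
        self.beta = np.linalg.solve(H.T @ H + self.reg * np.eye(self.n_hidden), H.T @ Y)
        return self

    def predict(self, X):
        return np.argmax(np.tanh(X @ self.W + self.b) @ self.beta, axis=1)

# Toy usage with synthetic two-class "beats"; real inputs would be ECG-derived features.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
print((ELMClassifier().fit(X, y).predict(X) == y).mean())
```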
{
"docid": "8be72e103853aeac601aa65b61b98fd2",
"text": "Opinion surveys usually employ multiple items to measure the respondent’s underlying value, belief, or attitude. To analyze such types of data, researchers have often followed a two-step approach by first constructing a composite measure and then using it in subsequent analysis. This paper presents a class of hierarchical item response models that help integrate measurement and analysis. In this approach, individual responses to multiple items stem from a latent preference, of which both the mean and variance may depend on observed covariates. Compared with the two-step approach, the hierarchical approach reduces bias, increases efficiency, and facilitates direct comparison across surveys covering different sets of items. Moreover, it enables us to investigate not only how preferences differ among groups, vary across regions, and evolve over time, but also levels, patterns, and trends of attitude polarization and ideological constraint. An open-source R package, hIRT, is available for fitting the proposed models. ∗Direct all correspondence to Xiang Zhou, Department of Government, Harvard University, 1737 Cambridge Street, Cambridge, MA 02138, USA; email: xiang [email protected]. The author thanks Kenneth Bollen, Bryce Corrigan, Ryan Enos, Max Goplerud, Gary King, Jonathan Kropko, Horacio Larreguy, Jie Lv, Christoph Mikulaschek, Barum Park, Pia Raffler, Yunkyu Sohn, Yu-Sung Su, Dustin Tingley, Yuhua Wang, Yu Xie, and Kazuo Yamaguchi for helpful comments on previous versions of this work.",
"title": ""
},
{
"docid": "ba69b4c09bbcd6cfd50632a8d4bea877",
"text": "In this report we consider the current status of the coverage of computer science in education at the lowest levels of education in multiple countries. Our focus is on computational thinking (CT), a term meant to encompass a set of concepts and thought processes that aid in formulating problems and their solutions in different fields in a way that could involve computers [130].\n The main goal of this report is to help teachers, those involved in teacher education, and decision makers to make informed decisions about how and when CT can be included in their local institutions. We begin by defining CT and then discuss the current state of CT in K-9 education in multiple countries in Europe as well as the United States. Since many students are exposed to CT outside of school, we also discuss the current state of informal educational initiatives in the same set of countries.\n An important contribution of the report is a survey distributed to K-9 teachers, aiming at revealing to what extent different aspects of CT are already part of teachers' classroom practice and how this is done. The survey data suggest that some teachers are already involved in activities that have strong potential for introducing some aspects of CT. In addition to the examples given by teachers participating in the survey, we present some additional sample activities and lesson plans for working with aspects of CT in different subjects. We also discuss ways in which teacher training can be coordinated as well as the issue of repositories. We conclude with future directions for research in CT at school.",
"title": ""
},
{
"docid": "3b09a6442c408601bf65078910c1ff46",
"text": "Eukaryotic cells respond to unfolded proteins in their endoplasmic reticulum (ER stress), amino acid starvation, or oxidants by phosphorylating the alpha subunit of translation initiation factor 2 (eIF2alpha). This adaptation inhibits general protein synthesis while promoting translation and expression of the transcription factor ATF4. Atf4(-/-) cells are impaired in expressing genes involved in amino acid import, glutathione biosynthesis, and resistance to oxidative stress. Perk(-/-) cells, lacking an upstream ER stress-activated eIF2alpha kinase that activates Atf4, accumulate endogenous peroxides during ER stress, whereas interference with the ER oxidase ERO1 abrogates such accumulation. A signaling pathway initiated by eIF2alpha phosphorylation protects cells against metabolic consequences of ER oxidation by promoting the linked processes of amino acid sufficiency and resistance to oxidative stress.",
"title": ""
},
{
"docid": "b40b97410d0cd086118f0980d0f52867",
"text": "In smart cities, commuters have the opportunities for smart routing that may enable selecting a route with less car accidents, or one that is more scenic, or perhaps a straight and flat route. Such smart personalization requires a data management framework that goes beyond a static road network graph. This paper introduces PreGo, a novel system developed to provide real time personalized routing. The recommended routes by PreGo are smart and personalized in the sense of being (1) adjustable to individual users preferences, (2) subjective to the trip start time, and (3) sensitive to changes of the road conditions. Extensive experimental evaluation using real and synthetic data demonstrates the efficiency of the PreGo system.",
"title": ""
}
] |
scidocsrr
|
1a9d2e2d9793dc4608f79f421cd806a0
|
Privacy, identity and security in ambient intelligence: A scenario analysis
|
[
{
"docid": "66da54da90bbd252386713751cec7c67",
"text": "A cyber world (CW) is a digitized world created on cyberspaces inside computers interconnected by networks including the Internet. Following ubiquitous computers, sensors, e-tags, networks, information, services, etc., is a road towards a smart world (SW) created on both cyberspaces and real spaces. It is mainly characterized by ubiquitous intelligence or computational intelligence pervasion in the physical world filled with smart things. In recent years, many novel and imaginative researcheshave been conducted to try and experiment a variety of smart things including characteristic smart objects and specific smart spaces or environments as well as smart systems. The next research phase to emerge, we believe, is to coordinate these diverse smart objects and integrate these isolated smart spaces together into a higher level of spaces known as smart hyperspace or hyper-environments, and eventually create the smart world. In this paper, we discuss the potential trends and related challenges toward the smart world and ubiquitous intelligence from smart things to smart spaces and then to smart hyperspaces. Likewise, we show our efforts in developing a smart hyperspace of ubiquitous care for kids, called UbicKids.",
"title": ""
}
] |
[
{
"docid": "8b773175bc7c1830958373dd45f56b6c",
"text": "Code-Mixing (CM) is a natural phenomenon observed in many multilingual societies and is becoming the preferred medium of expression and communication in online and social media fora. In spite of this, current Question Answering (QA) systems do not support CM and are only designed to work with a single interaction language. This assumption makes it inconvenient for multi-lingual users to interact naturally with the QA system especially in scenarios where they do not know the right word in the target language. In this paper, we present WebShodh an end-end web-based Factoid QA system for CM languages. We demonstrate our system with two CM language pairs: Hinglish (Matrix language: Hindi, Embedded language: English) and Tenglish (Matrix language: Telugu, Embedded language: English). Lack of language resources such as annotated corpora, POS taggers or parsers for CM languages poses a huge challenge for automated processing and analysis. In view of this resource scarcity, we only assume the existence of bi-lingual dictionaries from the matrix languages to English and use it for lexically translating the question into English. Later, we use this loosely translated question for our downstream analysis such as Answer Type(AType) prediction, answer retrieval and ranking. Evaluation of our system reveals that we achieve an MRR of 0.37 and 0.32 for Hinglish and Tenglish respectively. We hosted this system online and plan to leverage it for collecting more CM questions and answers data for further improvement.",
"title": ""
},
{
"docid": "2683c65d587e8febe45296f1c124e04d",
"text": "We present a new autoencoder-type architecture, that is trainable in an unsupervised mode, sustains both generation and inference, and has the quality of conditional and unconditional samples boosted by adversarial learning. Unlike previous hybrids of autoencoders and adversarial networks, the adversarial game in our approach is set up directly between the encoder and the generator, and no external mappings are trained in the process of learning. The game objective compares the divergences of each of the real and the generated data distributions with the canonical distribution in the latent space. We show that direct generator-vs-encoder game leads to a tight coupling of the two components, resulting in samples and reconstructions of a comparable quality to some recently-proposed more complex architectures.",
"title": ""
},
{
"docid": "167e807e546e437d3ad1c8790a849cba",
"text": "One-way accumulators, introduced by Benaloh and de Mare, can be used to accumulate a large number of values into a single one, which can then be used to authenticate every input value without the need to transmit the others. However, the one-way property does is not suucient for all applications. In this paper, we generalize the deenition of accumulators and deene and construct a collision-free subtype. As an application, we construct a fail-stop signature scheme in which many one-time public keys are accumulated into one short public key. In contrast to previous constructions with tree authentication, the length of both this public key and the signatures can be independent of the number of messages that can be signed.",
"title": ""
},
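A hedged sketch of the classic modular-exponentiation accumulator in the Benaloh and de Mare line of work referenced above: the toy primes, base, and item values are illustrative only, and a real collision-free construction imposes further conditions (a large modulus with secret factorization, accumulating primes, and so on).

```python
# All values below are toy numbers; a real deployment needs a large RSA modulus
# whose factorization is kept secret, and (for collision freeness) prime items.
p, q = 104729, 1299709
n = p * q
g = 65537                      # public base
items = [3, 7, 11, 19]         # values to accumulate

def accumulate(base, values, modulus):
    acc = base
    for v in values:
        acc = pow(acc, v, modulus)
    return acc

acc = accumulate(g, items, n)

# Witness for one item: accumulate everything except that item.
x = items[1]
witness = accumulate(g, [v for v in items if v != x], n)

# Anyone holding (acc, witness, x) can check membership without the other items.
assert pow(witness, x, n) == acc
print("membership of", x, "verified against the accumulator")
```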
{
"docid": "1176abf11f866dda3a76ce080df07c05",
"text": "Google Flu Trends can detect regional outbreaks of influenza 7-10 days before conventional Centers for Disease Control and Prevention surveillance systems. We describe the Google Trends tool, explain how the data are processed, present examples, and discuss its strengths and limitations. Google Trends shows great promise as a timely, robust, and sensitive surveillance system. It is best used for surveillance of epidemics and diseases with high prevalences and is currently better suited to track disease activity in developed countries, because to be most effective, it requires large populations of Web search users. Spikes in search volume are currently hard to interpret but have the benefit of increasing vigilance. Google should work with public health care practitioners to develop specialized tools, using Google Flu Trends as a blueprint, to track infectious diseases. Suitable Web search query proxies for diseases need to be established for specialized tools or syndromic surveillance. This unique and innovative technology takes us one step closer to true real-time outbreak surveillance.",
"title": ""
},
{
"docid": "477a8601e824139829568a154934b6cd",
"text": "Understanding noun compounds is the challenge that drew me to study computational linguistics. Think about how just two words, side by side, evoke a whole story: cacao seeds evokes the tree on which the cacao seeds grow, and to understand cacao powder we need to also imagine the seeds of the cacao tree that are crushed to powder. What conjures up these concepts of tree and grow, and seeds and crush, which are not explicitly present in the written word but are essential for our complete understanding of the compounds? The mechanisms by which we make sense of noun compounds can illuminate how we understand language more generally. And because the human mind is so wily as to provide interpretations even when we do not ask it to, I have always found it useful to study these phenomena of language on the computer, because the computer surely does not (yet) have the type of knowledge that must be brought to bear on the problem. If you find these phenomena equally intriguing and puzzling, then you will find this book by Nastase, Nakov, Ó Séaghdga, and Szpakowicz a wonderful summary of past research efforts and a good introduction to the current methods for analyzing semantic relations. To be clear, this book is not only about noun compounds, but explores all types of relations that can hold between what is expressed linguistically as nominal. Such nominals include entities (e.g., Godiva, Belgium) as well as nominals that refer to events (cultivation, roasting) and nominals with complex structure (delicious milk chocolate). In doing so, describing the different semantic relations between chocolate in the 20th century and chocolate in Belgium is within the scope of this book. This is a wise choice as there are then some linguistic cues that will help define and narrow the types of semantic relations (e.g., the prepositions above). Noun compounds are degenerate in the sense that there are few if any overt linguistic cues as to the semantic relations between the nominals.",
"title": ""
},
{
"docid": "e66bc39948ad53767971d444ecff82dd",
"text": "Face processing has several distinctive hallmarks that researchers have attributed either to face-specific mechanisms or to extensive experience distinguishing faces. Here, we examined the face-processing hallmark of selective attention failure--as indexed by the congruency effect in the composite paradigm--in a domain of extreme expertise: chess. Among 27 experts, we found that the congruency effect was equally strong with chessboards and faces. Further, comparing these experts with recreational players and novices, we observed a trade-off: Chess expertise was positively related to the congruency effect with chess yet negatively related to the congruency effect with faces. These and other findings reveal a case of expertise-dependent, facelike processing of objects of expertise and suggest that face and expert-chess recognition share common processes.",
"title": ""
},
{
"docid": "2cbf690c565c6a201d4d8b6bda20b766",
"text": "Visualizations that can handle flat files, or simple table data are most often used in data mining. In this paper we survey most visualizations that can handle more than three dimensions and fit our definition of Table Visualizations. We define Table Visualizations and some additional terms needed for the Table Visualization descriptions. For a preliminary evaluation of some of these visualizations see “Benchmark Development for the Evaluation of Visualization for Data Mining” also included in this volume. Data Sets Used Most of the datasets for the visualization examples are either the automobile or the Iris flower dataset. Nearly every data mining package comes with at least one of these two datasets. The datasets are available UC Irvine Machine Learning Repository [Uci97]. • Iris Plant Flowers – from Fischer 1936, physical measurements from three types of flowers. • Car (Automobile) – data concerning cars manufactured in America, Japan and Europe from 1970 to 1982 Definition of Table Visualizations A two-dimensional table of data is defined by M rows and N columns. A visualization of this data is termed a Table Visualization. In our definition, we define the columns to be the dimensions or the variates (also called fields or attributes), and the rows to be the data records. The data records are sometimes called ndimensional points, or cases. For a more thorough discussion of the table model, see [Car99]. This very general definition only rules out some structured or hierarchical data. In the most general case, a visualization maps certain dimensions to certain features in the visualization. In geographical, scientific, and imaging visualizations, the spatial dimensions are normally assigned to the appropriate X, Y or Z spatial dimension. In a typical information visualization there is no inherent spatial dimension, but quite often the dimension mapped to height and width on the screen has a dominating effect. For example in a scatter plot of four-dimensional data one could map two features to the Xand Y-axis and the other two features to the color and shape of the plotted points. The dimensions assigned to the Xand Y-axis would dominate many aspects of analysis, such as clustering and outlier detection. Some Table Visualizations such as Parallel Coordinates, Survey Plots, or Radviz, treat all of the data dimensions equally. We call these Regular Table Visualizations (RTVs). The data in a Table Visualizations is discrete. The data can be represented by different types, such as integer, real, categorical, nominal, etc. In most visualizations all data is converted to a real type before rendering the visualization. We are concerned with issues that arise from the various types of data, and use the more general term “Table Visualization.” These visualizations can also be called “Array Visualizations” because all the data are of the same type. Table Visualization data is not hierarchical. It does not explicitly contain internal structure or links. The data has a finite size (N and M are bounded). The data can be viewed as M points having N dimensions or features. The order of the table can sometimes be considered another dimension, which is an ordered sequence of integer values from 1 to M. If the table represents points in some other sequence such as a time series, that information should be represented as another column.",
"title": ""
},
{
"docid": "4013515fe0bfae910a4493ff91e4490e",
"text": "This paper presents NeuroChess, a program which learns to play chess from the final outcome of games. NeuroChess learns chess board evaluation functions, represented by artificial neural networks. It integrates inductive neural network learning, temporal differencing, and a variant of explanation-based learning. Performance results illustrate some of the strengths and weaknesses of this approach.",
"title": ""
},
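The combination described above rests on temporal-difference learning of a board-evaluation function from game outcomes. The sketch below shows the bare TD(0) update with a linear evaluator and random stand-in position encodings; NeuroChess's neural network, chess feature encoding, and explanation-based component are not modeled here.

```python
import numpy as np

def td0_update(w, features, final_outcome, alpha=0.01, gamma=1.0):
    """One TD(0) pass over a single game; features[t] encodes the position at ply t."""
    for t in range(len(features) - 1):
        v_t = w @ features[t]
        if t + 1 == len(features) - 1:
            target = final_outcome                  # terminal: the game result itself
        else:
            target = gamma * (w @ features[t + 1])  # otherwise: bootstrap from the successor
        w += alpha * (target - v_t) * features[t]
    return w

# Stand-in position encodings and outcome for one illustrative game.
rng = np.random.default_rng(0)
game = [rng.normal(size=16) for _ in range(40)]
outcome = 1.0                                       # say, a win for the learner
w = np.zeros(16)
for _ in range(100):
    w = td0_update(w, game, outcome)
```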
{
"docid": "85d31f3940ee258589615661e596211d",
"text": "Bulk Synchronous Parallelism (BSP) provides a good model for parallel processing of many large-scale graph applications, however it is unsuitable/inefficient for graph applications that require coordination, such as graph-coloring, subcoloring, and clustering. To address this problem, we present an efficient modification to the BSP model to implement serializability (sequential consistency) without reducing the highlyparallel nature of BSP. Our modification bypasses the message queues in BSP and reads directly from the worker’s memory for the internal vertex executions. To ensure serializability, coordination is performed— implemented via dining philosophers or token ring— only for border vertices partitioned across workers. We implement our modifications to BSP on Giraph, an open-source clone of Google’s Pregel. We show through a graph-coloring application that our modified framework, Giraphx, provides much better performance than implementing the application using dining-philosophers over Giraph. In fact, Giraphx outperforms Giraph even for embarrassingly parallel applications that do not require coordination, e.g., PageRank.",
"title": ""
},
{
"docid": "31c0dc8f0a839da9260bb9876f635702",
"text": "The application of a recently developed broadband beamformer to distinguish audio signals received from different directions is experimentally tested. The beamformer combines spatial and temporal subsampling using a nested array and multirate techniques which leads to the same region of support in the frequency domain for all subbands. This allows using the same beamformer for all subbands. The experimental set-up is presented and the recorded signals are analyzed. Results indicate that the proposed approach can be used to distinguish plane waves propagating with different direction of arrivals.",
"title": ""
},
{
"docid": "ff59e2a5aa984dec7805a4d9d55e69e5",
"text": "We introduce Natural Neural Networks, a novel family of algorithms that speed up convergence by adapting their internal representation during training to improve conditioning of the Fisher matrix. In particular, we show a specific example that employs a simple and efficient reparametrization of the neural network weights by implicitly whitening the representation obtained at each layer, while preserving the feed-forward computation of the network. Such networks can be trained efficiently via the proposed Projected Natural Gradient Descent algorithm (PRONG), which amortizes the cost of these reparametrizations over many parameter updates and is closely related to the Mirror Descent online learning algorithm. We highlight the benefits of our method on both unsupervised and supervised learning tasks, and showcase its scalability by training on the large-scale ImageNet Challenge dataset.",
"title": ""
},
{
"docid": "5916e605ab78bf75925fecbdc55422cd",
"text": "This paper presents a new method for estimating the average heart rate from a foot/ankle worn photoplethysmography (PPG) sensor during fast bike activity. Placing the PPG sensor on the lower half of the body allows more energy to be collected from energy harvesting in order to give a power autonomous sensor node, but comes at the cost of introducing significant motion interference into the PPG trace. We present a normalised least mean square adaptive filter and short-time Fourier transform based algorithm for estimating heart rate in the presence of this motion contamination. Results from 8 subjects show the new algorithm has an average error of 9 beats-per-minute when compared to an ECG gold standard.",
"title": ""
},
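Foot-worn PPG is dominated by motion artefact, and the abstract above describes a normalized LMS stage followed by a short-time Fourier transform. The sketch below shows a generic NLMS canceller with a synthetic motion reference; the sampling rate, filter order, and signals are assumptions for illustration, not the authors' parameter choices.

```python
import numpy as np

def nlms(primary, reference, order=16, mu=0.5, eps=1e-6):
    """Normalized LMS noise canceller: returns the motion-reduced PPG signal."""
    w = np.zeros(order)
    cleaned = np.zeros_like(primary)
    for n in range(order, len(primary)):
        x = reference[n - order:n][::-1]          # tap vector from the motion reference
        e = primary[n] - w @ x                    # error = PPG minus estimated artefact
        w += (mu / (eps + x @ x)) * e * x         # normalized weight update
        cleaned[n] = e
    return cleaned

fs = 125
t = np.arange(0, 30, 1 / fs)
motion = np.sin(2 * np.pi * 2.5 * t)                       # pedalling artefact (assumed)
ppg = np.sin(2 * np.pi * 1.8 * t) + 0.8 * motion           # 108 bpm pulse plus artefact
accel = motion + 0.05 * np.random.randn(t.size)            # noisy accelerometer reference
cleaned = nlms(ppg, accel)
# A short-time Fourier transform of `cleaned`, window by window, then gives the
# dominant cardiac frequency and hence the average heart rate estimate.
```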
{
"docid": "323e7669476aab93735a655e54f6a4a9",
"text": "Monte Carlo Tree Search is a method that depends on decision theory in taking actions/ decisions, when other traditional methods failed on doing so, due to lots of factors such as uncertainty, huge problem domain, or lack in the knowledge base of the problem. Before using this method, several problems remained unsolved including some famous AI games like GO. This method represents a revolutionary technique where a Monte Carlo method has been applied to search tree technique, and proved to be successful in areas thought for a long time as impossible to be solved. This paper highlights some important aspects of this method, and presents some areas where it worked well, as well as enhancements to make it even more powerful.",
"title": ""
},
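To ground the description above, the snippet below shows the UCT child-selection rule that most Monte Carlo Tree Search variants build around; expansion, random rollouts, and backpropagation are omitted, and the child statistics are made up for the example.

```python
import math

def uct_select(children, c=1.4):
    """children: list of dicts with visit count 'n' and total reward 'w'."""
    total = sum(ch["n"] for ch in children)
    def score(ch):
        if ch["n"] == 0:
            return float("inf")            # always try unvisited moves first
        return ch["w"] / ch["n"] + c * math.sqrt(math.log(total) / ch["n"])
    return max(children, key=score)

# Toy usage: three candidate moves with made-up simulation statistics.
children = [{"move": "a", "n": 10, "w": 6},
            {"move": "b", "n": 3, "w": 2},
            {"move": "c", "n": 0, "w": 0}]
print(uct_select(children)["move"])        # picks "c", the unvisited move
```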
{
"docid": "9d36947ff5f794942e153c21cdfc3a53",
"text": "It is a well-established fact that corruption is a widespread phenomenon and it is widely acknowledged because of negative impact on economy and society. An important aspect of corruption is that two parties act separately or jointly in order to further their own interests at the expense of society. To strengthen prevent corruption, most of countries have construct special organization. The paper presents a new measure based on introducing game theory as an analytical tool for analyzing the relation between anti-corruption and corruption. Firstly, the paper introduces the corruption situation in China, gives the definition of the game theory and studies government anti-corruption activity through constructing the game theoretic models between anti-corruption and corruption. The relation between supervisor and the anti-corruption will be explained next. A thorough analysis of the mechanism of informant system has been made accordingly in the third part. At last, some suggestions for preventing and fight corruption are put forward.",
"title": ""
},
{
"docid": "8ca60b68f1516d63af36b7ead860686b",
"text": "The automatic patch-based exploit generation problem is: given a program P and a patched version of the program P', automatically generate an exploit for the potentially unknown vulnerability present in P but fixed in P'. In this paper, we propose techniques for automatic patch-based exploit generation, and show that our techniques can automatically generate exploits for 5 Microsoft programs based upon patches provided via Windows Update. Although our techniques may not work in all cases, a fundamental tenant of security is to conservatively estimate the capabilities of attackers. Thus, our results indicate that automatic patch-based exploit generation should be considered practical. One important security implication of our results is that current patch distribution schemes which stagger patch distribution over long time periods, such as Windows Update, may allow attackers who receive the patch first to compromise the significant fraction of vulnerable hosts who have not yet received the patch.",
"title": ""
},
{
"docid": "40c90bf58aae856c7c72bac573069173",
"text": "Most deep reinforcement learning algorithms are data inefficient in complex and rich environments, limiting their applicability to many scenarios. One direction for improving data efficiency is multitask learning with shared neural network parameters, where efficiency may be improved through transfer across related tasks. In practice, however, this is not usually observed, because gradients from different tasks can interfere negatively, making learning unstable and sometimes even less data efficient. Another issue is the different reward schemes between tasks, which can easily lead to one task dominating the learning of a shared model. We propose a new approach for joint training of multiple tasks, which we refer to as Distral (distill & transfer learning). Instead of sharing parameters between the different workers, we propose to share a “distilled” policy that captures common behaviour across tasks. Each worker is trained to solve its own task while constrained to stay close to the shared policy, while the shared policy is trained by distillation to be the centroid of all task policies. Both aspects of the learning process are derived by optimizing a joint objective function. We show that our approach supports efficient transfer on complex 3D environments, outperforming several related methods. Moreover, the proposed learning process is more robust to hyperparameter settings and more stable—attributes that are critical in deep reinforcement learning.",
"title": ""
},
{
"docid": "4c5b74544b1452ffe0004733dbeee109",
"text": "Literary genres are commonly viewed as being defined in terms of content and style. In this paper, we focus on one particular type of content feature, namely lexical expressions of emotion, and investigate the hypothesis that emotion-related information correlates with particular genres. Using genre classification as a testbed, we compare a model that computes lexiconbased emotion scores globally for complete stories with a model that tracks emotion arcs through stories on a subset of Project Gutenberg with five genres. Our main findings are: (a), the global emotion model is competitive with a largevocabulary bag-of-words genre classifier (80 % F1); (b), the emotion arc model shows a lower performance (59 % F1) but shows complementary behavior to the global model, as indicated by a very good performance of an oracle model (94 % F1) and an improved performance of an ensemble model (84 % F1); (c), genres differ in the extent to which stories follow the same emotional arcs, with particularly uniform behavior for anger (mystery) and fear (adventures, romance, humor, science fiction).",
"title": ""
},
{
"docid": "0a35370e6c99e122b8051a977029d77a",
"text": "To truly understand the visual world our models should be able not only to recognize images but also generate them. To this end, there has been exciting recent progress on generating images from natural language descriptions. These methods give stunning results on limited domains such as descriptions of birds or flowers, but struggle to faithfully reproduce complex sentences with many objects and relationships. To overcome this limitation we propose a method for generating images from scene graphs, enabling explicitly reasoning about objects and their relationships. Our model uses graph convolution to process input graphs, computes a scene layout by predicting bounding boxes and segmentation masks for objects, and converts the layout to an image with a cascaded refinement network. The network is trained adversarially against a pair of discriminators to ensure realistic outputs. We validate our approach on Visual Genome and COCO-Stuff, where qualitative results, ablations, and user studies demonstrate our method's ability to generate complex images with multiple objects.",
"title": ""
},
{
"docid": "09c042cb8ee06de9dffc4019f781e496",
"text": "High quality rendering and physics based modeling in volume graphics have been limited because intensity based volumetric data do not represent surfaces well. High spatial frequencies due to abrupt intensity changes at object surfaces result in jagged or terraced surfaces in rendered images. The use of a distance-to-closest-surface function to encode object surfaces is proposed. This function varies smoothly across surfaces and hence can be accurately reconstructed from sampled data. The zero value iso surface of the distance map yields the object surface and the derivative of the distance map yields the surface normal. Examples of rendered images are presented along with a new method for calculating distance maps from sampled binary data.",
"title": ""
},
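A minimal sketch of the idea described above (encoding surfaces as a signed distance map whose zero level set is the surface and whose gradient gives normals), using SciPy's Euclidean distance transform on a voxelized sphere. This is a generic reconstruction of the concept, not the paper's own method for computing distance maps from sampled binary data.

```python
import numpy as np
from scipy import ndimage

def signed_distance_map(binary_volume):
    """Signed Euclidean distance: negative inside the object, zero on its surface."""
    inside = ndimage.distance_transform_edt(binary_volume)
    outside = ndimage.distance_transform_edt(~binary_volume)
    return outside - inside

# Toy object: a voxelized sphere of radius 10 in a 32^3 grid.
z, y, x = np.ogrid[-16:16, -16:16, -16:16]
sphere = (x**2 + y**2 + z**2) <= 10**2
dmap = signed_distance_map(sphere)
normals = np.stack(np.gradient(dmap))   # unnormalized surface-normal field for shading
```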
{
"docid": "514b802d266259087a106d5c2c03f39b",
"text": "A substantial increase of photovoltaic (PV) power generators installations has taken place in recent years, due to the increasing efficiency of solar cells as well as the improvements of manufacturing technology of solar panels. These generators are both grid-connected and stand-alone applications. We present an overview of the essential research results. The paper concentrates on the operation and modeling of stand-alone power systems with PV power generators. Systems with PV array-inverter assemblies, operating in the slave-and-master modes, are discussed, and the simulation results obtained using a renewable energy power system modular simulator are presented. These results demonstrate that simulation is an essential step in the system development process and that PV power generators constitute a valuable energy source. They have the ability to balance the energy and supply good power quality. It is demonstrated that when PV array- inverters are operating in the master mode in stand-alone applications, they well perform the task of controlling the voltage and frequency of the power system. The mechanism of switching the master function between the diesel generator and the PV array-inverter assembly in a stand-alone power system is also proposed and analyzed. Finally, some experimental results on a practical system are compared to the simulation results and confirm the usefulness of the proposed approach to the development of renewable energy systems with PV power generators.",
"title": ""
}
] |
scidocsrr
|
da02f02c7e48b3c36758db60bfa47ce6
|
On-Device Federated Learning via Blockchain and its Latency Analysis
|
[
{
"docid": "c411fc52d40cf1f67ddad0c448c6235a",
"text": "Intel’s Software Guard Extensions (SGX) is a set of extensions to the Intel architecture that aims to provide integrity and confidentiality guarantees to securitysensitive computation performed on a computer where all the privileged software (kernel, hypervisor, etc) is potentially malicious. This paper analyzes Intel SGX, based on the 3 papers [14, 79, 139] that introduced it, on the Intel Software Developer’s Manual [101] (which supersedes the SGX manuals [95, 99]), on an ISCA 2015 tutorial [103], and on two patents [110, 138]. We use the papers, reference manuals, and tutorial as primary data sources, and only draw on the patents to fill in missing information. This paper does not reflect the information available in two papers [74, 109] that were published after the first version of this paper. This paper’s contributions are a summary of the Intel-specific architectural and micro-architectural details needed to understand SGX, a detailed and structured presentation of the publicly available information on SGX, a series of intelligent guesses about some important but undocumented aspects of SGX, and an analysis of SGX’s security properties.",
"title": ""
},
{
"docid": "244b583ff4ac48127edfce77bc39e768",
"text": "We introduce a new and increasingly relevant setting for distributed optimization in machine learning, where the data defining the optimization are unevenly distributed over an extremely large number of nodes. The goal is to train a high-quality centralized model. We refer to this setting as Federated Optimization. In this setting, communication efficiency is of the utmost importance and minimizing the number of rounds of communication is the principal goal. A motivating example arises when we keep the training data locally on users’ mobile devices instead of logging it to a data center for training. In federated optimization, the devices are used as compute nodes performing computation on their local data in order to update a global model. We suppose that we have extremely large number of devices in the network — as many as the number of users of a given service, each of which has only a tiny fraction of the total data available. In particular, we expect the number of data points available locally to be much smaller than the number of devices. Additionally, since different users generate data with different patterns, it is reasonable to assume that no device has a representative sample of the overall distribution. We show that existing algorithms are not suitable for this setting, and propose a new algorithm which shows encouraging experimental results for sparse convex problems. This work also sets a path for future research needed in the context of federated optimization.",
"title": ""
},
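The federated optimization setting above is commonly illustrated with a federated-averaging style loop: sampled devices run a few local updates on their own small, non-IID datasets and the server takes a data-size-weighted average. The sketch below uses a toy least-squares objective; the local solver, sampling scheme, and problem are assumptions for illustration, not the specific sparse convex algorithm proposed in the paper.

```python
import numpy as np

def local_sgd(w, X, y, lr=0.01, epochs=5):
    """A few steps of gradient descent on one device's local least-squares loss."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0])

# Unbalanced, non-IID device datasets: each device holds only a tiny, skewed sample.
devices = []
for k in range(20):
    Xk = rng.normal(loc=k % 3, size=(int(rng.integers(2, 8)), 2))
    yk = Xk @ true_w + 0.1 * rng.normal(size=len(Xk))
    devices.append((Xk, yk))

w_global = np.zeros(2)
for _ in range(100):
    picked = rng.choice(len(devices), size=5, replace=False)   # one round's participants
    updates = [local_sgd(w_global, *devices[k]) for k in picked]
    sizes = np.array([len(devices[k][1]) for k in picked])
    w_global = np.average(updates, axis=0, weights=sizes)      # data-size-weighted average
print(w_global)   # should end up close to true_w
```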
{
"docid": "21e16f9abeb0c538b7403d264790b7a8",
"text": "In this paper, the problem of joint power and resource allocation for ultra reliable low latency communication (URLLC) in vehicular networks is studied. The key goal is to minimize the networkwide power consumption of vehicular users (VUEs) subject to high reliability in terms of probabilistic queuing delays. In particular, using extreme value theory (EVT), a new reliability measure is defined to characterize extreme events pertaining to vehicles’ queue lengths exceeding a predefined threshold with non-negligible probability. In order to learn these extreme events in a dynamic vehicular network, a novel distributed approach based on federated learning (FL) is proposed to estimate the tail distribution of the queues. Taking into account the communication delays incurred by FL over wireless links, Lyapunov optimization is used to derive the joint transmit power and resource allocation policies enabling URLLC for each VUE in a distributed manner. The proposed solution is then validated via extensive simulations using a Manhattan mobility model. Simulation results show that FL enables the proposed distributed method to estimate the tail distribution of queues with an accuracy that is very close to a centralized solution with up to 79% reductions in the amount of data that need to be exchanged. Furthermore, the proposed method yields up to 60% reductions of VUEs with large queue lengths, while reducing the average power consumption by two folds, compared to an average queue-based baseline. For the VUEs with large queue lengths, the proposed method reduces their average queue lengths and fluctuations therein by about 30% compared to the aforementioned baseline. ar X iv :1 80 7. 08 12 7v 1 [ cs .I T ] 2 1 Ju l 2 01 8",
"title": ""
}
] |
[
{
"docid": "937dec4b11b3d039c81ca258283f82e8",
"text": "Nonnegative matrix factorization (NMF) provides a lower rank approximation of a matrix by a product of two nonnegative factors. NMF has been shown to produce clustering results that are often superior to those by other methods such as K-means. In this paper, we provide further interpretation of NMF as a clustering method and study an extended formulation for graph clustering called Symmetric NMF (SymNMF). In contrast to NMF that takes a data matrix as an input, SymNMF takes a nonnegative similarity matrix as an input, and a symmetric nonnegative lower rank approximation is computed. We show that SymNMF is related to spectral clustering, justify SymNMF as a general graph clustering method, and discuss the strengths and shortcomings of SymNMF and spectral clustering. We propose two optimization algorithms for SymNMF and discuss their convergence properties and computational efficiencies. Our experiments on document clustering, image clustering, and image segmentation support SymNMF as a graph clustering method that captures latent linear and nonlinear relationships in the data.",
"title": ""
},
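For intuition about the factorization described above, the sketch below computes a Symmetric NMF of a toy block similarity matrix with a damped multiplicative update and reads cluster labels off the largest entry in each row of H. The update rule is one common heuristic and merely stands in for the Newton-like and ANLS algorithms the paper actually develops.

```python
import numpy as np

def symnmf(A, k, iters=300, beta=0.5, eps=1e-10, seed=0):
    """Damped multiplicative updates for A ~= H @ H.T with H >= 0."""
    rng = np.random.default_rng(seed)
    H = rng.random((A.shape[0], k))
    for _ in range(iters):
        numer = A @ H
        denom = H @ (H.T @ H) + eps
        H *= (1.0 - beta) + beta * numer / denom
    return H

# Toy two-cluster similarity matrix; cluster labels come from the largest entry per row.
A = np.block([[np.full((5, 5), 0.9), np.full((5, 5), 0.1)],
              [np.full((5, 5), 0.1), np.full((5, 5), 0.9)]])
H = symnmf(A, k=2)
print(H.argmax(axis=1))
```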
{
"docid": "5ad8e24875ab689ae1f8d6d63844153a",
"text": "Currently Internet of Things (IoT) and multimedia technologies have entered the healthcare field through ambient aiding living and telemedicine. However there are still several obstacles blocking in the way, the toughest ones among which are IoT interoperability, system security, streaming Quality of Service (QoS) and dynamic increasing storage. The major contribution of this paper is proposing an open, secure and flexible platform based on IoT and Cloud computing, on which several mainstream short distant ambient communication protocols for medical purpose are discussed to address interoperability; Secure Sockets Layer (SSL), authentication and auditing are taken into consideration to solve the security issue; an adaptive streaming QoS model is utilized to improve streaming quality in dynamic environment; and an open Cloud computing infrastructure is adopted to support elastic Electronic Health Record (EHR) archiving in the backend. Finally an integrated reference implementation is introduced to demonstrate feasibility.",
"title": ""
},
{
"docid": "54537c242bc89fbf15d9191be80c5073",
"text": "In the propositional setting, the marginal problem is to find a (maximum-entropy) distribution that has some given marginals. We study this problem in a relational setting and make the following contributions. First, we compare two different notions of relational marginals. Second, we show a duality between the resulting relational marginal problems and the maximum likelihood estimation of the parameters of relational models, which generalizes a well-known duality from the propositional setting. Third, by exploiting the relational marginal formulation, we present a statistically sound method to learn the parameters of relational models that will be applied in settings where the number of constants differs between the training and test data. Furthermore, based on a relational generalization of marginal polytopes, we characterize cases where the standard estimators based on feature’s number of true groundings needs to be adjusted and we quantitatively characterize the consequences of these adjustments. Fourth, we prove bounds on expected errors of the estimated parameters, which allows us to lower-bound, among other things, the effective sample size of relational training data.",
"title": ""
},
{
"docid": "682432bc24847bcca3fdeba01c08a5c6",
"text": "The effect of high K-concentration, insulin and the L-type Ca 2+ channel blocker PN 200-110 on cytosolic intracellular free calcium ([Ca2+]i) was studied in single ventricular myocytes of 10-day-old embryonic chick heart, 20-week-old human fetus and rabbit aorta (VSM) single cells using the Ca2+-sensitive fluorescent dye, Fura-2 microfluorometry and digital imaging technique. Depolarization of the cell membrane of both heart and VSM cells with continuous superfusion of 30 mM [K+]o induced a rapid transient increase of [Ca2+]j that was followed by a sustained component. The early transient increase of [Ca2+]i by high [K+]o was blocked by the L-type calcium channel antagonist nifedipine. However, the sustained component was found to be insensitive to this drug. PN 200-110 another L-type Ca 2+ blocker was found to decrease both the early transient and the sustained increase of [Ca2+]i induced by depolarization of the cell membrane with high [K+]o. Insulin at a concentration of 40 to 80 tzU/rnl only produced a sustained increase of [Ca2+]i that was blocked by PN 200-110 or by lowering the extracellular Ca 2+ concentration with EGTA. The sustained increase of [Ca2+]i induced by high [K+]o or insulin was insensitive to metabolic inhibitors such as KCN and ouabain as well to the fast Na + channel blocker, tetrodotoxin and to the increase of intracellular concentrations of cyclic nucleotides. Using the patch clamp technique, insulin did not affect the L-type Ca 2+ current and the delayed outward K + current. These results suggest that the early increase of [Ca2+]i during depolarization of the cell membrane of heart and VSM cells with high [K+]o is due to the opening and decay of an L-type Ca z+ channel. However, the sustained increase of [Ca2+]i during a sustained depolarization is due to the activation of a resting (R) Ca 2+ channel that is insensitive to lowering [ATP]i and sensitive to insulin. (Mol Cell Biochem 117: 93--106, 1992)",
"title": ""
},
{
"docid": "1c0be734eaff2b337edfd9af75a711fa",
"text": "This article is a fully referenced research review to overview progress in unraveling the details of the evolutionary Tree of Life, from life's first occurrence in the hypothetical RNA-era, to humanity's own emergence and diversification, through migration and intermarriage, using research diagrams and brief discussion of the current state of the art. The Tree of Life, in biological terms, has come to be identified with the evolutionary tree of biological diversity. It is this tree which represents the climax fruitfulness of the biosphere and the genetic foundation of our existence, embracing not just higher Eucaryotes, plants, animals and fungi, but Protista, Eubacteria and Archaea, the realm, including the extreme heat and salt-loving organisms, which appears to lie almost at the root of life itself. To a certain extent the notion of a tree based on generational evolution has become complicated by a variety of compounding factors. Gene transfer is not just vertical carried down the generations. There is also evidence for promiscuous incidences of horizontal gene transfer, genetic symbiosis, hybridization and even the formation of chimeras. This review will cover all these aspects, from the first life on Earth to Homo sapiens.",
"title": ""
},
{
"docid": "ec4b7d7e2a512b29ee2ba195706c3571",
"text": "BACKGROUND\nThe Currarino triad is a rare hereditary syndrome comprising anorectal malformation, sacral bony defect, and presacral mass. Most of the patients are diagnosed during infancy.\n\n\nCASE PRESENTATION\nA 44-year-old man was diagnosed with Currarino triad, with a huge presacral teratoma and meningocele. One-stage surgery via posterior approach was successful.\n\n\nCONCLUSIONS\nTreatment of the presacral mass in the Currarino triad, diagnosed in adulthood, is challenging. Multidisciplinary management and detailed planning before surgery are important for a satisfactory outcome.",
"title": ""
},
{
"docid": "69d68431379da12139fa4a87ccac527f",
"text": "Traditional ultra-dense wireless networks are recommended as a complement for cellular networks and are deployed in partial areas, such as hotspot and indoor scenarios. Based on the massive multiple-input multi-output antennas and the millimeter wave communication technologies, the 5G ultra-dense cellular network is proposed to deploy in overall cellular scenarios. Moreover, a distribution network architecture is presented for 5G ultra-dense cellular networks. Furthermore, the backhaul network capacity and the backhaul energy efficiency of ultra-dense cellular networks are investigated to answer an important question, that is, how much densification can be deployed for 5G ultra-dense cellular networks. Simulation results reveal that there exist densification limits for 5G ultra-dense cellular networks with backhaul network capacity and backhaul energy efficiency constraints.",
"title": ""
},
{
"docid": "78283b148e6340ef9c49e503f9f39a2e",
"text": "Blur in facial images significantly impedes the efficiency of recognition approaches. However, most existing blind deconvolution methods cannot generate satisfactory results due to their dependence on strong edges, which are sufficient in natural images but not in facial images. In this paper, we represent point spread functions (PSFs) by the linear combination of a set of pre-defined orthogonal PSFs, and similarly, an estimated intrinsic (EI) sharp face image is represented by the linear combination of a set of pre-defined orthogonal face images. In doing so, PSF and EI estimation is simplified to discovering two sets of linear combination coefficients, which are simultaneously found by our proposed coupled learning algorithm. To make our method robust to different types of blurry face images, we generate several candidate PSFs and EIs for a test image, and then, a non-blind deconvolution method is adopted to generate more EIs by those candidate PSFs. Finally, we deploy a blind image quality assessment metric to automatically select the optimal EI. Thorough experiments on the facial recognition technology database, extended Yale face database B, CMU pose, illumination, and expression (PIE) database, and face recognition grand challenge database version 2.0 demonstrate that the proposed approach effectively restores intrinsic sharp face images and, consequently, improves the performance of face recognition.",
"title": ""
},
{
"docid": "7eeb2bf2aaca786299ebc8507482e109",
"text": "In this paper we argue that questionanswering (QA) over technical domains is distinctly different from TREC-based QA or Web-based QA and it cannot benefit from data-intensive approaches. Technical questions arise in situations where concrete problems require specific answers and explanations. Finding a justification of the answer in the context of the document is essential if we have to solve a real-world problem. We show that NLP techniques can be used successfully in technical domains for high-precision access to information stored in documents. We present ExtrAns, an answer extraction system over technical domains, its architecture, its use of logical forms for answer extractions and how terminology extraction becomes an important part of the system.",
"title": ""
},
{
"docid": "6c2095e83fd7bc3b7bd5bd259d1ae9bb",
"text": "This paper basically deals with design of an IoT Smart Home System (IoTSHS) which can provide the remote control to smart home through mobile, infrared(IR) remote control as well as with PC/Laptop. The controller used to design the IoTSHS is WiFi based microcontroller. Temperature sensor is provided to indicate the room temperature and tell the user if it's needed to turn the AC ON or OFF. The designed IoTSHS need to be interfaced through switches or relays with the items under control through the power distribution box. When a signal is sent from IoTSHS, then the switches will connect or disconnect the item under control. The designed IoT smart home system can also provide remote controlling for the people who cannot use smart phone to control their appliances Thus, the designed IoTSHS can benefits the whole parts in the society by providing advanced remote controlling for the smart home. The designed IoTSHS is controlled through remote control which uses IR and WiFi. The IoTSHS is capable to connect to WiFi and have a web browser regardless to what kind of operating system it uses, to control the appliances. No application program is needed to purchase, download, or install. In WiFi controlling, the IoTSHS will give a secured Access Point (AP) with a particular service set identifier (SSID). The user will connect the device (e.g. mobile-phone or Laptop/PC) to this SSID with providing the password and then will open the browser and go to particular fixed link. This link will open an HTML web page which will allow the user to interface between the Mobile-Phone/Laptop/PC and the appliances. In addition, the IoTSHS may connect to the home router so that the user can control the appliances with keeping connection with home router. The proposed IoTSHS was designed, programmed, fabricated and tested with excellent results.",
"title": ""
},
{
"docid": "7c8948433cf6c0d35fe29ccfac75d5b5",
"text": "The EMIB dense MCP technology is a new packaging paradigm that provides localized high density interconnects between two or more die on an organic package substrate, opening up new opportunities for heterogeneous on-package integration. This paper provides an overview of EMIB architecture and package capabilities. First, EMIB is compared with other approaches for high density interconnects. Some of the inherent advantages of the technology, such as the ability to cost effectively implement high density interconnects without requiring TSVs, and the ability to support the integration of many large die in an area much greater than the typical reticle size limit are highlighted. Next, the overall EMIB architecture envelope is discussed along with its constituent building blocks, the package construction with the embedded bridge, die to package interconnect features. Next, the EMIB assembly process is described at a high level. Finally, high bandwidth signaling between the die is discussed and the link bandwidth envelope is quantified.",
"title": ""
},
{
"docid": "e2ffac5515399469b93ed53e05d92345",
"text": "Network security is a major issue affecting SCADA systems designed and deployed in the last decade. Simulation of network attacks on a SCADA system presents certain challenges, since even a simple SCADA system is composed of models in several domains and simulation environments. Here we demonstrate the use of C2WindTunnel to simulate a plant and its controller, and the Ethernet network that connects them, in different simulation environments. We also simulate DDOS-like attacks on a few of the routers to observe and analyze the effec ts of a network attack on such a system. I. I NTRODUCTION Supervisory Control And Data Acquisition (SCADA) systems are computer-based monitoring tools that are used to manage and control critical infrastructure functions in re al time, like gas utilities, power plants, chemical plants, tr affic control systems, etc. A typical SCADA system consists of a SCADA Master which provides overall monitoring and control for the system, local process controllers called Re mot Terminal Units (RTUs), sensors and actuators and a network which provides the communication between the Master and the RTUs. A. Security of SCADA Systems SCADA systems are designed to have long life spans, usually in decades. The SCADA systems currently installed and used were designed at a time when security issues were not paramount, which is not the case today. Furthermore, SCADA systems are now connected to the Internet for remote monitoring and control making the systems susceptible to network security problems which arise through a connection to a public network. Despite these evident security risks, SCADA systems are cumbersome to upgrade for several reasons. Firstly, adding security features often implies a large downtime, which is not desirable in systems like power plants and traffic contro l. Secondly, SCADA devices with embedded codes would need to be completely replaced to add new security protocols. Lastly, the networks used in a SCADA system are usually customized for that system and cannot be generalized. Security of legacy SCADA systems and design of future systems both thus rely heavily on the assessment and rectification of security vulnerabilities of SCADA implementatio ns in realistic settings. B. Simulation of SCADA Systems In a SCADA system it is essential to model and simulate communication networks in order to study mission critical situations such as network failures or attacks. Even a simpl e SCADA system is composed of several units in various domains like dynamic systems, networks and physical environments, and each of these units can be modeled using a variety of available simulators and/or emulators. An example system could include simulating controller and plant dynamics in Simulink or Matlab, network architecture and behavior in a network simulator like OMNeT++, etc. An adequate simulation of such a system necessitates the use of an underlying software infrastructure that connects and re lates the heterogeneous simulators in a logically and temporally coherent framework.",
"title": ""
},
{
"docid": "390cb70c820d0ebefe936318f8668ac3",
"text": "BACKGROUND\nMandatory labeling of products with top allergens has improved food safety for consumers. Precautionary allergen labeling (PAL), such as \"may contain\" or \"manufactured on shared equipment,\" are voluntarily placed by the food industry.\n\n\nOBJECTIVE\nTo establish knowledge of PAL and its impact on purchasing habits by food-allergic consumers in North America.\n\n\nMETHODS\nFood Allergy Research & Education and Food Allergy Canada surveyed consumers in the United States and Canada on purchasing habits of food products featuring different types of PAL. Associations between respondents' purchasing behaviors and individual characteristics were estimated using multiple logistic regression.\n\n\nRESULTS\nOf 6684 participants, 84.3% (n = 5634) were caregivers of a food-allergic child and 22.4% had food allergy themselves. Seventy-one percent reported a history of experiencing a severe allergic reaction. Buying practices varied on the basis of PAL wording; 11% of respondents purchased food with \"may contain\" labeling, whereas 40% purchased food that used \"manufactured in a facility that also processes.\" Twenty-nine percent of respondents were unaware that the law requires labeling of priority food allergens. Forty-six percent were either unsure or incorrectly believed that PAL is required by law. Thirty-seven percent of respondents thought PAL was based on the amount of allergen present. History of a severe allergic reaction decreased the odds of purchasing foods with PAL.\n\n\nCONCLUSIONS\nAlmost half of consumers falsely believed that PAL was required by law. Up to 40% surveyed consumers purchased products with PAL. Understanding of PAL is poor, and improved awareness and guidelines are needed to help food-allergic consumers purchase food safely.",
"title": ""
},
{
"docid": "02322377d048f2469928a71290cf1566",
"text": "In order to interact with human environments, humanoid robots require safe and compliant control which can be achieved through force-controlled joints. In this paper, full body step recovery control for robots with force-controlled joints is achieved by adding model-based feed-forward controls. Push Recovery Model Predictive Control (PR-MPC) is presented as a method for generating full-body step recovery motions after a large disturbance. Results are presented from experiments on the Sarcos Primus humanoid robot that uses hydraulic actuators instrumented with force feedback control.",
"title": ""
},
{
"docid": "c6a429e06f634e1dee995d0537777b4b",
"text": "Digital image editing is usually an iterative process; users repetitively perform short sequences of operations, as well as undo and redo using history navigation tools. In our collected data, undo, redo and navigation constitute about 9 percent of the total commands and consume a significant amount of user time. Unfortunately, such activities also tend to be tedious and frustrating, especially for complex projects.\n We address this crucial issue by adaptive history, a UI mechanism that groups relevant operations together to reduce user workloads. Such grouping can occur at various history granularities. We present two that have been found to be most useful. On a fine level, we group repeating commands patterns together to facilitate smart undo. On a coarse level, we segment commands history into chunks for semantic navigation. The main advantages of our approach are that it is intuitive to use and easy to integrate into any existing tools with text-based history lists. Unlike prior methods that are predominately rule based, our approach is data driven, and thus adapts better to common editing tasks which exhibit sufficient diversity and complexity that may defy predetermined rules or procedures.\n A user study showed that our system performs quantitatively better than two other baselines, and the participants also gave positive qualitative feedbacks on the system features.",
"title": ""
},
{
"docid": "8df1395775e139c281512e4e4c1920d9",
"text": "Over the past 20 years, breakthrough discoveries of chromatin-modifying enzymes and associated mechanisms that alter chromatin in response to physiological or pathological signals have transformed our knowledge of epigenetics from a collection of curious biological phenomena to a functionally dissected research field. Here, we provide a personal perspective on the development of epigenetics, from its historical origins to what we define as 'the modern era of epigenetic research'. We primarily highlight key molecular mechanisms of and conceptual advances in epigenetic control that have changed our understanding of normal and perturbed development.",
"title": ""
},
{
"docid": "bd0b233e4f19abaf97dcb85042114155",
"text": "BACKGROUND/PURPOSE\nHair straighteners are very popular around the world, although they can cause great damage to the hair. Thus, the characterization of the mechanical properties of curly hair using advanced techniques is very important to clarify how hair straighteners act on hair fibers and to contribute to the development of effective products. On this basis, we chose two nonconventional hair straighteners (formaldehyde and glyoxylic acid) to investigate how hair straightening treatments affect the mechanical properties of curly hair.\n\n\nMETHODS\nThe mechanical properties of curly hair were evaluated using a tensile test, differential scanning calorimetry (DSC) measurements, scanning electronic microscopy (SEM), a torsion modulus, dynamic vapor sorption (DVS), and Fourier transform infrared spectroscopy (FTIR) analysis.\n\n\nRESULTS\nThe techniques used effectively helped the understanding of the influence of nonconventional hair straighteners on hair properties. For the break stress and the break extension tests, formaldehyde showed a marked decrease in these parameters, with great hair damage. Glyoxylic acid had a slight effect compared to formaldehyde treatment. Both treatments showed an increase in shear modulus, a decrease in water sorption and damage to the hair surface.\n\n\nCONCLUSIONS\nA combination of the techniques used in this study permitted a better understanding of nonconventional hair straightener treatments and also supported the choice of the better treatment, considering a good relationship between efficacy and safety. Thus, it is very important to determine the properties of hair for the development of cosmetics used to improve the beauty of curly hair.",
"title": ""
},
{
"docid": "8f212b657bc99532387d008282cc75b1",
"text": "Mindfulness training has been considered an effective mode for optimizing sport performance. The purpose of this study was to examine the impact of a twelve-session, 30-minute mindfulness meditation training session for sport (MMTS) intervention. The sample included a Division I female collegiate athletes, using quantitative comparisons based on preand post-test ratings on the Mindfulness Attention Awareness Scale (MAAS), the Positive Affect Negative Affect Scale (PANAS), the Psychological Well-Being Scale and the Life Satisfaction Scale. Paired sample t-tests highlight significant increases in mindfulness scores for the intervention group (p < .01), while the comparison group score of mindfulness remained constant. Both groups remained stable in reported positive affect however the intervention group maintained stable reports of negative affect while the comparison group experienced a significant increase in Negative Affect (p < .001). Results are discussed in relation to existing theories on mindfulness and meditation.",
"title": ""
},
{
"docid": "b5aad69e6a0f672cdaa1f81187a48d57",
"text": "In this paper, we propose novel methodologies for the automatic segmentation and recognition of multi-food images. The proposed methods implement the first modules of a carbohydrate counting and insulin advisory system for type 1 diabetic patients. Initially the plate is segmented using pyramidal mean-shift filtering and a region growing algorithm. Then each of the resulted segments is described by both color and texture features and classified by a support vector machine into one of six different major food classes. Finally, a modified version of the Huang and Dom evaluation index was proposed, addressing the particular needs of the food segmentation problem. The experimental results prove the effectiveness of the proposed method achieving a segmentation accuracy of 88.5% and recognition rate equal to 87%.",
"title": ""
},
{
"docid": "ae7877dba4d843f6c6fc2f54e3ce7b9c",
"text": "Many lesion experiments have provided evidence that the hippocampus plays a time-limited role in memory, consistent with the operation of a systems-level memory consolidation process during which lasting neocortical memory traces become established [see Squire, L. R., Clark, R. E., & Knowlton, B. J. (2001). Retrograde amnesia. Hippocampus 11, 50]. However, large lesions of the hippocampus at different time intervals after acquisition of a watermaze spatial reference memory task have consistently resulted in temporally ungraded retrograde amnesia [Bolhuis, J. J., Stewart, C. A., Forrest, E. M. (1994). Retrograde amnesia and memory reactivation in rats with ibotenate lesions to the hippocampus or subiculum. Quarterly Journal of Experimental Psychology 47B, 129; Mumby, D. G., Astur, R. S., Weisend, M. P., Sutherland, R. J. (1999). Retrograde amnesia and selective damage to the hippocampal formation: memory for places and object discriminations. Behavioural Brain Research 106, 97; Sutherland, R. J., Weisend, M. P., Mumby, D., Astur, R. S., Hanlon, F. M., et al. (2001). Retrograde amnesia after hippocampal damage: recent vs. remote memories in two tasks. Hippocampus 11, 27]. It is possible that spatial memories acquired during such a task remain permanently dependent on the hippocampus, that chance performance may reflect a failure to access memory traces that are initially unexpressed but still present, or that graded retrograde amnesia for spatial information might only be observed following partial hippocampal lesions. This study examined the retrograde memory impairments of rats that received either partial or complete lesions of the hippocampus either 1-2 days, or 6 weeks after training in a watermaze reference memory task. Memory retention was assessed using a novel 'reminding' procedure consisting of a series of rewarded probe trials, allowing the measurement of both free recall and memory reactivation. Rats with complete hippocampal lesions exhibited stable, temporally ungraded retrograde amnesia, and could not be reminded of the correct location. Partially lesioned rats could be reminded of a recently learned platform location, but no recovery of remote memory was observed. These results offer no support for hippocampus-dependent consolidation of allocentric spatial information, and suggest that the hippocampus can play a long-lasting role in spatial memory. The nature of this role--in the storage, retrieval, or expression of memory--is discussed.",
"title": ""
}
] |
scidocsrr
|
700af11d69e36e5a57c0d41c1c96cead
|
Modeling Customer Lifetime Value in the Telecom Industry
|
[
{
"docid": "9b5224b94b448d5dabbd545aedd293f8",
"text": "the topic (a) has been dedicated to extolling its use as a decisionmaking criterion; (b) has presented isolated numerical examples of its calculation/determination; and (c) has considered it as part of the general discussions of profitability and discussed its role in customer acquisition decisions and customer acquisition/retention trade-offs. There has been a dearth of general modeling of the topic. This paper presents a series of mathematical models for determination of customer lifetime value. The choice of the models is based on a systematic theoretical taxonomy and on assumptions grounded in customer behavior. In NADA I. NASR is a doctoral student in Marketing at the School addition, selected managerial applications of these general models of of Management, Boston University. customer lifetime value are offered. 1998 John Wiley & Sons, Inc. and Direct Marketing Educational Foundation, Inc. CCC 1094-9968/98/010017-14",
"title": ""
}
] |
[
{
"docid": "dd14599e6a4d2e83a7a476471be53d13",
"text": "This paper presents the modeling, design, fabrication, and measurement of microelectromechanical systems-enabled continuously tunable evanescent-mode electromagnetic cavity resonators and filters with very high unloaded quality factors (Qu). Integrated electrostatically actuated thin diaphragms are used, for the first time, for tuning the frequency of the resonators/filters. An example tunable resonator with 2.6:1 (5.0-1.9 GHz) tuning ratio and Qu of 300-650 is presented. A continuously tunable two-pole filter from 3.04 to 4.71 GHz with 0.7% bandwidth and insertion loss of 3.55-2.38 dB is also shown as a technology demonstrator. Mechanical stability measurements show that the tunable resonators/filters exhibit very low frequency drift (less than 0.5% for 3 h) under constant bias voltage. This paper significantly expands upon previously reported tunable resonators.",
"title": ""
},
{
"docid": "8fccceb2757decb670eed84f4b2405a1",
"text": "This paper develops and evaluates search and optimization techniques for autotuning 3D stencil (nearest neighbor) computations on GPUs. Observations indicate that parameter tuning is necessary for heterogeneous GPUs to achieve optimal performance with respect to a search space. Our proposed framework takes a most concise specification of stencil behavior from the user as a single formula, autogenerates tunable code from it, systematically searches for the best configuration and generates the code with optimal parameter configurations for different GPUs. This autotuning approach guarantees adaptive performance for different generations of GPUs while greatly enhancing programmer productivity. Experimental results show that the delivered floating point performance is very close to previous handcrafted work and outperforms other autotuned stencil codes by a large margin. Furthermore, heterogeneous GPU clusters are shown to exhibit the highest performance for dissimilar tuning parameters leveraging proportional partitioning relative to single-GPU performance.",
"title": ""
},
{
"docid": "e902cdc8d2e06d7dd325f734b0a289b6",
"text": "Vaccinium arctostaphylos is a traditional medicinal plant in Iran used for the treatment of diabetes mellitus. In our search for antidiabetic compounds from natural sources, we found that the extract obtained from V. arctostaphylos berries showed an inhibitory effect on pancreatic alpha-amylase in vitro [IC50 = 1.91 (1.89-1.94) mg/mL]. The activity-guided purification of the extract led to the isolation of malvidin-3-O-beta-glucoside as an a-amylase inhibitor. The compound demonstrated a dose-dependent enzyme inihibitory activity [IC50 = 0.329 (0.316-0.342) mM].",
"title": ""
},
{
"docid": "b269bb721ca2a75fd6291295493b7af8",
"text": "This publication contains reprint articles for which IEEE does not hold copyright. Full text is not available on IEEE Xplore for these articles.",
"title": ""
},
{
"docid": "773c132b708a605039d59de52a3cf308",
"text": "BACKGROUND\nAirSeal is a novel class of valve-free insufflation system that enables a stable pneumoperitoneum with continuous smoke evacuation and carbon dioxide (CO₂) recirculation during laparoscopic surgery. Comparison data to standard CO₂ pressure pneumoperitoneum insufflators is scarce. The aim of this study is to evaluate the potential advantages of AirSeal compared to a standard CO₂ insufflator.\n\n\nMETHODS/DESIGN\nThis is a single center randomized controlled trial comparing elective laparoscopic cholecystectomy, colorectal surgery and hernia repair with AirSeal (group A) versus a standard CO₂ pressure insufflator (group S). Patients are randomized using a web-based central randomization and registration system. Primary outcome measures will be operative time and level of postoperative shoulder pain by using the visual analog score (VAS). Secondary outcomes include the evaluation of immunological values through blood tests, anesthesiological parameters, surgical side effects and length of hospital stay. Taking into account an expected dropout rate of 5%, the total number of patients is 182 (n = 91 per group). All tests will be two-sided with a confidence level of 95% (P <0.05).\n\n\nDISCUSSION\nThe duration of an operation is an important factor in reducing the patient's exposure to CO₂ pneumoperitoneum and its adverse consequences. This trial will help to evaluate if the announced advantages of AirSeal, such as clear sight of the operative site and an exceptionally stable working environment, will facilitate the course of selected procedures and influence operation time and patients clinical outcome.\n\n\nTRIAL REGISTRATION\nClinicalTrials.gov NCT01740011, registered 23 November 2012.",
"title": ""
},
{
"docid": "fea8bf3ca00b3440c2b34188876917a2",
"text": "Digitalization has been identified as one of the major trends changing society and business. Digitalization causes changes for companies due to the adoption of digital technologies in the organization or in the operation environment. This paper discusses digitalization from the viewpoint of diverse case studies carried out to collect data from several companies, and a literature study to complement the data. This paper describes the first version of the digital transformation model, derived from synthesis of these industrial cases, explaining a starting point for a systematic approach to tackle digital transformation. The model is aimed to help companies systematically handle the changes associated with digitalization. The model consists of four main steps, starting with positioning the company in digitalization and defining goals for the company, and then analyzing the company’s current state with respect to digitalization goals. Next, a roadmap for reaching the goals is defined and implemented in the company. These steps are iterative and can be repeated several times. Although company situations vary, these steps will help to systematically approach digitalization and to take the steps necessary to benefit from it.",
"title": ""
},
{
"docid": "f2f2b48cd35d42d7abc6936a56aa580d",
"text": "Complete enumeration of all the sequences to establish global optimality is not feasible as the search space, for a general job-shop scheduling problem, ΠG has an upper bound of (n!). Since the early fifties a great deal of research attention has been focused on solving ΠG, resulting in a wide variety of approaches such as Branch and Bound, Simulated Annealing, Tabu Search, etc. However limited success has been achieved by these methods due to the shear intractability of this generic scheduling problem. Recently, much effort has been concentrated on using neural networks to solve ΠG as they are capable of adapting to new environments with little human intervention and can mimic thought processes. Major contributions in solving ΠG using a Hopfield neural network, as well as applications of back-error propagation to general scheduling problems are presented. To overcome the deficiencies in these applications a modified back-error propagation model, a simple yet powerful parallel architecture which can be successfully simulated on a personal computer, is applied to solve ΠG.",
"title": ""
},
{
"docid": "4d3ed5dd5d4f08c9ddd6c9b8032a77fd",
"text": "The purpose of this study was to clarify the efficacy of stress radiography (stress X-P), ultrasonography (US), and magnetic resonance (MR) imaging in the detection of the anterior talofibular ligament (ATFL) injury. Thirty-four patients with ankle sprain were involved. In all patients, Stress X-P, US, MR imaging, and arthroscopy were performed. The arthroscopic results were considered to be the gold standard. The imaging results were compared with the arthroscopic results, and the accuracy calculated. Arthroscopic findings showed ATFL injury in 30 out of 34 cases. The diagnosis of ATFL injury with stress X-P, US, MR imaging were made with an accuracy of 67, 91 and 97%. US and MR imaging demonstrated the same location of the injury as arthroscopy in 63 and 93%. We have clarified the diagnostic value of stress X-P, US, and MR imaging in diagnosis of ATFL injury. We obtained satisfactory results with US and MR imaging.",
"title": ""
},
{
"docid": "5499d3f75391ec2a28dcc84d3a3c4410",
"text": "DRAM latency continues to be a critical bottleneck for system performance. In this work, we develop a low-cost mechanism, called ChargeCache, that enables faster access to recently-accessed rows in DRAM, with no modifications to DRAM chips. Our mechanism is based on the key observation that a recently-accessed row has more charge and thus the following access to the same row can be performed faster. To exploit this observation, we propose to track the addresses of recently-accessed rows in a table in the memory controller. If a later DRAM request hits in that table, the memory controller uses lower timing parameters, leading to reduced DRAM latency. Row addresses are removed from the table after a specified duration to ensure rows that have leaked too much charge are not accessed with lower latency. We evaluate ChargeCache on a wide variety of workloads and show that it provides significant performance and energy benefits for both single-core and multi-core systems.",
"title": ""
},
{
"docid": "49dd14500296da55b7ed34d96af30b13",
"text": "Deadly infections from opportunistic fungi have risen in frequency, largely because of the at-risk immunocompromised population created by advances in modern medicine and the HIV/AIDS pandemic. This review focuses on dynamics of the fungal polysaccharide cell wall, which plays an outsized role in fungal pathogenesis and therapy because it acts as both an environmental barrier and as the major interface with the host immune system. Human fungal pathogens use architectural strategies to mask epitopes from the host and prevent immune surveillance, and recent work elucidates how biotic and abiotic stresses present during infection can either block or enhance masking. The signaling components implicated in regulating fungal immune recognition can teach us how cell wall dynamics are controlled, and represent potential targets for interventions designed to boost or dampen immunity.",
"title": ""
},
{
"docid": "d0b2999de796ec3215513536023cc2be",
"text": "Recently proposed machine comprehension (MC) application is an effort to deal with natural language understanding problem. However, the small size of machine comprehension labeled data confines the application of deep neural networks architectures that have shown advantage in semantic inference tasks. Previous methods use a lot of NLP tools to extract linguistic features but only gain little improvement over simple baseline. In this paper, we build an attention-based recurrent neural network model, train it with the help of external knowledge which is semantically relevant to machine comprehension, and achieves a new state-of-the-art result.",
"title": ""
},
{
"docid": "a40e71e130f31450ce1e60d9cd4a96be",
"text": "Progering® is the only intravaginal ring intended for contraception therapies during lactation. It is made of silicone and releases progesterone through the vaginal walls. However, some drawbacks have been reported in the use of silicone. Therefore, ethylene vinyl acetate copolymer (EVA) was tested in order to replace it. EVA rings were produced by a hot-melt extrusion procedure. Swelling and degradation assays of these matrices were conducted in different mixtures of ethanol/water. Solubility and partition coefficient of progesterone were measured, together with the initial hormone load and characteristic dimensions. A mathematical model was used to design an EVA ring that releases the hormone at specific rate. An EVA ring releasing progesterone in vitro at about 12.05 ± 8.91 mg day−1 was successfully designed. This rate of release is similar to that observed for Progering®. In addition, it was observed that as the initial hormone load or ring dimension increases, the rate of release also increases. Also, the device lifetime was extended with a rise in the initial amount of hormone load. EVA rings could be designed to release progesterone in vitro at a rate of 12.05 ± 8.91 mg day−1. This ring would be used in contraception therapies during lactation. The use of EVA in this field could have initially several advantages: less initial and residual hormone content in rings, no need for additional steps of curing or crosslinking, less manufacturing time and costs, and the possibility to recycle the used rings.",
"title": ""
},
{
"docid": "6b1dd01c57f967e3caf83af9343099c5",
"text": "We have devised and implemented a novel computational strategy for de novo design of molecules with desired properties termed ReLeaSE (Reinforcement Learning for Structural Evolution). On the basis of deep and reinforcement learning (RL) approaches, ReLeaSE integrates two deep neural networks—generative and predictive—that are trained separately but are used jointly to generate novel targeted chemical libraries. ReLeaSE uses simple representation of molecules by their simplified molecular-input line-entry system (SMILES) strings only. Generative models are trained with a stack-augmented memory network to produce chemically feasible SMILES strings, and predictive models are derived to forecast the desired properties of the de novo–generated compounds. In the first phase of the method, generative and predictive models are trained separately with a supervised learning algorithm. In the second phase, both models are trained jointly with the RL approach to bias the generation of new chemical structures toward those with the desired physical and/or biological properties. In the proof-of-concept study, we have used the ReLeaSE method to design chemical libraries with a bias toward structural complexity or toward compounds with maximal, minimal, or specific range of physical properties, such as melting point or hydrophobicity, or toward compounds with inhibitory activity against Janus protein kinase 2. The approach proposed herein can find a general use for generating targeted chemical libraries of novel compounds optimized for either a single desired property or multiple properties.",
"title": ""
},
{
"docid": "f31a8b627e6a0143e70cf1526bf827fa",
"text": "D-amino acid oxidase (DAO) has been reported to be associated with schizophrenia. This study aimed to search for genetic variants associated with this gene. The genomic regions of all exons, highly conserved regions of introns, and promoters of this gene were sequenced. Potentially meaningful single-nucleotide polymorphisms (SNPs) obtained from direct sequencing were selected for genotyping in 600 controls and 912 patients with schizophrenia and in a replicated sample consisting of 388 patients with schizophrenia. Genetic associations were examined using single-locus and haplotype association analyses. In single-locus analyses, the frequency of the C allele of a novel SNP rs55944529 located at intron 8 was found to be significantly higher in the original large patient sample (p = 0.016). This allele was associated with a higher level of DAO mRNA expression in the Epstein-Barr virus-transformed lymphocytes. The haplotype distribution of a haplotype block composed of rs11114083-rs2070586-rs2070587-rs55944529 across intron 1 and intron 8 was significantly different between the patients and controls and the haplotype frequencies of AAGC were significantly higher in patients, in both the original (corrected p < 0.0001) and replicated samples (corrected p = 0.0003). The CGTC haplotype was specifically associated with the subgroup with deficits in sustained attention and executive function and the AAGC haplotype was associated with the subgroup without such deficits. The DAO gene was a susceptibility gene for schizophrenia and the genomic region between intron 1 and intron 8 may harbor functional genetic variants, which may influence the mRNA expression of DAO and neurocognitive functions in schizophrenia.",
"title": ""
},
{
"docid": "ca544972e6fe3c051f72d04608ff36c1",
"text": "The prefrontal cortex (PFC) plays a key role in controlling goal-directed behavior. Although a variety of task-related signals have been observed in the PFC, whether they are differentially encoded by various cell types remains unclear. Here we performed cellular-resolution microendoscopic Ca(2+) imaging from genetically defined cell types in the dorsomedial PFC of mice performing a PFC-dependent sensory discrimination task. We found that inhibitory interneurons of the same subtype were similar to each other, but different subtypes preferentially signaled different task-related events: somatostatin-positive neurons primarily signaled motor action (licking), vasoactive intestinal peptide-positive neurons responded strongly to action outcomes, whereas parvalbumin-positive neurons were less selective, responding to sensory cues, motor action, and trial outcomes. Compared to each interneuron subtype, pyramidal neurons showed much greater functional heterogeneity, and their responses varied across cortical layers. Such cell-type and laminar differences in neuronal functional properties may be crucial for local computation within the PFC microcircuit.",
"title": ""
},
{
"docid": "a941e1fb5a21fafa8e78269c4bd90637",
"text": "The penis is the male organ of copulation and is composed of erectile tissue that encases the extrapelvic portion of the urethra (Fig. 66-1). The penis of the horse is musculocavernous and can be divided into three parts: the root, the body or shaft, and the glans penis. The penis originates caudally at the root, which is fixed to the lateral aspects of the ischial arch by two crura (leg-like parts) that converge to form the shaft of the penis. The shaft constitutes the major portion of the penis and begins at the junction of the crura. It is attached caudally to the symphysis ischii of the pelvis by two short suspensory ligaments that merge with the origin of the gracilis muscles (Fig. 66-2). The glans penis is the conical enlargement that caps the shaft. The urethra passes over the ischial arch between the crura and curves cranioventrally to become incorporated within erectile tissue of the penis. The mobile shaft and glans penis extend cranioventrally to the umbilical region of the abdominal wall. The body is cylindrical but compressed laterally. When quiescent, the penis is soft, compressible, and about 50 cm long. Fifteen to 20 cm lie free in the prepuce. When maximally erect, the penis is up to three times longer than when it is in a quiescent state. Erectile Bodies",
"title": ""
},
{
"docid": "3b85d3eef49825e67f77769950b80800",
"text": "The phishing is a technique used by cyber-criminals to impersonate legitimate websites in order to obtain personal information. This paper presents a novel lightweight phishing detection approach completely based on the URL (uniform resource locator). The mentioned system produces a very satisfying recognition rate which is 95.80%. This system, is an SVM (support vector machine) tested on a 2000 records data-set consisting of 1000 legitimate and 1000 phishing URLs records. In the literature, several works tackled the phishing attack. However those systems are not optimal to smartphones and other embed devices because of their complex computing and their high battery usage. The proposed system uses only six URL features to perform the recognition. The mentioned features are the URL size, the number of hyphens, the number of dots, the number of numeric characters plus a discrete variable that correspond to the presence of an IP address in the URL and finally the similarity index. Proven by the results of this study the similarity index, the feature we introduce for the first time as input to the phishing detection systems improves the overall recognition rate by 21.8%.",
"title": ""
},
{
"docid": "13642d5d73a58a1336790f74a3f0eac7",
"text": "Fifty-eight patients received an Osteonics constrained acetabular implant for recurrent instability (46), girdlestone reimplant (8), correction of leg lengthening (3), and periprosthetic fracture (1). The constrained liner was inserted into a cementless shell (49), cemented into a pre-existing cementless shell (6), cemented into a cage (2), and cemented directly into the acetabular bone (1). Eight patients (13.8%) required reoperation for failure of the constrained implant. Type I failure (bone-prosthesis interface) occurred in 3 cases. Two cementless shells became loose, and in 1 patient, the constrained liner was cemented into an acetabular cage, which then failed by pivoting laterally about the superior fixation screws. Type II failure (liner locking mechanism) occurred in 2 cases. Type III failure (femoral head locking mechanism) occurred in 3 patients. Seven of the 8 failures occurred in patients with recurrent instability. Constrained liners are an effective method for treatment during revision total hip arthroplasty but should be used in select cases only.",
"title": ""
},
{
"docid": "9fa53682b83e925409ea115569494f70",
"text": "Circuit techniques for enabling a sub-0.9 V logic-compatible embedded DRAM (eDRAM) are presented. A boosted 3T gain cell utilizes Read Word-line (RWL) preferential boosting to increase read margin and improve data retention time. Read speed is enhanced with a hybrid current/voltage sense amplifier that allows the Read Bit-line (RBL) to remain close to VDD. A regulated bit-line write scheme for driving the Write Bit-line (WBL) is equipped with a steady-state storage node voltage monitor to overcome the data `1' write disturbance problem of the PMOS gain cell without introducing another boosted supply for the Write Word-line (WWL) over-drive. An adaptive and die-to-die adjustable read reference bias generator is proposed to cope with PVT variations. Monte Carlo simulations compare the 6-sigma read and write performance of proposed eDRAM against conventional designs. Measurement results from a 64 kb eDRAM test chip implemented in a 65 nm low-leakage CMOS process show a 1.25 ms data retention time with a 2 ns random cycle time at 0.9 V, 85°C, and a 91.3 μW per Mb static power dissipation at 1.0 V, 85°C.",
"title": ""
},
{
"docid": "c9d137a71c140337b3f8345efdac17ab",
"text": "For more than 30 years, many authors have attempted to synthesize the knowledge about how an enterprise should structure its business processes, the people that execute them, the Information Systems that support both of these and the IT layer on which such systems operate, in such a way that they will be aligned with the business strategy. This is the challenge of Enterprise Architecture design, which is the theme of this paper. We will provide a brief review of the literature on this subject, with an emphasis on more recent proposals and methods that have been applied in practice. We also select approaches that propose some sort of framework that provides a general Enterprise Architecture in a given domain that can be reused as a basis for specific designs in such a domain. Then we present our proposal for Enterprise Architecture design, which is based on general domain models that we call Enterprise Architecture Patterns.",
"title": ""
}
] |
scidocsrr
|
332a0601450185af3356b8a68b833045
|
Hardening the OAuth-WebView Implementations in Android Applications by Re-Factoring the Chromium Library
|
[
{
"docid": "4eafe7f60154fa2bed78530735a08878",
"text": "Although Android's permission system is intended to allow users to make informed decisions about their privacy, it is often ineffective at conveying meaningful, useful information on how a user's privacy might be impacted by using an application. We present an alternate approach to providing users the knowledge needed to make informed decisions about the applications they install. First, we create a knowledge base of mappings between API calls and fine-grained privacy-related behaviors. We then use this knowledge base to produce, through static analysis, high-level behavior profiles of application behavior. We have analyzed almost 80,000 applications to date and have made the resulting behavior profiles available both through an Android application and online. Nearly 1500 users have used this application to date. Based on 2782 pieces of application-specific feedback, we analyze users' opinions about how applications affect their privacy and demonstrate that these profiles have had a substantial impact on their understanding of those applications. We also show the benefit of these profiles in understanding large-scale trends in how applications behave and the implications for user privacy.",
"title": ""
}
] |
[
{
"docid": "fe529aab49b0c985e40bab3ab0e0582c",
"text": "A very deep convolutional neural network (CNN) has recently achieved great success for image super-resolution (SR) and offered hierarchical features as well. However, most deep CNN based SR models do not make full use of the hierarchical features from the original low-resolution (LR) images, thereby achieving relatively-low performance. In this paper, we propose a novel residual dense network (RDN) to address this problem in image SR. We fully exploit the hierarchical features from all the convolutional layers. Specifically, we propose residual dense block (RDB) to extract abundant local features via dense connected convolutional layers. RDB further allows direct connections from the state of preceding RDB to all the layers of current RDB, leading to a contiguous memory (CM) mechanism. Local feature fusion in RDB is then used to adaptively learn more effective features from preceding and current local features and stabilizes the training of wider network. After fully obtaining dense local features, we use global feature fusion to jointly and adaptively learn global hierarchical features in a holistic way. Experiments on benchmark datasets with different degradation models show that our RDN achieves favorable performance against state-of-the-art methods.",
"title": ""
},
{
"docid": "d349cf385434027b4532080819d5745f",
"text": "Although not commonly used, correlation filters can track complex objects through rotations, occlusions and other distractions at over 20 times the rate of current state-of-the-art techniques. The oldest and simplest correlation filters use simple templates and generally fail when applied to tracking. More modern approaches such as ASEF and UMACE perform better, but their training needs are poorly suited to tracking. Visual tracking requires robust filters to be trained from a single frame and dynamically adapted as the appearance of the target object changes. This paper presents a new type of correlation filter, a Minimum Output Sum of Squared Error (MOSSE) filter, which produces stable correlation filters when initialized using a single frame. A tracker based upon MOSSE filters is robust to variations in lighting, scale, pose, and nonrigid deformations while operating at 669 frames per second. Occlusion is detected based upon the peak-to-sidelobe ratio, which enables the tracker to pause and resume where it left off when the object reappears.",
"title": ""
},
{
"docid": "df5778fce3318029d249de1ff37b0715",
"text": "The Switched Reluctance Machine (SRM) is a robust machine and is a candidate for ultra high speed applications. Until now the area of ultra high speed machines has been dominated by permanent magnet machines (PM). The PM machine has a higher torque density and some other advantages compared to SRMs. However, the soaring prices of the rare earth materials are driving the efforts to find an alternative to PM machines without significantly impacting the performance. At the same time significant progress has been made in the design and control of the SRM. This paper reviews the progress of the SRM as a high speed machine and proposes a novel rotor structure design to resolve the challenge of high windage losses at ultra high speed. It then elaborates on the path of modifying the design to achieve optimal performance. The simulation result of the final design is verified on FEA software. Finally, a prototype machine with similar design is built and tested to verify the simulation model. The experimental waveform indicates good agreement with the simulation result. Therefore, the performance of the prototype machine is analyzed and presented at the end of this paper.",
"title": ""
},
{
"docid": "5bd61380b9b05b3e89d776c6cbeb0336",
"text": "Cross-domain text classification aims to automatically train a precise text classifier for a target domain by using labelled text data from a related source domain. To this end, one of the most promising ideas is to induce a new feature representation so that the distributional difference between domains can be reduced and a more accurate classifier can be learned in this new feature space. However, most existing methods do not explore the duality of the marginal distribution of examples and the conditional distribution of class labels given labeled training examples in the source domain. Besides, few previous works attempt to explicitly distinguish the domain-independent and domain-specific latent features and align the domain-specific features to further improve the cross-domain learning. In this paper, we propose a model called Partially Supervised Cross-Collection LDA topic model (PSCCLDA) for cross-domain learning with the purpose of addressing these two issues in a unified way. Experimental results on nine datasets show that our model outperforms two standard classifiers and four state-of-the-art methods, which demonstrates the effectiveness of our proposed model.",
"title": ""
},
{
"docid": "eb9973ea01e6d55eb19912d2a437af30",
"text": "Stochastic descent methods (of the gradient and mirror varieties) have become increasingly popular in optimization. In fact, it is now widely recognized that the success of deep learning is not only due to the special deep architecture of the models, but also due to the behavior of the stochastic descent methods used, which play a key role in reaching “good” solutions that generalize well to unseen data. In an attempt to shed some light on why this is the case, we revisit some minimax properties of stochastic gradient descent (SGD) for the square loss of linear models—originally developed in the 1990’s—and extend them to general stochastic mirror descent (SMD) algorithms for general loss functions and nonlinear models. In particular, we show that there is a fundamental identity which holds for SMD (and SGD) under very general conditions, and which implies the minimax optimality of SMD (and SGD) for sufficiently small step size, and for a general class of loss functions and general nonlinear models. We further show that this identity can be used to naturally establish other properties of SMD (and SGD), namely convergence and implicit regularization for over-parameterized linear models (in what is now being called the “interpolating regime”), some of which have been shown in certain cases in prior literature. We also argue how this identity can be used in the so-called “highly over-parameterized” nonlinear setting (where the number of parameters far exceeds the number of data points) to provide insights into why SMD (and SGD) may have similar convergence and implicit regularization properties for deep learning.",
"title": ""
},
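As a hedged aside on the stochastic mirror descent update discussed in the passage above, the sketch below runs SMD with a negative-entropy mirror map (the exponentiated-gradient update) on a toy least-squares problem; the problem instance, step size, and choice of mirror map are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np

# Toy least-squares problem: recover a weight vector that lies on the probability simplex.
rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = np.array([0.5, 0.2, 0.15, 0.1, 0.05])   # sums to 1, i.e. on the simplex
y = X @ w_true + 0.01 * rng.normal(size=n)

def smd_exponentiated_gradient(X, y, steps=2000, eta=0.05):
    """Stochastic mirror descent with the negative-entropy mirror map.

    The mirror update nabla psi(w_{t+1}) = nabla psi(w_t) - eta * g_t becomes a
    multiplicative (exponentiated-gradient) step followed by renormalization.
    """
    n, d = X.shape
    w = np.full(d, 1.0 / d)                 # start at the simplex center
    for _ in range(steps):
        i = rng.integers(n)                 # one sample -> stochastic gradient
        g = (X[i] @ w - y[i]) * X[i]        # gradient of 0.5 * (x_i . w - y_i)^2
        w = w * np.exp(-eta * g)            # mirror step in the dual space
        w = w / w.sum()                     # map back onto the simplex
    return w

w_hat = smd_exponentiated_gradient(X, y)
print("estimate:", np.round(w_hat, 3))
print("truth   :", w_true)
```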
{
"docid": "fdf1b2f49540d5d815f2d052f2570afe",
"text": "It has been recently shown that Generative Adversarial Networks (GANs) can produce synthetic images of exceptional visual fidelity. In this work, we propose the first GAN-based method for automatic face aging. Contrary to previous works employing GANs for altering of facial attributes, we make a particular emphasize on preserving the original person's identity in the aged version of his/her face. To this end, we introduce a novel approach for “Identity-Preserving” optimization of GAN's latent vectors. The objective evaluation of the resulting aged and rejuvenated face images by the state-of-the-art face recognition and age estimation solutions demonstrate the high potential of the proposed method.",
"title": ""
},
{
"docid": "c82d08f4a89db422785f017597dc09f2",
"text": "We describe the magnetic resonance (MR) patterns of a variety of fetal gastrointestinal (GI) abnormalities. Thirty-two fetuses between 23 and 38 weeks’ gestation with abnormal appearance of the GI tract by ultrasound underwent MR imaging with T1- and T2-weighted sequences. The MR aspect of intestinal atresia (duodenal atresia, one case; small bowel atresia, nine cases) included dilatation of the bowel loops, accurate assessment of the normal bowel distal to the atresia (except in the patient with multiple atresia and apple-peel syndrome), and micro-rectum with decreased T1 signal (except in the patient with duodenal atresia). Megacystis-microcolon-intestinal hypoperistalsis syndrome (one case) was indicated by an abnormal signal of the entire bowel and an abnormal pattern for the urinary tract. Meconium pseudocysts (two cases) were easily differentiated from enteric cysts (two cases). High anorectal malformations with (two cases) or without (one case) urinary fistula and cloacal malformation (one case) are described and MR findings are discussed. The capability of MR imaging to demonstrate the normal bowel with intraperitoneal anomalies (e.g., congenital diaphragmatic hernia, and sacrococcygeal teratoma) is emphasized. MR imaging is informative in the diagnosis of GI tract abnormalities, especially the severe malformations, with much more accuracy than sonography.",
"title": ""
},
{
"docid": "0e387b0ce86b00123ed6dd69459033e8",
"text": "3-D hand pose estimation is an essential problem for human–computer interaction. Most of the existing depth-based hand pose estimation methods consume 2-D depth map or 3-D volume via 2-D/3-D convolutional neural networks. In this paper, we propose a deep semantic hand pose regression network (SHPR-Net) for hand pose estimation from point sets, which consists of two subnetworks: a semantic segmentation subnetwork and a hand pose regression subnetwork. The semantic segmentation network assigns semantic labels for each point in the point set. The pose regression network integrates the semantic priors with both input and late fusion strategy and regresses the final hand pose. Two transformation matrices are learned from the point set and applied to transform the input point cloud and inversely transform the output pose, respectively, which makes the SHPR-Net more robust to geometric transformations. Experiments on NYU, ICVL, and MSRA hand pose data sets demonstrate that our SHPR-Net achieves high performance on par with the start-of-the-art methods. We also show that our method can be naturally extended to hand pose estimation from the multi-view depth data and achieves further improvement on the NYU data set.",
"title": ""
},
{
"docid": "0bc86f0bcc0ae544aa1fbcc572390aa3",
"text": "The detection of plant leaf is an very important factor to prevent serious outbreak. Automatic detection of plant disease is essential research topic. Most plant diseases are caused by fungi, bacteria, and viruses. Fungi are identified primarily from their morphology, with emphasis placed on their reproductive structures. Bacteria are considered more primitive than fungi and generally have simpler life cycles. With few exceptions, bacteria exist as single cells and increase in numbersby dividing into two cells during a process called binary fission Viruses are extremely tiny particles consisting of protein and genetic material with no associated protein. The term disease is usually used only for the destruction of live plants. The developed processing scheme consists of four main steps, first a color transformation structure for the input RGB image is created, this RGB is converted to HSI because RGB is for color generation and his for color descriptor. Then green pixels are masked and removed using specific threshold value, then the image is segmented and the useful segments are extracted, finally the texture statistics is computed. from SGDM matrices. Finally the presence of diseases on the plant leaf is evaluated. Keyword: HSI, Segmentation, Color Co-occurrence Matrix, Texture, Plant Leaf Diseases.",
"title": ""
},
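The passage above outlines an RGB-to-HSI conversion followed by green-pixel masking; the sketch below is a minimal NumPy rendering of those two steps, assuming the standard HSI formulas and an illustrative hue/saturation threshold that is not the one used in the paper.

```python
import numpy as np

def rgb_to_hsi(img):
    """Convert an RGB image (H x W x 3, values in [0, 1]) to HSI channels."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    eps = 1e-8
    intensity = (r + g + b) / 3.0
    saturation = 1.0 - 3.0 * np.minimum(np.minimum(r, g), b) / (r + g + b + eps)
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + eps
    theta = np.arccos(np.clip(num / den, -1.0, 1.0))
    hue = np.where(b <= g, theta, 2.0 * np.pi - theta)   # radians in [0, 2*pi)
    return hue, saturation, intensity

def mask_green_pixels(img, hue_lo=np.deg2rad(60), hue_hi=np.deg2rad(180)):
    """Zero out mostly-green pixels so later texture features focus on lesions.

    The hue window and saturation floor are illustrative thresholds,
    not the values used in the paper.
    """
    hue, sat, _ = rgb_to_hsi(img)
    green = (hue > hue_lo) & (hue < hue_hi) & (sat > 0.1)
    out = img.copy()
    out[green] = 0.0
    return out

# Example: a random "leaf" image stands in for a real photograph.
leaf = np.random.default_rng(1).random((64, 64, 3))
masked = mask_green_pixels(leaf)
print("masked pixels:", int((masked == 0).all(axis=-1).sum()))
```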
{
"docid": "053218d2f92ec623daa403a55aba8c74",
"text": "Yoga is an age-old traditional Indian psycho-philosophical-cultural method of leading one's life, that alleviates stress, induces relaxation and provides multiple health benefits to the person following its system. It is a method of controlling the mind through the union of an individual's dormant energy with the universal energy. Commonly practiced yoga methods are 'Pranayama' (controlled deep breathing), 'Asanas' (physical postures) and 'Dhyana' (meditation) admixed in varying proportions with differing philosophic ideas. A review of yoga in relation to epilepsy encompasses not only seizure control but also many factors dealing with overall quality-of-life issues (QOL). This paper reviews articles related to yoga and epilepsy, seizures, EEG, autonomic changes, neuro-psychology, limbic system, arousal, sleep, brain plasticity, motor performance, brain imaging studies, and rehabilitation. There is a dearth of randomized, blinded, controlled studies related to yoga and seizure control. A multi-centre, cross-cultural, preferably blinded (difficult for yoga), well-randomized controlled trial, especially using a single yogic technique in a homogeneous population such as Juvenile myoclonic epilepsy is justified to find out how yoga affects seizure control and QOL of the person with epilepsy.",
"title": ""
},
{
"docid": "b7babfd34b47420f85aae434ce72b84d",
"text": "The use of Building Information Modeling (BIM) in the construction industry is on the rise. It is widely acknowledged that adoption of BIM would cause a seismic shift in the business processes within the construction industry and related fields. Cost estimation is a key aspect in the workflow of a construction project. Processes within estimating, such as quantity survey and pricing, may be automated by using existing BIM software in combination with existing estimating software. The adoption of this combination of technologies is not as widely seen as might be expected. Researchers conducted a survey of construction practitioners to determine the extent to which estimating processes were automated in the conjunction industry, with the data from a BIM model. Survey participants were asked questions about how BIM was used within their organization and how it was used in the various tasks involved in construction cost estimating. The results of the survey data revealed that while most contractors were using BIM, only a small minority were using it to automate estimating processes. Most organizations reported that employees skilled in BIM did not have the estimating experience to produce working estimates from BIM models and vice-versa. The results of the survey are presented and analyzed to determine conditions that would improve the adoption of these new business processes in the construction estimating field.",
"title": ""
},
{
"docid": "fd317c492ed68bf14bdef38c27ed6696",
"text": "The systematic study of subcellular location patterns is required to fully characterize the human proteome, as subcellular location provides critical context necessary for understanding a protein's function. The analysis of tens of thousands of expressed proteins for the many cell types and cellular conditions under which they may be found creates a need for automated subcellular pattern analysis. We therefore describe the application of automated methods, previously developed and validated by our laboratory on fluorescence micrographs of cultured cell lines, to analyze subcellular patterns in tissue images from the Human Protein Atlas. The Atlas currently contains images of over 3000 protein patterns in various human tissues obtained using immunohistochemistry. We chose a 16 protein subset from the Atlas that reflects the major classes of subcellular location. We then separated DNA and protein staining in the images, extracted various features from each image, and trained a support vector machine classifier to recognize the protein patterns. Our results show that our system can distinguish the patterns with 83% accuracy in 45 different tissues, and when only the most confident classifications are considered, this rises to 97%. These results are encouraging given that the tissues contain many different cell types organized in different manners, and that the Atlas images are of moderate resolution. The approach described is an important starting point for automatically assigning subcellular locations on a proteome-wide basis for collections of tissue images such as the Atlas.",
"title": ""
},
{
"docid": "5ceb415b17cc36e9171ddc72a860ccc8",
"text": "Word embeddings and convolutional neural networks (CNN) have attracted extensive attention in various classification tasks for Twitter, e.g. sentiment classification. However, the effect of the configuration used to generate the word embeddings on the classification performance has not been studied in the existing literature. In this paper, using a Twitter election classification task that aims to detect election-related tweets, we investigate the impact of the background dataset used to train the embedding models, as well as the parameters of the word embedding training process, namely the context window size, the dimensionality and the number of negative samples, on the attained classification performance. By comparing the classification results of word embedding models that have been trained using different background corpora (e.g. Wikipedia articles and Twitter microposts), we show that the background data should align with the Twitter classification dataset both in data type and time period to achieve significantly better performance compared to baselines such as SVM with TF-IDF. Moreover, by evaluating the results of word embedding models trained using various context window sizes and dimensionalities, we find that large context window and dimension sizes are preferable to improve the performance. However, the number of negative samples parameter does not significantly affect the performance of the CNN classifiers. Our experimental results also show that choosing the correct word embedding model for use with CNN leads to statistically significant improvements over various baselines such as random, SVM with TF-IDF and SVM with word embeddings. Finally, for out-of-vocabulary (OOV) words that are not available in the learned word embedding models, we show that a simple OOV strategy to randomly initialise the OOV words without any prior knowledge is sufficient to attain a good classification performance among the current OOV strategies (e.g. a random initialisation using statistics of the pre-trained word embedding models).",
"title": ""
},
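As a hedged illustration of the OOV strategy mentioned in the passage above (randomly initialising out-of-vocabulary words, optionally using statistics of the pre-trained vectors), the sketch below builds an embedding matrix from a plain word-to-vector dictionary; the vocabulary, dimensionality, and sampling scheme are assumptions for demonstration.

```python
import numpy as np

def build_embedding_matrix(vocab, pretrained, dim, seed=0):
    """Fill an embedding matrix for a CNN; OOV rows are drawn at random.

    `pretrained` is a dict word -> np.ndarray of shape (dim,). OOV rows are
    sampled from a normal distribution matching the mean/std of the known
    vectors, one simple variant of the OOV strategies compared in the paper.
    """
    rng = np.random.default_rng(seed)
    known = np.stack(list(pretrained.values()))
    mu, sigma = known.mean(), known.std()
    matrix = np.zeros((len(vocab), dim), dtype=np.float32)
    n_oov = 0
    for row, word in enumerate(vocab):
        if word in pretrained:
            matrix[row] = pretrained[word]
        else:
            matrix[row] = rng.normal(mu, sigma, size=dim)
            n_oov += 1
    return matrix, n_oov

# Tiny example with made-up vectors.
dim = 4
pretrained = {w: np.random.default_rng(i).normal(size=dim)
              for i, w in enumerate(["vote", "poll", "party"])}
vocab = ["vote", "poll", "party", "hashtag2016"]        # last word is OOV
matrix, n_oov = build_embedding_matrix(vocab, pretrained, dim)
print(matrix.shape, "OOV words:", n_oov)
```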
{
"docid": "32a97a3d9f010c7cdd542c34f02afb46",
"text": "Extraction-Transformation-Loading (ETL) tools are pieces of software responsible for the extraction of data from several sources, their cleansing, customization and insertion into a data warehouse. In this paper, we delve into the logical design of ETL scenarios and provide a generic and customizable framework in order to support the DW designer in his task. First, we present a metamodel particularly customized for the definition of ETL activities. We follow a workflow-like approach, where the output of a certain activity can either be stored persistently or passed to a subsequent activity. Also, we employ a declarative database programming language, LDL, to define the semantics of each activity. The metamodel is generic enough to capture any possible ETL activity. Nevertheless, in the pursuit of higher reusability and flexibility, we specialize the set of our generic metamodel constructs with a palette of frequently-used ETL activities, which we call templates. Moreover, in order to achieve a uniform extensibility mechanism for this library of built-ins, we have to deal with specific language issues. Therefore, we also discuss the mechanics of template instantiation to concrete activities. The design concepts that we introduce have been implemented in a tool, ARKTOS II, which is also presented.",
"title": ""
},
{
"docid": "082630a33c0cc0de0e60a549fc57d8e8",
"text": "Agricultural monitoring, especially in developing countries, can help prevent famine and support humanitarian efforts. A central challenge is yield estimation, i.e., predicting crop yields before harvest. We introduce a scalable, accurate, and inexpensive method to predict crop yields using publicly available remote sensing data. Our approach improves existing techniques in three ways. First, we forego hand-crafted features traditionally used in the remote sensing community and propose an approach based on modern representation learning ideas. We also introduce a novel dimensionality reduction technique that allows us to train a Convolutional Neural Network or Long-short Term Memory network and automatically learn useful features even when labeled training data are scarce. Finally, we incorporate a Gaussian Process component to explicitly model the spatio-temporal structure of the data and further improve accuracy. We evaluate our approach on county-level soybean yield prediction in the U.S. and show that it outperforms competing techniques.",
"title": ""
},
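The passage above mentions a dimensionality reduction technique that lets a CNN or LSTM learn from scarce labels; one common way this idea is summarised is to replace each multispectral image with per-band pixel-intensity histograms, which the hedged sketch below illustrates. The bin count, band count, and synthetic data are assumptions, not values from the paper.

```python
import numpy as np

def image_to_histograms(image, bins=32, value_range=(0.0, 1.0)):
    """Reduce a (H, W, bands) image to a (bins, bands) matrix of pixel histograms.

    Discarding pixel positions in favor of per-band intensity counts is the
    dimensionality-reduction step; sequences of such matrices over a growing
    season can then feed a CNN or LSTM.
    """
    h, w, bands = image.shape
    hist = np.zeros((bins, bands), dtype=np.float32)
    for b in range(bands):
        counts, _ = np.histogram(image[..., b], bins=bins, range=value_range)
        hist[:, b] = counts / counts.sum()          # normalize to frequencies
    return hist

# A season of 8 synthetic satellite snapshots with 9 spectral bands each.
rng = np.random.default_rng(2)
season = [rng.random((48, 48, 9)) for _ in range(8)]
features = np.stack([image_to_histograms(img) for img in season])
print("input pixels per step:", 48 * 48 * 9, "-> features per step:", features.shape[1:])
```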
{
"docid": "432149654abdfdabb9147a830f50196d",
"text": "In this paper, an advanced High Voltage (HV) IGBT technology, which is focused on low loss and is the ultimate device concept for HV IGBT, is presented. CSTBTTM technology utilizing “ULSI technology” and “Light Punch-Through (LPT) II technology” (i.e. narrow Wide Cell Pitch LPT(II)-CSTBT(III)) for the first time demonstrates breaking through the limitation of HV IGBT's characteristics with voltage ratings ranging from 2500 V up to 6500 V. The improved significant trade-off characteristic between on-state voltage (VCE(sat)) and turn-off loss (EOFF) is achieved by means of a “narrow Wide Cell Pitch CSTBT(III) cell”. In addition, this device achieves a wide operating junction temperature (@218 ∼ 448K) and excellent short circuit behavior with the new cell and vertical designs. The LPT(II) concept is utilized for ensuring controllable IGBT characteristics and achieving a thin N− drift layer. Our results cover design of the Wide Cell Pitch LPT(II)-CSTBT(III) technology and demonstrate high total performance with a great improvement potential.",
"title": ""
},
{
"docid": "15e31918fcebb95beaf381d93d7605a5",
"text": "One challenge for UHF RFID passive tag design is to obtain a low-profile antenna that minimizes the influence of near-body or attached objects without sacrificing both read range and universal UHF RFID band interoperability. A new improved design of a RFID passive tag antenna is presented that performs well near problematic surfaces (human body, liquids, metals) across most of the universal UHF RFID (840-960 MHz) band. The antenna is based on a low-profile printed configuration with slots, and it is evaluated through extensive simulations and experimental tests.",
"title": ""
},
{
"docid": "5b618ffd8e3dc68f36757ad5551a136a",
"text": "Recent years have witnessed the boom of online sharing media contents, which raise significant challenges in effective management and retrieval. Though a large amount of efforts have been made, precise retrieval on video shots with certain topics has been largely ignored. At the same time, due to the popularity of novel time-sync comments, or so-called “bullet-screen comments”, video semantics could be now combined with timestamps to support further research on temporal video labeling. In this paper, we propose a novel video understanding framework to assign temporal labels on highlighted video shots. To be specific, due to the informal expression of bullet-screen comments, we first propose a temporal deep structured semantic model (T-DSSM) to represent comments into semantic vectors by taking advantage of their temporal correlation. Then, video highlights are recognized and labeled via semantic vectors in a supervised way. Extensive experiments on a real-world dataset prove that our framework could effectively label video highlights with a significant margin compared with baselines, which clearly validates the potential of our framework on video understanding, as well as bullet-screen comments interpretation.",
"title": ""
},
{
"docid": "04edbcc6006a76e538cffb0cc09d9fc5",
"text": "Feature extraction is a fundamental step when mammography image analysis is addressed using learning based approaches. Traditionally, problem dependent handcrafted features are used to represent the content of images. An alternative approach successfully applied in other domains is the use of neural networks to automatically discover good features. This work presents an evaluation of convolutional neural networks to learn features for mammography mass lesions before feeding them to a classification stage. Experimental results showed that this approach is a suitable strategy outperforming the state-of-the-art representation from 79.9% to 86% in terms of area under the ROC curve.",
"title": ""
},
{
"docid": "4426848fbae6fdabdb969768254f2cb1",
"text": "This paper presents a multimodal information presentation method for a basic dance training system. The system targets on beginners and enables them to learn basics of dances easily. One of the most effective ways of learning dances is to watch a video showing the performance of dance masters. However, some information cannot be conveyed well through video. One is the translational motion, especially that in the depth direction. We cannot tell exactly how far does the dancers move forward or backward. Another is the timing information. Although we can tell how to move our arms or legs from video, it is difficult to know when to start moving them. We solve the first issue by introducing an image display on a mobile robot. We can learn the amount of translation just by following the robot. We introduce active devices for the second issue. The active devices are composed of some vibro-motors and are developed to direct action-starting cues with vibration. Experimental results show the effectiveness of our multimodal information presentation method.",
"title": ""
}
] |
scidocsrr
|
3092695951f21edbbe9a72a6c3d65a6c
|
Hybrid Evolutionary Algorithms for Graph Coloring
|
[
{
"docid": "fd32bf580b316634e44a8c37adfab2eb",
"text": "In a previous paper we reported the successful use of graph coloring techniques for doing global register allocation in an experimental PL/I optimizing compiler. When the compiler cannot color the register conflict graph with a number of colors equal to the number of available machine registers, it must add code to spill and reload registers to and from storage. Previously the compiler produced spill code whose quality sometimes left much to be desired, and the ad hoc techniques used took considerable amounts of compile time. We have now discovered how to extend the graph coloring approach so that it naturally solves the spilling problem. Spill decisions are now made on the basis of the register conflict graph and cost estimates of the value of keeping the result of a computation in a register rather than in storage. This new approach produces better object code and takes much less compile time.",
"title": ""
}
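As a hedged companion to the passage above, the sketch below shows a Chaitin-style simplify/select coloring loop with a cost-over-degree spill heuristic; the data structures and the exact spill metric are illustrative choices, not a reconstruction of the paper's allocator.

```python
def color_with_spills(graph, costs, k):
    """Chaitin-style coloring of an interference graph with k registers.

    `graph` maps node -> set of neighbors, `costs` maps node -> estimated cost
    of spilling it. Returns (coloring, spilled). The cost/degree spill metric
    is one common heuristic, not necessarily the one used in the paper.
    """
    work = {n: set(adj) for n, adj in graph.items()}
    stack, spilled = [], []
    while work:
        # Simplify: push any node with fewer than k neighbors.
        easy = [n for n, adj in work.items() if len(adj) < k]
        if easy:
            n = easy[0]
        else:
            # No trivially colorable node: pick a spill candidate.
            n = min(work, key=lambda v: costs[v] / (len(work[v]) or 1))
            spilled.append(n)
        stack.append(n)
        for m in work[n]:
            work[m].discard(n)
        del work[n]
    coloring = {}
    # Select: pop nodes and give each the lowest color unused by colored neighbors.
    for n in reversed(stack):
        if n in spilled:
            continue
        used = {coloring[m] for m in graph[n] if m in coloring}
        free = [c for c in range(k) if c not in used]
        if free:
            coloring[n] = free[0]
        else:
            spilled.append(n)            # optimistic coloring failed; spill
    return coloring, spilled

# 4-cycle a-b-c-d plus chord b-d: needs 3 colors, so with k=2 something spills.
g = {
    "a": {"b", "d"},
    "b": {"a", "c", "d"},
    "c": {"b", "d"},
    "d": {"a", "b", "c"},
}
costs = {"a": 4.0, "b": 1.0, "c": 4.0, "d": 4.0}   # b is the cheapest to spill
coloring, spilled = color_with_spills(g, costs, k=2)
print("coloring:", coloring, "spilled:", spilled)
```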
] |
[
{
"docid": "039aca95b78859648e56e3b51472a08c",
"text": "Knowledge about the general graph structure of the World Wide Web is important for understanding the social mechanisms that govern its growth, for designing ranking methods, for devising better crawling algorithms, and for creating accurate models of its structure. In this paper, we analyze a large web graph. The graph was extracted from a large publicly accessible web crawl that was gathered by the Common Crawl Foundation in 2012. The graph covers over 3.5 billion web pages and 128.7 billion hyperlinks. We analyze and compare, among other features, degree distributions, connectivity, average distances, and the structure of weakly/strongly connected components. We conduct our analysis on three different levels of aggregation: page, host, and pay-level domain (PLD) (one “dot level” above public suffixes). Our analysis shows that, as evidenced by previous research (Serrano et al., 2007), some of the features previously observed by Broder et al., 2000 are very dependent on artifacts of the crawling process, whereas other appear to be more structural. We confirm the existence of a giant strongly connected component; we however find, as observed by other researchers (Donato et al., 2005; Boldi et al., 2002; Baeza-Yates and Poblete, 2003), very different proportions of nodes that can reach or that can be reached from the giant component, suggesting that the “bow-tie structure” as described by Broder et al. is strongly dependent on the crawling process, and to the best of our current knowledge is not a structural property of the Web. More importantly, statistical testing and visual inspection of size-rank plots show that the distributions of indegree, outdegree and sizes of strongly connected components of the page and host graph are not power laws, contrarily to what was previously reported for much smaller crawls, although they might be heavy tailed. If we aggregate at pay-level domain, however, a power law emerges. We also provide for the first time accurate measurement of distance-based features, using recently introduced algorithms that scale to the size of our crawl (Boldi and Vigna, 2013).",
"title": ""
},
{
"docid": "e17284a2cfff3f9d1ad6c471acadc553",
"text": "Baby-Led Weaning (BLW) is an alternative method for introducing complementary foods to infants in which the infant feeds themselves hand-held foods instead of being spoon-fed by an adult. The BLW infant also shares family food and mealtimes and is offered milk (ideally breast milk) on demand until they self-wean. Anecdotal evidence suggests that many parents are choosing this method instead of conventional spoon-feeding of purées. Observational studies suggest that BLW may encourage improved eating patterns and lead to a healthier body weight, although it is not yet clear whether these associations are causal. This review evaluates the literature with respect to the prerequisites for BLW, which we have defined as beginning complementary foods at six months (for safety reasons), and exclusive breastfeeding to six months (to align with WHO infant feeding guidelines); the gross and oral motor skills required for successful and safe self-feeding of whole foods from six months; and the practicalities of family meals and continued breastfeeding on demand. Baby-Led Weaning will not suit all infants and families, but it is probably achievable for most. However, ultimately, the feasibility of BLW as an approach to infant feeding can only be determined in a randomized controlled trial. Given the popularity of BLW amongst parents, such a study is urgently needed.",
"title": ""
},
{
"docid": "4284f5cb44a2c466dd7ea9e7ee2fc387",
"text": "As an iMetrics technique, co-word analysis is used to describe the status of various subject areas, however, iMetrics itself is not examined by a co-word analysis. For the purpose of using co-word analysis, this study tries to investigate the intellectual structure of iMetrics during the period of 1978 to 2014. The research data are retrieved from two core journals on iMetrics research ( Scientometrics , and Journal of Informetrics ) and relevant articles in six journals publishing iMetrics studies. Application of hierarchical clustering led to the formation of 11 clusters representing the intellectual structure of iMetrics, including “Scientometric Databases and Indicators,” “Citation Analysis,” “Sociology of Science,” “Issues Related to Rankings of Universities, Journals, etc.,” “Information Visualization and Retrieval,” “Mapping Intellectual Structure of Science,” “Webometrics,” “Industry–University– Government Relations,” “Technometrics (Innovation and Patents), “Scientific Collaboration in Universities”, and “Basics of Network Analysis.” Furthermore, a two-dimensional map and a strategic diagram are drawn to clarify the structure, maturity, and cohesion of clusters. © 2017 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "f85547bb47571bd86b3ece7ae583cba0",
"text": "In this paper, a novel start-up circuit with a simple topology and low stand-by power during under voltage lockout (UVLO) mode is proposed for SMPS (switching mode power supplies) application. The proposed start-up circuit is designed using only a few MOSFETs, LDMOSs, and one JFET based on the analysis of the existing start-up circuits to address the power consumption and input voltage range issues of the conventional start-up. Simulated results using 0.35um BCDMOS process demonstrate that the leakage current of the proposed circuit is less than 1uA after UVLO signal turns on. Setting time is less than 1ms when the load current changes from 10mA to 20mA and vice versa",
"title": ""
},
{
"docid": "02a19d2a94adf992dc6fbb6c5ebdcf77",
"text": "Today, project management practices play a key role in different industries and sectors. Project management is promoted as an organizational strategic component that leads innovation, creates value and turns vision into reality. Despite the importance of projects and project management their high rate of failures and challenges is a major concern of both industry and academia. Among the reasons that affect project outcomes, stakeholder influential attributes and more importantly, their understanding and effective utilization and management are identified as the key to project success. This study utilizes the body of knowledge developed in the field of project management and uses stakeholder theory combined with a number of complementary theories to achieve its goals and objectives. The study moves beyond the traditional power-based frameworks employing six key influential attributes to examine their direct and mediating effects on project success. The quantitative survey data are analyzed using SEM statistical techniques and procedures to produce research results. The research results have led to the development of a new typology of stakeholder influential attributes (TSIA) and a stakeholder-based project management model (SBPMM) that aid managing for stakeholders’ strategy and principle.",
"title": ""
},
{
"docid": "ebf05689ab2b96adda370e613b34b1f0",
"text": "A b s t r a c t . The problem of finding the internal orientation of a camera (camera calibration) is extremely important for practical applications. In this paper a complete method for calibrating a camera is presented. In contrast with existing methods it does not require a calibration object with a known 3D shape. The new method requires only point matches from image sequences. It is shown, using experiments with noisy data, that it is possible to calibrate a camera just by pointing it at the environment, selecting points of interest and then tracking them in the image as the camera moves. It is not necessary to know the camera motion. The camera calibration is computed in two steps. In the first step the epipolar transformation is found. Two methods for obtaining the epipoles are discussed, one due to Sturm is based on projective invariants, the other is based on a generalisation of the essential matrix. The second step of the computation uses the so-called Kruppa equations which link the epipolar transformation to the image of the absolute conic. After the camera has made three or more movements the Kruppa equations can be solved for the coefficients of the image of the absolute conic. The solution is found using a continuation method which is briefly described. The intrinsic parameters of the camera are obtained from the equation for the image of the absolute conic. The results of experiments with synthetic noisy data are reported and possible enhancements to the method are suggested.",
"title": ""
},
{
"docid": "22d233c7f0916506d2fc23b3a8ef4633",
"text": "CD69 is a type II C-type lectin involved in lymphocyte migration and cytokine secretion. CD69 expression represents one of the earliest available indicators of leukocyte activation and its rapid induction occurs through transcriptional activation. In this study we examined the molecular mechanism underlying mouse CD69 gene transcription in vivo in T and B cells. Analysis of the 45-kb region upstream of the CD69 gene revealed evolutionary conservation at the promoter and at four noncoding sequences (CNS) that were called CNS1, CNS2, CNS3, and CNS4. These regions were found to be hypersensitive sites in DNase I digestion experiments, and chromatin immunoprecipitation assays showed specific epigenetic modifications. CNS2 and CNS4 displayed constitutive and inducible enhancer activity in transient transfection assays in T cells. Using a transgenic approach to test CNS function, we found that the CD69 promoter conferred developmentally regulated expression during positive selection of thymocytes but could not support regulated expression in mature lymphocytes. Inclusion of CNS1 and CNS2 caused suppression of CD69 expression, whereas further addition of CNS3 and CNS4 supported developmental-stage and lineage-specific regulation in T cells but not in B cells. We concluded CNS1-4 are important cis-regulatory elements that interact both positively and negatively with the CD69 promoter and that differentially contribute to CD69 expression in T and B cells.",
"title": ""
},
{
"docid": "ce41d07b369635c5b0a914d336971f8e",
"text": "In this paper, a fuzzy controller for an inverted pendulum system is presented in two stages. These stages are: investigation of fuzzy control system modeling methods and solution of the “Inverted Pendulum Problem” by using Java programming with Applets for internet based control education. In the first stage, fuzzy modeling and fuzzy control system investigation, Java programming language, classes and multithreading were introduced. In the second stage specifically, simulation of the inverted pendulum problem was developed with Java Applets and the simulation results were given. Also some stability concepts are introduced. c © 2007 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "8c2b0e93eae23235335deacade9660f0",
"text": "We design and implement a simple zero-knowledge argument protocol for NP whose communication complexity is proportional to the square-root of the verification circuit size. The protocol can be based on any collision-resistant hash function. Alternatively, it can be made non-interactive in the random oracle model, yielding concretely efficient zk-SNARKs that do not require a trusted setup or public-key cryptography.\n Our protocol is attractive not only for very large verification circuits but also for moderately large circuits that arise in applications. For instance, for verifying a SHA-256 preimage in zero-knowledge with 2-40 soundness error, the communication complexity is roughly 44KB (or less than 34KB under a plausible conjecture), the prover running time is 140 ms, and the verifier running time is 62 ms. This proof is roughly 4 times shorter than a similar proof of ZKB++ (Chase et al., CCS 2017), an optimized variant of ZKBoo (Giacomelli et al., USENIX 2016).\n The communication complexity of our protocol is independent of the circuit structure and depends only on the number of gates. For 2-40 soundness error, the communication becomes smaller than the circuit size for circuits containing roughly 3 million gates or more. Our efficiency advantages become even bigger in an amortized setting, where several instances need to be proven simultaneously.\n Our zero-knowledge protocol is obtained by applying an optimized version of the general transformation of Ishai et al. (STOC 2007) to a variant of the protocol for secure multiparty computation of Damgard and Ishai (Crypto 2006). It can be viewed as a simple zero-knowledge interactive PCP based on \"interleaved\" Reed-Solomon codes.",
"title": ""
},
{
"docid": "49cda71b86a3a6b374616a9013816b38",
"text": "Discriminative localization is essential for fine-grained image classification task, which devotes to recognizing hundreds of subcategories in the same basic-level category. Reflecting on discriminative regions of objects, key differences among different subcategories are subtle and local. Existing methods generally adopt a two-stage learning framework: The first stage is to localize the discriminative regions of objects, and the second is to encode the discriminative features for training classifiers. However, these methods generally have two limitations: (1) Separation of the two-stage learning is time-consuming. (2) Dependence on object and parts annotations for discriminative localization learning leads to heavily labor-consuming labeling. It is highly challenging to address these two important limitations simultaneously. Existing methods only focus on one of them. Therefore, this paper proposes the discriminative localization approach via saliency-guided Faster R-CNN to address the above two limitations at the same time, and our main novelties and advantages are: (1) End-to-end network based on Faster R-CNN is designed to simultaneously localize discriminative regions and encode discriminative features, which accelerates classification speed. (2) Saliency-guided localization learning is proposed to localize the discriminative region automatically, avoiding labor-consuming labeling. Both are jointly employed to simultaneously accelerate classification speed and eliminate dependence on object and parts annotations. Comparing with the state-of-the-art methods on the widely-used CUB-200-2011 dataset, our approach achieves both the best classification accuracy and efficiency.",
"title": ""
},
{
"docid": "b123916f2795ab6810a773ac69bdf00b",
"text": "The acceptance of open data practices by individuals and organizations lead to an enormous explosion in data production on the Internet. The access to a large number of these data is carried out through Web services, which provide a standard way to interact with data. This class of services is known as data services. In this context, users' queries often require the composition of multiple data services to be answered. On the other hand, the data returned by a data service is not always certain due to various raisons, e.g., the service accesses different data sources, privacy constraints, etc. In this paper, we study the basic activities of data services that are affected by the uncertainty of data, more specifically, modeling, invocation and composition. We propose a possibilistic approach that treats the uncertainty in all these activities.",
"title": ""
},
{
"docid": "db41f44f0ecccdd1828ac2789c2cedc9",
"text": "Porter’s generic strategy matrix, which highlights cost leadership, differentiation and focus as the three basic choices for firms, has dominated corporate competitive strategy for the last thirty years. According to this model, a venture can choose how it wants to compete, based on the match between its type of competitive advantage and the market target pursued, as the key determinants of choice (Akan, Allen, Helms & Spralls, 2006:43).",
"title": ""
},
{
"docid": "46e65e7bd9df94abdda811bbd43cecda",
"text": "This chapter offers a broad review of the literature at the nexus between Business Models and innovation studies and examines the notion of Business Model Innovation in three different situations: Business Model Design in newly formed organizations, Business Model Reconfiguration in incumbent firms, and Business Model Innovation in the broad context of sustainability. Tools and perspectives to make sense of Business Models and support managers and entrepreneurs in dealing with Business Model Innovation are reviewed and organized around a synthesizing meta-framework. The framework elucidates the nature of the complementarities across various perspectives. Finally, the use of business model-related ideas in practice is discussed, and critical managerial challenges as they relate to Business Model Innovation and managing business models are identified and examined.",
"title": ""
},
{
"docid": "e159ffe1f686e400b28d398127edfc5c",
"text": "In this paper, we present an in-vehicle computing system capable of localizing lane markings and communicating them to drivers. To the best of our knowledge, this is the first system that combines the Maximally Stable Extremal Region (MSER) technique with the Hough transform to detect and recognize lane markings (i.e., lines and pictograms). Our system begins by localizing the region of interest using the MSER technique. A three-stage refinement computing algorithm is then introduced to enhance the results of MSER and to filter out undesirable information such as trees and vehicles. To achieve the requirements of real-time systems, the Progressive Probabilistic Hough Transform (PPHT) is used in the detection stage to detect line markings. Next, the recognition of the color and the form of line markings is performed; this it is based on the results of the application of the MSER to left and right line markings. The recognition of High-Occupancy Vehicle pictograms is performed using a new algorithm, based on the results of MSER regions. In the tracking stage, Kalman filter is used to track both ends of each detected line marking. Several experiments are conducted to show the efficiency of our system. © 2015 Elsevier B.V. All rights reserved.",
"title": ""
},
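As a hedged illustration of the MSER-plus-PPHT pipeline described above, the sketch below chains OpenCV's MSER detector, Canny edges, and the probabilistic Hough transform on a synthetic road image; all thresholds and the refinement steps are placeholders rather than the tuned values from the paper.

```python
import cv2
import numpy as np

def detect_lane_candidates(gray):
    """Localize stable bright regions with MSER, then fit line segments with PPHT.

    Thresholds below are placeholders to illustrate the MSER -> Hough pipeline.
    """
    mser = cv2.MSER_create()
    regions, _ = mser.detectRegions(gray)

    # Rasterize MSER pixels into a mask; lane paint tends to form stable regions.
    mask = np.zeros_like(gray)
    for pts in regions:
        mask[pts[:, 1], pts[:, 0]] = 255

    edges = cv2.Canny(mask, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 40,
                            minLineLength=30, maxLineGap=10)
    return [] if lines is None else [tuple(l[0]) for l in lines]

# Synthetic road frame: dark background with two bright "lane" stripes.
frame = np.zeros((240, 320), dtype=np.uint8)
cv2.line(frame, (80, 239), (140, 0), 255, 5)
cv2.line(frame, (240, 239), (180, 0), 255, 5)
print(len(detect_lane_candidates(frame)), "candidate segments found")
```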
{
"docid": "f45fc4d1cefa08f09b60752f44359090",
"text": "A novel organization of switched capacitor charge pump circuits based on voltage doubler structures is presented in this paper. Each voltage doubler takes a dc input and outputs a doubled dc voltage. By cascading voltage doublers the output voltage increases up to2 times. A two-phase voltage doubler and a multiphase voltage doubler (MPVD) structures are discussed and design considerations are presented. A simulator working in the – realm was used for simplified circuit level simulation. In order to evaluate the power delivered by a charge pump, a resistive load is attached to the output of the charge pump and an equivalent capacitance is evaluated. A comparison of the voltage doubler circuits with Dickson charge pump and Makowski’s voltage multiplier is presented in terms of the area requirements, the voltage gain, and the power level. This paper also identifies optimum loading conditions for different configurations of the charge pumps. Design guidelines for the desired voltage and power levels are discussed. A two-stage MPVD was fabricated using MOSIS 2.0m CMOS technology. It was designed with internal frequency regulation to reduce power consumption under no load condition.",
"title": ""
},
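To make the gain relations in the passage above concrete, the hedged sketch below compares the ideal no-load output of n cascaded voltage doublers (2^n growth) with the linear growth of an n-stage Dickson pump; it ignores switch losses, parasitics, and loading, which the paper models via an equivalent output resistance.

```python
def doubler_cascade_gain(n_stages):
    """Ideal no-load gain of n cascaded voltage doublers: Vout = Vin * 2**n."""
    return 2 ** n_stages

def dickson_gain(n_stages):
    """Ideal no-load gain of an n-stage Dickson pump: Vout = Vin * (n + 1)."""
    return n_stages + 1

vin = 1.8  # volts, an arbitrary example supply
for n in range(1, 5):
    print(f"{n} stage(s): doubler cascade -> {vin * doubler_cascade_gain(n):.1f} V, "
          f"Dickson -> {vin * dickson_gain(n):.1f} V")
```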
{
"docid": "0a3ff05dc001e66be2fcd1a71973a8d7",
"text": "Recent advances in evaluating and measuring the perceived visual quality of three-dimensional (3-D) polygonal models are presented in this article, which analyzes the general process of objective quality assessment metrics and subjective user evaluation methods and presents a taxonomy of existing solutions. Simple geometric error computed directly on the 3-D models does not necessarily reflect the perceived visual quality; therefore, integrating perceptual issues for 3-D quality assessment is of great significance. This article discusses existing metrics, including perceptually based ones, computed either on 3-D data or on two-dimensional (2-D) projections, and evaluates their performance for their correlation with existing subjective studies.",
"title": ""
},
{
"docid": "188d9e1b0244aa7f68610dab9d852ab9",
"text": "We present two real-time hidden Markov model-based systems for recognizing sentence-level continuous American Sign Language (ASL) using a single camera to track the user’s unadorned hands. The first system observes the user from a desk mounted camera and achieves 92 percent word accuracy. The second system mounts the camera in a cap worn by the user and achieves 98 percent accuracy (97 percent with an unrestricted grammar). Both experiments use a 40-word lexicon.",
"title": ""
},
{
"docid": "40be421f4d66283357c22fa9cd59790f",
"text": "We have examined standards required for successful e-commerce (EC) architectures and evaluated the strengths and limitations of current systems that have been developed to support EC. We find that there is an unfilled need for systems that can reliably locate buyers and sellers in electronic marketplaces and also facilitate automated transactions. The notion of a ubiquitous network where loosely coupled buyers and sellers can reliably find each other in real time, evaluate products, negotiate prices, and conduct transactions is not adequately supported by current systems. These findings were based on an analysis of mainline EC architectures: EDI, company Websites, B2B hubs, e-Procurement systems, and Web Services. Limitations of each architecture were identified. Particular attention was given to the strengths and weaknesses of the Web Services architecture, since it may overcome some limitations of the other approaches. # 2004 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "9553dd3188ddd40f6c958f8e606e91b6",
"text": "Cloud simulation is important for creating images of outdoor scenes. However, the complexity of this natural phenomenon makes the simulation of large-scale clouds difficult in real time. In this paper, we present a new method for 3D cloud simulation in which cloud animation is simplified and simulated by approximating Lennard-Jones Potential. To solve the N-body problem in Lennard-Jones Potential, we minimized the interaction between particles by dividing the simulation space into many cells and we defined a cutoff distance to perform calculation between neighboring particles. Additionally, a separate distance is introduced between particles to maintain the stability in the Lennard-Jones system. Our experimental results demonstrate that our method is computationally inexpensive and suitable for real time applications where large-scale simulation of clouds is required.",
"title": ""
},
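The passage above approximates the Lennard-Jones potential with a cutoff distance and cell partitioning; the hedged sketch below shows the truncated pair potential and a brute-force energy sum that skips pairs beyond the cutoff, with epsilon, sigma, and the cutoff chosen purely for illustration.

```python
import numpy as np

def lj_potential(r, epsilon=1.0, sigma=1.0, r_cut=2.5):
    """Lennard-Jones pair potential, truncated beyond the cutoff distance.

    V(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6) for r < r_cut, else 0.
    """
    r = np.asarray(r, dtype=float)
    sr6 = (sigma / r) ** 6
    v = 4.0 * epsilon * (sr6 ** 2 - sr6)
    return np.where(r < r_cut, v, 0.0)

def pairwise_energy(positions, epsilon=1.0, sigma=1.0, r_cut=2.5):
    """Total energy of a small particle cloud, skipping pairs beyond the cutoff."""
    total = 0.0
    n = len(positions)
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(positions[i] - positions[j])
            if r < r_cut:                      # the cutoff keeps this cheap
                total += float(lj_potential(r, epsilon, sigma, r_cut))
    return total

cloud = np.random.default_rng(3).uniform(0.0, 5.0, size=(50, 3))
print("cloud energy:", round(pairwise_energy(cloud), 3))
```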
{
"docid": "499f3d46aff5196eff4f7550f8374b67",
"text": "The main task for any level of coach is to construct a training program that will ensure continual progression of an athlete whilst avoiding injury. This is a particularly challenging task with athletes who have had several training years behind them. According to the principles of training, to ensure adaptation, overload in the form of manipulating frequency, volume and intensity must be applied. Furthermore, training exercises must be specific to the target task to ensure a carry over effect. Biomechanics is a sports science sub-discipline which is able to quantify the potential effect of training exercises, rather than leaving it to the coaches \"gut feel\".",
"title": ""
}
] |
scidocsrr
|
1578f521994cdea00141a1737327d677
|
ABC and 3D: opportunities and obstacles to 3D printing in special education environments
|
[
{
"docid": "20fa84c01c29609825302e4cc2bf4094",
"text": "In this paper, we introduce the origins and applications of digital fabrication and \"making\" in education, and discuss how they can be implemented, researched, and developed in schools. Our discussion is based on several papers and posters that we summarize into three categories: research, technology development, and experiences in formal and informal education.",
"title": ""
},
{
"docid": "073cd7c54b038dcf69ae400f97a54337",
"text": "Interventions to support children with autism often include the use of visual supports, which are cognitive tools to enable learning and the production of language. Although visual supports are effective in helping to diminish many of the challenges of autism, they are difficult and time-consuming to create, distribute, and use. In this paper, we present the results of a qualitative study focused on uncovering design guidelines for interactive visual supports that would address the many challenges inherent to current tools and practices. We present three prototype systems that address these design challenges with the use of large group displays, mobile personal devices, and personal recording technologies. We also describe the interventions associated with these prototypes along with the results from two focus group discussions around the interventions. We present further design guidance for visual supports and discuss tensions inherent to their design.",
"title": ""
}
] |
[
{
"docid": "9fa20791d2e847dbd2c7204d00eec965",
"text": "As neurobiological evidence points to the neocortex as the brain region mainly involved in high-level cognitive functions, an innovative model of neocortical information processing has been recently proposed. Based on a simplified model of a neocortical neuron, and inspired by experimental evidence of neocortical organisation, the Hierarchical Temporal Memory (HTM) model attempts at understanding intelligence, but also at building learning machines. This paper focuses on analysing HTM's ability for online, adaptive learning of sequences. In particular, we seek to determine whether the approach is robust to noise in its inputs, and to compare and contrast its performance and attributes to an alternative Hidden Markov Model (HMM) approach. We reproduce a version of a HTM network and apply it to a visual pattern recognition task under various learning conditions. Our first set of experiments explore the HTM network's capability to learn repetitive patterns and sequences of patterns within random data streams. Further experimentation involves assessing the network's learning performance in terms of inference and prediction under different noise conditions. HTM results are compared with those of a HMM trained at the same tasks. Online learning performance results demonstrate the HTM's capacity to make use of context in order to generate stronger predictions, whereas results on robustness to noise reveal an ability to deal with noisy environments. Our comparisons also, however, emphasise a manner in which HTM differs significantly from HMM, which is that HTM generates predicted observations rather than hidden states, and each observation is a sparse distributed representation.",
"title": ""
},
{
"docid": "f4cc2848713439b162dc5fc255c336d2",
"text": "We consider the problem of waveform design for multiple input/multiple output (MIMO) radars, where the transmit waveforms are adjusted based on target and clutter statistics. A model for the radar returns which incorporates the transmit waveforms is developed. The target detection problem is formulated for that model. Optimal and suboptimal algorithms are derived for designing the transmit waveforms under different assumptions regarding the statistical information available to the detector. The performance of these algorithms is illustrated by computer simulation.",
"title": ""
},
{
"docid": "1f218afceb60fe63ea0e137207f6faf7",
"text": "To present the prevalence, clinical relevance, and ultrasound (US) and magnetic resonance imaging (MRI) appearances of the accessory coracobrachialis (ACB) muscle. We present an US prospective study of the ACB muscle over a 2-year period. Five of the eight patients with suspected ACB on US were subsequently examined by MRI. An ACB muscle was demonstrated by US in eight patients (eight shoulders), including seven females, one male, with mean age 39 years, over 770 (664 patients) consecutive shoulder US examinations referred to our institution yielding a prevalence of 1.04 %. In dynamic US assessment, one case of subcoracoid impingement secondary to a bulky ACB was diagnosed. No thoracic outlet syndrome was encountered in the remaining cases. MRI confirmed the presence of the accessory muscle in five cases. ACB muscle is a rarely reported yet not uncommon anatomic variation of the shoulder musculature encountered only in eight of 664 patients referred for shoulder US study. Its US and MRI appearance is described. One of our patients presented with subcoracoid impingement related to the presence of an ACB.",
"title": ""
},
{
"docid": "f2026d9d827c088711875acc56b12b70",
"text": "The goal of the study is to formalize the concept of viral marketing (VM) as a close derivative of contagion models from epidemiology. The study examines in detail the two common mathematical models of epidemic spread and their marketing implications. The SIR and SEIAR models of infectious disease spread are examined in detail. From this analysis of the epidemiological foundations along with a review of relevant marketing literature, a marketing model of VM is developed. This study demonstrates the key elements that define viral marketing as a formal marketing concept and the distinctive mechanical features that differ from conventional marketing.",
"title": ""
},
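Since the passage above formalises viral marketing as a close derivative of the SIR contagion model, the hedged sketch below integrates the SIR equations with forward Euler; the rate constants and the marketing reading of the compartments are illustrative assumptions.

```python
def simulate_sir(beta=0.3, gamma=0.1, s0=0.99, i0=0.01, days=120, dt=1.0):
    """Forward-Euler integration of the SIR equations on normalized populations.

    dS/dt = -beta*S*I,  dI/dt = beta*S*I - gamma*I,  dR/dt = gamma*I.
    In the marketing reading, "infected" stands for customers actively passing
    the message on, and "recovered" for those who have stopped spreading it.
    """
    s, i, r = s0, i0, 0.0
    history = [(s, i, r)]
    for _ in range(int(days / dt)):
        new_infections = beta * s * i * dt
        new_recoveries = gamma * i * dt
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((s, i, r))
    return history

peak_day, peak = max(enumerate(h[1] for h in simulate_sir()), key=lambda t: t[1])
print(f"infection (buzz) peaks around day {peak_day} at {peak:.2%} of the population")
```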
{
"docid": "808a6c959eb79deb6ac5278805f5b855",
"text": "Recently there has been a lot of work on pruning filters from deep convolutional neural networks (CNNs) with the intention of reducing computations. The key idea is to rank the filters based on a certain criterion (say, l1-norm, average percentage of zeros, etc) and retain only the top ranked filters. Once the low scoring filters are pruned away the remainder of the network is fine tuned and is shown to give performance comparable to the original unpruned network. In this work, we report experiments which suggest that the comparable performance of the pruned network is not due to the specific criterion chosen but due to the inherent plasticity of deep neural networks which allows them to recover from the loss of pruned filters once the rest of the filters are fine-tuned. Specifically, we show counter-intuitive results wherein by randomly pruning 25-50% filters from deep CNNs we are able to obtain the same performance as obtained by using state of the art pruning methods. We empirically validate our claims by doing an exhaustive evaluation with VGG-16 and ResNet-50. Further, we also evaluate a real world scenario where a CNN trained on all 1000 ImageNet classes needs to be tested on only a small set of classes at test time (say, only animals). We create a new benchmark dataset from ImageNet to evaluate such class specific pruning and show that even here a random pruning strategy gives close to state of the art performance. Lastly, unlike existing approaches which mainly focus on the task of image classification, in this work we also report results on object detection. We show that using a simple random pruning strategy we can achieve significant speed up in object detection (74% improvement in fps) while retaining the same accuracy as that of the original Faster RCNN model.",
"title": ""
},
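As a hedged illustration of the comparison made in the passage above, the sketch below ranks the filters of a mock convolution weight tensor by l1-norm and, alternatively, keeps a random subset of the same size; it uses plain NumPy rather than any deep-learning framework, and the tensor shape and keep ratio are arbitrary.

```python
import numpy as np

def prune_filters(weights, keep_ratio=0.5, strategy="l1", seed=0):
    """Return indices of filters to keep from a (out_ch, in_ch, k, k) tensor.

    'l1' keeps the filters with the largest l1-norms; 'random' keeps a random
    subset of the same size, the baseline the passage argues is surprisingly
    competitive once the network is fine-tuned.
    """
    out_ch = weights.shape[0]
    n_keep = max(1, int(round(keep_ratio * out_ch)))
    if strategy == "l1":
        scores = np.abs(weights).reshape(out_ch, -1).sum(axis=1)
        keep = np.argsort(scores)[-n_keep:]
    elif strategy == "random":
        keep = np.random.default_rng(seed).choice(out_ch, size=n_keep, replace=False)
    else:
        raise ValueError(f"unknown strategy: {strategy}")
    return np.sort(keep)

conv_w = np.random.default_rng(4).normal(size=(64, 32, 3, 3))   # mock conv layer
print("l1 keeps    :", prune_filters(conv_w, 0.25, "l1")[:8], "...")
print("random keeps:", prune_filters(conv_w, 0.25, "random")[:8], "...")
```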
{
"docid": "435925ecebc5a13f0a0547961f12fd27",
"text": "Feature subset selection is one of the key problems in the area of pattern recognition and machine learning. Feature subset selection refers to the problem of selecting only those features that are useful in predicting a target concept i. e. class. Most of the data acquired through different sources are not particularly screened for any specific task e. g. classification, clustering, anomaly detection, etc. When this data is fed to a learning algorithm, its results deteriorate. The proposed method is a pure filter based feature subset selection technique that incurs less computational cost and highly efficient in terms of classification accuracy. Moreover, along with high accuracy the proposed method requires less number of features in most of the cases. In the proposed method the issue of feature ranking and threshold value selection is addressed. The proposed method adaptively selects number of features as per the worth of an individual feature in the dataset. An extensive experimentation is performed, comprised of a number of benchmark datasets over three well known classification algorithms. Empirical results endorse efficiency and effectiveness of the proposed method.",
"title": ""
},
{
"docid": "c187a6ad17503d269fe4c3a03fc4fd89",
"text": "Despite the widespread support for live migration of Virtual Machines (VMs) in current hypervisors, these have significant shortcomings when it comes to migration of certain types of VMs. More specifically, with existing algorithms, there is a high risk of service interruption when migrating VMs with high workloads and/or over low-bandwidth networks. In these cases, VM memory pages are dirtied faster than they can be transferred over the network, which leads to extended migration downtime. In this contribution, we study the application of delta compression during the transfer of memory pages in order to increase migration throughput and thus reduce downtime. The delta compression live migration algorithm is implemented as a modification to the KVM hypervisor. Its performance is evaluated by migrating VMs running different type of workloads and the evaluation demonstrates a significant decrease in migration downtime in all test cases. In a benchmark scenario the downtime is reduced by a factor of 100. In another scenario a streaming video server is live migrated with no perceivable downtime to the clients while the picture is frozen for eight seconds using standard approaches. Finally, in an enterprise application scenario, the delta compression algorithm successfully live migrates a very large system that fails after migration using the standard algorithm. Finally, we discuss some general effects of delta compression on live migration and analyze when it is beneficial to use this technique.",
"title": ""
},
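To illustrate the delta-compression idea in the passage above, the hedged sketch below XORs a cached copy of a memory page against its dirtied version and deflates the result, comparing the payload size with compressing the full page; the page size and dirty pattern are illustrative, and the real implementation lives inside the KVM migration path rather than in Python.

```python
import zlib
import numpy as np

PAGE_SIZE = 4096

def delta_compress(old_page, new_page, level=6):
    """Compress the XOR difference between the cached and the dirtied page.

    Pages that changed in only a few places XOR to mostly zeros, which deflate
    shrinks far better than the raw page contents.
    """
    delta = np.bitwise_xor(old_page, new_page).tobytes()
    return zlib.compress(delta, level)

def delta_decompress(old_page, payload):
    delta = np.frombuffer(zlib.decompress(payload), dtype=np.uint8)
    return np.bitwise_xor(old_page, delta)

rng = np.random.default_rng(5)
old = rng.integers(0, 256, PAGE_SIZE, dtype=np.uint8)
new = old.copy()
new[100:132] = rng.integers(0, 256, 32, dtype=np.uint8)     # 32 dirty bytes

payload = delta_compress(old, new)
assert np.array_equal(delta_decompress(old, payload), new)
print("full page:", len(zlib.compress(new.tobytes())), "bytes compressed;",
      "delta:", len(payload), "bytes")
```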
{
"docid": "8a243d17a61f75ef9a881af120014963",
"text": "This paper presents a Deep Mayo Predictor model for predicting the outcomes of the matches in IPL 9 being played in April – May, 2016. The model has three components which are based on multifarious considerations emerging out of a deeper analysis of T20 cricket. The models are created using Data Analytics methods from machine learning domain. The prediction accuracy obtained is high as the Mayo Predictor Model is able to correctly predict the outcomes of 39 matches out of the 56 matches played in the league stage of the IPL IX tournament. Further improvement in the model can be attempted by using a larger training data set than the one that has been utilized in this work. No such effort at creating predictor models for cricket matches has been reported in the literature.",
"title": ""
},
{
"docid": "cf0a4f12c23b42c08b6404fe897ed646",
"text": "By performing computation at the location of data, non-Von Neumann (VN) computing should provide power and speed benefits over conventional (e.g., VN-based) approaches to data-centric workloads such as deep learning. For the on-chip training of largescale deep neural networks using nonvolatile memory (NVM) based synapses, success will require performance levels (e.g., deep neural network classification accuracies) that are competitive with conventional approaches despite the inherent imperfections of such NVM devices, and will also require massively parallel yet low-power read and write access. In this paper, we focus on the latter requirement, and outline the engineering tradeoffs in performing parallel reads and writes to large arrays of NVM devices to implement this acceleration through what is, at least locally, analog computing. We address how the circuit requirements for this new neuromorphic computing approach are somewhat reminiscent of, yet significantly different from, the well-known requirements found in conventional memory applications. We discuss tradeoffs that can influence both the effective acceleration factor (“speed”) and power requirements of such on-chip learning accelerators. P. Narayanan A. Fumarola L. L. Sanches K. Hosokawa S. C. Lewis R. M. Shelby G. W. Burr",
"title": ""
},
{
"docid": "57c9170c8cbf4dda16538e8af5eb59e5",
"text": "Companies that offer loyalty reward programs believe that their programs have a long-run positive effect on customer evaluations and behavior. However, if loyalty rewards programs increase relationship durations and usage levels, customers will be increasingly exposed to the complete spectrum of service experiences, including experiences that may cause customers to switch to another service provider. Using cross-sectional, time-series data from a worldwide financial services company that offers a loyalty reward program, this article investigates the conditions under which a loyalty rewards program will have a positive effect on customer evaluations, behavior, and repeat purchase intentions. The results show that members in the loyalty reward program overlook or discount negative evaluations of the company vis-à-vis competition. One possible reason could be that members of the loyalty rewards program perceive that they are getting better quality and service for their price or, in other words, “good value.”",
"title": ""
},
{
"docid": "2ad6b17fcb0ea20283e318a3fed2939f",
"text": "A fundamental problem of time series is k nearest neighbor (k-NN) query processing. However, existing methods are not fast enough for large dataset. In this paper, we propose a novel approach, STS3, to process k-NN queries by transforming time series to sets and measure the similarity under Jaccard metric. Our approach is more accurate than Dynamic Time Warping(DTW) in our suitable scenarios and it is faster than most of the existing methods, due to the efficient similarity search for sets. Besides, we also developed an index, a pruning and an approximation technique to improve the k-NN query procedure. As shown in the experimental results, all of them could accelerate the query processing effectively.",
"title": ""
},
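A minimal sketch of the set-based k-NN idea from the STS3 abstract follows. The abstract does not spell out how a time series is turned into a set, so the `series_to_set` discretization below (time buckets crossed with quantized amplitude levels) is a purely hypothetical stand-in; only the Jaccard scoring and the top-k selection reflect what the abstract states.

```python
import numpy as np

def series_to_set(series, n_levels=8, bucket=4):
    """Hypothetical discretization: quantize each sample into one of n_levels
    amplitude bins and group positions into time buckets, so a series becomes
    a set of (time_bucket, level) tokens."""
    x = np.asarray(series, dtype=float)
    lo, hi = x.min(), x.max()
    levels = np.floor((x - lo) / (hi - lo + 1e-9) * n_levels).astype(int)
    return {(i // bucket, int(l)) for i, l in enumerate(levels)}

def jaccard(a, b):
    """Jaccard similarity |A ∩ B| / |A ∪ B| between two token sets."""
    return len(a & b) / len(a | b) if (a or b) else 1.0

def knn(query, candidates, k=3):
    """Return the indices of the k candidate series most similar to the query."""
    qs = series_to_set(query)
    sims = [(jaccard(qs, series_to_set(c)), i) for i, c in enumerate(candidates)]
    return [i for _, i in sorted(sims, reverse=True)[:k]]
```

Because the representation is a plain set, standard set-similarity indexing tricks (e.g., prefix filtering or MinHash) become applicable for pruning, which is consistent with the indexing and pruning techniques the abstract mentions.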
{
"docid": "a8edc02eb78637f18fc948d81397fc75",
"text": "When we are investigating an object in a data set, which itself may or may not be an outlier, can we identify unusual (i.e., outlying) aspects of the object? In this paper, we identify the novel problem of mining outlying aspects on numeric data. Given a query object $$o$$ o in a multidimensional numeric data set $$O$$ O , in which subspace is $$o$$ o most outlying? Technically, we use the rank of the probability density of an object in a subspace to measure the outlyingness of the object in the subspace. A minimal subspace where the query object is ranked the best is an outlying aspect. Computing the outlying aspects of a query object is far from trivial. A naïve method has to calculate the probability densities of all objects and rank them in every subspace, which is very costly when the dimensionality is high. We systematically develop a heuristic method that is capable of searching data sets with tens of dimensions efficiently. Our empirical study using both real data and synthetic data demonstrates that our method is effective and efficient.",
"title": ""
},
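The density-rank formulation in the abstract above lends itself to a brute-force illustration: estimate the density of every object in each low-dimensional subspace, rank the query object, and report the subspace where its rank is best (rank 1 means lowest density, i.e., most outlying). The Gaussian product kernel, the fixed bandwidth, and the cap on subspace size are simplifying assumptions; the paper's heuristic search is far more efficient than this exhaustive sketch.

```python
import itertools
import numpy as np

def kde_density(data, bandwidth=1.0):
    """Unnormalized Gaussian kernel density of every row of `data` (n, d)
    evaluated against all rows; normalization is irrelevant for ranking."""
    diffs = data[:, None, :] - data[None, :, :]            # (n, n, d)
    sq = np.sum((diffs / bandwidth) ** 2, axis=-1)         # (n, n)
    return np.exp(-0.5 * sq).sum(axis=1)                   # (n,)

def outlying_aspects(X, q_idx, max_dim=2, bandwidth=1.0):
    """Exhaustively search subspaces of up to max_dim dimensions and return
    (subspace, rank) where the query object's density rank is best; iterating
    dimensionality in ascending order keeps the reported subspace minimal."""
    n, d = X.shape
    best = None
    for k in range(1, max_dim + 1):
        for subspace in itertools.combinations(range(d), k):
            dens = kde_density(X[:, list(subspace)], bandwidth)
            rank = int(np.sum(dens < dens[q_idx])) + 1      # 1 = most outlying
            if best is None or rank < best[1]:
                best = (subspace, rank)
    return best

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
X[0, 2] = 6.0                        # object 0 is unusual along dimension 2 only
print(outlying_aspects(X, q_idx=0))  # expected: a subspace containing dimension 2
```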
{
"docid": "3b9af99b33c15188a8ec50c7decd3b28",
"text": "The recent advances in deep neural networks have convincingly demonstrated high capability in learning vision models on large datasets. Nevertheless, collecting expert labeled datasets especially with pixel-level annotations is an extremely expensive process. An appealing alternative is to render synthetic data (e.g., computer games) and generate ground truth automatically. However, simply applying the models learnt on synthetic images may lead to high generalization error on real images due to domain shift. In this paper, we facilitate this issue from the perspectives of both visual appearance-level and representation-level domain adaptation. The former adapts source-domain images to appear as if drawn from the \"style\" in the target domain and the latter attempts to learn domain-invariant representations. Specifically, we present Fully Convolutional Adaptation Networks (FCAN), a novel deep architecture for semantic segmentation which combines Appearance Adaptation Networks (AAN) and Representation Adaptation Networks (RAN). AAN learns a transformation from one domain to the other in the pixel space and RAN is optimized in an adversarial learning manner to maximally fool the domain discriminator with the learnt source and target representations. Extensive experiments are conducted on the transfer from GTA5 (game videos) to Cityscapes (urban street scenes) on semantic segmentation and our proposal achieves superior results when comparing to state-of-the-art unsupervised adaptation techniques. More remarkably, we obtain a new record: mIoU of 47.5% on BDDS (drive-cam videos) in an unsupervised setting.",
"title": ""
},
{
"docid": "30941e0bc8575047d1adc8c20983823b",
"text": "The world has changed dramatically for wind farm operators and service providers in the last decade. Organizations whose turbine portfolios was counted in 10-100s ten years ago are now managing large scale operation and service programs for fleet sizes well above one thousand turbines. A big challenge such organizations now face is the question of how the massive amount of operational data that are generated by large fleets are effectively managed and how value is gained from the data. A particular hard challenge is the handling of data streams collected from advanced condition monitoring systems. These data are highly complex and typically require expert knowledge to interpret correctly resulting in poor scalability when moving to large Operation and Maintenance (O&M) platforms.",
"title": ""
},
{
"docid": "4bce887df71f59085938c8030e7b0c1c",
"text": "Context plays an important role in human language understanding, thus it may also be useful for machines learning vector representations of language. In this paper, we explore an asymmetric encoder-decoder structure for unsupervised context-based sentence representation learning. We carefully designed experiments to show that neither an autoregressive decoder nor an RNN decoder is required. After that, we designed a model which still keeps an RNN as the encoder, while using a non-autoregressive convolutional decoder. We further combine a suite of effective designs to significantly improve model efficiency while also achieving better performance. Our model is trained on two different large unlabelled corpora, and in both cases the transferability is evaluated on a set of downstream NLP tasks. We empirically show that our model is simple and fast while producing rich sentence representations that excel in downstream tasks.",
"title": ""
},
{
"docid": "7527cfe075027c9356645419c4fd1094",
"text": "ive Multi-Document Summarization via Phrase Selection and Merging∗ Lidong Bing§ Piji Li Yi Liao Wai Lam Weiwei Guo† Rebecca J. Passonneau‡ §Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA USA Department of Systems Engineering and Engineering Management, The Chinese University of Hong Kong †Yahoo Labs, Sunnyvale, CA, USA ‡Center for Computational Learning Systems, Columbia University, New York, NY, USA §[email protected], {pjli, yliao, wlam}@se.cuhk.edu.hk †[email protected], ‡[email protected]",
"title": ""
},
{
"docid": "f24c9f07945572ed467f397e4274060e",
"text": "Scholarly digital libraries have become an important source of bibliographic records for scientific communities. Author name search is one of the most common query exercised in digital libraries. The name ambiguity problem in the context of author search in digital libraries, arising from multiple authors sharing the same name, poses many challenges. A number of name disambiguation methods have been proposed in the literature so far. A variety of bibliographic attributes have been considered in these methods. However, hardly any effort has been made to assess the potential contribution of these attributes. We, for the first time, evaluate the potential strength and/or weaknesses of these attributes by a rigorous course of experiments on a large data set. We also explore the potential utility of some attributes from different perspective. A close look reveals that most of the earlier work require one or more attributes which are difficult to obtain in practical applications. Based on this empirical study, we identify three very common and easy to access attributes and propose a two-step hierarchical clustering technique to solve name ambiguity using these attributes only. Experimental results on data set extracted from a popular digital library show that the proposed method achieves significantly high level of accuracy (> 90%) for most of the instances.",
"title": ""
},
{
"docid": "fa75c21227d8e9e417c54552f8dbe2f9",
"text": "Autonomous intelligent cruise control (AICC) is a technology for driver convenience, increased safety, and smoother traffic flow. AICC also has been proposed for increasing traffic flow by allowing shorter intervehicle headways. Because an AI CC-equipped vehicle operates using only information available from its own sensors, there is no requirement for communication and cooperation between vehicles. This format allows gradual market penetration of AICC systems, which makes the technology attractive from a systems implementation standpoint. The potential flow increases when only a proportion of vehicles on a highway are equipped with AICC were examined, and theoretical upper limits on flows as a function of pertinent variables were derived. Because of the limitations of the theoretical models, a simulator was used that models interactions between vehicles to give detailed information on achievable capacity and traffic stream stability. Results showed that AICC can lead to potentially large gains in capacity only if certain highly unrealistic assumptions hold. In reality, the capacity gains from AICC are likely to be small.",
"title": ""
},
{
"docid": "58e3444f3d35d0ad45e5637e7c53efb5",
"text": "An efficient method for text localization and recognition in real-world images is proposed. Thanks to effective pruning, it is able to exhaustively search the space of all character sequences in real time (200ms on a 640x480 image). The method exploits higher-order properties of text such as word text lines. We demonstrate that the grouping stage plays a key role in the text localization performance and that a robust and precise grouping stage is able to compensate errors of the character detector. The method includes a novel selector of Maximally Stable Extremal Regions (MSER) which exploits region topology. Experimental validation shows that 95.7% characters in the ICDAR dataset are detected using the novel selector of MSERs with a low sensitivity threshold. The proposed method was evaluated on the standard ICDAR 2003 dataset where it achieved state-of-the-art results in both text localization and recognition.",
"title": ""
}
] |
scidocsrr
|
002fba58f96c79a98229f37567fa4363
|
Pretty as a Princess: Longitudinal Effects of Engagement With Disney Princesses on Gender Stereotypes, Body Esteem, and Prosocial Behavior in Children.
|
[
{
"docid": "b4dcc5c36c86f9b1fef32839d3a1484d",
"text": "The popular Disney Princess line includes nine films (e.g., Snow White, Beauty and the Beast) and over 25,000 marketable products. Gender role depictions of the prince and princess characters were examined with a focus on their behavioral characteristics and climactic outcomes in the films. Results suggest that the prince and princess characters differ in their portrayal of traditionally masculine and feminine characteristics, these gender role portrayals are complex, and trends towards egalitarian gender roles are not linear over time. Content coding analyses demonstrate that all of the movies portray some stereotypical representations of gender, including the most recent film, The Princess and the Frog. Although both the male and female roles have changed over time in the Disney Princess line, the male characters exhibit more androgyny throughout and less change in their gender role portrayals.",
"title": ""
},
{
"docid": "3d7fabdd5f56c683de20640abccafc44",
"text": "The capacity to exercise control over the nature and quality of one's life is the essence of humanness. Human agency is characterized by a number of core features that operate through phenomenal and functional consciousness. These include the temporal extension of agency through intentionality and forethought, self-regulation by self-reactive influence, and self-reflectiveness about one's capabilities, quality of functioning, and the meaning and purpose of one's life pursuits. Personal agency operates within a broad network of sociostructural influences. In these agentic transactions, people are producers as well as products of social systems. Social cognitive theory distinguishes among three modes of agency: direct personal agency, proxy agency that relies on others to act on one's behest to secure desired outcomes, and collective agency exercised through socially coordinative and interdependent effort. Growing transnational embeddedness and interdependence are placing a premium on collective efficacy to exercise control over personal destinies and national life.",
"title": ""
}
] |
[
{
"docid": "761be34401cc6ef1d8eea56465effca9",
"text": "Résumé: Dans cet article, nous proposons une nouvelle approche pour le résumé automatique de textes utilisant un algorithme d'apprentissage numérique spécifique à la tâche d'ordonnancement. L'objectif est d'extraire les phrases d'un document qui sont les plus représentatives de son contenu. Pour se faire, chaque phrase d'un document est représentée par un vecteur de scores de pertinence, où chaque score est un score de similarité entre une requête particulière et la phrase considérée. L'algorithme d'ordonnancement effectue alors une combinaison linéaire de ces scores, avec pour but d'affecter aux phrases pertinentes d'un document des scores supérieurs à ceux des phrases non pertinentes du même document. Les algorithmes d'ordonnancement ont montré leur efficacité en particulier dans le domaine de la méta-recherche, et leur utilisation pour le résumé est motivée par une analogie peut être faite entre la méta-recherche et le résumé automatique qui consiste, dans notre cas, à considérer les similarités des phrases avec les différentes requêtes comme étant des sorties de différents moteurs de recherche. Nous montrons empiriquement que l'algorithme d'ordonnancement a de meilleures performances qu'une approche utilisant un algorithme de classification sur deux corpus distincts.",
"title": ""
},
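The scoring scheme described in the abstract above reduces, at inference time, to a linear combination of per-query similarity scores followed by a sort. The sketch below assumes the weight vector has already been learned by the ranking algorithm; the weights, the score matrix, and `top_k` are illustrative placeholders.

```python
import numpy as np

def rank_sentences(similarity_scores: np.ndarray, weights: np.ndarray, top_k: int = 3):
    """similarity_scores[i, j] is the similarity of sentence i to query j.
    The learned ranker combines the per-query scores linearly and the
    highest-scoring sentences are extracted as the summary."""
    combined = similarity_scores @ weights
    order = np.argsort(-combined)            # indices sorted by decreasing score
    return order[:top_k].tolist(), combined

# Example with 5 sentences scored against 3 queries and assumed learned weights.
scores = np.array([[0.2, 0.1, 0.4],
                   [0.8, 0.7, 0.6],
                   [0.1, 0.0, 0.2],
                   [0.5, 0.9, 0.3],
                   [0.3, 0.2, 0.1]])
w = np.array([0.5, 0.3, 0.2])
print(rank_sentences(scores, w, top_k=2)[0])   # -> [1, 3]
```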
{
"docid": "c7daf28d656a9e51e5a738e70beeadcf",
"text": "We present a taxonomy for Information Visualization (IV) that characterizes it in terms of data, task, skill and context, as well as a number of dimensions that relate to the input and output hardware, the software tools, as well as user interactions and human perceptual abil ities. We il lustrate the utilit y of the taxonomy by focusing particularly on the information retrieval task and the importance of taking into account human perceptual capabiliti es and limitations. Although the relevance of Psychology to IV is often recognised, we have seen relatively littl e translation of psychological results and theory to practical IV applications. This paper targets the better development of information visualizations through the introduction of a framework delineating the major factors in interface development. We believe that higher quality visualizations will result from structured developments that take into account these considerations and that the framework will also serve to assist the development of effective evaluation and assessment processes.",
"title": ""
},
{
"docid": "a76826da7f077cf41aaa7c8eca9be3fe",
"text": "In this paper we present an open-source design for the development of low-complexity, anthropomorphic, underactuated robot hands with a selectively lockable differential mechanism. The differential mechanism used is a variation of the whiffletree (or seesaw) mechanism, which introduces a set of locking buttons that can block the motion of each finger. The proposed design is unique since with a single motor and the proposed differential mechanism the user is able to control each finger independently and switch between different grasping postures in an intuitive manner. Anthropomorphism of robot structure and motion is achieved by employing in the design process an index of anthropomorphism. The proposed robot hands can be easily fabricated using low-cost, off-the-shelf materials and rapid prototyping techniques. The efficacy of the proposed design is validated through different experimental paradigms involving grasping of everyday life objects and execution of daily life activities. The proposed hands can be used as affordable prostheses, helping amputees regain their lost dexterity.",
"title": ""
},
{
"docid": "5a2649736269f7be88886c2a45243492",
"text": "Modern computer displays tend to be in fixed size, rigid, and rectilinear rendering them insensitive to the visual area demands of an application or the desires of the user. Foldable displays offer the ability to reshape and resize the interactive surface at our convenience and even permit us to carry a very large display surface in a small volume. In this paper, we implement four interactive foldable display designs using image projection with low-cost tracking and explore display behaviors using orientation sensitivity.",
"title": ""
},
{
"docid": "7f0dd680faf446e74aff177dc97b5268",
"text": "Vehicle Ad-Hoc Networks (VANET) enable all components in intelligent transportation systems to be connected so as to improve transport safety, relieve traffic congestion, reduce air pollution, and enhance driving comfort. The vision of all vehicles connected poses a significant challenge to the collection, storage, and analysis of big traffic-related data. Vehicular cloud computing, which incorporates cloud computing into vehicular networks, emerges as a promising solution. Different from conventional cloud computing platform, the vehicle mobility poses new challenges to the allocation and management of cloud resources in roadside cloudlet. In this paper, we study a virtual machine (VM) migration problem in roadside cloudletbased vehicular network and unfold that (1) whether a VM shall be migrated or not along with the vehicle moving and (2) where a VM shall be migrated, in order to minimize the overall network cost for both VM migration and normal data traffic. We first treat the problem as a static off-line VM placement problem and formulate it into a mixed-integer quadratic programming problem. A heuristic algorithm with polynomial time is then proposed to tackle the complexity of solving mixed-integer quadratic programming. Extensive simulation results show that it produces near-optimal performance and outperforms other related algorithms significantly. Copyright © 2015 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "367c6ce6d83baff7de78e9d128123ce8",
"text": "Supporting smooth movement of mobile clients is important when offloading services on an edge computing platform. Interruption-free client mobility demands seamless migration of the offloading service to nearby edge servers. However, fast migration of offloading services across edge servers in a WAN environment poses significant challenges to the handoff service design. In this paper, we present a novel service handoff system which seamlessly migrates offloading services to the nearest edge server, while the mobile client is moving. Service handoff is achieved via container migration. We identify an important performance problem during Docker container migration. Based on our systematic study of container layer management and image stacking, we propose a migration method which leverages the layered storage system to reduce file system synchronization overhead, without dependence on the distributed file system. We implement a prototype system and conduct experiments using real world product applications. Evaluation results reveal that compared to state-of-the-art service handoff systems designed for edge computing platforms, our system reduces the total duration of service handoff time by 80%(56%) with network bandwidth 5Mbps(20Mbps).",
"title": ""
},
{
"docid": "20af5209de71897158820f935018d877",
"text": "This paper presents a new bag-of-entities representation for document ranking, with the help of modern knowledge bases and automatic entity linking. Our system represents query and documents by bag-of-entities vectors constructed from their entity annotations, and ranks documents by their matches with the query in the entity space. Our experiments with Freebase on TREC Web Track datasets demonstrate that current entity linking systems can provide sufficient coverage of the general domain search task, and that bag-of-entities representations outperform bag-of-words by as much as 18% in standard document ranking tasks.",
"title": ""
},
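A tiny sketch of the bag-of-entities matching idea follows: documents and the query are reduced to frequency vectors over linked entity identifiers, and documents are ranked by how strongly they match the query in that entity space. The entity IDs and the frequency-weighted overlap score below are illustrative assumptions; the paper's exact ranking models are not reproduced here.

```python
from collections import Counter

def bag_of_entities(entity_annotations):
    """Frequency vector over the entity IDs produced by an entity linker."""
    return Counter(entity_annotations)

def match_score(query_entities, doc_entities):
    """One simple instantiation of matching in the entity space:
    frequency-weighted overlap between the two bags."""
    q, d = bag_of_entities(query_entities), bag_of_entities(doc_entities)
    return sum(q[e] * d[e] for e in q)

# Hypothetical entity annotations (placeholder IDs, not real Freebase MIDs).
docs = {"doc1": ["E:Obama", "E:USA", "E:Obama"], "doc2": ["E:France", "E:Paris"]}
query = ["E:Obama"]
ranking = sorted(docs, key=lambda name: match_score(query, docs[name]), reverse=True)
print(ranking)   # -> ['doc1', 'doc2']
```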
{
"docid": "ee9bccbfecd58151569449911c624221",
"text": "Hand motion capture is a popular research field, recently gaining more attention due to the ubiquity of RGB-D sensors. However, even most recent approaches focus on the case of a single isolated hand. In this work, we focus on hands that interact with other hands or objects and present a framework that successfully captures motion in such interaction scenarios for both rigid and articulated objects. Our framework combines a generative model with discriminatively trained salient points to achieve a low tracking error and with collision detection and physics simulation to achieve physically plausible estimates even in case of occlusions and missing visual data. Since all components are unified in a single objective function which is almost everywhere differentiable, it can be optimized with standard optimization techniques. Our approach works for monocular RGB-D sequences as well as setups with multiple synchronized RGB cameras. For a qualitative and quantitative evaluation, we captured 29 sequences with a large variety of interactions and up to 150 degrees of freedom.",
"title": ""
},
{
"docid": "7cfeadc550f412bb92df4f265bf99de0",
"text": "AIM\nCorrective image reconstruction methods which produce reconstructed images with improved spatial resolution and decreased noise level became recently commercially available. In this work, we tested the performance of three new software packages with reconstruction schemes recommended by the manufacturers using physical phantoms simulating realistic clinical settings.\n\n\nMETHODS\nA specially designed resolution phantom containing three (99m)Tc lines sources and the NEMA NU-2 image quality phantom were acquired on three different SPECT/CT systems (General Electrics Infinia, Philips BrightView and Siemens Symbia T6). Measurement of both phantoms was done with the trunk filled with a (99m)Tc-water solution. The projection data were reconstructed using the GE's Evolution for Bone(®), Philips Astonish(®) and Siemens Flash3D(®) software. The reconstruction parameters employed (number of iterations and subsets, the choice of post-filtering) followed theses recommendations of each vendor. These results were compared with reference reconstructions using the ordered subset expectation maximization (OSEM) reconstruction scheme.\n\n\nRESULTS\nThe best results (smallest value for resolution, highest percent contrast values) for all three packages were found for the scatter corrected data without applying any post-filtering. The advanced reconstruction methods improve the full width at half maximum (FWHM) of the line sources from 11.4 to 9.5mm (GE), from 9.1 to 6.4mm (Philips), and from 12.1 to 8.9 mm (Siemens) if no additional post filter was applied. The total image quality control index measured for a concentration ratio of 8:1 improves for GE from 147 to 189, from 179. to 325 for Philips and from 217 to 320 for Siemens using the reference method for comparison. The same trends can be observed for the 4:1 concentration ratio. The use of a post-filter reduces the background variability approximately by a factor of two, but deteriorates significantly the spatial resolution.\n\n\nCONCLUSIONS\nUsing advanced reconstruction algorithms the largest improvement in image resolution and contrast is found for the scatter corrected slices without applying post-filtering. The user has to choose whether noise reduction by post-filtering or improved image resolution fits better a particular imaging procedure.",
"title": ""
},
{
"docid": "5545d32ccfd1459c8c7e918c8b324eb5",
"text": "Sequence generative adversarial networks SeqGAN have been used to improve conditional sequence generation tasks, for example, chit-chat dialogue generation. To stabilize the training of SeqGAN, Monte Carlo tree search MCTS or reward at every generation step REGS is used to evaluate the goodness of a generated subsequence. MCTS is computationally intensive, but the performance of REGS is worse than MCTS. In this paper, we propose stepwise GAN StepGAN, in which the discriminator is modified to automatically assign scores quantifying the goodness of each subsequence at every generation step. StepGAN has significantly less computational costs than MCTS. We demonstrate that StepGAN outperforms previous GAN-based methods on both synthetic experiment and chit-chat dialogue generation.",
"title": ""
},
{
"docid": "94640a4ad3b32a307658ca2028dbd589",
"text": "In this paper, we investigate the diversity aspect of paraphrase generation. Prior deep learning models employ either decoding methods or add random input noise for varying outputs. We propose a simple method Diverse Paraphrase Generation (D-PAGE), which extends neural machine translation (NMT) models to support the generation of diverse paraphrases with implicit rewriting patterns. Our experimental results on two real-world benchmark datasets demonstrate that our model generates at least one order of magnitude more diverse outputs than the baselines in terms of a new evaluation metric Jeffrey’s Divergence. We have also conducted extensive experiments to understand various properties of our model with a focus on diversity.",
"title": ""
},
{
"docid": "1608c56c79af07858527473b2b0262de",
"text": "The field weakening control strategy of interior permanent magnet synchronous motor for electric vehicles was studied in the paper. A field weakening control method based on gradient descent of voltage limit according to the ellipse and modified current setting were proposed. The field weakening region was determined by the angle between the constant torque direction and the voltage limited ellipse decreasing direction. The direction of voltage limited ellipse decreasing was calculated by using the gradient descent method. The current reference was modified by the field weakening direction and the magnitude of the voltage error according to the field weakening region. A simulink model was also founded by Matlab/Simulink, and the validity of the proposed strategy was proved by the simulation results.",
"title": ""
},
{
"docid": "ec8847a65f015a52ce90bdd304103658",
"text": "This study has a purpose to investigate the adoption of online games technologies among adolescents and their behavior in playing online games. The findings showed that half of them had experience ten months or less in playing online games with ten hours or less for each time playing per week. Nearly fifty-four percent played up to five times each week where sixty-six percent played two hours or less. Behavioral Intention has significant correlation to model variables naming Perceived Enjoyment, Flow Experience, Performance Expectancy, Effort Expectancy, Social Influence, and Facilitating Conditions; Experience; and the number and duration of game sessions. The last, Performance Expectancy and Facilitating Condition had a positive, medium, and statistically direct effect on Behavioral Intention. Four other variables Perceived Enjoyment, Flow Experience, Effort Expectancy, and Social Influence had positive or negative, medium or small, and not statistically direct effect on Behavioral Intention. Additionally, Flow Experience and Social Influence have no significant different between the mean value for male and female. Other variables have significant different regard to gender, where mean value of male was significantly greater than female except for Age. Practical implications of this study are relevant to groups who have interest to enhance or to decrease the adoption of online games technologies. Those to enhance the adoption of online games technologies must: preserve Performance Expectancy and Facilitating Conditions; enhance Flow Experience, Perceived Enjoyment, Effort Expectancy, and Social Influence; and engage the adolescent's online games behavior, specifically supporting them in longer playing games and in enhancing their experience. The opposite actions to these proposed can be considered to decrease the adoption.",
"title": ""
},
{
"docid": "04eb3cb8f83277b552d9cb80d990cce0",
"text": "The growing momentum of the Internet of Things (IoT) has shown an increase in attack vectors within the security research community. We propose adapting a recent new approach of frequently changing IPv6 address assignment to add an additional layer of security to the Internet of Things. We examine implementing Moving Target IPv6 Defense (MT6D) in IPv6 over Low-Powered Wireless Personal Area Networks (6LoWPAN); a protocol that is being used in wireless sensors found in home automation systems and smart meters. 6LoWPAN allows the Internet of Things to extend into the world of wireless sensor networks. We propose adapting Moving-Target IPv6 Defense for use with 6LoWPAN in order to defend against network-side attacks such as Denial-of-Service and Man-In-The-Middle while maintaining anonymity of client-server communications. This research aims in providing a moving-target defense for wireless sensor networks while maintaining power efficiency within the network.",
"title": ""
},
{
"docid": "6ca68f39cd15b3e698d8df8c99e160a6",
"text": "This paper proposed a novel isolated bidirectional flyback converter integrated with two non-dissipative LC snubbers. In the proposed topology, the main flyback transformer and the LC snubbers are crossed-coupled to reduce current circulation and recycle the leakage energy. The proposed isolated bidirectional flyback converter can step-up the voltage of the battery (Vbat = 12V) to a high voltage side (VHV = 200V) for the load demand and vice versa. The main goal of this paper is to demonstrate the performances of this topology to achieve high voltage gain with less switching losses and reduce components stresses. The circuit analysis conferred in detail for Continuous Conduction Mode (CCM). Lastly, a laboratory prototype constructed to compare with simulation result.",
"title": ""
},
{
"docid": "611c8ce42410f8f678aa5cb5c0de535b",
"text": "User simulators are a principal offline method for training and evaluating human-computer dialog systems. In this paper, we examine simple sequence-to-sequence neural network architectures for training end-to-end, natural language to natural language, user simulators, using only raw logs of previous interactions without any additional human labelling. We compare the neural network-based simulators with a language model (LM)-based approach for creating natural language user simulators. Using both an automatic evaluation using LM perplexity and a human evaluation, we demonstrate that the sequence-tosequence approaches outperform the LM-based method. We show correlation between LM perplexity and the human evaluation on this task, and discuss the benefits of different neural network architecture variations.",
"title": ""
},
{
"docid": "69944e5a5a23abf66be23fe6a56d53cc",
"text": "A 71-76 GHz high dynamic range CMOS RF variable gain amplifier (VGA) is presented. Variable gain is achieved using two current-steering trans-conductance stages, which provide high linearity with relatively low power consumption. The circuit is fabricated in a MS/RF 90-nm CMOS technology and consumes 18-mA total current from a 2-V supply. This VGA achieves a 14-dB maximum gain, a 30-dB gain controlled range, and a 4-dBm output saturation power. To the authorpsilas knowledge, this VGA demonstrates the highest operation frequency among the reported CMOS VGAs.",
"title": ""
},
{
"docid": "bf1b556a1617674ca7b560aa48731f76",
"text": "The increasing complexity of configuring cellular networks suggests that machine learning (ML) can effectively improve 5G technologies. Deep learning has proven successful in ML tasks such as speech processing and computational vision, with a performance that scales with the amount of available data. The lack of large datasets inhibits the flourish of deep learning applications in wireless communications. This paper presents a methodology that combines a vehicle traffic simulator with a raytracing simulator, to generate channel realizations representing 5G scenarios with mobility of both transceivers and objects. The paper then describes a specific dataset for investigating beamselection techniques on vehicle-to-infrastructure using millimeter waves. Experiments using deep learning in classification, regression and reinforcement learning problems illustrate the use of datasets generated with the proposed methodology.",
"title": ""
},
{
"docid": "27f001247d02f075c9279b37acaa49b3",
"text": "A Zadoff–Chu (ZC) sequence is uncorrelated with a non-zero cyclically shifted version of itself. However, this alone is insufficient to mitigate inter-code interference in LTE initial uplink synchronization. The performance of the state-of-the-art algorithms vary widely depending on the specific ZC sequences employed. We develop a systematic procedure to choose the ZC sequences that yield the optimum performance. It turns out that the procedure for ZC code selection in LTE standard is suboptimal when the carrier frequency offset is not small.",
"title": ""
},
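For reference, the standard form of a root Zadoff–Chu sequence and the zero-autocorrelation property that the abstract above relies on can be written as follows, for odd sequence length $N_{\mathrm{ZC}}$ and root index $u$ coprime to $N_{\mathrm{ZC}}$ (the form used for the LTE random-access preamble); how specific roots should then be selected is the paper's contribution and is not captured by these formulas.

```latex
% Root Zadoff--Chu sequence of odd length N_ZC with root index u, gcd(u, N_ZC) = 1:
\[
  x_u(n) \;=\; \exp\!\left(-j\,\frac{\pi u\, n(n+1)}{N_{\mathrm{ZC}}}\right),
  \qquad n = 0, 1, \dots, N_{\mathrm{ZC}} - 1 .
\]
% Constant-amplitude zero-autocorrelation (CAZAC) property: the periodic
% autocorrelation vanishes for every non-zero cyclic shift \tau,
\[
  R_u(\tau) \;=\; \sum_{n=0}^{N_{\mathrm{ZC}}-1}
      x_u(n)\, x_u^{*}\!\big((n+\tau) \bmod N_{\mathrm{ZC}}\big) \;=\; 0,
  \qquad \tau \not\equiv 0 \pmod{N_{\mathrm{ZC}}} ,
\]
% which is why cyclically shifted versions of one root can serve as near-orthogonal
% preambles, while inter-root interference is what careful root selection must limit.
```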
{
"docid": "bd9f01cad764a03f1e6cded149b9adbd",
"text": "Psycholinguistic research has shown that the influence of abstract syntactic knowledge on performance is shaped by particular sentences that have been experienced. To explore this idea, the authors applied a connectionist model of sentence production to the development and use of abstract syntax. The model makes use of (a) error-based learning to acquire and adapt sequencing mechanisms and (b) meaning-form mappings to derive syntactic representations. The model is able to account for most of what is known about structural priming in adult speakers, as well as key findings in preferential looking and elicited production studies of language acquisition. The model suggests how abstract knowledge and concrete experience are balanced in the development and use of syntax.",
"title": ""
}
] |
scidocsrr
|
f2fa921143776e7508b96f6146d7ab80
|
SNIF: a simple nude image finder
|
[
{
"docid": "203359248f9d54f837540bdd7f717ccb",
"text": "This paper presents \\bic (Border/Interior pixel Classification), a compact and efficient CBIR approach suitable for broad image domains. It has three main components: (1) a simple and powerful image analysis algorithm that classifies image pixels as either border or interior, (2) a new logarithmic distance (dLog) for comparing histograms, and (3) a compact representation for the visual features extracted from images. Experimental results show that the BIC approach is consistently more compact, more efficient and more effective than state-of-the-art CBIR approaches based on sophisticated image analysis algorithms and complex distance functions. It was also observed that the dLog distance function has two main advantages over vectorial distances (e.g., L1): (1) it is able to increase substantially the effectiveness of (several) histogram-based CBIR approaches and, at the same time, (2) it reduces by 50% the space requirement to represent a histogram.",
"title": ""
},
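The dLog distance mentioned in the BIC abstract compares histograms after a discrete log transform of the bin counts, which keeps a handful of very populous bins from dominating the comparison. The exact transform is not given in the abstract, so the piecewise function below follows a commonly cited formulation and should be treated as an assumption.

```python
import math

def dlog_bin(count: float) -> float:
    """Discrete log transform of one histogram bin (assumed formulation)."""
    if count == 0:
        return 0.0
    if count <= 1:
        return 1.0
    return math.log2(count) + 1.0

def dlog_distance(hist_a, hist_b) -> float:
    """L1 distance between log-transformed histograms; compared with a plain
    L1 distance, large bins contribute logarithmically rather than linearly."""
    return sum(abs(dlog_bin(a) - dlog_bin(b)) for a, b in zip(hist_a, hist_b))

print(dlog_distance([1024, 2, 0, 1], [512, 2, 4, 0]))
```

Because each transformed bin fits in a few bits, the log transform also explains the abstract's claim of roughly halving the space needed to store a histogram.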
{
"docid": "84a187b1e5331c4e7eb349c8b1358f14",
"text": "We describe the maximum-likelihood parameter estimation problem and how the ExpectationMaximization (EM) algorithm can be used for its solution. We first describe the abstract form of the EM algorithm as it is often given in the literature. We then develop the EM parameter estimation procedure for two applications: 1) finding the parameters of a mixture of Gaussian densities, and 2) finding the parameters of a hidden Markov model (HMM) (i.e., the Baum-Welch algorithm) for both discrete and Gaussian mixture observation models. We derive the update equations in fairly explicit detail but we do not prove any convergence properties. We try to emphasize intuition rather than mathematical rigor.",
"title": ""
}
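As a concrete companion to this tutorial abstract, here is a minimal EM loop for a one-dimensional Gaussian mixture: the E-step computes responsibilities from the current parameters and the M-step re-estimates weights, means, and variances from those responsibilities. The initialization, fixed iteration count, and single-dimensional setting are simplifications; convergence checks and the HMM/Baum-Welch case are omitted.

```python
import numpy as np

def em_gmm_1d(x, k=2, iters=100, seed=0):
    """EM for a 1-D Gaussian mixture model (no convergence test, fixed iterations)."""
    rng = np.random.default_rng(seed)
    mu = rng.choice(x, size=k, replace=False).astype(float)   # initial means
    var = np.full(k, x.var())
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibility of component j for sample i.
        dens = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixing weights, means and variances.
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return pi, mu, var

# Usage: recover the two components of a well-separated synthetic mixture.
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(-2.0, 1.0, 500), rng.normal(3.0, 0.5, 500)])
print(em_gmm_1d(data))
```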
] |
[
{
"docid": "b3c203dabe2c19764634fbc3a6717381",
"text": "This work complements existing research regarding the forgiveness process by highlighting the role of commitment in motivating forgiveness. On the basis of an interdependence-theoretic analysis, the authors suggest that (a) victims' self-oriented reactions to betrayal are antithetical to forgiveness, favoring impulses such as grudge and vengeance, and (b) forgiveness rests on prorelationship motivation, one cause of which is strong commitment. A priming experiment, a cross-sectional survey study, and an interaction record study revealed evidence of associations (or causal effects) of commitment with forgiveness. The commitment-forgiveness association appeared to rest on intent to persist rather than long-term orientation or psychological attachment. In addition, the commitment-forgiveness association was mediated by cognitive interpretations of betrayal incidents; evidence for mediation by emotional reactions was inconsistent.",
"title": ""
},
{
"docid": "a44b74738723580f4056310d6856bb74",
"text": "This book covers the theory and principles of core avionic systems in civil and military aircraft, including displays, data entry and control systems, fly by wire control systems, inertial sensor and air data systems, navigation, autopilot systems an... Use the latest data mining best practices to enable timely, actionable, evidence-based decision making throughout your organization! Real-World Data Mining demystifies current best practices, showing how to use data mining to uncover hidden patterns ... Data Warehousing in the Age of the Big Data will help you and your organization make the most of unstructured data with your existing data warehouse. As Big Data continues to revolutionize how we use data, it doesn't have to create more confusion. Ex... This book explores the concepts of data mining and data warehousing, a promising and flourishing frontier in data base systems and new data base applications and is also designed to give a broad, yet ....",
"title": ""
},
{
"docid": "05a77d687230dc28697ca1751586f660",
"text": "In recent years, there has been a huge increase in the number of bots online, varying from Web crawlers for search engines, to chatbots for online customer service, spambots on social media, and content-editing bots in online collaboration communities. The online world has turned into an ecosystem of bots. However, our knowledge of how these automated agents are interacting with each other is rather poor. Bots are predictable automatons that do not have the capacity for emotions, meaning-making, creativity, and sociality and it is hence natural to expect interactions between bots to be relatively predictable and uneventful. In this article, we analyze the interactions between bots that edit articles on Wikipedia. We track the extent to which bots undid each other's edits over the period 2001-2010, model how pairs of bots interact over time, and identify different types of interaction trajectories. We find that, although Wikipedia bots are intended to support the encyclopedia, they often undo each other's edits and these sterile \"fights\" may sometimes continue for years. Unlike humans on Wikipedia, bots' interactions tend to occur over longer periods of time and to be more reciprocated. Yet, just like humans, bots in different cultural environments may behave differently. Our research suggests that even relatively \"dumb\" bots may give rise to complex interactions, and this carries important implications for Artificial Intelligence research. Understanding what affects bot-bot interactions is crucial for managing social media well, providing adequate cyber-security, and designing well functioning autonomous vehicles.",
"title": ""
},
{
"docid": "d792928284e2d7d9c54621974a4e3e9b",
"text": "This paper presents a new fuzzy controller for semi-active vehicle suspension systems, which has a significantly fewer number of rules in comparison to existing fuzzy controllers. The proposed fuzzy controller has only nine fuzzy rules, whose performance is equivalent to the existing fuzzy controller with 49 fuzzy rules. The proposed controller with less number of fuzzy rules will be more feasible and cost-efficient in hardware implementation. For comparison, a linear quadratic regulator controlled semi-active suspension, and a passive suspension are also implemented and simulated. Simulation results show that the ride comfort and road holding are improved by 28% and 31%, respectively, with the fuzzy controlled semi-active suspension system, in comparison to the linear quadratic regulator controlled semi-active suspension.",
"title": ""
},
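To make the "nine rules" in the abstract above concrete, the sketch below implements a 3 x 3 fuzzy rule table over two normalized inputs (sprung-mass velocity and suspension relative velocity) with weighted-average defuzzification, in the spirit of a skyhook-type damping law. The membership functions, the rule consequents, and the Sugeno-style inference are illustrative assumptions; the paper's actual controller design is not reproduced here.

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    left = (x - a) / (b - a) if b != a else 1.0
    right = (c - x) / (c - b) if c != b else 1.0
    return max(min(left, right), 0.0)

# Three labels per input on a normalized universe [-1, 1]: Negative, Zero, Positive.
LABELS = {"N": (-1.0, -1.0, 0.0), "Z": (-1.0, 0.0, 1.0), "P": (0.0, 1.0, 1.0)}

# 3 x 3 rule table: (body velocity label, suspension velocity label) -> damping level
# in [0, 1].  The consequent values are placeholders, not the paper's tuning.
RULES = {("N", "N"): 0.9, ("N", "Z"): 0.6, ("N", "P"): 0.1,
         ("Z", "N"): 0.6, ("Z", "Z"): 0.3, ("Z", "P"): 0.6,
         ("P", "N"): 0.1, ("P", "Z"): 0.6, ("P", "P"): 0.9}

def fuzzy_damping(body_vel, susp_vel):
    """Nine-rule Sugeno-style inference: each rule fires with the min of its two
    input memberships and the output is the weighted average of the consequents."""
    num = den = 0.0
    for (l1, l2), level in RULES.items():
        w = min(tri(body_vel, *LABELS[l1]), tri(susp_vel, *LABELS[l2]))
        num += w * level
        den += w
    return num / den if den > 0.0 else 0.0

print(fuzzy_damping(0.4, -0.2))   # normalized damping command in [0, 1]
```

Keeping only three labels per input is what brings the rule count down to nine while still covering the whole input plane, which is the source of the implementation-cost argument made in the abstract.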
{
"docid": "a1fef597312118f53e6b1468084a9300",
"text": "The design of highly emissive and stable blue emitters for organic light emitting diodes (OLEDs) is still a challenge, justifying the intense research activity of the scientific community in this field. Recently, a great deal of interest has been devoted to the elaboration of emitters exhibiting a thermally activated delayed fluorescence (TADF). By a specific molecular design consisting into a minimal overlap between the highest occupied molecular orbital (HOMO) and the lowest unoccupied molecular orbital (LUMO) due to a spatial separation of the electron-donating and the electron-releasing parts, luminescent materials exhibiting small S1-T1 energy splitting could be obtained, enabling to thermally upconvert the electrons from the triplet to the singlet excited states by reverse intersystem crossing (RISC). By harvesting both singlet and triplet excitons for light emission, OLEDs competing and sometimes overcoming the performance of phosphorescence-based OLEDs could be fabricated, justifying the interest for this new family of materials massively popularized by Chihaya Adachi since 2012. In this review, we proposed to focus on the recent advances in the molecular design of blue TADF emitters for OLEDs during the last few years.",
"title": ""
},
{
"docid": "a08e1710d15b69ea23980daa722ace0d",
"text": "Olympic combat sports separate athletes into weight divisions, in an attempt to reduce size, strength, range and/or leverage disparities between competitors. Official weigh-ins are conducted anywhere from 3 and up to 24 h prior to competition ensuring athletes meet weight requirements (i.e. have 'made weight'). Fighters commonly aim to compete in weight divisions lower than their day-to-day weight, achieved via chronic and acute manipulations of body mass (BM). Although these manipulations may impair health and absolute performance, their strategic use can improve competitive success. Key considerations are the acute manipulations around weigh-in, which differ in importance, magnitude and methods depending on the requirements of the individual combat sport and the weigh-in regulations. In particular, the time available for recovery following weigh-in/before competition will determine what degree of acute BM loss can be implemented and reversed. Increased exercise and restricted food and fluid intake are undertaken to decrease body water and gut contents reducing BM. When taken to the extreme, severe weight-making practices can be hazardous, and efforts have been made to reduce their prevalence. Indeed some have called for the abolition of these practices altogether. In lieu of adequate strategies to achieve this, and the pragmatic recognition of the likely continuation of these practices as long as regulations allow, this review summarises guidelines for athletes and coaches for manipulating BM and optimising post weigh-in recovery, to achieve better health and performance outcomes across the different Olympic combat sports.",
"title": ""
},
{
"docid": "c157b149d334b2cc1f718d70ef85e75e",
"text": "The large inter-individual variability within the normal population, the limited reproducibility due to habituation or fatigue, and the impact of instruction and the subject's motivation, all constitute a major problem in posturography. These aspects hinder reliable evaluation of the changes in balance control in the case of disease and complicate objectivation of the impact of therapy and sensory input on balance control. In this study, we examine whether measurement of balance control near individualized limits of stability and under very challenging sensory conditions might reduce inter- and intra-individual variability compared to the well-known Sensory Organization Test (SOT). To do so, subjects balance on a platform on which instability increases automatically until body orientation or body sway velocity surpasses a safety limit. The maximum tolerated platform instability is then used as a measure for balance control under 10 different sensory conditions. Ninety-seven healthy subjects and 107 patients suffering from chronic dizziness (whiplash syndrome (n = 25), Meniere's disease (n = 28), acute (n = 28) or gradual (n = 26) peripheral function loss) were tested. In both healthy subjects and patients this approach resulted in a low intra-individual variability (< 14.5(%). In healthy subjects and patients, balance control was maximally affected by closure of the eyes and by vibration of the Achilles' tendons. The other perturbation techniques applied (sway referenced vision or platform, cooling of the foot soles) were less effective. Combining perturbation techniques reduced balance control even more, but the effect was less than the linear summation of the effect induced by the techniques applied separately. The group averages of healthy subjects show that vision contributed maximum 37%, propriocepsis minimum 26%, and labyrinths maximum 44% to balance control in healthy subjects. However, a large inter-individual variability was observed. Balance control of each patient group was less than in healthy subjects in all sensory conditions. Similar to healthy subjects, patients also show a large inter-individual variability, which results in a low sensitivity of the test. With the exception of some minor differences between Whiplash and Meniere patients, balance control did not differ between the four patient groups. This points to a low specificity of the test. Balance control was not correlated with the outcome of the standard vestibular examination. This study strengthens our notion that the contribution of the sensory inputs to balance control differs considerably per individual and may simply be due to differences in the vestibular function related to the specific pathology, but also to differences in motor learning strategies in relation to daily life requirements. It is difficult to provide clinically relevant normative data. We conclude that, like the SOT, the current test is merely a functional test of balance with limited diagnostic value.",
"title": ""
},
{
"docid": "dbd06c81892bc0535e2648ee21cb00b4",
"text": "This paper examines the causes of conflict in Burundi and discusses strategies for building peace. The analysis of the complex relationships between distribution and group dynamics reveals that these relationships are reciprocal, implying that distribution and group dynamics are endogenous. The nature of endogenously generated group dynamics determines the type of preferences (altruistic or exclusionist), which in turn determines the type of allocative institutions and policies that prevail in the political and economic system. While unequal distribution of resources may be socially inefficient, it nonetheless can be rational from the perspective of the ruling elite, especially because inequality perpetuates dominance. However, as the unequal distribution of resources generates conflict, maintaining a system based on inequality is difficult because it requires ever increasing investments in repression. It is therefore clear that if the new Burundian leadership is serious about building peace, it must engineer institutions that uproot the legacy of discrimination and promote equal opportunity for social mobility for all members of ethnic groups and regions.",
"title": ""
},
{
"docid": "7c09cb7f935e2fb20a4d2e56a5471e61",
"text": "This paper proposes and evaluates an approach to the parallelization, deployment and management of bioinformatics applications that integrates several emerging technologies for distributed computing. The proposed approach uses the MapReduce paradigm to parallelize tools and manage their execution, machine virtualization to encapsulate their execution environments and commonly used data sets into flexibly deployable virtual machines, and network virtualization to connect resources behind firewalls/NATs while preserving the necessary performance and the communication environment. An implementation of this approach is described and used to demonstrate and evaluate the proposed approach. The implementation integrates Hadoop, Virtual Workspaces, and ViNe as the MapReduce, virtual machine and virtual network technologies, respectively, to deploy the commonly used bioinformatics tool NCBI BLAST on a WAN-based test bed consisting of clusters at two distinct locations, the University of Florida and the University of Chicago. This WAN-based implementation, called CloudBLAST, was evaluated against both non-virtualized and LAN-based implementations in order to assess the overheads of machine and network virtualization, which were shown to be insignificant. To compare the proposed approach against an MPI-based solution, CloudBLAST performance was experimentally contrasted against the publicly available mpiBLAST on the same WAN-based test bed. Both versions demonstrated performance gains as the number of available processors increased, with CloudBLAST delivering speedups of 57 against 52.4 of MPI version, when 64 processors on 2 sites were used. The results encourage the use of the proposed approach for the execution of large-scale bioinformatics applications on emerging distributed environments that provide access to computing resources as a service.",
"title": ""
},
{
"docid": "b42f4d645e2a7e24df676a933f414a6c",
"text": "Epilepsy is a common neurological condition which affects the central nervous system that causes people to have a seizure and can be assessed by electroencephalogram (EEG). Electroencephalography (EEG) signals reflect two types of paroxysmal activity: ictal activity and interictal paroxystic events (IPE). The relationship between IPE and ictal activity is an essential and recurrent question in epileptology. The spike detection in EEG is a difficult problem. Many methods have been developed to detect the IPE in the literature. In this paper we propose three methods to detect the spike in real EEG signal: Page Hinkley test, smoothed nonlinear energy operator (SNEO) and fractal dimension. Before using these methods, we filter the signal. The Singular Spectrum Analysis (SSA) filter is used to remove the noise in an EEG signal.",
"title": ""
},
{
"docid": "42e2aec24a5ab097b5fff3ec2fe0385d",
"text": "Online freelancing marketplaces have grown quickly in recent years. In theory, these sites offer workers the ability to earn money without the obligations and potential social biases associated with traditional employment frameworks. In this paper, we study whether two prominent online freelance marketplaces - TaskRabbit and Fiverr - are impacted by racial and gender bias. From these two platforms, we collect 13,500 worker profiles and gather information about workers' gender, race, customer reviews, ratings, and positions in search rankings. In both marketplaces, we find evidence of bias: we find that gender and race are significantly correlated with worker evaluations, which could harm the employment opportunities afforded to the workers. We hope that our study fuels more research on the presence and implications of discrimination in online environments.",
"title": ""
},
{
"docid": "ff429302ec983dd1203ac6dd97506ef8",
"text": "Financial crises have occurred for many centuries. They are often preceded by a credit boom and a rise in real estate and other asset prices, as in the current crisis. They are also often associated with severe disruption in the real economy. This paper surveys the theoretical and empirical literature on crises. The first explanation of banking crises is that they are a panic. The second is that they are part of the business cycle. Modeling crises as a global game allows the two to be unified. With all the liquidity problems in interbank markets that have occurred during the current crisis, there is a growing literature on this topic. Perhaps the most serious market failure associated with crises is contagion, and there are many papers on this important topic. The relationship between asset price bubbles, particularly in real estate, and crises is discussed at length. Disciplines Economic Theory | Finance | Finance and Financial Management This journal article is available at ScholarlyCommons: http://repository.upenn.edu/fnce_papers/403 Financial Crises: Theory and Evidence Franklin Allen University of Pennsylvania Ana Babus Cambridge University Elena Carletti European University Institute",
"title": ""
},
{
"docid": "b266069e91c24120b1732c5576087a90",
"text": "Reactions of organic molecules on Montmorillonite c lay mineral have been investigated from various asp ects. These include catalytic reactions for organic synthesis, chemical evolution, the mechanism of humus-formatio n, and environmental problems. Catalysis by clay minerals has attracted much interest recently, and many repo rts including the catalysis by synthetic or modified cl ays have been published. In this review, we will li mit the review to organic reactions using Montmorillonite clay as cat alyst.",
"title": ""
},
{
"docid": "9651fa86b37b6de23956e76459e127fc",
"text": "This corrects the article DOI: 10.1038/nature12346",
"title": ""
},
{
"docid": "05ab4fa15696ee8b47e017ebbbc83f2c",
"text": "Vertically aligned rutile TiO2 nanowire arrays (NWAs) with lengths of ∼44 μm have been successfully synthesized on transparent, conductive fluorine-doped tin oxide (FTO) glass by a facile one-step solvothermal method. The length and wire-to-wire distance of NWAs can be controlled by adjusting the ethanol content in the reaction solution. By employing optimized rutile TiO2 NWAs for dye-sensitized solar cells (DSCs), a remarkable power conversion efficiency (PCE) of 8.9% is achieved. Moreover, in combination with a light-scattering layer, the performance of a rutile TiO2 NWAs based DSC can be further enhanced, reaching an impressive PCE of 9.6%, which is the highest efficiency for rutile TiO2 NWA based DSCs so far.",
"title": ""
},
{
"docid": "a9a3d46bd6f5df951957ddc57d3d390d",
"text": "In this paper, we propose a low-power level shifter (LS) capable of converting extremely low-input voltage into high-output voltage. The proposed LS consists of a pre-amplifier with a logic error correction circuit and an output latch stage. The pre-amplifier generates complementary amplified signals, and the latch stage converts them into full-swing output signals. Simulated results demonstrated that the proposed LS in a 0.18-μm CMOS process can convert a 0.19-V input into 1.8-V output correctly. The energy and the delay time of the proposed LS were 0.24 pJ and 21.4 ns when the low supply voltage, high supply voltage, and the input pulse frequency, were 0.4, 1.8 V, and 100 kHz, respectively.",
"title": ""
},
{
"docid": "4318041c3cf82ce72da5983f20c6d6c4",
"text": "In line with cloud computing emergence as the dominant enterprise computing paradigm, our conceptualization of the cloud computing reference architecture and service construction has also evolved. For example, to address the need for cost reduction and rapid provisioning, virtualization has moved beyond hardware to containers. More recently, serverless computing or Function-as-a-Service has been presented as a means to introduce further cost-efficiencies, reduce configuration and management overheads, and rapidly increase an application's ability to speed up, scale up and scale down in the cloud. The potential of this new computation model is reflected in the introduction of serverless computing platforms by the main hyperscale cloud service providers. This paper provides an overview and multi-level feature analysis of seven enterprise serverless computing platforms. It reviews extant research on these platforms and identifies the emergence of AWS Lambda as a de facto base platform for research on enterprise serverless cloud computing. The paper concludes with a summary of avenues for further research.",
"title": ""
},
{
"docid": "172567417be706a47c94d35d90c24400",
"text": "This work presents a novel semi-supervised learning approach for data-driven modeling of asset failures when health status is only partially known in historical data. We combine a generative model parameterized by deep neural networks with non-linear embedding technique. It allows us to build prognostic models with the limited amount of health status information for the precise prediction of future asset reliability. The proposed method is evaluated on a publicly available dataset for remaining useful life (RUL) estimation, which shows significant improvement even when a fraction of the data with known health status is as sparse as 1% of the total. Our study suggests that the non-linear embedding based on a deep generative model can efficiently regularize a complex model with deep architectures while achieving high prediction accuracy that is far less sensitive to the availability of health status information.",
"title": ""
},
{
"docid": "cb29a1fc5a8b70b755e934c9b3512a36",
"text": "The problem of pedestrian detection in image and video frames has been extensively investigated in the past decade. However, the low performance in complex scenes shows that it remains an open problem. In this paper, we propose to cascade simple Aggregated Channel Features (ACF) and rich Deep Convolutional Neural Network (DCNN) features for efficient and effective pedestrian detection in complex scenes. The ACF based detector is used to generate candidate pedestrian windows and the rich DCNN features are used for fine classification. Experiments show that the proposed approach achieved leading performance in the INRIA dataset and comparable performance to the state-of-the-art in the Caltech and ETH datasets.",
"title": ""
},
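The preceding passage describes a two-stage cascade: a cheap Aggregated Channel Features detector proposes candidate windows and a DCNN re-scores only the survivors. A minimal sketch of that control flow is given below; `cheap_detector`, `cnn_score`, and both thresholds are hypothetical stand-ins, not the authors' implementation.

```python
# Illustrative two-stage detection cascade. Both stage functions are assumed
# callables: cheap_detector yields (window, score) pairs for an image, and
# cnn_score returns a pedestrian probability for one window.

def cascade_detect(image, cheap_detector, cnn_score,
                   proposal_thresh=0.1, final_thresh=0.5):
    """Return the windows that survive both the ACF-like stage and the CNN stage."""
    detections = []
    for window, score in cheap_detector(image):
        if score < proposal_thresh:
            continue                      # rejected early; the CNN never sees it
        if cnn_score(image, window) >= final_thresh:
            detections.append(window)     # confirmed by the rich features
    return detections
```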
{
"docid": "88e193c935a216ea21cb352921deaa71",
"text": "This overview paper outlines our views of actual security of biometric authentication and encryption systems. The attractiveness of some novel approaches like cryptographic key generation from biometric data is in some respect understandable, yet so far has lead to various shortcuts and compromises on security. Our paper starts with an introductory section that is followed by a section about variability of biometric characteristics, with a particular attention paid to biometrics used in large systems. The following sections then discuss the potential for biometric authentication systems, and for the use of biometrics in support of cryptographic applications as they are typically used in computer systems.",
"title": ""
}
] |
scidocsrr
|
76606e4157bb2fd429408fec8885f7b1
|
ContainerLeaks: Emerging Security Threats of Information Leakages in Container Clouds
|
[
{
"docid": "ed28d1b8142a2149a1650e861deb7c53",
"text": "Over the last few years, the use of virtualization technologies has increased dramatically. This makes the demand for efficient and secure virtualization solutions become more obvious. Container-based virtualization and hypervisor-based virtualization are two main types of virtualization technologies that have emerged to the market. Of these two classes, container-based virtualization is able to provide a more lightweight and efficient virtual environment, but not without security concerns. In this paper, we analyze the security level of Docker, a well-known representative of container-based approaches. The analysis considers two areas: (1) the internal security of Docker, and (2) how Docker interacts with the security features of the Linux kernel, such as SELinux and AppArmor, in order to harden the host system. Furthermore, the paper also discusses and identifies what could be done when using Docker to increase its level of security.",
"title": ""
}
] |
[
{
"docid": "77c8f9723134571d11ae9fc193fd377e",
"text": "s of Invited Talks From Relational to Semantic Data Mining",
"title": ""
},
{
"docid": "f99670327cc71eeab7bea6ef24d1d5c6",
"text": "Infant cry is a mode of communication, for interacting and drawing attention. The infants cry due to physiological, emotional or some ailment reasons. Cry involves high pitch changes in the signal. In this paper we describe an ‘Infant Cry Sounds Database’ (ICSD), collected especially for the study of likely cause of an infant’s cry. The database consists of infant cry sounds due to six causes: pain, discomfort, emotional need, ailment, environmental factors and hunger/thirst. The ground truth cause of cry is established with the help of two medical experts and parents of the infants. Preliminary analysis is carried out using the sound production features, the instantaneous fundamental frequency and frame energy derived from the cry acoustic signal, using auto correlation and linear prediction (LP) analysis. Spectrograms give the base reference. The infant cry sounds due to pain and discomfort are distinguished. The database should be helpful towards automated diagnosis of the causes of infant cry.",
"title": ""
},
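The infant-cry passage above extracts the instantaneous fundamental frequency and frame energy via autocorrelation and LP analysis. A minimal autocorrelation-based estimate for a single frame is sketched below; the sampling rate and pitch search range are illustrative assumptions, not values from the paper.

```python
import numpy as np

def frame_f0_and_energy(frame, fs=16000, f0_min=150.0, f0_max=600.0):
    """Estimate F0 (Hz) and energy of one signal frame with the autocorrelation
    method. The fs and F0 search range are illustrative; infant cries tend to
    sit at higher pitches than adult speech."""
    frame = frame - np.mean(frame)
    energy = float(np.sum(frame ** 2))
    # Full autocorrelation, keep non-negative lags only (index 0 = zero lag).
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lag_min = int(fs / f0_max)   # smallest lag = highest admissible pitch
    lag_max = int(fs / f0_min)   # largest lag = lowest admissible pitch
    lag = lag_min + int(np.argmax(ac[lag_min:lag_max]))
    f0 = fs / lag if ac[lag] > 0 else 0.0
    return f0, energy
```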
{
"docid": "b8f6411673d866c6464509b6fa7e9498",
"text": "In computer vision there has been increasing interest in learning hashing codes whose Hamming distance approximates the data similarity. The hashing functions play roles in both quantizing the vector space and generating similarity-preserving codes. Most existing hashing methods use hyper-planes (or kernelized hyper-planes) to quantize and encode. In this paper, we present a hashing method adopting the k-means quantization. We propose a novel Affinity-Preserving K-means algorithm which simultaneously performs k-means clustering and learns the binary indices of the quantized cells. The distance between the cells is approximated by the Hamming distance of the cell indices. We further generalize our algorithm to a product space for learning longer codes. Experiments show our method, named as K-means Hashing (KMH), outperforms various state-of-the-art hashing encoding methods.",
"title": ""
},
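As a rough illustration of the idea in the K-means Hashing abstract above (quantize with k-means, index the cells in binary, approximate distances by the Hamming distance of the indices), the sketch below uses plain scikit-learn k-means. It omits the affinity-preserving index assignment that is the paper's actual contribution, so treat it as a baseline sketch under that simplification.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_codebook(data, n_bits=4, seed=0):
    """Quantize data into 2**n_bits k-means cells; cell i simply gets code i."""
    return KMeans(n_clusters=2 ** n_bits, n_init=10, random_state=seed).fit(data)

def hamming_between(km, x, y):
    """Approximate the distance between x and y by the Hamming distance of
    their cell indices. (KMH additionally *learns* the index assignment so
    this approximation preserves affinities; plain k-means indices are
    arbitrary, which is exactly the gap the paper addresses.)"""
    ix, iy = km.predict(np.vstack([x, y]))
    return bin(int(ix) ^ int(iy)).count("1")
```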
{
"docid": "80759a5c2e60b444ed96c9efd515cbdf",
"text": "The Web of Things is an active research field which aims at promoting the easy access and handling of smart things' digital representations through the adoption of Web standards and technologies. While huge research and development efforts have been spent on lower level networks and software technologies, it has been recognized that little experience exists instead in modeling and building applications for the Web of Things. Although several works have proposed Representational State Transfer (REST) inspired approaches for the Web of Things, a main limitation is that poor support is provided to web developers for speeding up the development of Web of Things applications while taking full advantage of REST benefits. In this paper, we propose a framework which supports developers in modeling smart things as web resources, exposing them through RESTful Application Programming Interfaces (APIs) and developing applications on top of them. The framework consists of a Web Resource information model, a middleware, and tools for developing and publishing smart things' digital representations on the Web. We discuss the framework compliance with REST guidelines and its major implementation choices. Finally, we report on our test activities carried out within the SmartSantander European Project to evaluate the use and proficiency of our framework in a smart city scenario.",
"title": ""
},
{
"docid": "0fb083fd6ee3fd20560f0a06e04eab11",
"text": "This paper presents 50–70 GHz single-pole double-throw (SPDT) and single-pole four-throw (SP4T) switches built using a low-cost 0.13-µm CMOS process. The switches are based on tuned λ/4 designs with output matching networks. High substrate resistance together with deep trenches and isolation moats are used for low insertion loss. The SPDT and SP4T switches result in a measured insertion loss of 2.0 and 2.3 dB at 60 GHz, with an isolation of ≫ 32 dB and ≫ 22 dB, respectively. The measured output port-to-port isolation is ≫ 27 dB for both designs. The P1dB is 13–14 dBm with a measured IIP3 of ≫ 23 dBm for both switches. Both designs have a return loss better than −10 dB at all ports from 50 to 70 GHz. The active chip area is 0.39×0.32 mm2 (SPDT) and 0.59×0.45 mm2 (SP4T). To our knowledge, this paper presents the lowest loss 60 GHz SPDT and SP4T switches and also the highest isolation SPDT switch in any CMOS technology to-date.",
"title": ""
},
{
"docid": "e7232201e629e45b1f8f9a49cb1fdedf",
"text": "Semantic Data Mining refers to the data mining tasks that systematically incorporate domain knowledge, especially formal semantics, into the process. In the past, many research efforts have attested the benefits of incorporating domain knowledge in data mining. At the same time, the proliferation of knowledge engineering has enriched the family of domain knowledge, especially formal semantics and Semantic Web ontologies. Ontology is an explicit specification of conceptualization and a formal way to define the semantics of knowledge and data. The formal structure of ontology makes it a nature way to encode domain knowledge for the data mining use. In this survey paper, we introduce general concepts of semantic data mining. We investigate why ontology has the potential to help semantic data mining and how formal semantics in ontologies can be incorporated into the data mining process. We provide detail discussions for the advances and state of art of ontology-based approaches and an introduction of approaches that are based on other form of knowledge representations.",
"title": ""
},
{
"docid": "26b5d72d3135623765b389c8a2f40625",
"text": "Data preprocessing is a fundamental part of any machine learning application and frequently the most time-consuming aspect when developing a machine learning solution. Preprocessing for deep learning is characterized by pipelines that lazily load data and perform data transformation, augmentation, batching and logging. Many of these functions are common across applications but require different arrangements for training, testing or inference. Here we introduce a novel software framework named nuts-flow/ml that encapsulates common preprocessing operations as components, which can be flexibly arranged to rapidly construct efficient preprocessing pipelines for deep learning.",
"title": ""
},
{
"docid": "de8e0f866ee88ab01736073ceb536239",
"text": "This paper presents a newly developed high torque density motor design for electric racing cars. An interior permanent magnet motor with a flux-concentration configuration is proposed. The 18slots/16poles motor has pre-formed tooth wound coils, rare-earth magnets type material, whilst employing a highly efficient cooling system with forced oil convection through the slot and forced air convection in the airgap. Losses are minimized either by using special materials, i.e. non-oriented thin gage, laminated steel or special construction, i.e. magnet segmentation or twisted wires. The thermal behavior of the motor is modelled and tested using Le Mans racing typical driving cycle. Several prototypes have been built and tested to validate the proposed configuration.",
"title": ""
},
{
"docid": "e2056bfb51b851cde7f45386d8cff115",
"text": "Squamous cell carcinoma (SCC) is the most common malignant tumor in the oral cavity, and it accounts for about 90% of all oral cancers. Several risk factors for oral SCC have been identified; however, SCC associated with odontogenic keratocysts have rarely been reported. The present study describes the case of a 36-year-old man with SCC of the right ramus of the mandible, which was initially diagnosed as a benign odontogenic cyst. He underwent enucleation at another hospital followed by segmental mandibulectomy and fibular free flap reconstruction at our institution. In this case, we introduce a patient with oral cancer associated with odontogenic cyst on the mandible and report a satisfactory outcome with wide resection and immediate free flap reconstruction.",
"title": ""
},
{
"docid": "43e645dd8627cbe2841aaf7b509a9e7b",
"text": "This article argues that mirror neurons originate in sensorimotor associative learning and therefore a new approach is needed to investigate their functions. Mirror neurons were discovered about 20 years ago in the monkey brain, and there is now evidence that they are also present in the human brain. The intriguing feature of many mirror neurons is that they fire not only when the animal is performing an action, such as grasping an object using a power grip, but also when the animal passively observes a similar action performed by another agent. It is widely believed that mirror neurons are a genetic adaptation for action understanding; that they were designed by evolution to fulfill a specific socio-cognitive function. In contrast, we argue that mirror neurons are forged by domain-general processes of associative learning in the course of individual development, and, although they may have psychological functions, they do not necessarily have a specific evolutionary purpose or adaptive function. The evidence supporting this view shows that (1) mirror neurons do not consistently encode action \"goals\"; (2) the contingency- and context-sensitive nature of associative learning explains the full range of mirror neuron properties; (3) human infants receive enough sensorimotor experience to support associative learning of mirror neurons (\"wealth of the stimulus\"); and (4) mirror neurons can be changed in radical ways by sensorimotor training. The associative account implies that reliable information about the function of mirror neurons can be obtained only by research based on developmental history, system-level theory, and careful experimentation.",
"title": ""
},
{
"docid": "08b1f2381eeb59b4adf1da331c7f4e35",
"text": "This paper presents coordination algorithms for networks of mobile autonomous agents. The objective of the proposed algorithms is to achieve rendezvous, that is, agreement over the location of the agents in the network. We provide analysis and design results for multiagent networks in arbitrary dimensions under weak requirements on the switching and failing communication topology. The novel correctness proof relies on proximity graphs and their properties and on a general LaSalle invariance principle for nondeterministic discrete-time dynamical systems",
"title": ""
},
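A toy version of the rendezvous behaviour discussed in the passage above is sketched below: each agent moves part-way toward the centroid of the neighbours it currently sees within a sensing radius. The update rule and parameters are illustrative only; the cited work analyses circumcenter-style coordination laws over proximity graphs rather than this simple averaging rule.

```python
import numpy as np

def rendezvous_step(positions, radius=1.0, step=0.5):
    """One synchronous update: each agent moves part-way toward the centroid
    of its r-disk neighbours (itself included)."""
    positions = np.asarray(positions, dtype=float)
    new_positions = positions.copy()
    for i, p in enumerate(positions):
        neighbours = positions[np.linalg.norm(positions - p, axis=1) <= radius]
        new_positions[i] = p + step * (neighbours.mean(axis=0) - p)
    return new_positions
```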
{
"docid": "3d15103ad837b29d48b05b62d1358a07",
"text": "Background: With the rapid population ageing that is occurring world-wide, there is increasing interest in “smart home” technologies that can assist older adults to continue living at home with safety and independence. This systematic review and critical evaluation of the world wide literature assesses the effectiveness and feasibility of smart-home technologies for promoting independence, health, well-being and quality of life, in older adults. Methods: A total of 1877 “smart home” publications were identified by the initial search of peer reviewed journals. Of these, 21 met our inclusion criteria for the review and were subject to data extraction and quality assessment. Results: Smart-home technologies included different types of active and passive sensors, monitoring devices, robotics and environmental control systems. One study assessed effectiveness of a smart home technology. Sixteen reported on the feasibility of smart-home technology and four were observational studies. Conclusion: Older adults were reported to readily accept smart-home technologies, especially if they benefited physical activity, independence and function and if privacy concerns were addressed. Given the modest number of objective analyses, there is a need for further scientific analysis of a range of smart home technologies to promote community living. rather than being hospitalized or institutionalized [10]. Smart-home technologies can also promote independent living and safety. This has the potential to optimize quality of life and reduce the stress on agedcare facilities and other health resources [13]. The challenge with smart-home technologies is to create a home environment that is safe and secure to reduce falls, disability, stress, fear or social isolation [14]. Contemporary smart home technology systems are versatile in function and user friendly. Smart home technologies usually aim to perform functions without disturbing the user and without causing any pain, inconvenience or movement restrictions. Martin and colleagues performed a preliminary analysis of the acceptance of smart-home technologies [15]. The results from this review were limited as no studies met inclusion criteria [15]. Given however, the rapid progression of new smart home technologies, a new systematic review of the literature is required. This paper addresses that need by analysing the range of studies undertaken to assess the impact of these technologies on the quality of life experienced by an ageing population accessing these supports. The broader context incorporates consideration of the social and emotional well-being needs of this population. The current review aimed to answer the following research question: “What is the effectiveness of smart-home technologies for Citation: Morris ME, Adair B, Miller K, Ozanne E, Hansen R, et al. (2013) Smart-Home Technologies to Assist Older People to Live Well at Home. Aging Sci 1: 101. doi:10.4172/jasc.1000101",
"title": ""
},
{
"docid": "d4ca93d0aeabda1b90bb3f0f16df9ee8",
"text": "Smart card technology has evolved over the last few years following notable improvements in the underlying hardware and software platforms. Advanced smart card microprocessors, along with robust smart card operating systems and platforms, contribute towards a broader acceptance of the technology. These improvements have eliminated some of the traditional smart card security concerns. However, researchers and hackers are constantly looking for new issues and vulnerabilities. In this article we provide a brief overview of the main smart card attack categories and their corresponding countermeasures. We also provide examples of well-documented attacks on systems that use smart card technology (e.g. satellite TV, EMV, proximity identification) in an attempt to highlight the importance of the security of the overall system rather than just the smart card. a 2009 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "98efa74b25284d0ce22038811f9e09e5",
"text": "Automatic analysis of malicious binaries is necessary in order to scale with the rapid development and recovery of malware found in the wild. The results of automatic analysis are useful for creating defense systems and understanding the current capabilities of attackers. We propose an approach for automatic dissection of malicious binaries which can answer fundamental questions such as what behavior they exhibit, what are the relationships between their inputs and outputs, and how an attacker may be using the binary. We implement our approach in a system called BitScope. At the core of BitScope is a system which allows us to execute binaries with symbolic inputs. Executing with symbolic inputs allows us to reason about code paths without constraining the analysis to a particular input value. We implement 5 analysis using BitScope, and demonstrate that the analysis can rapidly analyze important properties such as what behaviors the malicious binaries exhibit. For example, BitScope uncovers all commands in typical DDoS zombies and botnet programs, and uncovers significant behavior in just minutes. This work was supported in part by CyLab at Carnegie Mellon under grant DAAD19-02-1-0389 from the Army Research Office, the U.S. Army Research Office under the Cyber-TA Research Grant No. W911NF-06-1-0316, the ITA (International Technology Alliance), CCF-0424422, National Science Foundation Grant Nos. 0311808, 0433540, 0448452, 0627511, and by the IT R&D program of MIC(Ministry of Information and Communication)/IITA(Institute for Information Technology Advancement) [2005-S-606-02, Next Generation Prediction and Response technology for Computer and Network Security Incidents]. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation, the ARO, CMU, or the U.S. Government.",
"title": ""
},
{
"docid": "a7e35f3dec01d0ae7d15b02ec0ea7bee",
"text": "Both generative adversarial networks (GAN) in unsupervised learning and actorcritic methods in reinforcement learning (RL) have gained a reputation for being difficult to optimize. Practitioners in both fields have amassed a large number of strategies to mitigate these instabilities and improve training. Here we show that GANs can be viewed as actor-critic methods in an environment where the actor cannot affect the reward. We review the strategies for stabilizing training for each class of models, both those that generalize between the two and those that are particular to that model. We also review a number of extensions to GANs and RL algorithms with even more complicated information flow. We hope that by highlighting this formal connection we will encourage both GAN and RL communities to develop general, scalable, and stable algorithms for multilevel optimization with deep networks, and to draw inspiration across communities.",
"title": ""
},
{
"docid": "a7910419717fe5c06e24de66d74ca4ec",
"text": "Many psychological phenomena occur in small time windows, measured in minutes or hours. However, most computational linguistic techniques look at data on the order of weeks, months, or years. We explore micropatterns in sequences of messages occurring over a short time window for their prevalence and power for quantifying psychological phenomena, specifically, patterns in affect. We examine affective micropatterns in social media posts from users with anxiety, eating disorders, panic attacks, schizophrenia, suicidality, and matched controls.",
"title": ""
},
{
"docid": "455ad24d734b7941c4be4de78d99db9e",
"text": "This paper is concerned with simple human performance laws of action for three classes of taskspointing, crossing, and steering, as well as their applications in Virtual Reality research. In comparison to Fitts' law of pointing, the law of steering the quantitative relationship between human temporal performance and the movement path's spatial characteristicshas been notably under investigated. After a review of research on the law of steering in different domains and time periods, we examine the applicability of the law of steering in a VR locomotion task. Participants drove a virtual vehicle in a virtual environment on paths whose shape and width were systematically manipulated. Results showed that the law of steering indeed applies to locomotion in Virtual Environments. Participants' mean trial completion times linearly correlated (r2 between 0.985 and 0.999) with an index of difficulty quantified as path length to width ratio for the straight and circular paths used in this experiment. On average both the mean and the maximum speeds of the participants were linearly proportional to path width. Such human performance regularity provides a quantitative tool for 3D human-machine interface design and evaluation. We also propose to use the law-of-steering model in Virtual Reality manipulation tasks such as the ring and wire task in the future.",
"title": ""
},
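The index of difficulty used in the steering-law passage above can be written out explicitly; for a path of constant width it reduces to the length-to-width ratio reported in the experiment. The constants a and b are the usual empirically fitted regression coefficients, not values taken from this study.

```latex
% Steering law: mean completion time grows linearly with the index of difficulty.
T = a + b \cdot \mathrm{ID}, \qquad
\mathrm{ID} = \int_{C} \frac{ds}{W(s)}
% For a path of length A and constant width W this reduces to ID = A / W.
```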
{
"docid": "40050e8f3ad386e4604514ec49bcb52e",
"text": "Imperforate hymen is a malformation that is easy to diagnose, even in countries with limited health care coverage. Unrecognized at birth, it becomes evident at puberty because of the development of a hematocolpos, which requires surgical intervention. This situation can be avoided with a complete examination of the infant at birth. This case report describes four patients whom we saw from 1995 through 2001 at the Bangui (Central African Republic) Pediatric Center and Community Hospital.",
"title": ""
},
{
"docid": "341e6b3558471a2c557dab62904ddfb7",
"text": "With network traffic rates continuously growing, security systems like firewalls are facing increasing challenges to process incoming packets at line speed without sacrificing protection. Accordingly, specialized hardware firewalls are increasingly used in high-speed environments. Hardware solutions, though, are inherently limited in terms of the complexity of the policies they can implement, often forcing users to choose between throughput and comprehensive analysis. On the contrary, complex rules typically constitute only a small fraction of the rule set. This motivates the combination of massively parallel, yet complexity-limited specialized circuitry with a slower, but semantically powerful software firewall. The key challenge in such a design arises from the dependencies between classification rules due to their relative priorities within the rule set: complex rules requiring software-based processing may be interleaved at arbitrary positions between those where hardware processing is feasible. We therefore discuss approaches for partitioning and transforming rule sets for hybrid packet processing, and propose HyPaFilter, a hybrid classification system based on tailored circuitry on an FPGA as an accelerator for a Linux netfilter firewall. Our evaluation demonstrates 30-fold performance gains in comparison to software-only processing.",
"title": ""
},
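The priority problem sketched in the HyPaFilter abstract above (a complex, software-only rule sitting above simpler rules blocks naive offloading) can be illustrated with the following simple partitioning policy: offload only the longest prefix of the priority-ordered rule set in which every rule is hardware-capable. This is one illustrative policy, not the partitioning algorithm of the paper.

```python
def partition_rules(rules, hw_capable):
    """Split an ordered rule list (highest priority first) into a hardware
    prefix and a software remainder.

    rules      -- rule objects in priority order
    hw_capable -- predicate: True if the rule fits the hardware matcher
    """
    hw_prefix, sw_rest = [], []
    still_safe = True
    for rule in rules:
        if still_safe and hw_capable(rule):
            hw_prefix.append(rule)       # no higher-priority software rule exists
        else:
            still_safe = False           # from here on, software must decide
            sw_rest.append(rule)
    return hw_prefix, sw_rest
```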
{
"docid": "7013e752987cf3dbdeab029d8eb184e6",
"text": "Federated searching was once touted as the library world’s answer to Google, but ten years since federated searching technology’s inception, how does it actually compare? This study focuses on undergraduate student preferences and perceptions when doing research using both Google and a federated search tool. Students were asked about their preferences using each search tool and the perceived relevance of the sources they found using each search tool. Students were also asked to self-assess their online searching skills. The findings show that students believe they possess strong searching skills, are able to find relevant sources using both search tools, but actually prefer the federated search tool to Google for doing research. Thus, despite federated searching’s limitations, students see the need for it, libraries should continue to offer federated search (especially if a discovery search tool is not available), and librarians should focus on teaching students how to use federated search and Google more effectively.",
"title": ""
}
] |
scidocsrr
|
fcfbfc58317d4dde5004d49073e72c2d
|
Beam-Steering SIW Leaky-Wave Subarray With Flat-Topped Footprint for 5G Applications
|
[
{
"docid": "dff9461a1827c8a3b28cd0a4490ce7f8",
"text": "This work presents a full-wave integral equation approach specifically conceived for the analysis and design of laterally-shielded rectangular dielectric waveguides, periodically loaded with planar perturbations of rectangular shape. This type of open periodic waveguide supports the propagation of leaky-wave modes, which can be used to build leaky-wave antennas which exhibit many desirable features for millimeter waveband applications. The particularities of the leaky-mode analysis theory are described in this paper, and comparisons with other methods are presented for validation purposes. Using this leaky-mode analysis method, a novel periodic leaky-wave antenna is presented and designed. This novel antenna shows some important improvements with respect to the features of previously proposed antennas. The results of the designed radiation patterns are validated with three-dimensional electromagnetic analysis using commercial software.",
"title": ""
},
{
"docid": "8d1d5d3211eaa91f41d81aa66001ac94",
"text": "This paper investigates low-complexity approaches to small-cell base-station (SBS) design, suitable for future 5G millimeter-wave (mmWave) indoor deployments. Using large-scale antenna systems and high-bandwidth spectrum, such SBS can theoretically achieve the anticipated future data bandwidth demand of 10000 fold in the next 20 years. We look to exploit small cell distances to simplify SBS design, particularly considering dense indoor installations. We compare theoretical results, based on a link budget analysis, with the system simulation of a densely deployed indoor network using appropriate mmWave channel propagation conditions. The frequency diverse bands of 28 and 72 GHz of the mmWave spectrum are assumed in the analysis. We investigate the performance of low-complexity approaches using a minimal number of antennas at the base station and the user equipment. Using the appropriate power consumption models and the state-of-the-art sub-component power usage, we determine the total power consumption and the energy efficiency of such systems. With mmWave being typified nonline-of-sight communication, we further investigate and propose the use of direct sequence spread spectrum as a means to overcome this, and discuss the use of multipath detection and combining as a suitable mechanism to maximize link reliability.",
"title": ""
},
{
"docid": "ed676ff14af6baf9bde3bdb314628222",
"text": "The ever growing traffic explosion in mobile communications has recently drawn increased attention to the large amount of underutilized spectrum in the millimeter-wave frequency bands as a potentially viable solution for achieving tens to hundreds of times more capacity compared to current 4G cellular networks. Historically, mmWave bands were ruled out for cellular usage mainly due to concerns regarding short-range and non-line-of-sight coverage issues. In this article, we present recent results from channel measurement campaigns and the development of advanced algorithms and a prototype, which clearly demonstrate that the mmWave band may indeed be a worthy candidate for next generation (5G) cellular systems. The results of channel measurements carried out in both the United States and Korea are summarized along with the actual free space propagation measurements in an anechoic chamber. Then a novel hybrid beamforming scheme and its link- and system-level simulation results are presented. Finally, recent results from our mmWave prototyping efforts along with indoor and outdoor test results are described to assert the feasibility of mmWave bands for cellular usage.",
"title": ""
}
] |
[
{
"docid": "3e62736546550ad7456407bef386d6ec",
"text": "Context: mobile application development is rapidly evolving with substantial economic and scientific interest. One of the primary reasons for mobile application development failure is the increasing number of mobile platforms; some organizations endorse mobile application development before understanding the associated development challenges of each target platform. Objective: the objective of this paper is to identify the challenges of native, web, and hybrid mobile applications, which can undermine the successful development of such applications. Method: we adopted a two-phase research approach: at first, the challenges were identified via a systematic literature review (SLR); and then, the identified challenges were validated through conducting interviews with practitioners. Results: through both research approaches, we identified nine challenges vital to the success of mobile application development and four additional challenges from interviews not reported in the literature. A comparison of the challenges (native, web, and hybrid) identified in SLR indicates that there are slightly more differences than similarities between the challenges. On the other hand, the challenges (native, web, and hybrid) identified in interviews indicates that there are more similarities than differences between the challenges. Our results show a weak negative correlation between the ranks obtained from the SLR and the interviews (<inline-formula> <tex-math notation=\"LaTeX\">$[rs(9) = -.034]$ </tex-math></inline-formula>, <inline-formula> <tex-math notation=\"LaTeX\">$p = 0.932$ </tex-math></inline-formula>). The results obtained from our t-test (i.e., <inline-formula> <tex-math notation=\"LaTeX\">$t = 0.868, p = 0.402 > 0.05$ </tex-math></inline-formula>) depicts that there is no significant difference between the findings of SLR and interviews. Conclusions: mobile application development organizations should try to address the identified challenges when developing mobile applications (native, web, or hybrid) to increase the probability of mobile application success.",
"title": ""
},
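The statistics quoted in the mobile-app abstract above (a Spearman rank correlation between the SLR and interview rankings, plus a t-test comparing the two sources) can be reproduced in form with SciPy as below. The nine per-challenge scores are made-up placeholders and the paired t-test call is an assumption, so the printed numbers will not match the reported rs(9) = -.034 or t = 0.868.

```python
from scipy import stats

# Hypothetical per-challenge scores (e.g. how often each of the nine
# challenges was emphasised) from the SLR and from the interviews.
slr        = [14, 11, 9, 8, 8, 6, 5, 3, 2]
interviews = [10, 12, 6, 9, 4, 7, 3, 5, 4]

rho, p_rho = stats.spearmanr(slr, interviews)   # rank correlation, rs(9)
t, p_t = stats.ttest_rel(slr, interviews)       # paired t-test on the scores

print(f"Spearman rs = {rho:.3f} (p = {p_rho:.3f})")
print(f"t = {t:.3f} (p = {p_t:.3f})")
```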
{
"docid": "bf98d3c8d9bea339fb057bc1c177e9e0",
"text": "Inactivation of parasites in food by microwave treatment may vary due to differences in the characteristics of microwave ovens and food properties. Microwave treatment in standard domestic ovens results in hot and cold spots, and the microwaves do not penetrate all areas of the samples depending on the thickness, which makes it difficult to compare microwave with conventional heat treatments. The viability of Anisakis simplex (isolated larvae and infected fish muscle) heated in a microwave oven with precise temperature control was compared with that of larvae heated in a water bath to investigate any additional effect of the microwaves. At a given temperature, less time was required to kill the larvae by microwaves than by heated water. Microwave treatment killed A. simplex larvae faster than did conventional cooking when the microwaves fully penetrated the samples and resulted in fewer changes in the fish muscle. However, the heat-stable allergen Ani s 4 was detected by immunohistochemistry in the fish muscle after both heat treatments, even at 70°C, suggesting that Ani s 4 allergens were released from the larvae into the surrounding tissue and that the tissues retained their allergenicity even after the larvae were killed by both heat treatments. Thus, microwave cooking will not render fish safe for individuals already sensitized to A. simplex heat-resistant allergens.",
"title": ""
},
{
"docid": "f9f1cf949093c41a84f3af854a2c4a8b",
"text": "Modern TCP implementations are capable of very high point-to-point bandwidths. Delivered performance on the fastest networks is often limited by the sending and receiving hosts, rather than by the network hardware or the TCP protocol implementation itself. In this case, systems can achieve higher bandwidth by reducing host overheads through a variety of optimizations above and below the TCP protocol stack, given support from the network interface. This paper surveys the most important of these optimizations, and illustrates their effects quantitatively with empirical results from a an experimental network delivering up to two gigabits per second of point-to-point TCP bandwidth.",
"title": ""
},
{
"docid": "88a1736e189ce870fbce1ad52aab590f",
"text": "Recommendations towards a food supply system framework that will deliver healthy food in a sustainable way. In 2007, Emily Morgan was one of fifteen Americans to be granted a Fulbright Postgraduate Scholarship to Australia. A Tufts University postgraduate student and former Mount Holyoke College graduate, Emily carried out her Fulbright research on the relationship between food, health and the environment. This project was completed at VicHealth, under the direction of nutrition promotion and food policy expert Dr Tony Worsley and in collaboration with the School of Exercise and Nutrition Sciences at Deakin University. Fruit and Vegetable Consumption and Waste in Australia iii Contents Executive Summary 1 Preamble 4 Introduction 5 The Australian Food System 7 How do we conceptualize the food system? 7 The sectors of the food system 8 Challenges to improving the system 9 Major forces on the food system 10 The role of government 11 Recommendations 11 Consumption and Waste in Australia 12 How much is enough? 12 Consumption data 13 International data 13 National nutrition survey 13 National children's nutrition and physical activity survey 13 National health survey 14 State-based consumption data 14 Waste data 15 Recommendations 18 Drivers for change 19 Health and the link with fruit and vegetable consumption 19 Cancer 20 Cardiovascular disease 21 Diabetes 22 Other conditions 22 Environment and its relationship with the food system 24 Climate change 24 Water usage 29 Biodiversity conservation and ecosystem health 31 Ethics and the food system 32 Environmental ethics 32 Human ethics 32 Animal ethics 34 Economics and the future of the food system 35 Current efforts to change the paradigm 36 Efforts to increase fruit and vegetable consumption 36 International 36 'Go for 2 and 5 ® ' campaign 36 'Go for your life' 37 Other efforts 37 Efforts to minimize and better manage food waste 39 Minimizing food losses along the supply system 39 Better managing food losses along the supply system 42 Minimizing food losses at the consumer level 44 Better managing food losses at the consumer level 44 Whole-of-system approaches to improving the food system 45 Recommendations 47 Conclusion 49 Culture change 49 Summary of recommendations 51 References 53 Fruit and Vegetable Consumption and Waste in Australia 1 Executive Summary Food is essential to human existence and healthy, nutritious food is vital for living life to its full potential. What we eat and how we dispose of it not only …",
"title": ""
},
{
"docid": "46adb7a040a2d8a40910a9f03825588d",
"text": "The aim of this study was to investigate the consequences of friend networking sites (e.g., Friendster, MySpace) for adolescents' self-esteem and well-being. We conducted a survey among 881 adolescents (10-19-year-olds) who had an online profile on a Dutch friend networking site. Using structural equation modeling, we found that the frequency with which adolescents used the site had an indirect effect on their social self-esteem and well-being. The use of the friend networking site stimulated the number of relationships formed on the site, the frequency with which adolescents received feedback on their profiles, and the tone (i.e., positive vs. negative) of this feedback. Positive feedback on the profiles enhanced adolescents' social self-esteem and well-being, whereas negative feedback decreased their self-esteem and well-being.",
"title": ""
},
{
"docid": "675a5316ee0f1ba2423d54b154fa2a38",
"text": "Person Re-identification (ReID) is to identify the same person across different cameras. It is a challenging task due to the large variations in person pose, occlusion, background clutter, etc. How to extract powerful features is a fundamental problem in ReID and is still an open problem today. In this paper, we design a Multi-Scale Context-Aware Network (MSCAN) to learn powerful features over full body and body parts, which can well capture the local context knowledge by stacking multi-scale convolutions in each layer. Moreover, instead of using predefined rigid parts, we propose to learn and localize deformable pedestrian parts using Spatial Transformer Networks (STN) with novel spatial constraints. The learned body parts can release some difficulties, e.g. pose variations and background clutters, in part-based representation. Finally, we integrate the representation learning processes of full body and body parts into a unified framework for person ReID through multi-class person identification tasks. Extensive evaluations on current challenging large-scale person ReID datasets, including the image-based Market1501, CUHK03 and sequence-based MARS datasets, show that the proposed method achieves the state-of-the-art results.",
"title": ""
},
{
"docid": "978a6f4dd34f63ea49e633c2f1d76355",
"text": "The growing trend of using smartphones and other GPS-enabled devices has provided new opportunities for developing spatial computing applications and technologies in unanticipated and unprecedented ways. Some capabilities of today's smartphones highlight the potential of citizen sensors to enable the next generation of geoinformatics. One promising application area for this is social media and its application to disaster management. Dynamic, real-time incident information collected from onsite human responders about the extent of damage, the evolution of the event, the community's needs, and responders' ability to deal with the situation, combined with information from the larger emergency management community, could lead to more accurate and real-time situational awareness. This would enable informed decisions, better resource allocation and thus a better response and outcome to the total crisis. In this context, the US Department of Homeland Security's Science & Technology Directorate (DHS-S&T) has initiated the Social Media Alert and Response to Threats to Citizens\" (SMART-C) program, which aims to develop citizen participatory sensing capabilities for decision support throughout the disaster life cycle via a multitude of devices and modalities. Here, the authors provide an overview of the envisioned SMART-C system's capabilities and discuss some of the interesting and unique challenges that arise due to the combination of spatial computing and social media within the context of disaster management.",
"title": ""
},
{
"docid": "984dc75b97243e448696f2bf0ba3c2aa",
"text": "Background: Predicting credit card payment default is critical for the successful business model of a credit card company. An accurate predictive model can help the company identify customers who might default their payment in the future so that the company can get involved earlier to manage risk and reduce loss. It is even better if a model can assist the company on credit card application approval to minimize the risk at upfront. However, credit card default prediction is never an easy task. It is dynamic. A customer who paid his/her payment on time in the last few months may suddenly default his/her next payment. It is also unbalanced given the fact that default payment is rare compared to non-default payments. Unbalanced dataset will easily fail using most machine learning techniques if the dataset is not treated properly.",
"title": ""
},
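Because the passage above stresses that defaults are rare and that untreated class imbalance derails most learners, one common mitigation is class weighting (or resampling). The sketch below shows the class-weight route with scikit-learn on a hypothetical feature matrix X and label vector y; it is a generic illustration, not the modelling choice of the cited study.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

def train_default_model(X, y):
    """Fit a classifier on imbalanced default data using class weights,
    and report per-class precision/recall rather than raw accuracy."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=0)
    clf = RandomForestClassifier(
        n_estimators=200, class_weight="balanced", random_state=0)
    clf.fit(X_tr, y_tr)
    print(classification_report(y_te, clf.predict(X_te)))
    return clf
```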
{
"docid": "ce8024ea5e55d41dc7008003c485b1ca",
"text": "1320 NOTICES OF THE AMS VOLUME 51, NUMBER 11 T he theory of stochastic processes was one of the most important mathematical developments of the twentieth century. Intuitively, it aims to model the interaction of “chance” with “time”. The tools with which this is made precise were provided by the great Russian mathematician A. N. Kolmogorov in the 1930s. He realized that probability can be rigorously founded on measure theory, and then a stochastic process is a family of random variables (X(t), t ≥ 0) defined on a probability space (Ω,F , P ) and taking values in a measurable space (E,E) . Here Ω is a set (the sample space of possible outcomes), F is a σ-algebra of subsets of Ω (the events), and P is a positive measure of total mass 1 on (Ω,F ) (the probability). E is sometimes called the state space. Each X(t) is a (F ,E) measurable mapping from Ω to E and should be thought of as a random observation made on E made at time t . For many developments, both theoretical and applied, E is Euclidean space Rd (often with d = 1); however, there is also considerable interest in the case where E is an infinite dimensional Hilbert or Banach space, or a finite-dimensional Lie group or manifold. In all of these cases E can be taken to be the Borel σalgebra generated by the open sets. To model probabilities arising within quantum theory, the scheme described above is insufficiently general and must be embedded into a suitable noncommutative structure. Stochastic processes are not only mathematically rich objects. They also have an extensive range of applications in, e.g., physics, engineering, ecology, and economics—indeed, it is difficult to conceive of a quantitative discipline in which they do not feature. There is a limited amount that can be said about the general concept, and much of both theory and applications focusses on the properties of specific classes of process that possess additional structure. Many of these, such as random walks and Markov chains, will be well known to readers. Others, such as semimartingales and measure-valued diffusions, are more esoteric. In this article, I will give an introduction to a class of stochastic processes called Lévy processes, in honor of the great French probabilist Paul Lévy, who first studied them in the 1930s. Their basic structure was understood during the “heroic age” of probability in the 1930s and 1940s and much of this was due to Paul Lévy himself, the Russian mathematician A. N. Khintchine, and to K. Itô in Japan. During the past ten years, there has been a great revival of interest in these processes, due to new theoretical developments and also a wealth of novel applications—particularly to option pricing in mathematical finance. As well as a vast number of research papers, a number of books on the subject have been published ([3], [11], [1], [2], [12]) and there have been annual international conferences devoted to these processes since 1998. Before we begin the main part of the article, it is worth David Applebaum is professor of probability and statistics at the University of Sheffield. His email address is [email protected]. He is the author of Lévy Processes and Stochastic Calculus, Cambridge University Press, 2004, on which part of this article is based.",
"title": ""
},
{
"docid": "df1c6a5325dae7159b5bdf5dae65046d",
"text": "Researchers from a wide range of management areas agree that conflicts are an important part of organizational life and that their study is important. Yet, interpersonal conflict is a neglected topic in information system development (ISD). Based on definitional properties of interpersonal conflict identified in the management and organizational behavior literatures, this paper presents a model of how individuals participating in ISD projects perceive conflict and its influence on ISD outcomes. Questionnaire data was obtained from 265 IS staff (main sample) and 272 users (confirmatory sample) working on 162 ISD projects. Results indicated that the construct of interpersonal conflict was reflected by three key dimensions: disagreement, interference, and negative emotion. While conflict management was found to have positive effects on ISD outcomes, it did not substantially mitigate the negative effects of interpersonal conflict on these outcomes. In other words, the impact of interpersonal conflict was perceived to be negative, regardless of how it was managed or resolved.",
"title": ""
},
{
"docid": "f6ddb7fd8a4a06d8a0e58b02085b9481",
"text": "We explore approximate policy iteration (API), replacing t he usual costfunction learning step with a learning step in policy space. We give policy-language biases that enable solution of very large relational Markov decision processes (MDPs) that no previous techniqu e can solve. In particular, we induce high-quality domain-specific plan ners for classical planning domains (both deterministic and stochastic variants) by solving such domains as extremely large MDPs.",
"title": ""
},
{
"docid": "9082dc8e8d60b05255487232fdbec189",
"text": "Energy harvesting has been widely investigated as a promising method of providing power for ultra-low-power applications. Such energy sources include solar energy, radio-frequency (RF) radiation, piezoelectricity, thermal gradients, etc. However, the power supplied by these sources is highly unreliable and dependent upon ambient environment factors. Hence, it is necessary to develop specialized systems that are tolerant to this power variation, and also capable of making forward progress on the computation tasks. The simulation platform in this paper is calibrated using measured results from a fabricated nonvolatile processor and used to explore the design space for a nonvolatile processor with different architectures, different input power sources, and policies for maximizing forward progress.",
"title": ""
},
{
"docid": "55cfcee1d1e83600ad88a1faef13f684",
"text": "In spite of amazing progress in food supply and nutritional science, and a striking increase in life expectancy of approximately 2.5 months per year in many countries during the previous 150 years, modern nutritional research has a great potential of still contributing to improved health for future generations, granted that the revolutions in molecular and systems technologies are applied to nutritional questions. Descriptive and mechanistic studies using state of the art epidemiology, food intake registration, genomics with single nucleotide polymorphisms (SNPs) and epigenomics, transcriptomics, proteomics, metabolomics, advanced biostatistics, imaging, calorimetry, cell biology, challenge tests (meals, exercise, etc.), and integration of all data by systems biology, will provide insight on a much higher level than today in a field we may name molecular nutrition research. To take advantage of all the new technologies scientists should develop international collaboration and gather data in large open access databases like the suggested Nutritional Phenotype database (dbNP). This collaboration will promote standardization of procedures (SOP), and provide a possibility to use collected data in future research projects. The ultimate goals of future nutritional research are to understand the detailed mechanisms of action for how nutrients/foods interact with the body and thereby enhance health and treat diet-related diseases.",
"title": ""
},
{
"docid": "412f6d3e7bb303930e96d46614b7c835",
"text": "Abstract:The field of technology is evolving at a very fast pace. The competition is very intense. So the need of the hour is to produce efficient system. In accomplishing this objective we are required to establish better interaction among all the components of the system. This requirement is fulfilled by the Advanced Microcontroller Bus Architecture (AMBA) protocol from Advanced RISC Machines (ARM).The AMBA is the on-chip standard for the communication among components in Application Specific Integrated Circuits (ASIC) or System on Chip (SoC). This paper focuses on the 2 protocols of AMBA which are Advanced High Performance Bus (AHB) and Advanced Peripheral Bus (APB) and theAPB bridge. The coding is done in Verilog synthesis on Xilinx 14.7 ISE and simulation on ISim simulator and FPGA implementation on Spartan 3.",
"title": ""
},
{
"docid": "e1d9ff28da38fcf8ea3a428e7990af25",
"text": "The Autonomous car is a complex topic, different technical fields like: Automotive engineering, Control engineering, Informatics, Artificial Intelligence etc. are involved in solving the human driver replacement with an artificial (agent) driver. The problem is even more complicated because usually, nowadays, having and driving a car defines our lifestyle. This means that the mentioned (major) transformation is also a cultural issue. The paper will start with the mentioned cultural aspects related to a self-driving car and will continue with the big picture of the system.",
"title": ""
},
{
"docid": "27b2148c05febeb1051c1d1229a397d6",
"text": "Modern database management systems essentially solve the problem of accessing and managing large volumes of related data on a single platform, or on a cluster of tightly-coupled platforms. But many problems remain when two or more databases need to work together. A fundamental problem is raised by semantic heterogeneity the fact that data duplicated across multiple databases is represented differently in the underlying database schemas. This tutorial describes fundamental problems raised by semantic heterogeneity and surveys theoretical frameworks that can provide solutions for them. The tutorial considers the following topics: (1) representative architectures for supporting database interoperation; (2) notions for comparing the “information capacity” of database schemas; (3) providing support for read-only integrated views of data, including the .virtual and materialized approaches; (4) providing support for read-write integrated views of data, including the issue of workflows on heterogeneous databases; and (5) research and tools for accessing and effectively using meta-data, e.g., to identify the relationships between schemas of different databases.",
"title": ""
},
{
"docid": "e546f1bc6476a0d427caf6563aa41ac5",
"text": "Analysis and reconstruction of range images usually focuses on complex objects completely contained in the field of view; little attention has been devoted so far to the reconstruction of partially occluded simple-shaped wide areas like parts of a wall hidden behind furniture pieces in an indoor range image. The work in this paper is aimed at such reconstruction. First of all the range image is partitioned and surfaces are fitted to these partitions. A further step lo cates possibly occluded areas, while a final step determines which areas are actually occluded. The reconstruction of data occurs in this last step.",
"title": ""
},
{
"docid": "e1a4468ccd5305b5158c26b2160d04a6",
"text": "Recent years have seen a deluge of behavioral data from players hitting the game industry. Reasons for this data surge are many and include the introduction of new business models, technical innovations, the popularity of online games, and the increasing persistence of games. Irrespective of the causes, the proliferation of behavioral data poses the problem of how to derive insights therefrom. Behavioral data sets can be large, time-dependent and high-dimensional. Clustering offers a way to explore such data and to discover patterns that can reduce the overall complexity of the data. Clustering and other techniques for player profiling and play style analysis have, therefore, become popular in the nascent field of game analytics. However, the proper use of clustering techniques requires expertise and an understanding of games is essential to evaluate results. With this paper, we address game data scientists and present a review and tutorial focusing on the application of clustering techniques to mine behavioral game data. Several algorithms are reviewed and examples of their application shown. Key topics such as feature normalization are discussed and open problems in the context of game analytics are pointed out.",
"title": ""
},
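Since the game-analytics passage above singles out feature normalization as a key step before clustering behavioural data, the sketch below shows the usual z-scoring plus k-means pipeline on a hypothetical player-by-feature matrix. The feature examples and cluster count are illustrative only.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

def profile_players(behavior_matrix, n_profiles=5, seed=0):
    """Cluster players into behavioural profiles.

    behavior_matrix -- (n_players, n_features) array, e.g. total playtime,
                       sessions per week, deaths per hour, purchases.
    """
    # z-score each feature so playtime (hours) does not dominate
    # count-like features during distance computation.
    scaled = StandardScaler().fit_transform(np.asarray(behavior_matrix, float))
    km = KMeans(n_clusters=n_profiles, n_init=10, random_state=seed).fit(scaled)
    return km.labels_, km.cluster_centers_
```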
{
"docid": "8f212b657bc99532387d008282cc75b1",
"text": "Mindfulness training has been considered an effective mode for optimizing sport performance. The purpose of this study was to examine the impact of a twelve-session, 30-minute mindfulness meditation training session for sport (MMTS) intervention. The sample included a Division I female collegiate athletes, using quantitative comparisons based on preand post-test ratings on the Mindfulness Attention Awareness Scale (MAAS), the Positive Affect Negative Affect Scale (PANAS), the Psychological Well-Being Scale and the Life Satisfaction Scale. Paired sample t-tests highlight significant increases in mindfulness scores for the intervention group (p < .01), while the comparison group score of mindfulness remained constant. Both groups remained stable in reported positive affect however the intervention group maintained stable reports of negative affect while the comparison group experienced a significant increase in Negative Affect (p < .001). Results are discussed in relation to existing theories on mindfulness and meditation.",
"title": ""
}
] |
scidocsrr
|
2b858b83c97ce14a8bf33708d3bb3d09
|
Personalized Grade Prediction: A Data Mining Approach
|
[
{
"docid": "ab23f66295574368ccd8fc4e1b166ecc",
"text": "Although the educational level of the Portuguese population has improved in the last decades, the statistics keep Portugal at Europe’s tail end due to its high student failure rates. In particular, lack of success in the core classes of Mathematics and the Portuguese language is extremely serious. On the other hand, the fields of Business Intelligence (BI)/Data Mining (DM), which aim at extracting high-level knowledge from raw data, offer interesting automated tools that can aid the education domain. The present work intends to approach student achievement in secondary education using BI/DM techniques. Recent real-world data (e.g. student grades, demographic, social and school related features) was collected by using school reports and questionnaires. The two core classes (i.e. Mathematics and Portuguese) were modeled under binary/five-level classification and regression tasks. Also, four DM models (i.e. Decision Trees, Random Forest, Neural Networks and Support Vector Machines) and three input selections (e.g. with and without previous grades) were tested. The results show that a good predictive accuracy can be achieved, provided that the first and/or second school period grades are available. Although student achievement is highly influenced by past evaluations, an explanatory analysis has shown that there are also other relevant features (e.g. number of absences, parent’s job and education, alcohol consumption). As a direct outcome of this research, more efficient student prediction tools can be be developed, improving the quality of education and enhancing school resource management.",
"title": ""
},
{
"docid": "ae67aadc3cddd3642bf0a7f6336b9817",
"text": "To increase efficacy in traditional classroom courses as well as in Massive Open Online Courses (MOOCs), automated systems supporting the instructor are needed. One important problem is to automatically detect students that are going to do poorly in a course early enough to be able to take remedial actions. Existing grade prediction systems focus on maximizing the accuracy of the prediction while overseeing the importance of issuing timely and personalized predictions. This paper proposes an algorithm that predicts the final grade of each student in a class. It issues a prediction for each student individually, when the expected accuracy of the prediction is sufficient. The algorithm learns online what is the optimal prediction and time to issue a prediction based on past history of students' performance in a course. We derive a confidence estimate for the prediction accuracy and demonstrate the performance of our algorithm on a dataset obtained based on the performance of approximately 700 UCLA undergraduate students who have taken an introductory digital signal processing over the past seven years. We demonstrate that for 85% of the students we can predict with 76% accuracy whether they are going do well or poorly in the class after the fourth course week. Using data obtained from a pilot course, our methodology suggests that it is effective to perform early in-class assessments such as quizzes, which result in timely performance prediction for each student, thereby enabling timely interventions by the instructor (at the student or class level) when necessary.",
"title": ""
}
] |
[
{
"docid": "883191185d4671164eb4f12f19eb47f3",
"text": "Lustre is a declarative, data-flow language, which is devoted to the specification of synchronous and real-time applications. It ensures efficient code generation and provides formal specification and verification facilities. A graphical tool dedicated to the development of critical embedded systems and often used by industries and professionals is SCADE (Safety Critical Application Development Environment). SCADE is a graphical environment based on the LUSTRE language and it allows the hierarchical definition of the system components and the automatic code generation. This research work is partially concerned with Lutess, a testing environment which automatically transforms formal specifications into test data generators.",
"title": ""
},
{
"docid": "7d1348ad0dbd8f33373e556009d4f83a",
"text": "Laryngeal neoplasms represent 2% of all human cancers. They befall mainly the male sex, especially between 50 and 70 years of age, but exceptionally may occur in infancy or extreme old age. Their occurrence has increased considerably inclusively due to progressive population again. The present work aims at establishing a relation between this infirmity and its prognosis in patients submitted to the treatment recommended by Departament of Otolaryngology and Head Neck Surgery of the School of Medicine of São José do Rio Preto. To this effect, by means of karyometric optical microscopy, cell nuclei in the glottic region of 20 individuals, divided into groups according to their tumor stage and time of survival, were evaluated. Following comparation with a control group and statistical analsis, it became possible to verify that the lesser diameter of nuclei is of prognostic value for initial tumors in this region.",
"title": ""
},
{
"docid": "b8d840944817351bb2969a745b55f5c6",
"text": ".............................................................................................................................................................. 7 Tiivistelmä .......................................................................................................................................................... 9 List of original papers .................................................................................................................................. 11 Acknowledgements ..................................................................................................................................... 13",
"title": ""
},
{
"docid": "3d56f88bf8053258a12e609129237b19",
"text": "Thepresentstudyfocusesontherelationships between entrepreneurial characteristics (achievement orientation, risk taking propensity, locus of control, and networking), e-service business factors (reliability, responsiveness, ease of use, and self-service), governmental support, and the success of e-commerce entrepreneurs. Results confirm that the achievement orientation and locus of control of founders and business emphasis on reliability and ease of use functions of e-service quality are positively related to the success of e-commerce entrepreneurial ventures in Thailand. Founder risk taking and networking, e-service responsiveness and self-service, and governmental support are found to be non-significant.",
"title": ""
},
{
"docid": "33cab03ab9773efe22ba07dd461811ef",
"text": "This paper describes a real-time feature-based stereo SLAM system that is robust and accurate in a wide variety of conditions –indoors, outdoors, with dynamic objects, changing light conditions, fast robot motions and large-scale loops. Our system follows a parallel-tracking-and-mapping strategy: a tracking thread estimates the camera pose at frame rate; and a mapping thread updates a keyframe-based map at a lower frequency. The stereo constraints of our system allow a robust initialization –avoiding the well-known bootstrapping problem in monocular systems– and the recovery of the real scale. Both aspects are essential for its practical use in real robotic systems that interact with the physical world. In this paper we provide the implementation details, an exhaustive evaluation of the system in public datasets and a comparison of most state-of-the-art feature detectors and descriptors on the presented system. For the benefit of the community, its code for ROS (Robot Operating System) has been released.",
"title": ""
},
{
"docid": "95ead545f73f70398291bdf9e2b5b104",
"text": "Diffusion-based classifiers such as those relying on the Personalized PageRank and the heat kernel enjoy remarkable classification accuracy at modest computational requirements. Their performance however is affected by the extent to which the chosen diffusion captures a typically unknown label propagation mechanism, which can be specific to the underlying graph, and potentially different for each class. This paper introduces a disciplined, data-efficient approach to learning class-specific diffusion functions adapted to the underlying network topology. The novel learning approach leverages the notion of “landing probabilities” of class-specific random walks, which can be computed efficiently, thereby ensuring scalability to large graphs. This is supported by rigorous analysis of the properties of the model as well as the proposed algorithms. Furthermore, a robust version of the classifier facilitates learning even in noisy environments. Classification tests on real networks demonstrate that adapting the diffusion function to the given graph and observed labels significantly improves the performance over fixed diffusions, reaching—and many times surpassing—the classification accuracy of computationally heavier state-of-the-art competing methods, which rely on node embeddings and deep neural networks.",
"title": ""
},
{
"docid": "02de9e47c4cba04cc2795af68ec449b9",
"text": "We explore the performance of latent variable models for conditional text generation in the context of neural machine translation (NMT). Similar to (Zhang et al., 2016), we augment the encoder-decoder NMT paradigm by introducing a continuous latent variable to model features of the translation process. We extend this model with a co-attention mechanism motivated by (Parikh et al., 2016) in the inference network. Compared to the vision domain, latent variable models for text face additional challenges due to the discrete nature of language, namely posterior collapse (Bowman et al., 2015). We experiment with different approaches to mitigate this issue. We show that our conditional variational model improves upon both discriminative attention-based translation and the variational baseline presented in (Zhang et al., 2016). Finally, we present some exploration of the learned latent space to illustrate what the latent variable is capable of capturing. This is the first reported conditional variational model for text that meaningfully utilizes the latent variable without weakening the translation model.",
"title": ""
},
{
"docid": "ca5b9cd1634431254e1a454262eecb40",
"text": "This paper proposes a convolutional neural network (CNN)-based method that learns traffic as images and predicts large-scale, network-wide traffic speed with a high accuracy. Spatiotemporal traffic dynamics are converted to images describing the time and space relations of traffic flow via a two-dimensional time-space matrix. A CNN is applied to the image following two consecutive steps: abstract traffic feature extraction and network-wide traffic speed prediction. The effectiveness of the proposed method is evaluated by taking two real-world transportation networks, the second ring road and north-east transportation network in Beijing, as examples, and comparing the method with four prevailing algorithms, namely, ordinary least squares, k-nearest neighbors, artificial neural network, and random forest, and three deep learning architectures, namely, stacked autoencoder, recurrent neural network, and long-short-term memory network. The results show that the proposed method outperforms other algorithms by an average accuracy improvement of 42.91% within an acceptable execution time. The CNN can train the model in a reasonable time and, thus, is suitable for large-scale transportation networks.",
"title": ""
},
{
"docid": "4e55d02fdd8ff4c5739cc433f4f15e9b",
"text": "muchine, \" a progrum f o r uutomuticully generating syntacticully correct progrums (test cusrs> f o r checking compiler front ends. The notion of \" clynumic grammur \" is introduced und is used in a syntax-defining notution thut procides f o r context-sensitiuity. Exurnples demonstrute use of the syntax machine. The \" syntax machine \" discussed here automatically generates random test cases for any suitably defined programming language.' The test cases it produces are syntactically valid programs. But they are not \" meaningful, \" and if an attempt is made to execute them, the results are unpredictable and uncheckable. For this reason, they are less valuable than handwritten test cases. However, as an inexhaustible source of new test material, the syntax machine has shown itself to be a valuable tool. In the following sections, we characterize the use of this tool in testing different types of language processors, introduce the concept of \" dynamic grammar \" of a programming language, outline the structure of the system, and show what the syntax machine does by means of some examples. Test cases Test cases for a language processor are programs written following the rules of the language, as documented. The test cases, when processed, should give known results. If this does not happen, then either the processor or its documentation is in error. We can distinguish three categories of language processors and assess the usefulness of the syntax machine for testing them. For an interpreter, the syntax machine test cases are virtually useless,",
"title": ""
},
{
"docid": "eed5c66d0302c492f2480a888678d1dc",
"text": "In 1988 Kennedy and Chua introduced the dynamical canonical nonlinear programming circuit (NPC) to solve in real time nonlinear programming problems where the objective function and the constraints are smooth (twice continuously differentiable) functions. In this paper, a generalized circuit is introduced (G-NPC), which is aimed at solving in real time a much wider class of nonsmooth nonlinear programming problems where the objective function and the constraints are assumed to satisfy only the weak condition of being regular functions. G-NPC, which derives from a natural extension of NPC, has a neural-like architecture and also features the presence of constraint neurons modeled by ideal diodes with infinite slope in the conducting region. By using the Clarke's generalized gradient of the involved functions, G-NPC is shown to obey a gradient system of differential inclusions, and its dynamical behavior and optimization capabilities, both for convex and nonconvex problems, are rigorously analyzed in the framework of nonsmooth analysis and the theory of differential inclusions. In the special important case of linear and quadratic programming problems, salient dynamical features of G-NPC, namely the presence of sliding modes , trajectory convergence in finite time, and the ability to compute the exact optimal solution of the problem being modeled, are uncovered and explained in the developed analytical framework.",
"title": ""
},
{
"docid": "10e88f0d1a339c424f7e0b8fa5b43c1e",
"text": "Hash functions play an important role in modern cryptography. This paper investigates optimisation techniques that have recently been proposed in the literature. A new VLSI architecture for the SHA-256 and SHA-512 hash functions is presented, which combines two popular hardware optimisation techniques, namely pipelining and unrolling. The SHA processors are developed for implementation on FPGAs, thereby allowing rapid prototyping of several designs. Speed/area results from these processors are analysed and are shown to compare favourably with other FPGA-based implementations, achieving the fastest data throughputs in the literature to date",
"title": ""
},
{
"docid": "1cf3ee00f638ca44a3b9772a2df60585",
"text": "Navigation has been a popular area of research in both academia and industry. Combined with maps, and different localization technologies, navigation systems have become robust and more usable. By combining navigation with augmented reality, it can be improved further to become realistic and user friendly. This paper surveys existing researches carried out in this area, describes existing techniques for building augmented reality navigation systems, and the problems faced.",
"title": ""
},
{
"docid": "07575ce75d921d6af72674e1fe563ff7",
"text": "With a growing body of literature linking systems of high-performance work practices to organizational performance outcomes, recent research has pushed for examinations of the underlying mechanisms that enable this connection. In this study, based on a large sample of Welsh public-sector employees, we explored the role of several individual-level attitudinal factors--job satisfaction, organizational commitment, and psychological empowerment--as well as organizational citizenship behaviors that have the potential to provide insights into how human resource systems influence the performance of organizational units. The results support a unit-level path model, such that department-level, high-performance work system utilization is associated with enhanced levels of job satisfaction, organizational commitment, and psychological empowerment. In turn, these attitudinal variables were found to be positively linked to enhanced organizational citizenship behaviors, which are further related to a second-order construct measuring departmental performance.",
"title": ""
},
{
"docid": "fed5b83e2e35a3a5e2c8df38d96be981",
"text": "The identification of patient subgroups with differential treatment effects is the first step towards individualised treatments. A current draft guideline by the EMA discusses potentials and problems in subgroup analyses and formulated challenges to the development of appropriate statistical procedures for the data-driven identification of patient subgroups. We introduce model-based recursive partitioning as a procedure for the automated detection of patient subgroups that are identifiable by predictive factors. The method starts with a model for the overall treatment effect as defined for the primary analysis in the study protocol and uses measures for detecting parameter instabilities in this treatment effect. The procedure produces a segmented model with differential treatment parameters corresponding to each patient subgroup. The subgroups are linked to predictive factors by means of a decision tree. The method is applied to the search for subgroups of patients suffering from amyotrophic lateral sclerosis that differ with respect to their Riluzole treatment effect, the only currently approved drug for this disease.",
"title": ""
},
{
"docid": "72a283eda92eb25404536308d8909999",
"text": "This paper presents a 128.7nW analog front-end amplifier and Gm-C filter for biomedical sensing applications, specifically for Electroencephalogram (EEG) use. The proposed neural amplifier has a supply voltage of 1.8V, consumes a total current of 71.59nA, for a total dissipated power of 128nW and has a gain of 40dB. Also, a 3th order Butterworth Low Pass Gm-C Filter with a 14.7nS transconductor is designed and presented. The filter has a pass band suitable for use in EEG (1-100Hz). The amplifier and filter utilize current sources without resistance which provide 56nA and (1.154nA ×5) respectively. The proposed amplifier occupies and area of 0.26mm2 in 0.3μm TSMC process.",
"title": ""
},
{
"docid": "8b6d3b5fb8af809619119ee0f75cb3c6",
"text": "This paper mainly discusses how to use histogram projection and LBDM (Learning Based Digital Matting) to extract a tongue from a medical image, which is one of the most important steps in diagnosis of traditional Chinese Medicine. We firstly present an effective method to locate the tongue body, getting the convinced foreground and background area in form of trimap. Then, use this trimap as the input for LBDM algorithm to implement the final segmentation. Experiment was carried out to evaluate the proposed scheme, using 480 samples of pictures with tongue, the results of which were compared with the corresponding ground truth. Experimental results and analysis demonstrated the feasibility and effectiveness of the proposed algorithm.",
"title": ""
},
{
"docid": "bb0ac3d88646bf94710a4452ddf50e51",
"text": "Everyday knowledge about living things, physical objects and the beliefs and desires of other people appears to be organized into sophisticated systems that are often called intuitive theories. Two long term goals for psychological research are to understand how these theories are mentally represented and how they are acquired. We argue that the language of thought hypothesis can help to address both questions. First, compositional languages can capture the content of intuitive theories. Second, any compositional language will generate an account of theory learning which predicts that theories with short descriptions tend to be preferred. We describe a computational framework that captures both ideas, and compare its predictions to behavioral data from a simple theory learning task. Any comprehensive account of human knowledge must acknowledge two principles. First, everyday knowledge is more than a list of isolated facts, and much of it appears to be organized into richly structured systems that are sometimes called intuitive theories. Even young children, for instance, have systematic beliefs about domains including folk physics, folk biology, and folk psychology [10]. Second, some aspects of these theories appear to be learned. Developmental psychologists have explored how intuitive theories emerge over the first decade of life, and at least some of these changes appear to result from learning. Although theory learning raises some challenging problems, two computational principles that may support this ability have been known for many years. First, a theory-learning system must be able to represent the content of any theory that it acquires. A learner that cannot represent a given system of concepts is clearly unable to learn this system from data. Second, there will always be many systems of concepts that are compatible with any given data set, and a learner must rely on some a priori ordering of the set of possible theories to decide which candidate is best [5, 9]. Loosely speaking, this ordering can be identified with a simplicity measure, or a prior distribution over the space of possible theories. There is at least one natural way to connect these two computational principles. Suppose that intuitive theories are represented in a “language of thought:” a language that allows complex concepts to be represented as combinations of simpler concepts [5]. A compositional language provides a straightforward way to construct sophisticated theories, but also provides a natural ordering over the resulting space of theories: the a priori probability of a theory can be identified with its length in this representation language [3, 7]. Combining this prior distribution with an engine for Bayesian inference leads immediately to a computational account of theory learning. There may be other ways to explain how people represent and acquire complex systems of knowledge, but it is striking that the “language of thought” hypothesis can address both questions. This paper describes a computational framework that helps to explain how theories are acquired, and that can be used to evaluate different proposals about the language of thought. Our approach builds on previous discussions of concept learning that have explored the link between compositional representations and inductive inference. 
Two recent approaches propose that concepts are represented in a form of propositional logic, and that the a priori plausibility of an inductive hypothesis is related to the length of its representation in this language [4, 6]. Our approach is similar in spirit, but is motivated in part by the need for languages richer than propositional logic. The framework we present is extremely general, and is compatible with virtually any representation language, including various forms of predicate logic. Methods for learning theories expressed in predicate logic have previously been explored in the field of Inductive Logic Programming, and we recently proposed a theory-learning model that is inspired by this tradition [7]. Our current approach is motivated by similar goals, but is better able to account for the discovery of abstract theoretical laws. The next section describes our computational framework and introduces the specific logical language that we will consider throughout. Our framework allows relatively sophisticated theories to be represented and learned, but we evaluate it here by applying it to a simple learning problem and comparing its predictions with human inductive inferences. A Bayesian approach to theory discovery Suppose that a learner observes some of the relationships that hold among a fixed, finite set of entities, and wishes to discover a theory that accounts for these data. Suppose, for instance, that the entities are thirteen adults from a remote tribe (a through m), and that the data specify that the spouse relation (S(·, ·)) is true of some pairs (Figure 1). One candidate theory states that S(·, ·) is a symmetric relation, that some of the individuals are male (M(·)), that marriages are permitted only between males and non-males, and that males may take multiple spouses but non-males may have only one spouse (Figure 1b). Other theories are possible, including the theory which states only that S(·, ·) is symmetric. Accounts of theory learning should distinguish between at least three kinds of entities: theories, models, and data. A theory is a set of statements that captures constraints on possible configurations of the world. For instance, the theory in Figure 1b rules out configurations where the spouse relation is asymmetric. A model of a theory specifies the extension",
"title": ""
},
{
"docid": "1c05027fba55d64070cb3ff698b9c253",
"text": "The advancement of the World Wide Web has resulted in the creation of a new form of retail transactionselectronic retailing (e-tailing) or web-shopping. Thus, customers’ involvements in online purchasing have become an important trend. As such, it is vital to identify the determinants of the customer online purchase intention. The aim of this research is to evaluate the impacts of shopping orientations, online trust and prior online purchase experience to the customer online purchase intention. A total of 242 undergraduate information technology students from a private university in Malaysia participated in this research. The findings revealed that impulse purchase intention, quality orientation, brand orientation, online trust and prior online purchase experience were positively related to the customer online purchase intention.",
"title": ""
},
{
"docid": "a862bcbf9addb965b9f05ed4ba6ace07",
"text": "Delivery of electroporation pulses in electroporation-based treatments could potentially induce heartrelated effects. The objective of our work was to develop a software tool for electrocardiogram (ECG) analysis to facilitate detection of such effects in pre-selected ECGor heart rate variability (HRV) parameters. Our software tool consists of five distinct modules for: (i) preprocessing; (ii) learning; (iii) detection and classification; (iv) selection and verification; and (v) ECG and HRV analysis. Its key features are: automated selection of ECG segments from ECG signal according to specific user-defined requirements (e.g., selection of relatively noise-free ECG segments); automated detection of prominent heartbeat features, such as Q, R and T wave peak; automated classification of individual heartbeat as normal or abnormal; displaying of heartbeat annotations; quick manual screening of analyzed ECG signal; and manual correction of annotation and classification errors. The performance of the detection and classification module was evaluated on 19 two-hour-long ECG records from Long-Term ST database. On average, the QRS detection algorithm had high sensitivity (99.78%), high positive predictivity (99.98%) and low detection error rate (0.35%). The classification algorithm correctly classified 99.45% of all normal QRS complexes. For normal heartbeats, the positive predictivity of 99.99% and classification error rate of 0.01% were achieved. The software tool provides for reliable and effective detection and classification of heartbeats and for calculation of ECG and HRV parameters. It will be used to clarify the issues concerning patient safety during the electroporation-based treatments used in clinical practice. Preventing the electroporation pulses from interfering with the heart is becoming increasingly important because new applications of electroporation-based treatments are being developed which are using endoscopic, percutaneous or surgical means to access internal tumors or tissues and in which the target tissue can be located in immediate vicinity to the heart.",
"title": ""
},
{
"docid": "8cac4d9b14b0e2918a52f3e71cc440bd",
"text": "Cyber-Physical Systems refer to systems that have an interaction between computers, communication channels and physical devices to solve a real-world problem. Towards industry 4.0 revolution, Cyber-Physical Systems currently become one of the main targets of hackers and any damage to them lead to high losses to a nation. According to valid resources, several cases reported involved security breaches on Cyber-Physical Systems. Understanding fundamental and theoretical concept of security in the digital world was discussed worldwide. Yet, security cases in regard to the cyber-physical system are still remaining less explored. In addition, limited tools were introduced to overcome security problems in Cyber-Physical System. To improve understanding and introduce a lot more security solutions for the cyber-physical system, the study on this matter is highly on demand. In this paper, we investigate the current threats on Cyber-Physical Systems and propose a classification and matrix for these threats, and conduct a simple statistical analysis of the collected data using a quantitative approach. We confirmed four components i.e., (the type of attack, impact, intention and incident categories) main contributor to threat taxonomy of Cyber-Physical Systems. Keywords—Cyber-Physical Systems; threats; incidents; security; cybersecurity; taxonomies; matrix; threats analysis",
"title": ""
}
] |
scidocsrr
|
bb090e623e20242028023fecb3d439eb
|
Deep Learning with Nonparametric Clustering
|
[
{
"docid": "11ce5da16cf0c0c6cfb85e0d0bbdc13e",
"text": "Recently, fully-connected and convolutional neural networks have been trained to reach state-of-the-art performance on a wide variety of tasks such as speech recognition, image classification, natural language processing, and bioinformatics data. For classification tasks, much of these “deep learning” models employ the softmax activation functions to learn output labels in 1-of-K format. In this paper, we demonstrate a small but consistent advantage of replacing softmax layer with a linear support vector machine. Learning minimizes a margin-based loss instead of the cross-entropy loss. In almost all of the previous works, hidden representation of deep networks are first learned using supervised or unsupervised techniques, and then are fed into SVMs as inputs. In contrast to those models, we are proposing to train all layers of the deep networks by backpropagating gradients through the top level SVM, learning features of all layers. Our experiments show that simply replacing softmax with linear SVMs gives significant gains on datasets MNIST, CIFAR-10, and the ICML 2013 Representation Learning Workshop’s face expression recognition challenge.",
"title": ""
},
{
"docid": "e8a78557974794594acb1f0cafb93be4",
"text": "In a Bayesian mixture model it is not necessary a priori to limit the number of components to be finite. In this paper an infinite Gaussian mixture model is presented which neatly sidesteps the difficult problem of finding the “right” number of mixture components. Inference in the model is done using an efficient parameter-free Markov Chain that relies entirely on Gibbs sampling.",
"title": ""
},
{
"docid": "693e935d405b255ac86b8a9f5e7852a3",
"text": "Recent developments have demonstrated the capacity of rest rict d Boltzmann machines (RBM) to be powerful generative models, able to extract useful featu r s from input data or construct deep artificial neural networks. In such settings, the RBM only yields a preprocessing or an initialization for some other model, instead of acting as a complete supervised model in its own right. In this paper, we argue that RBMs can provide a self-contained framework fo r developing competitive classifiers. We study the Classification RBM (ClassRBM), a variant on the R BM adapted to the classification setting. We study different strategies for training the Cla ssRBM and show that competitive classification performances can be reached when appropriately com bining discriminative and generative training objectives. Since training according to the gener ative objective requires the computation of a generally intractable gradient, we also compare differen t approaches to estimating this gradient and address the issue of obtaining such a gradient for proble ms with very high dimensional inputs. Finally, we describe how to adapt the ClassRBM to two special cases of classification problems, namely semi-supervised and multitask learning.",
"title": ""
},
{
"docid": "c17e6363762e0e9683b51c0704d43fa7",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
}
] |
[
{
"docid": "232d7e7986de374499c8ca580d055729",
"text": "In this paper we provide a survey of recent contributions to robust portfolio strategies from operations research and finance to the theory of portfolio selection. Our survey covers results derived not only in terms of the standard mean-variance objective, but also in terms of two of the most popular risk measures, mean-VaR and mean-CVaR developed recently. In addition, we review optimal estimation methods and Bayesian robust approaches.",
"title": ""
},
{
"docid": "f3dcf620edb77a199b2ad9d2410cc858",
"text": "As the amount of digital data grows, so does the theft of sensitive data through the loss or misplacement of laptops, thumb drives, external hard drives, and other electronic storage media. Sensitive data may also be leaked accidentally due to improper disposal or resale of storage media. To protect the secrecy of the entire data lifetime, we must have confidential ways to store and delete data. This survey summarizes and compares existing methods of providing confidential storage and deletion of data in personal computing environments.",
"title": ""
},
{
"docid": "ec377000353bce311c0887cd4edab554",
"text": "This paper explains various security issues in the existing home automation systems and proposes the use of logic-based security algorithms to improve home security. This paper classifies natural access points to a home as primary and secondary access points depending on their use. Logic-based sensing is implemented by identifying normal user behavior at these access points and requesting user verification when necessary. User position is also considered when various access points changed states. Moreover, the algorithm also verifies the legitimacy of a fire alarm by measuring the change in temperature, humidity, and carbon monoxide levels, thus defending against manipulative attackers. The experiment conducted in this paper used a combination of sensors, microcontrollers, Raspberry Pi and ZigBee communication to identify user behavior at various access points and implement the logical sensing algorithm. In the experiment, the proposed logical sensing algorithm was successfully implemented for a month in a studio apartment. During the course of the experiment, the algorithm was able to detect all the state changes of the primary and secondary access points and also successfully verified user identity 55 times generating 14 warnings and 5 alarms.",
"title": ""
},
{
"docid": "55b967cd6d28082ba0fa27605f161060",
"text": "Background. A scheme for format-preserving encryption (FPE) is supposed to do that which a conventional (possibly tweakable) blockcipher does—encipher messages within some message space X—except that message space, instead of being something like X = {0, 1}128, is more gen eral [1, 3]. For example, the message space might be the set X = {0, 1, . . . , 9}16, in which case each 16-digit plaintext X ∈ X gets enciphered into a 16-digit ciphertext Y ∈ X . In a stringbased FPE scheme—the only type of FPE that we consider here—the message space is of the form n X = {0, 1, . . . , radix − 1} for some message length n and alphabet size radix.",
"title": ""
},
{
"docid": "4edb9dea1e949148598279c0111c4531",
"text": "This paper presents a design of highly effective triple band microstrip antenna for wireless communication applications. The triple band design is a metamaterial-based design for WLAN and WiMAX (2.4/3.5/5.6 GHz) applications. The triple band response is obtained by etching two circular and one rectangular split ring resonator (SRR) unit cells on the ground plane of a conventional patch operating at 3.56 GHz. The circular cells are introduced to resonate at 5.3 GHz for the upper WiMAX band, while the rectangular cell is designed to resonate at 2.45 GHz for the lower WLAN band. Furthermore, a novel complementary H-shaped unit cell oriented above the triple band antenna is proposed. The proposed H-shaped is being used as a lens to significantly increase the antenna gain. To investigate the left-handed behavior of the proposed H-shaped, extensive parametric study for the placement of each unit cell including the metamaterial lens, which is the main parameter affecting the antenna performance, is presented and discussed comprehensively. Good consistency between the measured and simulated results is achieved. The proposed antenna meets the requirements of WiMAX and WLAN standards with high peak realized gain.",
"title": ""
},
{
"docid": "6544cffbaf9cc0c6c12991c2acbe2dd5",
"text": "The aim of this updated statement is to provide comprehensive and timely evidence-based recommendations on the prevention of ischemic stroke among survivors of ischemic stroke or transient ischemic attack. Evidence-based recommendations are included for the control of risk factors, interventional approaches for atherosclerotic disease, antithrombotic treatments for cardioembolism, and the use of antiplatelet agents for noncardioembolic stroke. Further recommendations are provided for the prevention of recurrent stroke in a variety of other specific circumstances, including arterial dissections; patent foramen ovale; hyperhomocysteinemia; hypercoagulable states; sickle cell disease; cerebral venous sinus thrombosis; stroke among women, particularly with regard to pregnancy and the use of postmenopausal hormones; the use of anticoagulation after cerebral hemorrhage; and special approaches to the implementation of guidelines and their use in high-risk populations.",
"title": ""
},
{
"docid": "1ea2074181341aaa112a678d75ec5de7",
"text": "5 Evacuation planning and scheduling is a critical aspect of disaster management and national security applications. This paper proposes a conflict-based path-generation approach for evacuation planning. Its key idea is to decompose the evacuation planning problem into a master and a subproblem. The subproblem generates new evacuation paths for each evacuated area, while the master problem optimizes the flow of evacuees and produce an evacuation plan. Each new path is generated to remedy conflicts in the evacuation flows and adds new columns and a new row in the master problem. The algorithm is applied to a set of large-scale evacuation scenarios ranging from the Hawkesbury-Nepean flood plain (West Sydney, Australia) which require evacuating in the order of 70,000 persons, to the New Orleans metropolitan area and its 1,000,000 residents. Experiments illustrate the scalability of the approach which is able to produce evacuation for scenarios with more than 1,200 nodes, while a direct Mixed Integer Programming formulation becomes intractable for instances with more than 5 nodes. With this approach, realistic evacuations scenarios can be solved near-optimally in reasonable time, supporting both evacuation planning in strategic, tactical, and operational environments.",
"title": ""
},
{
"docid": "3ac230304ab65efa3c31b10dc0dffa4d",
"text": "Current networking integrates common \"Things\" to the Web, creating the Internet of Things (IoT). The considerable number of heterogeneous Things that can be part of an IoT network demands an efficient management of resources. With the advent of Fog computing, some IoT management tasks can be distributed toward the edge of the constrained networks, closer to physical devices. Blockchain protocols hosted on Fog networks can handle IoT management tasks such as communication, storage, and authentication. This research goes beyond the current definition of Things and presents the Internet of \"Smart Things.\" Smart Things are provisioned with Artificial Intelligence (AI) features based on CLIPS programming language to become self-inferenceable and self-monitorable. This work uses the permission-based blockchain protocol Multichain to communicate many Smart Things by reading and writing blocks of information. This paper evaluates Smart Things deployed on Edison Arduino boards. Also, this work evaluates Multichain hosted on a Fog network.",
"title": ""
},
{
"docid": "976507b0b89c2202ab603ccedae253f5",
"text": "We present a natural language generator based on the sequence-to-sequence approach that can be trained to produce natural language strings as well as deep syntax dependency trees from input dialogue acts, and we use it to directly compare two-step generation with separate sentence planning and surface realization stages to a joint, one-step approach. We were able to train both setups successfully using very little training data. The joint setup offers better performance, surpassing state-of-the-art with regards to ngram-based scores while providing more relevant outputs.",
"title": ""
},
{
"docid": "0105247ab487c2d06f3ffa0d00d4b4f9",
"text": "Many distributed storage systems achieve high data access throughput via partitioning and replication, each system with its own advantages and tradeoffs. In order to achieve high scalability, however, today's systems generally reduce transactional support, disallowing single transactions from spanning multiple partitions. Calvin is a practical transaction scheduling and data replication layer that uses a deterministic ordering guarantee to significantly reduce the normally prohibitive contention costs associated with distributed transactions. Unlike previous deterministic database system prototypes, Calvin supports disk-based storage, scales near-linearly on a cluster of commodity machines, and has no single point of failure. By replicating transaction inputs rather than effects, Calvin is also able to support multiple consistency levels---including Paxos-based strong consistency across geographically distant replicas---at no cost to transactional throughput.",
"title": ""
},
{
"docid": "ac34478a54d67abce7c892e058295e63",
"text": "The popularity of the term \"integrated curriculum\" has grown immensely in medical education over the last two decades, but what does this term mean and how do we go about its design, implementation, and evaluation? Definitions and application of the term vary greatly in the literature, spanning from the integration of content within a single lecture to the integration of a medical school's comprehensive curriculum. Taking into account the integrated curriculum's historic and evolving base of knowledge and theory, its support from many national medical education organizations, and the ever-increasing body of published examples, we deem it necessary to present a guide to review and promote further development of the integrated curriculum movement in medical education with an international perspective. We introduce the history and theory behind integration and provide theoretical models alongside published examples of common variations of an integrated curriculum. In addition, we identify three areas of particular need when developing an ideal integrated curriculum, leading us to propose the use of a new, clarified definition of \"integrated curriculum\", and offer a review of strategies to evaluate the impact of an integrated curriculum on the learner. This Guide is presented to assist educators in the design, implementation, and evaluation of a thoroughly integrated medical school curriculum.",
"title": ""
},
{
"docid": "d529d1052fce64ae05fbc64d2b0450ab",
"text": "Today, many industrial companies must face problems raised by maintenance. In particular, the anomaly detection problem is probably one of the most challenging. In this paper we focus on the railway maintenance task and propose to automatically detect anomalies in order to predict in advance potential failures. We first address the problem of characterizing normal behavior. In order to extract interesting patterns, we have developed a method to take into account the contextual criteria associated to railway data (itinerary, weather conditions, etc.). We then measure the compliance of new data, according to extracted knowledge, and provide information about the seriousness and the exact localization of a detected anomaly. 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "70c6da9da15ad40b4f64386b890ccf51",
"text": "In this paper, we describe a positioning control for a SCARA robot using a recurrent neural network. The simultaneous perturbation optimization method is used for the learning rule of the recurrent neural network. Then the recurrent neural network learns inverse dynamics of the SCARA robot. We present details of the control scheme using the simultaneous perturbation. Moreover, we consider an example for two target positions using an actual SCARA robot. The result is shown.",
"title": ""
},
{
"docid": "c17e6363762e0e9683b51c0704d43fa7",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "362b0fc349c827316116a620da34ac91",
"text": "Identifying and correcting grammatical errors in the text written by non-native writers have received increasing attention in recent years. Although a number of annotated corpora have been established to facilitate data-driven grammatical error detection and correction approaches, they are still limited in terms of quantity and coverage because human annotation is labor-intensive, time-consuming, and expensive. In this work, we propose to utilize unlabeled data to train neural network based grammatical error detection models. The basic idea is to cast error detection as a binary classification problem and derive positive and negative training examples from unlabeled data. We introduce an attention-based neural network to capture long-distance dependencies that influence the word being detected. Experiments show that the proposed approach significantly outperforms SVM and convolutional networks with fixed-size context window.",
"title": ""
},
{
"docid": "20d02454fd850d8a7e05123a1769d44b",
"text": "We describe the extension and objective evaluation of a network of semantically related noun senses (or concepts) that has been automatically acquired by analyzing lexical cooccurrence in Wikipedia. The acquisition process makes no use of the metadata or links that have been manually built into the encyclopedia, and nouns in the network are automatically disambiguated to their corresponding noun senses without supervision. For this task, we use the noun sense inventory of WordNet 3.0. Thus, this work can be conceived of as augmenting the WordNet noun ontology with unweighted, undirected relatedto edges between synsets. Our network contains 208,832 such edges. We evaluate our network’s performance on a word sense disambiguation (WSD) task and show: a) the network is competitive with WordNet when used as a stand-alone knowledge source for two WSD algorithms; b) combining our network with WordNet achieves disambiguation results that exceed the performance of either resource individually; and c) our network outperforms a similar resource that has been automatically derived from semantic annotations in the Wikipedia corpus.",
"title": ""
},
{
"docid": "4be5f35876daebc0c00528bede15b66c",
"text": "Information Extraction (IE) is concerned with mining factual structures from unstructured text data, including entity and relation extraction. For example, identifying Donald Trump as “person” and Washington D.C. as “location”, and understand the relationship between them (say, Donald Trump spoke at Washington D.C.), from a specific sentence. Typically, IE systems rely on large amount of training data, primarily acquired via human annotation, to achieve the best performance. But since human annotation is costly and non-scalable, the focus has shifted to adoption of a new strategy Distant Supervision [1]. Distant supervision is a technique that can automatically extract labeled training data from existing knowledge bases without human efforts. However the training data generated by distant supervision is context-agnostic and can be very noisy. Moreover, we also observe the difference between the quality of training examples in terms of to what extent it infers the target entity/relation type. In this project, we focus on removing the noise and identifying the quality difference in the training data generated by distant supervision, by leveraging the feedback signals from one of IE’s downstream applications, QA, to improve the performance of one of the state-of-the-art IE framework, CoType [3]. Keywords—Data Mining, Relation Extraction, Question Answering.",
"title": ""
},
{
"docid": "158b554ee5aedcbee9136dcde010dc30",
"text": "In this paper, we propose a novel progressive parameter pruning method for Convolutional Neural Network acceleration, named Structured Probabilistic Pruning (SPP), which effectively prunes weights of convolutional layers in a probabilistic manner. Unlike existing deterministic pruning approaches, where unimportant weights are permanently eliminated, SPP introduces a pruning probability for each weight, and pruning is guided by sampling from the pruning probabilities. A mechanism is designed to increase and decrease pruning probabilities based on importance criteria in the training process. Experiments show that, with 4× speedup, SPP can accelerate AlexNet with only 0.3% loss of top-5 accuracy and VGG-16 with 0.8% loss of top-5 accuracy in ImageNet classification. Moreover, SPP can be directly applied to accelerate multi-branch CNN networks, such as ResNet, without specific adaptations. Our 2× speedup ResNet-50 only suffers 0.8% loss of top-5 accuracy on ImageNet. We further show the effectiveness of SPP on transfer learning tasks.",
"title": ""
},
{
"docid": "d1357b2e247d521000169dce16f182ee",
"text": "Camera shake or target movement often leads to undesired blur effects in videos captured by a hand-held camera. Despite significant efforts having been devoted to video-deblur research, two major challenges remain: 1) how to model the spatio-temporal characteristics across both the spatial domain (i.e., image plane) and the temporal domain (i.e., neighboring frames) and 2) how to restore sharp image details with respect to the conventionally adopted metric of pixel-wise errors. In this paper, to address the first challenge, we propose a deblurring network (DBLRNet) for spatial-temporal learning by applying a 3D convolution to both the spatial and temporal domains. Our DBLRNet is able to capture jointly spatial and temporal information encoded in neighboring frames, which directly contributes to the improved video deblur performance. To tackle the second challenge, we leverage the developed DBLRNet as a generator in the generative adversarial network (GAN) architecture and employ a content loss in addition to an adversarial loss for efficient adversarial training. The developed network, which we name as deblurring GAN, is tested on two standard benchmarks and achieves the state-of-the-art performance.",
"title": ""
},
{
"docid": "88dd795c6d1fa37c13fbf086c0eb0e37",
"text": "We propose a method to obtain a complete and accurate 3D model from multiview images captured under a variety of unknown illuminations. Based on recent results showing that for Lambertian objects, general illumination can be approximated well using low-order spherical harmonics, we develop a robust alternating approach to recover surface normals. Surface normals are initialized using a multi-illumination multiview stereo algorithm, then refined using a robust alternating optimization method based on the ℓ1 metric. Erroneous normal estimates are detected using a shape prior. Finally, the computed normals are used to improve the preliminary 3D model. The reconstruction system achieves watertight and robust 3D reconstruction while neither requiring manual interactions nor imposing any constraints on the illumination. Experimental results on both real world and synthetic data show that the technique can acquire accurate 3D models for Lambertian surfaces, and even tolerates small violations of the Lambertian assumption.",
"title": ""
}
] |
scidocsrr
|
dd286bccd8bf96ab971a1e75d8a650d0
|
New variants of ABCA12 in harlequin ichthyosis baby
|
[
{
"docid": "e9d79ece14c21fcf859e53a1e730a217",
"text": "ABCA12: adenosine triphosphate binding cassette A12 HI: harlequin ichthyosis NICU: neonatal intensive care unit INTRODUCTION Harlequin ichthyosis (HI) is a rare autosomal recessive congenital ichthyosis associated with mutations in the keratinocyte lipid transporter adenosine triphosphate binding cassette A12 (ABCA12), leading to disruption in lipid and protease transport into lamellar granules in the granular layer of the epidermis. Subsequent defective desquamation with compensatory hyperkeratinization follows. Historically, there has been a high early mortality rate in infants with HI; however, improved neonatal management and the early introduction of systemic retinoids may contribute to improved prognosis. Death in these patients is most commonly caused by sepsis, respiratory failure, or electrolyte imbalances. We report a case of a neonate with HI treated in the first few days of life with acitretin. The patient initially improved but eventually died of pseudomonas sepsis at 6 weeks of age.",
"title": ""
}
] |
[
{
"docid": "a90a20f66d3e73947fbc28dc60bcee24",
"text": "It is well known that the performance of speech recognition algorithms degrade in the presence of adverse environments where a speaker is under stress, emotion, or Lombard effect. This study evaluates the effectiveness of traditional features in recognition of speech under stress and formulates new features which are shown to improve stressed speech recognition. The focus is on formulating robust features which are less dependent on the speaking conditions rather than applying compensation or adaptation techniques. The stressed speaking styles considered are simulated angry and loud, Lombard effect speech, and noisy actual stressed speech from the SUSAS database which is available on CD-ROM through the NATO IST/TG-01 research group and LDC1 . In addition, this study investigates the immunity of linear prediction power spectrum and fast Fourier transform power spectrum to the presence of stress. Our results show that unlike fast Fourier transform’s (FFT) immunity to noise, the linear prediction power spectrum is more immune than FFT to stress as well as to a combination of a noisy and stressful environment. Finally, the effect of various parameter processing such as fixed versus variable preemphasis, liftering, and fixed versus cepstral mean normalization are studied. Two alternative frequency partitioning methods are proposed and compared with traditional mel-frequency cepstral coefficients (MFCC) features for stressed speech recognition. It is shown that the alternate filterbank frequency partitions are more effective for recognition of speech under both simulated and actual stressed conditions.",
"title": ""
},
{
"docid": "a5b147f5b3da39fed9ed11026f5974a2",
"text": "The aperture coupled patch geometry has been extended to dual polarization by several authors. In Tsao et al. (1988) a cross-shaped slot is fed by a balanced feed network which allows for a high degree of isolation. However, the balanced feed calls for an air-bridge which complicates both the design process and the manufacture. An alleviation to this problem is to separate the two channels onto two different substrate layers separated by the ground plane. In this case the disadvantage is increased cost. Another solution with a single layer feed is presented in Brachat and Baracco (1995) where one channel feeds a single slot centered under the patch whereas the other channel feeds two separate slots placed near the edges of the patch. Our experience is that with this geometry it is hard to achieve a well-matched broadband design since the slots near the edge of the patch present very low coupling. All the above geometries maintain symmetry with respect to the two principal planes if we ignore the small spurious coupling from feed lines in the vicinity of the aperture. We propose to reduce the symmetry to only one principal plane which turns out to be sufficient for high isolation and low cross-polarization. The advantage is that only one layer of feed network is needed, with no air-bridges required. In addition the aperture position is centered under the patch. An important application for dual polarized antennas is base station antennas. We have therefore designed and measured an element for the PCS band (1.85-1.99 GHz).",
"title": ""
},
{
"docid": "134ecc62958fa9bb930ff934c5fad7a3",
"text": "We extend our methods from [24] to reprove the Local Langlands Correspondence for GLn over p-adic fields as well as the existence of `-adic Galois representations attached to (most) regular algebraic conjugate self-dual cuspidal automorphic representations, for which we prove a local-global compatibility statement as in the book of Harris-Taylor, [10]. In contrast to the proofs of the Local Langlands Correspondence given by Henniart, [13], and Harris-Taylor, [10], our proof completely by-passes the numerical Local Langlands Correspondence of Henniart, [11]. Instead, we make use of a previous result from [24] describing the inertia-invariant nearby cycles in certain regular situations.",
"title": ""
},
{
"docid": "6aaabe17947bc455d940047745ed7962",
"text": "In this paper, we want to study how natural and engineered systems could perform complex optimizations with limited computational and communication capabilities. We adopt a continuous-time dynamical system view rooted in early work on optimization and more recently in network protocol design, and merge it with the dynamic view of distributed averaging systems. We obtain a general approach, based on the control system viewpoint, that allows to analyze and design (distributed) optimization systems converging to the solution of given convex optimization problems. The control system viewpoint provides many insights and new directions of research. We apply the framework to a distributed optimal location problem and demonstrate the natural tracking and adaptation capabilities of the system to changing constraints.",
"title": ""
},
{
"docid": "876bbee05b7838f4de218b424d895887",
"text": "Although it is commonplace to assume that the type or level of processing during the input of a verbal item determines the representation of that item in memory, which in turn influences later attempts to store, recognize, or recall that item or similar items, it is much less common to assume that the way in which an item is retrieved from memory is also a potent determiner of that item's subsequent representation in memory. Retrieval from memory is often assumed, implicitly or explicitly, as a process analogous to the way in which the contents of a memory location in a computer are read out, that is, as a process that does not, by itself, modify the state of the retrieved item in memory. In my opinion, however, there is ample evidence for a kind of Heisenberg principle with respect to retrieval processes: an item can seldom, if ever, be retrieved from memory without modifying the representation of that item in memory in significant ways. It is both appropriate and productive, I think, to analyze retrieval processes within the same kind of levels-of-processing framework formulated by Craik and Lockhart ( 1972) with respect to input processes; this chapter is an attempt to do so. In the first of the two main sections below, I explore the extent to which negative-recency phenomena in the long-term recall of a list of items is attributable to differences in levels of retrieval during initial recall. In the second section I present some recent results from ex-",
"title": ""
},
{
"docid": "10bc2f9827aa9a53e3ca4b7188bd91c3",
"text": "Learning hash functions across heterogenous high-dimensional features is very desirable for many applications involving multi-modal data objects. In this paper, we propose an approach to obtain the sparse codesets for the data objects across different modalities via joint multi-modal dictionary learning, which we call sparse multi-modal hashing (abbreviated as SM2H). In SM2H, both intra-modality similarity and inter-modality similarity are first modeled by a hypergraph, then multi-modal dictionaries are jointly learned by Hypergraph Laplacian sparse coding. Based on the learned dictionaries, the sparse codeset of each data object is acquired and conducted for multi-modal approximate nearest neighbor retrieval using a sensitive Jaccard metric. The experimental results show that SM2H outperforms other methods in terms of mAP and Percentage on two real-world data sets.",
"title": ""
},
{
"docid": "bf5cedb076c779157e1c1fbd4df0adc9",
"text": "Generating novel graph structures that optimize given objectives while obeying some given underlying rules is fundamental for chemistry, biology and social science research. This is especially important in the task of molecular graph generation, whose goal is to discover novel molecules with desired properties such as drug-likeness and synthetic accessibility, while obeying physical laws such as chemical valency. However, designing models to find molecules that optimize desired properties while incorporating highly complex and non-differentiable rules remains to be a challenging task. Here we propose Graph Convolutional Policy Network (GCPN), a general graph convolutional network based model for goaldirected graph generation through reinforcement learning. The model is trained to optimize domain-specific rewards and adversarial loss through policy gradient, and acts in an environment that incorporates domain-specific rules. Experimental results show that GCPN can achieve 61% improvement on chemical property optimization over state-of-the-art baselines while resembling known molecules, and achieve 184% improvement on the constrained property optimization task.",
"title": ""
},
{
"docid": "7c5f2c92cb3d239674f105a618de99e0",
"text": "We consider the isolated spelling error correction problem as a specific subproblem of the more general string-to-string translation problem. In this context, we investigate four general string-to-string transformation models that have been suggested in recent years and apply them within the spelling error correction paradigm. In particular, we investigate how a simple ‘k-best decoding plus dictionary lookup’ strategy performs in this context and find that such an approach can significantly outdo baselines such as edit distance, weighted edit distance, and the noisy channel Brill and Moore model to spelling error correction. We also consider elementary combination techniques for our models such as language model weighted majority voting and center string combination. Finally, we consider real-world OCR post-correction for a dataset sampled from medieval Latin texts.",
"title": ""
},
{
"docid": "a9d136429d3d5b871fa84c3209bd763c",
"text": "Portable embedded computing systems require energy autonomy. This is achieved by batteries serving as a dedicated energy source. The requirement of portability places severe restrictions on size and weight, which in turn limits the amount of energy that is continuously available to maintain system operability. For these reasons, efficient energy utilization has become one of the key challenges to the designer of battery-powered embedded computing systems.In this paper, we first present a novel analytical battery model, which can be used for the battery lifetime estimation. The high quality of the proposed model is demonstrated with measurements and simulations. Using this battery model, we introduce a new \"battery-aware\" cost function, which will be used for optimizing the lifetime of the battery. This cost function generalizes the traditional minimization metric, namely the energy consumption of the system. We formulate the problem of battery-aware task scheduling on a single processor with multiple voltages. Then, we prove several important mathematical properties of the cost function. Based on these properties, we propose several algorithms for task ordering and voltage assignment, including optimal idle period insertion to exercise charge recovery.This paper presents the first effort toward a formal treatment of battery-aware task scheduling and voltage scaling, based on an accurate analytical model of the battery behavior.",
"title": ""
},
{
"docid": "be7f7d9c6a28b7d15ec381570752de95",
"text": "Neural network are most popular in the research community due to its generalization abilities. Additionally, it has been successfully implemented in biometrics, features selection, object tracking, document image preprocessing and classification. This paper specifically, clusters, summarize, interpret and evaluate neural networks in document Image preprocessing. The importance of the learning algorithms in neural networks training and testing for preprocessing is also highlighted. Finally, a critical analysis on the reviewed approaches and the future research guidelines in the field are suggested.",
"title": ""
},
{
"docid": "443f718fdc81e2ff64c1069ad105e601",
"text": "With the fast progression of digital data exchange in electronic way, information security is becoming much more important in data storage and transmission. Cryptography has come up as a solution which plays a vital role in information security system against malicious attacks. This security mechanism uses some algorithms to scramble data into unreadable text which can be only being decoded or decrypted by party those possesses the associated key. These algorithms consume a significant amount of computing resources such as CPU time, memory and computation time. In this paper two most widely used symmetric encryption techniques i.e. data encryption standard (DES) and advanced encryption standard (AES) have been implemented using MATLAB software. After the implementation, these techniques are compared on some points. These points are avalanche effect due to one bit variation in plaintext keeping the key constant, avalanche effect due to one bit variation in key keeping the plaintext constant, memory required for implementation and simulation time required for encryption.",
"title": ""
},
{
"docid": "575208e6df214fa4378fa18be48af51d",
"text": "A parser based on logic programming language (DCG) has very useful features; perspicuity, power, generality and so on. However, it does have some drawbacks in which it cannot deal with CFG with left recursive rules, for example. To overcome these drawbacks, a Bottom-Up parser embedded in Prolog (BUP) has been developed. In BUP, CFG rules are translated into Prolog clauses which work as a bottom-up left corner parser with top-down expectation. BUP is augmented by introducing a “link” relation to reduce the size of a search space. Furthermore, BUP can be revised to maintain partial parsing results to avoid computational duplication. A BUP translator and a BUP tracer which support the development of grammar rules are described.",
"title": ""
},
{
"docid": "0326178ab59983db61eb5dfe0e2b25a4",
"text": "Article history: Received 9 September 2008 Received in revised form 16 April 2009 Accepted 14 May 2009",
"title": ""
},
{
"docid": "a1fe2227bc9d6ddeda58ff8d137d660b",
"text": "Vulnerability exploits remain an important mechanism for malware delivery, despite efforts to speed up the creation of patches and improvements in software updating mechanisms. Vulnerabilities in client applications (e.g., Browsers, multimedia players, document readers and editors) are often exploited in spear phishing attacks and are difficult to characterize using network vulnerability scanners. Analyzing their lifecycle requires observing the deployment of patches on hosts around the world. Using data collected over 5 years on 8.4 million hosts, available through Symantec's WINE platform, we present the first systematic study of patch deployment in client-side vulnerabilities. We analyze the patch deployment process of 1,593 vulnerabilities from 10 popular client applications, and we identify several new threats presented by multiple installations of the same program and by shared libraries distributed with several applications. For the 80 vulnerabilities in our dataset that affect code shared by two applications, the time between patch releases in the different applications is up to 118 days (with a median of 11 days). Furthermore, as the patching rates differ considerably among applications, many hosts patch the vulnerability in one application but not in the other one. We demonstrate two novel attacks that enable exploitation by invoking old versions of applications that are used infrequently, but remain installed. We also find that the median fraction of vulnerable hosts patched when exploits are released is at most 14%. Finally, we show that the patching rate is affected by user-specific and application-specific factors, for example, hosts belonging to security analysts and applications with an automated updating mechanism have significantly lower median times to patch.",
"title": ""
},
{
"docid": "c3473e7fe7b46628d384cbbe10bfe74c",
"text": "STUDY OBJECTIVE\nTo (1) examine the prevalence of abnormal genital findings in a large cohort of female children presenting with concerns of sexual abuse; and (2) explore how children use language when describing genital contact and genital anatomy.\n\n\nDESIGN\nIn this prospective study we documented medical histories and genital findings in all children who met inclusion criteria. Findings were categorized as normal, indeterminate, and diagnostic of trauma. Logistic regression analysis was used to determine the effects of key covariates on predicting diagnostic findings. Children older than 4 years of age were asked questions related to genital anatomy to assess their use of language.\n\n\nSETTING\nA regional, university-affiliated sexual abuse clinic.\n\n\nPARTICIPANTS\nFemale children (N = 1500) aged from birth to 17 years (inclusive) who received an anogenital examination with digital images.\n\n\nINTERVENTIONS AND MAIN OUTCOME MEASURES\nPhysical exam findings, medical history, and the child's use of language were recorded.\n\n\nRESULTS\nPhysical findings were determined in 99% (n = 1491) of patients. Diagnostic findings were present in 7% (99 of 1491). After adjusting for age, acuity, and type of sexual contact reported by the adult, the estimated odds of diagnostic findings were 12.5 times higher for children reporting genital penetration compared with those who reported only contact (95% confidence interval, 3.46-45.34). Finally, children used the word \"inside\" to describe contact other than penetration of the vaginal canal (ie, labial penetration).\n\n\nCONCLUSION\nA history of penetration by the child was the primary predictor of diagnostic findings. Interpretation of children's use of \"inside\" might explain the low prevalence of diagnostic findings and warrants further study.",
"title": ""
},
{
"docid": "03977b7bdc0102caf7033012354aa897",
"text": "One of the important issues in service organizations is to identify the customers, understanding their difference and ranking them. Recently, the customer value as a quantitative parameter has been used for segmenting customers. A practical solution for analytical development is using analytical techniques such as dynamic clustering algorithms and programs to explore the dynamics in consumer preferences. The aim of this research is to understand the current customer behavior and suggest a suitable policy for new customers in order to attain the highest benefits and customer satisfaction. To identify such market in life insurance customers, We have used the FKM.pf.niose fuzzy clustering technique for classifying the customers based on their demographic and behavioral data of 1071 people in the period April to October 2014. Results show the optimal number of clusters is 3. These three clusters can be named as: investment, security of life and a combination of both. Some suggestions are presented to improve the performance of the insurance company.",
"title": ""
},
{
"docid": "2bfe219ce52a44299178513d88721353",
"text": "This paper describes a spatio-temporal model of the human visual system (HVS) for video imaging applications, predicting the response of the neurons of the primary visual cortex. The model simulates the behavior of the HVS with a three-dimensional lter bank which decomposes the data into perceptual channels, each one being tuned to a speciic spatial frequency, orientation and temporal frequency. It further accounts for contrast sensitivity, inter-stimuli masking and spatio-temporal interaction. The free parameters of the model have been estimated by psychophysics. The model can then be used as the basis for many applications. As an example, a quality metric for coded video sequences is presented.",
"title": ""
},
{
"docid": "dd2e81d24584fe0684266217b732d881",
"text": "In order to understand the role of titanium isopropoxide (TIPT) catalyst on insulation rejuvenation for water tree aged cables, dielectric properties and micro structure changes are investigated for the rejuvenated cables. Needle-shape defects are made inside cross-linked polyethylene (XLPE) cable samples to form water tree in the XLPE layer. The water tree aged samples are injected by the liquid with phenylmethyldimethoxy silane (PMDMS) catalyzed by TIPT for rejuvenation, and the breakdown voltage of the rejuvenated samples is significantly higher than that of the new samples. By the observation of scanning electronic microscope (SEM), the nano-TiO2 particles are observed inside the breakdown channels of the rejuvenated samples. Accordingly, the insulation performance of rejuvenated samples is significantly enhanced by the nano-TiO2 particles. Through analyzing the products of hydrolysis from TIPT, the nano-scale TiO2 particles are observed, and its micro-morphology is consistent with that observed inside the breakdown channels. According to the observation, the insulation enhancement mechanism is described. Therefore, the dielectric property of the rejuvenated cables is improved due to the nano-TiO2 produced by the hydrolysis from TIPT.",
"title": ""
},
{
"docid": "64635c4d7d372acdba1fc3c36ffaaf12",
"text": "We investigate a technique from the literature, called the phantom-types technique, that uses parametric polymorphism, type constraints, and unification of polymorphic types to model a subtyping hierarchy. Hindley-Milner type systems, such as the one found in Standard ML, can be used to enforce the subtyping relation, at least for first-order values. We show that this technique can be used to encode any finite subtyping hierarchy (including hierarchies arising from multiple interface inheritance). We formally demonstrate the suitability of the phantom-types technique for capturing first-order subtyping by exhibiting a type-preserving translation from a simple calculus with bounded polymorphism to a calculus embodying the type system of SML.",
"title": ""
}
] |
scidocsrr
|
8611d246d285828a8fdd0649368f65fa
|
Slicing: A New Approach to Privacy Preserving Data Publishing
|
[
{
"docid": "dec31dfa7aed317742c6f32fcc082044",
"text": "Several anonymization techniques, such as generalization and bucketization, have been designed for privacy preserving microdata publishing. Recent work has shown that generalization loses considerable amount of information, especially for high-dimensional data. Bucketization, on the other hand, does not prevent membership disclosure and does not apply for data that do not have a clear separation between quasi-identifying attributes and sensitive attributes. In this paper, we present a novel technique called slicing, which partitions the data both horizontally and vertically. We show that slicing preserves better data utility than generalization and can be used for membership disclosure protection. Another important advantage of slicing is that it can handle high-dimensional data. We show how slicing can be used for attribute disclosure protection and develop an efficient algorithm for computing the sliced data that obey the ℓ-diversity requirement. Our workload experiments confirm that slicing preserves better utility than generalization and is more effective than bucketization in workloads involving the sensitive attribute. Our experiments also demonstrate that slicing can be used to prevent membership disclosure.",
"title": ""
}
] |
[
{
"docid": "37f157cdcd27c1647548356a5194f2bc",
"text": "Purpose – The aim of this paper is to propose a novel evaluation framework to explore the “root causes” that hinder the acceptance of using internal cloud services in a university. Design/methodology/approach – The proposed evaluation framework incorporates the duo-theme DEMATEL (decision making trial and evaluation laboratory) with TAM (technology acceptance model). The operational procedures were proposed and tested on a university during the post-implementation phase after introducing the internal cloud services. Findings – According to the results, clear understanding and operational ease under the theme perceived ease of use (PEOU) are more imperative; whereas improved usefulness and productivity under the theme perceived usefulness (PU) are more urgent to foster the usage of internal clouds in the case university. Research limitations/implications – Based on the findings, some intervention activities were suggested to enhance the level of users’ acceptance of internal cloud solutions in the case university. However, the results should not be generalized to apply to other educational establishments. Practical implications – To reduce the resistance from using internal clouds, some necessary intervention activities such as developing attractive training programs, creating interesting workshops, and rewriting user friendly manual or handbook are recommended. Originality/value – The novel two-theme DEMATEL has greatly contributed to the conventional one-theme DEMATEL theory. The proposed two-theme DEMATEL procedures were the first attempt to evaluate the acceptance of using internal clouds in university. The results have provided manifest root-causes under two distinct themes, which help derive effectual intervention activities to foster the acceptance of usage of internal clouds in a university.",
"title": ""
},
{
"docid": "fd48a775116cf9de55827da8741335af",
"text": "OBJECTIVES To characterize dermoscopic criteria of squamous cell carcinoma (SCC) and keratoacanthoma and to compare them with other lesions. DESIGN Observer-masked study of consecutive lesions performed from March 1 through December 31, 2011. SETTING Primary care skin cancer practice in Brisbane, Australia. PARTICIPANTS A total of 186 patients with 206 lesions. MAIN OUTCOME MEASURES Sensitivity, specificity, predictive values, and odds ratios. RESULTS In a retrospective analysis of 60 invasive SCC and 43 keratoacanthoma cases, keratin, surface scale, blood spots, white structureless zones, white circles, and coiled vessels were commonly found in both types of lesions. We reevaluated the significance of these criteria in 206 raised, nonpigmented lesions (32 SCCs, 29 keratoacanthomas, and 145 other lesions). Central keratin was more common in keratoacanthoma than in SCC (51.2% vs 30.0%, P = .03). Keratin had the highest sensitivity for keratoacanthoma and SCC (79%), and white circles had the highest specificity (87%). When keratoacanthoma and SCC were contrasted with basal cell carcinoma, the positive predictive values of keratin and white circles were 92% and 89%, respectively. When SCC and keratoacanthoma were contrasted with actinic keratosis and Bowen disease, the positive predictive value of keratin was 50% and that of white circles was 92%. In a multivariate model, white circles, keratin, and blood spots were independent predictors of SCC and keratoacanthoma. White circles had the highest odds ratio in favor of SCC and keratoacanthoma. The interobserver agreement for white circles was good (0.55; 95% CI, 0.44-0.65). CONCLUSIONS White circles, keratin, and blood spots are useful clues to differentiate SCC and keratoacanthoma from other raised nonpigmented skin lesions by dermoscopy. The significance of these criteria depends on the clinical context.",
"title": ""
},
{
"docid": "b3bc34cfbe6729f7ce540a792c32bf4c",
"text": "The employment of MIMO OFDM technique constitutes a cost effective approach to high throughput wireless communications. The system performance is sensitive to frequency offset which increases with the doppler spread and causes Intercarrier interference (ICI). ICI is a major concern in the design as it can potentially cause a severe deterioration of quality of service (QoS) which necessitates the need for a high speed data detection and decoding with ICI cancellation along with the intersymbol interference (ISI) cancellation in MIMO OFDM communication systems. Iterative parallel interference canceller (PIC) with joint detection and decoding is a promising approach which is used in this work. The receiver consists of a two stage interference canceller. The co channel interference cancellation is performed based on Zero Forcing (ZF) Detection method used to suppress the effect of ISI in the first stage. The latter stage consists of a simplified PIC scheme. High bit error rates of wireless communication system require employing forward error correction (FEC) methods on the data transferred in order to avoid burst errors that occur in physical channel. To achieve high capacity with minimum error rate Low Density Parity Check (LDPC) codes which have recently drawn much attention because of their error correction performance is used in this system. The system performance is analyzed for two different values of normalized doppler shift for varying speeds. The bit error rate (BER) is shown to improve in every iteration due to the ICI cancellation. The interference analysis with the use of ICI cancellation is examined for a range of normalized doppler shift which corresponds to mobile speeds varying from 5Km/hr to 250Km/hr.",
"title": ""
},
{
"docid": "2ab5da747c5db82b0d18fee66d46cc36",
"text": "Aiming at the problem of maximum power point tracking (MPPT) of PV power generation, combined with the purpose of temperature adjustment of rural PV heating system, the MPPT method based on the model of adjustable load resistance matching is proposed. The output characteristics of PV cells and the principle of load matching are analyzed, and the MPPT of PV generation is realized by perturbation and observation (P&O) method. Regarding the heating equipment as the resistive load, a group of resistors with a wide range and small step length which can be controlled by the relay are designed to test. Finally, taking STM32 as the core processor, adjusting the step size and tracking the change of PV output voltage, the controller can track the change of the external environment accurately, that is, it can track the maximum power point of a PV power system effectively.",
"title": ""
},
{
"docid": "04013595912b4176574fb81b38beade5",
"text": "This chapter presents an overview of the current state of cognitive task analysis (CTA) in research and practice. CTA uses a variety of interview and observation strategies to capture a description of the explicit and implicit knowledge that experts use to perform complex tasks. The captured knowledge is most often transferred to training or the development of expert systems. The first section presents descriptions of a variety of CTA techniques, their common characteristics, and the typical strategies used to elicit knowledge from experts and other sources. The second section describes research on the impact of CTA and synthesizes a number of studies and reviews pertinent to issues underlying knowledge elicitation. In the third section, we discuss the integration of CTA with training design. Finally, in the fourth section, we present a number of recommendations for future research and conclude with general comments.",
"title": ""
},
{
"docid": "e458ba119fe15f17aa658c5b42a21e2b",
"text": "In this paper, with the help of controllable active near-infrared (NIR) lights, we construct near-infrared differential (NIRD) images. Based on reflection model, NIRD image is believed to contain the lighting difference between images with and without active NIR lights. Two main characteristics based on NIRD images are exploited to conduct spoofing detection. Firstly, there exist obviously spoofing media around the faces in most conditions, which reflect incident lights in almost the same way as the face areas do. We analyze the pixel consistency between face and non-face areas and employ context clues to distinguish the spoofing images. Then, lighting feature, extracted only from face areas, is utilized to detect spoofing attacks of deliberately cropped medium. Merging the two features, we present a face spoofing detection system. In several experiments on self collected datasets with different spoofing media, we demonstrate the excellent results and robustness of proposed method.",
"title": ""
},
{
"docid": "7c00c5d75ab4beffc595aff99a66b402",
"text": "We develop a unified model, known as MgNet, that simultaneously recovers some convolutional neural networks (CNN) for image classification and multigrid (MG) methods for solving discretized partial different equations (PDEs). This model is based on close connections that we have observed and uncovered between the CNN and MG methodologies. For example, pooling operation and feature extraction in CNN correspond directly to restriction operation and iterative smoothers in MG, respectively. As the solution space is often the dual of the data space in PDEs, the analogous concept of feature space and data space (which are dual to each other) is introduced in CNN. With such connections and new concept in the unified model, the function of various convolution operations and pooling used in CNN can be better understood. As a result, modified CNN models (with fewer weights and hyper parameters) are developed that exhibit competitive and sometimes better performance in comparison with existing CNN models when applied to both CIFAR-10 and CIFAR-100 data sets.",
"title": ""
},
{
"docid": "e680f8b83e7a2137321cc644724827de",
"text": "A dual-band antenna is developed on a flexible Liquid Crystal Polymer (LCP) substrate for simultaneous operation at 2.45 and 5.8 GHz in high frequency Radio Frequency IDentification (RFID) systems. The response of the low profile double T-shaped slot antenna is preserved when the antenna is placed on platforms such as wood and cardboard, and when bent to conform to a cylindrical plastic box. Furthermore, experiments show that the antenna is still operational when placed at a distance of around 5cm from a metallic surface.",
"title": ""
},
{
"docid": "2ee88fbbe36da024188eb5af40a74bcd",
"text": "Synchronization phenomena in large populations of interacting elements are the subject of intense research efforts in physical, biological, chemical, and social systems. A successful approach to the problem of synchronization consists of modeling each member of the population as a phase oscillator. In this review, synchronization is analyzed in one of the most representative models of coupled phase oscillators, the Kuramoto model. A rigorous mathematical treatment, specific numerical methods, and many variations and extensions of the original model that have appeared in the last few years are presented. Relevant applications of the model in different contexts are also included.",
"title": ""
},
{
"docid": "310036a45a95679a612cc9a60e44e2e0",
"text": "A broadband single layer, dual circularly polarized (CP) reflectarrays with linearly polarized feed is introduced in this paper. To reduce the electrical interference between the two orthogonal polarizations of the CP element, a novel subwavelength multiresonance element with a Jerusalem cross and an open loop is proposed, which presents a broader bandwidth and phase range excessing 360° simultaneously. By tuning the x- and y-axis dimensions of the proposed element, an optimization technique is used to minimize the phase errors on both orthogonal components. Then, a single-layer offset-fed 20 × 20-element dual-CP reflectarray has been designed and fabricated. The measured results show that the 1-dB gain and 3-dB axial ratio (AR) bandwidths of the dual-CP reflectarray can reach 12.5% and 50%, respectively, which shows a significant improvement in gain and AR bandwidths as compared to reflectarrays with conventional λ/2 cross-dipole elements.",
"title": ""
},
{
"docid": "103f4a18b4ae42756fef6ae583c4d742",
"text": "The Essex intelligent dormitory, iDorm, uses embedded agents to create an ambient-intelligence environment. In a five-and-a-half-day experiment, a user occupied the iDorm, testing its ability to learn user behavior and adapt to user needs. The embedded agent discreetly controls the iDorm according to user preferences. Our work focuses on developing learning and adaptation techniques for embedded agents. We seek to provide online, lifelong, personalized learning of anticipatory adaptive control to realize the ambient-intelligence vision in ubiquitous-computing environments. We developed the Essex intelligent dormitory, or iDorm, as a test bed for this work and an exemplar of this approach.",
"title": ""
},
{
"docid": "234804b51e137cb213998a6d00f2db14",
"text": "Linear Dynamical Systems (LDSs) are the fundamental tools for encoding spatio-temporal data in various disciplines. To enhance the performance of LDSs, in this paper, we address the challenging issue of performing sparse coding on the space of LDSs, where both data and dictionary atoms are LDSs. Rather than approximate the extended observability with a finite-order matrix, we represent the space of LDSs by an infinite Grassmannian consisting of the orthonormalized extended observability subspaces. Via a homeomorphic mapping, such Grassmannian is embedded into the space of symmetric matrices, where a tractable objective function can be derived for sparse coding. Then, we propose an efficient method to learn the system parameters of the dictionary atoms explicitly, by imposing the symmetric constraint to the transition matrices of the data and dictionary systems. Moreover, we combine the state covariance into the algorithm formulation, thus further promoting the performance of the models with symmetric transition matrices. Comparative experimental evaluations reveal the superior performance of proposed methods on various tasks including video classification and tactile recognition.",
"title": ""
},
{
"docid": "e566bb3425c986c22e76f78183eb2bb7",
"text": "A blog site consists of many individual blog postings. Current blog search services focus on retrieving postings but there is also a need to identify relevant blog sites. Blog site search is similar to resource selection in distributed information retrieval, in that the target is to find relevant collections of documents. We introduce resource selection techniques for blog site search and evaluate their performance. Further, we propose a \"diversity factor\" that measures the topic diversity of each blog site. Our results show that the appropriate combination of the resource selection techniques and the diversity factor can achieve significant improvements in retrieval performance compared to baselines. We also report results using these techniques on the TREC blog distillation task.",
"title": ""
},
{
"docid": "08823059d089c1e553af85d5768332ca",
"text": "Hyperbolic discount functions induce dynamically inconsistent preferences, implying a motive for consumers to constrain their own future choices. This paper analyzes the decisions of a hyperbolic consumer who has access to an imperfect commitment technology: an illiquid asset whose sale must be initiated one period before the sale proceeds are received. The model predicts that consumption tracks income, and the model explains why consumers have asset-specic marginal propensities to consume. The model suggests that nancial innovation may have caused the ongoing decline in U. S. savings rates, since nancial innovation increases liquidity, eliminating commitment opportunities. Finally, the model implies that nancial market innovation may reduce welfare by providing “too much” liquidity.",
"title": ""
},
{
"docid": "541de3d6af2edacf7396e5ca66c385e2",
"text": "This paper presents a simple and intuitive method for mining search engine query logs to get fast query recommendations on a large scale industrial strength search engine. In order to get a more comprehensive solution, we combine two methods together. On the one hand, we study and model search engine users' sequential search behavior, and interpret this consecutive search behavior as client-side query refinement, that should form the basis for the search engine's own query refinement process. On the other hand, we combine this method with a traditional content based similarity method to compensate for the high sparsity of real query log data, and more specifically, the shortness of most query sessions. To evaluate our method, we use one hundred day worth query logs from SINA' search engine to do off-line mining. Then we analyze three independent editors evaluations on a query test set. Based on their judgement, our method was found to be effective for finding related queries, despite its simplicity. In addition to the subjective editors' rating, we also perform tests based on actual anonymous user search sessions.",
"title": ""
},
{
"docid": "b174bbcb91d35184674532b6ab22dcdf",
"text": "Many studies have confirmed the benefit of gamification on learners’ motivation. However, gamification may also demotivate some learners, or learners may focus on the gamification elements instead of the learning content. Some researchers have recommended building learner models that can be used to adapt gamification elements based on learners’ personalities. Building such a model requires a strong understanding of the relationship between gamification and personality. Existing empirical work has focused on measuring knowledge gain and learner preference. These findings may not be reliable because the analyses are based on learners who complete the study and because they rely on self-report from learners. This preliminary study explores a different approach by allowing learners to drop out at any time and then uses the number of students left as a proxy for motivation and engagement. Survival analysis is used to analyse the data. The results confirm the benefits of gamification and provide some pointers to how this varies with personality.",
"title": ""
},
{
"docid": "0b7b3ed807e88a558b27008acedefa08",
"text": "In this paper, a semi automated technique to generate slide presentations from english text documents is proposed. The technique discussed in this paper is considered to be a pioneering attempt in the field of NLP (Natural Language Processing). The technique involves an information extractor and a slide generator, which combines certain NLP methods such as segmentation, chunking, summarization etc.., with certain special linguistic features of the text such as the ontology of the words, noun phrases found, semantic links, sentence centrality etc., In order to aid the language processing task, two tools can be utilized namely, MontyLingua which helps in chunking and Doddle helps in creating an ontology for the input text represented as an OWL (Ontology Web Language) file. The process of the technique comprises of extracting text, creating an ontology, identifying important phrases for bullets and generating slides.",
"title": ""
},
{
"docid": "aa678a85779188753f974009cec18c23",
"text": "Pancreatic-islet inflammation contributes to the failure of β cell insulin secretion during obesity and type 2 diabetes. However, little is known about the nature and function of resident immune cells in this context or in homeostasis. Here we show that interleukin (IL)-33 was produced by islet mesenchymal cells and enhanced by a diabetes milieu (glucose, IL-1β, and palmitate). IL-33 promoted β cell function through islet-resident group 2 innate lymphoid cells (ILC2s) that elicited retinoic acid (RA)-producing capacities in macrophages and dendritic cells via the secretion of IL-13 and colony-stimulating factor 2. In turn, local RA signaled to the β cells to increase insulin secretion. This IL-33-ILC2 axis was activated after acute β cell stress but was defective during chronic obesity. Accordingly, IL-33 injections rescued islet function in obese mice. Our findings provide evidence that an immunometabolic crosstalk between islet-derived IL-33, ILC2s, and myeloid cells fosters insulin secretion.",
"title": ""
},
{
"docid": "52f4887e800456dbaddc6d99ce126d5b",
"text": "Mesenchymal stem cells (MSCs) can become potently immunosuppressive through unknown mechanisms. We found that the immunosuppressive function of MSCs is elicited by IFNgamma and the concomitant presence of any of three other proinflammatory cytokines, TNFalpha, IL-1alpha, or IL-1beta. These cytokine combinations provoke the expression of high levels of several chemokines and inducible nitric oxide synthase (iNOS) by MSCs. Chemokines drive T cell migration into proximity with MSCs, where T cell responsiveness is suppressed by nitric oxide (NO). This cytokine-induced immunosuppression was absent in MSCs derived from iNOS(-/-) or IFNgammaR1(-/-) mice. Blockade of chemokine receptors also abolished the immunosuppression. Administration of wild-type MSCs, but not IFNgammaR1(-/-) or iNOS(-/-) MSCs, prevented graft-versus-host disease in mice, an effect reversed by anti-IFNgamma or iNOS inhibitors. Wild-type MSCs also inhibited delayed-type hypersensitivity, while iNOS(-/-) MSCs aggravated it. Therefore, proinflammatory cytokines are required to induce immunosuppression by MSCs through the concerted action of chemokines and NO.",
"title": ""
}
] |
scidocsrr
|
87138638e9e8a41ab72e8795a3b19ac5
|
Trajectory tracking control for hovering and acceleration maneuver of Quad Tilt Rotor UAV
|
[
{
"docid": "bba979cd5d69dac380ba1023441460d3",
"text": "This paper presents a model of a particular class of a convertible MAV with fixed wings. This vehicle can operate as a helicopter as well as a conventional airplane, i.e. the aircraft is able to switch their flight configuration from hover to level flight and vice versa by means of a transition maneuver. The paper focuses on finding a controller capable of performing such transition via the tilting of their four rotors. The altitude should remain on a predefined value throughout the transition stage. For this purpose a nonlinear control strategy based on saturations and Lyapunov design is given. The use of this control law enables to make the transition maneuver while maintaining the aircraft in flight. Numerical results are presented, showing the effectiveness of the proposed methodology to deal with the transition stage.",
"title": ""
}
] |
[
{
"docid": "1fd0f4fd2d63ef3a71f8c56ce6a25fb5",
"text": "A new ‘growing’ maximum likelihood classification algorithm for small reservoir delineation has been developed and is tested with Radarsat-2 data for reservoirs in the semi-arid Upper East Region, Ghana. The delineation algorithm is able to find the land-water boundary from SAR imagery for different weather and environmental conditions. As such, the algorithm allows for remote sensed operational monitoring of small reservoirs.",
"title": ""
},
{
"docid": "8df2c8cf6f6662ed60280b8777c64336",
"text": "In comparative genomics, functional annotations are transferred from one organism to another relying on sequence similarity. With more than 20 million citations in PubMed, text mining provides the ideal tool for generating additional large-scale homology-based predictions. To this end, we have refined a recent dataset of biomolecular events extracted from text, and integrated these predictions with records from public gene databases. Accounting for lexical variation of gene symbols, we have implemented a disambiguation algorithm that uniquely links the arguments of 11.2 million biomolecular events to well-defined gene families, providing interesting opportunities for query expansion and hypothesis generation. The resulting MySQL database, including all 19.2 million original events as well as their homology-based variants, is publicly available at http://bionlp.utu.fi/.",
"title": ""
},
{
"docid": "cad2742f731edaf67924ce002d9a1f94",
"text": "Output impedance of active-clamp converters is a valid method to achieve current sharing among parallel-connected power stages. Nevertheless, parasitic capacitances result in resonances that modify converter behavior and current balance. A solution is presented and validated. The current balance is achieved without a dedicated control.",
"title": ""
},
{
"docid": "724734077fbc469f1bbcad4d7c3b0cbc",
"text": "Most efforts to improve cyber security focus primarily on incorporating new technological approaches in products and processes. However, a key element of improvement involves acknowledging the importance of human behavior when designing, building and using cyber security technology. In this survey paper, we describe why incorporating an understanding of human behavior into cyber security products and processes can lead to more effective technology. We present two examples: the first demonstrates how leveraging behavioral science leads to clear improvements, and the other illustrates how behavioral science offers the potential for significant increases in the effectiveness of cyber security. Based on feedback collected from practitioners in preliminary interviews, we narrow our focus to two important behavioral aspects: cognitive load and bias. Next, we identify proven and potential behavioral science findings that have cyber security relevance, not only related to cognitive load and bias but also to heuristics and behavioral science models. We conclude by suggesting several next steps for incorporating behavioral science findings in our technological design, development and use.",
"title": ""
},
{
"docid": "f753712eed9e5c210810d2afd1366eb8",
"text": "To improve FPGA performance for arithmetic circuits that are dominated by multi-input addition operations, an FPGA logic block is proposed that can be configured as a 6:2 or 7:2 compressor. Compressors have been used successfully in the past to realize parallel multipliers in VLSI technology; however, the peculiar structure of FPGA logic blocks, coupled with the high cost of the routing network relative to ASIC technology, renders compressors ineffective when mapped onto the general logic of an FPGA. On the other hand, current FPGA logic cells have already been enhanced with carry chains to improve arithmetic functionality, for example, to realize fast ternary carry-propagate addition. The contribution of this article is a new FPGA logic cell that is specialized to help realize efficient compressor trees on FPGAs. The new FPGA logic cell has two variants that can respectively be configured as a 6:2 or a 7:2 compressor using additional carry chains that, coupled with lookup tables, provide the necessary functionality. Experiments show that the use of these modified logic cells significantly reduces the delay of compressor trees synthesized on FPGAs compared to state-of-the-art synthesis techniques, with a moderate increase in area and power consumption.",
"title": ""
},
{
"docid": "7956e5fd3372716cb5ae16c6f9e846fb",
"text": "Understanding query intent helps modern search engines to improve search results as well as to display instant answers to the user. In this work, we introduce an accurate query classification method to detect the intent of a user search query. We propose using convolutional neural networks (CNN) to extract query vector representations as the features for the query classification. In this model, queries are represented as vectors so that semantically similar queries can be captured by embedding them into a vector space. Experimental results show that the proposed method can effectively detect intents of queries with higher precision and recall compared to current methods.",
"title": ""
},
{
"docid": "bafdfa2ecaeb18890ab8207ef1bc4f82",
"text": "This content analytic study investigated the approaches of two mainstream newspapers—The New York Times and the Chicago Tribune—to cover the gay marriage issue. The study used the Massachusetts legitimization of gay marriage as a dividing point to look at what kinds of specific political or social topics related to gay marriage were highlighted in the news media. The study examined how news sources were framed in the coverage of gay marriage, based upon the newspapers’ perspectives and ideologies. The results indicated that The New York Times was inclined to emphasize the topic of human equality related to the legitimization of gay marriage. After the legitimization, The New York Times became an activist for gay marriage. Alternatively, the Chicago Tribune highlighted the importance of human morality associated with the gay marriage debate. The perspective of the Chicago Tribune was not dramatically influenced by the legitimization. It reported on gay marriage in terms of defending American traditions and family values both before and after the gay marriage legitimization. Published by Elsevier Inc on behalf of Western Social Science Association. Gay marriage has been a controversial issue in the United States, especially since the Massachusetts Supreme Judicial Court officially authorized it. Although the practice has been widely discussed for several years, the acceptance of gay marriage does not seem to be concordant with mainstream American values. This is in part because gay marriage challenges the traditional value of the family institution. In the United States, people’s perspectives of and attitudes toward gay marriage have been mostly polarized. Many people optimistically ∗ Corresponding author. E-mail addresses: [email protected], [email protected] (P.-L. Pan). 0362-3319/$ – see front matter. Published by Elsevier Inc on behalf of Western Social Science Association. doi:10.1016/j.soscij.2010.02.002 P.-L. Pan et al. / The Social Science Journal 47 (2010) 630–645 631 support gay legal rights and attempt to legalize it in as many states as possible, while others believe legalizing homosexuality may endanger American society and moral values. A number of forces and factors may expand this divergence between the two polarized perspectives, including family, religion and social influences. Mass media have a significant influence on socialization that cultivates individual’s belief about the world as well as affects individual’s values on social issues (Comstock & Paik, 1991). Moreover, news media outlets become a strong factor in influencing people’s perceptions of and attitudes toward gay men and lesbians because the news is one of the most powerful media to influence people’s attitudes toward gay marriage (Anderson, Fakhfakh, & Kondylis, 1999). Some mainstream newspapers are considered as media elites (Lichter, Rothman, & Lichter, 1986). Furthermore, numerous studies have demonstrated that mainstream newspapers would produce more powerful influences on people’s perceptions of public policies and political issues than television news (e.g., Brians & Wattenberg, 1996; Druckman, 2005; Eveland, Seo, & Marton, 2002) Gay marriage legitimization, a specific, divisive issue in the political and social dimensions, is concerned with several political and social issues that have raised fundamental questions about Constitutional amendments, equal rights, and American family values. 
The role of news media becomes relatively important while reporting these public debates over gay marriage, because not only do the news media affect people’s attitudes toward gays and lesbians by positively or negatively reporting the gay and lesbian issue, but also shape people’s perspectives of the same-sex marriage policy by framing the recognition of gay marriage in the news coverage. The purpose of this study is designed to examine how gay marriage news is described in the news coverage of The New York Times and the Chicago Tribune based upon their divisive ideological framings. 1. Literature review 1.1. Homosexual news coverage over time Until the 1940s, news media basically ignored the homosexual issue in the United States (Alwood, 1996; Bennett, 1998). According to Bennett (1998), of the 356 news stories about gays and lesbians that appeared in Time and Newsweek from 1947 to 1997, the Kinsey report on male sexuality published in 1948 was the first to draw reporters to the subject of homosexuality. From the 1940s to 1950s, the homosexual issue was reported as a social problem. Approximately 60% of the articles described homosexuals as a direct threat to the strength of the U.S. military, the security of the U.S. government, and the safety of ordinary Americans during this period. By the 1960s, the gay and lesbian issue began to be discussed openly in the news media. However, these portrayals were covered in the context of crime stories and brief items that ridiculed effeminate men or masculine women (Miller, 1991; Streitmatter, 1993). In 1963, a cover story, “Let’s Push Homophile Marriage,” was the first to treat gay marriage as a matter of winning legal recognition (Stewart-Winter, 2006). However, this cover story did not cause people to pay positive attention to gay marriage, but raised national debates between punishment and pity of homosexuals. Specifically speaking, although numerous arti632 P.-L. Pan et al. / The Social Science Journal 47 (2010) 630–645 cles reported before the 1960s provided growing visibility for homosexuals, they were still highly critical of them (Bennett, 1998). In September 1967, the first hard-hitting gay newspaper—the Los Angeles Advocate—began publication. Different from other earlier gay and lesbian publications, its editorial mix consisted entirely of non-fiction materials, including news stories, editorials, and columns (Cruikshank, 1992; Streitmatter, 1993). The Advocate was the first gay publication to operate as an independent business financed entirely by advertising and circulation, rather than by subsidies from a membership organization (Streitmatter, 1995a, 1995b). After the Stonewall Rebellion in June 1969 in New York City ignited the modern phase of the gay and lesbian liberation movement, the number and circulation of the gay and lesbian press exploded (Streitmatter, 1998). Therefore, gay rights were discussed in the news media during the early 1970s. Homosexuals began to organize a series of political actions associated with gay rights, which was widely covered by the news media, while a backlash also appeared against the gay-rights movements, particularly among fundamentalist Christians (Alwood, 1996; Bennett, 1998). Later in the 1970s, the genre entered a less political phrase by exploring the dimensions of the developing culture of gay and lesbian. 
The news media plumbed the breadth and depth of topics ranging from the gay and lesbian sensibility in art and literature to sex, spirituality, personal appearance, dyke separatism, lesbian mothers, drag queen, leather men, and gay bathhouses (Streitmatter, 1995b). In the 1980s, the gay and lesbian issue confronted a most formidable enemy when AIDS/HIV, one of the most devastating diseases in the history of medicine, began killing gay men at an alarming rate. Accordingly, AIDS/HIV became the biggest gay story reported by the news media. Numerous news media outlets linked the AIDS/HIV epidemic with homosexuals, which implied the notion of the promiscuous gay and lesbian lifestyle. The gays and lesbians, therefore, were described as a dangerous minority in the news media during the 1980s (Altman, 1986; Cassidy, 2000). In the 1990s, issues about the growing visibility of gays and lesbians and their campaign for equal rights were frequently covered in the news media, primarily because of AIDS and the debate over whether the ban on gays in the military should be lifted. The increasing visibility of gay people resulted in the emergence of lifestyle magazines (Bennett, 1998; Streitmatter, 1998). The Out, a lifestyle magazine based in New York City but circulated nationally, led the new phase, since its upscale design and fashion helped attract mainstream advertisers. This magazine, which devalued news in favor of stories on entertainment and fashions, became the first gay and lesbian publication sold in mainstream bookstores and featured on the front page of The New York Times (Streitmatter, 1998). From the late 1990s to the first few years of the 2000s, homosexuals were described as a threat to children’s development as well as a danger to family values in the news media. The legitimacy of same-sex marriage began to be discussed, because news coverage dominated the issue of same-sex marriage more frequently than before (Bennett, 1998). According to Gibson (2004), The New York Times first announced in August 2002 that its Sunday Styles section would begin publishing reports of same-sex commitment ceremonies along with the traditional heterosexual wedding announcements. Moreover, many newspapers joined this trend. Gibson (2004) found that not only the national newspapers, such as The New York Times, but also other regional newspapers, such as the Houston Chronicle and the Seattle Times, reported surprisingly large P.-L. Pan et al. / The Social Science Journal 47 (2010) 630–645 633 number of news stories about the everyday lives of gays and lesbians, especially since the Massachusetts Supreme Judicial Court ruled in November 2003 that same-sex couples had the same right to marry as heterosexuals. Previous studies investigated the increased amount of news coverage of gay and lesbian issues in the past six decades, but they did not analyze how homosexuals are framed in the news media in terms of public debates on the gay marriage issue. These studies failed to examine how newspapers report this national debate on gay marriage as well as what kinds of news frames are used in reporting this controversial issue. 1.2. Framing gay and lesbian partnersh",
"title": ""
},
{
"docid": "11d06fb5474df44a6bc733bd5cd1263d",
"text": "Understanding how materials that catalyse the oxygen evolution reaction (OER) function is essential for the development of efficient energy-storage technologies. The traditional understanding of the OER mechanism on metal oxides involves four concerted proton-electron transfer steps on metal-ion centres at their surface and product oxygen molecules derived from water. Here, using in situ 18O isotope labelling mass spectrometry, we provide direct experimental evidence that the O2 generated during the OER on some highly active oxides can come from lattice oxygen. The oxides capable of lattice-oxygen oxidation also exhibit pH-dependent OER activity on the reversible hydrogen electrode scale, indicating non-concerted proton-electron transfers in the OER mechanism. Based on our experimental data and density functional theory calculations, we discuss mechanisms that are fundamentally different from the conventional scheme and show that increasing the covalency of metal-oxygen bonds is critical to trigger lattice-oxygen oxidation and enable non-concerted proton-electron transfers during OER.",
"title": ""
},
{
"docid": "988b56fdbfd0fbb33bb715adb173c63c",
"text": "This paper presents a new sensing system for home-based rehabilitation based on optical linear encoder (OLE), in which the motion of an optical encoder on a code strip is converted to the limb joints' goniometric data. A body sensing module was designed, integrating the OLE and an accelerometer. A sensor network of three sensing modules was established via controller area network bus to capture human arm motion. Experiments were carried out to compare the performance of the OLE module with that of commercial motion capture systems such as electrogoniometers and fiber-optic sensors. The results show that the inexpensive and simple-design OLE's performance is comparable to that of expensive systems. Moreover, a statistical study was conducted to confirm the repeatability and reliability of the sensing system. The OLE-based system has strong potential as an inexpensive tool for motion capture and arm-function evaluation for short-term as well as long-term home-based monitoring.",
"title": ""
},
{
"docid": "d2b545b4f9c0e7323760632c65206480",
"text": "This brief presents a quantitative analysis of the operating characteristics of three-phase diode bridge rectifiers with ac-side reactance and constant-voltage loads. We focus on the case where the ac-side currents vary continuously (continuous ac-side conduction mode). This operating mode is of particular importance in alternators and generators, for example. Simple approximate expressions are derived for the line and output current characteristics as well as the input power factor. Expressions describing the necessary operating conditions for continuous ac-side conduction are also developed. The derived analytical expressions are applied to practical examples and both simulations and experimental results are utilized to validate the analytical results. It is shown that the derived expressions are far more accurate than calculations based on traditional constant-current models.",
"title": ""
},
{
"docid": "89dd97465c8373bb9dabf3cbb26a4448",
"text": "Unidirectional connections from the cortex to the matrix of the corpus striatum initiate the cortico-basal ganglia (BG)-thalamocortical loop, thought to be important in momentary action selection and in longer-term fine tuning of behavioural repertoire; a discrete set of striatal compartments, striosomes, has the complementary role of registering or anticipating reward that shapes corticostriatal plasticity. Re-entrant signals traversing the cortico-BG loop impact predominantly frontal cortices, conveyed through topographically ordered output channels; by contrast, striatal input signals originate from a far broader span of cortex, and are far more divergent in their termination. The term ‘disclosed loop’ is introduced to describe this organisation: a closed circuit that is open to outside influence at the initial stage of cortical input. The closed circuit component of corticostriatal afferents is newly dubbed ‘operative’, as it is proposed to establish the bid for action selection on the part of an incipient cortical action plan; the broader set of converging corticostriatal afferents is described as contextual. A corollary of this proposal is that every unit of the striatal volume, including the long, C-shaped tail of the caudate nucleus, should receive a mandatory component of operative input, and hence include at least one area of BG-recipient cortex amongst the sources of its corticostriatal afferents. Individual operative afferents contact twin classes of GABAergic striatal projection neuron (SPN), distinguished by their neurochemical character, and onward circuitry. This is the basis of the classic direct and indirect pathway model of the cortico-BG loop. Each pathway utilises a serial chain of inhibition, with two such links, or three, providing positive and negative feedback, respectively. Operative co-activation of direct and indirect SPNs is, therefore, pictured to simultaneously promote action, and to restrain it. The balance of this rival activity is determined by the contextual inputs, which summarise the external and internal sensory environment, and the state of ongoing behavioural priorities. Notably, the distributed sources of contextual convergence upon a striatal locus mirror the transcortical network harnessed by the origin of the operative input to that locus, thereby capturing a similar set of contingencies relevant to determining action. The disclosed loop formulation of corticostriatal and subsequent BG loop circuitry, as advanced here, refines the operating rationale of the classic model and allows the integration of more recent anatomical and physiological data, some of which can appear at variance with the classic model. Equally, it provides a lucid functional context for continuing cellular studies of SPN biophysics and mechanisms of synaptic plasticity.",
"title": ""
},
{
"docid": "330704fbad279c826eb7cf3a174b78a3",
"text": "The problem of planning and goal-directed behavior has been addressed in computer science for many years, typically based on classical concepts like Bellman’s optimality principle, dynamic programming, or Reinforcement Learning methods – but is this the only way to address the problem? Recently there is growing interest in using probabilistic inference methods for decision making and planning. Promising about such approaches is that they naturally extend to distributed state representations and efficiently cope with uncertainty. In sensor processing, inference methods typically compute a posterior over state conditioned on observations – applied in the context of action selection they compute a posterior over actions conditioned on goals. In this paper we will first introduce the idea of using inference for reasoning about actions on an intuitive level, drawing connections to the idea of internal simulation. We then survey previous and own work using the new approach to address (partially observable) Markov Decision Processes and stochastic optimal control problems.",
"title": ""
},
{
"docid": "2fa61482be37fd956e6eceb8e517411d",
"text": "According to analysis reports on road accidents of recent years, it's renowned that the main cause of road accidents resulting in deaths, severe injuries and monetary losses, is due to a drowsy or a sleepy driver. Drowsy state may be caused by lack of sleep, medication, drugs or driving continuously for long time period. An increase rate of roadside accidents caused due to drowsiness during driving indicates a need of a system that detects such state of a driver and alerts him prior to the occurrence of any accident. During the recent years, many researchers have shown interest in drowsiness detection. Their approaches basically monitor either physiological or behavioral characteristics related to the driver or the measures related to the vehicle being used. A literature survey summarizing some of the recent techniques proposed in this area is provided. To deal with this problem we propose an eye blink monitoring algorithm that uses eye feature points to determine the open or closed state of the eye and activate an alarm if the driver is drowsy. Detailed experimental findings are also presented to highlight the strengths and weaknesses of our technique. An accuracy of 94% has been recorded for the proposed methodology.",
"title": ""
},
{
"docid": "3e4d937d38a61a94bb8647d3f7b02802",
"text": "Most classification algorithms deal with datasets which have a set of input features, the variables to be used as predictors, and only one output class, the variable to be predicted. However, in late years many scenarios in which the classifier has to work with several outputs have come to life. Automatic labeling of text documents, image annotation or protein classification are among them. Multilabel datasets are the product of these new needs, and they have many specific traits. The mldr package allows the user to load datasets of this kind, obtain their characteristics, produce specialized plots, and manipulate them. The goal is to provide the exploratory tools needed to analyze multilabel datasets, as well as the transformation and manipulation functions that will make possible to apply binary and multiclass classification models to this data or the development of new multilabel classifiers. Thanks to its integrated user interface, the exploratory functions will be available even to non-specialized R users.",
"title": ""
},
{
"docid": "83c0b27f08494806481468fa4704d679",
"text": "A RFID system with a chipless RFID tag on a 90-µm thin Taconic TF290 laminate is presented. The chipless tag encodes data into the spectral signature in both magnitude and phase of the spectrum. The design and operation of a prototype RFID reader is also presented. The RFID reader operates between 5 – 10.7 GHz frequency band and successfully detects a chipless tag at 15 cm range. The tag design can be transferred easily to plastic and paper, making it suitable for mass deployment for low cost items and has the potential to replace trillions of barcodes printed each year. The RFID reader is suitable for mounting over conveyor belt systems.",
"title": ""
},
{
"docid": "af2779ab87ff707d51e735977a4fa0e2",
"text": "The increasing availability of large motion databases, in addition to advancements in motion synthesis, has made motion indexing and classification essential for better motion composition. However, in order to achieve good connectivity in motion graphs, it is important to understand human behaviour; human movement though is complex and difficult to completely describe. In this paper, we investigate the similarities between various emotional states with regards to the arousal and valence of the Russell’s circumplex model. We use a variety of features that encode, in addition to the raw geometry, stylistic characteristics of motion based on Laban Movement Analysis (LMA). Motion capture data from acted dance performances were used for training and classification purposes. The experimental results show that the proposed features can partially extract the LMA components, providing a representative space for indexing and classification of dance movements with regards to the emotion. This work contributes to the understanding of human behaviour and actions, providing insights on how people express emotional states using their body, while the proposed features can be used as complement to the standard motion similarity, synthesis and classification methods.",
"title": ""
},
{
"docid": "5cfc4911a59193061ab55c2ce5013272",
"text": "What can you do with a million images? In this paper, we present a new image completion algorithm powered by a huge database of photographs gathered from the Web. The algorithm patches up holes in images by finding similar image regions in the database that are not only seamless, but also semantically valid. Our chief insight is that while the space of images is effectively infinite, the space of semantically differentiable scenes is actually not that large. For many image completion tasks, we are able to find similar scenes which contain image fragments that will convincingly complete the image. Our algorithm is entirely data driven, requiring no annotations or labeling by the user. Unlike existing image completion methods, our algorithm can generate a diverse set of image completions and we allow users to select among them. We demonstrate the superiority of our algorithm over existing image completion approaches.",
"title": ""
},
{
"docid": "255ff39001f9bbcd7b1e6fe96f588371",
"text": "We derive inner and outer bounds on the capacity region for a class of three-user partially connected interference channels. We focus on the impact of topology, interference alignment, and interplay between interference and noise. The representative channels we consider are the ones that have clear interference alignment gain. For these channels, Z-channel type outer bounds are tight to within a constant gap from capacity. We present near-optimal achievable schemes based on rate-splitting, lattice alignment, and successive decoding.",
"title": ""
},
{
"docid": "b992e02ee3366d048bbb4c30a2bf822c",
"text": "Structured graphics models such as Scalable Vector Graphics (SVG) enable designers to create visually rich graphics for user interfaces. Unfortunately current programming tools make it difficult to implement advanced interaction techniques for these interfaces. This paper presents the Hierarchical State Machine Toolkit (HsmTk), a toolkit targeting the development of rich interactions. The key aspect of the toolkit is to consider interactions as first-class objects and to specify them with hierarchical state machines. This approach makes the resulting behaviors self-contained, easy to reuse and easy to modify. Interactions can be attached to graphical elements without knowing their detailed structure, supporting the parallel refinement of the graphics and the interaction.",
"title": ""
},
{
"docid": "a8aa7af1b9416d4bd6df9d4e8bcb8a40",
"text": "User-computer dialogues are typically one-sided, with the bandwidth from computer to user far greater than that from user to computer. The movement of a user’s eyes can provide a convenient, natural, and high-bandwidth source of additional user input, to help redress this imbalance. We therefore investigate the introduction of eye movements as a computer input medium. Our emphasis is on the study of interaction techniques that incorporate eye movements into the user-computer dialogue in a convenient and natural way. This chapter describes research at NRL on developing such interaction techniques and the broader issues raised by non-command-based interaction styles. It discusses some of the human factors and technical considerations that arise in trying to use eye movements as an input medium, describes our approach and the first eye movement-based interaction techniques that we have devised and implemented in our laboratory, reports our experiences and observations on them, and considers eye movement-based interaction as an exemplar of a new, more general class of non-command-based user-computer interaction.",
"title": ""
}
] |
scidocsrr
|
5261b6253e16f674563883dab5a1697f
|
Evaluation of Retrieval Algorithms for Expertise Search
|
[
{
"docid": "82af21c1e687d7303c06cef4b66f1fb4",
"text": "Strategic planning and talent management in large enterprises composed of knowledge workers requires complete, accurate, and up-to-date representation of the expertise of employees in a form that integrates with business processes. Like other similar organizations operating in dynamic environments, the IBM Corporation strives to maintain such current and correct information, specifically assessments of employees against job roles and skill sets from its expertise taxonomy. In this work, we deploy an analytics-driven solution that infers the expertise of employees through the mining of enterprise and social data that is not specifically generated and collected for expertise inference. We consider job role and specialty prediction and pose them as supervised classification problems. We evaluate a large number of feature sets, predictive models and postprocessing algorithms, and choose a combination for deployment. This expertise analytics system has been deployed for key employee population segments, yielding large reductions in manual effort and the ability to continually and consistently serve up-to-date and accurate data for several business functions. This expertise management system is in the process of being deployed throughout the corporation.",
"title": ""
}
] |
[
{
"docid": "b4bc5ccbe0929261856d18272c47a3de",
"text": "ROC analysis is increasingly being recognised as an important tool for evaluation and comparison of classifiers when the operating characteristics (i.e. class distribution and cost parameters) are not known at training time. Usually, each classifier is characterised by its estimated true and false positive rates and is represented by a single point in the ROC diagram. In this paper, we show how a single decision tree can represent a set of classifiers by choosing different labellings of its leaves, or equivalently, an ordering on the leaves. In this setting, rather than estimating the accuracy of a single tree, it makes more sense to use the area under the ROC curve (AUC) as a quality metric. We also propose a novel splitting criterion which chooses the split with the highest local AUC. To the best of our knowledge, this is the first probabilistic splitting criterion that is not based on weighted average impurity. We present experiments suggesting that the AUC splitting criterion leads to trees with equal or better AUC value, without sacrificing accuracy if a single labelling is chosen.",
"title": ""
},
{
"docid": "fa38b2d63562699af5200b5efa476f64",
"text": "Hashtags, originally introduced in Twitter, are now becoming the most used way to tag short messages in social networks since this facilitates subsequent search, classification and clustering over those messages. However, extracting information from hashtags is difficult because their composition is not constrained by any (linguistic) rule and they usually appear in short and poorly written messages which are difficult to analyze with classic IR techniques. In this paper we address two challenging problems regarding the “meaning of hashtags”— namely, hashtag relatedness and hashtag classification — and we provide two main contributions. First we build a novel graph upon hashtags and (Wikipedia) entities drawn from the tweets by means of topic annotators (such as TagME); this graph will allow us to model in an efficacious way not only classic co-occurrences but also semantic relatedness among hashtags and entities, or between entities themselves. Based on this graph, we design algorithms that significantly improve state-of-the-art results upon known publicly available datasets. The second contribution is the construction and the public release to the research community of two new datasets: the former is a new dataset for hashtag relatedness, the latter is a dataset for hashtag classification that is up to two orders of magnitude larger than the existing ones. These datasets will be used to show the robustness and efficacy of our approaches, showing improvements in F1 up to two-digits in percentage (absolute).",
"title": ""
},
{
"docid": "785164fa04344d976c1d8ed148715ec2",
"text": "Integrated Systems Health Management includes as key elements fault detection, fault diagnostics, and failure prognostics. Whereas fault detection and diagnostics have been the subject of considerable emphasis in the Artificial Intelligence (AI) community in the past, prognostics has not enjoyed the same attention. The reason for this lack of attention is in part because prognostics as a discipline has only recently been recognized as a game-changing technology that can push the boundary of systems health management. This paper provides a survey of AI techniques applied to prognostics. The paper is an update to our previously published survey of data-driven prognostics.",
"title": ""
},
{
"docid": "0685c33de763bdedf2a1271198569965",
"text": "The use of virtual-reality technology in the areas of rehabilitation and therapy continues to grow, with encouraging results being reported for applications that address human physical, cognitive, and psychological functioning. This article presents a SWOT (Strengths, Weaknesses, Opportunities, and Threats) analysis for the field of VR rehabilitation and therapy. The SWOT analysis is a commonly employed framework in the business world for analyzing the factors that influence a company's competitive position in the marketplace with an eye to the future. However, the SWOT framework can also be usefully applied outside of the pure business domain. A quick check on the Internet will turn up SWOT analyses for urban-renewal projects, career planning, website design, youth sports programs, and evaluation of academic research centers, and it becomes obvious that it can be usefully applied to assess and guide any organized human endeavor designed to accomplish a mission. It is hoped that this structured examination of the factors relevant to the current and future status of VR rehabilitation will provide a good overview of the key issues and concerns that are relevant for understanding and advancing this vital application area.",
"title": ""
},
{
"docid": "e7946956e8195f9b596d90efe6d6fd09",
"text": "In this paper we present a new biologically inspired approach to the part-of-speech tagging problem, based on particle swarm optimization. As far as we know this is the first attempt of solving this problem using swarm intelligence. We divided the part-of-speech problem into two subproblems. The first concerns the way of automatically extracting disambiguation rules from an annotated corpus. The second is related with how to apply these rules to perform the automatic tagging. We tackled both problems with particle swarm optimization. We tested our approach using two different corpora of English language and also a Portuguese corpus. The accuracy obtained on both languages is comparable to the best results previously published, including other evolutionary approaches.",
"title": ""
},
{
"docid": "09b51a81775f598abed9c401c8f5617d",
"text": "In this work, we propose a novel method to involve full-scale-features into the fully convolutional neural networks (FCNs) for Semantic Segmentation. Current works on FCN has brought great advances in the task of semantic segmentation, but the receptive field, which represents region areas of input volume connected to any output neuron, limits the available information of output neuron’s prediction accuracy. We investigate how to involve the full-scale or full-image features into FCNs to enrich the receptive field. Specially, the fullscale feature network (FFN) extends the full-connected network and makes an end-to-end unified training structure. It has two appealing properties. First, the introduction of full-scale-features is beneficial for prediction. We build a unified extracting network and explore several fusion functions for concatenating features. Amounts of experiments have been carried out to prove that full-scale-features makes fair accuracy raising. Second, FFN is applicable to many variants of FCN which could be regarded as a general strategy to improve the segmentation accuracy. Our proposed method is evaluated on PASCAL VOC 2012, and achieves a state-of-art result.",
"title": ""
},
{
"docid": "ce384939966654196aabbb076326c779",
"text": "We address the problem of detecting duplicate questions in forums, which is an important step towards automating the process of answering new questions. As finding and annotating such potential duplicates manually is very tedious and costly, automatic methods based on machine learning are a viable alternative. However, many forums do not have annotated data, i.e., questions labeled by experts as duplicates, and thus a promising solution is to use domain adaptation from another forum that has such annotations. Here we focus on adversarial domain adaptation, deriving important findings about when it performs well and what properties of the domains are important in this regard. Our experiments with StackExchange data show an average improvement of 5.6% over the best baseline across multiple pairs of domains.",
"title": ""
},
{
"docid": "34149311075a7f564abe632adbbed521",
"text": "This paper presents a high-gain broadband suspended plate antenna for indoor wireless access point applications. This antenna consists of two layers operating at two adjacent bands. The bottom plate is fed by a tapered down strip excited by a probe through an SMA RF connector. The top plate is shorted to a ground plane by a strip electromagnetically coupled with the feed-strip. The design is carried out using a commercial EM software package, and validated experimentally. The measured result shows that the antenna achieves a broad operational bandwidth of 66%, suitable for access points in WiFi (2.4-2.485 GHz) and WiMAX (2.3-2.7 GHz and the 3.4-3.6 GHz) systems (IEEE 802.11b/g and IEEE 802.16-2004/e). The measured antenna gain varies from 7.7-9.5 dBi across the frequency bands of interest. A parametric study of this antenna is also conducted.",
"title": ""
},
{
"docid": "72939e7f99727408acfa3dd3977b38ad",
"text": "We present a method for generating colored 3D shapes from natural language. To this end, we first learn joint embeddings of freeform text descriptions and colored 3D shapes. Our model combines and extends learning by association and metric learning approaches to learn implicit cross-modal connections, and produces a joint representation that captures the many-to-many relations between language and physical properties of 3D shapes such as color and shape. To evaluate our approach, we collect a large dataset of natural language descriptions for physical 3D objects in the ShapeNet dataset. With this learned joint embedding we demonstrate text-to-shape retrieval that outperforms baseline approaches. Using our embeddings with a novel conditional Wasserstein GAN framework, we generate colored 3D shapes from text. Our method is the first to connect natural language text with realistic 3D objects exhibiting rich variations in color, texture, and shape detail.",
"title": ""
},
{
"docid": "e80136bf979ac354436cf92210d07687",
"text": "Lenders use rating and scoring models to rank credit applicants on their expected performance. The models and approaches are numerous. We explore the possibility that estimates generated by models developed with data drawn solely from extended loans are less valuable than they should be because of selectivity bias. We investigate the value of “reject inference” – methods that use a rejected applicant’s characteristics, rather than loan performance data, in scoring model development. In the course of making this investigation, we also discuss the advantages of using parametric as well as nonparametric modeling. These issues are discussed and illustrated in the context of a simple stylized model.",
"title": ""
},
{
"docid": "4381ee2e578a640dda05e609ed7f6d53",
"text": "We introduce neural networks for end-to-end differentiable proving of queries to knowledge bases by operating on dense vector representations of symbols. These neural networks are constructed recursively by taking inspiration from the backward chaining algorithm as used in Prolog. Specifically, we replace symbolic unification with a differentiable computation on vector representations of symbols using a radial basis function kernel, thereby combining symbolic reasoning with learning subsymbolic vector representations. By using gradient descent, the resulting neural network can be trained to infer facts from a given incomplete knowledge base. It learns to (i) place representations of similar symbols in close proximity in a vector space, (ii) make use of such similarities to prove queries, (iii) induce logical rules, and (iv) use provided and induced logical rules for multi-hop reasoning. We demonstrate that this architecture outperforms ComplEx, a state-of-the-art neural link prediction model, on three out of four benchmark knowledge bases while at the same time inducing interpretable function-free first-order logic rules.",
"title": ""
},
{
"docid": "f5a7a4729f9374ee7bee4401475647f9",
"text": "In the last decade, deep learning has contributed to advances in a wide range computer vision tasks including texture analysis. This paper explores a new approach for texture segmentation using deep convolutional neural networks, sharing important ideas with classic filter bank based texture segmentation methods. Several methods are developed to train Fully Convolutional Networks to segment textures in various applications. We show in particular that these networks can learn to recognize and segment a type of texture, e.g. wood and grass from texture recognition datasets (no training segmentation). We demonstrate that Fully Convolutional Networks can learn from repetitive patterns to segment a particular texture from a single image or even a part of an image. We take advantage of these findings to develop a method that is evaluated on a series of supervised and unsupervised experiments and improve the state of the art on the Prague texture segmentation datasets.",
"title": ""
},
{
"docid": "c998c8d5cc17ba668492d813d522a17d",
"text": "This paper presents a 3D face reconstruction method based on multi-view stereo algorithm, the proposed algorithm reconstructs 3D face model from videos captured around static human faces. Image sequence is processed as the input of shape from motion algorithm to estimate camera parameters and camera positions, 3D points with different denseness degree could be acquired by using a method named patch based multi-view stereopsis, finally, the proposed method uses a surface reconstruction algorithm to generate a watertight 3D face model. The proposed approach can automatically detect facial feature points; it does not need any initialization and special equipments; videos can be obtained with commonly used picture pick-up device such as mobile phones. Several groups of experiments have been conducted to validate the availability of the proposed method.",
"title": ""
},
{
"docid": "746bb0b7ed159fcfbe7940a33e6debf1",
"text": "Since their invention, generative adversarial networks (GANs) have become a popular approach for learning to model a distribution of real (unlabeled) data. Convergence problems during training are overcome by Wasserstein GANs which minimize the distance between the model and the empirical distribution in terms of a different metric, but thereby introduce a Lipschitz constraint into the optimization problem. A simple way to enforce the Lipschitz constraint on the class of functions, which can be modeled by the neural network, is weight clipping. Augmenting the loss by a regularization term that penalizes the deviation of the gradient norm of the critic (as a function of the network’s input) from one, was proposed as an alternative that improves training. We present theoretical arguments why using a weaker regularization term enforcing the Lipschitz constraint is preferable. These arguments are supported by experimental results on several data sets.",
"title": ""
},
{
"docid": "eece7dab68d56d3d5f28a72e873a0a72",
"text": "OBJECTIVES\nTo describe the effect of multidisciplinary care on survival in women treated for breast cancer.\n\n\nDESIGN\nRetrospective, comparative, non-randomised, interventional cohort study.\n\n\nSETTING\nNHS hospitals, health boards in the west of Scotland, UK.\n\n\nPARTICIPANTS\n14,358 patients diagnosed with symptomatic invasive breast cancer between 1990 and 2000, residing in health board areas in the west of Scotland. 13,722 (95.6%) patients were eligible (excluding 16 diagnoses of inflammatory cancers and 620 diagnoses of breast cancer at death).\n\n\nINTERVENTION\nIn 1995, multidisciplinary team working was introduced in hospitals throughout one health board area (Greater Glasgow; intervention area), but not in other health board areas in the west of Scotland (non-intervention area).\n\n\nMAIN OUTCOME MEASURES\nBreast cancer specific mortality and all cause mortality.\n\n\nRESULTS\nBefore the introduction of multidisciplinary care (analysed time period January 1990 to September 1995), breast cancer mortality was 11% higher in the intervention area than in the non-intervention area (hazard ratio adjusted for year of incidence, age at diagnosis, and deprivation, 1.11; 95% confidence interval 1.00 to 1.20). After multidisciplinary care was introduced (time period October 1995 to December 2000), breast cancer mortality was 18% lower in the intervention area than in the non-intervention area (0.82, 0.74 to 0.91). All cause mortality did not differ significantly between populations in the earlier period, but was 11% lower in the intervention area than in the non-interventional area in the later period (0.89, 0.82 to 0.97). Interrupted time series analyses showed a significant improvement in breast cancer survival in the intervention area in 1996, compared with the expected survival in the same year had the pre-intervention trend continued (P=0.004). This improvement was maintained after the intervention was introduced.\n\n\nCONCLUSION\nIntroduction of multidisciplinary care was associated with improved survival and reduced variation in survival among hospitals. Further analysis of clinical audit data for multidisciplinary care could identify which aspects of care are most associated with survival benefits.",
"title": ""
},
{
"docid": "9600f488c41b5574766067d32004400e",
"text": "A conversational agent, capable to have a ldquosense of humourrdquo is presented. The agent can both generate humorous sentences and recognize humoristic expressions introduced by the user during the dialogue. Humorist Bot makes use of well founded techniques of computational humor and it has been implemented using the ALICE framework embedded into an Yahoo! Messenger client. It includes also an avatar that changes the face expression according to humoristic content of the dialogue.",
"title": ""
},
{
"docid": "99ff0acb6d1468936ae1620bc26c205f",
"text": "The Cancer Genome Atlas (TCGA) has used the latest sequencing and analysis methods to identify somatic variants across thousands of tumours. Here we present data and analytical results for point mutations and small insertions/deletions from 3,281 tumours across 12 tumour types as part of the TCGA Pan-Cancer effort. We illustrate the distributions of mutation frequencies, types and contexts across tumour types, and establish their links to tissues of origin, environmental/carcinogen influences, and DNA repair defects. Using the integrated data sets, we identified 127 significantly mutated genes from well-known (for example, mitogen-activated protein kinase, phosphatidylinositol-3-OH kinase, Wnt/β-catenin and receptor tyrosine kinase signalling pathways, and cell cycle control) and emerging (for example, histone, histone modification, splicing, metabolism and proteolysis) cellular processes in cancer. The average number of mutations in these significantly mutated genes varies across tumour types; most tumours have two to six, indicating that the number of driver mutations required during oncogenesis is relatively small. Mutations in transcriptional factors/regulators show tissue specificity, whereas histone modifiers are often mutated across several cancer types. Clinical association analysis identifies genes having a significant effect on survival, and investigations of mutations with respect to clonal/subclonal architecture delineate their temporal orders during tumorigenesis. Taken together, these results lay the groundwork for developing new diagnostics and individualizing cancer treatment.",
"title": ""
},
{
"docid": "8839742941d10bd7d6b9b5f2e26794b2",
"text": "Tenzing is a query engine built on top of MapReduce [9] for ad hoc analysis of Google data. Tenzing supports a mostly complete SQL implementation (with several extensions) combined with several key characteristics such as heterogeneity, high performance, scalability, reliability, metadata awareness, low latency, support for columnar storage and structured data, and easy extensibility. Tenzing is currently used internally at Google by 1000+ employees and serves 10000+ queries per day over 1.5 petabytes of compressed data. In this paper, we describe the architecture and implementation of Tenzing, and present benchmarks of typical analytical queries.",
"title": ""
},
{
"docid": "60c1963a6d8f4f84d2bdc09d9f6f8e23",
"text": "This paper studies how to build a decision tree classifier under the following scenario: a database is vertically partitioned into two pieces, with one piece owned by Alice and the other piece owned by Bob. Alice and Bob want to build a decision tree classifier based on such a database, but due to the privacy constraints, neither of them wants to disclose their private pieces to the other party or to any third party. We present a protocol that allows Alice and Bob to conduct such a classifier building without having to compromise their privacy. Our protocol uses an untrusted third-party server, and is built upon a useful building block, the scalar product protocol. Our solution to the scalar product protocol is more efficient than any existing solutions.",
"title": ""
},
{
"docid": "5c788d1b3fc2f063407d5d370e7703bd",
"text": "Dimensional models have been proposed in psychology studies to represent complex human emotional expressions. Activation and valence are two common dimensions in such models. They can be used to describe certain emotions. For example, anger is one type of emotion with a low valence and high activation value; neutral has both a medium level valence and activation value. In this work, we propose to apply multi-task learning to leverage activation and valence information for acoustic emotion recognition based on the deep belief network (DBN) framework. We treat the categorical emotion recognition task as the major task. For the secondary task, we leverage activation and valence labels in two different ways, category level based classification and continuous level based regression. The combination of the loss functions from the major and secondary tasks is used as the objective function in the multi-task learning framework. After iterative optimization, the values from the last hidden layer in the DBN are used as new features and fed into a support vector machine classifier for emotion recognition. Our experimental results on the Interactive Emotional Dyadic Motion Capture and Sustained Emotionally Colored Machine-Human Interaction Using Nonverbal Expression databases show significant improvements on unweighted accuracy, illustrating the benefit of utilizing additional information in a multi-task learning setup for emotion recognition.",
"title": ""
}
] |
scidocsrr
|
b45b8e3f53d2afa87caab18259cc15fc
|
A Delay-Locked Loop Synchronization Scheme for High-Frequency Multiphase Hysteretic DC-DC Converters
|
[
{
"docid": "681d0a6dcad967340cfb3ebe9cf7b779",
"text": "We demonstrate an integrated buck dc-dc converter for multi-V/sub CC/ microprocessors. At nominal conditions, the converter produces a 0.9-V output from a 1.2-V input. The circuit was implemented in a 90-nm CMOS technology. By operating at high switching frequency of 100 to 317 MHz with four-phase topology and fast hysteretic control, we reduced inductor and capacitor sizes by three orders of magnitude compared to previously published dc-dc converters. This eliminated the need for the inductor magnetic core and enabled integration of the output decoupling capacitor on-chip. The converter achieves 80%-87% efficiency and 10% peak-to-peak output noise for a 0.3-A output current and 2.5-nF decoupling capacitance. A forward body bias of 500 mV applied to PMOS transistors in the bridge improves efficiency by 0.5%-1%.",
"title": ""
}
] |
[
{
"docid": "445733bc33df518b87b7eb1ca4d5558f",
"text": "In many environments only a tiny subset of all states yield high reward. In these cases, few of the interactions with the environment provide a relevant learning signal. Hence, we may want to preferentially train on those high-reward states and the probable trajectories leading to them. To this end, we advocate for the use of a backtracking model that predicts the preceding states that terminate at a given high-reward state. We can train a model which, starting from a high value state (or one that is estimated to have high value), predicts and samples which (state, action)-tuples may have led to that high value state. These traces of (state, action) pairs, which we refer to as Recall Traces, sampled from this backtracking model starting from a high value state, are informative as they terminate in good states, and hence we can use these traces to improve a policy. We provide a variational interpretation for this idea and a practical algorithm in which the backtracking model samples from an approximate posterior distribution over trajectories which lead to large rewards. Our method improves the sample efficiency of both onand off-policy RL algorithms across several environments and tasks.",
"title": ""
},
{
"docid": "fd5e6dcb20280daad202f34cd940e7ce",
"text": "Chapters cover topics in areas such as P and NP, space complexity, randomness, computational problems that are (or appear) infeasible to solve, pseudo-random generators, and probabilistic proof systems. The introduction nicely summarizes the material covered in the rest of the book and includes a diagram of dependencies between chapter topics. Initial chapters cover preliminary topics as preparation for the rest of the book. These are more than topical or historical summaries but generally not sufficient to fully prepare the reader for later material. Readers should approach this text already competent at undergraduate-level algorithms in areas such as basic analysis, algorithm strategies, fundamental algorithm techniques, and the basics for determining computability. Elective work in P versus NP or advanced analysis would be valuable but that isn‟t really required.",
"title": ""
},
{
"docid": "5b7b48fae57ca2335a598efc4e7718b3",
"text": "There are many industrial applications of large-scale dc power systems, but only a limited amount of scientific literature addresses the modeling of dc arcs. Since the early dc-arc research focused on the arc as an illuminant, most of the early data was obtained from low-current dc systems. More recent publications provide a better understanding of the high-current dc arc. The dc-arc models reviewed in this paper cover a wide range of arcing situations and test conditions. Even with the test variations, a comparison of dc-arc resistance equations shows a fair degree of consistency in the formulations. A method for estimating incident energy for a dc arcing fault is developed based on a nonlinear arc resistance. Additional dc-arc testing is needed so that more accurate incident-energy models can be developed for dc arcs.",
"title": ""
},
{
"docid": "28531c596a9df30b91d9d1e44d5a7081",
"text": "The academic community has published millions of research papers to date, and the number of new papers has been increasing with time. To discover new research, researchers typically rely on manual methods such as keyword-based search, reading proceedings of conferences, browsing publication lists of known experts, or checking the references of the papers they are interested. Existing tools for the literature search are suitable for a first-level bibliographic search. However, they do not allow complex second-level searches. In this paper, we present a web service called TheAdvisor (http://theadvisor.osu.edu) which helps the users to build a strong bibliography by extending the document set obtained after a first-level search. The service makes use of the citation graph for recommendation. It also features diversification, relevance feedback, graphical visualization, venue and reviewer recommendation. In this work, we explain the design criteria and rationale we employed to make the TheAdvisor a useful and scalable web service along with a thorough experimental evaluation.",
"title": ""
},
{
"docid": "71ec2c62f6371c810b35aeef4172a392",
"text": "This survey, aimed mainly at mathematicians rather than practitioners, covers recent developments in homomorphic encryption (computing on encrypted data) and program obfuscation (generating encrypted but functional programs). Current schemes for encrypted computation all use essentially the same “noisy” approach: they encrypt via a noisy encoding of the message, they decrypt using an “approximate” ring homomorphism, and in between they employ techniques to carefully control the noise as computations are performed. This noisy approach uses a delicate balance between structure and randomness: structure that allows correct computation despite the randomness of the encryption, and randomness that maintains privacy against the adversary despite the structure. While the noisy approach “works”, we need new techniques and insights, both to improve efficiency and to better understand encrypted computation conceptually. Mathematics Subject Classification (2010). Primary 68Qxx; Secondary 68P25.",
"title": ""
},
{
"docid": "5e7297c25f2aafe8dbb733944ddc29e7",
"text": "Interactive digital matting, the process of extracting a foreground object from an image based on limited user input, is an important task in image and video editing. From a computer vision perspective, this task is extremely challenging because it is massively ill-posed - at each pixel we must estimate the foreground and the background colors, as well as the foreground opacity (\"alpha matte\") from a single color measurement. Current approaches either restrict the estimation to a small part of the image, estimating foreground and background colors based on nearby pixels where they are known, or perform iterative nonlinear estimation by alternating foreground and background color estimation with alpha estimation. In this paper, we present a closed-form solution to natural image matting. We derive a cost function from local smoothness assumptions on foreground and background colors and show that in the resulting expression, it is possible to analytically eliminate the foreground and background colors to obtain a quadratic cost function in alpha. This allows us to find the globally optimal alpha matte by solving a sparse linear system of equations. Furthermore, the closed-form formula allows us to predict the properties of the solution by analyzing the eigenvectors of a sparse matrix, closely related to matrices used in spectral image segmentation algorithms. We show that high-quality mattes for natural images may be obtained from a small amount of user input.",
"title": ""
},
{
"docid": "0e53caa9c6464038015a6e83b8953d92",
"text": "Many interactive rendering algorithms require operations on multiple fragments (i.e., ray intersections) at the same pixel location: however, current Graphics Processing Units (GPUs) capture only a single fragment per pixel. Example effects include transparency, translucency, constructive solid geometry, depth-of-field, direct volume rendering, and isosurface visualization. With current GPUs, programmers implement these effects using multiple passes over the scene geometry, often substantially limiting performance. This paper introduces a generalization of the Z-buffer, called the k-buffer, that makes it possible to efficiently implement such algorithms with only a single geometry pass, yet requires only a small, fixed amount of additional memory. The k-buffer uses framebuffer memory as a read-modify-write (RMW) pool of k entries whose use is programmatically defined by a small k-buffer program. We present two proposals for adding k-buffer support to future GPUs and demonstrate numerous multiple-fragment, single-pass graphics algorithms running on both a software-simulated k-buffer and a k-buffer implemented with current GPUs. The goal of this work is to demonstrate the large number of graphics algorithms that the k-buffer enables and that the efficiency is superior to current multipass approaches.",
"title": ""
},
{
"docid": "1196ab65ddfcedb8775835f2e176576f",
"text": "Faster R-CNN achieves state-of-the-art performance on generic object detection. However, a simple application of this method to a large vehicle dataset performs unimpressively. In this paper, we take a closer look at this approach as it applies to vehicle detection. We conduct a wide range of experiments and provide a comprehensive analysis of the underlying structure of this model. We show that through suitable parameter tuning and algorithmic modification, we can significantly improve the performance of Faster R-CNN on vehicle detection and achieve competitive results on the KITTI vehicle dataset. We believe our studies are instructive for other researchers investigating the application of Faster R-CNN to their problems and datasets.",
"title": ""
},
{
"docid": "26b1c00522009440c0481453e0f6331c",
"text": "Software organizations that develop their software products using the agile software processes such as Extreme Programming (XP) face a number of challenges in their effort to demonstrate that their process activities conform to ISO 9001 requirements, a major one being product traceability: software organizations must provide evidence of ISO 9001 conformity, and they need to develop their own procedures, tools, and methodologies to do so. This paper proposes an auditing model for ISO 9001 traceability requirements that is applicable in agile (XP) environments. The design of our model is based on evaluation theory, and includes the use of several auditing “yardsticks” derived from the principles of engineering design, the SWEBOK Guide, and the CMMI-DEV guidelines for requirement management and traceability for each yardstick. Finally, five approaches for agile-XP traceability approaches are audited based on the proposed audit model.",
"title": ""
},
{
"docid": "aed7133c143edbe0e1c6f6dfcddee9ec",
"text": "This paper describes a version of the auditory image model (AIM) [1] implemented in MATLAB. It is referred to as “aim-mat” and it includes the basic modules that enable AIM to simulate the spectral analysis, neural encoding and temporal integration performed by the auditory system. The dynamic representations produced by non-static sounds can be viewed on a frame-by-frame basis or in movies with synchronized sound. The software has a sophisticated graphical user interface designed to facilitate the auditory modelling. It is also possible to add MATLAB code and complete modules to aim-mat. The software can be downloaded from http://www.mrccbu.cam.ac.uk/cnbh/aimmanual",
"title": ""
},
{
"docid": "a3cb91fb614f3f772a277b3d125c4088",
"text": "Exploring the inherent technical challenges in realizing the potential of Big Data.",
"title": ""
},
{
"docid": "82e823324c1717996d09b11bdfdc4a62",
"text": "Deep neural networks (DNNs) have demonstrated dominating performance in many fields; since AlexNet, the neural networks used in practice are going wider and deeper. On the theoretical side, a long line of works have been focusing on why we can train neural networks when there is only one hidden layer. The theory of multi-layer networks remains somewhat unsettled. In this work, we prove why simple algorithms such as stochastic gradient descent (SGD) can find global minima on the training objective of DNNs in polynomial time. We only make two assumptions: the inputs do not degenerate and the network is over-parameterized. The latter means the number of hidden neurons is sufficiently large: polynomial in L, the number of DNN layers and in n, the number of training samples. As concrete examples, on the training set and starting from randomly initialized weights, we show that SGD attains 100% accuracy in classification tasks, or minimizes regression loss in linear convergence speed ε ∝ e−Ω(T , with a number of iterations that only scales polynomial in n and L. Our theory applies to the widely-used but non-smooth ReLU activation, and to any smooth and possibly non-convex loss functions. In terms of network architectures, our theory at least applies to fully-connected neural networks, convolutional neural networks (CNN), and residual neural networks (ResNet). ∗V1 appears on arXiv on this date and no new result is added since then. V2 adds citations and V3/V4 polish writing. This work was done when Yuanzhi Li and Zhao Song were 2018 summer interns at Microsoft Research Redmond. When this work was performed, Yuanzhi Li was also affiliated with Princeton, and Zhao Song was also affiliated with UW and Harvard. We would like to specially thank Greg Yang for many enlightening discussions, thank Ofer Dekel, Sebastien Bubeck, and Harry Shum for very helpful conversations, and thank Jincheng Mei for carefully checking the proofs of this paper. ar X iv :1 81 1. 03 96 2v 4 [ cs .L G ] 4 F eb 2 01 9",
"title": ""
},
{
"docid": "4936a07e1b6a42fde7a8fdf1b420776c",
"text": "One of many advantages of the cloud is the elasticity, the ability to dynamically acquire or release computing resources in response to demand. However, this elasticity is only meaningful to the cloud users when the acquired Virtual Machines (VMs) can be provisioned in time and be ready to use within the user expectation. The long unexpected VM startup time could result in resource under-provisioning, which will inevitably hurt the application performance. A better understanding of the VM startup time is therefore needed to help cloud users to plan ahead and make in-time resource provisioning decisions. In this paper, we study the startup time of cloud VMs across three real-world cloud providers -- Amazon EC2, Windows Azure and Rackspace. We analyze the relationship between the VM startup time and different factors, such as time of the day, OS image size, instance type, data center location and the number of instances acquired at the same time. We also study the VM startup time of spot instances in EC2, which show a longer waiting time and greater variance compared to on-demand instances.",
"title": ""
},
{
"docid": "2fe45390c2e54c72f6575e291fd2db94",
"text": "Green start-ups contribute towards a transition to a more sustainable economy by developing sustainable and environmentally friendly innovation and bringing it to the market. Due to specific product/service characteristics, entrepreneurial motivation and company strategies that might differ from that of other start-ups, these companies might struggle even more than usual with access to finance in the early stages. This conceptual paper seeks to explain these challenges through the theoretical lenses of entrepreneurial finance and behavioural finance. While entrepreneurial finance theory contributes to a partial understanding of green start-up finance, behavioural finance is able to solve a remaining explanatory deficit produced by entrepreneurial finance theory. Although some behavioural finance theorists are suggesting that the current understanding of economic rationality underlying behavioural finance research is inadequate, most scholars have not yet challenged these assumptions, which constrict a comprehensive and realistic description of the reality of entrepreneurial finance in green start-ups. The aim of the paper is thus, first, to explore the specifics of entrepreneurial finance in green start-ups and, second, to demonstrate the need for a more up-to-date conception of rationality in behavioural finance theory in order to enable realistic empirical research in this field.",
"title": ""
},
{
"docid": "ad4547c0a82353f122f536352684384f",
"text": "Reported complication rates are low for lateral epicondylitis management, but the anatomic complexity of the elbow allows for possible catastrophic complication. This review documents complications associated with lateral epicondylar release: 67 studies reporting outcomes of lateral epicondylar release with open, percutaneous, or arthroscopic methods were reviewed and 6 case reports on specific complications associated with the procedure are included. Overall complication rate was 3.3%. For open procedures it was 4.3%, percutaneous procedures 1.9%, and arthroscopic procedures 1.1%. In higher-level studies directly comparing modalities, the complication rates were 1.3%, 0%, and 1.2%, respectively.",
"title": ""
},
{
"docid": "70fafdedd05a40db5af1eabdf07d431c",
"text": "Segmentation of the left ventricle (LV) from cardiac magnetic resonance imaging (MRI) datasets is an essential step for calculation of clinical indices such as ventricular volume and ejection fraction. In this work, we employ deep learning algorithms combined with deformable models to develop and evaluate a fully automatic LV segmentation tool from short-axis cardiac MRI datasets. The method employs deep learning algorithms to learn the segmentation task from the ground true data. Convolutional networks are employed to automatically detect the LV chamber in MRI dataset. Stacked autoencoders are used to infer the LV shape. The inferred shape is incorporated into deformable models to improve the accuracy and robustness of the segmentation. We validated our method using 45 cardiac MR datasets from the MICCAI 2009 LV segmentation challenge and showed that it outperforms the state-of-the art methods. Excellent agreement with the ground truth was achieved. Validation metrics, percentage of good contours, Dice metric, average perpendicular distance and conformity, were computed as 96.69%, 0.94, 1.81 mm and 0.86, versus those of 79.2-95.62%, 0.87-0.9, 1.76-2.97 mm and 0.67-0.78, obtained by other methods, respectively.",
"title": ""
},
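The Dice metric quoted in the segmentation abstract above measures the overlap between a predicted left-ventricle mask and the ground-truth mask. A minimal NumPy sketch of that metric (not the authors' code; the toy masks are illustrative):

    import numpy as np

    def dice_coefficient(pred, gt):
        # pred, gt: boolean masks of the LV cavity on one short-axis slice
        intersection = np.logical_and(pred, gt).sum()
        return 2.0 * intersection / (pred.sum() + gt.sum())

    # A perfect match gives 1.0; this toy example gives 2*1 / (2+1) = 0.667
    print(dice_coefficient(np.array([[True, True], [False, False]]),
                           np.array([[True, False], [False, False]])))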
{
"docid": "1eb2aaf3e7b2f98e84105405b123fa7e",
"text": "Prognostics technique aims to accurately estimate the Remaining Useful Life (RUL) of a subsystem or a component using sensor data, which has many real world applications. However, many of the existing algorithms are based on linear models, which cannot capture the complex relationship between the sensor data and RUL. Although Multilayer Perceptron (MLP) has been applied to predict RUL, it cannot learn salient features automatically, because of its network structure. A novel deep Convolutional Neural Network (CNN) based regression approach for estimating the RUL is proposed in this paper. Although CNN has been applied on tasks such as computer vision, natural language processing, speech recognition etc., this is the first attempt to adopt CNN for RUL estimation in prognostics. Different from the existing CNN structure for computer vision, the convolution and pooling filters in our approach are applied along the temporal dimension over the multi-channel sensor data to incorporate automated feature learning from raw sensor signals in a systematic way. Through the deep architecture, the learned features are the higher-level abstract representation of low-level raw sensor signals. Furthermore, feature learning and RUL estimation are mutually enhanced by the supervised feedback. We compared with several state-of-the-art algorithms on two publicly available data sets to evaluate the effectiveness of this proposed approach. The encouraging results demonstrate that our proposed deep convolutional neural network based regression approach for RUL estimation is not only more efficient but also more accurate.",
"title": ""
},
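As a rough illustration of the temporal-convolution idea in the RUL abstract above, the sketch below applies 1D convolutions along the time axis of multi-channel sensor windows and regresses a scalar remaining useful life. It is a PyTorch sketch under assumed layer sizes, not the paper's architecture; channel counts, kernel sizes and the window length are placeholders.

    import torch
    import torch.nn as nn

    class RULRegressor(nn.Module):
        # Convolutions and pooling run along the time axis of multi-channel
        # sensor windows; a linear head regresses the remaining useful life.
        def __init__(self, n_channels):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(n_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
                nn.MaxPool1d(2),
                nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),
            )
            self.head = nn.Linear(64, 1)      # scalar RUL estimate

        def forward(self, x):                 # x: (batch, n_channels, window)
            return self.head(self.features(x).squeeze(-1))

    model = RULRegressor(n_channels=14)
    rul = model(torch.randn(8, 14, 30))       # -> shape (8, 1)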
{
"docid": "a76d5685b383e45778417d5eccdd8b6c",
"text": "The advent of both Cloud computing and Internet of Things (IoT) is changing the way of conceiving information and communication systems. Generally, we talk about IoT Cloud to indicate a new type of distributed system consisting of a set of smart devices interconnected with a remote Cloud infrastructure, platform, or software through the Internet and able to provide IoT as a Service (IoTaaS). In this paper, we discuss the near future evolution of IoT Clouds towards federated ecosystems, where IoT providers cooperate to offer more flexible services. Moreover, we present a general three-layer IoT Cloud Federation architecture, highlighting new business opportunities and challenges.",
"title": ""
},
{
"docid": "68a537c35da5546f60672c38f968b97a",
"text": "Visual analysis of human motion is currently one of the most active research topics in computer vision. This strong interest is driven by a wide spectrum of promising applications in many areas such as virtual reality, smart surveillance, perceptual interface, etc. Human motion analysis concerns the detection, tracking and recognition of people, and more generally, the understanding of human behaviors, from image sequences involving humans. This paper provides a comprehensive survey of research on computer-vision-based human motion analysis. The emphasis is on three major issues involved in a general human motion analysis system, namely human detection, tracking and activity understanding. Various methods for each issue are discussed in order to examine the state of the art. Finally, some research challenges and future directions are discussed. ? 2002 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "1262b017b1d7f4ce516e247662d8ad27",
"text": "Detecting deception is an ancient problem that continues to find relevance today in many areas, such as border control, financial auditing, testimony assessment, and Internet fraud. Over the last century, research in this area has focused mainly on discovering physiological signals and psychological behaviors that are associated with lying. Using verbal cues (i.e., words and language structure) is not entirely new. But in recent years, data-driven and machine learning frameworks, which are now ubiquitous in the natural language processing (NLP) community, has brought new light to this old field. This highly accessible book puts last decade’s research in verbal deception in the context of traditional methods. It is a valuable resource to anyone with a basic understanding of machine learning looking to make inroads or break new ground in the subspecialty of detecting verbal deception. The book consists of five chapters organized into three parts—background on nonverbal cues, statistical NLP approaches, and future directions. The introductory chapter concisely defines the problem and relates verbal cues to the behavioral ones. It also provides an intuition of why patterns in language would be effective and provides an estimated state-of-the-art performance. Chapter 2 provides background on a behavioral approach to identifying deception. The first section gets readers acquainted with terms used by nonverbal cues to deception. These include physiological signals (which are the basis of well-known lie detection methods such as polygraphy and thermography), vocal cues (such as speech disfluencies), and body and facial expressions (e.g., pupil dilation). Although seemingly detached from the focus of the book, this preliminary material is an interesting introduction that also serves as a terminology reference later. The remaining chapter covers two topics: psychology of deception and applied criminal justice. The part on psychology presents a literature review of two definitive meta-analysis of the literature in the twentieth century. It first gives a theoretical account of deceptive behavior, such as motivation to lie and emotional states of liars. Next, it reports the experimental effectiveness of measurable cues, whether objectively or subjectively, such as complexity and amount of information. The second meta-analysis examines conditions that tend to make lying behavior more obvious, for example, interrogation. Although seemingly unrelated to NLP, I expect these reviews to be a source of inspiration for novel feature and model engineering. Because the material is very comprehensive and possibly foreign to the NLP community, I would like to see this part organized by the type of behavior cues (in the same vein as the preliminary material on physiological",
"title": ""
}
] |
scidocsrr
|
e7be5b1afbdd1dc1508cb26ec85748f9
|
Compatible and Diverse Fashion Image Inpainting
|
[
{
"docid": "1e67d66e3b2a02c8acb8c4734dd7104b",
"text": "We introduce a new dataset of 293,008 high definition (1360 x 1360 pixels) fashion images paired with item descriptions provided by professional stylists. Each item is photographed from a variety of angles. We provide baseline results on 1) high-resolution image generation, and 2) image generation conditioned on the given text descriptions. We invite the community to improve upon these baselines. In this paper we also outline the details of a challenge that we are launching based upon this dataset.",
"title": ""
},
{
"docid": "9f57198ed66a66cfba5b6551bd384e5f",
"text": "Instance-level human parsing towards real-world human analysis scenarios is still under-explored due to the absence of sufficient data resources and technical difficulty in parsing multiple instances in a single pass. Several related works all follow the “parsing-by-detection” pipeline that heavily relies on separately trained detection models to localize instances and then performs human parsing for each instance sequentially. Nonetheless, two discrepant optimization targets of detection and parsing lead to suboptimal representation learning and error accumulation for final results. In this work, we make the first attempt to explore a detection-free Part Grouping Network (PGN) for efficiently parsing multiple people in an image in a single pass. Our PGN reformulates instance-level human parsing as two twinned sub-tasks that can be jointly learned and mutually refined via a unified network: 1) semantic part segmentation for assigning each pixel as a human part (e.g ., face, arms); 2) instance-aware edge detection to group semantic parts into distinct person instances. Thus the shared intermediate representation would be endowed with capabilities in both characterizing fine-grained parts and inferring instance belongings of each part. Finally, a simple instance partition process is employed to get final results during inference. We conducted experiments on PASCAL-Person-Part dataset and our PGN outperforms all state-of-the-art methods. Furthermore, we show its superiority on a newly collected multi-person parsing dataset (CIHP) including 38,280 diverse images, which is the largest dataset so far and can facilitate more advanced human analysis. The CIHP benchmark and our source code are available at http://sysu-hcp.net/lip/.",
"title": ""
},
{
"docid": "2da7166b9ec1ca7da168ac4fc5f056e6",
"text": "Can an algorithm create original and compelling fashion designs to serve as an inspirational assistant? To help answer this question, we design and investigate different image generation models associated with different loss functions to boost creativity in fashion generation. The dimensions of our explorations include: (i) different Generative Adversarial Networks architectures that start from noise vectors to generate fashion items, (ii) novel loss functions that encourage creativity, inspired from Sharma-Mittal divergence, a generalized mutual information measure for the widely used relative entropies such as Kullback-Leibler, and (iii) a generation process following the key elements of fashion design (disentangling shape and texture components). A key challenge of this study is the evaluation of generated designs and the retrieval of best ones, hence we put together an evaluation protocol associating automatic metrics and human experimental studies that we hope will help ease future research. We show that our proposed creativity losses yield better overall appreciation than the one employed in Creative Adversarial Networks. In the end, about 61% of our images are thought to be created by human designers rather than by a computer while also being considered original per our human subject experiments, and our proposed loss scores the highest compared to existing losses in both novelty and likability. Figure 1: Training generative adversarial models with appropriate losses leads to realistic and creative 512× 512 fashion images.",
"title": ""
},
{
"docid": "6ad90319d07abce021eda6f3a1d3886e",
"text": "Despite recent progress in generative image modeling, successfully generating high-resolution, diverse samples from complex datasets such as ImageNet remains an elusive goal. To this end, we train Generative Adversarial Networks at the largest scale yet attempted, and study the instabilities specific to such scale. We find that applying orthogonal regularization to the generator renders it amenable to a simple “truncation trick,” allowing fine control over the trade-off between sample fidelity and variety by truncating the latent space. Our modifications lead to models which set the new state of the art in class-conditional image synthesis. When trained on ImageNet at 128×128 resolution, our models (BigGANs) achieve an Inception Score (IS) of 166.3 and Fréchet Inception Distance (FID) of 9.6, improving over the previous best IS of 52.52 and FID of 18.65.",
"title": ""
}
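The "truncation trick" mentioned in the BigGAN abstract above amounts to sampling the latent vector from a truncated normal: components falling outside a threshold are resampled, trading variety for fidelity. A NumPy sketch of just that sampling step (the threshold value and latent dimension are illustrative):

    import numpy as np

    def truncated_z(batch, dim, threshold=0.5, seed=0):
        # Resample every latent component whose magnitude exceeds the threshold;
        # smaller thresholds give higher fidelity but less variety.
        rng = np.random.default_rng(seed)
        z = rng.standard_normal((batch, dim))
        outside = np.abs(z) > threshold
        while outside.any():
            z[outside] = rng.standard_normal(outside.sum())
            outside = np.abs(z) > threshold
        return z

    z = truncated_z(16, 128)   # would be fed to a trained class-conditional generator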
] |
[
{
"docid": "376943ca96470be14dd8ee821a59e0ee",
"text": "Interoperability in the Internet of Things is critical for emerging services and applications. In this paper we advocate the use of IoT `hubs' to aggregate things using web protocols, and suggest a staged approach to interoperability. In the context of a UK government funded project involving 8 IoT sub-projects to address cross-domain IoT interoperability, we introduce the HyperCat IoT catalogue specification. We then describe the tools and techniques we developed to adapt an existing data portal and IoT platform to this specification, and provide an IoT hub focused on the highways industry called `Smart Streets'. Based on our experience developing this large scale IoT hub, we outline lessons learned which we hope will contribute to ongoing efforts to create an interoperable global IoT ecosystem.",
"title": ""
},
{
"docid": "9955e99d9eba166458f5551551ab05e3",
"text": "Every day, millions of tons of temperature sensitive goods are produced, transported, stored or distributed worldwide. For all these products the control of temperature is essential. The term “cold chain” describes the series of interdependent equipment and processes employed to ensure the temperature preservation of perishables and other temperaturecontrolled products from the production to the consumption end in a safe, wholesome, and good quality state (Zhang, 2007). In other words, it is a supply chain of temperature sensitive products. So temperature-control is the key point in cold chain operation and the most important factor when prolonging the practical shelf life of produce. Thus, the major challenge is to ensure a continuous ‘cold chain’ from producer to consumer in order to guaranty prime condition of goods (Ruiz-Garcia et al., 2007).These products can be perishable items like fruit, vegetables, flowers, fish, meat and dairy products or medical products like drugs, blood, vaccines, organs, plasma and tissues. All of them can have their properties affected by temperature changes. Also some chemicals and electronic components like microchips are temperature sensitive.",
"title": ""
},
{
"docid": "4b6ed0a45faa2760ba1dd3e0494cb0f7",
"text": "This monograph discusses research, theory, and practice relevant to how children learn to read English. After an initial overview of writing systems, the discussion summarizes research from developmental psychology on children's language competency when they enter school and on the nature of early reading development. Subsequent sections review theories of learning to read, the characteristics of children who do not learn to read (i.e., who have developmental dyslexia), research from cognitive psychology and cognitive neuroscience on skilled reading, and connectionist models of learning to read. The implications of the research findings for learning to read and teaching reading are discussed. Next, the primary methods used to teach reading (phonics and whole language) are summarized. The final section reviews laboratory and classroom studies on teaching reading. From these different sources of evidence, two inescapable conclusions emerge: (a) Mastering the alphabetic principle (that written symbols are associated with phonemes) is essential to becoming proficient in the skill of reading, and (b) methods that teach this principle directly are more effective than those that do not (especially for children who are at risk in some way for having difficulty learning to read). Using whole-language activities to supplement phonics instruction does help make reading fun and meaningful for children, but ultimately, phonics instruction is critically important because it helps beginning readers understand the alphabetic principle and learn new words. Thus, elementary-school teachers who make the alphabetic principle explicit are most effective in helping their students become skilled, independent readers.",
"title": ""
},
{
"docid": "c3b07d5c9a88c1f9430615d5e78675b6",
"text": "Two new algorithms and associated neuron-like network architectures are proposed for solving the eigenvalue problem in real-time. The first approach is based on the solution of a set of nonlinear algebraic equations by employing optimization techniques. The second approach employs a multilayer neural network with linear artificial neurons and it exploits the continuous-time error back-propagation learning algorithm. The second approach enables us to find all the eigenvalues and the associated eigenvectors simultaneously by training the network to match some desired patterns, while the first approach is suitable to find during one run only one particular eigenvalue (e.g. an extreme eigenvalue) and the corresponding eigenvector in realtime. In order to find all eigenpairs the optimization process must be repeated in this case many times for different initial conditions. The performance and convergence behaviour of the proposed neural network architectures are investigated by extensive computer simulations.",
"title": ""
},
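The first approach in the eigenvalue abstract above casts an extreme eigenpair as the solution of an optimization problem. A minimal NumPy sketch of that idea, ascending the Rayleigh quotient of a symmetric matrix; the step size, iteration count and example matrix are arbitrary, and this is not the authors' network formulation:

    import numpy as np

    def dominant_eigenpair(A, lr=0.05, steps=2000, seed=0):
        # Gradient ascent on the Rayleigh quotient r(w) = w.T A w / w.T w of a
        # symmetric matrix A converges to its largest eigenvalue/eigenvector.
        rng = np.random.default_rng(seed)
        w = rng.standard_normal(A.shape[0])
        w /= np.linalg.norm(w)
        for _ in range(steps):
            r = w @ A @ w                       # Rayleigh quotient (w has unit norm)
            w = w + lr * 2.0 * (A @ w - r * w)  # ascent step
            w /= np.linalg.norm(w)
        return w @ A @ w, w

    lam, vec = dominant_eigenpair(np.array([[2.0, 1.0], [1.0, 3.0]]))  # lam ~ 3.618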
{
"docid": "73e8436e0b55227794d166d1fc355881",
"text": "Recent advances in signal analysis have engendered EEG with the status of a true brain mapping and brain imaging method capable of providing spatio-temporal information regarding brain (dys)function. Because of the increasing interest in the temporal dynamics of brain networks, and because of the straightforward compatibility of the EEG with other brain imaging techniques, EEG is increasingly used in the neuroimaging community. However, the full capability of EEG is highly underestimated. Many combined EEG-fMRI studies use the EEG only as a spike-counter or an oscilloscope. Many cognitive and clinical EEG studies use the EEG still in its traditional way and analyze grapho-elements at certain electrodes and latencies. We here show that this way of using the EEG is not only dangerous because it leads to misinterpretations, but it is also largely ignoring the spatial aspects of the signals. In fact, EEG primarily measures the electric potential field at the scalp surface in the same way as MEG measures the magnetic field. By properly sampling and correctly analyzing this electric field, EEG can provide reliable information about the neuronal activity in the brain and the temporal dynamics of this activity in the millisecond range. This review explains some of these analysis methods and illustrates their potential in clinical and experimental applications.",
"title": ""
},
{
"docid": "9e11005f60aa3f53481ac3543a18f32f",
"text": "Deep residual networks (ResNets) have significantly pushed forward the state-ofthe-art on image classification, increasing in performance as networks grow both deeper and wider. However, memory consumption becomes a bottleneck, as one needs to store the activations in order to calculate gradients using backpropagation. We present the Reversible Residual Network (RevNet), a variant of ResNets where each layer’s activations can be reconstructed exactly from the next layer’s. Therefore, the activations for most layers need not be stored in memory during backpropagation. We demonstrate the effectiveness of RevNets on CIFAR-10, CIFAR-100, and ImageNet, establishing nearly identical classification accuracy to equally-sized ResNets, even though the activation storage requirements are independent of depth.",
"title": ""
},
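The reversibility property in the RevNet abstract above comes from splitting the activations into two halves and coupling them additively, so a layer's inputs can be recomputed from its outputs instead of being stored. A small NumPy sketch with stand-in residual functions F and G:

    import numpy as np

    def rev_forward(x1, x2, F, G):
        # Additive coupling: y1 = x1 + F(x2), y2 = x2 + G(y1)
        y1 = x1 + F(x2)
        y2 = x2 + G(y1)
        return y1, y2

    def rev_inverse(y1, y2, F, G):
        # Exact inversion, so activations need not be kept for backpropagation
        x2 = y2 - G(y1)
        x1 = y1 - F(x2)
        return x1, x2

    F = lambda v: np.tanh(v)        # stand-ins for the two residual sub-networks
    G = lambda v: 0.5 * v
    x1, x2 = np.ones(4), np.zeros(4)
    y1, y2 = rev_forward(x1, x2, F, G)
    assert np.allclose(rev_inverse(y1, y2, F, G), (x1, x2))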
{
"docid": "06998586aa57d1f9b11f7ff37cae0afb",
"text": "Solar cell designs with complex metallization geometries such as metal wrap through (MWT), interdigitated-back-contact (IBC) cells, and metal grids with non-ideal features like finger breaks, finger striations and non-uniform contact resistance, are not amenable to simple series resistance (Rs) analysis based on small unit cells. In order to accurately simulate these cells, we developed a program that captures the cell metallization geometry from rastered images/CAD files, and efficiently meshes the cell plane for finite element analysis, yielding standard data such as the I-V curve, voltage and Rs distribution. The program also features a powerful post processor that predicts the rate of change in efficiency with respect to incremental changes in the metallization pattern, opening up the possibility of intelligent computer aided design procedures.",
"title": ""
},
{
"docid": "6efdf43a454ce7da51927c07f1449695",
"text": "We investigate efficient representations of functions that can be written as outputs of so-called sum-product networks, that alternate layers of product and sum operations (see Fig 1 for a simple sum-product network). We find that there exist families of such functions that can be represented much more efficiently by deep sum-product networks (i.e. allowing multiple hidden layers), compared to shallow sum-product networks (constrained to using a single hidden layer). For instance, there is a family of functions fn where n is the number of input variables, such that fn can be computed with a deep sum-product network of log 2 n layers and n−1 units, while a shallow sum-product network (two layers) requires 2 √ n−1 units. These mathematical results are in the same spirit as those by H̊astad and Goldmann (1991) on the limitations of small depth computational circuits. They motivate using deep networks to be able to model complex functions more efficiently than with shallow networks. Exponential gains in terms of the number of parameters are quite significant in the context of statistical machine learning. Indeed, the number of training samples required to optimize a model’s parameters without suffering from overfitting typically increases with the number of parameters. Deep networks thus offer a promising way to learn complex functions from limited data, even though parameter optimization may still be challenging.",
"title": ""
},
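Taking the deep-vs-shallow counts quoted above at face value (n−1 units over log_2(n) layers versus 2^(√n−1) units for the shallow network; the reconstruction of the flattened exponent is an assumption), a quick calculation shows how fast the gap opens up. The particular values of n are only illustrative:

    import math

    for n in (256, 1024, 4096):                   # perfect squares that are powers of two
        layers = int(math.log2(n))
        deep_units = n - 1
        shallow_units = 2 ** (math.isqrt(n) - 1)  # bound as quoted in the abstract
        print(n, layers, deep_units, shallow_units)
    # n=256:  8 layers, 255 units vs 32768; n=1024: vs ~2.1e9; n=4096: vs ~9.2e18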
{
"docid": "bc225b3c33c397d823fef39e44f58f8d",
"text": "A development of a clinical group decision support system (CGDSS) has been carried out for diagnosing both neurosis and personality disorders. The knowledge, stored in the knowledge base, were generated from the aggregated preferences given by decision makers. Two types of preferences used here, i.e. the preferences of a mental evidence by a mental condition; and the preferences of a mental disorder by mental condition. Ordered weighted averaging operator was adopted to aggregate those preferences. This aggregation process was carried out after transforming the selected subset to fuzzy preference relation format. Then the Bayesian theorem was adopted to compute the probability of evidence given a particular disorder. After developing the knowledge base, the next step is to develop an inference engine. The method used for developing an inference engine is multiattribute decision making concept, this is because of the system was directed to choose the best disorder when a particular condition was given. Many methods have been developed to solve MADM problem, however only the SAW, WP, and TOPSIS were appropriate to solve problem here. In this knowledge base, the relation between each disorder and evidence were represented X matrix (m x n) that consist of probability value. Where the Xij was probability of jth mental evidence given ith mental disorder; i=1,2,...,m; and j=1,2,...,n. Sensitivity analysis process was to compute the sensitivity degree of each attribute to the ranking outcome in each method. The sensitivity analysis was aimed to determine the degree of sensitivity of each attribute to the ranking outcome of each method. This degree implies that there were a relevant between an attribute and a ranking outcome. This relevant attribute can be emitted by influence degree of attribute Cj to ranking outcome fj. Then, relation between sensitivity degree and influence degree for each attribute, can be found by computing the Pearsonpsilas correlation coefficient. The biggest correlation coefficient shows as the best result. This research shows that TOPSIS method always has the highest correlation coefficient, and it is getting higher if the change of the ranking is increased. The experimental results shows that that TOPSIS is the appropriate method for the clinical group decision support system for the above purposes.",
"title": ""
},
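TOPSIS, one of the three MADM methods compared in the abstract above, ranks alternatives by their closeness to an ideal solution. A generic NumPy sketch with made-up numbers, not the authors' implementation:

    import numpy as np

    def topsis(X, weights, benefit):
        # X: alternatives x criteria; benefit[j] is True when larger values are better
        V = (X / np.linalg.norm(X, axis=0)) * weights      # normalise, then weight
        ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
        anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
        d_pos = np.linalg.norm(V - ideal, axis=1)
        d_neg = np.linalg.norm(V - anti, axis=1)
        return d_neg / (d_pos + d_neg)                     # closeness: higher is better

    scores = topsis(np.array([[0.7, 0.2], [0.4, 0.6]]),
                    weights=np.array([0.5, 0.5]),
                    benefit=np.array([True, True]))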
{
"docid": "bfeb6e598c727c91e45fa92ad52cc46c",
"text": "Object detection is an important step in any video analysis. Difficulties of the object detection are finding hidden objects and finding unrecognized objects. Although many algorithms have been developed to avoid them as outliers, occlusion boundaries could potentially provide useful information about the scene’s structure and composition. A novel framework for blob based occluded object detection is proposed. A technique that can be used to detect occlusion is presented. It detects and tracks the occluded objects in video sequences captured by a fixed camera in crowded environment with occlusions. Initially the background subtraction is modeled by a Mixture of Gaussians technique (MOG). Pedestrians are detected using the pedestrian detector by computing the Histogram of Oriented Gradients descriptors (HOG), using a linear Support Vector Machine (SVM) as the classifier. In this work, a recognition and tracking system is built to detect the abandoned objects in the public transportation area such as train stations, airports etc. Several experiments were conducted to demonstrate the effectiveness of the proposed approach. The results show the robustness and effectiveness of the proposed method. Keyword: Occlusion, Histograms of Oriented Gradients descriptors, Support Vector Machine, mixture of Gaussians techniques, Blob, abandoned object.",
"title": ""
},
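The detection pipeline described above (Mixture-of-Gaussians background subtraction plus a HOG pedestrian detector with a linear SVM) maps onto standard OpenCV components. A rough sketch, with the video filename and detector parameters as placeholders and the paper's occlusion reasoning omitted:

    import cv2

    cap = cv2.VideoCapture("scene.mp4")                  # hypothetical input video
    mog = cv2.createBackgroundSubtractorMOG2()           # Mixture-of-Gaussians model
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        fg_mask = mog.apply(frame)                       # foreground blobs
        boxes, _ = hog.detectMultiScale(frame, winStride=(8, 8))
        # blob/box association and the occlusion reasoning of the paper go here
    cap.release()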
{
"docid": "e8b5f7f67b5095873419df4984c19333",
"text": "A series of fluorescent pH probes based on the spiro-cyclic rhodamine core, aminomethylrhodamines (AMR), was synthesized and the effect of cycloalkane ring size on the acid/base properties of the AMR system was explored. The study involved a series of rhodamine 6G (cAMR6G) and rhodamine B (cAMR) pH probes with cycloalkane ring sizes from C-3 to C-6 on the spiro-cyclic amino group. It is known that the pKa value of cycloalkylamines can be tuned by different ring sizes in accordance with the Baeyer ring strain theory. Smaller ring amines have lower pKa value, i.e., they are less basic, such that the relative order in cycloalkylamine basicity is: cyclohexyl > cyclopentyl > cyclobutyl > cyclopropyl. Herein, it was found that the pKa values of the cAMR and cAMR6G systems can also be predicted by Baeyer ring strain theory. The pKa values for the cAMR6G series were shown to be higher than the cAMR series by a value of approximately 1.",
"title": ""
},
{
"docid": "42b3f0283456b381344c3587486e72d3",
"text": "With rapidly growing IPTV market and IPTV standardization activity, multicasting starts to gain a lot of attention again. The use of multicasting conserves the bandwidth of a network and this becomes especially important with wireless channels with limited throughput. But, multicasting over wireless channel is facing many challenges. In this paper, we discuss the challenges for the multicast streaming to provide IPTV service over WLAN and present system architecture to model the wireless IPTV service environment. And we apply a novel CC-FEC (Cross- correlated Forwarding Error Correction) scheme to the model and analyze the efficiency of the scheme.",
"title": ""
},
{
"docid": "92d8627323588e272cc9c02836cc2512",
"text": "OBJECTIVE\nThe American Academy of Pediatrics recommends forensic evidence collection when sexual abuse has occurred within 72 hours, or when there is bleeding or acute injury. It is not known whether these recommendations are appropriate for prepubertal children, because few data exist regarding the utility of forensic evidence collection in cases of child sexual assault. This study describes the epidemiology of forensic evidence findings in prepubertal victims of sexual assault.\n\n\nMETHODS\nThe medical records of 273 children <10 years old who were evaluated in hospital emergency departments in Philadelphia, Pennsylvania, and had forensic evidence processed by the Philadelphia Police Criminalistics Laboratory were retrospectively reviewed for history, physical examination findings, forensic evidence collection, and forensic results.\n\n\nRESULTS\nSome form of forensic evidence was identified in 24.9% of children, all of whom were examined within 44 hours of their assault. Over 90% of children with positive forensic evidence findings were seen within 24 hours of their assault. The majority of forensic evidence (64%) was found on clothing and linens, yet only 35% of children had clothing collected for analysis. After 24 hours, all evidence, with the exception of 1 pubic hair, was recovered from clothing or linens. No swabs taken from the child's body were positive for blood after 13 hours or sperm/semen after 9 hours. A minority of children (23%) had genital injuries. Genital injury and a history of ejaculation provided by the child were associated with an increased likelihood of identifying forensic evidence, but several children had forensic evidence found that was unanticipated by the child's history.\n\n\nCONCLUSIONS\nThe general guidelines for forensic evidence collection in cases of acute sexual assault are not well-suited for prepubertal victims. The decision to collect evidence is best made by the timing of the examination. Swabbing the child's body for evidence is unnecessary after 24 hours. Clothing and linens yield the majority of evidence and should be pursued vigorously for analysis.",
"title": ""
},
{
"docid": "50d3bd37eb32f8085e9b53245fc74dc8",
"text": "To meet the required huge data analysis, organization, and storage demand, the hashing technique has got a lot of attention as it aims to learn an efficient binary representation from the original high-dimensional data. In this paper, we focus on the unsupervised spectral hashing due to its effective manifold embedding. Existing spectral hashing methods mainly suffer from two problems, i.e., the inefficient spectral candidate and intractable binary constraint for spectral analysis. To overcome these two problems, we propose to employ spectral rotation to seek a better spectral solution and adopt the alternating projection algorithm to settle the complex code constraints, which are therefore named as Spectral Hashing with Spectral Rotation and Alternating Discrete Spectral Hashing, respectively. To enjoy the merits of both methods, the spectral rotation technique is finally combined with the original spectral objective, which aims to simultaneously learn better spectral solution and more efficient discrete codes and is called as Discrete Spectral Hashing. Furthermore, the efficient optimization algorithms are also provided, which just take comparable time complexity to existing hashing methods. To evaluate the proposed three methods, extensive comparison experiments and studies are conducted on four large-scale data sets for the image retrieval task, and the noticeable performance beats several state-of-the-art spectral hashing methods on different evaluation metrics.",
"title": ""
},
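One common way to realize the "spectral rotation plus alternating discrete optimization" idea above is ITQ-style alternation: binarize the rotated spectral embedding, then solve an orthogonal Procrustes problem for a better rotation. The sketch below follows that generic recipe rather than the paper's exact algorithm; the embedding, code length and iteration count are placeholders:

    import numpy as np

    def rotate_and_binarise(Y, iters=30, seed=0):
        # Y: n x c real-valued spectral embedding. Alternate between taking the
        # sign of the rotated embedding and solving an orthogonal Procrustes
        # problem for the rotation that best matches those codes.
        rng = np.random.default_rng(seed)
        R, _ = np.linalg.qr(rng.standard_normal((Y.shape[1], Y.shape[1])))
        for _ in range(iters):
            B = np.sign(Y @ R)
            B[B == 0] = 1.0
            U, _, Vt = np.linalg.svd(Y.T @ B)
            R = U @ Vt                        # Procrustes solution
        return np.sign(Y @ R), R

    Y = np.random.default_rng(1).standard_normal((100, 16))
    codes, R = rotate_and_binarise(Y)         # codes in {-1, +1}, 16 bits per item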
{
"docid": "523c6a68d4462fe365c21eeb93c73c16",
"text": "We study the exposure of the US corporate bond returns to liquidity shocks of stocks and Treasury bonds over the period 1973–2007 in a regime-switching model. In one regime, liquidity shocks have mostly insignificant effects on bond prices, whereas in another regime, a rise in illiquidity produces significant but conflicting effects: Prices of investment-grade bonds rise while prices of speculative-grade (junk) bonds fall substantially (relative to the market). Relating the probability of these regimes to macroeconomic conditions we find that the second regime can be predicted by economic conditions that are characterized as “stress.” These effects, which are robust to controlling for other systematic risks (term and default), suggest the existence of time-varying liquidity risk of corporate bond returns conditional on episodes of flight to liquidity. Our model can predict the out-of-sample bond returns for the stress years 2008–2009. We find a similar pattern for stocks classified by high or low book-to-market ratio, where again, liquidity shocks play a special role in periods characterized by adverse economic conditions. & 2013 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "c72e0e79f83b59af58e5d8bc7d9244d5",
"text": "A novel deep learning architecture (XmasNet) based on convolutional neural networks was developed for the classification of prostate cancer lesions, using the 3D multiparametric MRI data provided by the PROSTATEx challenge. End-to-end training was performed for XmasNet, with data augmentation done through 3D rotation and slicing, in order to incorporate the 3D information of the lesion. XmasNet outperformed traditional machine learning models based on engineered features, for both train and test data. For the test data, XmasNet outperformed 69 methods from 33 participating groups and achieved the second highest AUC (0.84) in the PROSTATEx challenge. This study shows the great potential of deep learning for cancer imaging.",
"title": ""
},
{
"docid": "26fad325410424982d29577e49797159",
"text": "How do the statements made by people in online political discussions affect other people's willingness to express their own opinions, or argue for them? And how does group interaction ultimately shape individual opinions? We examine carefully whether and how patterns of group discussion shape (a) individuals' expressive behavior within those discussions and (b) changes in personal opinions. This research proposes that the argumentative \"climate\" of group opinion indeed affects postdiscussion opinions, and that a primary mechanism responsible for this effect is an intermediate influence on individual participants' own expressions during the online discussions. We find support for these propositions in data from a series of 60 online group discussions, involving ordinary citizens, about the tax plans offered by rival U.S. presidential candidates George W. Bush and Al Gore in 2000. This journal article is available at ScholarlyCommons: http://repository.upenn.edu/asc_papers/99 Normative and Informational Influences in Online Political Discussions Vincent Price, Lilach Nir, & Joseph N. Cappella 1 Annenberg School for Communication, University of Pennsylvania, Philadelphia, PA 19104 6220 2 Department of Communication and the Department of Political Science, Hebrew University of Jerusalem, Jerusalem, Israel, 91905 How do the statements made by people in online political discussions affect other peo ple’s willingness to express their own opinions, or argue for them? And how does group interaction ultimately shape individual opinions? We examine carefully whether and how patterns of group discussion shape (a) individuals’ expressive behavior within those discussions and (b) changes in personal opinions. This research proposes that the argu mentative ‘‘climate’’ of group opinion indeed affects postdiscussion opinions, and that a primary mechanism responsible for this effect is an intermediate influence on individ ual participants’ own expressions during the online discussions. We find support for these propositions in data from a series of 60 online group discussions, involving ordi nary citizens, about the tax plans offered by rival U.S. presidential candidates George W. Bush and Al Gore in 2000. Investigations of social influence and public opinion go hand in hand. Opinions may exist as psychological phenomena in individual minds, but the processes that shape these opinions—at least, public opinions—are inherently social–psychological. The notion that group interaction can influence individual opinions is widely accepted. Indeed, according to many participatory theories of democracy, lively exchanges among citizens are deemed central to the formation of sound or ‘‘true’’ public opinion, which is forged in the fire of group discussion. This truly public opinion is commonly contrasted with mass or ‘‘pseudo’’-opinion developed in isolation by disconnected media consumers responding individually to the news (e.g., Blumer, 1946; Fishkin, 1991, 1995; Graber, 1982). Although discussion is celebrated in democratic theory as a critical element of proper opinion formation, it also brings with it a variety of potential downsides. 
These include a possible tyranny of the majority (e.g., de Tocqueville, 1835/1945), distorted expression of opinions resulting from fear of social isolation (Noelle-Neumann, 1984), or shifts of opinion to more extreme positions than most individuals might actually prefer (see, e.g., Janis, 1972, on dangerous forms of ‘‘group think,’’ or more recently Sunstein, 2001, on the polarizing effects of ‘‘enclave’’ communication on the Web). The problem of how to foster productive social interaction while avoiding potential dysfunctions of group influence has occupied a large place in normative writings on public opinion and democracy. Modern democracies guarantee freedom of association and public expression; they also employ systems and procedures aimed at protecting collective decision making from untoward social pressure, including not only the use of secret ballots in elections but also more generally republican legislatures and executive and judicial offices that by design are insulated from too much democracy, that is, from direct popular control (e.g., Madison, 1788/1966). However, steady advances in popular education and growth of communication media have enlarged expectations of the ordinary citizen and brought calls for more direct, popular participation in government. In particular, dramatic technological changes over the past several decades—and especially the rise of interactive forms of electronic communication enabled by the Internet and World Wide Web—have fueled hopes for new, expansive, and energized forms of ‘‘teledemocracy’’ (e.g., Arterton, 1987). Online political discussion is thus of considerable interest to students of public opinion and political communication. It has been credited with creating vital spaces for public conversation, opening in a new ‘‘public sphere’’ of the sort envisioned by Habermas (1962/1989), (see, e.g., Papacharissi, 2004; Poor, 2005; Poster, 1997). Though still not a routine experience for citizens, it has been steadily growing in prevalence and likely import for popular opinion formation. Recent surveys indicate that close to a third of Internet users regularly engage with groups online, with nearly 10% reporting that they joined online discussions about the 2004 presidential election (Pew Research Center, 2005). Online political discussion offers new and potentially quite powerful modes of scientific observation as well. Despite continuous methodological improvements, the mainstay of public opinion research, the general-population survey, has always consisted of randomly sampled, one-on-one, respondent-to-interviewer ‘‘conversations’’ aimed at extracting precoded responses or short verbal answers to structured questionnaires. Web-based technologies, however, may now permit randomly constituted respondent-withrespondent group conversations. The conceptual fit between such conversations and the phenomenon of public opinion, itself grounded in popular discussion, renders it quite appealing. Developments in electronic data storage and retrieval, and telecommunication networks of increasing channel capacity, now make possible an integration of general-population survey techniques and more qualitative research approaches, such as focus group methods, that have become popular in large part owing to the sense that they offer a more refined understanding of popular thought than might be gained from structured surveys (e.g., Morgan, 1997). Perhaps most important, the study of online discussion opens new theoretical avenues for public opinion research. 
Understanding online citizen interactions calls for bringing together several strands of theory in social psychology, smallgroup decision making, and political communication that have heretofore been disconnected (Price, 1992). Social influence in opinion formation Certainly, the most prominent theory of social influence in public opinion research has been Noelle-Neumann’s (1984) spiral of silence. Citing early research on group conformity processes, such as that of Asch (1956), Noelle-Neumann argued that media depictions of the normative ‘‘climate of opinion’’ have a silencing effect on those who hold minority viewpoints. The reticence of minorities to express their views contributes to the appearance of a solid majority opinion, which, in turn, produces a spiral of silence that successively emboldens the majority and enervates the minority. Meta-analytic evaluations of research on the hypothetical silencing effect of the mediated climate of opinion suggest that such effects, if they indeed exist, appear to be fairly small (Glynn, Hayes, & Shanahan, 1997); nevertheless, the theory has garnered considerable empirical attention and remains influential. In experimental social psychology, group influence has been the object of systematic study for over half a century. Although no single theoretical framework is available for explaining how social influence operates, some important organizing principles and concepts have emerged over time (Price & Oshagan, 1995). One of the most useful heuristics, proposed by Deutsch and Gerard (1955), distinguishes two broad forms of social influence (see also Kaplan & Miller, 1987). Normative social influence occurs when someone is motivated by a desire to conform to the positive expectations of other people. Motivations for meeting these normative expectations lie in the various rewards that might accrue (self-esteem or feelings of social approval) or possible negative sanctions that might result from deviant behavior (alienation, excommunication, or social isolation). Normative social influence is clearly the basis of Noelle-Neumann’s (1984) theorizing about minorities silencing themselves in the face of majority pressure. Informational social influence, in contrast, occurs when people accept the words, opinions, and deeds of others as valid evidence about reality. People learn about the world, in part, from discovering that they disagree (e.g., Burnstein & Vinokur, 1977; Vinokur & Burnstein, 1974). They are influenced by groups not only because of group norms, but also because of arguments that arise in groups, through a comparison of their views to those expressed by others (see also the distinction between normative and comparative functions of reference groups in sociology, e.g., Hyman & Singer, 1968; Kelley, 1952). Although the distinction between informational and normative influence has proven useful and historically important in small-group research, it can become cloudy in many instances. This is so because normative pressure and persuasive information operate in similar ways within groups, and often with similar effects. For example, the tendency of groups to polarize—that is, to move following discussion to extreme positions in the direction that group members were initially inc",
"title": ""
},
{
"docid": "08f7a03b2c0edc512c7d4165c340714b",
"text": "The Internet of Things (IoT) is envisioned to be a large-scale system that interconnects sensors, mundane objects, and other physical devices via an effective communication infrastructure. Given the heterogeneous and large-scale nature of the IoT, security has emerged as a key challenge. This challenge is further exacerbated by the fact that security solutions for the IoT must account for the limited computational capabilities of the IoT's nodes. That makes enhancing the security at the physical layer level an attractive solution for IoT networks. In this paper, a novel anti- jamming mechanism is proposed to enable a fusion center to defend the IoT from a malicious radio jamming attack. The problem is formulated as a Colonel Blotto game in which the fusion center, acting as defender, aims to detect the jamming attack by increasing the number of bits allocated to certain nodes for reporting their measured interference level, while the jammer aims to disturb the network performance and still be undetected. To solve this game, an algorithm based on fictitious play is proposed to reach the equilibrium of the game. Simulation results show that the proposed mechanism outperforms the mechanism of allocating the available bits in a random manner for two different cases of network architecture.",
"title": ""
},
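Fictitious play, the solution method named in the abstract above, has each player best-respond to the empirical mixture of the opponent's past actions; in zero-sum games the empirical frequencies converge to an equilibrium. A minimal NumPy sketch on a toy matching-pennies payoff matrix, not the paper's Colonel Blotto formulation:

    import numpy as np

    def fictitious_play(payoff, rounds=5000):
        # payoff[i, j]: row player's payoff in a zero-sum matrix game. Each player
        # best-responds to the opponent's empirical mixture of past actions.
        m, n = payoff.shape
        row_counts, col_counts = np.ones(m), np.ones(n)
        for _ in range(rounds):
            i = int(np.argmax(payoff @ (col_counts / col_counts.sum())))
            j = int(np.argmin((row_counts / row_counts.sum()) @ payoff))
            row_counts[i] += 1
            col_counts[j] += 1
        return row_counts / row_counts.sum(), col_counts / col_counts.sum()

    # Matching pennies: both empirical strategies approach (0.5, 0.5).
    x, y = fictitious_play(np.array([[1.0, -1.0], [-1.0, 1.0]]))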
{
"docid": "836ac0267a67fd2e7657a5893975b023",
"text": "Managing trust efficiently and effectively is critical to facilitating cooperation or collaboration and decision making tasks in tactical networks while meeting system goals such as reliability, availability, or scalability. Delay tolerant networks are often encountered in military network environments where end-to-end connectivity is not guaranteed due to frequent disconnection or delay. This work proposes a provenance-based trust framework for efficiency in resource consumption as well as effectiveness in trust evaluation. Provenance refers to the history of ownership of a valued object or information. We adopt the concept of provenance in that trustworthiness of an information provider affects that of information, and vice-versa. The proposed trust framework takes a data-driven approach to reduce resource consumption in the presence of selfish or malicious nodes. This work adopts a model-based method to evaluate the proposed trust framework using Stochastic Petri Nets. The results show that the proposed trust framework achieves desirable accuracy of trust evaluation of nodes compared with an existing scheme while consuming significantly less communication overhead.",
"title": ""
},
{
"docid": "d4896aa12be18aea9a6639422ee12d92",
"text": "Recently, tag recommendation (TR) has become a very hot research topic in data mining and related areas. However, neither co-occurrence based methods which only use the item-tag matrix nor content based methods which only use the item content information can achieve satisfactory performance in real TR applications. Hence, how to effectively combine the item-tag matrix, item content information, and other auxiliary information into the same recommendation framework is the key challenge for TR. In this paper, we first adapt the collaborative topic regression (CTR) model, which has been successfully applied for article recommendation, to combine both item-tag matrix and item content information for TR. Furthermore, by extending CTR we propose a novel hierarchical Bayesian model, called CTR with social regularization (CTR-SR), to seamlessly integrate the item-tag matrix, item content information, and social networks between items into the same principled model. Experiments on real data demonstrate the effectiveness of our proposed models.",
"title": ""
}
] |
scidocsrr
|
7b759c86d2bdee3deb215499d076f94b
|
"Killing Spree": Exploring the Connection Between Competitive Game Play and Aggressive Cognition
|
[
{
"docid": "eded90c762031357c1f5366fefca007c",
"text": "The authors examined whether the nature of the opponent (computer, friend, or stranger) influences spatial presence, emotional responses, and threat and challenge appraisals when playing video games. In a within-subjects design, participants played two different video games against a computer, a friend, and a stranger. In addition to self-report ratings, cardiac interbeat intervals (IBIs) and facial electromyography (EMG) were measured to index physiological arousal and emotional valence. When compared to playing against a computer, playing against another human elicited higher spatial presence, engagement, anticipated threat, post-game challenge appraisals, and physiological arousal, as well as more positively valenced emotional responses. In addition, playing against a friend elicited greater spatial presence, engagement, and self-reported and physiological arousal, as well as more positively valenced facial EMG responses, compared to playing against a stranger. The nature of the opponent influences spatial presence when playing video games, possibly through the mediating influence on arousal and attentional processes.",
"title": ""
},
{
"docid": "20d96905880332d6ef5a33b4dd0d8827",
"text": "In spite of the fact that equal opportunities for men and women have been a priority in many countries, enormous gender differences prevail in most competitive high-ranking positions. We conduct a series of controlled experiments to investigate whether women might react differently than men to competitive incentive schemes commonly used in job evaluation and promotion. We observe no significant gender difference in mean performance when participants are paid proportional to their performance. But in the competitive environment with mixed gender groups we observe a significant gender difference: the mean performance of men has a large and significant, that of women is unchanged. This gap is not due to gender differences in risk aversion. We then run the same test with homogeneous groups, to investigate whether women under-perform only when competing against men. Women do indeed increase their performance and gender differences in mean performance are now insignificant. These results may be due to lower skill of women, or more likely to the fact that women dislike competition, or alternatively that they feel less competent than their male competitors, which depresses their performance in mixed tournaments. Our last experiment provides support for this hypothesis.",
"title": ""
}
] |
[
{
"docid": "04e4c1b80bcf1a93cafefa73563ea4d3",
"text": "The last decade has produced an explosion in neuroscience research examining young children's early processing of language. Noninvasive, safe functional brain measurements have now been proven feasible for use with children starting at birth. The phonetic level of language is especially accessible to experimental studies that document the innate state and the effect of learning on the brain. The neural signatures of learning at the phonetic level can be documented at a remarkably early point in development. Continuity in linguistic development from infants' earliest brain responses to phonetic stimuli is reflected in their language and prereading abilities in the second, third, and fifth year of life, a finding with theoretical and clinical impact. There is evidence that early mastery of the phonetic units of language requires learning in a social context. Neuroscience on early language learning is beginning to reveal the multiple brain systems that underlie the human language faculty.",
"title": ""
},
{
"docid": "0dfcbae479f0af59236a5213cb37983a",
"text": "The objective of this work is to detect the use of automated programs, known as game bots, based on social interactions in MMORPGs. Online games, especially MMORPGs, have become extremely popular among internet users in the recent years. Not only the popularity but also security threats such as the use of game bots and identity theft have grown manifold. As bot players can obtain unjustified assets without corresponding efforts, the gaming community does not allow players to use game bots. However, the task of identifying game bots is not an easy one because of the velocity and variety of their evolution in mimicking human behavior. Existing methods for detecting game bots have a few drawbacks like reducing immersion of players, low detection accuracy rate, and collision with other security programs. We propose a novel method for detecting game bots based on the fact that humans and game bots tend to form their social network in contrasting ways. In this work we focus particularly on the in game mentoring network from amongst several social networks. We construct a couple of new features based on eigenvector centrality to capture this intuition and establish their importance for detecting game bots. The results show a significant increase in the classification accuracy of various classifiers with the introduction of these features.",
"title": ""
},
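The centrality-based features described above can be computed with an off-the-shelf graph library. A sketch using networkx on a toy mentoring graph; the edge list and the choice of degree as a second feature are illustrative, not the paper's feature set:

    import networkx as nx

    # Toy mentoring network: nodes are players, an edge links mentor and mentee.
    G = nx.Graph()
    G.add_edges_from([("a", "b"), ("a", "c"), ("b", "c"), ("d", "a")])

    centrality = nx.eigenvector_centrality(G, max_iter=1000)
    features = {node: (G.degree(node), centrality[node]) for node in G.nodes}
    # per-player feature vectors would then be fed to a bot/human classifier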
{
"docid": "f752f66cbd7a43c3d45940a8fbec0dbf",
"text": "ChEMBL is an Open Data database containing binding, functional and ADMET information for a large number of drug-like bioactive compounds. These data are manually abstracted from the primary published literature on a regular basis, then further curated and standardized to maximize their quality and utility across a wide range of chemical biology and drug-discovery research problems. Currently, the database contains 5.4 million bioactivity measurements for more than 1 million compounds and 5200 protein targets. Access is available through a web-based interface, data downloads and web services at: https://www.ebi.ac.uk/chembldb.",
"title": ""
},
{
"docid": "5f3dc141b69eb50e17bdab68a2195e13",
"text": "The purpose of this study is to develop a fuzzy-AHP multi-criteria decision making model for procurement process. It aims to measure the procurement performance in the automotive industry. As such measurement of procurement will enable competitive advantage and provide a model for continuous improvement. The rapid growth in the market and the level of competition in the global economy transformed procurement as a strategic issue; which is broader in scope and responsibilities as compared to purchasing. This study reviews the existing literature in procurement performance measurement to identify the key areas of measurement and a hierarchical model is developed with a set of generic measures. In addition, a questionnaire is developed for pair-wise comparison and to collect opinion from practitioners, researchers, managers etc. The relative importance of the measurement criteria are assessed using Analytical Hierarchy Process (AHP) and fuzzy-AHP. The validity of the model is c onfirmed with the results obtained.",
"title": ""
},
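At the core of (fuzzy-)AHP is turning a pairwise-comparison matrix of criteria into priority weights. The sketch below uses the crisp geometric-mean approximation rather than a fuzzy extent analysis, with an invented 3-criterion comparison matrix:

    import numpy as np

    def ahp_weights(pairwise):
        # Geometric-mean approximation of the principal eigenvector of a
        # pairwise-comparison matrix (crisp AHP; fuzzy AHP would replace each
        # judgement with a triangular fuzzy number).
        gm = np.prod(pairwise, axis=1) ** (1.0 / pairwise.shape[0])
        return gm / gm.sum()

    # Invented judgements: criterion 1 is 3x as important as 2 and 5x as important as 3.
    P = np.array([[1.0, 3.0, 5.0],
                  [1 / 3.0, 1.0, 2.0],
                  [1 / 5.0, 1 / 2.0, 1.0]])
    w = ahp_weights(P)        # roughly [0.65, 0.23, 0.12]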
{
"docid": "37f55e03f4d1ff3b9311e537dc7122b5",
"text": "Extracting governing equations from data is a central challenge in many diverse areas of science and engineering. Data are abundant whereas models often remain elusive, as in climate science, neuroscience, ecology, finance, and epidemiology, to name only a few examples. In this work, we combine sparsity-promoting techniques and machine learning with nonlinear dynamical systems to discover governing equations from noisy measurement data. The only assumption about the structure of the model is that there are only a few important terms that govern the dynamics, so that the equations are sparse in the space of possible functions; this assumption holds for many physical systems in an appropriate basis. In particular, we use sparse regression to determine the fewest terms in the dynamic governing equations required to accurately represent the data. This results in parsimonious models that balance accuracy with model complexity to avoid overfitting. We demonstrate the algorithm on a wide range of problems, from simple canonical systems, including linear and nonlinear oscillators and the chaotic Lorenz system, to the fluid vortex shedding behind an obstacle. The fluid example illustrates the ability of this method to discover the underlying dynamics of a system that took experts in the community nearly 30 years to resolve. We also show that this method generalizes to parameterized systems and systems that are time-varying or have external forcing.",
"title": ""
},
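The sparse-regression step described above can be realized with sequentially thresholded least squares: fit, zero out small coefficients, and refit on the surviving terms. A NumPy sketch in which Theta is a user-built library of candidate functions evaluated on the data and dXdt holds the measured derivatives; the threshold and iteration count are arbitrary:

    import numpy as np

    def sparse_regression(Theta, dXdt, threshold=0.1, iters=10):
        # Sequentially thresholded least squares: fit, zero out small coefficients,
        # then refit using only the surviving candidate functions.
        Xi = np.linalg.lstsq(Theta, dXdt, rcond=None)[0]
        for _ in range(iters):
            small = np.abs(Xi) < threshold
            Xi[small] = 0.0
            for k in range(dXdt.shape[1]):
                big = ~small[:, k]
                if big.any():
                    Xi[big, k] = np.linalg.lstsq(Theta[:, big], dXdt[:, k],
                                                 rcond=None)[0]
        return Xi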
{
"docid": "aa64bd9576044ec5e654c9f29c4f7d84",
"text": "BACKGROUND\nSocial media are dynamic and interactive computer-mediated communication tools that have high penetration rates in the general population in high-income and middle-income countries. However, in medicine and health care, a large number of stakeholders (eg, clinicians, administrators, professional colleges, academic institutions, ministries of health, among others) are unaware of social media's relevance, potential applications in their day-to-day activities, as well as the inherent risks and how these may be attenuated and mitigated.\n\n\nOBJECTIVE\nWe conducted a narrative review with the aim to present case studies that illustrate how, where, and why social media are being used in the medical and health care sectors.\n\n\nMETHODS\nUsing a critical-interpretivist framework, we used qualitative methods to synthesize the impact and illustrate, explain, and provide contextual knowledge of the applications and potential implementations of social media in medicine and health care. Both traditional (eg, peer-reviewed) and nontraditional (eg, policies, case studies, and social media content) sources were used, in addition to an environmental scan (using Google and Bing Web searches) of resources.\n\n\nRESULTS\nWe reviewed, evaluated, and synthesized 76 articles, 44 websites, and 11 policies/reports. Results and case studies are presented according to 10 different categories of social media: (1) blogs (eg, WordPress), (2) microblogs (eg, Twitter), (3) social networking sites (eg, Facebook), (4) professional networking sites (eg, LinkedIn, Sermo), (5) thematic networking sites (eg, 23andMe), (6) wikis (eg, Wikipedia), (7) mashups (eg, HealthMap), (8) collaborative filtering sites (eg, Digg), (9) media sharing sites (eg, YouTube, Slideshare), and others (eg, SecondLife). Four recommendations are provided and explained for stakeholders wishing to engage with social media while attenuating risk: (1) maintain professionalism at all times, (2) be authentic, have fun, and do not be afraid, (3) ask for help, and (4) focus, grab attention, and engage.\n\n\nCONCLUSIONS\nThe role of social media in the medical and health care sectors is far reaching, and many questions in terms of governance, ethics, professionalism, privacy, confidentiality, and information quality remain unanswered. By following the guidelines presented, professionals have a starting point to engage with social media in a safe and ethical manner. Future research will be required to understand the synergies between social media and evidence-based practice, as well as develop institutional policies that benefit patients, clinicians, public health practitioners, and industry alike.",
"title": ""
},
{
"docid": "47baa10f94368bc056bbca3dd4caec0c",
"text": "We characterize the problem of pose estimation for rigid objects in terms of determining viewpoint to explain coarse pose and keypoint prediction to capture the finer details. We address both these tasks in two different settings - the constrained setting with known bounding boxes and the more challenging detection setting where the aim is to simultaneously detect and correctly estimate pose of objects. We present Convolutional Neural Network based architectures for these and demonstrate that leveraging viewpoint estimates can substantially improve local appearance based keypoint predictions. In addition to achieving significant improvements over state-of-the-art in the above tasks, we analyze the error modes and effect of object characteristics on performance to guide future efforts towards this goal.",
"title": ""
},
{
"docid": "92699fa23a516812c7fcb74ba38f42c6",
"text": "Deep reinforcement learning (DRL) is poised to revolutionize the field of artificial intelligence (AI) and represents a step toward building autonomous systems with a higherlevel understanding of the visual world. Currently, deep learning is enabling reinforcement learning (RL) to scale to problems that were previously intractable, such as learning to play video games directly from pixels. DRL algorithms are also applied to robotics, allowing control policies for robots to be learned directly from camera inputs in the real world. In this survey, we begin with an introduction to the general field of RL, then progress to the main streams of value-based and policy-based methods. Our survey will cover central algorithms in deep RL, including the deep Q-network (DQN), trust region policy optimization (TRPO), and asynchronous advantage actor critic. In parallel, we highlight the unique advantages of deep neural networks, focusing on visual understanding via RL. To conclude, we describe several current areas of research within the field.",
"title": ""
},
{
"docid": "58677916e11e6d5401b7396d117a517b",
"text": "This work contributes to the development of a common framework for the discussion and analysis of dexterous manipulation across the human and robotic domains. An overview of previous work is first provided along with an analysis of the tradeoffs between arm and hand dexterity. A hand-centric and motion-centric manipulation classification is then presented and applied in four different ways. It is first discussed how the taxonomy can be used to identify a manipulation strategy. Then, applications for robot hand analysis and engineering design are explained. Finally, the classification is applied to three activities of daily living (ADLs) to distinguish the patterns of dexterous manipulation involved in each task. The same analysis method could be used to predict problem ADLs for various impairments or to produce a representative benchmark set of ADL tasks. Overall, the classification scheme proposed creates a descriptive framework that can be used to effectively describe hand movements during manipulation in a variety of contexts and might be combined with existing object centric or other taxonomies to provide a complete description of a specific manipulation task.",
"title": ""
},
{
"docid": "104fa95b500df05a052a230e80797f59",
"text": "Stochastic variational inference finds good posterior approximations of probabilistic models with very large data sets. It optimizes the variational objective with stochastic optimization, following noisy estimates of the natural gradient. Operationally, stochastic inference iteratively subsamples from the data, analyzes the subsample, and updates parameters with a decreasing learning rate. However, the algorithm is sensitive to that rate, which usually requires hand-tuning to each application. We solve this problem by developing an adaptive learning rate for stochastic inference. Our method requires no tuning and is easily implemented with computations already made in the algorithm. We demonstrate our approach with latent Dirichlet allocation applied to three large text corpora. Inference with the adaptive learning rate converges faster and to a better approximation than the best settings of hand-tuned rates.",
"title": ""
},
{
"docid": "fe2ef685733bae2737faa04e8a10087d",
"text": "Federal health agencies are currently developing regulatory strategies for Artificial Intelligence based medical products. Regulatory regimes need to account for the new risks and benefits that come with modern AI, along with safety concerns and potential for continual autonomous learning that makes AI non-static and dramatically different than the drugs and products that agencies are used to regulating. Currently, the U.S. Food and Drug Administration (FDA) and other regulatory agencies treat AI-enabled products as medical devices. Alternatively, we propose that AI regulation in the medical domain can analogously adopt aspects of the models used to regulate medical providers.",
"title": ""
},
{
"docid": "2d4357831f83de026759776e019934da",
"text": "Mapping the physical location of nodes within a wireless sensor network (WSN) is critical in many applications such as tracking and environmental sampling. Passive RFID tags pose an interesting solution to localizing nodes because an outside reader, rather than the tag, supplies the power to the tag. Thus, utilizing passive RFID technology allows a localization scheme to not be limited to objects that have wireless communication capability because the technique only requires that the object carries a RFID tag. This paper illustrates a method in which objects can be localized without the need to communicate received signal strength information between the reader and the tagged item. The method matches tag count percentage patterns under different signal attenuation levels to a database of tag count percentages, attenuations and distances from the base station reader.",
"title": ""
},
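As a rough illustration of the pattern-matching idea in the passage above, the sketch below compares an observed vector of tag count percentages (one entry per attenuation level) against a small pre-recorded database to estimate the tag's distance from the reader. All numbers, attenuation levels, and distances are invented for illustration and do not come from the cited work.

```python
# Minimal sketch: match an observed tag-count-percentage pattern against a
# database of (distance, pattern) pairs. The database values are made up.
import numpy as np

# database rows: distance in meters -> tag count percentage per attenuation level
database = {
    0.5: np.array([0.98, 0.95, 0.90, 0.80]),
    1.0: np.array([0.95, 0.85, 0.70, 0.50]),
    2.0: np.array([0.80, 0.60, 0.35, 0.15]),
    3.0: np.array([0.55, 0.30, 0.10, 0.02]),
}

def estimate_distance(observed, database):
    """Return the database distance whose tag-count pattern is closest
    (in Euclidean norm) to the observed pattern."""
    return min(database, key=lambda d: np.linalg.norm(database[d] - observed))

observed = np.array([0.82, 0.62, 0.33, 0.12])   # hypothetical reading
print(estimate_distance(observed, database))    # closest stored pattern wins
```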
{
"docid": "660465cbd4bd95108a2381ee5a97cede",
"text": "In this paper we discuss the design and implementation of an automated usability evaluation method for iOS applications. In contrast to common usability testing methods, it is not explicitly necessary to involve an expert or subjects. These circumstances reduce costs, time and personnel expenditures. Professionals are replaced by the automation tool while test participants are exchanged with consumers of the launched application. Interactions of users are captured via a fully automated capturing framework which creates a record of user interactions for each session and sends them to a central server. A usability problem is defined as a sequence of interactions and pattern recognition specified by interaction design patterns is applied to find these problems. Nevertheless, it falls back to the user input for accurate results. Similar to the problem, the solution of the problem is based on the HCI design pattern. An evaluation shows the functionality of our approach compared to a traditional usability evaluation method.",
"title": ""
},
{
"docid": "c35a4278aa4a084d119238fdd68d9eb6",
"text": "ARM TrustZone, which provides a Trusted Execution Environment (TEE), normally plays a role in keeping security-sensitive resources safe. However, to properly control access to the resources, it is not enough to just isolate them from the Rich Execution Environment (REE). In addition to the isolation, secure communication should be guaranteed between security-critical resources in the TEE and legitimate REE processes that are permitted to use them. Even though there is a TEE security solution — namely, a kernel-integrity monitor — it aims to protect the REE kernel’s static regions, not to secure communication between the REE and TEE. We propose SeCReT to ameliorate this problem. SeCReT is a framework that builds a secure channel between the REE and TEE by enabling REE processes to use session keys in the REE that is regarded as unsafe region. SeCReT provides the session key to a requestor process only when the requestor’s code and control flow integrity are verified. To prevent the key from being exposed to an attacker who already compromised the REE kernel, SeCReT flushes the key from the memory every time the processor switches into kernel mode. In this paper, we present the design and implementation of SeCReT to show how it protects the key in the REE. Our prototype is implemented on Arndale board, which offers a Cortex-A15 dual-core processor with TrustZone as its security extension. We performed a security analysis by using a kernel rootkit and also ran LMBench microbenchmark to evaluate the performance overhead imposed by SeCReT.",
"title": ""
},
{
"docid": "257f00fc5a4b2a0addbd7e9cc2bf6fec",
"text": "Security experts have demonstrated numerous risks imposed by Internet of Things (IoT) devices on organizations. Due to the widespread adoption of such devices, their diversity, standardization obstacles, and inherent mobility, organizations require an intelligent mechanism capable of automatically detecting suspicious IoT devices connected to their networks. In particular, devices not included in a white list of trustworthy IoT device types (allowed to be used within the organizational premises) should be detected. In this research, Random Forest, a supervised machine learning algorithm, was applied to features extracted from network traffic data with the aim of accurately identifying IoT device types from the white list. To train and evaluate multi-class classifiers, we collected and manually labeled network traffic data from 17 distinct IoT devices, representing nine types of IoT devices. Based on the classification of 20 consecutive sessions and the use of majority rule, IoT device types that are not on the white list were correctly detected as unknown in 96% of test cases (on average), and white listed device types were correctly classified by their actual types in 99% of cases. Some IoT device types were identified quicker than others (e.g., sockets and thermostats were successfully detected within five TCP sessions of connecting to the network). Perfect detection of unauthorized IoT device types was achieved upon analyzing 110 consecutive sessions; perfect classification of white listed types required 346 consecutive sessions, 110 of which resulted in 99.49% accuracy. Further experiments demonstrated the successful applicability of classifiers trained in one location and tested on another. In addition, a discussion is provided regarding the resilience of our machine learning-based IoT white listing method to adversarial attacks.",
"title": ""
},
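The classify-then-vote scheme summarized above can be sketched as follows; the feature dimensionality, the 20-session window, and the agreement cut-off are illustrative assumptions rather than the study's exact configuration, and the training data here is random stand-in data.

```python
# Hedged sketch of classify-then-vote: a Random Forest labels each session,
# and the device type is decided by majority vote over a window of sessions,
# falling back to "unknown" when agreement is low.
import numpy as np
from collections import Counter
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 10))            # per-session traffic features
y_train = rng.integers(0, 9, size=1000)          # nine white-listed device types

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

def classify_device(session_features, clf, window=20, min_agreement=0.5):
    """Majority vote over `window` consecutive sessions; return 'unknown'
    when no single class reaches the agreement threshold."""
    preds = clf.predict(session_features[:window])
    label, count = Counter(preds).most_common(1)[0]
    return label if count / len(preds) >= min_agreement else "unknown"

new_device_sessions = rng.normal(size=(20, 10))  # hypothetical unseen device
print(classify_device(new_device_sessions, clf))
```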
{
"docid": "adc51e9fdbbb89c9a47b55bb8823c7fe",
"text": "State-of-the-art model counters are based on exhaustive DPLL algorithms, and have been successfully used in probabilistic reasoning, one of the key problems in AI. In this article, we present a new exhaustive DPLL algorithm with a formal semantics, a proof of correctness, and a modular design. The modular design is based on the separation of the core model counting algorithm from SAT solving techniques. We also show that the trace of our algorithm belongs to the language of Sentential Decision Diagrams (SDDs), which is a subset of Decision-DNNFs, the trace of existing state-of-the-art model counters. Still, our experimental analysis shows comparable results against state-of-the-art model counters. Furthermore, we obtain the first top-down SDD compiler, and show orders-of-magnitude improvements in SDD construction time against the existing bottom-up SDD compiler.",
"title": ""
},
{
"docid": "b1a538752056e91fd5800911f36e6eb0",
"text": "BACKGROUND\nThe current, so-called \"Millennial\" generation of learners is frequently characterized as having deep understanding of, and appreciation for, technology and social connectedness. This generation of learners has also been molded by a unique set of cultural influences that are essential for medical educators to consider in all aspects of their teaching, including curriculum design, student assessment, and interactions between faculty and learners.\n\n\nAIM\n The following tips outline an approach to facilitating learning of our current generation of medical trainees.\n\n\nMETHOD\n The method is based on the available literature and the authors' experiences with Millennial Learners in medical training.\n\n\nRESULTS\n The 12 tips provide detailed approaches and specific strategies for understanding and engaging Millennial Learners and enhancing their learning.\n\n\nCONCLUSION\n With an increased understanding of the characteristics of the current generation of medical trainees, faculty will be better able to facilitate learning and optimize interactions with Millennial Learners.",
"title": ""
},
{
"docid": "84f7b499cd608de1ee7443fcd7194f19",
"text": "In this paper, we present a new computationally efficient numerical scheme for the minimizing flow approach for optimal mass transport (OMT) with applications to non-rigid 3D image registration. The approach utilizes all of the gray-scale data in both images, and the optimal mapping from image A to image B is the inverse of the optimal mapping from B to A. Further, no landmarks need to be specified, and the minimizer of the distance functional involved is unique. Our implementation also employs multigrid, and parallel methodologies on a consumer graphics processing unit (GPU) for fast computation. Although computing the optimal map has been shown to be computationally expensive in the past, we show that our approach is orders of magnitude faster then previous work and is capable of finding transport maps with optimality measures (mean curl) previously unattainable by other works (which directly influences the accuracy of registration). We give results where the algorithm was used to compute non-rigid registrations of 3D synthetic data as well as intra-patient pre-operative and post-operative 3D brain MRI datasets.",
"title": ""
},
{
"docid": "adb17811b539f419285779d62932736d",
"text": "This paper proposes a high efficiency low cost AC/DC converter for adapter application. In order to achieve high efficiency and low cost for adapter with universal AC input, a single stage bridgeless flyback PFC converter with peak current clamping technique was proposed. Compared with conventional flyback PFC converter, the conduction loss is reduced due to bridgeless structure. And the size of transformer can also be significantly reduced due to lower peak current, which results in lower cost and higher power density. Detailed operation principles and design considerations are illustrated. Experimental results from a 90W prototype with universal input and 20V/4.5A output are presented to verify the operation and performance of the proposed converter. The minimum efficiency at full load is above 91% over the entire input range.",
"title": ""
},
{
"docid": "c9a04b21e60e971908e02e2804962283",
"text": "We used a dynamically scaled model insect to measure the rotational forces produced by a flapping insect wing. A steadily translating wing was rotated at a range of constant angular velocities, and the resulting aerodynamic forces were measured using a sensor attached to the base of the wing. These instantaneous forces were compared with quasi-steady estimates based on translational force coefficients. Because translational and rotational velocities were constant, the wing inertia was negligible, and any difference between measured forces and estimates based on translational force coefficients could be attributed to the aerodynamic effects of wing rotation. By factoring out the geometry and kinematics of the wings from the rotational forces, we determined rotational force coefficients for a range of angular velocities and different axes of rotation. The measured coefficients were compared with a mathematical model developed for two-dimensional motions in inviscid fluids, which we adapted to the three-dimensional case using blade element theory. As predicted by theory, the rotational coefficient varied linearly with the position of the rotational axis for all angular velocities measured. The coefficient also, however, varied with angular velocity, in contrast to theoretical predictions. Using the measured rotational coefficients, we modified a standard quasi-steady model of insect flight to include rotational forces, translational forces and the added mass inertia. The revised model predicts the time course of force generation for several different patterns of flapping kinematics more accurately than a model based solely on translational force coefficients. By subtracting the improved quasi-steady estimates from the measured forces, we isolated the aerodynamic forces due to wake capture.",
"title": ""
}
] |
scidocsrr
|
97157edb1e8fcf3f199674a155f58d29
|
Twitter data analysis for studying communities of practice in the media industry
|
[
{
"docid": "c9c4ed4a7e8e6ef8ca2bcf146001d2e5",
"text": "Microblogging services such as Twitter are said to have the potential for increasing political participation. Given the feature of 'retweeting' as a simple yet powerful mechanism for information diffusion, Twitter is an ideal platform for users to spread not only information in general but also political opinions through their networks as Twitter may also be used to publicly agree with, as well as to reinforce, someone's political opinions or thoughts. Besides their content and intended use, Twitter messages ('tweets') also often convey pertinent information about their author's sentiment. In this paper, we seek to examine whether sentiment occurring in politically relevant tweets has an effect on their retweetability (i.e., how often these tweets will be retweeted). Based on a data set of 64,431 political tweets, we find a positive relationship between the quantity of words indicating affective dimensions, including positive and negative emotions associated with certain political parties or politicians, in a tweet and its retweet rate. Furthermore, we investigate how political discussions take place in the Twitter network during periods of political elections with a focus on the most active and most influential users. Finally, we conclude by discussing the implications of our results.",
"title": ""
},
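A toy version of the analysis described in the passage above might count affect-laden words per tweet and regress retweet counts on that count. The miniature lexicon and the example tweets below are invented for illustration; the cited study uses a far larger political-tweet corpus and richer sentiment measures.

```python
# Toy sketch: relate the number of affect words in a tweet to its retweet count.
import numpy as np
from sklearn.linear_model import LinearRegression

AFFECT_WORDS = {"great", "terrible", "love", "hate", "hope", "fear", "win", "fail"}

def affect_count(tweet):
    return sum(1 for w in tweet.lower().split() if w.strip(".,!?") in AFFECT_WORDS)

tweets = [
    ("great win for the party tonight", 42),
    ("the committee met on tuesday", 3),
    ("terrible policy, I fear for the country", 57),
    ("new poll numbers released", 8),
]
X = np.array([[affect_count(text)] for text, _ in tweets])
y = np.array([retweets for _, retweets in tweets])

model = LinearRegression().fit(X, y)
print(model.coef_[0])   # a positive slope mirrors the reported relationship
```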
{
"docid": "e59d1a3936f880233001eb086032d927",
"text": "In microblogging services such as Twitter, the users may become overwhelmed by the raw data. One solution to this problem is the classification of short text messages. As short texts do not provide sufficient word occurrences, traditional classification methods such as \"Bag-Of-Words\" have limitations. To address this problem, we propose to use a small set of domain-specific features extracted from the author's profile and text. The proposed approach effectively classifies the text to a predefined set of generic classes such as News, Events, Opinions, Deals, and Private Messages.",
"title": ""
},
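The profile- and text-based feature idea in the passage above could be sketched roughly as below; the specific features, class labels, and training examples are assumptions made for illustration, not the paper's actual feature set or data.

```python
# Illustrative sketch only: classify short messages into broad classes using a
# handful of author/text features of the kind the passage describes.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def features(author_is_org, text):
    t = text.lower()
    return {
        "author_is_org": author_is_org,           # taken from the author's profile
        "has_url": "http" in t,
        "has_mention": "@" in t,
        "has_deal_word": any(w in t for w in ("sale", "deal", "% off")),
        "has_opinion_word": any(w in t for w in ("think", "love", "hate")),
        "starts_with_digit": t[:1].isdigit(),
    }

train = [
    ((True,  "Breaking: storm hits the coast http://news.ex"), "News"),
    ((False, "I think this album is amazing, love it"),        "Opinions"),
    ((True,  "50% off all shoes this weekend, big sale"),      "Deals"),
    ((False, "@sam are we still on for lunch?"),               "Private"),
]
X = [features(a, t) for (a, t), _ in train]
y = [label for _, label in train]

clf = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(X, y)
print(clf.predict([features(False, "huge deal: 30% off laptops http://shop.ex")]))
```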
{
"docid": "7641f8f3ed2afd0c16665b44c1216e79",
"text": "In this article we explore the behavior of Twitter users under an emergency situation. In particular, we analyze the activity related to the 2010 earthquake in Chile and characterize Twitter in the hours and days following this disaster. Furthermore, we perform a preliminary study of certain social phenomenons, such as the dissemination of false rumors and confirmed news. We analyze how this information propagated through the Twitter network, with the purpose of assessing the reliability of Twitter as an information source under extreme circumstances. Our analysis shows that the propagation of tweets that correspond to rumors differs from tweets that spread news because rumors tend to be questioned more than news by the Twitter community. This result shows that it is posible to detect rumors by using aggregate analysis on tweets.",
"title": ""
},
{
"docid": "eae92d06d00d620791e6b247f8e63c36",
"text": "Tagging systems have become major infrastructures on the Web. They allow users to create tags that annotate and categorize content and share them with other users, very helpful in particular for searching multimedia content. However, as tagging is not constrained by a controlled vocabulary and annotation guidelines, tags tend to be noisy and sparse. Especially new resources annotated by only a few users have often rather idiosyncratic tags that do not reflect a common perspective useful for search. In this paper we introduce an approach based on Latent Dirichlet Allocation (LDA) for recommending tags of resources in order to improve search. Resources annotated by many users and thus equipped with a fairly stable and complete tag set are used to elicit latent topics to which new resources with only a few tags are mapped. Based on this, other tags belonging to a topic can be recommended for the new resource. Our evaluation shows that the approach achieves significantly better precision and recall than the use of association rules, suggested in previous work, and also recommends more specific tags. Moreover, extending resources with these recommended tags significantly improves search for new resources.",
"title": ""
}
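One way to realize the LDA-based recommendation idea described in the passage above is sketched below, treating each resource's tag set as a document: fit LDA on well-tagged resources, infer the topic mixture of a sparsely tagged new resource, and recommend the most probable tags of its dominant topic. The toy corpus, topic count, and helper name `recommend_tags` are assumptions for illustration, not the paper's setup.

```python
# Rough sketch: recommend tags for a sparsely tagged resource via LDA topics
# learned from well-annotated resources.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# tag sets of well-annotated resources, one space-separated string per resource
tagged_resources = [
    "python programming code tutorial",
    "python numpy scipy science",
    "travel europe photography city",
    "photography camera lens city",
]
vec = CountVectorizer()
X = vec.fit_transform(tagged_resources)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

def recommend_tags(existing_tags, n_tags=3):
    """Map a sparsely tagged resource to topics and return the top tags of
    its most probable topic, excluding tags it already carries."""
    doc = vec.transform([" ".join(existing_tags)])
    topic = np.argmax(lda.transform(doc))
    order = np.argsort(lda.components_[topic])[::-1]
    vocab = vec.get_feature_names_out()
    return [vocab[i] for i in order if vocab[i] not in existing_tags][:n_tags]

print(recommend_tags(["python"]))   # e.g. other programming-related tags
```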
] |
[
{
"docid": "bb7ac8c753d09383ecbf1c8cd7572d05",
"text": "Skills learned through (deep) reinforcement learning often generalizes poorly across domains and re-training is necessary when presented with a new task. We present a framework that combines techniques in formal methods with reinforcement learning (RL). The methods we provide allows for convenient specification of tasks with logical expressions, learns hierarchical policies (meta-controller and low-level controllers) with well-defined intrinsic rewards, and construct new skills from existing ones with little to no additional exploration. We evaluate the proposed methods in a simple grid world simulation as well as a more complicated kitchen environment in AI2Thor (Kolve et al. [2017]).",
"title": ""
},
{
"docid": "4825e492dc1b7b645a5b92dde0c766cd",
"text": "This article shows how language processing is intimately tuned to input frequency. Examples are given of frequency effects in the processing of phonology, phonotactics, reading, spelling, lexis, morphosyntax, formulaic language, language comprehension, grammaticality, sentence production, and syntax. The implications of these effects for the representations and developmental sequence of SLA are discussed. Usage-based theories hold that the acquisition of language is exemplar based. It is the piecemeal learning of many thousands of constructions and the frequency-biased abstraction of regularities within them. Determinants of pattern productivity include the power law of practice, cue competition and constraint satisfaction, connectionist learning, and effects of type and token frequency. The regularities of language emerge from experience as categories and prototypical patterns. The typical route of emergence of constructions is from formula, through low-scope pattern, to construction. Frequency plays a large part in explaining sociolinguistic variation and language change. Learners’ sensitivity to frequency in all these domains has implications for theories of implicit and explicit learning and their interactions. The review concludes by considering the history of frequency as an explanatory concept in theoretical and applied linguistics, its 40 years of exile, and its necessary reinstatement as a bridging variable that binds the different schools of language acquisition research.",
"title": ""
},
{
"docid": "2949191659d01de73abdc749d5e51ca7",
"text": "BACKGROUND\nIsolated infraspinatus muscle atrophy is common in overhead athletes, who place significant and repetitive stresses across their dominant shoulders. Studies on volleyball and baseball players report infraspinatus atrophy in 4% to 34% of players; however, the prevalence of infraspinatus atrophy in professional tennis players has not been reported.\n\n\nPURPOSE\nTo investigate the incidence of isolated infraspinatus atrophy in professional tennis players and to identify any correlations with other physical examination findings, ranking performance, and concurrent shoulder injuries.\n\n\nSTUDY DESIGN\nCross-sectional study; Level of evidence, 3.\n\n\nMETHODS\nA total of 125 professional female tennis players underwent a comprehensive preparticipation physical health status examination. Two orthopaedic surgeons examined the shoulders of all players and obtained digital goniometric measurements of range of motion (ROM). Infraspinatus atrophy was defined as loss of soft tissue bulk in the infraspinatus scapula fossa (and increased prominence of dorsal scapular bony anatomy) of the dominant shoulder with clear asymmetry when compared with the contralateral side. Correlations were examined between infraspinatus atrophy and concurrent shoulder disorders, clinical examination findings, ROM, glenohumeral internal rotation deficit, singles tennis ranking, and age.\n\n\nRESULTS\nThere were 65 players (52%) with evidence of infraspinatus atrophy in their dominant shoulders. No wasting was noted in the nondominant shoulder of any player. No statistically significant differences were seen in mean age, left- or right-hand dominance, height, weight, or body mass index for players with or without atrophy. Of the 77 players ranked in the top 100, 58% had clinical infraspinatus atrophy, compared with 40% of players ranked outside the top 100. No associations were found with static physical examination findings (scapular dyskinesis, ROM glenohumeral internal rotation deficit, postural abnormalities), concurrent shoulder disorders, or compromised performance when measured by singles ranking.\n\n\nCONCLUSION\nThis study reports a high level of clinical infraspinatus atrophy in the dominant shoulder of elite female tennis players. Infraspinatus atrophy was associated with a higher performance ranking, and no functional deficits or associations with concurrent shoulder disorders were found. Team physicians can be reassured that infraspinatus atrophy is a common finding in high-performing tennis players and, if asymptomatic, does not appear to significantly compromise performance.",
"title": ""
},
{
"docid": "b771737351b984881e0fce7f9bb030e8",
"text": "BACKGROUND\nConsidering the high prevalence of dementia, it would be of great value to develop effective tools to improve cognitive function. We examined the effects of a human-type communication robot on cognitive function in elderly women living alone.\n\n\nMATERIAL/METHODS\nIn this study, 34 healthy elderly female volunteers living alone were randomized to living with either a communication robot or a control robot at home for 8 weeks. The shape, voice, and motion features of the communication robot resemble those of a 3-year-old boy, while the control robot was not designed to talk or nod. Before living with the robot and 4 and 8 weeks after living with the robot, experiments were conducted to evaluate a variety of cognitive functions as well as saliva cortisol, sleep, and subjective fatigue, motivation, and healing.\n\n\nRESULTS\nThe Mini-Mental State Examination score, judgement, and verbal memory function were improved after living with the communication robot; those functions were not altered with the control robot. In addition, the saliva cortisol level was decreased, nocturnal sleeping hours tended to increase, and difficulty in maintaining sleep tended to decrease with the communication robot, although alterations were not shown with the control. The proportions of the participants in whom effects on attenuation of fatigue, enhancement of motivation, and healing could be recognized were higher in the communication robot group relative to the control group.\n\n\nCONCLUSIONS\nThis study demonstrates that living with a human-type communication robot may be effective for improving cognitive functions in elderly women living alone.",
"title": ""
},
{
"docid": "15beeb4b1e8c07ce8953b1873816a1f6",
"text": "Neuromorphic systems increasingly attract research interest owing to their ability to provide biologically inspired methods of computing, alternative to the classic von Neumann architecture. In these systems, computing relies on spike-based communication between neurons, and memory is represented by evolving states of the synaptic interconnections. In this work, we first demonstrate how spike-timing-dependent plasticity (STDP) based synapses can be realized using the crystal-growth dynamics of phase-change memristors. Then, we present a novel learning architecture comprising an integrate-and-fire neuron and an array of phase-change synapses that is capable of detecting temporal correlations in parallel input streams. We demonstrate a continuous re-learning operation on a sequence of binary 20×20 pixel images in the presence of significant background noise. Experimental results using an array of phase-change cells as synaptic elements confirm the functionality and performance of the proposed learning architecture.",
"title": ""
},
{
"docid": "a0b2219d315b9ee35af9e412a174875b",
"text": "VP-ellipsis generally requires a syntactically matching antecedent. However, many documented examples exist where the antecedent is not appropriate. Kehler (2000, 2002) proposed an elegant theory which predicts a syntactic antecedent for an elided VP is required only for a certain discourse coherence relation (resemblance) not for cause-effect relations. Most of the data Kehler used to motivate his theory come from corpus studies and thus do not consist of true minimal pairs. We report five experiments testing predictions of the coherence theory, using standard minimal pair materials. The results raise questions about the empirical basis for coherence theory because parallelism is preferred for all coherence relations, not just resemblance relations. Further, strict identity readings, which should not be available when a syntactic antecedent is required, are influenced by parallelism per se, holding the discourse coherence relation constant. This draws into question the causal role of coherence relations in processing VP ellipsis.",
"title": ""
},
{
"docid": "2f84b44cdce52068b7e692dad7feb178",
"text": "Two stage PCR has been used to introduce single amino acid substitutions into the EF hand structures of the Ca(2+)-activated photoprotein aequorin. Transcription of PCR products, followed by cell free translation of the mRNA, allowed characterisation of recombinant proteins in vitro. Substitution of D to A at position 119 produced an active photoprotein with a Ca2+ affinity reduced by a factor of 20 compared to the wild type recombinant aequorin. This recombinant protein will be suitable for measuring Ca2+ inside the endoplasmic reticulum, the mitochondria, endosomes and the outside of live cells.",
"title": ""
},
{
"docid": "af461e1a81e234f5ea61652f97d03f18",
"text": "In this work we propose a technique that transfers supervision between images from different modalities. We use learned representations from a large labeled modality as supervisory signal for training representations for a new unlabeled paired modality. Our method enables learning of rich representations for unlabeled modalities and can be used as a pre-training procedure for new modalities with limited labeled data. We transfer supervision from labeled RGB images to unlabeled depth and optical flow images and demonstrate large improvements for both these cross modal supervision transfers.",
"title": ""
},
{
"docid": "c38a6685895c23620afb6570be4c646b",
"text": "Today, artificial neural networks (ANNs) are widely used in a variety of applications, including speech recognition, face detection, disease diagnosis, etc. And as the emerging field of ANNs, Long Short-Term Memory (LSTM) is a recurrent neural network (RNN) which contains complex computational logic. To achieve high accuracy, researchers always build large-scale LSTM networks which are time-consuming and power-consuming. In this paper, we present a hardware accelerator for the LSTM neural network layer based on FPGA Zedboard and use pipeline methods to parallelize the forward computing process. We also implement a sparse LSTM hidden layer, which consumes fewer storage resources than the dense network. Our accelerator is power-efficient and has a higher speed than ARM Cortex-A9 processor.",
"title": ""
},
{
"docid": "522a3c3287e65eb7fac83c30993c1156",
"text": "Numerical simulations are performed with a two-dimensional 2D fully nonlinear potential flow FNPF model for tsunami generation by two idealized types of submarine mass failure SMF : underwater slides and slumps. These simulations feature rigid or deforming SMFs with a Gaussian cross section, translating down a plane slope. In each case, the SMF center of mass motion is expressed as a function of geometric, hydrodynamic, and material parameters, following a simple wavemaker formalism, and prescribed as a boundary condition in the FNPF model. Tsunami amplitudes and runup are obtained from computed free surface elevations. Model results are experimentally validated for a rigid 2D slide. Sensitivity studies are performed to estimate the effects of SMF–shape, type, and initial submergence depth—on the generated tsunamis. A strong SMF deformation during motion is shown to significantly enhance tsunami generation, particularly in the far-field. Typical slumps are shown to generate smaller tsunamis than corresponding slides. Both tsunami amplitude and runup are shown to depend strongly on initial SMF submergence depth. For the selected SMF idealized geometry, this dependence is simply expressed by power laws. Other sensitivity analyses are presented in a companion paper, and results from numerical simulations are converted into empirical curve fits predicting characteristic tsunami amplitudes as functions of nondimensional governing parameters. It should be stressed that these empirical formulas are only valid in the vicinity of the tsunami sources and, because of the complexity of the problem, many simplifications were necessary. It is further shown in the companion paper how 2D results can be modified to account for three-dimensional tsunami generation and used for quickly estimating tsunami hazard or for performing simple",
"title": ""
},
{
"docid": "059b8861a00bb0246a07fa339b565079",
"text": "Recognizing facial action units (AUs) from spontaneous facial expressions is still a challenging problem. Most recently, CNNs have shown promise on facial AU recognition. However, the learned CNNs are often overfitted and do not generalize well to unseen subjects due to limited AU-coded training images. We proposed a novel Incremental Boosting CNN (IB-CNN) to integrate boosting into the CNN via an incremental boosting layer that selects discriminative neurons from the lower layer and is incrementally updated on successive mini-batches. In addition, a novel loss function that accounts for errors from both the incremental boosted classifier and individual weak classifiers was proposed to fine-tune the IB-CNN. Experimental results on four benchmark AU databases have demonstrated that the IB-CNN yields significant improvement over the traditional CNN and the boosting CNN without incremental learning, as well as outperforming the state-of-the-art CNN-based methods in AU recognition. The improvement is more impressive for the AUs that have the lowest frequencies in the databases.",
"title": ""
},
{
"docid": "e08bc715d679ba0442883b4b0e481998",
"text": "Rheology, as a branch of physics, studies the deformation and flow of matter in response to an applied stress or strain. According to the materials’ behaviour, they can be classified as Newtonian or non-Newtonian (Steffe, 1996; Schramm, 2004). The most of the foodstuffs exhibit properties of non-Newtonian viscoelastic systems (Abang Zaidel et al., 2010). Among them, the dough can be considered as the most unique system from the point of material science. It is viscoelastic system which exhibits shear-thinning and thixotropic behaviour (Weipert, 1990). This behaviour is the consequence of dough complex structure in which starch granules (75-80%) are surrounded by three-dimensional protein (20-25%) network (Bloksma, 1990, as cited in Weipert, 2006). Wheat proteins are consisted of gluten proteins (80-85% of total wheat protein) which comprise of prolamins (in wheat gliadins) and glutelins (in wheat glutenins) and non gluten proteins (15-20% of the total wheat proteins) such as albumins and globulins (Veraverbeke & Delcour, 2002). Gluten complex is a viscoelastic protein responsible for dough structure formation. Among the cereal technologists, rheology is widely recognized as a valuable tool in quality assessment of flour. Hence, in the cereal scientific community, rheological measurements are generally employed throughout the whole processing chain in order to monitor the mechanical properties, molecular structure and composition of the material, to imitate materials’ behaviour during processing and to anticipate the quality of the final product (Dobraszczyk & Morgenstern, 2003). Rheology is particularly important technique in revealing the influence of flour constituents and additives on dough behaviour during breadmaking. There are many test methods available to measure rheological properties, which are commonly divided into empirical (descriptive, imitative) and fundamental (basic) (Scott Blair, 1958 as cited in Weipert, 1990). Although being criticized due to their shortcomings concerning inflexibility in defining the level of deforming force, usage of strong deformation forces, interpretation of results in relative non-SI units, large sample requirements and its impossibility to define rheological parameters such as stress, strain, modulus or viscosity (Weipert, 1990; Dobraszczyk & Morgenstern, 2003), empirical rheological measurements are still indispensable in the cereal quality laboratories. According to the empirical rheological parameters it is possible to determine the optimal flour quality for a particular purpose. The empirical techniques used for dough quality",
"title": ""
},
{
"docid": "4b8f4a8b6c303ea0a20c48840f677ea7",
"text": "PURPOSE/OBJECTIVES\nTo identify subgroups of outpatients with cancer based on their experiences with the symptoms of fatigue, sleep disturbance, depression, and pain; to explore whether patients in the subgroups differed on selected demographic, disease, and treatment characteristics; and to determine whether patients in the subgroups differed on two important patient outcomes: functional status and quality of life (QOL).\n\n\nDESIGN\nDescriptive, correlational study.\n\n\nSETTING\nFour outpatient oncology practices in northern California.\n\n\nSAMPLE\n191 outpatients with cancer receiving active treatment.\n\n\nMETHODS\nPatients completed a demographic questionnaire, Karnofsky Performance Status scale, Lee Fatigue Scale, General Sleep Disturbance Scale, Center for Epidemiological Studies Depression Scale, Multidimensional Quality-of-Life Scale Cancer, and a numeric rating scale of worst pain intensity. Medical records were reviewed for disease and treatment information. Cluster analysis was used to identify patient subgroups based on patients symptom experiences. Differences in demographic, disease, and treatment characteristics as well as in outcomes were evaluated using analysis of variance and chi square analysis.\n\n\nMAIN RESEARCH VARIABLES\nSubgroup membership, fatigue, sleep disturbance, depression, pain, functional status, and QOL.\n\n\nFINDINGS\nFour relatively distinct patient subgroups were identified based on patients experiences with four highly prevalent and related symptoms.\n\n\nCONCLUSIONS\nThe subgroup of patients who reported low levels of all four symptoms reported the best functional status and QOL.\n\n\nIMPLICATIONS FOR NURSING\nThe findings from this study need to be replicated before definitive clinical practice recommendations can be made. Until that time, clinicians need to assess patients for the occurrence of multiple symptoms that may place them at increased risk for poorer outcomes.",
"title": ""
},
{
"docid": "a2e2117e3d2a01f2f28835350ba1d732",
"text": "Previously, several natural integral transforms of Minkowski question mark function F (x) were introduced by the author. Each of them is uniquely characterized by certain regularity conditions and the functional equation, thus encoding intrinsic information about F (x). One of them the dyadic period function G(z) was defined via certain transcendental integral. In this paper we introduce a family of “distributions” Fp(x) for R p ≥ 1, such that F1(x) is the question mark function and F2(x) is a discrete distribution with support on x = 1. Thus, all the aforementioned integral transforms are calculated for such p. As a consequence, the generating function of moments of F p(x) satisfies the three term functional equation. This has an independent interest, though our main concern is the information it provides about F (x). This approach yields certain explicit series for G(z). This also solves the problem in expressing the moments of F (x) in closed form.",
"title": ""
},
{
"docid": "46c754d52ccda0e334cd691e10f8aeac",
"text": "This study examines the development of technology, pedagogy, and content knowledge (TPACK) in four in-service secondary science teachers as they participated in a professional development program focusing on technology integration into K-12 classrooms to support science as inquiry teaching. In the program, probeware, mind-mapping tools (CMaps), and Internet applications ― computer simulations, digital images, and movies — were introduced to the science teachers. A descriptive multicase study design was employed to track teachers’ development over the yearlong program. Data included interviews, surveys, classroom observations, teachers’ technology integration plans, and action research study reports. The program was found to have positive impacts to varying degrees on teachers’ development of TPACK. Contextual factors and teachers’ pedagogical reasoning affected teachers’ ability to enact in their classrooms what they learned in the program. Suggestions for designing effective professional development programs to improve science teachers’ TPACK are discussed. Contemporary Issues in Technology and Teacher Education, 9(1) 26 Science teaching is such a complex, dynamic profession that it is difficult for a teacher to stay up-to-date. For a teacher to grow professionally and become better as a teacher of science, a special, continuous effort is required (Showalter, 1984, p. 21). To better prepare students for the science and technology of the 21st century, the current science education reforms ask science teachers to integrate technology and inquiry-based teaching into their instruction (American Association for the Advancement of Science, 1993; National Research Council [NRC], 1996, 2000). The National Science Education Standards (NSES) define inquiry as “the diverse ways in which scientists study the natural world and propose explanations based on the evidence derived from their work” (NRC, 1996, p. 23). The NSES encourage teachers to apply “a variety of technologies, such as hand tools, measuring instruments, and calculators [as] an integral component of scientific investigations” to support student inquiry (p.175). Utilizing technology tools in inquiry-based science classrooms allows students to work as scientists (Novak & Krajcik, 2006, p. 76). Teaching science as emphasized in the reform documents, however, is not easy. Science teachers experience various constraints, such as lack of time, equipment, pedagogical content knowledge, and pedagogical skills in implementing reform-based teaching strategies (Crawford, 1999, 2000; Roehrig & Luft, 2004, 2006). One way to overcome the barriers and to reform teaching is to participate in professional development programs that provide opportunities for social, personal, and professional development (Bell & Gilbert, 2004). Professional development programs in which teachers collaborate with other teachers, reflect on their classroom practices, and receive support and feedback have been shown to foster teachers’ professional development (Grossman, Wineburg, & Woolworth, 2001; Huffman, 2006; Loucks-Horsley, Love, Stiles, Mundry, & Hewson, 2003). In this light, the professional development program, Technology Enhanced Communities (TEC), which is presented in this paper, was designed to create a learning community where science teachers can learn to integrate technology into their teaching to support student inquiry. 
TEC has drawn heavily on situated learning theory, which defines learning as situated, social, and distributed (Brown, Collins, & Duguid, 1989; Lave & Wenger, 1991; Putnam & Borko, 2000). Since a situated learning environment supports collaboration among participants (Brown et al., 1989; Lave & Wenger, 1991; Putnam & Borko, 2000), and the collaboration among teachers enhances teacher learning (CochranSmith & Lytle, 1999; Krajcik, Blumenfeld, Marx, & Soloway, 1994; Little, 1990), TEC was designed to provide teachers with opportunities to build a community that enables learning and is distributed among teachers. The situated learning theory was used as a design framework for TEC, but technology, pedagogy, and content knowledge (TPACK) was employed as a theoretical framework for the present study. Since the concept of TPACK has emerged recently, there has been no consensus on the nature and development of TPACK among researchers and teacher educators. As suggested by many authors in the Handbook of Technological Pedagogical Content Knowledge (AACTE Committee on Innovation and Technology, 2008), more research needs to examine the role of teacher preparation programs teachers’ beliefs (Niess, 2008), and specific student and school contexts (McCrory, 2008) regarding the nature and development of TPACK. Thus, this study was conducted to investigate the effects of an in-service teacher education program (TEC) on science teachers’ development of Contemporary Issues in Technology and Teacher Education, 9(1) 27 TPACK. The research question guiding this study was: How does the professional development program, TEC, enhance science teachers’ TPACK? Review of the Relevant Literature Technology Integration Into Science Classrooms Educational technology tools such as computers, probeware, data collection and analysis software, digital microscopes, hypermedia/multimedia, student response systems, and interactive white boards can help students actively engage in the acquisition of scientific knowledge and development of the nature of science and inquiry. When educational technology tools are used appropriately and effectively in science classrooms, students actively engage in their knowledge construction and improve their thinking and problem solving skills (Trowbridge, Bybee, & Powell, 2008). Many new educational technology tools are now available for science teachers. However, integrating technology into instruction is still challenging for most teachers (Norris, Sullivan, Poirot, & Soloway, 2003; Office of Technology Assessment [OTA], 1995). The existing studies demonstrate that technology integration is a long-term process requiring commitment (Doering, Hughes, & Huffman, 2003; Hughes, Kerr, & Ooms, 2005; Sandholtz, Ringstaff, & Dwyer, 1997). Teachers need ongoing support while they make efforts to develop and sustain effective technology integration. Professional learning communities, where teachers collaborate with other teachers to improve and support their learning and teaching, are effective for incorporating technology into teaching (Krajcik et al., 1994; Little, 1990). As a part of a community, teachers share their knowledge, practices, and experiences; discuss issues related to student learning; and critique and support each others’ knowledge and pedagogical growth while they are learning about new technologies (Hughes et al., 2005). Technology integration is most commonly associated with professional development opportunities. 
The need for participant-driven professional development programs in which teachers engage in inquiry and reflect on their practices to improve their learning about technology has been emphasized by many researchers (Loucks-Horsley et al., 2003; Zeichner, 2003). Zeichner, for example, argued that teacher action research is an important aspect of effective professional development. According to Zeichner, to improve their learning and practices, teachers should become teacher researchers, conduct self-study research, and engage in teacher research groups. These collaborative groups provide teachers with support and opportunities to deeply analyze their learning and practices. Pedagogical Content Knowledge Shulman (1987) defined seven knowledge bases for teachers: content knowledge, general pedagogical knowledge, curriculum knowledge, pedagogical content knowledge (PCK), knowledge of learners and their characteristics, knowledge of educational context, and knowledge of educational ends, goals, and values. According to Shulman, among these knowledge bases, PCK plays the most important role in effective teaching. He argued that teachers should develop PCK, which is “the particular form of content knowledge that embodies the aspects of content most germane to its teachability” (Shulman, 1986, p. 9). PCK is not only a special form of content knowledge but also a “blending of content and pedagogy into an understanding of how particular topics, problems, or issues are Contemporary Issues in Technology and Teacher Education, 9(1) 28 organized, presented, and adapted to the diverse interests and abilities of learners, and presented for instruction” (Shulman, 1987, p. 8). Shulman argued that teachers not only need to know their content but also need to know how to present it effectively. Good teaching “begins with an act of reason, continues with a process of reasoning, culminates in performances of imparting, eliciting, involving, or enticing, and is then thought about some more until the process begins again” (Shulman, 1987, p. 13). Thus, to make effective pedagogical decisions about what to teach and how to teach it, teachers should develop both their PCK and pedagogical reasoning skills. Since Shulman’s initial conceptualization of PCK, researchers have developed new forms and components of PCK (e.g., Cochran, DeRuiter, & King, 1993; Grossman, 1990; Marks, 1990; Magnusson, Borko, & Krajcik, 1994; Tamir, 1988). Some researchers while following Shulman’s original classification have added new components (Grossman, 1990; Marks 1990; Fernandez-Balboa & Stiehl, 1995), while others have developed different conceptions of PCK and argued about the blurry borders between PCK and content knowledge (Cochran et al., 1993). Building on Shulman’s groundbreaking work, these researchers have generated a myriad of versions of PCK. In a recent review of the PCK literature, Lee, Brown, Luft, and Roehrig (2007) identified a consensus among researchers on the following two components of PCK: (a) teachers’ knowledge of student learning to translate and transform content to",
"title": ""
},
{
"docid": "83728a9b746c7d3c3ea1e89ef01f9020",
"text": "This paper presents the design of the robot AILA, a mobile dual-arm robot system developed as a research platform for investigating aspects of the currently booming multidisciplinary area of mobile manipulation. The robot integrates and allows in a single platform to perform research in most of the areas involved in autonomous robotics: navigation, mobile and dual-arm manipulation planning, active compliance and force control strategies, object recognition, scene representation, and semantic perception. AILA has 32 degrees of freedom, including 7-DOF arms, 4-DOF torso, 2-DOF head, and a mobile base equipped with six wheels, each of them with two degrees of freedom. The primary design goal was to achieve a lightweight arm construction with a payload-to-weight ratio greater than one. Besides, an adjustable body should sustain the dual-arm system providing an extended workspace. In addition, mobility is provided by means of a wheel-based mobile base. As a result, AILA's arms can lift 8kg and weigh 5.5kg, thus achieving a payload-to-weight ratio of 1.45. The paper will provide an overview of the design, especially in the mechatronics area, as well as of its realization, the sensors incorporated in the system, and its control software.",
"title": ""
},
{
"docid": "f0f2cdccd8f415cbd3fffcea4509562a",
"text": "Textual inference is an important component in many applications for understanding natural language. Classical approaches to textual inference rely on logical representations for meaning, which may be regarded as “external” to the natural language itself. However, practical applications usually adopt shallower lexical or lexical-syntactic representations, which correspond closely to language structure. In many cases, such approaches lack a principled meaning representation and inference framework. We describe an inference formalism that operates directly on language-based structures, particularly syntactic parse trees. New trees are generated by applying inference rules, which provide a unified representation for varying types of inferences. We use manual and automatic methods to generate these rules, which cover generic linguistic structures as well as specific lexical-based inferences. We also present a novel packed data-structure and a corresponding inference algorithm that allows efficient implementation of this formalism. We proved the correctness of the new algorithm and established its efficiency analytically and empirically. The utility of our approach was illustrated on two tasks: unsupervised relation extraction from a large corpus, and the Recognizing Textual Entailment (RTE) benchmarks.",
"title": ""
},
{
"docid": "6afdf8c4f509de6481bf4cf8d28c77a4",
"text": "We propose a Learning from Demonstration (LfD) algorithm which leverages expert data, even if they are very few or inaccurate. We achieve this by using both expert data, as well as reinforcement signals gathered through trial-and-error interactions with the environment. The key idea of our approach, Approximate Policy Iteration with Demonstration (APID), is that expert’s suggestions are used to define linear constraints which guide the optimization performed by Approximate Policy Iteration. We prove an upper bound on the Bellman error of the estimate computed by APID at each iteration. Moreover, we show empirically that APID outperforms pure Approximate Policy Iteration, a state-of-the-art LfD algorithm, and supervised learning in a variety of scenarios, including when very few and/or suboptimal demonstrations are available. Our experiments include simulations as well as a real robot path-finding task.",
"title": ""
},
{
"docid": "dd2819d0413a1d41c602aef4830888a4",
"text": "Presented here is a fast method that combines curve matching techniques with a surface matching algorithm to estimate the positioning and respective matching error for the joining of three-dimensional fragmented objects. Furthermore, this paper describes how multiple joints are evaluated and how the broken artefacts are clustered and transformed to form potential solutions of the assemblage problem. q 2003 Elsevier Science B.V. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
d2543b85272f5e88a5ddeea39bcfc6ce
|
Knock x knock: the design and evaluation of a unified authentication management system
|
[
{
"docid": "51a2d48f43efdd8f190fd2b6c9a68b3c",
"text": "Textual passwords are often the only mechanism used to authenticate users of a networked system. Unfortunately, many passwords are easily guessed or cracked. In an attempt to strengthen passwords, some systems instruct users to create mnemonic phrase-based passwords. A mnemonic password is one where a user chooses a memorable phrase and uses a character (often the first letter) to represent each word in the phrase.In this paper, we hypothesize that users will select mnemonic phrases that are commonly available on the Internet, and that it is possible to build a dictionary to crack mnemonic phrase-based passwords. We conduct a survey to gather user-generated passwords. We show the majority of survey respondents based their mnemonic passwords on phrases that can be found on the Internet, and we generate a mnemonic password dictionary as a proof of concept. Our 400,000-entry dictionary cracked 4% of mnemonic passwords; in comparison, a standard dictionary with 1.2 million entries cracked 11% of control passwords. The user-generated mnemonic passwords were also slightly more resistant to brute force attacks than control passwords. These results suggest that mnemonic passwords may be appropriate for some uses today. However, mnemonic passwords could become more vulnerable in the future and should not be treated as a panacea.",
"title": ""
}
] |
[
{
"docid": "d8eee79312660f4da03a29372fc87d7e",
"text": "Previous work combines word-level and character-level representations using concatenation or scalar weighting, which is suboptimal for high-level tasks like reading comprehension. We present a fine-grained gating mechanism to dynamically combine word-level and character-level representations based on properties of the words. We also extend the idea of fine-grained gating to modeling the interaction between questions and paragraphs for reading comprehension. Experiments show that our approach can improve the performance on reading comprehension tasks, achieving new state-of-the-art results on the Children’s Book Test and Who Did What datasets. To demonstrate the generality of our gating mechanism, we also show improved results on a social media tag prediction task.1",
"title": ""
},
{
"docid": "9cc524d3b55c9522c6e9e89b2caeb787",
"text": "Operative and nonoperative treatment methods of burst fractures were compared regarding canal remodeling. The entire series consisted of 18 patients, with seven in the operative treatment group and 11 in the nonoperative treatment group. All fractures were studied with computed tomography (CT) at the postoperative (operative treatment group) or postinjury (nonoperative treatment group) and the latest follow-up. All patients were followed up for > or = 18 months. There was no statistical difference between postoperative and postinjury canal areas (p = 0.0859). However, a significant difference was found between the rates of remodeling (p = 0.0059). Although spinal canal remodeling occurred in both groups, the resorption of retropulsed fragments was less favorable in nonoperative treatment group.",
"title": ""
},
{
"docid": "e94afab2ce61d7426510a5bcc88f7ca8",
"text": "Community detection is an important task in network analysis, in which we aim to learn a network partition that groups together vertices with similar community-level connectivity patterns. By finding such groups of vertices with similar structural roles, we extract a compact representation of the network’s large-scale structure, which can facilitate its scientific interpretation and the prediction of unknown or future interactions. Popular approaches, including the stochastic block model, assume edges are unweighted, which limits their utility by discarding potentially useful information. We introduce the weighted stochastic block model (WSBM), which generalizes the stochastic block model to networks with edge weights drawn from any exponential family distribution. This model learns from both the presence and weight of edges, allowing it to discover structure that would otherwise be hidden when weights are discarded or thresholded. We describe a Bayesian variational algorithm for efficiently approximating this model’s posterior distribution over latent block structures. We then evaluate the WSBM’s performance on both edge-existence and edge-weight prediction tasks for a set of real-world weighted networks. In all cases, the WSBM performs as well or better than the best alternatives on these tasks. community detection, weighted relational data, block models, exponential family, variational Bayes.",
"title": ""
},
{
"docid": "0b92a5c5b842e68a8ae3276ebfdff9f7",
"text": "Tasks involving the analysis of geometric (graph-and manifold-structured) data have recently gained prominence in the machine learning community, giving birth to a rapidly developing field of geometric deep learning. In this work, we leverage graph neural networks to improve signal detection in the IceCube neutrino observatory. The IceCube detector array is modeled as a graph, where vertices are sensors and edges are a learned function of the sensors spatial coordinates. As only a subset of IceCubes sensors is active during a given observation, we note the adaptive nature of our GNN, wherein computation is restricted to the input signal support. We demonstrate the effectiveness of our GNN architecture on a task classifying IceCube events, where it outperforms both a traditional physics-based method as well as classical 3D convolution neural networks.",
"title": ""
},
{
"docid": "f591ae6217c769d3bca2c15a021125cc",
"text": "Recent years have witnessed an explosive growth of mobile devices. Mobile devices are permeating every aspect of our daily lives. With the increasing usage of mobile devices and intelligent applications, there is a soaring demand for mobile applications with machine learning services. Inspired by the tremendous success achieved by deep learning in many machine learning tasks, it becomes a natural trend to push deep learning towards mobile applications. However, there exist many challenges to realize deep learning in mobile applications, including the contradiction between the miniature nature of mobile devices and the resource requirement of deep neural networks, the privacy and security concerns about individuals' data, and so on. To resolve these challenges, during the past few years, great leaps have been made in this area. In this paper, we provide an overview of the current challenges and representative achievements about pushing deep learning on mobile devices from three aspects: training with mobile data, efficient inference on mobile devices, and applications of mobile deep learning. The former two aspects cover the primary tasks of deep learning. Then, we go through our two recent applications that apply the data collected by mobile devices to inferring mood disturbance and user identification. Finally, we conclude this paper with the discussion of the future of this area.",
"title": ""
},
{
"docid": "fd5134b4ada93d754fa8e0dd56da1bbb",
"text": "Consumer selection of retail patronage mode has been widely researched by marketing scholars. Several researchers have segmented consumers by shopping orientation. However, few have applied such methods to the Internet shopper. Despite the widespread belief that Internet shoppers are primarily motivated by convenience, the authors show empirically that consumers’ fundamental shopping orientations have no signi®cant impact on their proclivity to purchase products online. Factors that are more likely to in ̄uence purchase intention include product type, prior purchase, and, to a lesser extent, gender. Literature examining electronic commerce tends either to discuss the size and potential of the phenomenon or to indicate problems associated with it. For example, Forrester Research recently reported that worldwide Internet commerce ± both business to business (B2B) and both business to customer (B2C) ± would reach $6.8 trillion in 2004. At the same time, reports of business failures are increasing, as it is evident that the corporate sector is not satis®ed with Internet performance (Wolff, 1998). Despite these two apparently contradictory positions, many observers note an absence of research into consumer motivation to purchase via the Internet and other aspects of consumer behaviour with regard to the medium (Donthu and Garcia, 1999; Hagel and Armstrong, 1997; Korgaonkar and Wolin, 1999). Literature falls into two categories: usage ± by which we mean rate, purpose or quantity bought ± and advertising response (McDonald, 1993). Common to these two streams is the a ̄owo research of Hoffman and Novak (1996) which suggests that the Internet is a very different medium requiring new means of segmentation. The Emerald Research Register for this journal is available at The current issue and full text archive of this journal is available at http://www .emeraldinsight .com/researchregister http:// www.emeraldinsigh t.com/0309-056 6.htm Research funded by Grif®th University School of Marketing. EJM 37,11/12",
"title": ""
},
{
"docid": "a40e71e130f31450ce1e60d9cd4a96be",
"text": "Progering® is the only intravaginal ring intended for contraception therapies during lactation. It is made of silicone and releases progesterone through the vaginal walls. However, some drawbacks have been reported in the use of silicone. Therefore, ethylene vinyl acetate copolymer (EVA) was tested in order to replace it. EVA rings were produced by a hot-melt extrusion procedure. Swelling and degradation assays of these matrices were conducted in different mixtures of ethanol/water. Solubility and partition coefficient of progesterone were measured, together with the initial hormone load and characteristic dimensions. A mathematical model was used to design an EVA ring that releases the hormone at specific rate. An EVA ring releasing progesterone in vitro at about 12.05 ± 8.91 mg day−1 was successfully designed. This rate of release is similar to that observed for Progering®. In addition, it was observed that as the initial hormone load or ring dimension increases, the rate of release also increases. Also, the device lifetime was extended with a rise in the initial amount of hormone load. EVA rings could be designed to release progesterone in vitro at a rate of 12.05 ± 8.91 mg day−1. This ring would be used in contraception therapies during lactation. The use of EVA in this field could have initially several advantages: less initial and residual hormone content in rings, no need for additional steps of curing or crosslinking, less manufacturing time and costs, and the possibility to recycle the used rings.",
"title": ""
},
{
"docid": "7c4a0bcdad82d36e3287f8b7e812f501",
"text": "In this paper, a face and hand gesture recognition system which can be applied to a smart TV interaction system is proposed. Human face and natural hand gesture are the key component to interact with smart TV system. The face recognition system is used in viewer authentication and the hand gesture recognition in control of smart TV, for example, volume up/down, channel changing. Personalized service such as favorite channels recommendation or parental guidance can be provided using face recognition. We show that the face recognition detection rate is about 99% and the face recognition rate is about 97% by using DGIST database. Also, hand detection rate is about 98% at distance of 1 meter, 1.5 meter, and 2 meter, respectively. Overall 5 type hand gesture recognition rate is about 80% using support vector machine (SVM).",
"title": ""
},
{
"docid": "5b21b248dc51b027fa3919514c346b94",
"text": "How will we view schizophrenia in 2030? Schizophrenia today is a chronic, frequently disabling mental disorder that affects about one per cent of the world’s population. After a century of studying schizophrenia, the cause of the disorder remains unknown. Treatments, especially pharmacological treatments, have been in wide use for nearly half a century, yet there is little evidence that these treatments have substantially improved outcomes for most people with schizophrenia. These current unsatisfactory outcomes may change as we approach schizophrenia as a neurodevelopmental disorder with psychosis as a late, potentially preventable stage of the illness. This ‘rethinking’ of schizophrenia as a neurodevelopmental disorder, which is profoundly different from the way we have seen this illness for the past century, yields new hope for prevention and cure over the next two decades.",
"title": ""
},
{
"docid": "f5d58660137891111a009bc841950ad2",
"text": "Lateral brow ptosis is a common aging phenomenon, contributing to the lateral upper eyelid hooding, in addition to dermatochalasis. Lateral brow lift complements upper blepharoplasty in achieving a youthful periorbital appearance. In this study, the author reports his experience in utilizing a temporal (pretrichial) subcutaneous lateral brow lift technique under local anesthesia. A retrospective analysis of all patients undergoing the proposed technique by one surgeon from 2009 to 2016 was conducted. Additional procedures were recorded. Preoperative and postoperative photographs at the longest follow-up visit were used for analysis. Operation was performed under local anesthesia. The surgical technique included a temporal (pretrichial) incision with subcutaneous dissection toward the lateral brow, with superolateral lift and closure. Total of 45 patients (44 females, 1 male; mean age: 58 years) underwent the temporal (pretrichial) subcutaneous lateral brow lift technique under local anesthesia in office setting. The procedure was unilateral in 4 cases. Additional procedures included upper blepharoplasty (38), ptosis surgery (16), and lower blepharoplasty (24). Average follow-up time was 1 year (range, 6 months to 5 years). All patients were satisfied with the eyebrow contour and scar appearance. One patient required additional brow lift on one side for asymmetry. There were no cases of frontal nerve paralysis. In conclusion, the temporal (pretrichial) subcutaneous approach is an effective, safe technique for lateral brow lift/contouring, which can be performed under local anesthesia. It is ideal for women. Additional advantages include ease of operation, cost, and shortening the hairline (if necessary).",
"title": ""
},
{
"docid": "76b86602a4d7394b4a76cc25f7145229",
"text": "As a shared economy platform, Airbnb allows customers to collaborate and guides them to hosts’ rooms. Based on the records and ratings, it attaches great significance to infer users’ satisfaction with their rooms. Several essential problems arise when evaluating satisfaction and matching. Data confidence and prediction bias influence the inference performance of the satisfaction. When two users stay in one room, their joint satisfaction also deserves particular research because of the roommate effect. In this paper, a matching model is built based on the inferred satisfaction considering confidence and prediction uncertainties. The satisfaction with the confidence uncertainty is modeled using a normalized variance of the Beta distribution. The algorithms for inferring satisfaction with the prediction uncertainties are divided into two parts: a weighted matrix factorization-based algorithm for individuals and a preference similarity-based algorithm for pairs. Two matching algorithms are proposed with constraints. Finally, extensive experiments using real-world data show the effectiveness and accuracy of the proposed method.",
"title": ""
},
{
"docid": "f733125d8cd3d90ac7bf463ae93ca24a",
"text": "Various online, networked systems offer a lightweight process for obtaining identities (e.g., confirming a valid e-mail address), so that users can easily join them. Such convenience comes with a price, however: with minimum effort, an attacker can subvert the identity management scheme in place, obtain a multitude of fake accounts, and use them for malicious purposes. In this work, we approach the issue of fake accounts in large-scale, distributed systems, by proposing a framework for adaptive identity management. Instead of relying on users' personal information as a requirement for granting identities (unlike existing proposals), our key idea is to estimate a trust score for identity requests, and price them accordingly using a proof of work strategy. The research agenda that guided the development of this framework comprised three main items: (i) investigation of a candidate trust score function, based on an analysis of users' identity request patterns, (ii) combination of trust scores and proof of work strategies (e.g. cryptograhic puzzles) for adaptively pricing identity requests, and (iii) reshaping of traditional proof of work strategies, in order to make them more resource-efficient, without compromising their effectiveness (in stopping attackers).",
"title": ""
},
{
"docid": "968965ddb9aa26b041ea688413935e86",
"text": "Lightweight photo sharing, particularly via mobile devices, is fast becoming a common communication medium used for maintaining a presence in the lives of friends and family. How should such systems be designed to maximize this social presence while maintaining simplicity? An experimental photo sharing system was developed and tested that, compared to current systems, offers highly simplified, group-centric sharing, automatic and persistent people-centric organization, and tightly integrated desktop and mobile sharing and viewing. In an experimental field study, the photo sharing behaviors of groups of family or friends were studied using their normal photo sharing methods and with the prototype sharing system. Results showed that users found photo sharing easier and more fun, shared more photos, and had an enhanced sense of social presence when sharing with the experimental system. Results are discussed in the context of design principles for the rapidly increasing number of lightweight photo sharing systems.",
"title": ""
},
{
"docid": "875508043025aeb1e99214bdad269c22",
"text": "Article: Emotions and art are intimately related (Tan, 2000). From ancient to modern times, theories of aesthetics have emphasized the role of art in evoking, shaping, and modifying human feelings. The experimental study of preferences, evaluations, and feelings related to art has a long history in psychology. Aesthetics is one of the oldest areas of psychological research, dating to Fechner's (1876) landmark work. Psychology has had a steady interest in aesthetic problems since then, but art has never received as much attention as one would expect (see Berlyne, 1971a; Tan, 2000; Valentine, 1962). The study of art and the study of emotions, as areas of scientific inquiry, both languished during much of the last century. It is not surprising that the behavioral emphasis on observable action over inner experience would lead to a neglect of research on aesthetics. In an interesting coincidence, both art and emotion resurfaced in psychology at about the same time. As emotion psychologists began developing theories of basic emotions (Ekman & Friesen, 1971; Izard, 1971; Tomkins, 1962), experimental psychologists began tackling hedonic qualities of art (Berlyne, 1971a, 1972, 1974). Since then, the psychology of emotion and the psychology of art have had little contact (see Silvia, in press-b; Tan, 2000).",
"title": ""
},
{
"docid": "6e36af79c99fb4a0faaed374ac5b3545",
"text": "In this study, with Singapore as an example, we demonstrate how we can use mobile phone call detail record (CDR) data, which contains millions of anonymous users, to extract individual mobility networks comparable to the activity-based approach. Such an approach is widely used in the transportation planning practice to develop urban micro simulations of individual daily activities and travel; yet it depends highly on detailed travel survey data to capture individual activity-based behavior. We provide an innovative data mining framework that synthesizes the state-of-the-art techniques in extracting mobility patterns from raw mobile phone CDR data, and design a pipeline that can translate the massive and passive mobile phone records to meaningful spatial human mobility patterns readily interpretable for urban and transportation planning purposes. With growing ubiquitous mobile sensing, and shrinking labor and fiscal resources in the public sector globally, the method presented in this research can be used as a low-cost alternative for transportation and planning agencies to understand the human activity patterns in cities, and provide targeted plans for future sustainable development.",
"title": ""
},
{
"docid": "aac94dec9aacac522f0d3fd05b71a92d",
"text": "Nonparametric data from multi-factor experiments arise often in human-computer interaction (HCI). Examples may include error counts, Likert responses, and preference tallies. But because multiple factors are involved, common nonparametric tests (e.g., Friedman) are inadequate, as they are unable to examine interaction effects. While some statistical techniques exist to handle such data, these techniques are not widely available and are complex. To address these concerns, we present the Aligned Rank Transform (ART) for nonparametric factorial data analysis in HCI. The ART relies on a preprocessing step that \"aligns\" data before applying averaged ranks, after which point common ANOVA procedures can be used, making the ART accessible to anyone familiar with the F-test. Unlike most articles on the ART, which only address two factors, we generalize the ART to N factors. We also provide ARTool and ARTweb, desktop and Web-based programs for aligning and ranking data. Our re-examination of some published HCI results exhibits advantages of the ART.",
"title": ""
},
{
"docid": "f52d387faf03421bd97500494addd260",
"text": "OBJECTIVE\nTo test the association of behavioral and psychosocial health domains with contextual variables and perceived health in ethnically and economically diverse postpartum women.\n\n\nDESIGN\nMail survey of a stratified random sample.\n\n\nSETTING\nSouthwestern community in Texas.\n\n\nPARTICIPANTS\nNon-Hispanic White, African American, and Hispanic women (N = 168).\n\n\nMETHODS\nA questionnaire was sent to a sample of 600 women. The adjusted response rate was 32.8%. The questionnaire covered behavioral (diet, physical activity, smoking, and alcohol use) and psychosocial (depression symptoms and body image) health, contextual variables (race/ethnicity, income, perceived stress, and social support), and perceived health. Hypotheses were tested using linear and logistic regression.\n\n\nRESULTS\nBody image, dietary behaviors, physical activity behaviors, and depression symptoms were all significantly correlated (Spearman ρ = -.15 to .47). Higher income was associated with increased odds of higher alcohol use (more than 1 drink on 1 to 4 days in a 14-day period). African American ethnicity was correlated with less healthy dietary behaviors and Hispanic ethnicity with less physical activity. In multivariable regressions, perceived stress was associated with less healthy dietary behaviors, increased odds of depression, and decreased odds of higher alcohol use, whereas social support was associated with less body image dissatisfaction, more physical activity, and decreased odds of depression. All behavioral and psychosocial domains were significantly correlated with perceived health, with higher alcohol use related to more favorable perceived health. In regressions analyses, perceived stress was a significant contextual predictor of perceived health.\n\n\nCONCLUSION\nStress and social support had more consistent relationships to behavioral and psychosocial variables than race/ethnicity and income level.",
"title": ""
},
{
"docid": "429eea5acf13bd4e19b4f34ef4c79fe7",
"text": "We present a study where human neurophysiological signals are used as implicit feedback to alter the behavior of a deep learning based autonomous driving agent in a simulated virtual environment.",
"title": ""
},
{
"docid": "15d932b1344d48f13dfbb5e7625b22ad",
"text": "Predictive modeling of human or humanoid movement becomes increasingly complex as the dimensionality of those movements grows. Dynamic Movement Primitives (DMP) have been shown to be a powerful method of representing such movements, but do not generalize well when used in configuration or task space. To solve this problem we propose a model called autoencoded dynamic movement primitive (AE-DMP) which uses deep autoencoders to find a representation of movement in a latent feature space, in which DMP can optimally generalize. The architecture embeds DMP into such an autoencoder and allows the whole to be trained as a unit. To further improve the model for multiple movements, sparsity is added for the feature layer neurons; therefore, various movements can be observed clearly in the feature space. After training, the model finds a single hidden neuron from the sparsity that can efficiently generate new movements. Our experiments clearly demonstrate the efficiency of missing data imputation using 50-dimensional human movement data.",
"title": ""
}
] |
scidocsrr
|
a51b226da1008a52c9ad1870f0497e60
|
UiLog: Improving Log-Based Fault Diagnosis by Log Analysis
|
[
{
"docid": "4dc9360837b5793a7c322f5b549fdeb1",
"text": "Today, event logs contain vast amounts of data that can easily overwhelm a human. Therefore, mining patterns from event logs is an important system management task. This paper presents a novel clustering algorithm for log file data sets which helps one to detect frequent patterns from log files, to build log file profiles, and to identify anomalous log file lines. Keywords—system monitoring, data mining, data clustering",
"title": ""
}
] |
[
{
"docid": "f333bc03686cf85aee0a65d4a81e8b34",
"text": "A large portion of data mining and analytic services use modern machine learning techniques, such as deep learning. The state-of-the-art results by deep learning come at the price of an intensive use of computing resources. The leading frameworks (e.g., TensorFlow) are executed on GPUs or on high-end servers in datacenters. On the other end, there is a proliferation of personal devices with possibly free CPU cycles; this can enable services to run in users' homes, embedding machine learning operations. In this paper, we ask the following question: Is distributed deep learning computation on WAN connected devices feasible, in spite of the traffic caused by learning tasks? We show that such a setup rises some important challenges, most notably the ingress traffic that the servers hosting the up-to-date model have to sustain. In order to reduce this stress, we propose AdaComp, a novel algorithm for compressing worker updates to the model on the server. Applicable to stochastic gradient descent based approaches, it combines efficient gradient selection and learning rate modulation. We then experiment and measure the impact of compression, device heterogeneity and reliability on the accuracy of learned models, with an emulator platform that embeds TensorFlow into Linux containers. We report a reduction of the total amount of data sent by workers to the server by two order of magnitude (e.g., 191-fold reduction for a convolutional network on the MNIST dataset), when compared to a standard asynchronous stochastic gradient descent, while preserving model accuracy.",
"title": ""
},
{
"docid": "5e0898aa58d092a1f3d64b37af8cf838",
"text": "In this paper, we design a Deep Dual-Domain (D3) based fast restoration model to remove artifacts of JPEG compressed images. It leverages the large learning capacity of deep networks, as well as the problem-specific expertise that was hardly incorporated in the past design of deep architectures. For the latter, we take into consideration both the prior knowledge of the JPEG compression scheme, and the successful practice of the sparsity-based dual-domain approach. We further design the One-Step Sparse Inference (1-SI) module, as an efficient and lightweighted feed-forward approximation of sparse coding. Extensive experiments verify the superiority of the proposed D3 model over several state-of-the-art methods. Specifically, our best model is capable of outperforming the latest deep model for around 1 dB in PSNR, and is 30 times faster.",
"title": ""
},
{
"docid": "645faf32f40732d291e604d7240f0546",
"text": "Fault Diagnostics and Prognostics has been an increasing interest in recent years, as a result of the increased degree of automation and the growing demand for higher performance, efficiency, reliability and safety in industrial systems. On-line fault detection and isolation methods have been developed for automated processes. These methods include data mining methodologies, artificial intelligence methodologies or combinations of the two. Data Mining is the statistical approach of extracting knowledge from data. Artificial Intelligence is the science and engineering of making intelligent machines, especially intelligent computer programs. Activities in AI include searching, recognizing patterns and making logical inferences. This paper focuses on the various techniques used for Fault Diagnostics and Prognostics in Industry application domains.",
"title": ""
},
{
"docid": "d4a4c4a1d933488ab686097e18b4373a",
"text": "Psychological stress is an important factor for the development of irritable bowel syndrome (IBS). More and more clinical and experimental evidence showed that IBS is a combination of irritable bowel and irritable brain. In the present review we discuss the potential role of psychological stress in the pathogenesis of IBS and provide comprehensive approaches in clinical treatment. Evidence from clinical and experimental studies showed that psychological stresses have marked impact on intestinal sensitivity, motility, secretion and permeability, and the underlying mechanism has a close correlation with mucosal immune activation, alterations in central nervous system, peripheral neurons and gastrointestinal microbiota. Stress-induced alterations in neuro-endocrine-immune pathways acts on the gut-brain axis and microbiota-gut-brain axis, and cause symptom flare-ups or exaggeration in IBS. IBS is a stress-sensitive disorder, therefore, the treatment of IBS should focus on managing stress and stress-induced responses. Now, non-pharmacological approaches and pharmacological strategies that target on stress-related alterations, such as antidepressants, antipsychotics, miscellaneous agents, 5-HT synthesis inhibitors, selective 5-HT reuptake inhibitors, and specific 5-HT receptor antagonists or agonists have shown a critical role in IBS management. A integrative approach for IBS management is a necessary.",
"title": ""
},
{
"docid": "5cd6debed0333d480aeafe406f526d2b",
"text": "In the coming advanced age society, an innovative technology to assist the activities of daily living of elderly and disabled people and the heavy work in nursing is desired. To develop such a technology, an actuator safe and friendly for human is required. It should be small, lightweight and has to provide a proper softness. A pneumatic rubber artificial muscle is available as such actuators. We have developed some types of pneumatic rubber artificial muscles and applied them to wearable power assist devices. A wearable power assist device is equipped to the human body to assist the muscular force, which supports activities of daily living, rehabilitation, heavy working, training and so on. In this paper, some types of pneumatic rubber artificial muscles developed in our laboratory are introduced. Further, two kinds of wearable power assist devices driven with the rubber artificial muscles are described. Some evaluations can clarify the effectiveness of pneumatic rubber artificial muscle for such an innovative human assist technology.",
"title": ""
},
{
"docid": "79cdd24d14816f45b539f31606a3d5ee",
"text": "The huge increase in type 2 diabetes is a burden worldwide. Many marketed compounds do not address relevant aspects of the disease; they may already compensate for defects in insulin secretion and insulin action, but loss of secreting cells (β-cell destruction), hyperglucagonemia, gastric emptying, enzyme activation/inhibition in insulin-sensitive cells, substitution or antagonizing of physiological hormones and pathways, finally leading to secondary complications of diabetes, are not sufficiently addressed. In addition, side effects for established therapies such as hypoglycemias and weight gain have to be diminished. At present, nearly 1000 compounds have been described, and approximately 180 of these are going to be developed (already in clinical studies), some of them directly influencing enzyme activity, influencing pathophysiological pathways, and some using G-protein-coupled receptors. In addition, immunological approaches and antisense strategies are going to be developed. Many compounds are derived from physiological compounds (hormones) aiming at improving their kinetics and selectivity, and others are chemical compounds that were obtained by screening for a newly identified target in the physiological or pathophysiological machinery. In some areas, great progress is observed (e.g., incretin area); in others, no great progress is obvious (e.g., glucokinase activators), and other areas are not recommended for further research. For all scientific areas, conclusions with respect to their impact on diabetes are given. Potential targets for which no chemical compound has yet been identified as a ligand (agonist or antagonist) are also described.",
"title": ""
},
{
"docid": "da694b74b3eaae46d15f589e1abef4b8",
"text": "Impaired water quality caused by human activity and the spread of invasive plant and animal species has been identified as a major factor of degradation of coastal ecosystems in the tropics. The main goal of this study was to evaluate the performance of AnnAGNPS (Annualized NonPoint Source Pollution Model), in simulating runoff and soil erosion in a 48 km watershed located on the Island of Kauai, Hawaii. The model was calibrated and validated using 2 years of observed stream flow and sediment load data. Alternative scenarios of spatial rainfall distribution and canopy interception were evaluated. Monthly runoff volumes predicted by AnnAGNPS compared well with the measured data (R 1⁄4 0.90, P < 0.05); however, up to 60% difference between the actual and simulated runoff were observed during the driest months (May and July). Prediction of daily runoff was less accurate (R 1⁄4 0.55, P < 0.05). Predicted and observed sediment yield on a daily basis was poorly correlated (R 1⁄4 0.5, P < 0.05). For the events of small magnitude, the model generally overestimated sediment yield, while the opposite was true for larger events. Total monthly sediment yield varied within 50% of the observed values, except for May 2004. Among the input parameters the model was most sensitive to the values of ground residue cover and canopy cover. It was found that approximately one third of the watershed area had low sediment yield (0e1 t ha 1 y ), and presented limited erosion threat. However, 5% of the area had sediment yields in excess of 5 t ha 1 y . Overall, the model performed reasonably well, and it can be used as a management tool on tropical watersheds to estimate and compare sediment loads, and identify ‘‘hot spots’’ on the landscape. 2007 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "7c98ac06ea8cb9b83673a9c300fb6f4c",
"text": "Heart rate monitoring from wrist-type photoplethysmographic (PPG) signals during subjects' intensive exercise is a difficult problem, since the PPG signals are contaminated by extremely strong motion artifacts caused by subjects' hand movements. In this work, we formulate the heart rate estimation problem as a sparse signal recovery problem, and use a sparse signal recovery algorithm to calculate high-resolution power spectra of PPG signals, from which heart rates are estimated by selecting corresponding spectrum peaks. To facilitate the use of sparse signal recovery, we propose using bandpass filtering, singular spectrum analysis, and temporal difference operation to partially remove motion artifacts and sparsify PPG spectra. The proposed method was tested on PPG recordings from 10 subjects who were fast running at the peak speed of 15km/hour. The results showed that the averaged absolute estimation error was only 2.56 Beats/Minute, or 1.94% error compared to ground-truth heart rates from simultaneously recorded ECG.",
"title": ""
},
{
"docid": "302079b366d2bc0c951e3c7d8eb30815",
"text": "The rapid traffic growth and ubiquitous access requirements make it essential to explore the next generation (5G) wireless communication networks. In the current 5G research area, non-orthogonal multiple access has been proposed as a paradigm shift of physical layer technologies. Among all the existing non-orthogonal technologies, the recently proposed sparse code multiple access (SCMA) scheme is shown to achieve a better link level performance. In this paper, we extend the study by proposing an unified framework to analyze the energy efficiency of SCMA scheme and a low complexity decoding algorithm which is critical for prototyping. We show through simulation and prototype measurement results that SCMA scheme provides extra multiple access capability with reasonable complexity and energy consumption, and hence, can be regarded as an energy efficient approach for 5G wireless communication systems.",
"title": ""
},
{
"docid": "d81fb36cad466df8629fada7e7f7cc8d",
"text": "The limitations of each security technology combined with the growth of cyber attacks impact the efficiency of information security management and increase the activities to be performed by network administrators and security staff. Therefore, there is a need for the increase of automated auditing and intelligent reporting mechanisms for the cyber trust. Intelligent systems are emerging computing systems based on intelligent techniques that support continuous monitoring and controlling plant activities. Intelligence improves an individual’s ability to make better decisions. This paper presents a proposed architecture of an Intelligent System for Information Security Management (ISISM). The objective of this system is to improve security management processes such as monitoring, controlling, and decision making with an effect size that is higher than an expert in security by providing mechanisms to enhance the active construction of knowledge about threats, policies, procedures, and risks. We focus on requirements and design issues for the basic components of the intelligent system.",
"title": ""
},
{
"docid": "2a8f464e709dcae4e34f73654aefe31f",
"text": "LTE 4G cellular networks are gradually being adopted by all major operators in the world and are expected to rule the cellular landscape at least for the current decade. They will also form the starting point for further progress beyond the current generation of mobile cellular networks to chalk a path towards fifth generation mobile networks. The lack of open cellular ecosystem has limited applied research in this field within the boundaries of vendor and operator R&D groups. Furthermore, several new approaches and technologies are being considered as potential elements making up such a future mobile network, including cloudification of radio network, radio network programability and APIs following SDN principles, native support of machine-type communication, and massive MIMO. Research on these technologies requires realistic and flexible experimentation platforms that offer a wide range of experimentation modes from real-world experimentation to controlled and scalable evaluations while at the same time retaining backward compatibility with current generation systems.\n In this work, we present OpenAirInterface (OAI) as a suitably flexible platform towards open LTE ecosystem and playground [1]. We will demonstrate an example of the use of OAI to deploy a low-cost open LTE network using commodity hardware with standard LTE-compatible devices. We also show the reconfigurability features of the platform.",
"title": ""
},
{
"docid": "f70c07e15c4070edf75e8846b4dff0b3",
"text": "Polyphenols, including flavonoids, phenolic acids, proanthocyanidins and resveratrol, are a large and heterogeneous group of phytochemicals in plant-based foods, such as tea, coffee, wine, cocoa, cereal grains, soy, fruits and berries. Growing evidence indicates that various dietary polyphenols may influence carbohydrate metabolism at many levels. In animal models and a limited number of human studies carried out so far, polyphenols and foods or beverages rich in polyphenols have attenuated postprandial glycemic responses and fasting hyperglycemia, and improved acute insulin secretion and insulin sensitivity. The possible mechanisms include inhibition of carbohydrate digestion and glucose absorption in the intestine, stimulation of insulin secretion from the pancreatic beta-cells, modulation of glucose release from the liver, activation of insulin receptors and glucose uptake in the insulin-sensitive tissues, and modulation of intracellular signalling pathways and gene expression. The positive effects of polyphenols on glucose homeostasis observed in a large number of in vitro and animal models are supported by epidemiological evidence on polyphenol-rich diets. To confirm the implications of polyphenol consumption for prevention of insulin resistance, metabolic syndrome and eventually type 2 diabetes, human trials with well-defined diets, controlled study designs and clinically relevant end-points together with holistic approaches e.g., systems biology profiling technologies are needed.",
"title": ""
},
{
"docid": "2b2cd290f12d98667d6a4df12697a05e",
"text": "The chapter proposes three ways of integration of the two different worlds of relational and NoSQL databases: native, hybrid, and reducing to one option, either relational or NoSQL. The native solution includes using vendors’ standard APIs and integration on the business layer. In a relational environment, APIs are based on SQL standards, while the NoSQL world has its own, unstandardized solutions. The native solution means using the APIs of the individual systems that need to be connected, leaving to the businesslayer coding the task of linking and separating data in extraction and storage operations. A hybrid solution introduces an additional layer that provides SQL communication between the business layer and the data layer. The third integration solution includes vendors’ effort to foresee functionalities of “opposite” side, thus convincing developers’ community that their solution is sufficient.",
"title": ""
},
{
"docid": "4421a42fc5589a9b91215b68e1575a3f",
"text": "We present a method for extracting depth information from a rectified image pair. Our approach focuses on the first stage of many stereo algorithms: the matching cost computation. We approach the problem by learning a similarity measure on small image patches using a convolutional neural network. Training is carried out in a supervised manner by constructing a binary classification data set with examples of similar and dissimilar pairs of patches. We examine two network architectures for this task: one tuned for speed, the other for accuracy. The output of the convolutional neural network is used to initialize the stereo matching cost. A series of post-processing steps follow: cross-based cost aggregation, semiglobal matching, a left-right consistency check, subpixel enhancement, a median filter, and a bilateral filter. We evaluate our method on the KITTI 2012, KITTI 2015, and Middlebury stereo data sets and show that it outperforms other approaches on all three data sets.",
"title": ""
},
{
"docid": "986df17e2fe07cf2c70c37391f99a5da",
"text": "This paper is the last in a series of 16 which have explored current uses of information communications technology (ICT) in all areas of dentistry in general, and in dental education in particular. In this paper the authors explore current developments, referring back to the previous 15 papers, and speculate on how ICT should increasingly contribute to dental education in the future. After describing a vision of dental education in the next 50 years, the paper considers how ICT can help to fulfil the vision. It then takes a brief look at three aspects of the use of ICT in the world in general and speculates how dentistry can learn from other areas of human endeavour. Barriers to the use of ICT in dental education are then discussed. The final section of the paper outlines new developments in haptics, immersive environments, the semantic web, the IVIDENT project, nanotechnology and ergonometrics. The paper concludes that ICT will offer great opportunities to dental education but questions whether or not human limitations will allow it to be used to maximum effect.",
"title": ""
},
{
"docid": "a8858713a7040ce6dd25706c9b72b45c",
"text": "A new type of wearable button antenna for wireless local area network (WLAN) applications is proposed. The antenna is composed of a button with a diameter of circa 16 mm incorporating a patch on top of a dielectric disc. The button is located on top of a textile substrate and a conductive textile ground that are to be incorporated in clothing. The main characteristic feature of this antenna is that it shows two different types of radiation patterns, a monopole type pattern in the 2.4 GHz band for on-body communications and a broadside type pattern in the 5 GHz band for off-body communications. A very high efficiency of about 90% is obtained, which is much higher than similar full textile solutions in the literature. A prototype has been fabricated and measured. The effect of several real-life situations such as a tilted button and bending of the textile ground have been studied. Measurements agree very well with simulations.",
"title": ""
},
{
"docid": "43c49bb7d9cebb8f476079ac9dd0af27",
"text": "Nowadays, most recommender systems (RSs) mainly aim to suggest appropriate items for individuals. Due to the social nature of human beings, group activities have become an integral part of our daily life, thus motivating the study on group RS (GRS). However, most existing methods used by GRS make recommendations through aggregating individual ratings or individual predictive results rather than considering the collective features that govern user choices made within a group. As a result, such methods are heavily sensitive to data, hence they often fail to learn group preferences when the data are slightly inconsistent with predefined aggregation assumptions. To this end, we devise a novel GRS approach which accommodates both individual choices and group decisions in a joint model. More specifically, we propose a deep-architecture model built with collective deep belief networks and dual-wing restricted Boltzmann machines. With such a deep model, we can use high-level features, which are induced from lower-level features, to represent group preference so as to relieve the vulnerability of data. Finally, the experiments conducted on a real-world dataset prove the superiority of our deep model over other state-of-the-art methods.",
"title": ""
},
{
"docid": "ca095eee8abefd4aef9fd8971efd7fb5",
"text": "A radio-frequency identification (RFID) tag is a small, inexpensive microchip that emits an identifier in response to a query from a nearby reader. The price of these tags promises to drop to the range of $0.05 per unit in the next several years, offering a viable and powerful replacement for barcodes. The challenge in providing security for low-cost RFID tags is that they are computationally weak devices, unable to perform even basic symmetric-key cryptographic operations. Security researchers often therefore assume that good privacy protection in RFID tags is unattainable. In this paper, we explore a notion of minimalist cryptography suitable for RFID tags. We consider the type of security obtainable in RFID devices with a small amount of rewritable memory, but very limited computing capability. Our aim is to show that standard cryptography is not necessary as a starting point for improving security of very weak RFID devices. Our contribution is threefold: 1. We propose a new formal security model for authentication and privacy in RFID tags. This model takes into account the natural computational limitations and the likely attack scenarios for RFID tags in real-world settings. It represents a useful divergence from standard cryptographic security modeling, and thus a new view of practical formalization of minimal security requirements for low-cost RFID-tag security. 2. We describe protocol that provably achieves the properties of authentication and privacy in RFID tags in our proposed model, and in a good practical sense. Our proposed protocol involves no computationally intensive cryptographic operations, and relatively little storage. 3. Of particular practical interest, we describe some reduced-functionality variants of our protocol. We show, for instance, how static pseudonyms may considerably enhance security against eavesdropping in low-cost RFID tags. Our most basic static-pseudonym proposals require virtually no increase in existing RFID tag resources.",
"title": ""
},
{
"docid": "fcd0c523e74717c572c288a90c588259",
"text": "From analyzing 100 assessments of coping, the authors critiqued strategies and identified best practices for constructing category systems. From current systems, a list of 400 ways of coping was compiled. For constructing lower order categories, the authors concluded that confirmatory factor analysis should replace the 2 most common strategies (exploratory factor analysis and rational sorting). For higher order categories, they recommend that the 3 most common distinctions (problem- vs. emotion-focused, approach vs. avoidance, and cognitive vs. behavioral) no longer be used. Instead, the authors recommend hierarchical systems of action types (e.g., proximity seeking, accommodation). From analysis of 6 such systems, 13 potential core families of coping were identified. Future steps involve deciding how to organize these families, using their functional homogeneity and distinctiveness, and especially their links to adaptive processes.",
"title": ""
},
{
"docid": "dd84b653de8b3b464c904a988a622a39",
"text": "We demonstrate that for sentence-level relation extraction it is beneficial to consider other relations in the sentential context while predicting the target relation. Our architecture uses an LSTM-based encoder to jointly learn representations for all relations in a single sentence. We combine the context representations with an attention mechanism to make the final prediction. We use the Wikidata knowledge base to construct a dataset of multiple relations per sentence and to evaluate our approach. Compared to a baseline system, our method results in an average error reduction of 24% on a held-out set of relations. The code and the dataset to replicate the experiments are made available at https://github.com/ukplab.",
"title": ""
}
] |
scidocsrr
|
8d5f60dd08e3d1f5fee9bf9912cdc382
|
A deliberate practice account of typing proficiency in everyday typists.
|
[
{
"docid": "420a3d0059a91e78719955b4cc163086",
"text": "The superior skills of experts, such as accomplished musicians and chess masters, can be amazing to most spectators. For example, club-level chess players are often puzzled by the chess moves of grandmasters and world champions. Similarly, many recreational athletes find it inconceivable that most other adults – regardless of the amount or type of training – have the potential ever to reach the performance levels of international competitors. Especially puzzling to philosophers and scientists has been the question of the extent to which expertise requires innate gifts versus specialized acquired skills and abilities. One of the most widely used and simplest methods of gathering data on exceptional performance is to interview the experts themselves. But are experts always capable of describing their thoughts, their behaviors, and their strategies in a manner that would allow less-skilled individuals to understand how the experts do what they do, and perhaps also understand how they might reach expert level through appropriate training? To date, there has been considerable controversy over the extent to which experts are capable of explaining the nature and structure of their exceptional performance. Some pioneering scientists, such as Binet (1893 / 1966), questioned the validity of the experts’ descriptions when they found that some experts gave reports inconsistent with those of other experts. To make matters worse, in those rare cases that allowed verification of the strategy by observing the performance, discrepancies were found between the reported strategies and the observations (Watson, 1913). Some of these discrepancies were explained, in part, by the hypothesis that some processes were not normally mediated by awareness/attention and that the mere act of engaging in self-observation (introspection) during performance changed the content of ongoing thought processes. These problems led most psychologists in first half of the 20th century to reject all types of introspective verbal reports as valid scientific evidence, and they focused almost exclusively on observable behavior (Boring, 1950). In response to the problems with the careful introspective analysis of images and perceptions, investigators such as John B.",
"title": ""
}
] |
[
{
"docid": "d2b45d76e93f07ededbab03deee82431",
"text": "A cordless battery charger will greatly improve the user friendliness of electric vehicles (EVs), accelerating the replacement of traditional internal combustion engine (ICE) vehicles with EVs and improving energy sustainability as a result. Resonant circuits are used for both the power transmitter and receiver of a cordless charger to compensate their coils and improve power transfer efficiency. However, conventional compensation circuit topology is not suitable for application to an EV, which involves very large power, a wide gap between the transmitter and receiver coils, and large horizontal misalignment. This paper proposes a novel compensation circuit topology that has a carefully designed series capacitor added to the parallel resonant circuit of the receiver. The proposed circuit has been implemented and tested on an EV. The simulation and experimental results are presented to show that the circuit can improve the power factor and power transfer efficiency, and as a result, allow a larger gap between the transmitter and receiver coils.",
"title": ""
},
{
"docid": "86ce47260d84ddcf8558a0e5e4f2d76f",
"text": "We present the definition and computational algorithms for a new class of surfaces which are dual to the isosurface produced by the widely used marching cubes (MC) algorithm. These new isosurfaces have the same separating properties as the MC surfaces but they are comprised of quad patches that tend to eliminate the common negative aspect of poorly shaped triangles of the MC isosurfaces. Based upon the concept of this new dual operator, we describe a simple, but rather effective iterative scheme for producing smooth separating surfaces for binary, enumerated volumes which are often produced by segmentation algorithms. Both the dual surface algorithm and the iterative smoothing scheme are easily implemented.",
"title": ""
},
{
"docid": "82be3cafe24185b1f3c58199031e41ef",
"text": "UNLABELLED\nFamily-based therapy (FBT) is regarded as best practice for the treatment of eating disorders in children and adolescents. In FBT, parents play a vital role in bringing their child or adolescent to health; however, a significant minority of families do not respond to this treatment. This paper introduces a new model whereby FBT is enhanced by integrating emotion-focused therapy (EFT) principles and techniques with the aims of helping parents to support their child's refeeding and interruption of symptoms. Parents are also supported to become their child's 'emotion coach'; and to process any emotional 'blocks' that may interfere with their ability to take charge of recovery. A parent testimonial is presented to illustrate the integration of the theory and techniques of EFT in the FBT model. EFFT (Emotion-Focused Family Therapy) is a promising model of therapy for those families who require a more intense treatment to bring about recovery of an eating disorder.\n\n\nKEY PRACTITIONER MESSAGE\nMore intense therapeutic models exist for treatment-resistant eating disorders in children and adolescents. Emotion is a powerful healing tool in families struggling with an eating disorder. Working with parent's emotions and emotional reactions to their child's struggles has the potential to improve child outcomes.",
"title": ""
},
{
"docid": "72226ba8d801a3db776cf40d5243c521",
"text": "Hyperspectral image (HSI) classification is one of the most widely used methods for scene analysis from hyperspectral imagery. In the past, many different engineered features have been proposed for the HSI classification problem. In this paper, however, we propose a feature learning approach for hyperspectral image classification based on convolutional neural networks (CNNs). The proposed CNN model is able to learn structured features, roughly resembling different spectral band-pass filters, directly from the hyperspectral input data. Our experimental results, conducted on a commonly-used remote sensing hyperspectral dataset, show that the proposed method provides classification results that are among the state-of-the-art, without using any prior knowledge or engineered features.",
"title": ""
},
{
"docid": "950fe0124f830a63f528aa5905116c82",
"text": "One of the main barriers to immersivity during object manipulation in virtual reality is the lack of realistic haptic feedback. Our goal is to convey compelling interactions with virtual objects, such as grasping, squeezing, pressing, lifting, and stroking, without requiring a bulky, world-grounded kinesthetic feedback device (traditional haptics) or the use of predetermined passive objects (haptic retargeting). To achieve this, we use a pair of finger-mounted haptic feedback devices that deform the skin on the fingertips to convey cutaneous force information from object manipulation. We show that users can perceive differences in virtual object weight and that they apply increasing grasp forces when lifting virtual objects as rendered mass is increased. Moreover, we show how naive users perceive changes of a virtual object's physical properties when we use skin deformation to render objects with varying mass, friction, and stiffness. These studies demonstrate that fingertip skin deformation devices can provide a compelling haptic experience appropriate for virtual reality scenarios involving object manipulation.",
"title": ""
},
{
"docid": "c0d7cd54a947d9764209e905a6779d45",
"text": "The mainstream approach to protecting the location-privacy of mobile users in location-based services (LBSs) is to alter the users' actual locations in order to reduce the location information exposed to the service provider. The location obfuscation algorithm behind an effective location-privacy preserving mechanism (LPPM) must consider three fundamental elements: the privacy requirements of the users, the adversary's knowledge and capabilities, and the maximal tolerated service quality degradation stemming from the obfuscation of true locations. We propose the first methodology, to the best of our knowledge, that enables a designer to find the optimal LPPM for a LBS given each user's service quality constraints against an adversary implementing the optimal inference algorithm. Such LPPM is the one that maximizes the expected distortion (error) that the optimal adversary incurs in reconstructing the actual location of a user, while fulfilling the user's service-quality requirement. We formalize the mutual optimization of user-adversary objectives (location privacy vs. correctness of localization) by using the framework of Stackelberg Bayesian games. In such setting, we develop two linear programs that output the best LPPM strategy and its corresponding optimal inference attack. Our optimal user-centric LPPM can be easily integrated in the users' mobile devices they use to access LBSs. We validate the efficacy of our game theoretic method against real location traces. Our evaluation confirms that the optimal LPPM strategy is superior to a straightforward obfuscation method, and that the optimal localization attack performs better compared to a Bayesian inference attack.",
"title": ""
},
{
"docid": "bdbbe079493bbfec7fb3cb577c926997",
"text": "A large amount of information on the Web is contained in regularly structured objects, which we call data records. Such data records are important because they often present the essential information of their host pages, e.g., lists of products or services. It is useful to mine such data records in order to extract information from them to provide value-added services. Existing automatic techniques are not satisfactory because of their poor accuracies. In this paper, we propose a more effective technique to perform the task. The technique is based on two observations about data records on the Web and a string matching algorithm. The proposed technique is able to mine both contiguous and non-contiguous data records. Our experimental results show that the proposed technique outperforms existing techniques substantially.",
"title": ""
},
{
"docid": "6717e438376a78cb177bfc3942b6eec6",
"text": "Decisions are often guided by generalizing from past experiences. Fundamental questions remain regarding the cognitive and neural mechanisms by which generalization takes place. Prior data suggest that generalization may stem from inference-based processes at the time of generalization. By contrast, generalization may emerge from mnemonic processes occurring while premise events are encoded. Here, participants engaged in a two-phase learning and generalization task, wherein they learned a series of overlapping associations and subsequently generalized what they learned to novel stimulus combinations. Functional MRI revealed that successful generalization was associated with coupled changes in learning-phase activity in the hippocampus and midbrain (ventral tegmental area/substantia nigra). These findings provide evidence for generalization based on integrative encoding, whereby overlapping past events are integrated into a linked mnemonic representation. Hippocampal-midbrain interactions support the dynamic integration of experiences, providing a powerful mechanism for building a rich associative history that extends beyond individual events.",
"title": ""
},
{
"docid": "b0727e320a1c532bd3ede4fd892d8d01",
"text": "Semantic technologies could facilitate realizing features like interoperability and reasoning for Internet of Things (IoT). However, the dynamic and heterogeneous nature of IoT data, constrained resources, and real-time requirements set challenges for applying these technologies. In this paper, we study approaches for delivering semantic data from IoT nodes to distributed reasoning engines and reasoning over such data. We perform experiments to evaluate the scalability of these approaches and also study how reasoning is affected by different data aggregation strategies.",
"title": ""
},
{
"docid": "932c66caf9665e9dea186732217d4313",
"text": "Citations are very important parameters and are used to take many important decisions like ranking of researchers, institutions, countries, and to measure the relationship between research papers. All of these require accurate counting of citations and their occurrence (in-text citation counts) within the citing papers. Citation anchors refer to the citation made within the full text of the citing paper for example: ‘[1]’, ‘(Afzal et al, 2015)’, ‘[Afzal, 2015]’ etc. Identification of citation-anchors from the plain-text is a very challenging task due to the various styles and formats of citations. Recently, Shahid et al. highlighted some of the problems such as commonality in content, wrong allotment, mathematical ambiguities, and string variations etc in automatically identifying the in-text citation frequencies. The paper proposes an algorithm, CAD, for identification of citation-anchors and its in-text citation frequency based on different rules. For a comprehensive analysis, the dataset of research papers is prepared: on both Journal of Universal Computer Science (J.UCS) and (2) CiteSeer digital libraries. In experimental study, we conducted two experiments. In the first experiment, the proposed approach is compared with state-of-the-art technique over both datasets. The J.UCS dataset consists of 1200 research papers with 16,000 citation strings or references while the CiteSeer dataset consists of 52 research papers with 1850 references. The total dataset size becomes 1252 citing documents and 17,850 references. The experiments showed that CAD algorithm improved F-score by 44% and 37% respectively on both J.UCS and CiteSeer dataset over the contemporary technique (Shahid et al. in Int J Arab Inf Technol 12:481–488, 2014). The average score is 41% on both datasets. In the second experiment, the proposed approach is further analyzed against the existing state-of-the-art tools: CERMINE and GROBID. According to our results, the proposed approach is best performing with F1 of 0.99, followed by GROBID (F1 0.89) and CERMINE (F1 0.82).",
"title": ""
},
{
"docid": "f2d1f05292ddb0df8fa92fe1992852ab",
"text": "In this paper, we study the design of omnidirectional mobile robots with Active-Caster RObotic drive with BAll Transmission (ACROBAT). ACROBAT system has been developed by the authors group which realizes mechanical coordination of wheel and steering motions for creating caster behaviors without computer calculations. A motion in the specific direction relative to a robot body is fully depends on the motion of a specific motor. This feature gives a robot designer to build an omnidirectional mobile robot propelled by active-casters with no redundant actuation with a simple control. A controller of the robot becomes as simple as that for omni-wheeled robotic bases. Namely 3DOF of the omnidirectional robot is controlled by three motors using a simple and constant kinematics. ACROBAT includes a unique dual-ball transmission to transmit traction power to rotate and orient a drive wheel with distributing velocity components to wheel and steering axes in an appropriate ratio. Therefore a sensor for measuring a wheel orientation and calculations for velocity distributions are totally removed from a conventional control system. To build an omnidirectional vehicle by ACROBAT, the significant feature is some multiple drive shafts can be driven by a common motor which realizes non-redundant actuation of the robotic platform. A kinematic model of the proposed robot with ACROBAT is analyzed and a mechanical condition for realizing a non-redundant actuation is derived. Based on the kinematic model and the mechanical condition, computer simulations of the mechanism are performed. A prototype two-wheeled robot with two ACROBATs is designed and built to verify the availability of the proposed system. In the experiments, the prototype robot shows successful omnidirectional motions with a simple and constant kinematics based control.",
"title": ""
},
{
"docid": "4d0b04f546ab5c0d79bb066b1431ff51",
"text": "In this paper, we present an extraction and characterization methodology which allows for the determination, from S-parameter measurements, of the threshold voltage, the gain factor, and the mobility degradation factor, neither requiring data regressions involving multiple devices nor DC measurements. This methodology takes into account the substrate effects occurring in MOSFETs built in bulk technology so that physically meaningful parameters can be obtained. Furthermore, an analysis of the substrate impedance is presented, showing that this parasitic component not only degrades the performance of a microwave MOSFET, but may also lead to determining unrealistic values for the model parameters when not considered during a high-frequency characterization process. Measurements were made on transistors of different lengths, the shortest being 80 nm, in the 10 MHz to 40 GHz frequency range. 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "6b1fbc91a501ea25c7d3d20780a2be74",
"text": "STUDY DESIGN\nA systematic quantitative review of the literature.\n\n\nOBJECTIVE\nTo compare combined anterior-posterior surgery versus posterior surgery for thoracolumbar fractures in order to identify better treatments.\n\n\nSUMMARY OF BACKGROUND DATA\nAxial load of the anterior and middle column of the spine can lead to a burst fracture in the vertebral body. The management of thoracolumbar burst fractures remains controversial. The goals of operative treatment are fracture reduction, fixation and decompressing the neural canal. For this, different operative methods are developed, for instance, the posterior and the combined anterior-posterior approach. Recent systematic qualitative reviews comparing these methods are lacking.\n\n\nMETHODS\nWe conducted an electronic search of MEDLINE, EMBASE, LILACS and the Cochrane Central Register for Controlled Trials.\n\n\nRESULTS\nFive observational comparative studies and no randomized clinical trials comparing the combined anteriorposterior approach with the posterior approach were retrieved. The total enrollment of patients in these studies was 755 patients. The results were expressed as relative risk (RR) for dichotomous outcomes and weighted mean difference (WMD) for continuous outcomes with 95% confidence intervals (CI).\n\n\nCONCLUSIONS\nA small significantly higher kyphotic correction and improvement of vertebral height (sagittal index) observed for the combined anterior-posterior group is cancelled out by more blood loss, longer operation time, longer hospital stay, higher costs and a possible higher intra- and postoperative complication rate requiring re-operation and the possibility of a worsened Hannover spine score. The surgeons' choices regarding the operative approach are biased: worse cases tended to undergo the combined anterior-posterior approach.",
"title": ""
},
{
"docid": "50795998e83dafe3431c3509b9b31235",
"text": "In this study, the daily movement directions of three frequently traded stocks (GARAN, THYAO and ISCTR) in Borsa Istanbul were predicted using deep neural networks. Technical indicators obtained from individual stock prices and dollar-gold prices were used as features in the prediction. Class labels indicating the movement direction were found using daily close prices of the stocks and they were aligned with the feature vectors. In order to perform the prediction process, the type of deep neural network, Convolutional Neural Network, was trained and the performance of the classification was evaluated by the accuracy and F-measure metrics. In the experiments performed, using both price and dollar-gold features, the movement directions in GARAN, THYAO and ISCTR stocks were predicted with the accuracy rates of 0.61, 0.578 and 0.574 respectively. Compared to using the price based features only, the use of dollar-gold features improved the classification performance.",
"title": ""
},
{
"docid": "2bd5ca4cbb8ef7eea1f7b2762918d18b",
"text": "Deep convolutional neural networks continue to advance the state-of-the-art in many domains as they grow bigger and more complex. It has been observed that many of the parameters of a large network are redundant, allowing for the possibility of learning a smaller network that mimics the outputs of the large network through a process called Knowledge Distillation. We show, however, that standard Knowledge Distillation is not effective for learning small models for the task of pedestrian detection. To improve this process, we introduce a higher-dimensional hint layer to increase information flow. We also estimate the uncertainty in the outputs of the large network and propose a loss function to incorporate this uncertainty. Finally, we attempt to boost the complexity of the small network without increasing its size by using as input hand-designed features that have been demonstrated to be effective for pedestrian detection. For only a 2.8% increase in miss rate, we have succeeded in training a student network that is 8 times faster and 21 times smaller than the teacher network.",
"title": ""
},
{
"docid": "ec130c42c43a2a0ba8f33cd4a5d0082b",
"text": "Support vector machine (SVM) has appeared as a powerful tool for forecasting forex market and demonstrated better performance over other methods, e.g., neural network or ARIMA based model. SVM-based forecasting model necessitates the selection of appropriate kernel function and values of free parameters: regularization parameter and ε– insensitive loss function. In this paper, we investigate the effect of different kernel functions, namely, linear, polynomial, radial basis and spline on prediction error measured by several widely used performance metrics. The effect of regularization parameter is also studied. The prediction of six different foreign currency exchange rates against Australian dollar has been performed and analyzed. Some interesting results are presented.",
"title": ""
},
{
"docid": "207bb3922ad45daa1023b70e1a18baf7",
"text": "The article explains how photo-response nonuniformity (PRNU) of imaging sensors can be used for a variety of important digital forensic tasks, such as device identification, device linking, recovery of processing history, and detection of digital forgeries. The PRNU is an intrinsic property of all digital imaging sensors due to slight variations among individual pixels in their ability to convert photons to electrons. Consequently, every sensor casts a weak noise-like pattern onto every image it takes. This pattern, which plays the role of a sensor fingerprint, is essentially an unintentional stochastic spread-spectrum watermark that survives processing, such as lossy compression or filtering. This tutorial explains how this fingerprint can be estimated from images taken by the camera and later detected in a given image to establish image origin and integrity. Various forensic tasks are formulated as a two-channel hypothesis testing problem approached using the generalized likelihood ratio test. The performance of the introduced forensic methods is briefly illustrated on examples to give the reader a sense of the performance.",
"title": ""
},
{
"docid": "c5d74c69c443360d395a8371055ef3e2",
"text": "The supply of oxygen and nutrients and the disposal of metabolic waste in the organs depend strongly on how blood, especially red blood cells, flow through the microvascular network. Macromolecular plasma proteins such as fibrinogen cause red blood cells to form large aggregates, called rouleaux, which are usually assumed to be disaggregated in the circulation due to the shear forces present in bulk flow. This leads to the assumption that rouleaux formation is only relevant in the venule network and in arterioles at low shear rates or stasis. Thanks to an excellent agreement between combined experimental and numerical approaches, we show that despite the large shear rates present in microcapillaries, the presence of either fibrinogen or the synthetic polymer dextran leads to an enhanced formation of robust clusters of red blood cells, even at haematocrits as low as 1%. Robust aggregates are shown to exist in microcapillaries even for fibrinogen concentrations within the healthy physiological range. These persistent aggregates should strongly affect cell distribution and blood perfusion in the microvasculature, with putative implications for blood disorders even within apparently asymptomatic subjects.",
"title": ""
},
{
"docid": "b5dc5268c2eb3b216aa499a639ddfbf9",
"text": "This paper describes a self-localization for indoor mobile robots based on integrating measurement values from multiple optical mouse sensors and a global camera. This paper consists of two parts. Firstly, we propose a dead-reckoning based on increments of the robot movements read directly from the floor using optical mouse sensors. Since the measurement values from multiple optical mouse sensors are compared to each other and only the reliable values are selected, accurate dead-reckoning can be realized compared with the conventional method based on increments of wheel rotations. Secondly, in order to realize robust localization, we propose a method of estimating position and orientation by integrating measured robot position (orientation information is not included) via global camera and dead-reckoning with the Kalman filter",
"title": ""
},
{
"docid": "e37f707ac7a86f287fbbfe9b8a4b1e31",
"text": "We survey distributed deep learning models for training or inference without accessing raw data from clients. These methods aim to protect confidential patterns in data while still allowing servers to train models. The distributed deep learning methods of federated learning, split learning and large batch stochastic gradient descent are compared in addition to private and secure approaches of differential privacy, homomorphic encryption, oblivious transfer and garbled circuits in the context of neural networks. We study their benefits, limitations and trade-offs with regards to computational resources, data leakage and communication efficiency and also share our anticipated future trends.",
"title": ""
}
] |
scidocsrr
|
3efaabcd2607368d2952f28610f436b4
|
Concept Hierarchy Extraction from Textbooks
|
[
{
"docid": "9d918a69a2be2b66da6ecf1e2d991258",
"text": "We designed and implemented TAGME, a system that is able to efficiently and judiciously augment a plain-text with pertinent hyperlinks to Wikipedia pages. The specialty of TAGME with respect to known systems [5,8] is that it may annotate texts which are short and poorly composed, such as snippets of search-engine results, tweets, news, etc.. This annotation is extremely informative, so any task that is currently addressed using the bag-of-words paradigm could benefit from using this annotation to draw upon (the millions of) Wikipedia pages and their inter-relations.",
"title": ""
},
{
"docid": "74d45402acc9e05c6a8734f114253eea",
"text": "Name ambiguity problem has raised an urgent demand for efficient, high-quality named entity disambiguation methods. The key problem of named entity disambiguation is to measure the similarity between occurrences of names. The traditional methods measure the similarity using the bag of words (BOW) model. The BOW, however, ignores all the semantic relations such as social relatedness between named entities, associative relatedness between concepts, polysemy and synonymy between key terms. So the BOW cannot reflect the actual similarity. Some research has investigated social networks as background knowledge for disambiguation. Social networks, however, can only capture the social relatedness between named entities, and often suffer the limited coverage problem.\n To overcome the previous methods' deficiencies, this paper proposes to use Wikipedia as the background knowledge for disambiguation, which surpasses other knowledge bases by the coverage of concepts, rich semantic information and up-to-date content. By leveraging Wikipedia's semantic knowledge like social relatedness between named entities and associative relatedness between concepts, we can measure the similarity between occurrences of names more accurately. In particular, we construct a large-scale semantic network from Wikipedia, in order that the semantic knowledge can be used efficiently and effectively. Based on the constructed semantic network, a novel similarity measure is proposed to leverage Wikipedia semantic knowledge for disambiguation. The proposed method has been tested on the standard WePS data sets. Empirical results show that the disambiguation performance of our method gets 10.7% improvement over the traditional BOW based methods and 16.7% improvement over the traditional social network based methods.",
"title": ""
}
] |
[
{
"docid": "a7c79045bcbd9fac03015295324745e3",
"text": "Image saliency detection has recently witnessed rapid progress due to deep convolutional neural networks. However, none of the existing methods is able to identify object instances in the detected salient regions. In this paper, we present a salient instance segmentation method that produces a saliency mask with distinct object instance labels for an input image. Our method consists of three steps, estimating saliency map, detecting salient object contours and identifying salient object instances. For the first two steps, we propose a multiscale saliency refinement network, which generates high-quality salient region masks and salient object contours. Once integrated with multiscale combinatorial grouping and a MAP-based subset optimization framework, our method can generate very promising salient object instance segmentation results. To promote further research and evaluation of salient instance segmentation, we also construct a new database of 1000 images and their pixelwise salient instance annotations. Experimental results demonstrate that our proposed method is capable of achieving state-of-the-art performance on all public benchmarks for salient region detection as well as on our new dataset for salient instance segmentation.",
"title": ""
},
{
"docid": "1e82d6acef7e5b5f0c2446d62cf03415",
"text": "The purpose of this research is to characterize and model the self-heating effect of multi-finger n-channel MOSFETs. Self-heating effect (SHE) does not need to be analyzed for single-finger bulk CMOS devices. However, it should be considered for multi-finger n-channel MOSFETs that are mainly used for RF-CMOS applications. The SHE mechanism was analyzed based on a two-dimensional device simulator. A compact model, which is a BSIM6 model with additional equations, was developed and implemented in a SPICE simulator with Verilog-A language. Using the proposed model and extracted parameters excellent agreements have been obtained between measurements and simulations in DC and S-parameter domain whereas the original BSIM6 shows inconsistency between static DC and small signal AC simulations due to the lack of SHE. Unlike the generally-used sub-circuits based SHE models including in BSIMSOI models, the proposed SHE model can converge in large scale circuits.",
"title": ""
},
{
"docid": "2e4c1818d7174be02306c5059379337b",
"text": "Mid-level or semi-local features learnt using class-level information are potentially more distinctive than the traditional low-level local features constructed in a purely bottom-up fashion. At the same time they preserve some of the robustness properties with respect to occlusions and image clutter. In this paper we propose a new and effective scheme for extracting mid-level features for image classification, based on relevant pattern mining. In particular, we mine relevant patterns of local compositions of densely sampled low-level features. We refer to the new set of obtained patterns as Frequent Local Histograms or FLHs. During this process, we pay special attention to keeping all the local histogram information and to selecting the most relevant reduced set of FLH patterns for classification. The careful choice of the visual primitives and an extension to exploit both local and global spatial information allow us to build powerful bag-of-FLH-based image representations. We show that these bag-of-FLHs are more discriminative than traditional bag-of-words and yield state-of-the-art results on various image classification benchmarks, including Pascal VOC.",
"title": ""
},
{
"docid": "39861e2759b709883f3d37a65d13834b",
"text": "BACKGROUND\nDeveloping countries account for 99 percent of maternal deaths annually. While increasing service availability and maintaining acceptable quality standards, it is important to assess maternal satisfaction with care in order to make it more responsive and culturally acceptable, ultimately leading to enhanced utilization and improved outcomes. At a time when global efforts to reduce maternal mortality have been stepped up, maternal satisfaction and its determinants also need to be addressed by developing country governments. This review seeks to identify determinants of women's satisfaction with maternity care in developing countries.\n\n\nMETHODS\nThe review followed the methodology of systematic reviews. Public health and social science databases were searched. English articles covering antenatal, intrapartum or postpartum care, for either home or institutional deliveries, reporting maternal satisfaction from developing countries (World Bank list) were included, with no year limit. Out of 154 shortlisted abstracts, 54 were included and 100 excluded. Studies were extracted onto structured formats and analyzed using the narrative synthesis approach.\n\n\nRESULTS\nDeterminants of maternal satisfaction covered all dimensions of care across structure, process and outcome. Structural elements included good physical environment, cleanliness, and availability of adequate human resources, medicines and supplies. Process determinants included interpersonal behavior, privacy, promptness, cognitive care, perceived provider competency and emotional support. Outcome related determinants were health status of the mother and newborn. Access, cost, socio-economic status and reproductive history also influenced perceived maternal satisfaction. Process of care dominated the determinants of maternal satisfaction in developing countries. Interpersonal behavior was the most widely reported determinant, with the largest body of evidence generated around provider behavior in terms of courtesy and non-abuse. Other aspects of interpersonal behavior included therapeutic communication, staff confidence and competence and encouragement to laboring women.\n\n\nCONCLUSIONS\nQuality improvement efforts in developing countries could focus on strengthening the process of care. Special attention is needed to improve interpersonal behavior, as evidence from the review points to the importance women attach to being treated respectfully, irrespective of socio-cultural or economic context. Further research on maternal satisfaction is required on home deliveries and relative strength of various determinants in influencing maternal satisfaction.",
"title": ""
},
{
"docid": "d6d30dbba9153bcc86ed8a4337821b78",
"text": "Multiplayer video streaming scenario can be seen everywhere today as the video traffic is becoming the “killer” traffic over the Internet. The Quality of Experience fairness is critical for not only the users but also the content providers and ISP. Consequently, a QoE fairness adaptive method of multiplayer video streaming is of great importance. Previous studies focus on client-side solutions without network global view or network-assisted solution with extra reaction to client. In this paper, a pure network-based architecture using SDN is designed for monitoring network global performance information. With the flexible programming and network mastery capacity of SDN, we propose an online Q-learning-based dynamic bandwidth allocation algorithm Q-FDBA with the goal of QoE fairness. The results show the Q-FDBA could adaptively react to high frequency of bottleneck bandwidth switches and achieve better QoE fairness within a certain time dimension.",
"title": ""
},
{
"docid": "05622842ebd89777570d7dc3c36a0693",
"text": "Online antisocial behavior, such as cyberbullying, harassment, and trolling, is a widespread problem that threatens free discussion and has negative physical and mental health consequences for victims and communities. While prior work has proposed automated methods to identify hostile comments in online discussions, these methods work retrospectively on comments that have already been posted, making it difficult to intervene before an interaction escalates. In this paper we instead consider the problem of forecasting future hostilities in online discussions, which we decompose into two tasks: (1) given an initial sequence of non-hostile comments in a discussion, predict whether some future comment will contain hostility; and (2) given the first hostile comment in a discussion, predict whether this will lead to an escalation of hostility in subsequent comments. Thus, we aim to forecast both the presence and intensity of hostile comments based on linguistic and social features from earlier comments. To evaluate our approach, we introduce a corpus of over 30K annotated Instagram comments from over 1,100 posts. Our approach is able to predict the appearance of a hostile comment on an Instagram post ten or more hours in the future with an AUC of .82 (task 1), and can furthermore distinguish between high and low levels of future hostility with an AUC of .91 (task 2).",
"title": ""
},
{
"docid": "16b78e470af247cc65fd1ef4e17ace4b",
"text": "OBJECTIVES\nTo examine the effectiveness of using the 'mind map' study technique to improve factual recall from written information.\n\n\nDESIGN\nTo obtain baseline data, subjects completed a short test based on a 600-word passage of text prior to being randomly allocated to form two groups: 'self-selected study technique' and 'mind map'. After a 30-minute interval the self-selected study technique group were exposed to the same passage of text previously seen and told to apply existing study techniques. Subjects in the mind map group were trained in the mind map technique and told to apply it to the passage of text. Recall was measured after an interfering task and a week later. Measures of motivation were taken.\n\n\nSETTING\nBarts and the London School of Medicine and Dentistry, University of London.\n\n\nSUBJECTS\n50 second- and third-year medical students.\n\n\nRESULTS\nRecall of factual material improved for both the mind map and self-selected study technique groups at immediate test compared with baseline. However this improvement was only robust after a week for those in the mind map group. At 1 week, the factual knowledge in the mind map group was greater by 10% (adjusting for baseline) (95% CI -1% to 22%). However motivation for the technique used was lower in the mind map group; if motivation could have been made equal in the groups, the improvement with mind mapping would have been 15% (95% CI 3% to 27%).\n\n\nCONCLUSION\nMind maps provide an effective study technique when applied to written material. However before mind maps are generally adopted as a study technique, consideration has to be given towards ways of improving motivation amongst users.",
"title": ""
},
{
"docid": "028cdddc5d61865d0ea288180cef91c0",
"text": "This paper investigates the use of Convolutional Neural Networks for classification of painted symbolic road markings. Previous work on road marking recognition is mostly based on either template matching or on classical feature extraction followed by classifier training which is not always effective and based on feature engineering. However, with the rise of deep neural networks and their success in ADAS systems, it is natural to investigate the suitability of CNN for road marking recognition. Unlike others, our focus is solely on road marking recognition and not detection; which has been extensively explored and conventionally based on MSER feature extraction of the IPM images. We train five different CNN architectures with variable number of convolution/max-pooling and fully connected layers, and different resolution of road mark patches. We use a publicly available road marking data set and incorporate data augmentation to enhance the size of this data set which is required for training deep nets. The augmented data set is randomly partitioned in 70% and 30% for training and testing. The best CNN network results in an average recognition rate of 99.05% for 10 classes of road markings on the test set.",
"title": ""
},
{
"docid": "fbc0784d94e09cab75ee5a970786c30b",
"text": "Adequate conservation and management of shark populations is becoming increasingly important on a global scale, especially because many species are exceptionally vulnerable to overfishing. Yet, reported catch statistics for sharks are incomplete, and mortality estimates have not been available for sharks as a group. Here, the global catch and mortality of sharks from reported and unreported landings, discards, and shark finning are being estimated at 1.44 million metric tons for the year 2000, and at only slightly less in 2010 (1.41 million tons). Based on an analysis of average shark weights, this translates into a total annual mortality estimate of about 100 million sharks in 2000, and about 97 million sharks in 2010, with a total range of possible values between 63 and 273 million sharks per year. Further, the exploitation rate for sharks as a group was calculated by dividing two independent mortality estimates by an estimate of total global biomass. As an alternative approach, exploitation rates for individual shark populations were compiled and averaged from stock assessments and other published sources. The resulting three independent estimates of the average exploitation rate ranged between 6.4% and 7.9% of sharks killed per year. This exceeds the average rebound rate for many shark populations, estimated from the life history information on 62 shark species (rebound rates averaged 4.9% per year), and explains the ongoing declines in most populations for which data exist. The consequences of these unsustainable catch and mortality rates for marine ecosystems could be substantial. Global total shark mortality, therefore, needs to be reduced drastically in order to rebuild depleted populations and restore marine ecosystems with functional top predators. & 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "8683c83a7983d33242d46c16f6f06f72",
"text": "Many engineering activities, including mechatronic design, require that a multidomain or ‘multi-physics’ system and its control system be designed as an integrated system. This contribution discusses the background and tools for a port-based approach to integrated modeling and simulation of physical systems and their controllers, with parameters that are directly related to the real-world system, thus improving insight and direct feedback on modeling decisions.",
"title": ""
},
{
"docid": "093465aba11b82b768e4213b23c5911b",
"text": "This paper describes the generation of large deformation diffeomorphisms phi:Omega=[0,1]3<-->Omega for landmark matching generated as solutions to the transport equation dphi(x,t)/dt=nu(phi(x,t),t),epsilon[0,1] and phi(x,0)=x, with the image map defined as phi(.,1) and therefore controlled via the velocity field nu(.,t),epsilon[0,1]. Imagery are assumed characterized via sets of landmarks {xn, yn, n=1, 2, ..., N}. The optimal diffeomorphic match is constructed to minimize a running smoothness cost parallelLnu parallel2 associated with a linear differential operator L on the velocity field generating the diffeomorphism while simultaneously minimizing the matching end point condition of the landmarks. Both inexact and exact landmark matching is studied here. Given noisy landmarks xn matched to yn measured with error covariances Sigman, then the matching problem is solved generating the optimal diffeomorphism phi;(x,1)=integral0(1)nu(phi(x,t),t)dt+x where nu(.)=argmin(nu.)integral1(0) integralOmega parallelLnu(x,t) parallel2dxdt +Sigman=1N[yn-phi(xn,1)] TSigman(-1)[yn-phi(xn,1)]. Conditions for the existence of solutions in the space of diffeomorphisms are established, with a gradient algorithm provided for generating the optimal flow solving the minimum problem. Results on matching two-dimensional (2-D) and three-dimensional (3-D) imagery are presented in the macaque monkey.",
"title": ""
},
{
"docid": "ca1c232e84e7cb26af6852007f215715",
"text": "Word embedding-based methods have received increasing attention for their flexibility and effectiveness in many natural language-processing (NLP) tasks, including Word Similarity (WS). However, these approaches rely on high-quality corpus and neglect prior knowledge. Lexicon-based methods concentrate on human’s intelligence contained in semantic resources, e.g., Tongyici Cilin, HowNet, and Chinese WordNet, but they have the drawback of being unable to deal with unknown words. This article proposes a three-stage framework for measuring the Chinese word similarity by incorporating prior knowledge obtained from lexicons and statistics into word embedding: in the first stage, we utilize retrieval techniques to crawl the contexts of word pairs from web resources to extend context corpus. In the next stage, we investigate three types of single similarity measurements, including lexicon similarities, statistical similarities, and embedding-based similarities. Finally, we exploit simple combination strategies with math operations and the counter-fitting combination strategy using optimization method. To demonstrate our system’s efficiency, comparable experiments are conducted on the PKU-500 dataset. Our final results are 0.561/0.516 of Spearman/Pearson rank correlation coefficient, which outperform the state-of-the-art performance to the best of our knowledge. Experiment results on Chinese MC-30 and SemEval-2012 datasets show that our system also performs well on other Chinese datasets, which proves its transferability. Besides, our system is not language-specific and can be applied to other languages, e.g., English.",
"title": ""
},
{
"docid": "ddc556ae150e165dca607e4a674583ae",
"text": "Increasing patient numbers, changing demographics and altered patient expectations have all contributed to the current problem with 'overcrowding' in emergency departments (EDs). The problem has reached crisis level in a number of countries, with significant implications for patient safety, quality of care, staff 'burnout' and patient and staff satisfaction. There is no single, clear definition of the cause of overcrowding, nor a simple means of addressing the problem. For some hospitals, the option of ambulance diversion has become a necessity, as overcrowded waiting rooms and 'bed-block' force emergency staff to turn patients away. But what are the options when ambulance diversion is not possible? Christchurch Hospital, New Zealand is a tertiary level facility with an emergency department that sees on average 65,000 patients per year. There are no other EDs to whom patients can be diverted, and so despite admission rates from the ED of up to 48%, other options need to be examined. In order to develop a series of unified responses, which acknowledge the multifactorial nature of the problem, the Emergency Department Cardiac Analogy model of ED flow, was developed. This model highlights the need to intervene at each of three key points, in order to address the issue of overcrowding and its associated problems.",
"title": ""
},
{
"docid": "af754985968db6b59b2c4f6affd370c6",
"text": "Many real networks that are collected or inferred from data are incomplete due to missing edges. Missing edges can be inherent to the dataset (Facebook friend links will never be complete) or the result of sampling (one may only have access to a portion of the data). The consequence is that downstream analyses that \"consume\" the network will often yield less accurate results than if the edges were complete. Community detection algorithms, in particular, often suffer when critical intra-community edges are missing. We propose a novel consensus clustering algorithm to enhance community detection on incomplete networks. Our framework utilizes existing community detection algorithms that process networks imputed by our link prediction based sampling algorithm and merges their multiple partitions into a final consensus output. On average our method boosts performance of existing algorithms by 7% on artificial data and 17% on ego networks collected from Facebook.",
"title": ""
},
{
"docid": "3c2b68ac95f1a9300585b73ca4b83122",
"text": "The success of various applications including robotics, digital content creation, and visualization demand a structured and abstract representation of the 3D world from limited sensor data. Inspired by the nature of human perception of 3D shapes as a collection of simple parts, we explore such an abstract shape representation based on primitives. Given a single depth image of an object, we present 3DPRNN, a generative recurrent neural network that synthesizes multiple plausible shapes composed of a set of primitives. Our generative model encodes symmetry characteristics of common man-made objects, preserves long-range structural coherence, and describes objects of varying complexity with a compact representation. We also propose a method based on Gaussian Fields to generate a large scale dataset of primitive-based shape representations to train our network. We evaluate our approach on a wide range of examples and show that it outperforms nearest-neighbor based shape retrieval methods and is on-par with voxelbased generative models while using a significantly reduced parameter space.",
"title": ""
},
{
"docid": "1fd87c65968630b6388985a41b7890ce",
"text": "Cyber Defense Exercises have received much attention in recent years, and are increasingly becoming the cornerstone for ensuring readiness in this new domain. Crossed Swords is an exercise directed at training Red Team members for responsive cyber defense. However, prior iterations have revealed the need for automated and transparent real-time feedback systems to help participants improve their techniques and understand technical challenges. Feedback was too slow and players did not understand the visibility of their actions. We developed a novel and modular open-source framework to address this problem, dubbed Frankenstack. We used this framework during Crossed Swords 2017 execution and evaluated its effectiveness by interviewing participants and conducting an online survey. Due to the novelty of Red Team-centric exercises, very little academic research exists on providing real-time feedback during such exercises. Thus, this paper serves as a first foray into a novel research field.",
"title": ""
},
{
"docid": "9075024a29f1c0c9ca3f2cc90059b7f1",
"text": "Users often wish to participate in online groups anonymously, but misbehaving users may abuse this anonymity to spam or disrupt the group. Messaging protocols such as Mix-nets and DC-nets leave online groups vulnerable to denial-of-service and Sybil attacks, while accountable voting protocols are unusable or inefficient for general anonymous messaging. We present the first general messaging protocol that offers provable anonymity with accountability for moderate-size groups, and efficiently handles unbalanced loads where few members have much data to transmit in a given round. The N group members first cooperatively shuffle an N ×N matrix of pseudorandom seeds, then use these seeds in N “preplanned” DC-nets protocol runs. Each DC-nets run transmits the variable-length bulk data comprising one member’s message, using the minimum number of bits required for anonymity under our attack model. The protocol preserves message integrity and one-to-one correspondence between members and messages, makes denial-of-service attacks by members traceable to the culprit, and efficiently handles large and unbalanced message loads. A working prototype demonstrates the protocol’s practicality for anonymous messaging in groups of 40+ member nodes.",
"title": ""
},
{
"docid": "a87c60deb820064abaa9093398937ff3",
"text": "Cardiac arrhythmia is one of the most important indicators of heart disease. Premature ventricular contractions (PVCs) are a common form of cardiac arrhythmia caused by ectopic heartbeats. The detection of PVCs by means of ECG (electrocardiogram) signals is important for the prediction of possible heart failure. This study focuses on the classification of PVC heartbeats from ECG signals and, in particular, on the performance evaluation of selected features using genetic algorithms (GA) to the classification of PVC arrhythmia. The objective of this study is to apply GA as a feature selection method to select the best feature subset from 200 time series features and to integrate these best features to recognize PVC forms. Neural networks, support vector machines and k-nearest neighbour classification algorithms were used. Findings were expressed in terms of accuracy, sensitivity, and specificity for the MIT-BIH Arrhythmia Database. The results showed that the proposed model achieved higher accuracy rates than those of other works on this topic.",
"title": ""
},
{
"docid": "cce513c48e630ab3f072f334d00b67dc",
"text": "We consider two algorithms for on-line prediction based on a linear model. The algorithms are the well-known gradient descent (GD) algorithm and a new algorithm, which we call EG. They both maintain a weight vector using simple updates. For the GD algorithm, the update is based on subtracting the gradient of the squared error made on a prediction. The EG algorithm uses the components of the gradient in the exponents of factors that are used in updating the weight vector multiplicatively. We present worst-case loss bounds for EG and compare them to previously known bounds for the GD algorithm. The bounds suggest that the losses of the algorithms are in general incomparable, but EG has a much smaller loss if only few components of the input are relevant for the predictions. We have performed experiments which show that our worst-case upper bounds are quite tight already on simple artificial data. ] 1997 Academic Press",
"title": ""
},
{
"docid": "df09834abe25199ac7b3205d657fffb2",
"text": "In modern wireless communications products it is required to incorporate more and more different functions to comply with current market trends. A very attractive function with steadily growing market penetration is local positioning. To add this feature to low-cost mass-market devices without additional power consumption, it is desirable to use commercial communication chips and standards for localization of the wireless units. In this paper we present a concept to measure the distance between two IEEE 802.15.4 (ZigBee) compliant devices. The presented prototype hardware consists of a low- cost 2.45 GHz ZigBee chipset. For localization we use standard communication packets as transmit signals. Thus simultaneous data transmission and transponder localization is feasible. To achieve high positioning accuracy even in multipath environments, a coherent synthesis of measurements in multiple channels and a special signal phase evaluation concept is applied. With this technique the full available ISM bandwidth of 80 MHz is utilized. In first measurements with two different frequency references-a low-cost oscillator and a temperatur-compensated crystal oscillator-a positioning bias error of below 16 cm and 9 cm was obtained. The standard deviation was less than 3 cm and 1 cm, respectively. It is demonstrated that compared to signal correlation in time, the phase processing technique yields an accuracy improvement of roughly an order of magnitude.",
"title": ""
}
] |
scidocsrr
|
c2998fed4e899382b5d39ff452daddc4
|
REINFORCED CONCRETE WALL RESPONSE UNDER UNI-AND BI-DIRECTIONAL LOADING
|
[
{
"docid": "7a06c1b73662a377875da0ea2526c610",
"text": "a Earthquake Engineering and Structural Dynamics Laboratory (EESD), School of Architecture, Civil and Environmental Engineering (ENAC), École Polytechnique Fédérale de Lausanne (EPFL), EPFL ENAC IIC EESD, GC B2 515, Station 18, CH – 1015 Lausanne, Switzerland b Earthquake Engineering and Structural Dynamics Laboratory (EESD), School of Architecture, Civil and Environmental Engineering (ENAC), École Polytechnique Fédérale de Lausanne (EPFL), EPFL ENAC IIC EESD, GC B2 504, Station 18, CH – 1015 Lausanne, Switzerland",
"title": ""
}
] |
[
{
"docid": "4b7e71b412770cbfe059646159ec66ca",
"text": "We present empirical evidence to demonstrate that there is little or no difference between the Java Virtual Machine and the .NET Common Language Runtime, as regards the compilation and execution of object-oriented programs. Then we give details of a case study that proves the superiority of the Common Language Runtime as a target for imperative programming language compilers (in particular GCC).",
"title": ""
},
{
"docid": "76f9b2059a99eb9cc1ed2d9dc5686724",
"text": "This paper surveys the results of various studies on 3-D image coding. Themes are focused on efficient compression and display-independent representation of 3-D images. Most of the works on 3-D image coding have been concentrated on the compression methods tuned for each of the 3-D image formats (stereo pairs, multi-view images, volumetric images, holograms and so on). For the compression of stereo images, several techniques concerned with the concept of disparity compensation have been developed. For the compression of multi-view images, the concepts of disparity compensation and epipolar plane image (EPI) are the efficient ways of exploiting redundancies between multiple views. These techniques, however, heavily depend on the limited camera configurations. In order to consider many other multi-view configurations and other types of 3-D images comprehensively, more general platform for the 3-D image representation is introduced, aiming to outgrow the framework of 3-D “image” communication and to open up a novel field of technology, which should be called the “spatial” communication. Especially, the light ray based method has a wide range of application, including efficient transmission of the physical world, as well as integration of the virtual and physical worlds. key words: 3-D image coding, stereo images, multi-view images, panoramic images, volumetric images, holograms, displayindependent representation, light rays, spatial communication",
"title": ""
},
{
"docid": "9490f117f153a16152237a5a6b08c0a3",
"text": "Evidence from macaque monkey tracing studies suggests connectivity-based subdivisions within the precuneus, offering predictions for similar subdivisions in the human. Here we present functional connectivity analyses of this region using resting-state functional MRI data collected from both humans and macaque monkeys. Three distinct patterns of functional connectivity were demonstrated within the precuneus of both species, with each subdivision suggesting a discrete functional role: (i) the anterior precuneus, functionally connected with the superior parietal cortex, paracentral lobule, and motor cortex, suggesting a sensorimotor region; (ii) the central precuneus, functionally connected to the dorsolateral prefrontal, dorsomedial prefrontal, and multimodal lateral inferior parietal cortex, suggesting a cognitive/associative region; and (iii) the posterior precuneus, displaying functional connectivity with adjacent visual cortical regions. These functional connectivity patterns were differentiated from the more ventral networks associated with the posterior cingulate, which connected with limbic structures such as the medial temporal cortex, dorsal and ventromedial prefrontal regions, posterior lateral inferior parietal regions, and the lateral temporal cortex. Our findings are consistent with predictions from anatomical tracer studies in the monkey, and provide support that resting-state functional connectivity (RSFC) may in part reflect underlying anatomy. These subdivisions within the precuneus suggest that neuroimaging studies will benefit from treating this region as anatomically (and thus functionally) heterogeneous. Furthermore, the consistency between functional connectivity networks in monkeys and humans provides support for RSFC as a viable tool for addressing cross-species comparisons of functional neuroanatomy.",
"title": ""
},
{
"docid": "fc62b094df3093528c6846e405f55e39",
"text": "Correctly classifying a skin lesion is one of the first steps towards treatment. We propose a novel convolutional neural network (CNN) architecture for skin lesion classification designed to learn based on information from multiple image resolutions while leveraging pretrained CNNs. While traditional CNNs are generally trained on a single resolution image, our CNN is composed of multiple tracts, where each tract analyzes the image at a different resolution simultaneously and learns interactions across multiple image resolutions using the same field-of-view. We convert a CNN, pretrained on a single resolution, to work for multi-resolution input. The entire network is fine-tuned in a fully learned end-to-end optimization with auxiliary loss functions. We show how our proposed novel multi-tract network yields higher classification accuracy, outperforming state-of-the-art multi-scale approaches when compared over a public skin lesion dataset.",
"title": ""
},
{
"docid": "c7405ff209148bcba4283e57c91f63f9",
"text": "Differential search algorithm (DS) is a relatively new evolutionary algorithm inspired by the Brownian-like random-walkmovement which is used by an organism to migrate. It has been verified to be more effective than ABC, JDE, JADE, SADE, EPSDE, GSA, PSO2011, and CMA-ES. In this paper, we propose four improved solution search algorithms, namely “DS/rand/1,” “DS/rand/2,” “DS/current to rand/1,” and “DS/current to rand/2” to search the new space and enhance the convergence rate for the global optimization problem. In order to verify the performance of different solution search methods, 23 benchmark functions are employed. Experimental results indicate that the proposed algorithm performs better than, or at least comparable to, the original algorithm when considering the quality of the solution obtained. However, these schemes cannot still achieve the best solution for all functions. In order to further enhance the convergence rate and the diversity of the algorithm, a composite differential search algorithm (CDS) is proposed in this paper. This new algorithm combines three new proposed search schemes including “DS/rand/1,” “DS/rand/2,” and “DS/current to rand/1” with three control parameters using a random method to generate the offspring. Experiment results show that CDS has a faster convergence rate and better search ability based on the 23 benchmark functions.",
"title": ""
},
{
"docid": "0cf9ef0e5e406509f35c0dcd7ea598af",
"text": "This paper proposes a method to reduce cogging torque of a single side Axial Flux Permanent Magnet (AFPM) motor according to analysis results of finite element analysis (FEA) method. First, the main cause of generated cogging torque will be studied using three dimensional FEA method. In order to reduce the cogging torque, a dual layer magnet step skewed (DLMSS) method is proposed to determine the shape of dual layer magnets. The skewed angle of magnetic poles between these two layers is determined using equal air gap flux of inner and outer layers. Finally, a single-sided AFPM motor based on the proposed methods is built as experimental platform to verify the effectiveness of the design. Meanwhile, the differences between design and tested results will be analyzed for future research and improvement.",
"title": ""
},
{
"docid": "4016ad494a953023f982b8a4876bc8c1",
"text": "Visual tracking is one of the most important field of computer vision. It has immense number of applications ranging from surveillance to hi-fi military applications. This paper is based on the application developed for automatic visual tracking and fire control system for anti-aircraft machine gun (AAMG). Our system mainly consists of camera, as visual sensor; mounted on a 2D-moving platform attached with 2GHz embedded system through RS-232 and AAMG mounted on the same moving platform. Camera and AAMG are both bore-sighted. Correlation based template matching algorithm has been used for automatic visual tracking. This is the algorithm used in civilian and military automatic target recognition, surveillance and tracking systems. The algorithm does not give robust performance in different environments, especially in clutter and obscured background, during tracking. So, motion and prediction algorithms have been integrated with it to achieve robustness and better performance for real-time tracking. Visual tracking is also used to calculate lead angle, which is a vital component of such fire control systems. Lead is angular correction needed to compensate for the target motion during the time of flight of the projectile, to accurately hit the target. Although at present lead computation is not robust due to some limitation as lead calculation mostly relies on gunner intuition. Even then by the integrated implementation of lead angle with visual tracking and control algorithm for moving platform, we have been able to develop a system which detects tracks and destroys the target of interest.",
"title": ""
},
{
"docid": "12f717b4973a5290233d6f03ba05626b",
"text": "We present a modular approach for analyzing calcium imaging recordings of large neuronal ensembles. Our goal is to simultaneously identify the locations of the neurons, demix spatially overlapping components, and denoise and deconvolve the spiking activity from the slow dynamics of the calcium indicator. Our approach relies on a constrained nonnegative matrix factorization that expresses the spatiotemporal fluorescence activity as the product of a spatial matrix that encodes the spatial footprint of each neuron in the optical field and a temporal matrix that characterizes the calcium concentration of each neuron over time. This framework is combined with a novel constrained deconvolution approach that extracts estimates of neural activity from fluorescence traces, to create a spatiotemporal processing algorithm that requires minimal parameter tuning. We demonstrate the general applicability of our method by applying it to in vitro and in vivo multi-neuronal imaging data, whole-brain light-sheet imaging data, and dendritic imaging data.",
"title": ""
},
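The record above casts calcium-imaging demixing as a nonnegative matrix factorization of the movie into spatial footprints and temporal traces. The sketch below is a minimal stand-in that runs plain NMF from scikit-learn (no spatial constraints, no deconvolution) on a synthetic toy movie; the footprint shapes, noise level, and decay kernel are invented for illustration, and the real method adds the constraints and deconvolution described in the abstract.

```python
import numpy as np
from sklearn.decomposition import NMF

# hypothetical toy movie: 100 pixels x 500 frames, two non-overlapping "neurons" + noise
rng = np.random.default_rng(1)
A_true = np.zeros((100, 2))
A_true[10:30, 0] = 1.0                          # spatial footprint of neuron 1
A_true[60:85, 1] = 1.0                          # spatial footprint of neuron 2
spikes = rng.poisson(0.05, size=(2, 500)).astype(float)
kernel = 0.95 ** np.arange(50)                  # assumed calcium-indicator decay
C_true = np.apply_along_axis(lambda s: np.convolve(s, kernel)[:500], 1, spikes)
Y = A_true @ C_true + 0.01 * rng.random((100, 500))

# plain NMF stands in for the constrained factorization: Y ~ A @ C with A >= 0, C >= 0
model = NMF(n_components=2, init="nndsvda", max_iter=500, random_state=0)
A_hat = model.fit_transform(Y)                  # pixels x components (footprints)
C_hat = model.components_                       # components x frames (calcium traces)
```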
{
"docid": "002f49b0aa994b286a106d6b75ec8b2a",
"text": "We introduce a library of geometric voxel features for CAD surface recognition/retrieval tasks. Our features include local versions of the intrinsic volumes (the usual 3D volume, surface area, integrated mean and Gaussian curvature) and a few closely related quantities. We also compute Haar wavelet and statistical distribution features by aggregating raw voxel features. We apply our features to object classification on the ESB data set and demonstrate accurate results with a small number of shallow decision trees.",
"title": ""
},
{
"docid": "8cddb1fed30976de82d62de5066a5ce6",
"text": "Today, more and more people have their virtual identities on the web. It is common that people are users of more than one social network and also their friends may be registered on multiple websites. A facility to aggregate our online friends into a single integrated environment would enable the user to keep up-to-date with their virtual contacts more easily, as well as to provide improved facility to search for people across different websites. In this paper, we propose a method to identify users based on profile matching. We use data from two popular social networks to study the similarity of profile definition. We evaluate the importance of fields in the web profile and develop a profile comparison tool. We demonstrate the effectiveness and efficiency of our tool in identifying and consolidating duplicated users on different websites.",
"title": ""
},
{
"docid": "482bc3d151948bad9fbfa02519fbe61a",
"text": "Evolution has resulted in highly developed abilities in many natural intelligences to quickly and accurately predict mechanical phenomena. Humans have successfully developed laws of physics to abstract and model such mechanical phenomena. In the context of artificial intelligence, a recent line of work has focused on estimating physical parameters based on sensory data and use them in physical simulators to make long-term predictions. In contrast, we investigate the effectiveness of a single neural network for end-to-end long-term prediction of mechanical phenomena. Based on extensive evaluation, we demonstrate that such networks can outperform alternate approaches having even access to ground-truth physical simulators, especially when some physical parameters are unobserved or not known a-priori. Further, our network outputs a distribution of outcomes to capture the inherent uncertainty in the data. Our approach demonstrates for the first time the possibility of making actionable long-term predictions from sensor data without requiring to explicitly model the underlying physical laws.",
"title": ""
},
{
"docid": "dfb83ad16854797137e34a5c7cb110ae",
"text": "The increasing computing requirements for GPUs (Graphics Processing Units) have favoured the design and marketing of commodity devices that nowadays can also be used to accelerate general purpose computing. Therefore, future high performance clusters intended for HPC (High Performance Computing) will likely include such devices. However, high-end GPU-based accelerators used in HPC feature a considerable energy consumption, so that attaching a GPU to every node of a cluster has a strong impact on its overall power consumption. In this paper we detail a framework that enables remote GPU acceleration in HPC clusters, thus allowing a reduction in the number of accelerators installed in the cluster. This leads to energy, acquisition, maintenance, and space savings.",
"title": ""
},
{
"docid": "b73526f1fb0abb4373421994dbd07822",
"text": "in our country around 2.78% of peoples are not able to speak (dumb). Their communications with others are only using the motion of their hands and expressions. We proposed a new technique called artificial speaking mouth for dumb people. It will be very helpful to them for conveying their thoughts to others. Some peoples are easily able to get the information from their motions. The remaining is not able to understand their way of conveying the message. In order to overcome the complexity the artificial mouth is introduced for the dumb peoples. This system is based on the motion sensor. According to dumb people, for every motion they have a meaning. That message is kept in a database. Likewise all templates are kept in the database. In the real time the template database is fed into a microcontroller and the motion sensor is fixed in their hand. For every action the motion sensors get accelerated and give the signal to the microcontroller. The microcontroller matches the motion with the database and produces the speech signal. The output of the system is using the speaker. By properly updating the database the dumb will speak like a normal person using the artificial mouth. The system also includes a text to speech conversion (TTS) block that interprets the matched gestures.",
"title": ""
},
{
"docid": "12b115e3b759fcb87956680d6e89d7aa",
"text": "The calibration system presented in this article enables to calculate optical parameters i.e. intrinsic and extrinsic of both thermal and visual cameras used for 3D reconstruction of thermal images. Visual cameras are in stereoscopic set and provide a pair of stereo images of the same object which are used to perform 3D reconstruction of the examined object [8]. The thermal camera provides information about temperature distribution on the surface of an examined object. In this case the term of 3D reconstruction refers to assigning to each pixel of one of the stereo images (called later reference image) a 3D coordinate in the respective camera reference frame [8]. The computed 3D coordinate is then re-projected on to the thermograph and thus to the known 3D position specific temperature is assigned. In order to remap the 3D coordinates on to thermal image it is necessary to know the position of thermal camera against visual camera and therefore a calibration of the set of the three cameras must be performed. The presented calibration system includes special calibration board (fig.1) whose characteristic points of well known position are recognizable both by thermal and visual cameras. In order to detect calibration board characteristic points’ image coordinates, especially in thermal camera, a new procedure was designed.",
"title": ""
},
{
"docid": "79465d290ab299b9d75e9fa617d30513",
"text": "In this paper we describe computational experience in solving unconstrained quadratic zero-one problems using a branch and bound algorithm. The algorithm incorporates dynamic preprocessing techniques for forcing variables and heuristics to obtain good starting points. Computational results and comparisons with previous studies on several hundred test problems with dimensions up to 200 demonstrate the efficiency of our algorithm. In dieser Arbeit beschreiben wir rechnerische Erfahrungen bei der Lösung von unbeschränkten quadratischen Null-Eins-Problemen mit einem “Branch and Bound”-Algorithmus. Der Algorithmus erlaubt dynamische Vorbereitungs-Techniken zur Erzwingung ausgewählter Variablen und Heuristiken zur Wahl von guten Startpunkten. Resultate von Berechnungen und Vergleiche mit früheren Arbeiten mit mehreren hundert Testproblemen mit Dimensionen bis 200 zeigen die Effizienz unseres Algorithmus.",
"title": ""
},
{
"docid": "e112af9e35690b64acc7242611b39dd2",
"text": "Body sensor network systems can help people by providing healthcare services such as medical monitoring, memory enhancement, medical data access, and communication with the healthcare provider in emergency situations through the SMS or GPRS [1,2]. Continuous health monitoring with wearable [3] or clothing-embedded transducers [4] and implantable body sensor networks [5] will increase detection of emergency conditions in at risk patients. Not only the patient, but also their families will benefit from these. Also, these systems provide useful methods to remotely acquire and monitor the physiological signals without the need of interruption of the patient’s normal life, thus improving life quality [6,7].",
"title": ""
},
{
"docid": "9121462cf9ac2b2c55b7a1c96261472f",
"text": "The main goal of this chapter is to give characteristics, evaluation methodologies, and research examples of collaborative augmented reality (AR) systems from a perspective of human-to-human communication. The chapter introduces classifications of conventional and 3D collaborative systems as well as typical characteristics and application examples of collaborative AR systems. Next, it discusses design considerations of collaborative AR systems from a perspective of human communication and then discusses evaluation methodologies of human communication behaviors. The next section discusses a variety of collaborative AR systems with regard to display devices used. Finally, the chapter gives conclusion with future directions. This will be a good starting point to learn existing collaborative AR systems, their advantages and limitations. This chapter will also contribute to the selection of appropriate hardware configurations and software designs of a collaborative AR system for given conditions.",
"title": ""
},
{
"docid": "5cd8ee9a938ed087e2a3bc667991557d",
"text": "Expense reimbursement is a time-consuming and labor-intensive process across organizations. In this paper, we present a prototype expense reimbursement system that dramatically reduces the elapsed time and costs involved, by eliminating paper from the process life cycle. Our complete solution involves (1) an electronic submission infrastructure that provides multi- channel image capture, secure transport and centralized storage of paper documents; (2) an unconstrained data mining approach to extracting relevant named entities from un-structured document images; (3) automation of auditing procedures that enables automatic expense validation with minimum human interaction.\n Extracting relevant named entities robustly from document images with unconstrained layouts and diverse formatting is a fundamental technical challenge to image-based data mining, question answering, and other information retrieval tasks. In many applications that require such capability, applying traditional language modeling techniques to the stream of OCR text does not give satisfactory result due to the absence of linguistic context. We present an approach for extracting relevant named entities from document images by combining rich page layout features in the image space with language content in the OCR text using a discriminative conditional random field (CRF) framework. We integrate this named entity extraction engine into our expense reimbursement solution and evaluate the system performance on large collections of real-world receipt images provided by IBM World Wide Reimbursement Center.",
"title": ""
},
{
"docid": "4775bf71a5eea05b77cafa53daefcff9",
"text": "There is mounting empirical evidence that interacting with nature delivers measurable benefits to people. Reviews of this topic have generally focused on a specific type of benefit, been limited to a single discipline, or covered the benefits delivered from a particular type of interaction. Here we construct novel typologies of the settings, interactions and potential benefits of people-nature experiences, and use these to organise an assessment of the benefits of interacting with nature. We discover that evidence for the benefits of interacting with nature is geographically biased towards high latitudes and Western societies, potentially contributing to a focus on certain types of settings and benefits. Social scientists have been the most active researchers in this field. Contributions from ecologists are few in number, perhaps hindering the identification of key ecological features of the natural environment that deliver human benefits. Although many types of benefits have been studied, benefits to physical health, cognitive performance and psychological well-being have received much more attention than the social or spiritual benefits of interacting with nature, despite the potential for important consequences arising from the latter. The evidence for most benefits is correlational, and although there are several experimental studies, little as yet is known about the mechanisms that are important for delivering these benefits. For example, we do not know which characteristics of natural settings (e.g., biodiversity, level of disturbance, proximity, accessibility) are most important for triggering a beneficial interaction, and how these characteristics vary in importance among cultures, geographic regions and socio-economic groups. These are key directions for future research if we are to design landscapes that promote high quality interactions between people and nature in a rapidly urbanising world.",
"title": ""
},
{
"docid": "d1eed1d7875930865944c98fbab5f7e1",
"text": "Optic disc (OD) and fovea locations are two important anatomical landmarks in automated analysis of retinal disease in color fundus photographs. This paper presents a new, fast, fully automatic optic disc and fovea localization algorithm developed for diabetic retinopathy (DR) screening. The optic disc localization methodology comprises of two steps. First, the OD location is identified using template matching and directional matched filter. To reduce false positives due to bright areas of pathology, we exploit vessel characteristics inside the optic disc. The location of the fovea is estimated as the point of lowest matched filter response within a search area determined by the optic disc location. Second, optic disc segmentation is performed. Based on the detected optic disc location, a fast hybrid level-set algorithm which combines the region information and edge gradient to drive the curve evolution is used to segment the optic disc boundary. Extensive evaluation was performed on 1200 images (Messidor) composed of 540 images of healthy retinas, 431 images with DR but no risk of macular edema (ME), and 229 images with DR and risk of ME. The OD location methodology obtained 98.3% success rate, while fovea location achieved 95% success rate. The average mean absolute distance (MAD) between the OD segmentation algorithm and “gold standard” is 10.5% of estimated OD radius. Qualitatively, 97% of the images achieved Excellent to Fair performance for OD segmentation. The segmentation algorithm performs well even on blurred images.",
"title": ""
}
] |
scidocsrr
|
5daf5fcdb0977d9e2ddda86370a14e35
|
Relaxed Quantization for Discretized Neural Networks
|
[
{
"docid": "4b54cf876d3ab7c7277605125055c6c3",
"text": "We propose a practical method for L0 norm regularization for neural networks: pruning the network during training by encouraging weights to become exactly zero. Such regularization is interesting since (1) it can greatly speed up training and inference, and (2) it can improve generalization. AIC and BIC, well-known model selection criteria, are special cases of L0 regularization. However, since the L0 norm of weights is non-differentiable, we cannot incorporate it directly as a regularization term in the objective function. We propose a solution through the inclusion of a collection of non-negative stochastic gates, which collectively determine which weights to set to zero. We show that, somewhat surprisingly, for certain distributions over the gates, the expected L0 regularized objective is differentiable with respect to the distribution parameters. We further propose the hard concrete distribution for the gates, which is obtained by “stretching” a binary concrete distribution and then transforming its samples with a hard-sigmoid. The parameters of the distribution over the gates can then be jointly optimized with the original network parameters. As a result our method allows for straightforward and efficient learning of model structures with stochastic gradient descent and allows for conditional computation in a principled way. We perform various experiments to demonstrate the effectiveness of the resulting approach and regularizer.",
"title": ""
},
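The record above describes relaxing the L0 penalty with stochastic gates drawn from a hard concrete distribution. Below is a small NumPy sketch of the two pieces that matter at a glance: sampling a gate and evaluating the expected-L0 term. The stretch limits gamma and zeta, the temperature beta, and the toy log-alpha values follow common choices but are assumptions here; a real implementation would keep these operations inside an autodiff framework so the gate parameters can be trained.

```python
import numpy as np

def hard_concrete_gate(log_alpha, beta=2.0 / 3.0, gamma=-0.1, zeta=1.1, rng=None):
    """Sample gates z in [0, 1] that are exactly 0 or 1 with non-zero probability."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform(1e-6, 1.0 - 1e-6, size=np.shape(log_alpha))
    # binary-concrete sample, stretched to (gamma, zeta), then hard-clipped to [0, 1]
    s = 1.0 / (1.0 + np.exp(-(np.log(u) - np.log(1.0 - u) + log_alpha) / beta))
    return np.clip(s * (zeta - gamma) + gamma, 0.0, 1.0)

def expected_l0(log_alpha, beta=2.0 / 3.0, gamma=-0.1, zeta=1.1):
    """P(gate != 0) per weight; its sum is the differentiable L0 penalty term."""
    return 1.0 / (1.0 + np.exp(-(log_alpha - beta * np.log(-gamma / zeta))))

log_alpha = np.array([-3.0, 0.0, 3.0])          # toy per-weight location parameters
z = hard_concrete_gate(log_alpha, rng=np.random.default_rng(0))
penalty = expected_l0(log_alpha).sum()          # add lambda * penalty to the training loss
```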
{
"docid": "b9aa1b23ee957f61337e731611a6301a",
"text": "We propose DoReFa-Net, a method to train convolutional neural networks that have low bitwidth weights and activations using low bitwidth parameter gradients. In particular, during backward pass, parameter gradients are stochastically quantized to low bitwidth numbers before being propagated to convolutional layers. As convolutions during forward/backward passes can now operate on low bitwidth weights and activations/gradients respectively, DoReFa-Net can use bit convolution kernels to accelerate both training and inference. Moreover, as bit convolutions can be efficiently implemented on CPU, FPGA, ASIC and GPU, DoReFatNet opens the way to accelerate training of low bitwidth neural network on these hardware. Our experiments on SVHN and ImageNet datasets prove that DoReFa-Net can achieve comparable prediction accuracy as 32-bit counterparts. For example, a DoReFa-Net derived from AlexNet that has 1-bit weights, 2-bit activations, can be trained from scratch using 4-bit gradients to get 47% top-1 accuracy on ImageNet validation set.1 The DoReFa-Net AlexNet model is released publicly.",
"title": ""
},
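As referenced above, here is a brief NumPy sketch of k-bit quantization in the DoReFa style: values are mapped into [0, 1], rounded onto a uniform grid of 2^k levels, and mapped back. This only shows the forward pass; during training the rounding is treated as identity in the backward pass (a straight-through estimator), and the toy inputs are made up for illustration.

```python
import numpy as np

def quantize_k(x, k):
    """Round x in [0, 1] onto a uniform grid with 2^k levels."""
    n = 2 ** k - 1
    return np.round(x * n) / n

def weight_quant(w, k):
    """k-bit weight quantization in the DoReFa style (forward pass only)."""
    t = np.tanh(w)
    x = t / (2.0 * np.max(np.abs(t))) + 0.5   # squash weights into [0, 1]
    return 2.0 * quantize_k(x, k) - 1.0        # back to [-1, 1], on a 2^k grid

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
w_1bit = weight_quant(w, k=1)                  # binary weights in {-1, +1}
a = np.clip(rng.normal(size=(4,)), 0.0, 1.0)   # pretend pre-activation output in [0, 1]
a_2bit = quantize_k(a, k=2)                    # 2-bit activations
```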
{
"docid": "2639f5d735abed38ed4f7ebf11072087",
"text": "The rising popularity of intelligent mobile devices and the daunting computational cost of deep learning-based models call for efficient and accurate on-device inference schemes. We propose a quantization scheme that allows inference to be carried out using integer-only arithmetic, which can be implemented more efficiently than floating point inference on commonly available integer-only hardware. We also co-design a training procedure to preserve end-to-end model accuracy post quantization. As a result, the proposed quantization scheme improves the tradeoff between accuracy and on-device latency. The improvements are significant even on MobileNets, a model family known for run-time efficiency, and are demonstrated in ImageNet classification and COCO detection on popular CPUs.",
"title": ""
},
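The record above describes integer-only inference built on an affine mapping between real values and integers. The sketch below shows the basic scale/zero-point quantize-dequantize round trip in NumPy; the 8-bit range, the min/max calibration, and the test data are assumptions for illustration, and the paper's scheme additionally folds these parameters into integer-only matrix multiplications and co-trains the network, which is not shown.

```python
import numpy as np

def affine_quantize(x, num_bits=8):
    """Asymmetric affine quantization: real x is represented as scale * (q - zero_point)."""
    qmin, qmax = 0, 2 ** num_bits - 1
    x_min, x_max = min(float(x.min()), 0.0), max(float(x.max()), 0.0)  # range must contain 0
    scale = max((x_max - x_min) / (qmax - qmin), 1e-8)
    zero_point = int(round(qmin - x_min / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return scale * (q.astype(np.float32) - zero_point)

x = np.random.default_rng(0).normal(size=1000).astype(np.float32)
q, scale, zp = affine_quantize(x)
max_err = np.abs(dequantize(q, scale, zp) - x).max()   # on the order of scale / 2
```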
{
"docid": "93f89a636828df50dfe48ffa3e868ea6",
"text": "The reparameterization trick enables the optimization of large scale stochastic computation graphs via gradient descent. The essence of the trick is to refactor each stochastic node into a differentiable function of its parameters and a random variable with fixed distribution. After refactoring, the gradients of the loss propagated by the chain rule through the graph are low variance unbiased estimators of the gradients of the expected loss. While many continuous random variables have such reparameterizations, discrete random variables lack continuous reparameterizations due to the discontinuous nature of discrete states. In this work we introduce concrete random variables – continuous relaxations of discrete random variables. The concrete distribution is a new family of distributions with closed form densities and a simple reparameterization. Whenever a discrete stochastic node of a computation graph can be refactored into a one-hot bit representation that is treated continuously, concrete stochastic nodes can be used with automatic differentiation to produce low-variance biased gradients of objectives (including objectives that depend on the log-likelihood of latent stochastic nodes) on the corresponding discrete graph. We demonstrate their effectiveness on density estimation and structured prediction tasks using neural networks.",
"title": ""
},
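The record above introduces the concrete (Gumbel-softmax) relaxation of a categorical variable. The NumPy sketch below draws one relaxed one-hot sample by adding Gumbel noise to the logits and applying a tempered softmax; the temperature and the toy class probabilities are illustrative choices, and in practice the reparameterized sample would live inside an autodiff graph so that gradients flow back to the logits.

```python
import numpy as np

def sample_concrete(logits, temperature=0.5, rng=None):
    """One relaxed one-hot sample from a categorical with the given logits."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform(1e-8, 1.0 - 1e-8, size=np.shape(logits))
    gumbel = -np.log(-np.log(u))                 # Gumbel(0, 1) noise
    y = (logits + gumbel) / temperature
    y = np.exp(y - np.max(y))                    # numerically stable softmax
    return y / np.sum(y)

logits = np.log(np.array([0.7, 0.2, 0.1]))       # toy class probabilities
soft = sample_concrete(logits, temperature=0.5, rng=np.random.default_rng(0))
hard = np.eye(len(soft))[np.argmax(soft)]        # discretize at test time if needed
```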
{
"docid": "7a5d22ae156d6a62cfd080c2a58103d2",
"text": "Stochastic neurons and hard non-linearities can be useful for a number of reasons in deep learning models, but in many cases they pose a challenging problem: how to estimate the gradient of a loss function with respect to the input of such stochastic or non-smooth neurons? I.e., can we “back-propagate” through these stochastic neurons? We examine this question, existing approaches, and compare four families of solutions, applicable in different settings. One of them is the minimum variance unbiased gradient estimator for stochatic binary neurons (a special case of the REINFORCE algorithm). A second approach, introduced here, decomposes the operation of a binary stochastic neuron into a stochastic binary part and a smooth differentiable part, which approximates the expected effect of the pure stochatic binary neuron to first order. A third approach involves the injection of additive or multiplicative noise in a computational graph that is otherwise differentiable. A fourth approach heuristically copies the gradient with respect to the stochastic output directly as an estimator of the gradient with respect to the sigmoid argument (we call this the straight-through estimator). To explore a context where these estimators are useful, we consider a small-scale version of conditional computation, where sparse stochastic units form a distributed representation of gaters that can turn off in combinatorially many ways large chunks of the computation performed in the rest of the neural network. In this case, it is important that the gating units produce an actual 0 most of the time. The resulting sparsity can be potentially be exploited to greatly reduce the computational cost of large deep networks for which conditional computation would be useful.",
"title": ""
}
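As referenced above, the following toy sketch spells out the straight-through estimator for a layer of binary stochastic neurons: the forward pass samples hard 0/1 values, while the backward pass copies the incoming gradient to the sigmoid argument as if the sampling were the identity. The shapes, toy logits, and the sigmoid-derivative variant noted in the comment are my own illustrative choices.

```python
import numpy as np

def binary_stochastic_forward(logits, rng):
    """Forward pass: sample hard 0/1 units from sigmoid probabilities."""
    p = 1.0 / (1.0 + np.exp(-logits))
    h = (rng.uniform(size=p.shape) < p).astype(float)
    return h, p

def straight_through_backward(grad_wrt_h, p):
    """Backward pass: copy the gradient w.r.t. the binary output straight to the
    sigmoid argument, as if sampling were the identity.  A common variant instead
    returns grad_wrt_h * p * (1 - p), i.e. multiplies by the sigmoid derivative."""
    return grad_wrt_h

rng = np.random.default_rng(0)
logits = np.array([-1.0, 0.0, 2.0])
h, p = binary_stochastic_forward(logits, rng)
grad_logits = straight_through_backward(np.ones_like(h), p)   # pretend upstream grad is 1
```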
] |
[
{
"docid": "76d2ba510927bd7f56155e1cf1cbbc52",
"text": "As the first part of a study that aims to propose tools to take into account some electromagnetic compatibility aspects, we have developed a model to predict the electric and magnetic fields emitted by a device. This model is based on a set of equivalent sources (electric and magnetic dipoles) obtained from the cartographies of the tangential components of electric and magnetic near fields. One of its features is to be suitable for a commercial electromagnetic simulation tool based on a finite element method. This paper presents the process of modeling and the measurement and calibration procedure to obtain electromagnetic fields necessary for the model; the validation and the integration of the model into a commercial electromagnetic simulator are then performed on a Wilkinson power divider.",
"title": ""
},
{
"docid": "8c1005abba5be8b7d67dcd3397a356d1",
"text": "Forecasting short-term cycle time (CT) of wafer lots is crucial for production planning and control in the wafer manufacturing. A novel recurrent neural network called “bilateral long short-term memory (bilateral LSTM)” is proposed to model a short-term cycle time forecasting (CTF) of each re-entrant period of a wafer lot. First, a two-dimensional (2-D) architecture is designed to transmit the wafer and layer correlations by using wafer and layer connections. Subsequently, aiming to store various error signals caused by the diverse CT data, a multiply memory structure is presented to extend the capacity of constant error carousel (CEC) in the LSTM model. The experiment results indicate that the proposed model outperforms conventional models in the accuracy and stability for the short-term CTF. Further comparative experiments reveal that the 2-D architecture can enhance the prediction accuracy and the multi-CEC structure can improve the forecasting stability for the short-term CTF of wafer lots.",
"title": ""
},
{
"docid": "14d5fe4a4af7c6d2e530eae57d359a9f",
"text": "The new formulation of the stochastic vortex particle method has been presented. Main elements of the algorithms: the construction of the particles, governing equations, stretching modeling and boundary condition enforcement are described. The test case is the unsteady flow past a spherical body. Sample results concerning patterns in velocity and vorticity fields, streamlines, pressure and aerodynamic forces are presented.",
"title": ""
},
{
"docid": "6e94a736aa913fa4585025ec52675f48",
"text": "A benefit of model-driven engineering relies on the automatic generation of artefacts from high-level models through intermediary levels using model transformations. In such a process, the input must be well designed, and the model transformations should be trustworthy. Because of the specificities of models and transformations, classical software test techniques have to be adapted. Among these techniques, mutation analysis has been ported, and a set of mutation operators has been defined. However, it currently requires considerable manual work and suffers from the test data set improvement activity. This activity is a difficult and time-consuming job and reduces the benefits of the mutation analysis. This paper addresses the test data set improvement activity. Model transformation traceability in conjunction with a model of mutation operators and a dedicated algorithm allow to automatically or semi-automatically produce improved test models. The approach is validated and illustrated in two case studies written in Kermeta. Copyright © 2014 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "a4202b9db211de06341a7f3dcd5e67f3",
"text": "Induction industrial welding is one of the most straightforward application for the power inverter. Practical industrial welders require at the same time power levels up to 1 MW while frequencies are in the range between 200–500 kHz, depending on the characteristics of the tube to be weld. SiC devices, as per its superior high frequency characteristics, are ideal components for this type of high power, high frequency inverters, but several aspects as the long term reliability, short circuit capability, cost issues of SiC material and others have to be considered in the use of this material in industrial welders. Another important issue in todays IH inverters is smartness and their capability to be integrated in a Industry 4.0 network, as actual complex IH systems require advanced communication features The paper aims to give practical technical considerations, based on experimental results, for the realistic use of SiC devices in resonant inverters for this type of industrial material processing application as well as some ideas of providing intelligence to IH converters and advanced services for the integration in an Industry 4.0 network.",
"title": ""
},
{
"docid": "38db17ce89e1a046d7d37213b59c8163",
"text": "Cardinality estimation has a wide range of applications and is of particular importance in database systems. Various algorithms have been proposed in the past, and the HyperLogLog algorithm is one of them. In this paper, we present a series of improvements to this algorithm that reduce its memory requirements and significantly increase its accuracy for an important range of cardinalities. We have implemented our proposed algorithm for a system at Google and evaluated it empirically, comparing it to the original HyperLogLog algorithm. Like HyperLogLog, our improved algorithm parallelizes perfectly and computes the cardinality estimate in a single pass.",
"title": ""
},
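The record above builds on the HyperLogLog estimator. For orientation, here is a minimal pure-Python HyperLogLog with 2^10 registers; it implements only the raw estimator (bias constant times m^2 over the sum of 2^-register), without the small-range and large-range corrections or the improvements the paper proposes, and the SHA-1-based hash and register count are arbitrary choices for the sketch.

```python
import hashlib

class HyperLogLog:
    """Minimal HyperLogLog: raw estimator only, no range corrections."""
    def __init__(self, b=10):
        self.b = b
        self.m = 1 << b
        self.registers = [0] * self.m
        self.alpha = 0.7213 / (1.0 + 1.079 / self.m)   # bias constant for m >= 128

    def add(self, item):
        h = int.from_bytes(hashlib.sha1(item.encode()).digest()[:8], "big")
        idx = h >> (64 - self.b)                        # first b bits select a register
        rest = h & ((1 << (64 - self.b)) - 1)           # remaining 64 - b bits
        rank = (64 - self.b) - rest.bit_length() + 1    # position of the leading 1 bit
        self.registers[idx] = max(self.registers[idx], rank)

    def estimate(self):
        z = sum(2.0 ** -r for r in self.registers)
        return self.alpha * self.m * self.m / z

hll = HyperLogLog(b=10)
for i in range(100_000):
    hll.add(f"user-{i}")
print(round(hll.estimate()))   # close to 100000 (typical relative error ~1.04 / sqrt(m))
```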
{
"docid": "713ade80a6c2e0164a0d6fe6ef07be37",
"text": "We review recent work on the role of intrinsic amygdala networks in the regulation of classically conditioned defensive behaviors, commonly known as conditioned fear. These new developments highlight how conditioned fear depends on far more complex networks than initially envisioned. Indeed, multiple parallel inhibitory and excitatory circuits are differentially recruited during the expression versus extinction of conditioned fear. Moreover, shifts between expression and extinction circuits involve coordinated interactions with different regions of the medial prefrontal cortex. However, key areas of uncertainty remain, particularly with respect to the connectivity of the different cell types. Filling these gaps in our knowledge is important because much evidence indicates that human anxiety disorders results from an abnormal regulation of the networks supporting fear learning.",
"title": ""
},
{
"docid": "0737e99613b83104bc9390a46fbc4aeb",
"text": "Natural language text exhibits hierarchical structure in a variety of respects. Ideally, we could incorporate our prior knowledge of this hierarchical structure into unsupervised learning algorithms that work on text data. Recent work by Nickel and Kiela (2017) proposed using hyperbolic instead of Euclidean embedding spaces to represent hierarchical data and demonstrated encouraging results when embedding graphs. In this work, we extend their method with a re-parameterization technique that allows us to learn hyperbolic embeddings of arbitrarily parameterized objects. We apply this framework to learn word and sentence embeddings in hyperbolic space in an unsupervised manner from text corpora. The resulting embeddings seem to encode certain intuitive notions of hierarchy, such as wordcontext frequency and phrase constituency. However, the implicit continuous hierarchy in the learned hyperbolic space makes interrogating the model’s learned hierarchies more difficult than for models that learn explicit edges between items. The learned hyperbolic embeddings show improvements over Euclidean embeddings in some – but not all – downstream tasks, suggesting that hierarchical organization is more useful for some tasks than others.",
"title": ""
},
{
"docid": "7ea3d3002506e0ea6f91f4bdab09c2d5",
"text": "We propose a novel and robust computational framework for automatic detection of deformed 2D wallpaper patterns in real-world images. The theory of 2D crystallographic groups provides a sound and natural correspondence between the underlying lattice of a deformed wallpaper pattern and a degree-4 graphical model. We start the discovery process with unsupervised clustering of interest points and voting for consistent lattice unit proposals. The proposed lattice basis vectors and pattern element contribute to the pairwise compatibility and joint compatibility (observation model) functions in a Markov random field (MRF). Thus, we formulate the 2D lattice detection as a spatial, multitarget tracking problem, solved within an MRF framework using a novel and efficient mean-shift belief propagation (MSBP) method. Iterative detection and growth of the deformed lattice are interleaved with regularized thin-plate spline (TPS) warping, which rectifies the current deformed lattice into a regular one to ensure stability of the MRF model in the next round of lattice recovery. We provide quantitative comparisons of our proposed method with existing algorithms on a diverse set of 261 real-world photos to demonstrate significant advances in accuracy and speed over the state of the art in automatic discovery of regularity in real images.",
"title": ""
},
{
"docid": "0e8ed09f9575975562a0774244e5eafe",
"text": "Map Algebra is a collection of functions for handling continuous spatial data, which allows modeling of different problems and getting new information from the existing data. There is an established set of map algebra functions in the GIS literature, originally proposed by Dana Tomlin. However, the question whether his proposal is complete is still an open problem in GIScience. This paper describes the design of a map algebra that generalizes Tomlin’s map algebra by incorporating topological and directional spatial predicates. Our proposal enables operations that are not directly expressible by Tomlin’s proposal. One of the important results of our paper is to show that Tomlin’s Map Algebra can be defined as an application of topological predicates to coverages. This paper points to a convergence between these two approaches and shows that it is possible to develop a foundational theory for GIScience where topological predicates are the heart of both object-based algebras and field-based algebras.",
"title": ""
},
{
"docid": "040e5e800895e4c6f10434af973bec0f",
"text": "The authors investigated the effect of action gaming on the spatial distribution of attention. The authors used the flanker compatibility effect to separately assess center and peripheral attentional resources in gamers versus nongamers. Gamers exhibited an enhancement in attentional resources compared with nongamers, not only in the periphery but also in central vision. The authors then used a target localization task to unambiguously establish that gaming enhances the spatial distribution of visual attention over a wide field of view. Gamers were more accurate than nongamers at all eccentricities tested, and the advantage held even when a concurrent center task was added, ruling out a trade-off between central and peripheral attention. By establishing the causal role of gaming through training studies, the authors demonstrate that action gaming enhances visuospatial attention throughout the visual field.",
"title": ""
},
{
"docid": "c0b71e1120a65af5b71935bd4daa88fc",
"text": "In a last few decades, development in power electronics systems has created its necessity in industrial and domestic applications like electric drives, UPS, solar and wind power conversion and many more. This paper presents the design, simulation, analysis and fabrication of a three phase, two-level inverter. The Space Vector Pulse Width Modulation (SVPWM) technique is used for the generation of gating signals for the three phase inverter. The proposed work is about real time embedded code generation technique that can be implemented using any microprocessor or microcontroller board of choice. The proposed technique reduces the analogue circuitry and eliminates the need of coding for generation of pulses, thereby making it simple and easy to implement. Control structure of SVPWM is simulated in MATLAB Simulink environment for analysis of different parameters of inverter. Comparative analysis of simulation results and hardware results is presented which shows that embedded code generation technique is very reliable and accurate.",
"title": ""
},
{
"docid": "d83a771852fe065cd376b60966f29972",
"text": "Coupled microring arrangements with balanced gain and loss, also known as parity-time symmetric systems, are investigated both analytically and experimentally. In these configurations, stable single-mode lasing can be achieved at pump powers well above threshold. This self-adaptive mode management technique is broadband and robust to small fabrication imperfections. The results presented in this paper provide a new avenue in designing mode-selective chip-scale in-plane semiconductor lasers by utilizing the complex dynamics of coupled gain/loss cavities.",
"title": ""
},
{
"docid": "9b6cf3a04f2c2b9446ea0cacf6968866",
"text": "This paper discusses an approximate solution to the weighted graph matching prohlem (WGMP) for both undirected and directed graphs. The WGMP is the problem of finding the optimum matching between two weighted graphs, which are graphs with weights at each arc. The proposed method employs an analytic, instead of a combinatorial or iterative, approach to the optimum matching problem of such graphs. By using the eigendecompoyitions of the adjacency matrices (in the case of the undirected graph matching problem) or some Hermitian matrices derived from the adjacency matrices (in the case of the directed graph matching problem), a matching close to the optimum one can be found efficiently when the graphs are sufficiently close to each other. Simulation experiments are also given to evaluate the performance of the proposed method. Index Termy-Eigendecomposition, inexact matching, structural description, structural pattern recognition, weighted graph matching.",
"title": ""
},
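To make the eigendecomposition idea referenced above concrete for the undirected case, the sketch below follows the familiar Umeyama-style recipe: embed each graph by the absolute values of its adjacency eigenvectors, score node pairs by inner products of these embeddings, and resolve the assignment with the Hungarian method (SciPy's linear_sum_assignment). The toy graph, noise level, and this particular combination of eigh plus Hungarian are assumptions of the sketch, not a reproduction of the paper's exact algorithm.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def spectral_match(A, B):
    """Match nodes of undirected weighted graphs A and B via their spectra.
    Returns p with node i of A matched to node p[i] of B."""
    _, Ua = np.linalg.eigh(A)
    _, Ub = np.linalg.eigh(B)
    score = np.abs(Ua) @ np.abs(Ub).T            # similarity of spectral embeddings
    _, cols = linear_sum_assignment(-score)      # maximize the total similarity
    return cols

# toy test: B is A with its nodes relabelled plus a little weight noise
rng = np.random.default_rng(0)
n = 8
A = rng.random((n, n)); A = (A + A.T) / 2; np.fill_diagonal(A, 0)
perm = rng.permutation(n)
B = A[np.ix_(perm, perm)] + 0.01 * rng.standard_normal((n, n))
B = (B + B.T) / 2
match = spectral_match(A, B)
# when the graphs are close enough, match should equal np.argsort(perm)
```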
{
"docid": "f709153cdc958cc636ac6d68405bc2b0",
"text": "While enormous progress has been made to Variational Autoencoder (VAE) in recent years, similar to other deep networks, VAE with deep networks suffers from the problem of degeneration, which seriously weakens the correlation between the input and the corresponding latent codes, deviating from the goal of the representation learning. To investigate how degeneration affects VAE from a theoretical perspective, we illustrate the information transmission in VAE and analyze the intermediate layers of the encoders/decoders. Specifically, we propose a Fisher Information measure for the layer-wise analysis. With such measure, we demonstrate that information loss is ineluctable in feed-forward networks and causes the degeneration in VAE. We show that skip connections in VAE enable the preservation of information without changing the model architecture. We call this class of VAE equipped with skip connections as SCVAE and perform a range of experiments to show its advantages in information preservation and degeneration mitigation.",
"title": ""
},
{
"docid": "5ab7fc76fa82de483c96a6301456c93c",
"text": "Businesses are more and more modernizing the legacy systems they developed with Rapid Application Development (RAD), so that they can benefit from the new platforms and technologies. In these systems, the Graphical User Interface (GUI) layout is implicitly given by the position of the GUI elements (i.e. coordinates). However, taking advantage of current features of GUI technologies often requires an explicit, high-level layout model. We propose a Model-Driven Engineering process to perform reverse engineering of RAD-built GUIs, which is focused on discovering the implicit layout, and produces a GUI model where the layout is explicit. Based on the information we obtain, other reengineering activities can be performed, for example, to adapt the GUI for mobile device screens.",
"title": ""
},
{
"docid": "480f940bf5a2226b659048d9840582d9",
"text": "Vulnerability assessment is a requirement of NERC's cybersecurity standards for electric power systems. The purpose is to study the impact of a cyber attack on supervisory control and data acquisition (SCADA) systems. Compliance of the requirement to meet the standard has become increasingly challenging as the system becomes more dispersed in wide areas. Interdependencies between computer communication system and the physical infrastructure also become more complex as information technologies are further integrated into devices and networks. This paper proposes a vulnerability assessment framework to systematically evaluate the vulnerabilities of SCADA systems at three levels: system, scenarios, and access points. The proposed method is based on cyber systems embedded with the firewall and password models, the primary mode of protection in the power industry today. The impact of a potential electronic intrusion is evaluated by its potential loss of load in the power system. This capability is enabled by integration of a logic-based simulation method and a module for the power flow computation. The IEEE 30-bus system is used to evaluate the impact of attacks launched from outside or from within the substation networks. Countermeasures are identified for improvement of the cybersecurity.",
"title": ""
},
{
"docid": "25ccf513d39fca38cbff8f9e302a4c9c",
"text": "Social link identification, that is to identify accounts across different online social networks that belong to the same user, is an important task in social network applications. Most existing methods to solve this problem directly applied machine learning classifiers on features extracted from user’s rich information. In practice, however, only some limited user information can be obtained because of privacy concerns. In addition, we observe that the existing methods cannot handle huge amount of potential account pairs from different online social networks. In this paper, we propose an effective method to address the above two challenges by expanding known anchor links (seed account pairs belonging to the same person). In particular, we leverage potentially useful information possessed by the existing anchor link and then develop a local expansion propagation model to identify new social links, which are taken as a generated anchor link to be used for iteratively identifying additional new social link. We evaluate our method on two most popular Chinese social networks. Experimental results show our proposed method can quickly find most of identity account pairs across different online social networks.",
"title": ""
},
{
"docid": "25216b9a56bca7f8503aa6b2e5b9d3a9",
"text": "The study at hand is the first of its kind that aimed to provide a comprehensive analysis of the determinants of foreign direct investment (FDI) in Mongolia by analyzing their short-run, long-run, and Granger causal relationships. In doing so, we methodically used a series of econometric methods to ensure reliable and robust estimation results that included the augmented Dickey-Fuller and Phillips-Perron unit root tests, the most recently advanced autoregressive distributed lag (ARDL) bounds testing approach to cointegration, fully modified ordinary least squares, and the Granger causality test within the vector error-correction model (VECM) framework. Our findings revealed domestic market size and human capital to have a U-shaped relationship with FDI inflows, with an initial positive impact on FDI in the short-run, which then turns negative in the long-run. Macroeconomic instability was found to deter FDI inflows in the long-run. In terms of the impact of trade on FDI, imports were found to have a complementary relationship with FDI; while exports and FDI were found to be substitutes in the short-run. Financial development was also found to induce a deterring effect on FDI inflows in both the shortand long-run; thereby also revealing a substitutive relationship between the two. Infrastructure level was not found to have a significant impact on FDI on any conventional level, in either the shortor long-run. Furthermore, the results have exhibited significant Granger causal relationships between the variables; thereby, ultimately stressing the significance of policy choice in not only attracting FDI inflows, but also in translating their positive spill-over benefits into long-run economic growth. © 2017 AESS Publications. All Rights Reserved.",
"title": ""
}
] |
scidocsrr
|
3c07b7f7bd1c49589aeb7400d7c88da0
|
An Analysis of Single-Layer Networks in Unsupervised Feature Learning
|
[
{
"docid": "dba73424d6215af4a696765ddf03c09d",
"text": "We describe how to train a two-layer convolutional Deep Belief Network (DBN) on the 1.6 million tiny images dataset. When training a convolutional DBN, one must decide what to do with the edge pixels of teh images. As the pixels near the edge of an image contribute to the fewest convolutional lter outputs, the model may see it t to tailor its few convolutional lters to better model the edge pixels. This is undesirable becaue it usually comes at the expense of a good model for the interior parts of the image. We investigate several ways of dealing with the edge pixels when training a convolutional DBN. Using a combination of locally-connected convolutional units and globally-connected units, as well as a few tricks to reduce the e ects of over tting, we achieve state-of-the-art performance in the classi cation task of the CIFAR-10 subset of the tiny images dataset.",
"title": ""
}
] |
[
{
"docid": "8d6cb15882c3a08ce8e2726ed65bf3cb",
"text": "Natural language processing systems (NLP) that extract clinical information from textual reports were shown to be effective for limited domains and for particular applications. Because an NLP system typically requires substantial resources to develop, it is beneficial if it is designed to be easily extendible to multiple domains and applications. This paper describes multiple extensions of an NLP system called MedLEE, which was originally developed for the domain of radiological reports of the chest, but has subsequently been extended to mammography, discharge summaries, all of radiology, electrocardiography, echocardiography, and pathology.",
"title": ""
},
{
"docid": "524914f80055ef1f3f974720577aeb5d",
"text": "Generative models estimate the underlying distribution of a dataset to generate realistic samples according to that distribution. In this paper, we present the first membership inference attacks against generative models: given a data point, the adversary determines whether or not it was used to train the model. Our attacks leverage Generative Adversarial Networks (GANs), which combine a discriminative and a generative model, to detect overfitting and recognize inputs that were part of training datasets, using the discriminator’s capacity to learn statistical differences in distributions. We present attacks based on both white-box and black-box access to the target model, against several state-of-the-art generative models, over datasets of complex representations of faces (LFW), objects (CIFAR-10), and medical images (Diabetic Retinopathy). We also discuss the sensitivity of the attacks to different training parameters, and their robustness against mitigation strategies, finding that defenses are either ineffective or lead to significantly worse performances of the generative models in terms of training stability and/or sample quality.",
"title": ""
},
{
"docid": "9a881f70dcc1725c057817df81112f33",
"text": "Haptics is a valuable tool in minimally invasive surgical simulation and training. We discuss important aspects of haptics in MISST, such as haptic rendering and haptic recording and playback. Minimally invasive surgery has revolutionized many surgical procedures over the last few decades. MIS is performed using a small video camera, a video display, and a few customized surgical tools. In procedures such as gall bladder removal (laparoscopic cholesystectomy), surgeons insert a camera and long slender tools into the abdomen through small skin incisions to explore the internal cavity and manipulate organs from outside the body as they view their actions on a video display. Because the development of minimally invasive techniques has reduced the sense of touch compared to open surgery, surgeons must rely more on the feeling of net forces resulting from tool-tissue interactions and need more training to successfully operate on patients.",
"title": ""
},
{
"docid": "0c86d5f2e0159fc84aae66ff0695d714",
"text": "We have analyzed the properties of the HSV (Hue, Saturation and Value) color space with emphasis on the visual perception of the variation in Hue, Saturation and Intensity values of an image pixel. We extract pixel features by either choosing the Hue or the Intensity as the dominant property based on the Saturation value of a pixel. The feature extraction method has been applied for both image segmentation as well as histogram generation applications – two distinct approaches to content based image retrieval (CBIR). Segmentation using this method shows better identification of objects in an image. The histogram retains a uniform color transition that enables us to do a window-based smoothing during retrieval. The results have been compared with those generated using the RGB color space.",
"title": ""
},
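The record above selects either hue or intensity as the dominant pixel property depending on saturation. The snippet below illustrates that decision rule for a single RGB pixel using Python's standard colorsys module; the saturation and darkness thresholds are placeholders, not the ones used in the paper, which derives them from perceptual analysis of the HSV space.

```python
import colorsys

def hsv_pixel_feature(r, g, b, sat_threshold=0.2, dark_threshold=0.1):
    """Pick the dominant property of a pixel: hue when the colour is saturated
    enough to be perceived as 'true colour', intensity (value) otherwise."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    if s >= sat_threshold and v > dark_threshold:
        return ("hue", h)          # chromatic pixel: describe it by its hue
    return ("intensity", v)        # dark or grey pixel: describe it by its value

print(hsv_pixel_feature(200, 30, 30))   # saturated red   -> hue-based feature
print(hsv_pixel_feature(90, 90, 95))    # near-grey pixel -> intensity-based feature
```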
{
"docid": "6a7839b42c549e31740f70aa0079ad46",
"text": "Deep learning has improved performance on many natural language processing (NLP) tasks individually. However, general NLP models cannot emerge within a paradigm that focuses on the particularities of a single metric, dataset, and task. We introduce the Natural Language Decathlon (decaNLP), a challenge that spans ten tasks: question answering, machine translation, summarization, natural language inference, sentiment analysis, semantic role labeling, relation extraction, goal-oriented dialogue, semantic parsing, and commonsense pronoun resolution. We cast all tasks as question answering over a context. Furthermore, we present a new multitask question answering network (MQAN) that jointly learns all tasks in decaNLP without any task-specific modules or parameters more effectively than sequence-to-sequence and reading comprehension baselines. MQAN shows improvements in transfer learning for machine translation and named entity recognition, domain adaptation for sentiment analysis and natural language inference, and zero-shot capabilities for text classification. We demonstrate that the MQAN’s multi-pointer-generator decoder is key to this success and that performance further improves with an anti-curriculum training strategy. Though designed for decaNLP, MQAN also achieves state of the art results on the WikiSQL semantic parsing task in the single-task setting. We also release code for procuring and processing data, training and evaluating models, and reproducing all experiments for decaNLP.",
"title": ""
},
{
"docid": "365cadf5f980e7c99cc3c2416ca36ba1",
"text": "Epidemiologic studies from numerous disparate populations reveal that individuals with the habit of daily moderate wine consumption enjoy significant reductions in all-cause and particularly cardiovascular mortality when compared with individuals who abstain or who drink alcohol to excess. Researchers are working to explain this observation in molecular and nutritional terms. Moderate ethanol intake from any type of beverage improves lipoprotein metabolism and lowers cardiovascular mortality risk. The question now is whether wine, particularly red wine with its abundant content of phenolic acids and polyphenols, confers additional health benefits. Discovering the nutritional properties of wine is a challenging task, which requires that the biological actions and bioavailability of the >200 individual phenolic compounds be documented and interpreted within the societal factors that stratify wine consumption and the myriad effects of alcohol alone. Further challenge arises because the health benefits of wine address the prevention of slowly developing diseases for which validated biomarkers are rare. Thus, although the benefits of the polyphenols from fruits and vegetables are increasingly accepted, consensus on wine is developing more slowly. Scientific research has demonstrated that the molecules present in grapes and in wine alter cellular metabolism and signaling, which is consistent mechanistically with reducing arterial disease. Future research must address specific mechanisms both of alcohol and of polyphenolic action and develop biomarkers of their role in disease prevention in individuals.",
"title": ""
},
{
"docid": "6cfee185a7438811aafd16a03fb75852",
"text": "The Internet-of-Things (IoT) envisions a world where billions of everyday objects and mobile devices communicate using a large number of interconnected wired and wireless networks. Maximizing the utilization of this paradigm requires fine-grained QoS support for differentiated application requirements, context-aware semantic information retrieval, and quick and easy deployment of resources, among many other objectives. These objectives can only be achieved if components of the IoT can be dynamically managed end-to-end across heterogeneous objects, transmission technologies, and networking architectures. Software-defined Networking (SDN) is a new paradigm that provides powerful tools for addressing some of these challenges. Using a software-based control plane, SDNs introduce significant flexibility for resource management and adaptation of network functions. In this article, we study some promising solutions for the IoT based on SDN architectures. Particularly, we analyze the application of SDN in managing resources of different types of networks such as Wireless Sensor Networks (WSN) and mobile networks, the utilization of SDN for information-centric networking, and how SDN can leverage Sensing-as-a-Service (SaaS) as a key cloud application in the IoT.",
"title": ""
},
{
"docid": "d0cf952865b72f25d9b8b049f717d976",
"text": "In this paper, we consider the problem of estimating the relative expertise score of users in community question and answering services (CQA). Previous approaches typically only utilize the explicit question answering relationship between askers and an-swerers and apply link analysis to address this problem. The im-plicit pairwise comparison between two users that is implied in the best answer selection is ignored. Given a question and answering thread, it's likely that the expertise score of the best answerer is higher than the asker's and all other non-best answerers'. The goal of this paper is to explore such pairwise comparisons inferred from best answer selections to estimate the relative expertise scores of users. Formally, we treat each pairwise comparison between two users as a two-player competition with one winner and one loser. Two competition models are proposed to estimate user expertise from pairwise comparisons. Using the NTCIR-8 CQA task data with 3 million questions and introducing answer quality prediction based evaluation metrics, the experimental results show that the pairwise comparison based competition model significantly outperforms link analysis based approaches (PageRank and HITS) and pointwise approaches (number of best answers and best answer ratio) for estimating the expertise of active users. Furthermore, it's shown that pairwise comparison based competi-tion models have better discriminative power than other methods. It's also found that answer quality (best answer) is an important factor to estimate user expertise.",
"title": ""
},
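The record above turns each best-answer selection into pairwise "wins" of the best answerer over the asker and the other answerers. As a rough illustration of a pairwise competition model (an Elo-style update here, not necessarily the exact model the paper proposes), the sketch below replays a few invented Q&A threads and nudges expertise scores after every implied comparison.

```python
def elo_update(scores, winner, loser, k=32.0):
    """One pairwise comparison: winner 'beats' loser, scores move toward the outcome."""
    expected = 1.0 / (1.0 + 10.0 ** ((scores[loser] - scores[winner]) / 400.0))
    scores[winner] += k * (1.0 - expected)
    scores[loser] -= k * (1.0 - expected)

# toy threads: (asker, best answerer, other answerers)
threads = [
    ("alice", "bob", ["carol"]),
    ("carol", "bob", ["dave", "alice"]),
    ("dave", "carol", ["alice"]),
]
scores = {u: 1500.0 for u in ["alice", "bob", "carol", "dave"]}
for asker, best, others in threads:
    for loser in [asker] + others:      # best answerer beats asker and non-best answerers
        elo_update(scores, best, loser)
print(sorted(scores.items(), key=lambda kv: -kv[1]))   # frequent best answerers rank higher
```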
{
"docid": "2245750e94df2d3e9eff8596a1d63193",
"text": "This work studies automatic recognition of paralinguistic properties of speech. The focus is on selection of the most useful acoustic features for three classification tasks: 1) recognition of autism spectrum developmental disorders from child speech, 2) classification of speech into different affective categories, and 3) recognizing the level of social conflict from speech. The feature selection is performed using a new variant of random subset sampling methods with k-nearest neighbors (kNN) as a classifier. The experiments show that the proposed system is able to learn a set of important features for each recognition task, clearly exceeding the performance of the same classifier using the original full feature set. However, some effects of overfitting the feature sets to finite data are also observed and discussed.",
"title": ""
},
{
"docid": "885fb29f5189381de351b634f4c7365c",
"text": "The main objectives of this study were to determine the most frequent and the most significant individual and social factors related to students’ academic achievement and motivation for learning. The study was conducted among 740 students from the Faculty of Education and the Faculty of Philosophy in Vojvodina. The participants completed questionnaires measuring students’ dominant individual and social motivational factors, the level of their motivation for learning, the level of their academic achievement and students’ socio-demographic characteristics. The results of this study showed that the students reported that both individual and social factors are related to their academic achievement and motivation for learning. Individual factors – the perceived interest in content and perceived content usefulness for personal development proved to be the most significant predictors of a high level of motivation for learning and academic success, but social motivational factors showed themselves to be the most frequent among students. The results are especially important for university teachers as guidelines for improving students’ motivation.",
"title": ""
},
{
"docid": "4c97621b15b1450fb43762157e2a8bd2",
"text": "Current proposals for classifying female genital anomalies seem to be associated with limitations in effective categorization, creating the need for a new classification system that is as simple as possible, clear and accurate in its definitions, comprehensive, and correlated with patients' clinical presentation, prognosis, and treatment on an evidence-based foundation. Although creating a new classification system is not an easy task, it is feasible when taking into account the experience gained from applying the existing classification systems, mainly that of the American Fertility Society.",
"title": ""
},
{
"docid": "8e465d1434932f21db514c49650863bb",
"text": "Context aware recommender systems (CARS) adapt the recommendations to the specific situation in which the items will be consumed. In this paper we present a novel context-aware recommendation algorithm that extends Matrix Factorization. We model the interaction of the contextual factors with item ratings introducing additional model parameters. The performed experiments show that the proposed solution provides comparable results to the best, state of the art, and more complex approaches. The proposed solution has the advantage of smaller computational cost and provides the possibility to represent at different granularities the interaction between context and items. We have exploited the proposed model in two recommendation applications: places of interest and music.",
"title": ""
},
{
"docid": "8da468bbb923b9d790e633c6a4fd9873",
"text": "Building Information Modeling (BIM) and Lean Thinking have been used separately as key approaches to overall construction projects’ improvement. Their combination, given several scenarios, presents opportunities for improvement as well as challenges in implementation. However, the exploration of eventual interactions and relationships between BIM as a process and Lean Construction principles is recent in research. The objective of this paper is to identify BIM and Lean relationship aspects with a focus on the construction phase and from the perspective of the general contractor (GC). This paper is based on a case study where BIM is already heavily used by the GC and where the integration of Lean practices is recent. We explore areas of improvement and Lean contributions to BIM from two perspectives. First, from Sacks et al.’s (2010) Interaction Matrix perspective, we identify some existing interactions. Second, based on the Capability Maturity Model (CMM) of the National Building Information Modeling Standard (NBIMS), we measure the level of the project’s BIM maturity and highlight areas of improvement for Lean. The main contribution of the paper is concerned with the exploration of the BIM maturity levels that are enhanced by Lean implementation.",
"title": ""
},
{
"docid": "a059fcf7c49db87bfbd3a7f452f0288d",
"text": "This paper investigates the physical layer security of non-orthogonal multiple access (NOMA) in large-scale networks with invoking stochastic geometry. Both single-antenna and multiple-antenna aided transmission scenarios are considered, where the base station (BS) communicates with randomly distributed NOMA users. In the single-antenna scenario, we adopt a protected zone around the BS to establish an eavesdropper-exclusion area with the aid of careful channel ordering of the NOMA users. In the multiple-antenna scenario, artificial noise is generated at the BS for further improving the security of a beamforming-aided system. In order to characterize the secrecy performance, we derive new exact expressions of the security outage probability for both single-antenna and multiple-antenna aided scenarios. For the single-antenna scenario, we perform secrecy diversity order analysis of the selected user pair. The analytical results derived demonstrate that the secrecy diversity order is determined by the specific user having the worse channel condition among the selected user pair. For the multiple-antenna scenario, we derive the asymptotic secrecy outage probability, when the number of transmit antennas tends to infinity. Monte Carlo simulations are provided for verifying the analytical results derived and to show that: 1) the security performance of the NOMA networks can be improved by invoking the protected zone and by generating artificial noise at the BS and 2) the asymptotic secrecy outage probability is close to the exact secrecy outage probability.",
"title": ""
},
{
"docid": "bc2dee76b561bffeead80e74d5b8a388",
"text": "BACKGROUND AND PURPOSE\nCarotid artery stenosis causes up to 10% of all ischemic strokes. Carotid endarterectomy (CEA) was introduced as a treatment to prevent stroke in the early 1950s. Carotid stenting (CAS) was introduced as a treatment to prevent stroke in 1994.\n\n\nMETHODS\nThe Carotid Revascularization Endarterectomy versus Stenting Trial (CREST) is a randomized trial with blinded end point adjudication. Symptomatic and asymptomatic patients were randomized to CAS or CEA. The primary end point was the composite of any stroke, myocardial infarction, or death during the periprocedural period and ipsilateral stroke thereafter, up to 4 years.\n\n\nRESULTS\nThere was no significant difference in the rates of the primary end point between CAS and CEA (7.2% versus 6.8%; hazard ratio, 1.11; 95% CI, 0.81 to 1.51; P=0.51). Symptomatic status and sex did not modify the treatment effect, but an interaction with age and treatment was detected (P=0.02). Outcomes were slightly better after CAS for patients aged <70 years and better after CEA for patients aged >70 years. The periprocedural end point did not differ for CAS and CEA, but there were differences in the components, CAS versus CEA (stroke 4.1% versus 2.3%, P=0.012; and myocardial infarction 1.1% versus 2.3%, P=0.032).\n\n\nCONCLUSIONS\nIn CREST, CAS and CEA had similar short- and longer-term outcomes. During the periprocedural period, there was higher risk of stroke with CAS and higher risk of myocardial infarction with CEA. Clinical Trial Registration-www.clinicaltrials.gov. Unique identifier: NCT00004732.",
"title": ""
},
{
"docid": "16426be05f066e805e48a49a82e80e2e",
"text": "Ontologies have been developed and used by several researchers in different knowledge domains aiming to ease the structuring and management of knowledge, and to create a unique standard to represent concepts of such a knowledge domain. Considering the computer security domain, several tools can be used to manage and store security information. These tools generate a great amount of security alerts, which are stored in different formats. This lack of standard and the amount of data make the tasks of the security administrators even harder, because they have to understand, using their tacit knowledge, different security alerts to make correlation and solve security problems. Aiming to assist the administrators in executing these tasks efficiently, this paper presents the main features of the computer security incident ontology developed to model, using a unique standard, the concepts of the security incident domain, and how the ontology has been evaluated.",
"title": ""
},
{
"docid": "85a09871ca341ca5f70a78b2df8fdc02",
"text": "This paper presents a multi-channel frequency-modulated continuous-wave (FMCW) radar sensor operating in the frequency range from 91 to 97 GHz. The millimeter-wave radar sensor utilizes an SiGe chipset comprising a single signal-generation chip and multiple monostatic transceiver (TRX) chips, which are based on a 200-GHz fT HBT technology. The front end is built on an RF soft substrate in chip-on-board technology and employs a nonuniformly distributed antenna array to improve the angular resolution. The synthesis of ten virtual antennas achieved by a multiple-input multiple-output technique allows the virtual array aperture to be maximized. The fundamental-wave voltage-controlled oscillator achieves a single-sideband phase noise of -88 dBc/Hz at 1-MHz offset frequency. The TX provides a saturated output power of 6.5 dBm, and the mixer within the TRX achieves a gain and a double sideband noise figure of 11.5 and 12 dB, respectively. Possible applications include radar sensing for range and angle detection, material characterization, and imaging.",
"title": ""
},
{
"docid": "75ccea636210f4b4df490a7babdf7790",
"text": "BACKGROUND\nSmartphones are becoming a daily necessity for most undergraduates in Mainland China. Because the present scenario of problematic smartphone use (PSU) is largely unexplored, in the current study we aimed to estimate the prevalence of PSU and to screen suitable predictors for PSU among Chinese undergraduates in the framework of the stress-coping theory.\n\n\nMETHODS\nA sample of 1062 undergraduate smartphone users was recruited by means of the stratified cluster random sampling strategy between April and May 2015. The Problematic Cellular Phone Use Questionnaire was used to identify PSU. We evaluated five candidate risk factors for PSU by using logistic regression analysis while controlling for demographic characteristics and specific features of smartphone use.\n\n\nRESULTS\nThe prevalence of PSU among Chinese undergraduates was estimated to be 21.3%. The risk factors for PSU were majoring in the humanities, high monthly income from the family (≥1500 RMB), serious emotional symptoms, high perceived stress, and perfectionism-related factors (high doubts about actions, high parental expectations).\n\n\nCONCLUSIONS\nPSU among undergraduates appears to be ubiquitous and thus constitutes a public health issue in Mainland China. Although further longitudinal studies are required to test whether PSU is a transient phenomenon or a chronic and progressive condition, our study successfully identified socio-demographic and psychological risk factors for PSU. These results, obtained from a random and thus representative sample of undergraduates, opens up new avenues in terms of prevention and regulation policies.",
"title": ""
},
{
"docid": "0e644fc1c567356a2e099221a774232c",
"text": "We present a coupled two-way clustering approach to gene microarray data analysis. The main idea is to identify subsets of the genes and samples, such that when one of these is used to cluster the other, stable and significant partitions emerge. The search for such subsets is a computationally complex task. We present an algorithm, based on iterative clustering, that performs such a search. This analysis is especially suitable for gene microarray data, where the contributions of a variety of biological mechanisms to the gene expression levels are entangled in a large body of experimental data. The method was applied to two gene microarray data sets, on colon cancer and leukemia. By identifying relevant subsets of the data and focusing on them we were able to discover partitions and correlations that were masked and hidden when the full dataset was used in the analysis. Some of these partitions have clear biological interpretation; others can serve to identify possible directions for future research.",
"title": ""
},
{
"docid": "fbcbf7d6a53299708ecf6a780cf0834c",
"text": "We present an approach for weakly supervised learning of human actions from video transcriptions. Our system is based on the idea that, given a sequence of input data and a transcript, i.e. a list of the order the actions occur in the video, it is possible to infer the actions within the video stream and to learn the related action models without the need for any frame-based annotation. Starting from the transcript information at hand, we split the given data sequences uniformly based on the number of expected actions. We then learn action models for each class by maximizing the probability that the training video sequences are generated by the action models given the sequence order as defined by the transcripts. The learned model can be used to temporally segment an unseen video with or without transcript. Additionally, the inferred segments can be used as a starting point to train high-level fully supervised models. We evaluate our approach on four distinct activity datasets, namely Hollywood Extended, MPII Cooking, Breakfast and CRIM13. It shows that the proposed system is able to align the scripted actions with the video data, that the learned models localize and classify actions in the datasets, and that they outperform any current state-of-the-art approach for aligning transcripts with video data.",
"title": ""
}
] |
scidocsrr
|
96a4b136d70f0b18e48fdd766340fb01
|
Interacting with Recommenders - Overview and Research Directions
|
[
{
"docid": "c253ecd41a507aa66f5a029a621ab17b",
"text": "A large body of research in recommender systems focuses on optimizing prediction and ranking. However, recent work has highlighted the importance of other aspects of the recommendations, including transparency, control and user experience in general. Building on these aspects, we introduce MoodPlay, a hybrid recommender system music which integrates content and mood-based filtering in an interactive interface. We show how MoodPlay allows the user to explore a music collection by latent affective dimensions, and we explain how to integrate user input at recommendation time with predictions based on a pre-existing user profile. Results of a user study (N=240) are discussed, with four conditions being evaluated with varying degrees of visualization, interaction and control. Results show that visualization and interaction in a latent space improve acceptance and understanding of both metadata and item recommendations. However, too much of either can result in cognitive overload and a negative impact on user experience.",
"title": ""
}
] |
[
{
"docid": "388f4a555c7aa004f081cbdc6bc0f799",
"text": "We present a multi-GPU version of GPUSPH, a CUDA implementation of fluid-dynamics models based on the smoothed particle hydrodynamics (SPH) numerical method. The SPH is a well-known Lagrangian model for the simulation of free-surface fluid flows; it exposes a high degree of parallelism and has already been successfully ported to GPU. We extend the GPU-based simulator to run simulations on multiple GPUs simultaneously, to obtain a gain in speed and overcome the memory limitations of using a single device. The computational domain is spatially split with minimal overlapping and shared volume slices are updated at every iteration of the simulation. Data transfers are asynchronous with computations, thus completely covering the overhead introduced by slice exchange. A simple yet effective load balancing policy preserves the performance in case of unbalanced simulations due to asymmetric fluid topologies. The obtained speedup factor (up to 4.5x for 6 GPUs) closely follows the expected one (5x for 6 GPUs) and it is possible to run simulations with a higher number of particles than would fit on a single device. We use the Karp-Flatt metric to formally estimate the overall efficiency of the parallelization.",
"title": ""
},
{
"docid": "e4570b3894a333da2e2bf23bc90f6920",
"text": "The malaria parasite's chloroquine resistance transporter (CRT) is an integral membrane protein localized to the parasite's acidic digestive vacuole. The function of CRT is not known and the protein was originally described as a transporter simply because it possesses 10 transmembrane domains. In wild-type (chloroquine-sensitive) parasites, chloroquine accumulates to high concentrations within the digestive vacuole and it is through interactions in this compartment that it exerts its antimalarial effect. Mutations in CRT can cause a decreased intravacuolar concentration of chloroquine and thereby confer chloroquine resistance. However, the mechanism by which they do so is not understood. In this paper we present the results of a detailed bioinformatic analysis that reveals that CRT is a member of a previously undefined family of proteins, falling within the drug/metabolite transporter superfamily. Comparisons between CRT and other members of the superfamily provide insight into the possible role of the protein and into the significance of the mutations associated with the chloroquine resistance phenotype. The protein is predicted to function as a dimer and to be oriented with its termini in the parasite cytosol. The key chloroquine-resistance-conferring mutation (K76T) is localized in a region of the protein implicated in substrate selectivity. The mutation is predicted to alter the selectivity of the protein such that it is able to transport the cationic (protonated) form of chloroquine down its steep concentration gradient, out of the acidic vacuole, and therefore away from its site of action.",
"title": ""
},
{
"docid": "3180f7bd813bcd64065780bc9448dc12",
"text": "This paper reports on email classification and filtering, more specifically on spam versus ham and phishing versus spam classification, based on content features. We test the validity of several novel statistical feature extraction methods. The methods rely on dimensionality reduction in order to retain the most informative and discriminative features. We successfully test our methods under two schemas. The first one is a classic classification scenario using a 10-fold cross-validation technique for several corpora, including four ground truth standard corpora: Ling-Spam, SpamAssassin, PU1, and a subset of the TREC 2007 spam corpus, and one proprietary corpus. In the second schema, we test the anticipatory properties of our extracted features and classification models with two proprietary datasets, formed by phishing and spam emails sorted by date, and with the public TREC 2007 spam corpus. The contributions of our work are an exhaustive comparison of several feature selection and extraction methods in the frame of email classification on different benchmarking corpora, and the evidence that especially the technique of biased discriminant analysis offers better discriminative features for the classification, gives stable classification results notwithstanding the amount of features chosen, and robustly retains their discriminative value over time and data setups. These findings are especially useful in a commercial setting, where short profile rules are built based on a limited number of features for filtering emails.",
"title": ""
},
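As a rough illustration of the pipeline shape described in the entry above (content features, dimensionality reduction to retain discriminative features, then a classifier), here is a hedged scikit-learn sketch; the tiny inline corpus and the TfidfVectorizer/TruncatedSVD/LogisticRegression choices are stand-ins, not the paper's biased-discriminant-analysis method or its benchmark corpora.

```python
# Generic spam/ham pipeline sketch; the entry above evaluates more specialised
# feature extractors on corpora such as Ling-Spam, which are not bundled here --
# the inline corpus is purely illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = ["cheap meds buy now", "meeting agenda attached",
          "win cash prize now", "project status update"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = ham

clf = make_pipeline(
    TfidfVectorizer(),              # content features
    TruncatedSVD(n_components=2),   # keep a few informative dimensions
    LogisticRegression(),
)
clf.fit(emails, labels)
print(clf.predict(["free cash now", "agenda for the project meeting"]))
```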
{
"docid": "d30f40e879ae7c5b49b4be94679c7424",
"text": "Java offers the basic infrastructure needed to integrate computers connected to the Internet into a seamless parallel computational resource: a flexible, easily-installed infrastructure for running coarsegrained parallel applications on numerous, anonymous machines. Ease of participation is seen as a key property for such a resource to realize the vision of a multiprocessing environment comprising thousands of computers. We present Javelin, a Java-based infrastructure for global computing. The system is based on Internet software technology that is essentially ubiquitous: Web technology. Its architecture and implementation require participants to have access only to a Java-enabled Web browser. The security constraints implied by this, the resulting architecture, and current implementation are presented. The Javelin architecture is intended to be a substrate on which various programming models may be implemented. Several such models are presented: A Linda Tuple Space, an SPMD programming model with barriers, as well as support for message passing. Experimental results are given in the form of micro-benchmarks and a Mersenne Prime application that runs on a heterogeneous network of several parallel machines, workstations, and PCs.",
"title": ""
},
{
"docid": "fc35e6b990c80b5e8de4fd783999c32f",
"text": "During the last years, Convolutional Neural Networks (CNNs) have achieved state-of-the-art performance in image classification. Their architectures have largely drawn inspiration by models of the primate visual system. However, while recent research results of neuroscience prove the existence of non-linear operations in the response of complex visual cells, little effort has been devoted to extend the convolution technique to non-linear forms. Typical convolutional layers are linear systems, hence their expressiveness is limited. To overcome this, various non-linearities have been used as activation functions inside CNNs, while also many pooling strategies have been applied. We address the issue of developing a convolution method in the context of a computational model of the visual cortex, exploring quadratic forms through the Volterra kernels. Such forms, constituting a more rich function space, are used as approximations of the response profile of visual cells. Our proposed second-order convolution is tested on CIFAR-10 and CIFAR-100. We show that a network which combines linear and non-linear filters in its convolutional layers, can outperform networks that use standard linear filters with the same architecture, yielding results competitive with the state-of-the-art on these datasets.",
"title": ""
},
{
"docid": "3ff01763def34800cf8afb9fc5fa9c83",
"text": "The emerging machine learning technique called support vector machines is proposed as a method for performing nonlinear equalization in communication systems. The support vector machine has the advantage that a smaller number of parameters for the model can be identified in a manner that does not require the extent of prior information or heuristic assumptions that some previous techniques require. Furthermore, the optimization method of a support vector machine is quadratic programming, which is a well-studied and understood mathematical programming technique. Support vector machine simulations are carried out on nonlinear problems previously studied by other researchers using neural networks. This allows initial comparison against other techniques to determine the feasibility of using the proposed method for nonlinear detection. Results show that support vector machines perform as well as neural networks on the nonlinear problems investigated. A method is then proposed to introduce decision feedback processing to support vector machines to address the fact that intersymbol interference (ISI) data generates input vectors having temporal correlation, whereas a standard support vector machine assumes independent input vectors. Presenting the problem from the viewpoint of the pattern space illustrates the utility of a bank of support vector machines. This approach yields a nonlinear processing method that is somewhat different than the nonlinear decision feedback method whereby the linear feedback filter of the decision feedback equalizer is replaced by a Volterra filter. A simulation using a linear system shows that the proposed method performs equally to a conventional decision feedback equalizer for this problem.",
"title": ""
},
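A minimal sketch of the feedforward part of the SVM equalization idea in the entry above: build sliding-window feature vectors from a nonlinearly distorted BPSK sequence and train an RBF-kernel SVM to recover the transmitted symbols. The channel taps, nonlinearity, window length and SVM hyperparameters are assumptions for illustration, and the decision-feedback bank of SVMs is omitted.

```python
# Sketch of SVM-based channel equalisation; the channel model and nonlinearity
# are arbitrary choices for illustration.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
symbols = rng.choice([-1.0, 1.0], size=2000)           # BPSK source
channel = np.array([1.0, 0.5, 0.2])                    # ISI channel taps
linear = np.convolve(symbols, channel)[: len(symbols)]
received = linear + 0.2 * linear**2 + 0.05 * rng.standard_normal(len(symbols))

window = 4  # equaliser input = current and 3 past received samples
X = np.array([received[i - window + 1 : i + 1] for i in range(window - 1, len(received))])
y = symbols[window - 1 :]

split = 1500
svm = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X[:split], y[:split])
ser = np.mean(svm.predict(X[split:]) != y[split:])
print(f"symbol error rate on held-out data: {ser:.3f}")
```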
{
"docid": "e939e98e090c57e269444ae5d503884b",
"text": "Bayesian hypothesis testing presents an attractive alternative to p value hypothesis testing. Part I of this series outlined several advantages of Bayesian hypothesis testing, including the ability to quantify evidence and the ability to monitor and update this evidence as data come in, without the need to know the intention with which the data were collected. Despite these and other practical advantages, Bayesian hypothesis tests are still reported relatively rarely. An important impediment to the widespread adoption of Bayesian tests is arguably the lack of user-friendly software for the run-of-the-mill statistical problems that confront psychologists for the analysis of almost every experiment: the t-test, ANOVA, correlation, regression, and contingency tables. In Part II of this series we introduce JASP ( http://www.jasp-stats.org ), an open-source, cross-platform, user-friendly graphical software package that allows users to carry out Bayesian hypothesis tests for standard statistical problems. JASP is based in part on the Bayesian analyses implemented in Morey and Rouder's BayesFactor package for R. Armed with JASP, the practical advantages of Bayesian hypothesis testing are only a mouse click away.",
"title": ""
},
{
"docid": "b987f831f4174ad5d06882040769b1ac",
"text": "Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. 1 Summary Application trends, device technologies and the architecture of systems drive progress in information technologies. However,",
"title": ""
},
{
"docid": "03fc07c20a4a87f01c95a463063f6276",
"text": "In this paper, a new technique coined two-dimensional principal component analysis (2DPCA) is developed for image representation. As opposed to PCA, 2DPCA is based on 2D image matrices rather than 1D vectors so the image matrix does not need to be transformed into a vector prior to feature extraction. Instead, an image covariance matrix is constructed directly using the original image matrices, and its eigenvectors are derived for image feature extraction. To test 2DPCA and evaluate its performance, a series of experiments were performed on three face image databases: ORL, AR, and Yale face databases. The recognition rate across all trials was higher using 2DPCA than PCA. The experimental results also indicated that the extraction of image features is computationally more efficient using 2DPCA than PCA.",
"title": ""
},
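The entry above defines 2DPCA concretely enough to sketch: build the image covariance matrix directly from the 2D image matrices (no vectorization), then project each image onto the leading eigenvectors. In the sketch below, random arrays stand in for a real face dataset such as ORL.

```python
# Minimal 2DPCA sketch following the description in the entry above.
import numpy as np

def two_d_pca(images, n_components=5):
    mean = images.mean(axis=0)
    centered = images - mean
    # Image covariance (scatter) matrix: average of A_i^T A_i over centred images.
    g = sum(a.T @ a for a in centered) / len(images)
    eigvals, eigvecs = np.linalg.eigh(g)           # eigenvalues in ascending order
    proj = eigvecs[:, ::-1][:, :n_components]      # top eigenvectors as projection axes
    features = np.array([a @ proj for a in centered])  # each image -> (rows, n_components)
    return features, proj

faces = np.random.default_rng(1).normal(size=(40, 32, 32))  # 40 fake 32x32 images
feats, axes = two_d_pca(faces, n_components=5)
print(feats.shape)  # (40, 32, 5)
```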
{
"docid": "3d045ca677b4bc449e40d8673d1bf6d5",
"text": "We apply simulated method of moments to a dynamic model to infer the magnitude of financing costs. The model features endogenous investment, distributions, leverage, and default. The corporation faces taxation, costly bankruptcy, and linear-quadratic equity flotation costs. For large (small) firms, estimated marginal equity flotation costs start at 5.0% (10.7%) and bankruptcy costs equal to 8.4% (15.1%) of capital. Estimated financing frictions are higher for low-dividend firms and those identified as constrained by the Cleary and Whited-Wu indexes. In simulated data, many common proxies for financing constraints actually decrease when we increase financing cost parameters. ∗Christopher Hennessy is at the University of California, Berkeley, and Toni Whited is at the University of Wisconsin, Madison. An earlier version of this paper was circulated under the title, “Beyond Investment-Cash Flow Sensitivities: Using Indirect Inference to Estimate Costs of External Funds.” We thank seminar participants at Carnegie Mellon University, University of Southern California, Kellogg, University of Colorado, University of Texas at Austin, University of Illinois, New York University, University of Maryland, Rochester, the Stockholm Institute for Financial Research, Instituto Tecnológico Autónomo de México, the University of British Columbia Summer Finance Conference, the Western Finance Association Annual Meetings, and the European Finance Association Annual Meetings. Special thanks to Lisa Kramer, Nathalie Moyen, Michael Roberts, an associate editor, and two anonymous referees for detailed feedback. Corporate finance is primarily the study of financing frictions. After all, Modigliani and Miller (1958) showed that a CFO can neither create nor destroy value through his financing decisions in a world without such frictions. There is little debate about the existence of market imperfections that drive a wedge between the cost of internal and external funds, with a voluminous theoretical literature buttressing the argument that external funds are costly. However, the magnitude of financing frictions is still an open question. Empirical researchers have employed an array of methods to gauge the magnitude of financing frictions. For example, Altinkilic and Hansen (2000) estimate underwriter fee schedules. Asquith and Mullins (1986), amongst many others, measure the indirect costs of external equity by studying announcement effects. Weiss (1990) measures the direct legal costs incurred during Chapter 11 bankruptcies. Andrade and Kaplan (1998) assess the indirect costs of financial distress in a sample of highly levered transactions that became distressed. Another set of studies, for example, Fazzari et al. (1988), attempts to gain a sense of the magnitude of financial frictions using reduced-form investment regressions. In this paper we use observed corporate financing choices in order to infer the magnitude of financing frictions by exploiting simulated method of moments (SMM). We begin by formulating a dynamic structural model of optimal financial and investment policy for a firm facing a broad set of frictions: Corporate and personal taxation, bankruptcy costs, and linear-quadratic costs of external equity. In addition, the model embeds an agency cost of debt, as the equity-maximizing manager underinvests relative to first-best. Parameters describing the firm’s production technology, profitability shocks, and financing costs represent unknowns in the structural model. 
Of particular interest are the four financing cost parameters, namely, bankruptcy costs as a percentage of capital and three constants in a linear-quadratic cost of external equity function. Under conditions discussed below, minimizing the distance between model-generated moments and real-world moments yields consistent estimates of the unknown parameters. Less formally, one can view the estimates as answering the following question: What magnitude of financing costs “best” explains observed financing and investment patterns? An important step in the SMM procedure involves selecting the moments to be matched. In this",
"title": ""
},
{
"docid": "cb1a99cc1bb705d8ad5f26cc9a61e695",
"text": "In the smart grid system, dynamic pricing can be an efficient tool for the service provider which enables efficient and automated management of the grid. However, in practice, the lack of information about the customers' time-varying load demand and energy consumption patterns and the volatility of electricity price in the wholesale market make the implementation of dynamic pricing highly challenging. In this paper, we study a dynamic pricing problem in the smart grid system where the service provider decides the electricity price in the retail market. In order to overcome the challenges in implementing dynamic pricing, we develop a reinforcement learning algorithm. To resolve the drawbacks of the conventional reinforcement learning algorithm such as high computational complexity and low convergence speed, we propose an approximate state definition and adopt virtual experience. Numerical results show that the proposed reinforcement learning algorithm can effectively work without a priori information of the system dynamics.",
"title": ""
},
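The entry above describes reinforcement-learning-based retail pricing only at a high level; the toy tabular Q-learning loop below is a sketch under invented demand/price dynamics and does not reproduce the paper's approximate state definition or virtual-experience updates.

```python
# Toy tabular Q-learning for dynamic pricing; the demand model, state
# discretisation and reward are invented for illustration only.
import numpy as np

rng = np.random.default_rng(2)
n_demand_levels, prices = 5, [0.1, 0.2, 0.3]   # discretised demand state, retail price actions
Q = np.zeros((n_demand_levels, len(prices)))
alpha, gamma, eps = 0.1, 0.95, 0.1

def step(demand, price_idx):
    price = prices[price_idx]
    served = demand * (1.0 - price)              # higher price suppresses consumption
    reward = served * (price - 0.08)             # retail margin over a fixed wholesale cost
    next_demand = int(np.clip(demand + rng.integers(-1, 2), 0, n_demand_levels - 1))
    return reward, next_demand

state = 2
for _ in range(20000):
    action = rng.integers(len(prices)) if rng.random() < eps else int(np.argmax(Q[state]))
    reward, nxt = step(state, action)
    Q[state, action] += alpha * (reward + gamma * Q[nxt].max() - Q[state, action])
    state = nxt

print("greedy price per demand level:", [prices[int(a)] for a in Q.argmax(axis=1)])
```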
{
"docid": "1b8e90d78ca21fcaa5cca628cba4111a",
"text": "The Rutgers Master II-ND glove is a haptic interface designed for dextrous interactions with virtual environments. The glove provides force feedback up to 16 N each to the thumb, index, middle, and ring fingertips. It uses custom pneumatic actuators arranged in a direct-drive configuration in the palm. Unlike commercial haptic gloves, the direct-drive actuators make unnecessary cables and pulleys, resulting in a more compact and lighter structure. The force-feedback structure also serves as position measuring exoskeleton, by integrating noncontact Hall-effect and infrared sensors. The glove is connected to a haptic-control interface that reads its sensors and servos its actuators. The interface has pneumatic servovalves, signal conditioning electronics, A/D/A boards, power supply and an imbedded Pentium PC. This distributed computing assures much faster control bandwidth than would otherwise be possible. Communication with the host PC is done over an RS232 line. Comparative data with the CyberGrasp commercial haptic glove is presented.",
"title": ""
},
{
"docid": "8ae257994c6f412ceb843fcb98a67043",
"text": "Discovering the author's interest over time from documents has important applications in recommendation systems, authorship identification and opinion extraction. In this paper, we propose an interest drift model (IDM), which monitors the evolution of author interests in time-stamped documents. The model further uses the discovered author interest information to help finding better topics. Unlike traditional topic models, our model is sensitive to the ordering of words, thus it extracts more information from the semantic meaning of the context. The experiment results show that the IDM model learns better topics than state-of-the-art topic models.",
"title": ""
},
{
"docid": "7fbd687aaea396343740288233225f85",
"text": "We address the problem of answering new questions in community forums, by selecting suitable answers to already asked questions. We approach the task as an answer ranking problem, adopting a pairwise neural network architecture that selects which of two competing answers is better. We focus on the utility of the three types of similarities occurring in the triangle formed by the original question, the related question, and an answer to the related comment, which we call relevance, relatedness, and appropriateness. Our proposed neural network models the interactions among all input components using syntactic and semantic embeddings, lexical matching, and domain-specific features. It achieves state-of-the-art results, showing that the three similarities are important and need to be modeled together. Our experiments demonstrate that all feature types are relevant, but the most important ones are the lexical similarity features, the domain-specific features, and the syntactic and semantic embeddings.",
"title": ""
},
{
"docid": "201f576423ed88ee97d1505b6d5a4d3f",
"text": "The effectiveness of the treatment of breast cancer depends on its timely detection. An early step in the diagnosis is the cytological examination of breast material obtained directly from the tumor. This work reports on advances in computer-aided breast cancer diagnosis based on the analysis of cytological images of fine needle biopsies to characterize these biopsies as either benign or malignant. Instead of relying on the accurate segmentation of cell nuclei, the nuclei are estimated by circles using the circular Hough transform. The resulting circles are then filtered to keep only high-quality estimations for further analysis by a support vector machine which classifies detected circles as correct or incorrect on the basis of texture features and the percentage of nuclei pixels according to a nuclei mask obtained using Otsu's thresholding method. A set of 25 features of the nuclei is used in the classification of the biopsies by four different classifiers. The complete diagnostic procedure was tested on 737 microscopic images of fine needle biopsies obtained from patients and achieved 98.51% effectiveness. The results presented in this paper demonstrate that a computerized medical diagnosis system based on our method would be effective, providing valuable, accurate diagnostic information.",
"title": ""
},
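As a sketch of the circular-Hough detection step mentioned in the entry above, the snippet below finds circle candidates in a synthetic blob image with scikit-image; the synthetic image stands in for real cytological images, and the subsequent SVM-based filtering and the 25-feature malignancy classifier are not shown.

```python
# Detection step only: estimate nuclei as circles via the circular Hough transform.
import numpy as np
from skimage.draw import disk
from skimage.feature import canny
from skimage.transform import hough_circle, hough_circle_peaks

image = np.zeros((120, 120))
for center in [(30, 40), (80, 70), (60, 100)]:
    rr, cc = disk(center, 12, shape=image.shape)
    image[rr, cc] = 1.0                          # three fake "nuclei"

edges = canny(image, sigma=2.0)
radii = np.arange(8, 16)
hspaces = hough_circle(edges, radii)
_, cx, cy, found_radii = hough_circle_peaks(hspaces, radii, total_num_peaks=3)
print(list(zip(cy, cx, found_radii)))            # estimated (row, col, radius) per nucleus
```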
{
"docid": "6facd37330e6d88c84bbbfe0119d7008",
"text": "Bug fixing is one of the most important activities in software development and maintenance. A software project often employs an issue tracking system such as Bugzilla to store and manage their bugs. In the issue tracking system, many bugs are invalid but take unnecessary efforts to identify them. In this paper, we mainly focus on bug fixing rate, i.e., The proportion of the fixed bugs in the reported closed bugs. In particular, we study the characteristics of bug fixing rate and investigate the impact of a reporter's different contribution behaviors to the bug fixing rate. We perform an empirical study on all reported bugs of two large open source software communities Eclipse and Mozilla. We find (1) the bug fixing rates of both projects are not high, (2) there exhibits a negative correlation between a reporter's bug fixing rate and the average time cost to close the bugs he/she reports, (3) the amount of bugs a reporter ever fixed has a strong positive impact on his/her bug fixing rate, (4) reporters' bug fixing rates have no big difference, whether their contribution behaviors concentrate on a few products or across many products, (5) reporters' bug fixing rates tend to increase as time goes on, i.e., Developers become more experienced at reporting bugs.",
"title": ""
},
{
"docid": "87eed35ce26bf0194573f3ed2e6be7ca",
"text": "Embedding and visualizing large-scale high-dimensional data in a two-dimensional space is an important problem, because such visualization can reveal deep insights of complex data. However, most of the existing embedding approaches run on an excessively high precision, even when users want to obtain a brief insight from a visualization of large-scale datasets, ignoring the fact that in the end, the outputs are embedded onto a fixed-range pixel-based screen space. Motivated by this observation and directly considering the properties of screen space in an embedding algorithm, we propose Pixel-Aligned Stochastic Neighbor Embedding (PixelSNE), a highly efficient screen resolution-driven 2D embedding method which accelerates Barnes-Hut treebased t-distributed stochastic neighbor embedding (BH-SNE), which is known to be a state-of-the-art 2D embedding method. Our experimental results show a significantly faster running time for PixelSNE compared to BH-SNE for various datasets while maintaining comparable embedding quality.",
"title": ""
},
{
"docid": "c42c3eb5c431fb3e3c588613859c241e",
"text": "-This paper presents a monitoring and control system for greenhouse through Internet of Things(IOT). The system will monitor the various environmental conditions such as humidity, soil moisture, temperature, presence of fire, etc. If any condition crosses certain limits, a message will be sent to the registered number through GSM module. The microcontroller will automatically turn on the motor if the soil moisture is less than a particular value. A color sensor will sense the color of the leaves and send message. The prototype was tested under various combinations of inputs in our laboratory and the experimental results were found as expected. KeywordsGSM module, microcontroller, sensors,",
"title": ""
},
{
"docid": "1509a06ce0b2395466fe462b1c3bd333",
"text": "This paper addresses mechanics, design, estimation and control for aerial grasping. We present the design of several light-weight, low-complexity grippers that allow quadrotors to grasp and perch on branches or beams and pick up and transport payloads. We then show how the robot can use rigid body dynamic models and sensing to verify a grasp, to estimate the the inertial parameters of the grasped object, and to adapt the controller and improve performance during flight. We present experimental results with different grippers and different payloads and show the robot's ability to estimate the mass, the location of the center of mass and the moments of inertia to improve tracking performance.",
"title": ""
},
{
"docid": "21502c42ef7a8e342334b93b1b5069d6",
"text": "Motivations to engage in retail online shopping can include both utilitarian and hedonic shopping dimensions. To cater to these consumers, online retailers can create a cognitively and esthetically rich shopping environment, through sophisticated levels of interactive web utilities and features, offering not only utilitarian benefits and attributes but also providing hedonic benefits of enjoyment. Since the effect of interactive websites has proven to stimulate online consumer’s perceptions, this study presumes that websites with multimedia rich interactive utilities and features can influence online consumers’ shopping motivations and entice them to modify or even transform their original shopping predispositions by providing them with attractive and enhanced interactive features and controls, thus generating a positive attitude towards products and services offered by the retailer. This study seeks to explore the effects of Web interactivity on online consumer behavior through an attitudinal model of technology acceptance.",
"title": ""
}
] |
scidocsrr
|
713ee77d9d1d75ba1676446766043a5b
|
Sustained attention in children with specific language impairment (SLI).
|
[
{
"docid": "bb65decbaecb11cf14044b2a2cbb6e74",
"text": "The ability to remain focused on goal-relevant stimuli in the presence of potentially interfering distractors is crucial for any coherent cognitive function. However, simply instructing people to ignore goal-irrelevant stimuli is not sufficient for preventing their processing. Recent research reveals that distractor processing depends critically on the level and type of load involved in the processing of goal-relevant information. Whereas high perceptual load can eliminate distractor processing, high load on \"frontal\" cognitive control processes increases distractor processing. These findings provide a resolution to the long-standing early and late selection debate within a load theory of attention that accommodates behavioural and neuroimaging data within a framework that integrates attention research with executive function.",
"title": ""
}
] |
[
{
"docid": "3b72c70213ccd3d5f3bda5cc2e2c6945",
"text": "Neural language models (NLMs) have recently gained a renewed interest by achieving state-of-the-art performance across many natural language processing (NLP) tasks. However, NLMs are very computationally demanding largely due to the computational cost of the softmax layer over a large vocabulary. We observe that, in decoding of many NLP tasks, only the probabilities of the top-K hypotheses need to be calculated preciously and K is often much smaller than the vocabulary size. This paper proposes a novel softmax layer approximation algorithm, called Fast Graph Decoder (FGD), which quickly identifies, for a given context, a set of K words that are most likely to occur according to a NLM. We demonstrate that FGD reduces the decoding time by an order of magnitude while attaining close to the full softmax baseline accuracy on neural machine translation and language modeling tasks. We also prove the theoretical guarantee on the softmax approximation quality.",
"title": ""
},
{
"docid": "7528af716f17f125b253597e8c3e596f",
"text": "BACKGROUND\nEnhancement of the osteogenic potential of mesenchymal stem cells (MSCs) is highly desirable in the field of bone regeneration. This paper proposes a new approach for the improvement of osteogenesis combining hypergravity with osteoinductive nanoparticles (NPs).\n\n\nMATERIALS AND METHODS\nIn this study, we aimed to investigate the combined effects of hypergravity and barium titanate NPs (BTNPs) on the osteogenic differentiation of rat MSCs, and the hypergravity effects on NP internalization. To obtain the hypergravity condition, we used a large-diameter centrifuge in the presence of a BTNP-doped culture medium. We analyzed cell morphology and NP internalization with immunofluorescent staining and coherent anti-Stokes Raman scattering, respectively. Moreover, cell differentiation was evaluated both at the gene level with quantitative real-time reverse-transcription polymerase chain reaction and at the protein level with Western blotting.\n\n\nRESULTS\nFollowing a 20 g treatment, we found alterations in cytoskeleton conformation, cellular shape and morphology, as well as a significant increment of expression of osteoblastic markers both at the gene and protein levels, jointly pointing to a substantial increment of NP uptake. Taken together, our findings suggest a synergistic effect of hypergravity and BTNPs in the enhancement of the osteogenic differentiation of MSCs.\n\n\nCONCLUSION\nThe obtained results could become useful in the design of new approaches in bone-tissue engineering, as well as for in vitro drug-delivery strategies where an increment of nanocarrier internalization could result in a higher drug uptake by cell and/or tissue constructs.",
"title": ""
},
{
"docid": "1cd77d97f27b45d903ffcecda02795a5",
"text": "Molecular machine learning has been maturing rapidly over the last few years. Improved methods and the presence of larger datasets have enabled machine learning algorithms to make increasingly accurate predictions about molecular properties. However, algorithmic progress has been limited due to the lack of a standard benchmark to compare the efficacy of proposed methods; most new algorithms are benchmarked on different datasets making it challenging to gauge the quality of proposed methods. This work introduces MoleculeNet, a large scale benchmark for molecular machine learning. MoleculeNet curates multiple public datasets, establishes metrics for evaluation, and offers high quality open-source implementations of multiple previously proposed molecular featurization and learning algorithms (released as part of the DeepChem open source library). MoleculeNet benchmarks demonstrate that learnable representations are powerful tools for molecular machine learning and broadly offer the best performance. However, this result comes with caveats. Learnable representations still struggle to deal with complex tasks under data scarcity and highly imbalanced classification. For quantum mechanical and biophysical datasets, the use of physics-aware featurizations can be more important than choice of particular learning algorithm.",
"title": ""
},
{
"docid": "0441fb016923cd0b7676d3219951c230",
"text": "Globally modeling and reasoning over relations between regions can be beneficial for many computer vision tasks on both images and videos. Convolutional Neural Networks (CNNs) excel at modeling local relations by convolution operations, but they are typically inefficient at capturing global relations between distant regions and require stacking multiple convolution layers. In this work, we propose a new approach for reasoning globally in which a set of features are globally aggregated over the coordinate space and then projected to an interaction space where relational reasoning can be efficiently computed. After reasoning, relation-aware features are distributed back to the original coordinate space for down-stream tasks. We further present a highly efficient instantiation of the proposed approach and introduce the Global Reasoning unit (GloRe unit) that implements the coordinate-interaction space mapping by weighted global pooling and weighted broadcasting, and the relation reasoning via graph convolution on a small graph in interaction space. The proposed GloRe unit is lightweight, end-to-end trainable and can be easily plugged into existing CNNs for a wide range of tasks. Extensive experiments show our GloRe unit can consistently boost the performance of state-of-the-art backbone architectures, including ResNet [15, 16], ResNeXt [33], SE-Net [18] and DPN [9], for both 2D and 3D CNNs, on image classification, semantic segmentation and video action recognition task.",
"title": ""
},
{
"docid": "3bb6bfbb139ab9b488c4106c9d6cc3bd",
"text": "BACKGROUND\nRecent evidence demonstrates growth in both the quality and quantity of evidence in physical therapy. Much of this work has focused on randomized controlled trials and systematic reviews.\n\n\nOBJECTIVE\nThe purpose of this study was to conduct a comprehensive bibliometric assessment of Physical Therapy (PTJ) over the past 30 years to examine trends for all types of studies.\n\n\nDESIGN\nThis was a bibliometric analysis.\n\n\nMETHODS\nAll manuscripts published in PTJ from 1980 to 2009 were reviewed. Research reports, topical reviews (including perspectives and nonsystematic reviews), and case reports were included. Articles were coded based on type, participant characteristics, physical therapy focus, research design, purpose of article, clinical condition, and intervention. Coding was performed by 2 independent reviewers, and author, institution, and citation information was obtained using bibliometric software.\n\n\nRESULTS\nOf the 4,385 publications identified, 2,519 were included in this analysis. Of these, 67.1% were research reports, 23.0% were topical reviews, and 9.9% were case reports. Percentage increases over the past 30 years were observed for research reports, inclusion of \"symptomatic\" participants (defined as humans with a current symptomatic condition), systematic reviews, qualitative studies, prospective studies, and articles focused on prognosis, diagnosis, or metric topics. Percentage decreases were observed for topical reviews, inclusion of only \"asymptomatic\" participants (defined as humans without a current symptomatic condition), education articles, nonsystematic reviews, and articles focused on anatomy/physiology.\n\n\nLIMITATIONS\nQuality assessment of articles was not performed.\n\n\nCONCLUSIONS\nThese trends provide an indirect indication of the evolution of the physical therapy profession through the publication record in PTJ. Collectively, the data indicated an increased emphasis on publishing articles consistent with evidence-based practice and clinically based research. Bibliometric analyses indicated the most frequent citations were metric studies and references in PTJ were from journals from a variety of disciplines.",
"title": ""
},
{
"docid": "5c9ba6384b6983a26212e8161e502484",
"text": "The field of medical diagnostics contains a wealth of challenges which closely resemble classical machine learning problems; practical constraints, however, complicate the translation of these endpoints naively into classical architectures. Many tasks in radiology, for example, are largely problems of multi-label classification wherein medical images are interpreted to indicate multiple present or suspected pathologies. Clinical settings drive the necessity for high accuracy simultaneously across a multitude of pathological outcomes and greatly limit the utility of tools which consider only a subset. This issue is exacerbated by a general scarcity of training data and maximizes the need to extract clinically relevant features from available samples – ideally without the use of pre-trained models which may carry forward undesirable biases from tangentially related tasks. We present and evaluate a partial solution to these constraints in using LSTMs to leverage interdependencies among target labels in predicting 14 pathologic patterns from chest x-rays and establish state of the art results on the largest publicly available chest x-ray dataset from the NIH without pre-training. Furthermore, we propose and discuss alternative evaluation metrics and their relevance in clinical practice.",
"title": ""
},
{
"docid": "9b7654390d496cb041f3073dcfb07e67",
"text": "Electronic commerce (EC) transactions are subject to multiple information security threats. Proposes that consumer trust in EC transactions is influenced by perceived information security and distinguishes it from the objective assessment of security threats. Proposes mechanisms of encryption, protection, authentication, and verification as antecedents of perceived information security. These mechanisms are derived from technological solutions to security threats that are visible to consumers and hence contribute to actual consumer perceptions. Tests propositions in a study of 179 consumers and shows a significant relationship between consumers’ perceived information security and trust in EC transactions. Explores the role of limited financial liability as a surrogate for perceived security. However, the findings show that there is a minimal effect of financial liability on consumers’ trust in EC. Engenders several new insights regarding the role of perceived security in EC transactions.",
"title": ""
},
{
"docid": "52b354c9b1cfe53598f159b025ec749a",
"text": "This paper describes a survey designed to determine the information seeking behavior of graduate students at the University of Macedonia (UoM). The survey is a continuation of a previous one undertaken in the Faculties of Philosophy and Engineering at the Aristotle University of Thessaloniki (AUTh). This paper primarily presents results from the UoM survey, but also makes comparisons with the findings from the earlier survey at AUTh. The 254 UoM students responding tend to use the simplest information search techniques with no critical variations between different disciplines. Their information seeking behavior seems to be influenced by their search experience, computer and web experience, perceived ability and frequency of use of esources, and not by specific personal characteristics or attendance at library instruction programs. Graduate students of both universities similar information seeking preferences, with the UoM students using more sophisticated techniques, such as Boolean search and truncation, more often than the AUTh students.",
"title": ""
},
{
"docid": "247eb1c32cf3fd2e7a925d54cb5735da",
"text": "Several applications in machine learning and machine-to-human interactions tolerate small deviations in their computations. Digital systems can exploit this fault-tolerance to increase their energy-efficiency, which is crucial in embedded applications. Hence, this paper introduces a new means of Approximate Computing: Dynamic-Voltage-Accuracy-Frequency-Scaling (DVAFS), a circuit-level technique enabling a dynamic trade-off of energy versus computational accuracy that outperforms other Approximate Computing techniques. The usage and applicability of DVAFS is illustrated in the context of Deep Neural Networks, the current state-of-the-art in advanced recognition. These networks are typically executed on CPU's or GPU's due to their high computational complexity, making their deployment on battery-constrained platforms only possible through wireless connections with the cloud. This work shows how deep learning can be brought to IoT devices by running every layer of the network at its optimal computational accuracy. Finally, we demonstrate a DVAFS processor for Convolutional Neural Networks, achieving efficiencies of multiple TOPS/W.",
"title": ""
},
{
"docid": "d1a4abaa57f978858edf0d7b7dc506ba",
"text": "Abstraction in imagery results from the strategic simplification and elimination of detail to clarify the visual structure of the depicted shape. It is a mainstay of artistic practice and an important ingredient of effective visual communication. We develop a computational method for the abstract depiction of 2D shapes. Our approach works by organizing the shape into parts using a new synthesis of holistic features of the part shape, local features of the shape boundary, and global aspects of shape organization. Our abstractions are new shapes with fewer and clearer parts.",
"title": ""
},
{
"docid": "932b189b21703a4c50399f27395f37a6",
"text": "An ultra-low power wake-up receiver for body channel communication (BCC) is implemented in 0.13 μm CMOS process. The proposed wake-up receiver uses the injection-locking ring-oscillator (ILRO) to replace the RF amplifier with low power consumption. Through the ILRO, the frequency modulated input signal is converted to the full swing rectangular signal which is directly demodulated by the following low power PLL based FSK demodulator. In addition, the relaxed sensitivity and selectivity requirement by the good channel quality of the BCC reduces the power consumption of the receiver. As a result, the proposed wake-up receiver achieves a sensitivity of -55.2 dbm at a data rate of 200 kbps while consuming only 39 μW from the 0.7 V supply.",
"title": ""
},
{
"docid": "ba94bc5f5762017aed0c307ce89c0558",
"text": "Carsharing has emerged as an alternative to vehicle ownership and is a rapidly expanding global market. Particularly through the flexibility of free-floating models, car sharing complements public transport since customers do not need to return cars to specific stations. We present a novel data analytics approach that provides decision support to car sharing operators -- from local start-ups to global players -- in maneuvering this constantly growing and changing market environment. Using a large set of rental data, as well as zero-inflated and geographically weighted regression models, we derive indicators for the attractiveness of certain areas based on points of interest in their vicinity. These indicators are valuable for a variety of operational and strategic decisions. As a demonstration project, we present a case study of Berlin, where the indicators are used to identify promising regions for business area expansion.",
"title": ""
},
{
"docid": "e6640dc272e4142a2ddad8291cfaead7",
"text": "We give a summary of R. Borcherds’ solution (with some modifications) to the following part of the Conway-Norton conjectures: Given the Monster M and Frenkel-Lepowsky-Meurman’s moonshine module V ♮, prove the equality between the graded characters of the elements of M acting on V ♮ (i.e., the McKay-Thompson series for V ♮) and the modular functions provided by Conway and Norton. The equality is established using the homology of a certain subalgebra of the monster Lie algebra, and the Euler-Poincaré identity.",
"title": ""
},
{
"docid": "3af1e6d82d1c70a2602d52f47ddce665",
"text": "Birds have a smaller repertoire of immune genes than mammals. In our efforts to study antiviral responses to influenza in avian hosts, we have noted key genes that appear to be missing. As a result, we speculate that birds have impaired detection of viruses and intracellular pathogens. Birds are missing TLR8, a detector for single-stranded RNA. Chickens also lack RIG-I, the intracellular detector for single-stranded viral RNA. Riplet, an activator for RIG-I, is also missing in chickens. IRF3, the nuclear activator of interferon-beta in the RIG-I pathway is missing in birds. Downstream of interferon (IFN) signaling, some of the antiviral effectors are missing, including ISG15, and ISG54 and ISG56 (IFITs). Birds have only three antibody isotypes and IgD is missing. Ducks, but not chickens, make an unusual truncated IgY antibody that is missing the Fc fragment. Chickens have an expanded family of LILR leukocyte receptor genes, called CHIR genes, with hundreds of members, including several that encode IgY Fc receptors. Intriguingly, LILR homologues appear to be missing in ducks, including these IgY Fc receptors. The truncated IgY in ducks, and the duplicated IgY receptor genes in chickens may both have resulted from selective pressure by a pathogen on IgY FcR interactions. Birds have a minimal MHC, and the TAP transport and presentation of peptides on MHC class I is constrained, limiting function. Perhaps removing some constraint, ducks appear to lack tapasin, a chaperone involved in loading peptides on MHC class I. Finally, the absence of lymphotoxin-alpha and beta may account for the observed lack of lymph nodes in birds. As illustrated by these examples, the picture that emerges is some impairment of immune response to viruses in birds, either a cause or consequence of the host-pathogen arms race and long evolutionary relationship of birds and RNA viruses.",
"title": ""
},
{
"docid": "de408de1915d43c4db35702b403d0602",
"text": "real-time population health assessment and monitoring D. L. Buckeridge M. Izadi A. Shaban-Nejad L. Mondor C. Jauvin L. Dubé Y. Jang R. Tamblyn The fragmented nature of population health information is a barrier to public health practice. Despite repeated demands by policymakers, administrators, and practitioners to develop information systems that provide a coherent view of population health status, there has been limited progress toward developing such an infrastructure. We are creating an informatics platform for describing and monitoring the health status of a defined population by integrating multiple clinical and administrative data sources. This infrastructure, which involves a population health record, is designed to enable development of detailed portraits of population health, facilitate monitoring of population health indicators, enable evaluation of interventions, and provide clinicians and patients with population context to assist diagnostic and therapeutic decision-making. In addition to supporting public health professionals, clinicians, and the public, we are designing the infrastructure to provide a platform for public health informatics research. This early report presents the requirements and architecture for the infrastructure and describes the initial implementation of the population health record, focusing on indicators of chronic diseases related to obesity.",
"title": ""
},
{
"docid": "cd1af39ff72f2ff36708ed0bf820fb95",
"text": "Classifying semantic relations between entity pairs in sentences is an important task in Natural Language Processing (NLP). Most previous models for relation classification rely on the high-level lexical and syntatic features obtained by NLP tools such as WordNet, dependency parser, part-ofspeech (POS) tagger, and named entity recognizers (NER). In addition, state-of-the-art neural models based on attention mechanisms do not fully utilize information of entity that may be the most crucial features for relation classification. To address these issues, we propose a novel end-to-end recurrent neural model which incorporates an entity-aware attention mechanism with a latent entity typing (LET) method. Our model not only utilizes entities and their latent types as features effectively but also is more interpretable by visualizing attention mechanisms applied to our model and results of LET. Experimental results on the SemEval-2010 Task 8, one of the most popular relation classification task, demonstrate that our model outperforms existing state-ofthe-art models without any high-level features.",
"title": ""
},
{
"docid": "77f83ada0854e34ac60c725c21671434",
"text": "OBJECTIVES\nThis subanalysis of the TNT (Treating to New Targets) study investigates the effects of intensive lipid lowering with atorvastatin in patients with coronary heart disease (CHD) with and without pre-existing chronic kidney disease (CKD).\n\n\nBACKGROUND\nCardiovascular disease is a major cause of morbidity and mortality in patients with CKD.\n\n\nMETHODS\nA total of 10,001 patients with CHD were randomized to double-blind therapy with atorvastatin 80 mg/day or 10 mg/day. Patients with CKD were identified at baseline on the basis of an estimated glomerular filtration rate (eGFR) <60 ml/min/1.73 m(2) using the Modification of Diet in Renal Disease equation. The primary efficacy outcome was time to first major cardiovascular event.\n\n\nRESULTS\nOf 9,656 patients with complete renal data, 3,107 had CKD at baseline and demonstrated greater cardiovascular comorbidity than those with normal eGFR (n = 6,549). After a median follow-up of 5.0 years, 351 patients with CKD (11.3%) experienced a major cardiovascular event, compared with 561 patients with normal eGFR (8.6%) (hazard ratio [HR] = 1.35; 95% confidence interval [CI] 1.18 to 1.54; p < 0.0001). Compared with atorvastatin 10 mg, atorvastatin 80 mg reduced the relative risk of major cardiovascular events by 32% in patients with CKD (HR = 0.68; 95% CI 0.55 to 0.84; p = 0.0003) and 15% in patients with normal eGFR (HR = 0.85; 95% CI 0.72 to 1.00; p = 0.049). Both doses of atorvastatin were well tolerated in patients with CKD.\n\n\nCONCLUSIONS\nAggressive lipid lowering with atorvastatin 80 mg was both safe and effective in reducing the excess of cardiovascular events in a high-risk population with CKD and CHD.",
"title": ""
},
{
"docid": "d3c8903fed280246ea7cb473ee87c0e7",
"text": "Reaction time has a been a favorite subject of experimental psychologists since the middle of the nineteenth century. However, most studies ask questions about the organization of the brain, so the authors spend a lot of time trying to determine if the results conform to some mathematical model of brain activity. This makes these papers hard to understand for the beginning student. In this review, I have ignored these brain organization questions and summarized the major literature conclusions that are applicable to undergraduate laboratories using my Reaction Time software. I hope this review helps you write a good report on your reaction time experiment. I also apologize to reaction time researchers for omissions and oversimplifications.",
"title": ""
},
{
"docid": "40a181cc018d3050e41fe9e2659acd0a",
"text": "Efforts to adapt and extend graphic arts printing techniques for demanding device applications in electronics, biotechnology and microelectromechanical systems have grown rapidly in recent years. Here, we describe the use of electrohydrodynamically induced fluid flows through fine microcapillary nozzles for jet printing of patterns and functional devices with submicrometre resolution. Key aspects of the physics of this approach, which has some features in common with related but comparatively low-resolution techniques for graphic arts, are revealed through direct high-speed imaging of the droplet formation processes. Printing of complex patterns of inks, ranging from insulating and conducting polymers, to solution suspensions of silicon nanoparticles and rods, to single-walled carbon nanotubes, using integrated computer-controlled printer systems illustrates some of the capabilities. High-resolution printed metal interconnects, electrodes and probing pads for representative circuit patterns and functional transistors with critical dimensions as small as 1 mum demonstrate potential applications in printed electronics.",
"title": ""
},
{
"docid": "b0532d77781257c80024926c836f14e1",
"text": "Various levels of automation can be introduced by intelligent decision support systems, from fully automated, where the operator is completely left out of the decision process, to minimal levels of automation, where the automation only makes recommendations and the operator has the final say. For rigid tasks that require no flexibility in decision-making and with a low probability of system failure, higher levels of automation often provide the best solution. However, in time critical environments with many external and changing constraints such as air traffic control and military command and control operations, higher levels of automation are not advisable because of the risks and the complexity of both the system and the inability of the automated decision aid to be perfectly reliable. Human-inthe-loop designs, which employ automation for redundant, manual, and monotonous tasks and allow operators active participation, provide not only safety benefits, but also allow a human operator and a system to respond more flexibly to uncertain and unexpected events. However, there can be measurable costs to human performance when automation is used, such as loss of situational awareness, complacency, skill degradation, and automation bias. This paper will discuss the influence of automation bias in intelligent decision support systems, particularly those in aviation domains. Automation bias occurs in decision-making because humans have a tendency to disregard or not search for contradictory information in light of a computer-generated solution that is accepted as correct and can be exacerbated in time critical domains. Automated decision aids are designed to reduce human error but actually can cause new errors in the operation of a system if not designed with human cognitive limitations in mind.",
"title": ""
}
] |
scidocsrr
|
950344250abe2b91d045e3f7e3bff252
|
eXpose: A Character-Level Convolutional Neural Network with Embeddings For Detecting Malicious URLs, File Paths and Registry Keys
|
[
{
"docid": "2657bb2a6b2fb59714417aa9e6c6c5eb",
"text": "Mash extends the MinHash dimensionality-reduction technique to include a pairwise mutation distance and P value significance test, enabling the efficient clustering and search of massive sequence collections. Mash reduces large sequences and sequence sets to small, representative sketches, from which global mutation distances can be rapidly estimated. We demonstrate several use cases, including the clustering of all 54,118 NCBI RefSeq genomes in 33 CPU h; real-time database search using assembled or unassembled Illumina, Pacific Biosciences, and Oxford Nanopore data; and the scalable clustering of hundreds of metagenomic samples by composition. Mash is freely released under a BSD license ( https://github.com/marbl/mash ).",
"title": ""
}
] |
[
{
"docid": "c55cab85bc7f1903e4355168e6e4e07b",
"text": "Objectives: Several quantitative studies have now examined the relationship between quality of life (QoL) and bipolar disorder (BD) and have generally indicated that QoL is markedly impaired in patients with BD. However, little qualitative research has been conducted to better describe patients’ own experiences of how BD impacts upon life quality. We report here on a series of in-depth qualitative interviews we conducted as part of the item generation phase for a disease-specific scale to assess QoL in BD. Methods: We conducted 52 interviews with people with BD (n=35), their caregivers (n=5) and healthcare professionals (n=12) identified by both convenience and purposive sampling. Clinical characteristics of the affected sample ranged widely between individuals who had been clinically stable for several years through to inpatients who were recovering from a severe episode of depression or mania. Interviews were tape recorded, transcribed verbatim and analyzed thematically. Results: Although several interwoven themes emerged from the data, we chose to focus on 6 for the purposes of this paper: routine, independence, stigma and disclosure, identity, social support and spirituality. When asked to prioritize the areas they thought were most important in determining QoL, the majority of participants ranked social support as most important, followed by mental health. Conclusions: Findings indicate that there is a complex, multifaceted relationship between BD and QoL. Most of the affected individuals we interviewed reported that BD had a profoundly negative effect upon their life quality, particularly in the areas of education, vocation, financial functioning, and social and intimate relationships. However, some people also reported that having BD opened up new doors of opportunity.",
"title": ""
},
{
"docid": "9e3562c5d4baf6be3293486383e62b3e",
"text": "Many philosophical and contemplative traditions teach that \"living in the moment\" increases happiness. However, the default mode of humans appears to be that of mind-wandering, which correlates with unhappiness, and with activation in a network of brain areas associated with self-referential processing. We investigated brain activity in experienced meditators and matched meditation-naive controls as they performed several different meditations (Concentration, Loving-Kindness, Choiceless Awareness). We found that the main nodes of the default-mode network (medial prefrontal and posterior cingulate cortices) were relatively deactivated in experienced meditators across all meditation types. Furthermore, functional connectivity analysis revealed stronger coupling in experienced meditators between the posterior cingulate, dorsal anterior cingulate, and dorsolateral prefrontal cortices (regions previously implicated in self-monitoring and cognitive control), both at baseline and during meditation. Our findings demonstrate differences in the default-mode network that are consistent with decreased mind-wandering. As such, these provide a unique understanding of possible neural mechanisms of meditation.",
"title": ""
},
{
"docid": "dc4abae418c9df783d78f508cdc2187a",
"text": "Biological sensors are becoming more important to monitor the quality of the aquatic environment. In this paper the valve movement response of freshwater (Dreissena polymorpha) and marine (Mytilus edulis) mussels is presented as a tool in monitoring studies. Examples of various methods for data storage and data treatment are presented, elucidating easier operation and lower detection limits. Several applications are mentioned, including an early warning system based on this valve movement response of mussels.",
"title": ""
},
{
"docid": "8ffc37aeacd3136d3a5801f87a3140df",
"text": "Syndromic surveillance detects and monitors individual and population health indicators through sources such as emergency department records. Automated classification of these records can improve outbreak detection speed and diagnosis accuracy. Current syndromic systems rely on hand-coded keyword-based methods to parse written fields and may benefit from the use of modern supervised-learning classifier models. In this paper we implement two recurrent neural network models based on long short-term memory (LSTM) and gated recurrent unit (GRU) cells and compare them to two traditional bag-of-words classifiers: multinomial naïve Bayes (MNB) and a support vector machine (SVM). The MNB classifier is one of only two machine learning algorithms currently being used for syndromic surveillance. All four models are trained to predict diagnostic code groups as defined by Clinical Classification Software, first to predict from discharge diagnosis, then from chief complaint fields. The classifiers are trained on 3.6 million de-identified emergency department records from a single United States jurisdiction. We compare performance of these models primarily using the F1 score. We measure absolute model performance to determine which conditions are the most amenable to surveillance based on chief complaint alone. Using discharge diagnoses The LSTM classifier performs best, though all models exhibit an F1 score above 96.00. The GRU performs best on chief complaints (F1=47.38), and MNB with bigrams performs worst (F1=39.40). Certain syndrome types are easier to detect than others. For examples, chief complaints using the GRU model predicts alcohol-related disorders well (F1=78.91) but predicts influenza poorly (F1=14.80). In all instances the RNN models outperformed the bag-of-word classifiers suggesting deep learning models could substantially improve the automatic classification of unstructured text for syndromic surveillance. INTRODUCTION Syndromic surveillance—detection and monitoring individual and population health indicators that are discernible before confirmed diagnoses are made (Mandl et al.2004)—can draw from many data sources. Electronic health records of emergency department encounters, especially the free-text chief complaint field, are a common focus for syndromic surveillance (Yoon, Ising, & Gunn 2017). In practice, a computer algorithm associates the text of the chief complaint field with predefined syndromes, often by picking out keywords or parts of keywords or a machine learning algorithm based on mathematical representation of the chief complaint text. In this paper, we explore recurrent neural networks as an alternative to existing methods for associating chief complaint text with syndromes. Overview of Chief Complaint Classifiers In a recent overview of chief complaint classifiers (Conway et al., 2013), the authors divide chief complaint classifiers into 3 categories: keyword-based classifiers, linguistic classifiers, and statistical classifiers.",
"title": ""
},
{
"docid": "c953895c57d8906736352698a55c24a9",
"text": "Data scientists and physicians are starting to use artificial intelligence (AI) even in the medical field in order to better understand the relationships among the huge amount of data coming from the great number of sources today available. Through the data interpretation methods made available by the recent AI tools, researchers and AI companies have focused on the development of models allowing to predict the risk of suffering from a specific disease, to make a diagnosis, and to recommend a treatment that is based on the best and most updated scientific evidence. Even if AI is used to perform unimaginable tasks until a few years ago, the awareness about the ongoing revolution has not yet spread through the medical community for several reasons including the lack of evidence about safety, reliability and effectiveness of these tools, the lack of regulation accompanying hospitals in the use of AI by health care providers, the difficult attribution of liability in case of errors and malfunctions of these systems, and the ethical and privacy questions that they raise and that, as of today, are still unanswered.",
"title": ""
},
{
"docid": "982d7d2d65cddba4fa7dac3c2c920790",
"text": "In this paper, we present our multichannel neural architecture for recognizing emerging named entity in social media messages, which we applied in the Novel and Emerging Named Entity Recognition shared task at the EMNLP 2017 Workshop on Noisy User-generated Text (W-NUT). We propose a novel approach, which incorporates comprehensive word representations with multichannel information and Conditional Random Fields (CRF) into a traditional Bidirectional Long Short-Term Memory (BiLSTM) neural network without using any additional hand-crafted features such as gazetteers. In comparison with other systems participating in the shared task, our system won the 3rd place in terms of the average of two evaluation metrics.",
"title": ""
},
{
"docid": "f741eb8ca9fb9798fb89674a0e045de9",
"text": "We investigate the issue of model uncertainty in cross-country growth regressions using Bayesian Model Averaging (BMA). We find that the posterior probability is very spread among many models suggesting the superiority of BMA over choosing any single model. Out-of-sample predictive results support this claim. In contrast with Levine and Renelt (1992), our results broadly support the more “optimistic” conclusion of Sala-i-Martin (1997b), namely that some variables are important regressors for explaining cross-country growth patterns. However, care should be taken in the methodology employed. The approach proposed here is firmly grounded in statistical theory and immediately leads to posterior and predictive inference.",
"title": ""
},
{
"docid": "03f913234dc6d41aada7ce3fe8de1203",
"text": "Epicanthoplasty is commonly performed on Asian eyelids. Consequently, overcorrection may appear. The aim of this study was to introduce a method of reconstructing the epicanthal fold and to apply this method to the patients. A V flap with an extension (eagle beak shaped) was designed on the medial canthal area. The upper incision line started near the medial end of the double-fold line, and it followed its curvature inferomedially. For the lower incision, starting at the tip (medial end) of the flap, a curvilinear incision was designed first diagonally and then horizontally along the lower blepharoplasty line. The V flap was elevated as thin as possible. Then, the upper flap was deeply undermined to make it thick. The lower flap was made a little thinner than the upper flap. Then, the upper and lower flaps were approximated to form the anteromedial surface of the epicanthal fold in a fashion sufficient to cover the red caruncle. The V flap was rotated inferolaterally over the caruncle. The tip of the V flap was sutured to the medial one-third point of the lower margin. The inferior border of the V flap and the residual lower margin were approximated. Thereafter, the posterolateral surface of the epicanthal fold was made. From 1999 to 2011, 246 patients were operated on using this method. Among them, 62 patients were followed up. The mean intercanthal distance was increased from 31.7 to 33.8 mm postoperatively. Among the 246 patients operated on, reoperation was performed for 6 patients. Among the 6 patients reoperated on, 3 cases were due to epicanthus inversus, 1 case was due to insufficient reconstruction, 1 case was due to making an infold, and 1 case was due to reopening the epicanthal fold.This V-Y and rotation flap can be a useful method for reconstruction of the epicanthal fold.",
"title": ""
},
{
"docid": "afbd52acb39600e8a0804f2140ebf4fc",
"text": "This paper presents the case study of a non-intrusive porting of a monolithic C++ library for real-time 3D hand tracking, to the domain of edge-based computation. Towards a proof of concept, the case study considers a pair of workstations, a computationally powerful and a computationallyweak one. Bywrapping the C++ library in Java container and by capitalizing on a Java-based offloading infrastructure that supports both CPU and GPGPU computations, we are able to establish automatically the required serverclient workflow that best addresses the resource allocation problem in the effort to execute from the weak workstation. As a result, the weak workstation can perform well at the task, despite lacking the sufficient hardware to do the required computations locally. This is achieved by offloading computations which rely on GPGPU, to the powerful workstation, across the network that connects them. We show the edge-based computation challenges associated with the information flow of the ported algorithm, demonstrate how we cope with them, and identify what needs to be improved for achieving even better performance.",
"title": ""
},
{
"docid": "30a6a3df784c2a8cc69a1bd75ad1998b",
"text": "Traditional stock market prediction approaches commonly utilize the historical price-related data of the stocks to forecast their future trends. As the Web information grows, recently some works try to explore financial news to improve the prediction. Effective indicators, e.g., the events related to the stocks and the people’s sentiments towards the market and stocks, have been proved to play important roles in the stocks’ volatility, and are extracted to feed into the prediction models for improving the prediction accuracy. However, a major limitation of previous methods is that the indicators are obtained from only a single source whose reliability might be low, or from several data sources but their interactions and correlations among the multi-sourced data are largely ignored. In this work, we extract the events from Web news and the users’ sentiments from social media, and investigate their joint impacts on the stock price movements via a coupled matrix and tensor factorization framework. Specifically, a tensor is firstly constructed to fuse heterogeneous data and capture the intrinsic ∗Corresponding author Email addresses: [email protected] (Xi Zhang), [email protected] (Yunjia Zhang), [email protected] (Senzhang Wang), [email protected] (Yuntao Yao), [email protected] (Binxing Fang), [email protected] (Philip S. Yu) Preprint submitted to Journal of LTEX Templates September 2, 2018 ar X iv :1 80 1. 00 58 8v 1 [ cs .S I] 2 J an 2 01 8 relations among the events and the investors’ sentiments. Due to the sparsity of the tensor, two auxiliary matrices, the stock quantitative feature matrix and the stock correlation matrix, are constructed and incorporated to assist the tensor decomposition. The intuition behind is that stocks that are highly correlated with each other tend to be affected by the same event. Thus, instead of conducting each stock prediction task separately and independently, we predict multiple correlated stocks simultaneously through their commonalities, which are enabled via sharing the collaboratively factorized low rank matrices between matrices and the tensor. Evaluations on the China A-share stock data and the HK stock data in the year 2015 demonstrate the effectiveness of the proposed model.",
"title": ""
},
{
"docid": "9a05c95de1484df50a5540b31df1a010",
"text": "Resumen. Este trabajo trata sobre un sistema de monitoreo remoto a través de una pantalla inteligente para sensores de temperatura y corriente utilizando una red híbrida CAN−ZIGBEE. El CAN bus es usado como medio de transmisión de datos a corta distancia mientras que Zigbee es empleado para que cada nodo de la red pueda interactuar de manera inalámbrica con el nodo principal. De esta manera la red híbrida combina las ventajas de cada protocolo de comunicación para intercambiar datos. El sistema cuenta con cuatro nodos, dos son CAN y reciben la información de los sensores y el resto son Zigbee. Estos nodos están a cargo de transmitir la información de un nodo CAN de manera inalámbrica y desplegarla en una pantalla inteligente.",
"title": ""
},
{
"docid": "4a4a868d64a653fac864b5a7a531f404",
"text": "Metropolitan areas have come under intense pressure to respond to federal mandates to link planning of land use, transportation, and environmental quality; and from citizen concerns about managing the side effects of growth such as sprawl, congestion, housing affordability, and loss of open space. The planning models used by Metropolitan Planning Organizations (MPOs) were generally not designed to address these questions, creating a gap in the ability of planners to systematically assess these issues. UrbanSim is a new model system that has been developed to respond to these emerging requirements, and has now been applied in three metropolitan areas. This paper describes the model system and its application to Eugene-Springfield, Oregon.",
"title": ""
},
{
"docid": "c77fad43abe34ecb0a451a3b0b5d684e",
"text": "Search engine click logs provide an invaluable source of relevance information, but this information is biased. A key source of bias is presentation order: the probability of click is influenced by a document's position in the results page. This paper focuses on explaining that bias, modelling how probability of click depends on position. We propose four simple hypotheses about how position bias might arise. We carry out a large data-gathering effort, where we perturb the ranking of a major search engine, to see how clicks are affected. We then explore which of the four hypotheses best explains the real-world position effects, and compare these to a simple logistic regression model. The data are not well explained by simple position models, where some users click indiscriminately on rank 1 or there is a simple decay of attention over ranks. A â cascade' model, where users view results from top to bottom and leave as soon as they see a worthwhile document, is our best explanation for position bias in early ranks",
"title": ""
},
{
"docid": "9a217426c46fbbb3065f141a5d70cb6b",
"text": "BACKGROUND & AIMS\nAnti-tumor necrosis factors (anti-TNF) including infliximab, adalimumab and certolizumab pegol are used to treat Crohn's disease (CD) and ulcerative colitis (UC). Paradoxically, while also indicated for the treatment of psoriasis, anti-TNF therapy has been associated with development of psoriasiform lesions in IBD patients and can compel discontinuation of therapy. We aim to investigate IBD patient, clinical characteristics, and frequency for the development of and outcomes associated with anti-TNF induced psoriasiform rash.\n\n\nMETHODS\nWe identify IBD patients on anti-TNFs with an onset of a psoriasiform rash. Patient characteristics, duration of anti-TNF, concomitant immunosuppressants, lesion distribution, and outcomes of rash are described.\n\n\nRESULTS\nOf 1004 IBD patients with exposure to anti-TNF therapy, 27 patients (2.7%) developed psoriasiform lesions. Psoriasiform rash cases stratified by biologic use were 1.3% for infliximab, 4.1% for adalimumab, and 6.4% for certolizumab. Average time on treatment (206.3weeks) and time on treatment until onset of psoriasiform lesions (126.9weeks) was significantly higher in the infliximab group. The adalimumab group had the highest need for treatment discontinuation (60%). The majority (59.3%) of patients were able to maintain on anti-TNFs despite rash onset. Among patients that required discontinuation (40.7%), the majority experienced improvement with a subsequent anti-TNF (66.7%).\n\n\nCONCLUSION\n27 cases of anti-TNF associated psoriasiform lesions are reported. Discontinuation of anti-TNF treatment is unnecessary in the majority. Dermatologic improvement was achieved in the majority with a subsequent anti-TNF, suggesting anti-TNF induced psoriasiform rash is not necessarily a class effect.",
"title": ""
},
{
"docid": "4e55d02fdd8ff4c5739cc433f4f15e9b",
"text": "muchine, \" a progrum f o r uutomuticully generating syntacticully correct progrums (test cusrs> f o r checking compiler front ends. The notion of \" clynumic grammur \" is introduced und is used in a syntax-defining notution thut procides f o r context-sensitiuity. Exurnples demonstrute use of the syntax machine. The \" syntax machine \" discussed here automatically generates random test cases for any suitably defined programming language.' The test cases it produces are syntactically valid programs. But they are not \" meaningful, \" and if an attempt is made to execute them, the results are unpredictable and uncheckable. For this reason, they are less valuable than handwritten test cases. However, as an inexhaustible source of new test material, the syntax machine has shown itself to be a valuable tool. In the following sections, we characterize the use of this tool in testing different types of language processors, introduce the concept of \" dynamic grammar \" of a programming language, outline the structure of the system, and show what the syntax machine does by means of some examples. Test cases Test cases for a language processor are programs written following the rules of the language, as documented. The test cases, when processed, should give known results. If this does not happen, then either the processor or its documentation is in error. We can distinguish three categories of language processors and assess the usefulness of the syntax machine for testing them. For an interpreter, the syntax machine test cases are virtually useless,",
"title": ""
},
{
"docid": "69b831bb25e5ad0f18054d533c313b53",
"text": "In recent years, indoor positioning has emerged as a critical function in many end-user applications; including military, civilian, disaster relief and peacekeeping missions. In comparison with outdoor environments, sensing location information in indoor environments requires a higher precision and is a more challenging task in part because various objects reflect and disperse signals. Ultra WideBand (UWB) is an emerging technology in the field of indoor positioning that has shown better performance compared to others. In order to set the stage for this work, we provide a survey of the state-of-the-art technologies in indoor positioning, followed by a detailed comparative analysis of UWB positioning technologies. We also provide an analysis of strengths, weaknesses, opportunities, and threats (SWOT) to analyze the present state of UWB positioning technologies. While SWOT is not a quantitative approach, it helps in assessing the real status and in revealing the potential of UWB positioning to effectively address the indoor positioning problem. Unlike previous studies, this paper presents new taxonomies, reviews some major recent advances, and argues for further exploration by the research community of this challenging problem space.",
"title": ""
},
{
"docid": "7bd7b0b85ae68f0ccd82d597667d8acb",
"text": "Trust evaluation plays an important role in securing wireless sensor networks (WSNs), which is one of the most popular network technologies for the Internet of Things (IoT). The efficiency of the trust evaluation process is largely governed by the trust derivation, as it dominates the overhead in the process, and performance of WSNs is particularly sensitive to overhead due to the limited bandwidth and power. This paper proposes an energy-aware trust derivation scheme using game theoretic approach, which manages overhead while maintaining adequate security of WSNs. A risk strategy model is first presented to stimulate WSN nodes' cooperation. Then, a game theoretic approach is applied to the trust derivation process to reduce the overhead of the process. We show with the help of simulations that our trust derivation scheme can achieve both intended security and high efficiency suitable for WSN-based IoT networks.",
"title": ""
},
{
"docid": "ca20d27b1e6bfd1f827f967473d8bbdd",
"text": "We propose a simple yet effective detector for pedestrian detection. The basic idea is to incorporate common sense and everyday knowledge into the design of simple and computationally efficient features. As pedestrians usually appear up-right in image or video data, the problem of pedestrian detection is considerably simpler than general purpose people detection. We therefore employ a statistical model of the up-right human body where the head, the upper body, and the lower body are treated as three distinct components. Our main contribution is to systematically design a pool of rectangular templates that are tailored to this shape model. As we incorporate different kinds of low-level measurements, the resulting multi-modal & multi-channel Haar-like features represent characteristic differences between parts of the human body yet are robust against variations in clothing or environmental settings. Our approach avoids exhaustive searches over all possible configurations of rectangle features and neither relies on random sampling. It thus marks a middle ground among recently published techniques and yields efficient low-dimensional yet highly discriminative features. Experimental results on the INRIA and Caltech pedestrian datasets show that our detector reaches state-of-the-art performance at low computational costs and that our features are robust against occlusions.",
"title": ""
},
{
"docid": "c592a75ae5b607f04bdb383a1a04ccba",
"text": "Searching for influential spreaders in complex networks is an issue of great significance for applications across various domains, ranging from the epidemic control, innovation diffusion, viral marketing, social movement to idea propagation. In this paper, we first display some of the most important theoretical models that describe spreading processes, and then discuss the problem of locating both the individual and multiple influential spreaders respectively. Recent approaches in these two topics are presented. For the identification of privileged single spreaders, we summarize several widely used centralities, such as degree, betweenness centrality, PageRank, k-shell, etc. We investigate the empirical diffusion data in a large scale online social community – LiveJournal. With this extensive dataset, we find that various measures can convey very distinct information of nodes. Of all the users in LiveJournal social network, only a small fraction of them involve in spreading. For the spreading processes in LiveJournal, while degree can locate nodes participating in information diffusion with higher probability, k-shell is more effective in finding nodes with large influence. Our results should provide useful information for designing efficient spreading strategies in reality.",
"title": ""
},
{
"docid": "0f3fc1501a5990e6219b13c906c5c9fa",
"text": "Many wideband baluns have been presented in the past using coupled lines, pure magnetic coupling or slotlines. Their limitations were set whether in high frequency or low frequency performance. Due to their lumped element bandpass representation, many of them allow just certain bandwidth. The tapered coaxial coil structure allows balun operation beyond 26 GHz and down to the kHz range through partial ferrite filling. The cable losses, cable cut-off frequency, the number of windings, the permeability of the ferrite and the minimum coil diameter limit the bandwidth. The tapering allows resonance free operation through the whole band. Many microwave devices like mixers, power amplifiers, SWR-bridges, antennas, etc. can be made more broadband with this kind of balun. A stepwise approach to the proposed structure is presented and compared to previous balun implementations. Finally, a measurement is provided and some implementation possibilities are discussed.",
"title": ""
}
] |
scidocsrr
|
e31bba4be9c13b0611101be7b86081df
|
Multi-Level Fusion for Person Re-identification with Incomplete Marks
|
[
{
"docid": "dbe5661d99798b24856c61b93ddb2392",
"text": "Traditionally, appearance models for recognition, reacquisition and tracking problems have been evaluated independently using metrics applied to a complete system. It is shown that appearance models for these three problems can be evaluated using a cumulative matching curve on a standardized dataset, and that this one curve can be converted to a synthetic reacquisition or disambiguation rate for tracking. A challenging new dataset for viewpoint invariant pedestrian recognition (VIPeR) is provided as an example. This dataset contains 632 pedestrian image pairs from arbitrary viewpoints. Several baseline methods are tested on this dataset and the results are presented as a benchmark for future appearance models and matchin methods.",
"title": ""
},
{
"docid": "ab2159730f00662ba29e25a0e27d1799",
"text": "This paper proposes a novel and efficient re-ranking technque to solve the person re-identification problem in the surveillance application. Previous methods treat person re-identification as a special object retrieval problem, and compute the retrieval result purely based on a unidirectional matching between the probe and all gallery images. However, the correct matching may be not included in the top-k ranking result due to appearance changes caused by variations in illumination, pose, viewpoint and occlusion. To obtain more accurate re-identification results, we propose to reversely query every gallery person image in a new gallery composed of the original probe person image and other gallery person images, and revise the initial query result according to bidirectional ranking lists. The behind philosophy of our method is that images of the same person should not only have similar visual content, refer to content similarity, but also possess similar k-nearest neighbors, refer to context similarity. Furthermore, the proposed bidirectional re-ranking method can be divided into offline and online parts, where the majority of computation load is accomplished by the offline part and the online computation complexity is only proportional to the size of the gallery data set, which is especially suited to the real-time required video investigation task. Extensive experiments conducted on a series of standard data sets have validated the effectiveness and efficiency of our proposed method.",
"title": ""
},
{
"docid": "6c69be0c2a16efbe00c557650a856b21",
"text": "Visually identifying a target individual reliably in a crowded environment observed by a distributed camera network is critical to a variety of tasks in managing business information, border control, and crime prevention. Automatic re-identification of a human candidate from public space CCTV video is challenging due to spatiotemporal visual feature variations and strong visual similarity between different people, compounded by low-resolution and poor quality video data. In this work, we propose a novel method for re-identification that learns a selection and weighting of mid-level semantic attributes to describe people. Specifically, the model learns an attribute-centric, parts-based feature representation. This differs from and complements existing low-level features for re-identification that rely purely on bottom-up statistics for feature selection, which are limited in discriminating and identifying reliably visual appearances of target people appearing in different camera views under certain degrees of occlusion due to crowdedness. Our experiments demonstrate the effectiveness of our approach compared to existing feature representations when applied to benchmarking datasets.",
"title": ""
}
] |
[
{
"docid": "2eb303f3382491ae1977a3e907f197c0",
"text": "Image-to-image translation tasks have been widely investigated with Generative Adversarial Networks (GANs) and dual learning. However, existing models lack the ability to control the translated results in the target domain and their results usually lack of diversity in the sense that a fixed image usually leads to (almost) deterministic translation result. In this paper, we study a new problem, conditional image-to-image translation, which is to translate an image from the source domain to the target domain conditioned on a given image in the target domain. It requires that the generated image should inherit some domain-specific features of the conditional image from the target domain. Therefore, changing the conditional image in the target domain will lead to diverse translation results for a fixed input image from the source domain, and therefore the conditional input image helps to control the translation results. We tackle this problem with unpaired data based on GANs and dual learning. We twist two conditional translation models (one translation from A domain to B domain, and the other one from B domain to A domain) together for inputs combination and reconstruction while preserving domain independent features. We carry out experiments on men's faces from-to women's faces translation and edges to shoes&bags translations. The results demonstrate the effectiveness of our proposed method.",
"title": ""
},
{
"docid": "6c1138ec8f490f824e34d15c13593007",
"text": "We present a DSP simulation environment that will enable students to perform laboratory exercises using Android mobile devices and tablets. Due to the pervasive nature of the mobile technology, education applications designed for mobile devices have the potential to stimulate student interest in addition to offering convenient access and interaction capabilities. This paper describes a portable signal processing laboratory for the Android platform. This software is intended to be an educational tool for students and instructors in DSP, and signals and systems courses. The development of Android JDSP (A-JDSP) is carried out using the Android SDK, which is a Java-based open source development platform. The proposed application contains basic DSP functions for convolution, sampling, FFT, filtering and frequency domain analysis, with a convenient graphical user interface. A description of the architecture, functions and planned assessments are presented in this paper. Introduction Mobile technologies have grown rapidly in recent years and play a significant role in modern day computing. The pervasiveness of mobile devices opens up new avenues for developing applications in education, entertainment and personal communications. Understanding the effectiveness of smartphones and tablets in classroom instruction have been a subject of considerable research in recent years. The advantages of handheld devices over personal computers in K-12 education have been investigated 1 . The study has found that the easy accessibility and maneuverability of handheld devices lead to an increase in student interest. By incorporating mobile technologies into mathematics and applied mathematics courses, it has been shown that smartphones can broaden the scope and effectiveness of technical education in classrooms 2 . Fig 1: Splash screen of the AJDSP Android application Designing interactive applications to complement traditional teaching methods in STEM education has also been of considerable interest. The role of interactive learning in knowledge dissemination and acquisition has been discussed and it has been found to assist in the development of cognitive skills 3 . It has been showed learning potential is enhanced when education tools that possess a higher degree of interactivity are employed 4 . Software applications that incorporate visual components in learning, in order to simplify the understanding of complex theoretical concepts, have been also been developed 5-9 . These applications are generally characterized by rich user interaction and ease of accessibility. Modern mobile phones and tablets possess abundant memory and powerful processors, in addition to providing highly interactive interfaces. These features enable the design of applications that require intensive calculations to be supported on mobile devices. In particular, Android operating system based smartphones and tablets have large user base and sophisticated hardware configurations. Though several applications catering to elementary school education have been developed for Android devices, not much effort has been undertaken towards building DSP simulation applications 10 . In this paper, we propose a mobile based application that will enable students to perform Digital Signal Processing laboratories on their smartphone devices (Figure 1). In order to enable students to perform DSP labs over the Internet, the authors developed J-DSP, a visual programming environment 11-12 . 
J-DSP was designed as a zero footprint, standalone Java applet that can run directly on a browser. Several interactive laboratories have been developed and assessed in undergraduate courses. In addition to containing basic signal processing functions such as sampling, convolution, digital filter design and spectral analysis, J-DSP is also supported by several toolboxes. An iOS version of the software has also been developed and presented 13-15 . Here, we describe an Android based graphical application, A-JDSP, for signal processing simulation. The proposed tool has the potential to enhance DSP education by supporting both educators and students alike to teach and learn digital signal processing. The rest of the paper is organized as follows. We review related work in Section 2 and present the architecture of the proposed application in Section 3. In Section 4 we describe some of the functionalities of the software. We describe planned assessment strategies for the proposed application in Section 5. The concluding remarks and possible directions of extending this work are discussed in Section 6. Related Work Commercial packages such as MATLAB 16 and LabVIEW 17 are commonly used in signal processing research and application development. J-DSP, a web-based graphical DSP simulation package, was proposed as a non-commercial alternative for performing laboratories in undergraduate courses 3 . Though J-DSP is a light-weight application, running J-DSP over the web on mobile devices can be data-intensive. Hence, executing simulations directly on the mobile device is a suitable alternative. A mobile application that supports functions pertinent to different areas in electrical engineering, such as circuit theory, control systems and DSP has been reported 18 . However, it does not contain a comprehensive set of functions to simulate several DSP systems. In addition to this, a mobile interface for the MATLAB package has been released 19 . However, this requires an active version of MATLAB on a remote machine and a high speed internet connection to access the remote machine from the mobile device. In order to circumvent these problems, i-JDSP, an iOS version of the J-DSP software was proposed 13-15 . It implements DSP functions and algorithms optimized for mobile devices, thereby removing the need for internet connectivity. Our work builds upon J-DSP 11-12 and the iOS version of J-DSP 13-15 , and proposes to build an application for the Android operating system. Presently, to the best of our knowledge, there are no freely available Android applications that focus on signal processing education. Architecture The proposed application is implemented using Android-SDK 22 , which is a Java based development framework. The user interfaces are implemented using XML as it is well suited for Android development. The architecture of the proposed system is illustrated in Figure 2. It has five main components: (i) User Interfaces, (ii) Part Object, (iii) Part Calculator, (iv) Part View, and (v) Parts Controller. The role of each of them is described below in detail. The blocks in A-JDSP can be accessed through a function palette (user interface) and each block is associated with a view using which the function properties can be modified. The user interfaces obtain the user input data and pass them to the Part Object. Furthermore, every block has a separate Calculator function to perform the mathematical and signal processing algorithms. 
The Part Calculator uses the data from the input pins of the block, implements the relevant algorithms and updates the output pins. (Figure 2: Architecture of AJDSP.) All the configuration information, such as the pin specifications, the part name and location of the block, is contained in the Part Object class. In addition, the Part Object can access the data from each of the input pins of the block. When the user adds a particular block in the simulation, an instance of the Part Object class is created and is stored by a list object in the Parts Controller. The Parts Controller is an interface between the Part Object and the Part View. One of the main functions of the Parts Controller is supervising block creation. The process of block creation by the Parts Controller can be described as follows: The block is configured by the user through the user interface and the block data is passed to an instance of the Part Object class. The Part Object then sends the block configuration information through the Parts Controller to the Part View, which finally renders the block. The Part View is the main graphical interface of the application. This displays the blocks and connections on the screen. It contains functionalities for selecting, moving and deleting blocks. Examples of block diagrams in the A-JDSP application for different simulations are illustrated in Figure 3(a), Figure 4(a) and Figure 5(a) respectively. Functionalities In this section, we describe some of the DSP functionalities that have been developed as part of A-JDSP. Android based Signal Generator block This generates the various input signals necessary for A-JDSP simulations. In addition to deterministic signals such as square, triangular and sinusoids, random signals from Gaussian, Rayleigh and Uniform distributions can be generated. The signal-related parameters such as signal frequency, time shift, mean and variance can be set through the user interface.",
"title": ""
},
{
"docid": "1c06e82a20b72c8c1ec7d493d7dbee78",
"text": "Automotive industry is facing a multitude of challenges towards sustainability that can be partly also addressed by product design: o Climate change and oil dependency. The growing weight of evidence holds that manmade greenhouse gas emissions are starting to influence the world’s climate in ways that affect all parts of the globe (IPCC 2007) – along with growing concerns over the use and availability of fossil carbon. There is a need for timely action including those in vehicle design. o Air Quality and other emissions as noise. Summer smog situa tions frequently lead to traffic restrictions for vehicles not compliant to most recent emission standards. Other emissions as noise affect up to 80 million citizens – much of it caused by the transport sector (roads, railway, aircraft, etc.) (ERF 2007). o Mobility Capability. Fulfilling the societal mobility demand is a key factor enabling (sustainable) development. This is challenged where the infrastructure is not aligned to the mobility demand and where the mobility capability of the individual transport mode (cars, trains, etc.) are not fulfilling these needs – leading to unnecessary travel time and emissions (traffic jams, non-direct connections, lack of parking opportunities, etc.). In such areas, insufficient infrastructure is the reason for 38% of CO2 vehicle emissions (SINTEF 2007). Industry has also to consider changing mobility needs in aging societies. o Safety. Road accidents (including all related transport modes as well as pedestrians) result to 1.2 million fatalities globally according to the World Bank. o Affordability. As mobility is an important precondition for any development it is important that all the mobility solutions are affordable for the targeted regions and markets. All these challenges are both, risks and business opportunities.",
"title": ""
},
{
"docid": "a05d87b064ab71549d373599700cfcbf",
"text": "We provide sets of parameters for multiplicative linear congruential generators (MLCGs) of different sizes and good performance with respect to the spectral test. For ` = 8, 9, . . . , 64, 127, 128, we take as a modulus m the largest prime smaller than 2`, and provide a list of multipliers a such that the MLCG with modulus m and multiplier a has a good lattice structure in dimensions 2 to 32. We provide similar lists for power-of-two moduli m = 2`, for multiplicative and non-multiplicative LCGs.",
"title": ""
},
{
"docid": "da3e4903974879868b87b94d7cc0bf21",
"text": "INTRODUCTION\nThe existence of maternal health service does not guarantee its use by women; neither does the use of maternal health service guarantee optimal outcomes for women. The World Health Organization recommends monitoring and evaluation of maternal satisfaction to improve the quality and efficiency of health care during childbirth. Thus, this study aimed at assessing maternal satisfaction on delivery service and factors associated with it.\n\n\nMETHODS\nCommunity based cross-sectional study was conducted in Debre Markos town from March to April 2014. Systematic random sampling technique were used to select 398 mothers who gave birth within one year. The satisfaction of mothers was measured using 19 questions which were adopted from Donabedian quality assessment framework. Binary logistic regression was fitted to identify independent predictors.\n\n\nRESULT\nAmong mothers, the overall satisfaction on delivery service was found to be 318 (81.7%). Having plan to deliver at health institution (AOR = 3.30, 95% CI: 1.38-7.9) and laboring time of less than six hours (AOR = 4.03, 95% CI: 1.66-9.79) were positively associated with maternal satisfaction on delivery service. Those mothers who gave birth using spontaneous vaginal delivery (AOR = 0.11, 95% CI: 0.023-0.51) were inversely related to maternal satisfaction on delivery service.\n\n\nCONCLUSION\nThis study revealed that the overall satisfaction of mothers on delivery service was found to be suboptimal. Reasons for delivery visit, duration of labor, and mode of delivery are independent predictors of maternal satisfaction. Thus, there is a need of an intervention on the independent predictors.",
"title": ""
},
{
"docid": "a607d049ef590f13b31566a14e158dc9",
"text": "In this video, we present our latest results towards fully autonomous flights with a small helicopter. Using a monocular camera as the only exteroceptive sensor, we fuse inertial measurements to achieve a self-calibrating power-on-and-go system, able to perform autonomous flights in previously unknown, large, outdoor spaces. Our framework achieves Simultaneous Localization And Mapping (SLAM) with previously unseen robustness in onboard aerial navigation for small platforms with natural restrictions on weight and computational power. We demonstrate successful operation in flights with altitude between 0.2-70 m, trajectories with 350 m length, as well as dynamic maneuvers with track speed of 2 m/s. All flights shown are performed autonomously using vision in the loop, with only high-level waypoints given as directions.",
"title": ""
},
{
"docid": "867ddbd84e8544a5c2d6f747756ca3d9",
"text": "We report a 166 W burst mode pulse fiber amplifier seeded by a Q-switched mode-locked all-fiber laser at 1064 nm based on a fiber-coupled semiconductor saturable absorber mirror. With a pump power of 230 W at 976 nm, the output corresponds to a power conversion efficiency of 74%. The repetition rate of the burst pulse is 20 kHz, the burst energy is 8.3 mJ, and the burst duration is ∼ 20 μs, which including about 800 mode-locked pulses at a repetition rate of 40 MHz and the width of the individual mode-locked pulse is measured to be 112 ps at the maximum output power. To avoid optical damage to the fiber, the initial mode-locked pulses were stretched to 72 ps by a bandwidth-limited fiber bragg grating. After a two-stage preamplifier, the pulse width was further stretched to 112 ps, which is a result of self-phase modulation of the pulse burst during the amplification.",
"title": ""
},
{
"docid": "370a2009695f1a18b2e6dbe6bc463bb0",
"text": "While automated vehicle technology progresses, potentially leading to a safer and more efficient traffic environment, many challenges remain within the area of human factors, such as user trust for automated driving (AD) vehicle systems. The aim of this paper is to investigate how an appropriate level of user trust for AD vehicle systems can be created via human–machine interaction (HMI). A guiding framework for implementing trust-related factors into the HMI interface is presented. This trust-based framework incorporates usage phases, AD events, trust-affecting factors, and levels explaining each event from a trust perspective. Based on the research findings, the authors recommend that HMI designers and automated vehicle manufacturers take a more holistic perspective on trust rather than focusing on single, “isolated” events, for example understanding that trust formation is a dynamic process that starts long before a user's first contact with the system, and continues long thereafter. Furthermore, factors-affecting trust change, both during user interactions with the system and over time; thus, HMI concepts need to be able to adapt. Future work should be dedicated to understanding how trust-related factors interact, as well as validating and testing the trust-based framework.",
"title": ""
},
{
"docid": "a2f3b158f1ec7e6ecb68f5ddfeaf0502",
"text": "Facial landmark detection of face alignment has long been impeded by the problems of occlusion and pose variation. Instead of treating the detection task as a single and independent problem, we investigate the possibility of improving detection robustness through multitask learning. Specifically, we wish to optimize facial landmark detection together with heterogeneous but subtly correlated tasks, e.g. head pose estimation and facial attribute inference. This is non-trivial since different tasks have different learning difficulties and convergence rates. To address this problem, we formulate a novel tasks-constrained deep model, with task-wise early stopping to facilitate learning convergence. Extensive evaluations show that the proposed task-constrained learning (i) outperforms existing methods, especially in dealing with faces with severe occlusion and pose variation, and (ii) reduces model complexity drastically compared to the state-of-the-art method based on cascaded deep model [29]. In this technical report, we extend the method presented in our ECCV 2014 [39] paper to handle more landmark points (68 points instead of 5 major facial points) without either redesigning the deep model or involving significant increase in run time cost. This is made possible by transferring the learned 5-point model to the desired facial landmark configuration, through model fine-tuning with dense landmark annotations. Our new model achieves the state-of-the-art result on the 300-W benchmark dataset (mean error of 9.15% on the challenging IBUG subset).",
"title": ""
},
{
"docid": "78cbb5522d1eb479f194dccec53307c4",
"text": "Introduction: The roots of the cannabis plant have a long history of medical use stretching back millennia. However, the therapeutic potential of cannabis roots has been largely ignored in modern times. Discussion: In the first century, Pliny the Elder described in Natural Histories that a decoction of the root in water could be used to relieve stiffness in the joints, gout, and related conditions. By the 17th century, various herbalists were recommending cannabis root to treat inflammation, joint pain, gout, and other conditions. There has been a subsequent paucity of research in this area, with only a few studies examining the composition of cannabis root and its medical potential. Active compounds identified and measured in cannabis roots include triterpenoids, friedelin (12.8 mg/kg) and epifriedelanol (21.3 mg/kg); alkaloids, cannabisativine (2.5 mg/kg) and anhydrocannabisativine (0.3 mg/kg); carvone and dihydrocarvone; N-(p-hydroxy-β-phenylethyl)-p-hydroxy-trans-cinnamamide (1.6 mg/kg); various sterols such as sitosterol (1.5%), campesterol (0.78%), and stigmasterol (0.56%); and other minor compounds, including choline. Of note, cannabis roots are not a significant source of Δ9-tetrahydrocannabinol (THC), cannabidiol, or other known phytocannabinoids. Conclusion: The current available data on the pharmacology of cannabis root components provide significant support to the historical and ethnobotanical claims of clinical efficacy. Certainly, this suggests the need for reexamination of whole root preparations on inflammatory and malignant conditions employing modern scientific techniques.",
"title": ""
},
{
"docid": "36c556c699db79c3a84a897b7b382c73",
"text": "This paper presents a new fingerprint minutiae extraction approach that is based on the analysis of the ridge flux distribution. The considerable processing time taken by the conventional approaches, most of which use the ridge thinning process with a rather large calculation time, is a problem that has recently attracted increased attention. We observe that the features of a ridge curve are very similar to those of a vector flux such as a line of electric force or a line of magnetic force. In the proposed approach, vector flux analysis is applied to detect minutiae without using the ridge thinning process in order to reduce the computation time. The experimental results show that the proposed approach can achieve a reduction in calculation time, while achieving the same success detection rate as that of the conventional approaches.",
"title": ""
},
{
"docid": "20f4bcde35458104271e9127d8b7f608",
"text": "OBJECTIVES\nTo evaluate the effect of bulk-filling high C-factor posterior cavities on adhesion to cavity-bottom dentin.\n\n\nMETHODS\nA universal flowable composite (G-ænial Universal Flo, GC), a bulk-fill flowable base composite (SDR Posterior Bulk Fill Flowable Base, Dentsply) and a conventional paste-like composite (Z100, 3M ESPE) were bonded (G-ænial Bond, GC) into standardized cavities with different cavity configurations (C-factors), namely C=3.86 (Class-I cavity of 2.5mm deep, bulk-filled), C=5.57 (Class-I cavity of 4mm deep, bulk-filled), C=1.95 (Class-I cavity of 2.5mm deep, filled in three equal layers) and C=0.26 (flat surface). After one-week water storage, the restorations were sectioned in 4 rectangular micro-specimens and subjected to a micro-tensile bond strength (μTBS) test.\n\n\nRESULTS\nHighly significant differences were found between pairs of means of the experimental groups (Kruskal-Wallis, p<0.0001). Using the bulk-fill flowable base composite SDR (Dentsply), no significant differences in μTBS were measured among all cavity configurations (p>0.05). Using the universal flowable composite G-ænial Universal Flo (GC) and the conventional paste-like composite Z100 (3M ESPE), the μTBS to cavity-bottom dentin was not significantly different from that of SDR (Dentsply) when the cavities were layer-filled or the flat surface was build up in layers; it was however significantly lower when the Class-I cavities were filled in bulk, irrespective of cavity depth.\n\n\nSIGNIFICANCE\nThe filling technique and composite type may have a great impact on the adhesion of the composite, in particular in high C-factor cavities. While the bulk-fill flowable base composite provided satisfactory bond strengths regardless of filling technique and cavity depth, adhesion failed when conventional composites were used in bulk.",
"title": ""
},
{
"docid": "f8763404f21e3bea6744a3fb51838569",
"text": "Search engine advertising in the present day is a pronounced component of the Web. Choosing the appropriate and relevant ad for a particular query and positioning of the ad critically impacts the probability of being noticed and clicked. It also strategically impacts the revenue, the search engine shall generate from a particular Ad. Needless to say, showing the user an Ad that is relevant to his/her need greatly improves users satisfaction. For all the aforesaid reasons, its of utmost importance to correctly determine the click-through rate (CTR) of ads in a system. For frequently appearing ads, CTR is empirically measurable, but for the new ads, other means have to be devised. In this paper we propose and establish a model to predict the CTRs of advertisements adopting Logistic Regression as the effective framework for representing and constructing conditions and vulnerabilities among variables. Logistic Regression is a type of probabilistic statistical classification model that predicts a binary response from a binary predictor, based on one or more predictor variables. Advertisements that have the most elevated to be clicked are chosen using supervised machine learning calculation. We tested Logistic Regression algorithm on a one week advertisement data of size around 25 GB by considering position and impression as predictor variables. Using this prescribed model we were able to achieve around 90% accuracy for CTR estimation.",
"title": ""
},
{
"docid": "30d0ff3258decd5766d121bf97ae06d4",
"text": "In this paper, we present a new image forgery detection method based on deep learning technique, which utilizes a convolutional neural network (CNN) to automatically learn hierarchical representations from the input RGB color images. The proposed CNN is specifically designed for image splicing and copy-move detection applications. Rather than a random strategy, the weights at the first layer of our network are initialized with the basic high-pass filter set used in calculation of residual maps in spatial rich model (SRM), which serves as a regularizer to efficiently suppress the effect of image contents and capture the subtle artifacts introduced by the tampering operations. The pre-trained CNN is used as patch descriptor to extract dense features from the test images, and a feature fusion technique is then explored to obtain the final discriminative features for SVM classification. The experimental results on several public datasets show that the proposed CNN based model outperforms some state-of-the-art methods.",
"title": ""
},
{
"docid": "16cd40642b6179cbf08ed09577c12bc9",
"text": "Considerable scientific and technological efforts have been devoted to develop neuroprostheses and hybrid bionic systems that link the human nervous system with electronic or robotic prostheses, with the main aim of restoring motor and sensory functions in disabled patients. A number of neuroprostheses use interfaces with peripheral nerves or muscles for neuromuscular stimulation and signal recording. Herein, we provide a critical overview of the peripheral interfaces available and trace their use from research to clinical application in controlling artificial and robotic prostheses. The first section reviews the different types of non-invasive and invasive electrodes, which include surface and muscular electrodes that can record EMG signals from and stimulate the underlying or implanted muscles. Extraneural electrodes, such as cuff and epineurial electrodes, provide simultaneous interface with many axons in the nerve, whereas intrafascicular, penetrating, and regenerative electrodes may contact small groups of axons within a nerve fascicle. Biological, technological, and material science issues are also reviewed relative to the problems of electrode design and tissue injury. The last section reviews different strategies for the use of information recorded from peripheral interfaces and the current state of control neuroprostheses and hybrid bionic systems.",
"title": ""
},
{
"docid": "d669dfcdc2486314bd7234e1f42357de",
"text": "The Luneburg lens (LL) represents a very attractive candidate for many applications such as multibeam antennas, multifrequency scanning, and spatial scanning, due to its focusing properties. Indeed, it is a dielectric sphere on which each surface point is a frequency-independent perfect focusing point. This is produced by its index governing law n, which follows the radial distribution n/sup 2/=2-r/sup 2/, where r is the normalized radial position. Practically, an LL is manufactured as a finite number of concentric homogeneous dielectric shells - this is called a discrete LL. The inaccuracies in the curved shell manufacturing process produce intershell air gaps, which degrade the performance of the lens. Furthermore, this requires different materials whose relative dielectric constant covers the range 1-2. The paper proposes a new LL manufacturing process to avoid these drawbacks. The paper describe the theoretical background and the performance of the obtained lens.",
"title": ""
},
{
"docid": "d42a30b26ef26e7bf9b4e5766d620395",
"text": "Development of Web 2.0 enabled users to share information online, which results into an exponential growth of world wide web data. This leads to the so-called information overload problem. Recommender Systems (RS) are intelligent systems, helping on-line users to overcome information overload by providing customized recommendations on various items. In real world, people are willing to take advice and recommendation from their trustworthy friends only. Trust plays a key role in the decision-making process of a person. Incorporation of trust information in RS, results in a new class of recommender systems called trust aware recommender systems (TARS). This paper presents a survey on various implicit trust generation techniques in context of TARS. We have analyzed eight different implicit trust metrics, with respect to various properties of trust proposed by researchers in regard to TARS. Keywords—implicit trust; trust aware recommender system; trust metrics.",
"title": ""
},
{
"docid": "8bb65350ae35b66f54859444ea063bb2",
"text": "Over the course of the next 10 years, the Internet of Things (IoT) is set to have a transformational effect on the everyday technologies which surround us. Access to the data produced by these devices opens an interesting space to practice discovery based learning. This paper outlines a participatory design approach taken to develop an IoTbased ecosystem which was deployed in 8 schools across England. In particular, we describe how we designed and developed the system and reflect on some of the early experiences of students and teachers. We found that schools were willing to adopt the IoT technology within certain bounds and we outline best practices uncovered when introducing technologies to schools.",
"title": ""
},
{
"docid": "42520b1cfaec4a5f890f7f0845d5459b",
"text": "Class imbalance problem is quite pervasive in our nowadays human practice. This problem basically refers to the skewness in the data underlying distribution which, in turn, imposes many difficulties on typical machine learning algorithms. To deal with the emerging issues arising from multi-class skewed distributions, existing efforts are mainly divided into two categories: model-oriented solutions and data-oriented techniques. Focusing on the latter, this paper presents a new over-sampling technique which is inspired by Mahalanobis distance. The presented over-sampling technique, called MDO (Mahalanobis Distance-based Over-sampling technique), generates synthetic samples which have the same Mahalanobis distance from the considered class mean as other minority class examples. By preserving the covariance structure of the minority class instances and intelligently generating synthetic samples along the probability contours, new minority class instances are modelled better for learning algorithms. Moreover, MDO can reduce the risk of overlapping between different class regions which are considered as a serious challenge in multi-class problems. Our theoretical analyses and empirical observations across wide spectrum multi-class imbalanced benchmarks indicate that MDO is the method of choice by offering statistical superior MAUC and precision compared to the popular over-sampling techniques.",
"title": ""
}
] |
scidocsrr
|
1d930c1fe1b190f3f5201eb325aca953
|
Epidemiological modeling of online social network dynamics
|
[
{
"docid": "1176abf11f866dda3a76ce080df07c05",
"text": "Google Flu Trends can detect regional outbreaks of influenza 7-10 days before conventional Centers for Disease Control and Prevention surveillance systems. We describe the Google Trends tool, explain how the data are processed, present examples, and discuss its strengths and limitations. Google Trends shows great promise as a timely, robust, and sensitive surveillance system. It is best used for surveillance of epidemics and diseases with high prevalences and is currently better suited to track disease activity in developed countries, because to be most effective, it requires large populations of Web search users. Spikes in search volume are currently hard to interpret but have the benefit of increasing vigilance. Google should work with public health care practitioners to develop specialized tools, using Google Flu Trends as a blueprint, to track infectious diseases. Suitable Web search query proxies for diseases need to be established for specialized tools or syndromic surveillance. This unique and innovative technology takes us one step closer to true real-time outbreak surveillance.",
"title": ""
}
] |
[
{
"docid": "62783a0f5a4543fc62e39cbac63094a4",
"text": "The habenula is a tiny brain region the size of a pea in humans. This region is highly conserved across vertebrates and has been traditionally overlooked by neuroscientists. The name habenula is derived from the Latin word habena, meaning \"little rein\", because of its elongated shape. Originally its function was thought to be related to the regulation of the nearby pineal gland (which Rene Descartes described as the \"principal seat of the soul\"). More recent evidence, however, demonstrates that the habenula acts as a critical neuroanatomical hub that connects and regulates brain regions important for divergent motivational states and cognition. In this Primer, we will discuss the recent and converging evidence that points to the habenula as a key brain region for motivation and decision-making.",
"title": ""
},
{
"docid": "798ee46a8ac10787eaa154861d0311c6",
"text": "In the last few years, we have seen the transformative impact of deep learning in many applications, particularly in speech recognition and computer vision. Inspired by Google's Inception-ResNet deep convolutional neural network (CNN) for image classification, we have developed\"Chemception\", a deep CNN for the prediction of chemical properties, using just the images of 2D drawings of molecules. We develop Chemception without providing any additional explicit chemistry knowledge, such as basic concepts like periodicity, or advanced features like molecular descriptors and fingerprints. We then show how Chemception can serve as a general-purpose neural network architecture for predicting toxicity, activity, and solvation properties when trained on a modest database of 600 to 40,000 compounds. When compared to multi-layer perceptron (MLP) deep neural networks trained with ECFP fingerprints, Chemception slightly outperforms in activity and solvation prediction and slightly underperforms in toxicity prediction. Having matched the performance of expert-developed QSAR/QSPR deep learning models, our work demonstrates the plausibility of using deep neural networks to assist in computational chemistry research, where the feature engineering process is performed primarily by a deep learning algorithm.",
"title": ""
},
{
"docid": "f78e430994e9eeccd034df76d2b5316a",
"text": "An externally leveraged circular resonant piezoelectric actuator with haptic natural frequency and fast response time was developed within the volume of 10 mm diameter and 3.4 mm thickness for application in mobile phones. An efficient displacement-amplifying mechanism was developed using a piezoelectric bimorph, a lever system, and a mass-spring system. The proposed displacement-amplifying mechanism utilizes both internally and externally leveraged structures. The former generates bending by means of bending deformation of the piezoelectric bimorph, and the latter transforms the bending to radial displacement of the lever system, which is transformed to a large axial displacement of the spring. The piezoelectric bimorph, lever system, and spring were designed to maximize static displacement and the mass-spring system was designed to have a haptic natural frequency. The static displacement, natural frequency, maximum output displacement, and response time of the resonant piezoelectric actuator were calculated by means of finite-element analyses. The proposed resonant piezoelectric actuator was prototyped and the simulated results were verified experimentally. The prototyped piezoelectric actuator generated the maximum output displacement of 290 μm at the haptic natural frequency of 242 Hz. Owing to the proposed efficient displacement-amplifying mechanism, the proposed resonant piezoelectric actuator had the fast response time of 14 ms, approximately one-fifth of a conventional resonant piezoelectric actuator of the same size.",
"title": ""
},
{
"docid": "1367527934bacc04443965406aea1a11",
"text": "The physis, or growth plate, is a complex disc-shaped cartilage structure that is responsible for longitudinal bone growth. In this study, a multi-scale computational approach was undertaken to better understand how physiological loads are experienced by chondrocytes embedded inside chondrons when subjected to moderate strain under instantaneous compressive loading of the growth plate. Models of representative samples of compressed bone/growth-plate/bone from a 0.67 mm thick 4-month old bovine proximal tibial physis were subjected to a prescribed displacement equal to 20% of the growth plate thickness. At the macroscale level, the applied compressive deformation resulted in an overall compressive strain across the proliferative-hypertrophic zone of 17%. The microscale model predicted that chondrocytes sustained compressive height strains of 12% and 6% in the proliferative and hypertrophic zones, respectively, in the interior regions of the plate. This pattern was reversed within the outer 300 μm region at the free surface where cells were compressed by 10% in the proliferative and 26% in the hypertrophic zones, in agreement with experimental observations. This work provides a new approach to study growth plate behavior under compression and illustrates the need for combining computational and experimental methods to better understand the chondrocyte mechanics in the growth plate cartilage. While the current model is relevant to fast dynamic events, such as heel strike in walking, we believe this approach provides new insight into the mechanical factors that regulate bone growth at the cell level and provides a basis for developing models to help interpret experimental results at varying time scales.",
"title": ""
},
{
"docid": "1b844eb4aeaac878ebffaaf5b4d6e3ab",
"text": "Recently, deep residual networks have been successfully applied in many computer vision and natural language processing tasks, pushing the state-of-the-art performance with deeper and wider architectures. In this work, we interpret deep residual networks as ordinary differential equations (ODEs), which have long been studied in mathematics and physics with rich theoretical and empirical success. From this interpretation, we develop a theoretical framework on stability and reversibility of deep neural networks, and derive three reversible neural network architectures that can go arbitrarily deep in theory. The reversibility property allows a memoryefficient implementation, which does not need to store the activations for most hidden layers. Together with the stability of our architectures, this enables training deeper networks using only modest computational resources. We provide both theoretical analyses and empirical results. Experimental results demonstrate the efficacy of our architectures against several strong baselines on CIFAR-10, CIFAR-100 and STL-10 with superior or on-par state-of-the-art performance. Furthermore, we show our architectures yield superior results when trained using fewer training data.",
"title": ""
},
{
"docid": "a28567e108f00e3b251882404f2574b2",
"text": "Sirs: A 46-year-old woman was referred to our hospital because of suspected cerebral ischemia. Two days earlier the patient had recognized a left-sided weakness and clumsiness. On neurological examination we found a mild left-sided hemiparesis and hemiataxia. There was a generalized shrinking violaceous netlike pattering of the skin especially on both legs and arms but also on the trunk and buttocks (Fig. 1). The patient reported the skin changing to be more prominent on cold exposure. The patient’s family remembered this skin finding to be evident since the age of five years. A diagnosis of livedo racemosa had been made 5 years ago. The neuropsychological assessment of this highly educated civil servant revealed a slight cognitive decline. MRI showed a right-sided cerebral ischemia in the middle cerebral artery (MCA) territory. Her medical history was significant for migraine-like headache for many years, a miscarriage 18 years before and a deep vein thrombosis of the left leg six years ago. She had no history of smoking or other cerebrovascular risk factors including no estrogen-containing oral contraceptives. The patient underwent intensive examinations including duplex sonography of extraand intracranial arteries, transesophageal echocardiography, 24-h ECG, 24-h blood pressure monitoring, multimodal evoked potentials, electroencephalography, lumbar puncture and sonography of abdomen. All these tests were negative. Extensive laboratory examinations revealed a heterozygote prothrombin 20210 mutation, which is associated with a slightly increased risk for thrombosis. Antiphospholipid antibodies (aplAB) and other laboratory examinations to exclude vasculitis, toxic metabolic disturbances and other causes for livedo racemosa were negative. Skin biopsy showed vasculopathy with intimal proliferation and an occluding thrombus. The patient was diagnosed as having antiphospholipid-antibodynegative Sneddon’s syndrome (SS) based on cerebral ischemia combined with wide-spread livedo racemosa associated with a history of miscarriage, deep vein thrombosis, migraine like headaches and mild cognitive decline. We started long-term prophylactic pharmacological therapy with captopril as a myocyte proliferation agent and with aspirin as an antiplatelet therapy. Furthermore we recommended thrombosis prophylaxis in case of immobilization. One month later the patient experienced vein thrombosis of her right forearm and suffered from dyspnea. Antiphospholipid antibody testing again was negative. EBT and CT of thorax showed an aneurysmatic dilatation of aorta ascendens up to 4.5 cm. After careful consideration of the possible disadvantages we nevertheless decided to start long-term anticoagulation instead of antiplatelet therapy because of the second thrombotic event. The elucidating and interesting issue of this case is the association of miscarriage and two vein thromboses in aplAB-negative SS. Little is known about this phenomenon and there are only a few reports about these symptoms in aplABLETTER TO THE EDITORS",
"title": ""
},
{
"docid": "091eedcd69373f99419a745f2215e345",
"text": "Society is increasingly reliant upon complex and interconnected cyber systems to conduct daily life activities. From personal finance to managing defense capabilities to controlling a vast web of aircraft traffic, digitized information systems and software packages have become integrated at virtually all levels of individual and collective activity. While such integration has been met with immense increases in efficiency of service delivery, it has also been subject to a diverse body of threats from nefarious hackers, groups, and even state government bodies. Such cyber threats have shifted over time to affect various cyber functionalities, such as with Direct Denial of Service (DDoS), data theft, changes to data code, infection via computer virus, and many others.",
"title": ""
},
{
"docid": "641a98a0f0b1ac4d382379271dedfbef",
"text": "The image captured in water is hazy due to the several effects of the underwater medium. These effects are governed by the suspended particles that lead to absorption and scattering of light during image formation process. The underwater medium is not friendly for imaging data and brings low contrast and fade color issues. Therefore, during any image based exploration and inspection activity, it is essential to enhance the imaging data before going for further processing. This paper presents a wavelet-based fusion method to enhance the hazy underwater images by addressing the low contrast and color alteration issues. The publicly available hazy underwater images are enhanced and analyzed qualitatively with some state of the art methods. The quantitative study of image quality depicts promising results.",
"title": ""
},
{
"docid": "36e33b38f188e27db1d64d6291d577e7",
"text": "Steganography, an information-hiding technique, is that embedding secret information into a cover-media to generate a meaningful stego-media. This paper proposes a novel image steganography scheme using 3D-Sudoku. In this paper, a cyclically moving algorithm is used to construct a 3D-Sudoku. The data-embedding phase is that the pixels of the cover-image as the coordinate of 3D-Sudoku are modified in the rule of a minimal distortion to indicate the position of a given secret data. And in the dataextraction phase, the modified pixels of the stego-image as the coordinate of 3D-Sudoku are applied to directly extract the embedded secret data. The experimental results show that the visual quality of stego-image in the proposed scheme is slightly less than that in the steganography scheme using 2D-Sudoku, but the embedding capacity in the proposed scheme is higher than that in the steganography scheme using 2D-Sudoku.",
"title": ""
},
{
"docid": "ded061de80b868ab5c877594a01c23c8",
"text": "In (McCarthy and Hayes 1969), we proposed dividing the artificial intelligence problem into two parts—an epistemological part and a heuristic part. This lecture further explains this division, explains some of the epistemological problems, and presents some new results and approaches. The epistemological part of AI studies what kinds of facts about the world are available to an observer with given opportunities to observe, how these facts can be represented in the memory of a computer, and what rules permit legitimate conclusions to be drawn from these facts. It leaves aside the heuristic problems of how to search spaces of possibilities and how to match patterns. Considering epistemological problems separately has the following advantages:",
"title": ""
},
{
"docid": "a831eb6211cac4876afc91b5a4219cce",
"text": "We present a system for real-time, high-resolution, sparse voxelization of an image-based surface model. Our approach consists of a coarse-to-fine voxel representation and a collection of parallel processing steps. Voxels are stored as a list of unsigned integer triples. An oracle kernel decides, for each voxel in parallel, whether to keep or cull its voxel from the list based on an image consistency criterion of its projection across cameras. After a prefix sum scan, kept voxels are subdivided and the process repeats until projected voxels are pixel size. These voxels are drawn to a render target and shaded as a weighted combination of their projections into a set of calibrated RGB images. We apply this technique to the problem of smooth visual hull reconstruction of human subjects based on a set of live image streams. We demonstrate that human upper body shapes can be reconstructed to giga voxel resolution at greater than 30 fps on modern graphics hardware.",
"title": ""
},
{
"docid": "9c579c8556ae4ecaf48c17d9e75685d5",
"text": "This paper considers the motion control problem of unicycle type mobile ro bots. We present the mathematical model of the mobile robots taken explicitly into account their dynamics and fo rmulate the respectively motion control strategies of tracking and path-following. Two types of controller s presented in the literature are revised and their performance are illustrated through computer simulation s. The problem of regulation to a point is also addressed.",
"title": ""
},
{
"docid": "d07b385e9732a273824897671b119196",
"text": "Motivation: Progress in machine learning techniques has led to the development of various techniques well suited to online estimation and rapid aggregation of information. Theoretical models of marketmaking have led to price-setting equations for which solutions cannot be achieved in practice, whereas empirical work on algorithms for market-making has so far focused on sets of heuristics and rules that lack theoretical justification. We are developing algorithms that are theoretically justified by results in finance, and at the same time flexible enough to be easily extended by incorporating modules for dealing with considerations like portfolio risk and competition from other market-makers.",
"title": ""
},
{
"docid": "b70262eaf97fbdf4ebadd996c8bdc761",
"text": "Online virtual navigation systems enable users to hop from one 360° panorama to another, which belong to a sparse point-to-point collection, resulting in a less pleasant viewing experience. In this paper, we present a novel method, namely Cube2Video, to support navigating between cubic panoramas in a video-viewing mode. Our method circumvents the intrinsic challenge of cubic panoramas, i.e., the discontinuities between cube faces, in an efficient way. The proposed method extends the matching-triangulation-interpolation procedure with special considerations of the spherical domain. A triangle-to-triangle homography-based warping is developed to achieve physically plausible and visually pleasant interpolation results. The temporal smoothness of the synthesized video sequence is improved by means of a compensation transformation. As experimental results demonstrate, our method can synthesize pleasant video sequences in real time, thus mimicking walking or driving navigation.",
"title": ""
},
{
"docid": "93a3895a03edcb50af74db901cb16b90",
"text": "OBJECT\nBecause lumbar magnetic resonance (MR) imaging fails to identify a treatable cause of chronic sciatica in nearly 1 million patients annually, the authors conducted MR neurography and interventional MR imaging in 239 consecutive patients with sciatica in whom standard diagnosis and treatment failed to effect improvement.\n\n\nMETHODS\nAfter performing MR neurography and interventional MR imaging, the final rediagnoses included the following: piriformis syndrome (67.8%), distal foraminal nerve root entrapment (6%), ischial tunnel syndrome (4.7%), discogenic pain with referred leg pain (3.4%), pudendal nerve entrapment with referred pain (3%), distal sciatic entrapment (2.1%), sciatic tumor (1.7%), lumbosacral plexus entrapment (1.3%), unappreciated lateral disc herniation (1.3%), nerve root injury due to spinal surgery (1.3%), inadequate spinal nerve root decompression (0.8%), lumbar stenosis (0.8%), sacroiliac joint inflammation (0.8%), lumbosacral plexus tumor (0.4%), sacral fracture (0.4%), and no diagnosis (4.2%). Open MR-guided Marcaine injection into the piriformis muscle produced the following results: no response (15.7%), relief of greater than 8 months (14.9%), relief lasting 2 to 4 months with continuing relief after second injection (7.5%), relief for 2 to 4 months with subsequent recurrence (36.6%), and relief for 1 to 14 days with full recurrence (25.4%). Piriformis surgery (62 operations; 3-cm incision, transgluteal approach, 55% outpatient; 40% with local or epidural anesthesia) resulted in excellent outcome in 58.5%, good outcome in 22.6%, limited benefit in 13.2%, no benefit in 3.8%, and worsened symptoms in 1.9%.\n\n\nCONCLUSIONS\nThis Class A quality evaluation of MR neurography's diagnostic efficacy revealed that piriformis muscle asymmetry and sciatic nerve hyperintensity at the sciatic notch exhibited a 93% specificity and 64% sensitivity in distinguishing patients with piriformis syndrome from those without who had similar symptoms (p < 0.01). Evaluation of the nerve beyond the proximal foramen provided eight additional diagnostic categories affecting 96% of these patients. More than 80% of the population good or excellent functional outcome was achieved.",
"title": ""
},
{
"docid": "925ae9febfc3e9ab02e76c517ed21bfc",
"text": "This study presents the macrosocial and macropsychological correlates of two cultural dimensions, Individualism-Collectivism and Hierarchy, based on a review of cross-cultural research. Correlations between the culturelevel value scores provided by Hofstede, Schwartz and Trompenaars and nation-level indices confirm their criterion validity. Thus power distance and collectivism are correlated with low social development (HDI index), income differences (Gini index), the socio-political corruption index, and the competitiveness index. The predominantly Protestant societies are more individualist and egalitarian, the Confucianist societies are more collectivist; and Islamic sociRésumé Cette étude présente les facteurs macro-sociaux et macro-psychologiques de deux dimensions culturelles, l’Individualisme-Collectivisme et la Hiérarchie ou Distance au Pouvoir, dimensions basées sur certaines révisions des recherches dans le domaine transculturel. Les corrélations entre les valeurs, au niveau culturel, fournies par Hofstede, Schwartz et Trompenaars, et des index socio-économiques confirment la validité de ces dimensions. La distance de pouvoir et le collectivisme sont associés au bas développement social (indice HDI), aux différences de revenus (indice Gini), à l’indice de corruption sociopolitique et de compétitivité. Les sociétés majoritairement protestantes sont plus individualistes et égalitaires, les sociétés confuciaMots-clés Culture, Individualisme, Collectivisme, Distance au Pouvoir o Hierarchie Key-words Culture, Individualism, Collectivism, Power Distance Correspondence concerning this article should be addressed either to Nekane Basabe, Universidad del País Vasco, Departamento de Psicología Social, Paseo de la Universidad, 7, 01006 Vitoria, Spain; or to Maria Ros, Universidad Complutense, Departamento Psicología Social, 28023 Madrid, Spain. Request for reprints should be directed to Nekane Basabe (email [email protected]) or Maria Ros ([email protected]) This study was supported by the following Basque Country University Research Grants MCYT BSO2001-1236-CO-7-01, 9/UPV00109.231-13645/2001, from the University of the Basque Country and Spanish Government. * Nekane Basabe, University of the Basque Country, San Sebastián, Spain. ** María Ros, Complutense University of Madrid, Madrid, Spain. MEP 1/2005 18/04/05 17:47 Page 189",
"title": ""
},
{
"docid": "b4c3b17b43767c0edffbdb32132a6ad5",
"text": "We study the security and privacy of private browsing modes recently added to all major browsers. We first propose a clean definition of the goals of private browsing and survey its implementation in different browsers. We conduct a measurement study to determine how often it is used and on what categories of sites. Our results suggest that private browsing is used differently from how it is marketed. We then describe an automated technique for testing the security of private browsing modes and report on a few weaknesses found in the Firefox browser. Finally, we show that many popular browser extensions and plugins undermine the security of private browsing. We propose and experiment with a workable policy that lets users safely run extensions in private browsing mode.",
"title": ""
},
{
"docid": "42a0e0ab1ae2b190c913e69367b85001",
"text": "One of the most challenging problems facing network operators today is network attacks identification due to extensive number of vulnerabilities in computer systems and creativity of attackers. To address this problem, we present a deep learning approach for intrusion detection systems. Our approach uses Deep Auto-Encoder (DAE) as one of the most well-known deep learning models. The proposed DAE model is trained in a greedy layer-wise fashion in order to avoid overfitting and local optima. The experimental results on the KDD-CUP'99 dataset show that our approach provides substantial improvement over other deep learning-based approaches in terms of accuracy, detection rate and false alarm rate.",
"title": ""
},
{
"docid": "9d38f723a24c4f330c3667f2789c528a",
"text": "State-of-the-art graphic processing units (GPUs) provide very high memory bandwidth, but the performance of many general-purpose GPU (GPGPU) workloads is still bounded by memory bandwidth. Although compression techniques have been adopted by commercial GPUs, they are only used for compressing texture and color data, not data for GPGPU workloads. Furthermore, the microarchitectural details of GPU compression are proprietary and its performance benefits have not been previously published. In this paper, we first investigate required microarchitectural changes to support lossless compression techniques for data transferred between the GPU and its off-chip memory to provide higher effective bandwidth. Second, by exploiting some characteristics of floating-point numbers in many GPGPU workloads, we propose to apply lossless compression to floating-point numbers after truncating their least-significant bits (i.e., lossy compression). This can reduce the bandwidth usage even further with very little impact on overall computational accuracy. Finally, we demonstrate that a GPU with our lossless and lossy compression techniques can improve the performance of memory-bound GPGPU workloads by 26% and 41% on average.",
"title": ""
},
{
"docid": "75c29edf7090ac60c8738a0a7b127dc1",
"text": "TetriSched is a scheduler that works in tandem with a calendaring reservation system to continuously re-evaluate the immediate-term scheduling plan for all pending jobs (including those with reservations and best-effort jobs) on each scheduling cycle. TetriSched leverages information supplied by the reservation system about jobs' deadlines and estimated runtimes to plan ahead in deciding whether to wait for a busy preferred resource type (e.g., machine with a GPU) or fall back to less preferred placement options. Plan-ahead affords significant flexibility in handling mis-estimates in job runtimes specified at reservation time. Integrated with the main reservation system in Hadoop YARN, TetriSched is experimentally shown to achieve significantly higher SLO attainment and cluster utilization than the best-configured YARN reservation and CapacityScheduler stack deployed on a real 256 node cluster.",
"title": ""
}
] |
scidocsrr
|
9c0a90468db57f87322ff584de436219
|
Understanding the Linux Kernel
|
[
{
"docid": "4304d7ef3caaaf874ad0168ce8001678",
"text": "In a path-breaking paper last year Pat and Betty O’Neil and Gerhard Weikum pro posed a self-tuning improvement to the Least Recently Used (LRU) buffer management algorithm[l5]. Their improvement is called LRU/k and advocates giving priority to buffer pages baaed on the kth most recent access. (The standard LRU algorithm is denoted LRU/l according to this terminology.) If Pl’s kth most recent access is more more recent than P2’s, then Pl will be replaced after P2. Intuitively, LRU/k for k > 1 is a good strategy, because it gives low priority to pages that have been scanned or to pages that belong to a big randomly accessed file (e.g., the account file in TPC/A). They found that LRU/S achieves most of the advantage of their method. The one problem of LRU/S is the processor *Supported by U.S. Office of Naval Research #N00014-91-E 1472 and #N99914-92-J-1719, U.S. National Science Foundation grants #CC%9103953 and IFlI-9224691, and USBA #5555-19. Part of this work was performed while Theodore Johnson was a 1993 ASEE Summer Faculty Fellow at the National Space Science Data Center of NASA Goddard Space Flight Center. t Authors’ e-mail addresses : [email protected] and",
"title": ""
}
] |
[
{
"docid": "266f636d13f406ecbacf8ed8443b2b5c",
"text": "This review examines the most frequently cited sociological theories of crime and delinquency. The major theoretical perspectives are presented, beginning with anomie theory and the theories associated with the Chicago School of Sociology. They are followed by theories of strain, social control, opportunity, conflict, and developmental life course. The review concludes with a conceptual map featuring the inter-relationships and contexts of the major theoretical perspectives.",
"title": ""
},
{
"docid": "21d22dd1ae61539e6885654e95d541ee",
"text": "Reducing noise from the medical images, a satellite image etc. is a challenge for the researchers in digital image processing. Several approaches are there for noise reduction. Generally speckle noise is commonly found in synthetic aperture radar images, satellite images and medical images. This paper proposes filtering techniques for the removal of speckle noise from the digital images. Quantitative measures are done by using signal to noise ration and noise level is measured by the standard deviation.",
"title": ""
},
{
"docid": "a8122b8139b88ad5bff074d527b76272",
"text": "Salt is a natural component of the Australian landscape to which a number of biota inhabiting rivers and wetlands are adapted. Under natural flow conditions periods of low flow have resulted in the concentration of salts in wetlands and riverine pools. The organisms of these systems survive these salinities by tolerance or avoidance. Freshwater ecosystems in Australia are now becoming increasingly threatened by salinity because of rising saline groundwater and modification of the water regime reducing the frequency of high-flow (flushing) events, resulting in an accumulation of salt. Available data suggest that aquatic biota will be adversely affected as salinity exceeds 1000 mg L (1500 EC) but there is limited information on how increasing salinity will affect the various life stages of the biota. Salinisation can lead to changes in the physical environment that will affect ecosystem processes. However, we know little about how salinity interacts with the way nutrients and carbon are processed within an ecosystem. This paper updates the knowledge base on how salinity affects the physical and biotic components of aquatic ecosystems and explores the needs for information on how structure and function of aquatic ecosystems change with increasing salinity. BT0215 Ef ect of s al ini ty on f r eshwat er ecosys t em s in A us t rali a D. L. Niel e etal",
"title": ""
},
{
"docid": "26162f0e3f6c8752a5dbf7174d2e5e44",
"text": "Literature on the combination of qualitative and quantitative research components at the primary empirical study level has recently accumulated exponentially. However, this combination is only rarely discussed and applied at the research synthesis level. The purpose of this paper is to explore the possible contribution of mixed methods research to the integration of qualitative and quantitative research at the synthesis level. In order to contribute to the methodology and utilization of mixed methods at the synthesis level, we present a framework to perform mixed methods research syntheses (MMRS). The presented classification framework can help to inform researchers intending to carry out MMRS, and to provide ideas for conceptualizing and developing those syntheses. We illustrate the use of this framework by applying it to the planning of MMRS on effectiveness studies concerning interventions for challenging behavior in persons with intellectual disabilities, presenting two hypothetical examples. Finally, we discuss possible strengths of MMRS and note some remaining challenges concerning the implementation of these syntheses.",
"title": ""
},
{
"docid": "d40e565a2ed22af998ae60f670210f57",
"text": "Research on human infants has begun to shed light on early-develpping processes for segmenting perceptual arrays into objects. Infants appear to perceive objects by analyzing three-dimensional surface arrangements and motions. Their perception does not accord with a general tendency to maximize figural goodness or to attend-to nonaccidental geometric relations in visual arrays. Object perception does accord with principles governing the motions of material bodies: Infants divide perceptual arrays into units that move as connected wholes, that move separately from one another, that tend to maintain their size and shape over motion, and that tend to act upon each other only on contact. These findings suggest that o general representation of object unity and boundaries is interposed between representations of surfaces and representations of obiects of familiar kinds. The processes that construct this representation may be related to processes of physical reasoning. This article is animated by two proposals about perception and perceptual development. One proposal is substantive: In situations where perception develops through experience, but without instruction or deliberate reflection , development tends to enrich perceptual abilities but not to change them fundamentally. The second proposal is methodological: In the above situations , studies of the origins and early development of perception can shed light on perception in its mature state. These proposals will arise from a discussion of the early development of one perceptual ability: the ability to organize arrays of surfaces into unitary, bounded, and persisting objects. PERCEIVING OBJECTS In recent years, my colleagues and I have been studying young infants' perception of objects in complex displays in which objects are adjacent to other objects, objects are partly hidden behind other objects, of objects move fully",
"title": ""
},
{
"docid": "74a233279ecfd8a66d24d283002051ab",
"text": "This paper proposes a communication-assisted protection strategy implementable by commercially available microprocessor-based relays for the protection of medium-voltage microgrids. Even though the developed protection strategy benefits from communications, it offers a backup protection strategy to manage communication network failures. The paper also introduces the structure of a relay that enables the proposed protection strategy. Comprehensive simulation studies are carried out to verify the effectiveness of the proposed protection strategy under different fault scenarios, in the PSCAD/EMTDC software environment.",
"title": ""
},
{
"docid": "b72c8a92e8d0952970a258bb43f5d1da",
"text": "Neural networks excel in detecting regular patterns but are less successful in representing and manipulating complex data structures, possibly due to the lack of an external memory. This has led to the recent development of a new line of architectures known as Memory-Augmented Neural Networks (MANNs), each of which consists of a neural network that interacts with an external memory matrix. However, this RAM-like memory matrix is unstructured and thus does not naturally encode structured objects. Here we design a new MANN dubbed Relational Dynamic Memory Network (RDMN) to bridge the gap. Like existing MANNs, RDMN has a neural controller but its memory is structured as multi-relational graphs. RDMN uses the memory to represent and manipulate graph-structured data in response to query; and as a neural network, RDMN is trainable from labeled data. Thus RDMN learns to answer queries about a set of graph-structured objects without explicit programming. We evaluate the capability of RDMN on several important prediction problems, including software vulnerability, molecular bioactivity and chemical-chemical interaction. Results demonstrate the efficacy of the proposed model.",
"title": ""
},
{
"docid": "a0a618a4c5e81dce26d095daea7668e2",
"text": "We study the efficiency of deblocking algorithms for improving visual signals degraded by blocking artifacts from compression. Rather than using only the perceptually questionable PSNR, we instead propose a block-sensitive index, named PSNR-B, that produces objective judgments that accord with observations. The PSNR-B modifies PSNR by including a blocking effect factor. We also use the perceptually significant SSIM index, which produces results largely in agreement with PSNR-B. Simulation results show that the PSNR-B results in better performance for quality assessment of deblocked images than PSNR and a well-known blockiness-specific index.",
"title": ""
},
{
"docid": "72535e221c8d0a274ed7b025a17c8a7c",
"text": "Along with increasing demand on improving power quality, the most popular technique that has been used is Active Power Filter (APF); this is because APF can easily eliminate unwanted harmonics, improve power factor and overcome voltage sags. This paper will discuss and analyze the simulation result for a three-phase shunt active power filter using MATLAB/Simulink program. This simulation will implement a non-linear load and compensate line current harmonics under balance and unbalance load. As a result of the simulation, it is found that an active power filter is the better way to reduce the total harmonic distortion (THD) which is required by quality standards IEEE-519.",
"title": ""
},
{
"docid": "3fec27391057a4c14f2df5933c4847d8",
"text": "This article explains how entrepreneurship can help resolve the environmental problems of global socio-economic systems. Environmental economics concludes that environmental degradation results from the failure of markets, whereas the entrepreneurship literature argues that opportunities are inherent in market failure. A synthesis of these literatures suggests that environmentally relevant market failures represent opportunities for achieving profitability while simultaneously reducing environmentally degrading economic behaviors. It also implies conceptualizations of sustainable and environmental entrepreneurship which detail how entrepreneurs seize the opportunities that are inherent in environmentally relevant market failures. Finally, the article examines the ability of the proposed theoretical framework to transcend its environmental context and provide insight into expanding the domain of the study of entrepreneurship. D 2005 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "08c6bd4aae8995a2291e22ccfcf026f2",
"text": "This paper presents an example-based method for calculating skeleton-driven body deformations. Our example data consists of range scans of a human body in a variety of poses. Using markers captured during range scanning, we construct a kinematic skeleton and identify the pose of each scan. We then construct a mutually consistent parameterization of all the scans using a posable subdivision surface template. The detail deformations are represented as displacements from this surface, and holes are filled smoothly within the displacement maps. Finally, we combine the range scans using k-nearest neighbor interpolation in pose space. We demonstrate results for a human upper body with controllable pose, kinematics, and underlying surface shape.",
"title": ""
},
{
"docid": "6e72c4401bfeedaffd92d5261face2c6",
"text": "OBJECTIVE\nTo examine the association between television advertising exposure and adults' consumption of fast foods.\n\n\nDESIGN\nCross-sectional telephone survey. Questions included measures of frequency of fast-food consumption at different meal times and average daily hours spent watching commercial television.\n\n\nSUBJECTS/SETTING\nSubjects comprised 1495 adults (41 % response rate) aged >or=18 years from Victoria, Australia.\n\n\nRESULTS\nTwenty-three per cent of respondents usually ate fast food for dinner at least once weekly, while 17 % consumed fast food for lunch on a weekly basis. The majority of respondents reported never eating fast food for breakfast (73 %) or snacks (65 %). Forty-one per cent of respondents estimated watching commercial television for <or=1 h/d (low viewers); 29 % watched for 2 h/d (moderate viewers); 30 % watched for >or=3 h/d (high viewers). After adjusting for demographic variables, high viewers were more likely to eat fast food for dinner at least once weekly compared with low viewers (OR = 1.45; 95 % CI 1.04, 2.03). Both moderate viewers (OR = 1.53; 95 % CI 1.01, 2.31) and high viewers (OR = 1.81; 95 % CI 1.20, 2.72) were more likely to eat fast food for snacks at least once weekly compared with low viewers. Commercial television viewing was not significantly related (P > 0.05) to fast-food consumption at breakfast or lunch.\n\n\nCONCLUSIONS\nThe results of the present study provide evidence to suggest that cumulative exposure to television food advertising is linked to adults' fast-food consumption. Additional research that systematically assesses adults' behavioural responses to fast-food advertisements is needed to gain a greater understanding of the mechanisms driving this association.",
"title": ""
},
{
"docid": "d9cdbc7dd4d8ae34a3d5c1765eb48072",
"text": "Beanstalk is an educational game for children ages 6-10 teaching balance-fulcrum principles while folding in scientific inquiry and socio-emotional learning. This paper explores the incorporation of these additional dimensions using intrinsic motivation and a framing narrative. Four versions of the game are detailed, along with preliminary player data in a 2×2 pilot test with 64 children shaping the modifications of Beanstalk for much broader testing.",
"title": ""
},
{
"docid": "f4c1a8b19248e0cb8e2791210715e7b7",
"text": "The translation of proper names is one of the most challenging activities every translator faces. While working on children’s literature, the translation is especially complicated since proper names usually have various allusions indicating sex, age, geographical belonging, history, specific meaning, playfulness of language and cultural connotations. The goal of this article is to draw attention to strategic choices for the translation of proper names in children’s literature. First, the article presents the theoretical considerations that deal with different aspects of proper names in literary works and the issue of their translation. Second, the translation strategies provided by the translation theorist Eirlys E. Davies used for this research are explained. In addition, the principles of adaptation of proper names provided the State Commission of the Lithuanian Language are presented. Then, the discussion proceeds to the quantitative analysis of the translated proper names with an emphasis on providing and explaining numerous examples. The research has been carried out on four popular fantasy books translated from English and German by three Lithuanian translators. After analyzing the strategies of preservation, localization, transformation and creation, the strategy of localization has proved to be the most frequent one in all translations.",
"title": ""
},
{
"docid": "b52fb324287ec47860e189062f961ad8",
"text": "In this paper we reexamine the place and role of stable model semantics in logic programming and contrast it with a least Herbrand model approach to Horn programs. We demonstrate that inherent features of stable model semantics naturally lead to a logic programming system that offers an interesting alternative to more traditional logic programming styles of Horn logic programming, stratified logic programming and logic programming with well-founded semantics. The proposed approach is based on the interpretation of program clauses as constraints. In this setting programs do not describe a single intended model, but a family of stable models. These stable models encode solutions to the constraint satisfaction problem described by the program. Our approach imposes restrictions on the syntax of logic programs. In particular, function symbols are eliminated from the language. We argue that the resulting logic programming system is well-attuned to problems in the class NP, has a well-defined domain of applications, and an emerging methodology of programming. We point out that what makes the whole approach viable is recent progress in implementations of algorithms to compute stable models of propositional logic programs.",
"title": ""
},
{
"docid": "0222814440107fe89c13a790a6a3833e",
"text": "This paper presents a third method of generation and detection of a single-sideband signal. The method is basically different from either the conventional filter or phasing method in that no sharp cutoff filters or wide-band 90° phase-difference networks are needed. This system is especially suited to keeping the signal energy confined to the desired bandwidth. Any unwanted sideband occupies the same band as the desired sideband, and the unwanted sideband in the usual sense is not present.",
"title": ""
},
{
"docid": "73d58bbe0550fb58efc49ae5f84a1c7b",
"text": "In this study, we will present the novel application of Type-2 (T2) fuzzy control into the popular video game called flappy bird. To the best of our knowledge, our work is the first deployment of the T2 fuzzy control into the computer games research area. We will propose a novel T2 fuzzified flappy bird control system that transforms the obstacle avoidance problem of the game logic into the reference tracking control problem. The presented T2 fuzzy control structure is composed of two important blocks which are the reference generator and Single Input Interval T2 Fuzzy Logic Controller (SIT2-FLC). The reference generator is the mechanism which uses the bird's position and the pipes' positions to generate an appropriate reference signal to be tracked. Thus, a conventional fuzzy feedback control system can be defined. The generated reference signal is tracked via the presented SIT2-FLC that can be easily tuned while also provides a certain degree of robustness to system. We will investigate the performance of the proposed T2 fuzzified flappy bird control system by providing comparative simulation results and also experimental results performed in the game environment. It will be shown that the proposed T2 fuzzified flappy bird control system results with a satisfactory performance both in the framework of fuzzy control and computer games. We believe that this first attempt of the employment of T2-FLCs in games will be an important step for a wider deployment of T2-FLCs in the research area of computer games.",
"title": ""
},
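To make the reference-tracking idea in the preceding passage concrete, here is a deliberately simplified sketch: a type-1, single-input fuzzy controller that maps the tracking error (reference height minus bird height) to a flap decision. It is not the paper's interval type-2 SIT2-FLC; the membership functions, singleton consequents and thresholds are all invented for illustration.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_control(error):
    """Map the tracking error to a control action in [-1, 1] using three rules
    and weighted-average defuzzification. Parameters are illustrative only."""
    mu = {
        "neg":  tri(error, -2.0, -1.0, 0.0),   # bird above the reference
        "zero": tri(error, -1.0,  0.0, 1.0),   # roughly on track
        "pos":  tri(error,  0.0,  1.0, 2.0),   # bird below the reference
    }
    out = {"neg": -1.0, "zero": 0.0, "pos": 1.0}  # crisp singleton consequents
    num = sum(mu[k] * out[k] for k in mu)
    den = sum(mu.values())
    return num / den if den > 0 else 0.0

# Feedback loop against a hypothetical game interface: flap when the
# controller output crosses a positive threshold.
for err in (-1.5, -0.3, 0.0, 0.4, 1.2):
    u = fuzzy_control(err)
    print(f"error={err:+.1f} -> control={u:+.2f} -> {'flap' if u > 0.5 else 'no flap'}")
```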
{
"docid": "0d6a276770da5e7e544f66256084ba75",
"text": "ARC AND PATH CONSISTENCY REVISITED' Roger Mohr and Thomas C. Henderson 2 CRIN BP 239 54506 Vandoeuvre (France)",
"title": ""
},
{
"docid": "d131cda62d8ac73b209d092d8e36037e",
"text": "The problem of packing congruent spheres (i.e., copies of the same sph ere) in a bounded domain arises in many applications. In this paper, we present a new pack-and-shake scheme for packing congruent spheres in various bounded 2-D domains. Our packing scheme is based on a number of interesting ideas, such as a trimming and packing approach, optimal lattice packing under translation and/or rotation, shaking procedures, etc. Our packing algorithms have fairly low time complexities. In certain cases, they even run in nearly linear time. Our techniques can be easily generalized to congruent packing of other shapes of objects, and are readily extended to higher dimensional spaces. Applications of our packing algorithms to treatment planning of radiosurgery are discussed. Experimental results suggest that our algorithms produce reasonably dense packings.",
"title": ""
},
{
"docid": "1c9c30e3e007c2d11c6f5ebd0092050b",
"text": "Fatty acids are essential components of the dynamic lipid metabolism in cells. Fatty acids can also signal to intracellular pathways to trigger a broad range of cellular responses. Oleic acid is an abundant monounsaturated omega-9 fatty acid that impinges on different biological processes, but the mechanisms of action are not completely understood. Here, we report that oleic acid stimulates the cAMP/protein kinase A pathway and activates the SIRT1-PGC1α transcriptional complex to modulate rates of fatty acid oxidation. In skeletal muscle cells, oleic acid treatment increased intracellular levels of cyclic adenosine monophosphate (cAMP) that turned on protein kinase A activity. This resulted in SIRT1 phosphorylation at Ser-434 and elevation of its catalytic deacetylase activity. A direct SIRT1 substrate is the transcriptional coactivator peroxisome proliferator-activated receptor γ coactivator 1-α (PGC1α), which became deacetylated and hyperactive after oleic acid treatment. Importantly, oleic acid, but not other long chain fatty acids such as palmitate, increased the expression of genes linked to fatty acid oxidation pathway in a SIRT1-PGC1α-dependent mechanism. As a result, oleic acid potently accelerated rates of complete fatty acid oxidation in skeletal muscle cells. These results illustrate how a single long chain fatty acid specifically controls lipid oxidation through a signaling/transcriptional pathway. Pharmacological manipulation of this lipid signaling pathway might provide therapeutic possibilities to treat metabolic diseases associated with lipid dysregulation.",
"title": ""
}
] |
scidocsrr
|
f6c5620afa78588d3bfef71f6690a2fc
|
Automatic Video Summarization by Graph Modeling
|
[
{
"docid": "e5261ee5ea2df8bae7cc82cb4841dea0",
"text": "Automatic generation of video summarization is one of the key techniques in video management and browsing. In this paper, we present a generic framework of video summarization based on the modeling of viewer's attention. Without fully semantic understanding of video content, this framework takes advantage of understanding of video content, this framework takes advantage of computational attention models and eliminates the needs of complex heuristic rules in video summarization. A set of methods of audio-visual attention model features are proposed and presented. The experimental evaluations indicate that the computational attention based approach is an effective alternative to video semantic analysis for video summarization.",
"title": ""
},
{
"docid": "aea474fcacb8af1d820413b5f842056f",
"text": ".4 video sequence can be reprmented as a trajectory curve in a high dmensiond feature space. This video curve can be an~yzed by took Mar to those devdoped for planar cnrv=. h partidar, the classic biiary curve sphtting algorithm has been fonnd to be a nseti tool for video analysis. With a spEtting condition that checks the dimension&@ of the curve szgrnent being spht, the video curve can be recursivdy sirnpMed and repr~ented as a tree stmcture, and the framm that are fomtd to be junctions betieen curve segments at Merent, lev& of the tree can be used as ke-fiarn~s to summarize the tideo sequences at Merent levds of det ti. The-e keyframes can be combmed in various spatial and tempord configurations for browsing purposes. We describe a simple video player that displays the ke.fiarn~ seqnentifly and lets the user change the summarization level on the fly tith an additiond shder. 1.1 Sgrrlficance of the Problem Recent advances in digitd technology have promoted video as a vdnable information resource. I$le can now XCaS Se lected &ps from archives of thousands of hours of video footage host instantly. This new resource is e~citing, yet the sheer volume of data makes any retried task o~emhehning and its dcient. nsage impowible. Brow= ing tools that wodd flow the user to qnitiy get an idea of the content of video footage are SW important ti~~ ing components in these video database syst-Fortunately, the devdopment of browsing took is a very active area of research [13, 16, 17], and pow~ solutions are in the horizon. Browsers use as balding blocks subsets of fiarnes c~ed ke.frames, sdected because they smnmarize the video content better than their neighbors. Obviously, sdecting one keytiarne per shot does not adeqnatdy surnPermisslonlo rna~edigitalorhardcopi= of aftorpartof this v:ork for personalor classroomuse is granted v;IIhouIfee providedlhat copies are nol made or distributed for profitor commercial advantage, andthat copiesbear!hrsnoticeandihe full citationon ihe first page.To copyoxhem,se,IOrepublishtopostonservers or lo redistribute10 lists, requiresprior specific pzrrnisston znt’or a fe~ AChl hlultimedia’9S. BnsIol.UK @ 199sAchi 1-5s11>036s!9s/000s S.oo 211 marize the complex information content of long shots in which camera pan and zoom as we~ as object motion pr~ gr=sivdy unvd entirely new situations. Shots shotid be sampled by a higher or lower density of keyfrarnes according to their activity level. Sampbg techniques that would attempt to detect sigficant information changes simply by looking at pairs of frames or even several consecutive frames are bound to lack robustness in presence of noise, such as jitter occurring during camera motion or sudden ~urnination changes due to fluorescent Eght ticker, glare and photographic flash. kterestin~y, methods devdoped to detect perceptually signi$mnt points and &continuities on noisy 2D curves have succes~y addressed this type of problem, and can be extended to the mdtidimensiond curves that represent video sequences. h this paper, we describe an algorithm that can de compose a curve origin~y defined in a high dmensiond space into curve segments of low dimension. In partictiar, a video sequence can be mapped to a high dimensional polygonal trajectory curve by mapping each frame to a time dependent feature usctor, and representing these feature vectors as points. We can apply this algorithm to segment the curve of the video sequence into low ditnensiond curve segments or even fine segments. 
Th=e segments correspond to video footage where activity is low and frames are redundant. The idea is to detect the constituent segments of the video curoe rather than attempt to lomte the jtmctions between these segments directly. In such a dud aPProach, the curve is decomposed into segments \\vhich exkibit hearity or low dirnensiontity. Curvature discontinuiti~ are then assigned to the junctions between these segments. Detecting generrd stmcture in the video curves to derive frame locations of features such as cuts and shot transitions, rather than attempting to locate the features thernsdv~ by Iocrd analysis of frame changes, ensures that the detected positions of these features are more stable in the presence of noise which is effectively faltered out. h addition, the proposed technique butids a binary tree representation of a video sequence where branches cent tin frarn= corresponding to more dettied representations of the sequence. The user can view the video sequence at coarse or fine lev& of detds, zooming in by displaying keyfrantes corresponding to the leaves of the tree, or zooming out by displaying keyframes near the root of the tree. ●",
"title": ""
}
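As a rough illustration of the recursive curve-splitting idea in the passage above, the sketch below splits a feature-vector trajectory at the point farthest from the chord joining its endpoints (a Ramer-Douglas-Peucker-style criterion, used here as a stand-in for the paper's dimensionality test) and collects the junction frames as keyframes. Function names and the tolerance are ours, not the authors'.

```python
import numpy as np

def point_chord_distances(curve):
    """Distance of every point to the chord joining the first and last point."""
    p0, p1 = curve[0], curve[-1]
    chord = p1 - p0
    norm = np.linalg.norm(chord)
    if norm == 0.0:
        return np.linalg.norm(curve - p0, axis=1)
    t = (curve - p0) @ chord / norm**2      # projection parameter onto the chord
    proj = p0 + np.outer(t, chord)          # closest points on the chord
    return np.linalg.norm(curve - proj, axis=1)

def keyframe_indices(curve, start, end, tol, out):
    """Recursively split curve[start:end+1] and record junction frames."""
    segment = curve[start:end + 1]
    if len(segment) <= 2:
        return
    dist = point_chord_distances(segment)
    split = int(np.argmax(dist))
    if dist[split] > tol:                   # segment not 'flat' enough: split at the junction
        out.add(start + split)
        keyframe_indices(curve, start, start + split, tol, out)
        keyframe_indices(curve, start + split, end, tol, out)

def summarize(curve, tol=0.5):
    out = {0, len(curve) - 1}               # always keep the first and last frame
    keyframe_indices(np.asarray(curve, dtype=float), 0, len(curve) - 1, tol, out)
    return sorted(out)

# Toy trajectory: 100 frames of an 8-dimensional feature vector.
rng = np.random.default_rng(0)
traj = np.cumsum(rng.normal(size=(100, 8)), axis=0)
print(summarize(traj, tol=2.0))
```

Raising the tolerance keeps only the coarsest junctions, which mirrors the zoom-out behaviour of the tree described in the passage.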
] |
[
{
"docid": "298d3280deb3bb326314a7324d135911",
"text": "BACKGROUND\nUterine leiomyomas are rarely seen in adolescent and to date nine leiomyoma cases have been reported under age 17. Eight of these have been treated surgically via laparotomic myomectomy.\n\n\nCASE\nA 16-year-old girl presented with a painless, lobulated necrotic mass protruding through the introitus. The mass originated from posterior uterine wall resected using hysteroscopy. Final pathology report revealed a submucous uterine leiomyoma.\n\n\nSUMMARY AND CONCLUSION\nSubmucous uterine leiomyomas may present as a vaginal mass in adolescents and can be safely treated using hysteroscopy.",
"title": ""
},
{
"docid": "8dc9f29e305d66590948896de2e0a672",
"text": "Affective events are events that impact people in positive or negative ways. When people discuss an event, people understand not only the affective polarity but also the reason for the event being positive or negative. In this paper, we aim to categorize affective events based on the reasons why events are affective. We propose that an event is affective to people often because the event describes or indicates the satisfaction or violation of certain kind of human needs. For example, the event “I broke my leg” affects people negatively because the need to be physically healthy is violated. “I play computer games” has a positive affect on people because the need to have fun is probably satisfied. To categorize affective events in narrative human language, we define seven common human need categories and introduce a new data set of randomly sampled affective events with manual human need annotations. In addition, we explored two types of methods: a LIWC lexicon based method and supervised classifiers to automatically categorize affective event expressions with respect to human needs. Experiments show that these methods achieved moderate performance on this task.",
"title": ""
},
{
"docid": "77d0786af4c5eee510a64790af497e25",
"text": "Mobile computing is a revolutionary technology, born as a result of remarkable advances in computer hardware and wireless communication. Mobile applications have become increasingly popular in recent years. Today, it is not uncommon to see people playing games or reading mails on handphones. With the rapid advances in mobile computing technology, there is an increasing demand for processing realtime transactions in a mobile environment. Hence there is a strong need for efficient transaction management, data access modes and data management, consistency control and other mobile data management issues. This survey paper will cover issues related to concurrency control in mobile database. This paper studies concurrency control problem in mobile database systems, we analyze the features of mobile database and concurrency control techniques. With the increasing number of mobile hosts there are many new solutions and algorithms for concurrency control being proposed and implemented. We wish that our paper has served as a survey of the important solutions in the fields of concurrency control in mobile database. Keywords-component; Distributed Real-time Databases, Mobile Real-time Databases, Concurrency Control, Data Similarity, and Transaction Scheduling.",
"title": ""
},
{
"docid": "3cceb3792d55bd14adb579bb9e3932ec",
"text": "BACKGROUND\nTrastuzumab, a monoclonal antibody against human epidermal growth factor receptor 2 (HER2; also known as ERBB2), was investigated in combination with chemotherapy for first-line treatment of HER2-positive advanced gastric or gastro-oesophageal junction cancer.\n\n\nMETHODS\nToGA (Trastuzumab for Gastric Cancer) was an open-label, international, phase 3, randomised controlled trial undertaken in 122 centres in 24 countries. Patients with gastric or gastro-oesophageal junction cancer were eligible for inclusion if their tumours showed overexpression of HER2 protein by immunohistochemistry or gene amplification by fluorescence in-situ hybridisation. Participants were randomly assigned in a 1:1 ratio to receive a chemotherapy regimen consisting of capecitabine plus cisplatin or fluorouracil plus cisplatin given every 3 weeks for six cycles or chemotherapy in combination with intravenous trastuzumab. Allocation was by block randomisation stratified by Eastern Cooperative Oncology Group performance status, chemotherapy regimen, extent of disease, primary cancer site, and measurability of disease, implemented with a central interactive voice recognition system. The primary endpoint was overall survival in all randomised patients who received study medication at least once. This trial is registered with ClinicalTrials.gov, number NCT01041404.\n\n\nFINDINGS\n594 patients were randomly assigned to study treatment (trastuzumab plus chemotherapy, n=298; chemotherapy alone, n=296), of whom 584 were included in the primary analysis (n=294; n=290). Median follow-up was 18.6 months (IQR 11-25) in the trastuzumab plus chemotherapy group and 17.1 months (9-25) in the chemotherapy alone group. Median overall survival was 13.8 months (95% CI 12-16) in those assigned to trastuzumab plus chemotherapy compared with 11.1 months (10-13) in those assigned to chemotherapy alone (hazard ratio 0.74; 95% CI 0.60-0.91; p=0.0046). The most common adverse events in both groups were nausea (trastuzumab plus chemotherapy, 197 [67%] vs chemotherapy alone, 184 [63%]), vomiting (147 [50%] vs 134 [46%]), and neutropenia (157 [53%] vs 165 [57%]). Rates of overall grade 3 or 4 adverse events (201 [68%] vs 198 [68%]) and cardiac adverse events (17 [6%] vs 18 [6%]) did not differ between groups.\n\n\nINTERPRETATION\nTrastuzumab in combination with chemotherapy can be considered as a new standard option for patients with HER2-positive advanced gastric or gastro-oesophageal junction cancer.\n\n\nFUNDING\nF Hoffmann-La Roche.",
"title": ""
},
{
"docid": "59932c6e6b406a41d814e651d32da9b2",
"text": "The purpose of this study was to examine the effects of virtual reality simulation (VRS) on learning outcomes and retention of disaster training. The study used a longitudinal experimental design using two groups and repeated measures. A convenience sample of associate degree nursing students enrolled in a disaster course was randomized into two groups; both groups completed web-based modules; the treatment group also completed a virtually simulated disaster experience. Learning was measured using a 20-question multiple-choice knowledge assessment pre/post and at 2 months following training. Results were analyzed using the generalized linear model. Independent and paired t tests were used to examine the between- and within-participant differences. The main effect of the virtual simulation was strongly significant (p < .0001). The VRS effect demonstrated stability over time. In this preliminary examination, VRS is an instructional method that reinforces learning and improves learning retention.",
"title": ""
},
{
"docid": "a6872c1cab2577547c9a7643a6acd03e",
"text": "Current theories and models of leadership seek to explain the influence of the hierarchical superior upon the satisfaction and performance of subordinates. While disagreeing with one another in important respects, these theories and models share an implicit assumption that while the style of leadership likely to be effective may vary according to the situation, some leadership style will be effective regardless of the situation. It has been found, however, that certain individual, task, and organizational variables act as \"substitutes for leadership,\" negating the hierarchical superior's ability to exert either positive or negative influence over subordinate attitudes and effectiveness. This paper identifies a number of such substitutes for leadership, presents scales of questionnaire items for their measurement, and reports some preliminary tests.",
"title": ""
},
{
"docid": "7dead097d1055a713bb56f9369eb1f98",
"text": "Web applications vulnerabilities allow attackers to perform malicious actions that range from gaining unauthorized account access to obtaining sensitive data. The number of web application vulnerabilities in last decade is growing constantly. Improper input validation and sanitization are reasons for most of them. The most important of these vulnerabilities based on improper input validation and sanitization is SQL injection (SQLI) vulnerability. The primary focus of our research was to develop a reliable black-box vulnerability scanner for detecting SQLI vulnerability - SQLIVDT (SQL Injection Vulnerability Detection Tool). The black-box approach is based on simulation of SQLI attacks against web applications. Thus, the scope of analysis is limited to HTTP responses and HTML pages received from the application server. In order to achieve efficient SQLI vulnerability detection, an efficient algorithm for HTML page similarity detection is used. The proposed tool showed promising results as compared to six well-known web application scanners.",
"title": ""
},
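The scanner described in the preceding passage simulates SQLI attacks and compares HTTP responses with an efficient page-similarity algorithm, which is not spelled out there. The sketch below shows the general shape of such a black-box probe, using Python's difflib as a placeholder similarity measure and invented boolean-style payloads; the URL, parameter names and the 0.95 threshold are assumptions, not details from the paper.

```python
import difflib
import urllib.parse
import urllib.request

def page(url):
    """Fetch a page body as text (no error handling, for brevity)."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.read().decode(errors="replace")

def similarity(a, b):
    """Similarity ratio in [0, 1] between two HTML pages."""
    return difflib.SequenceMatcher(None, a, b).ratio()

def probe_sqli(base_url, param, value, threshold=0.95):
    """Compare the baseline response against responses to boolean-style payloads;
    a large divergence between the TRUE- and FALSE-payload pages is treated as a
    hint of injectable behaviour."""
    def with_value(v):
        return f"{base_url}?{urllib.parse.urlencode({param: v})}"

    baseline = page(with_value(value))
    page_true = page(with_value(value + "' AND '1'='1"))
    page_false = page(with_value(value + "' AND '1'='2"))

    sim_true = similarity(baseline, page_true)
    sim_false = similarity(baseline, page_false)
    # Heuristic: the TRUE payload should not change the page, the FALSE one should.
    return sim_true >= threshold and sim_false < threshold

# Hypothetical usage (placeholder URL and parameter; only run against systems you own):
# print(probe_sqli("http://testsite.example/item.php", "id", "3"))
```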
{
"docid": "edd9795ce024f8fed8057992cf3f4279",
"text": "INTRODUCTION\nIdiopathic talipes equinovarus is the most common congenital defect characterized by the presence of a congenital dysplasia of all musculoskeletal tissues distal to the knee. For many years, the treatment has been based on extensive surgery after manipulation and cast trial. Owing to poor surgical results, Ponseti developed a new treatment protocol consisting of manipulation with cast and an Achilles tenotomy. The new technique requires 4 years of orthotic management to guarantee good results. The most recent studies have emphasized how difficult it is to comply with the orthotic posttreatment protocol. Poor compliance has been attributed to parent's low educational and low income level. The purpose of the study is to evaluate if poor compliance is due to the complexity of the orthotic use or if it is related to family education, cultural, or income factors.\n\n\nMETHOD\nFifty-three patients with 73 idiopathic talipes equinovarus feet were treated with the Ponseti technique and followed for 48 months after completing the cast treatment. There was a male predominance (72%). The mean age at presentation was 1 month (range: 1 wk to 7 mo). Twenty patients (38%) had bilateral involvement, 17 patients (32%) had right side affected, and 16 patients (30%) had the left side involved. The mean time of manipulation and casting treatment was 6 weeks (range: 4 to 10 wk). Thirty-eight patients (72%) required Achilles tenotomy as stipulated by the protocol. Recurrence was considered if there was a deterioration of the Dimeglio severity score requiring remanipulation and casting.\n\n\nRESULTS\nTwenty-four out of 73 feet treated by our service showed the evidence of recurrence (33%). Sex, age at presentation, cast treatment duration, unilateral or bilateral, severity score, the necessity of Achilles tenotomy, family educational, or income level did not reveal any significant correlation with the recurrence risk. Noncompliance with the orthotic use showed a significant correlation with the recurrence rate. The noncompliance rate did not show any correlation with the patient demographic data or parent's education level, insurance, or cultural factors as proposed previously.\n\n\nCONCLUSION\nThe use of the brace is extremely relevant with the Ponseti technique outcome (recurrence) in the treatment of idiopathic talipes equinovarus. Noncompliance is not related to family education, cultural, or income level. The Ponseti postcasting orthotic protocol needs to be reevaluated to a less demanding option to improve outcome and brace compliance.",
"title": ""
},
{
"docid": "7db00719532ab0d9b408d692171d908f",
"text": "The real-time monitoring of human movement can provide valuable information regarding an individual's degree of functional ability and general level of activity. This paper presents the implementation of a real-time classification system for the types of human movement associated with the data acquired from a single, waist-mounted triaxial accelerometer unit. The major advance proposed by the system is to perform the vast majority of signal processing onboard the wearable unit using embedded intelligence. In this way, the system distinguishes between periods of activity and rest, recognizes the postural orientation of the wearer, detects events such as walking and falls, and provides an estimation of metabolic energy expenditure. A laboratory-based trial involving six subjects was undertaken, with results indicating an overall accuracy of 90.8% across a series of 12 tasks (283 tests) involving a variety of movements related to normal daily activities. Distinction between activity and rest was performed without error; recognition of postural orientation was carried out with 94.1% accuracy, classification of walking was achieved with less certainty (83.3% accuracy), and detection of possible falls was made with 95.6% accuracy. Results demonstrate the feasibility of implementing an accelerometry-based, real-time movement classifier using embedded intelligence",
"title": ""
},
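The classifier in the preceding passage runs on the wearable unit itself; as a rough desktop-side illustration of two of its ingredients, the sketch below separates activity from rest with a signal-magnitude-style measure and then infers posture from the gravity direction. All thresholds, the assumed 'up' axis and the category names are ours.

```python
import numpy as np

GRAVITY = 9.81  # m/s^2

def classify_window(acc, sma_threshold=1.0, tilt_threshold=45.0):
    """Very rough activity/posture classification of one window of triaxial
    accelerometer samples, shaped (n_samples, 3). Thresholds are illustrative."""
    acc = np.asarray(acc, dtype=float)
    gravity = acc.mean(axis=0)               # low-frequency (postural) component
    motion = acc - gravity                    # movement component
    # Mean per-sample sum of absolute motion accelerations (a simple SMA variant).
    sma = np.abs(motion).sum(axis=1).mean()
    if sma > sma_threshold:
        return "activity"
    # At rest: estimate tilt from the angle between gravity and the sensor axis
    # assumed to point up when the wearer stands (here, z).
    cos_tilt = gravity[2] / (np.linalg.norm(gravity) + 1e-9)
    tilt_deg = np.degrees(np.arccos(np.clip(cos_tilt, -1.0, 1.0)))
    return "upright rest" if tilt_deg < tilt_threshold else "lying rest"

# Toy example: 2 s of standing still versus 2 s of vigorous movement at 50 Hz.
rng = np.random.default_rng(7)
still = [0.0, 0.0, GRAVITY] + rng.normal(0, 0.05, (100, 3))
moving = [0.0, 0.0, GRAVITY] + rng.normal(0, 3.0, (100, 3))
print(classify_window(still), classify_window(moving))
```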
{
"docid": "a2842352924cbd1deff52976425a0bd6",
"text": "Content-based music information retrieval tasks have traditionally been solved using engineered features and shallow processing architectures. In recent years, there has been increasing interest in using feature learning and deep architectures instead, thus reducing the required engineering effort and the need for prior knowledge. However, this new approach typically still relies on mid-level representations of music audio, e.g. spectrograms, instead of raw audio signals. In this paper, we investigate whether it is possible to apply feature learning directly to raw audio signals. We train convolutional neural networks using both approaches and compare their performance on an automatic tagging task. Although they do not outperform a spectrogram-based approach, the networks are able to autonomously discover frequency decompositions from raw audio, as well as phase-and translation-invariant feature representations.",
"title": ""
},
{
"docid": "cdcdbb6dca02bdafdf9f5d636acb8b3d",
"text": "BACKGROUND\nExpertise has been extensively studied in several sports over recent years. The specificities of how excellence is achieved in Association Football, a sport practiced worldwide, are being repeatedly investigated by many researchers through a variety of approaches and scientific disciplines.\n\n\nOBJECTIVE\nThe aim of this review was to identify and synthesise the most significant literature addressing talent identification and development in football. We identified the most frequently researched topics and characterised their methodologies.\n\n\nMETHODS\nA systematic review of Web of Science™ Core Collection and Scopus databases was performed according to PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-analyses) guidelines. The following keywords were used: \"football\" and \"soccer\". Each word was associated with the terms \"talent\", \"expert*\", \"elite\", \"elite athlete\", \"identification\", \"career transition\" or \"career progression\". The selection was for the original articles in English containing relevant data about talent development/identification on male footballers.\n\n\nRESULTS\nThe search returned 2944 records. After screening against set criteria, a total of 70 manuscripts were fully reviewed. The quality of the evidence reviewed was generally excellent. The most common topics of analysis were (1) task constraints: (a) specificity and volume of practice; (2) performers' constraints: (a) psychological factors; (b) technical and tactical skills; (c) anthropometric and physiological factors; (3) environmental constraints: (a) relative age effect; (b) socio-cultural influences; and (4) multidimensional analysis. Results indicate that the most successful players present technical, tactical, anthropometric, physiological and psychological advantages that change non-linearly with age, maturational status and playing positions. These findings should be carefully considered by those involved in the identification and development of football players.\n\n\nCONCLUSION\nThis review highlights the need for coaches and scouts to consider the players' technical and tactical skills combined with their anthropometric and physiological characteristics scaled to age. Moreover, research addressing the psychological and environmental aspects that influence talent identification and development in football is currently lacking. The limitations detected in the reviewed studies suggest that future research should include the best performers and adopt a longitudinal and multidimensional perspective.",
"title": ""
},
{
"docid": "8ed2fa021e5b812de90795251b5c2b64",
"text": "A new implicit surface fitting method for surface reconstruction from scattered point data is proposed. The method combines an adaptive partition of unity approximation with least-squares RBF fitting and is capable of generating a high quality surface reconstruction. Given a set of points scattered over a smooth surface, first a sparse set of overlapped local approximations is constructed. The partition of unity generated from these local approximants already gives a faithful surface reconstruction. The final reconstruction is obtained by adding compactly supported RBFs. The main feature of the developed approach consists of using various regularization schemes which lead to economical, yet accurate surface reconstruction.",
"title": ""
},
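The passage above adds RBFs fitted by least squares on top of a partition-of-unity approximation. The sketch below shows only that least-squares fitting step, using Gaussian RBFs as a simplification of the compactly supported ones in the paper and a small Tikhonov term standing in for the regularization it mentions; the kernel width, centre placement and regularization weight are invented.

```python
import numpy as np

def fit_rbf(centers, points, values, sigma=0.5, reg=1e-8):
    """Least-squares fit of Gaussian RBF weights so that the RBF sum
    approximates `values` at `points`; `reg` is a small Tikhonov term."""
    d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1)  # squared distances
    A = np.exp(-d2 / (2.0 * sigma ** 2))                            # design matrix
    # Solve the regularized normal equations (A^T A + reg I) w = A^T f.
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + reg * np.eye(n), A.T @ values)

def eval_rbf(centers, w, query, sigma=0.5):
    d2 = ((query[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2)) @ w

# Toy example: approximate a height field z = sin(x) * cos(y) from scattered samples.
rng = np.random.default_rng(1)
pts = rng.uniform(-2, 2, size=(200, 2))
vals = np.sin(pts[:, 0]) * np.cos(pts[:, 1])
centers = rng.uniform(-2, 2, size=(40, 2))
w = fit_rbf(centers, pts, vals)
test = rng.uniform(-2, 2, size=(5, 2))
print(np.c_[eval_rbf(centers, w, test), np.sin(test[:, 0]) * np.cos(test[:, 1])])
```

The printed columns should be close but not identical; the fit is only an approximation whose quality depends on the assumed kernel width and number of centres.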
{
"docid": "99fdab0b77428f98e9486d1cc7430757",
"text": "Self organizing Maps (SOMs) are most well-known, unsupervised approach of neural network that is used for clustering and are very efficient in handling large and high dimensional dataset. As SOMs can be applied on large complex set, so it can be implemented to detect credit card fraud. Online banking and ecommerce has been experiencing rapid growth over past years and will show tremendous growth even in future. So, it is very necessary to keep an eye on fraudsters and find out some ways to depreciate the rate of frauds. This paper focuses on Real Time Credit Card Fraud Detection and presents a new and innovative approach to detect the fraud by the help of SOM. Keywords— Self-Organizing Map, Unsupervised Learning, Transaction Introduction The fast and rapid growth in the credit card issuers, online merchants and card users have made them very conscious about the online frauds. Card users just want to make safe transactions while purchasing their goods and on the other hand, banks want to differentiate the legitimate as well as fraudulent users. The merchants that is mostly affected as they do not have any kind of evidence like Digital Signature wants to sell their goods only to the legitimate users to make profit and want to use a great secure system that avoid them from a great loss. Our approach of Self Organizing map can work in the large complex datasets and can cluster even unaware datasets. It is an unsupervised neural network that works even in the absence of an external teacher and provides fruitful results in detecting credit card frauds. It is interesting to note that credit card fraud affect owner the least and merchant the most. The existing legislation and card holder protection policies as well as insurance scheme affect most the merchant and customer the least. Card issuer bank also has to pay the administrative cost and infrastructure cost. Studies show that average time lag between the fraudulent transaction dates and charge back notification 1344 Mitali Bansal and Suman can be high as 72 days, thereby giving fraudster sufficient time to cause severe damage. In this paper first, you will see a brief survey of different approaches on credit card fraud detection systems,. In Section 2 we explain the design and architecture of SOM to detect Credit Card Fraud. Section 3, will represent results. Finally, Conclusion are presented in Section 4. A Survey of Credit card fraud Detection Fraud Detection Systems work by trying to identify anomalies in an environment [1]. At the early stage, the research focus lies in using rule based expert systems. The model’s rule constructed through the input of many fraud experts within the bank [2]. But when their processing is encountered, their output become was worst. Because the rule based expert system totally lies on the prior information of the data set that is generally not available easily in the case of credit card frauds. After these many Artificial Neural Network (ANN) is mostly used and solved very complex problems in a very efficient way [3]. Some believe that unsupervised methods are best to detect credit card frauds because these methods work well even in absence of external teacher. While supervised methods are based on prior data knowledge and surely needs an external teacher. Unsupervised method is used [4] [5] to detect some kind of anomalies like fraud. 
They do not cluster the data but provides a ranking on the list of all segments and by this ranking method they provide how much a segment is anomalous as compare to the whole data sets or other segments [6]. Dempster-Shafer Theory [1] is able to detect anomalous data. They did an experiment to detect infected E-mails by the help of D-S theory. As this theory can also be helpful because in this modern era all the new card information is sent through e-mails by the banks. Some various other approaches have also been used to detect Credit Card Frauds, one of which is ID3 pre pruning method in which decision tree is formed to detect anomalous data [7]. Artificial Neural Networks are other efficient and intelligent methods to detect credit card fraud. A compound method that is based on rule-based systems and ANN is used to detect Credit card fraud by Brause et al. [8]. Our work is based on self-organizing map that is based on unsupervised approach to detect Credit Card Fraud. We focus on to detect anomalous data by making clusters so that legitimate and fraudulent transactions can be differentiated. Collection of data and its pre-processing is also explained by giving example in fraud detection. SYSTEM DESIGN ARCHITECTURE The SOM works well in detecting Credit Card Fraud and all its interesting properties we have already discussed. Here we provide some detailed prototype and working of SOM in fraud detection. Credit Card Fraud Detection Using Self Organised Map 1345 Our Approach to detect Credit Card Fraud Using SOM Our approach towards Real time Credit Card Fraud detection is modelled by prototype. It is a multilayered approach as: 1. Initial Selection of data set. 2. Conversion of data from Symbolic to Numerical Data Set. 3. Implementation of SOM. 4. A layer of further review and decision making. This multilayered approach works well in the detection of Credit Card Fraud. As this approach is based on SOM, so finally it will cluster the data into fraudulent and genuine sets. By further review the sets can be analyzed and proper decision can be taken based on those results. The algorithm that is implemented to detect credit card fraud using Self Organizing Map is represented in Figure 1: 1. Initially choose all neurons (weight vectors wi) randomly. 2. For each input vector Ii { 2. 1) Convert all the symbolic input to the Numerical input by applying some mean and standard deviation formulas. 2. 2) Perform the initial authentication process like verification of Pin, Address, expiry date etc. } 3. Choose the learning rate parameter randomly for eg. 0. 5 4. Initially update all neurons for each input vector Ii. 5. Apply the unsupervised approach to distinguish the transaction into fraudulent and non-fraudulent cluster. 5. 1) Perform iteration till a specific cluster is not formed for a input vector. 6. By applying SOM we can divide the transactions into fraudulent (Fk) and genuine vector (Gk). 7. Perform a manually review decision. 8. Get your optimized result. Figure 1: Algorithm to detect Credit Card Fraud Initial Selection of Data Set Input vectors are generally in the form of High Dimensional Real world quantities which will be fed to a neuron matrix. These quantities are generally divided as [9]: 1346 Mitali Bansal and Suman Figure 2: Division of Transactions to form an Input Matrix In Account related quantities we can include like account number, currency of account, account opening date, last date of credit or debit available balance etc. 
In customer related quantities we can include customer id, customer type like high profile, low profile etc. In transaction related quantities we can have transaction no, location, currency, its timestamp etc. Conversion of Symbolic data into Numeric In credit card fraud detection, all of the data of banking transactions will be in the form of the symbolic, so there is a need to convert that symbolic data into numeric one. For example location, name, customer id etc. Conversion of all this data needs some normal distribution mechanism on the basis of frequency. The normalizing of data is done using Z = (Ni-Mi) / S where Ni is frequency of occurrence of a particular entity, M is mean and S is standard deviation. Then after all this procedure we will arrive at normalized values [9]. Implementation of SOM After getting all the normalized values, we make a input vector matrix. After that randomly weight vector is selected, this is generally termed as Neuron matrix. Dimension of this neuron matrix will be same as input vector matrix. A randomly learning parameter α is also taken. The value of this learning parameter is a small positive value that can be adjusted according to the process. The commonly used similarity matrix is the Euclidian distance given by equation 1: Distance between two neuron = jx(p)=minj││X-Wj(p)││={ Xi-Wij(p)]}, (1) Where j=1, 2......m and W is neuron or weight matrix, X is Input vectorThe main output of SOM is the patterns and cluster it has given as output vector. The cluster in credit card fraud detection will be in the form of fraudulent and genuine set represented as Fk and Gk respectively. Credit Card Fraud Detection Using Self Organised Map 1347 Review and decision making The clustering of input data into fraudulent and genuine set shows the categories of transactions performed as well as rarely performed more frequently as well as rarely by each customer. Since by the help of SOM relationship as well as hidden patterns is unearthed, we get more accuracy in our results. If the extent of suspicious activity exceeds a certain threshold value that transaction can be sent for review. So, it reduces overall processing time and complexity. Results The no of transactions taken in Test1, Test2, Test3 and Test4 are 500, 1000, 1500 and 2000 respectively. When compared to ID3 algorithm our approach presents much efficient result as shown in figure 3. Conclusion As results shows that SOM gives better results in case of detecting credit card fraud. As all parameters are verified and well represented in plots. The uniqueness of our approach lies in using the normalization and clustering mechanism of SOM of detecting credit card fraud. This helps in detecting hidden patterns of the transactions which cannot be identified to the other traditional method. With appropriate no of weight neurons and with help of thousands of iterations the network is trained and then result is verified to new transactions. The concept of normalization will help to normalize the values in other fraud cases and SOM will be helpful in detecting anomalies in credit card fraud cas",
"title": ""
},
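The passage above spells out two concrete steps: the frequency z-score normalization Z = (Ni - Mi)/S for symbolic fields and the Euclidean best-matching-neuron rule of equation (1). The sketch below ties those two steps together in a bare-bones SOM-style clustering of toy transactions; it omits the neighbourhood function and the manual-review layer, and the feature choice, learning schedule and data are our assumptions rather than the paper's.

```python
import numpy as np

def frequency_zscore(column):
    """Encode a symbolic column (e.g. location) by the z-score of each
    value's frequency of occurrence, as in Z = (Ni - Mi) / S."""
    values, counts = np.unique(column, return_counts=True)
    freq = dict(zip(values, counts))
    n = np.array([freq[v] for v in column], dtype=float)
    return (n - n.mean()) / (n.std() + 1e-9)

def best_matching_neuron(x, weights):
    """Index j minimizing ||x - w_j||, i.e. equation (1) in the passage."""
    return int(np.argmin(np.linalg.norm(weights - x, axis=1)))

def train_som(data, n_neurons=4, alpha=0.5, epochs=20, seed=0):
    """Bare-bones winner-take-all training (no neighbourhood function), enough
    to show the clustering idea rather than a faithful SOM reimplementation."""
    rng = np.random.default_rng(seed)
    weights = rng.normal(size=(n_neurons, data.shape[1]))
    for epoch in range(epochs):
        lr = alpha * (1.0 - epoch / epochs)       # decaying learning rate
        for x in data:
            j = best_matching_neuron(x, weights)
            weights[j] += lr * (x - weights[j])   # pull the winner towards x
    return weights

# Toy transactions: [amount (z-scored), location frequency (z-scored)].
amounts = np.array([12, 15, 9, 14, 950, 11, 13, 990], dtype=float)
locations = ["home", "home", "home", "home", "abroad", "home", "home", "abroad"]
data = np.c_[(amounts - amounts.mean()) / amounts.std(), frequency_zscore(locations)]
weights = train_som(data)
print([best_matching_neuron(x, weights) for x in data])  # cluster id per transaction
```

The two large, rare-location transactions should land in a different cluster than the rest, which is the fraudulent/genuine split (Fk versus Gk) the passage describes, here left for manual review to label.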
{
"docid": "f7f609ebb1a0fcf789e5e2e5fe463718",
"text": "Individuals with generalized anxiety disorder (GAD) display poor emotional conflict adaptation, a cognitive control process requiring the adjustment of performance based on previous-trial conflict. It is unclear whether GAD-related conflict adaptation difficulties are present during tasks without emotionally-salient stimuli. We examined conflict adaptation using the N2 component of the event-related potential (ERP) and behavioral responses on a Flanker task from 35 individuals with GAD and 35 controls. Groups did not differ on conflict adaptation accuracy; individuals with GAD also displayed intact RT conflict adaptation. In contrast, individuals with GAD showed decreased amplitude N2 principal component for conflict adaptation. Correlations showed increased anxiety and depressive symptoms were associated with longer RT conflict adaptation effects and lower ERP amplitudes, but not when separated by group. We conclude that individuals with GAD show reduced conflict-related component processes that may be influenced by compensatory activity, even in the absence of emotionally-salient stimuli.",
"title": ""
},
{
"docid": "e6bb946ea2984ccb54fd37833bb55585",
"text": "11 Automatic Vehicles Counting and Recognizing (AVCR) is a very challenging topic in transport engineering having important implications for the modern transport policies. Implementing a computer-assisted AVCR in the most vital districts of a country provides a large amount of measurements which are statistically processed and analyzed, the purpose of which is to optimize the decision-making of traffic operation, pavement design, and transportation planning. Since the advent of computer vision technology, video-based surveillance of road vehicles has become a key component in developing autonomous intelligent transportation systems. In this context, this paper proposes a Pattern Recognition system which employs an unsupervised clustering algorithm with the objective of detecting, counting and recognizing a number of dynamic objects crossing a roadway. This strategy defines a virtual sensor, whose aim is similar to that of an inductive-loop in a traditional mechanism, i.e. to extract from the traffic video streaming a number of signals containing anarchic information about the road traffic. Then, the set of signals is filtered with the aim of conserving only motion’s significant patterns. Resulted data are subsequently processed by a statistical analysis technique so as to estimate and try to recognize a number of clusters corresponding to vehicles. Finite Mixture Models fitted by the EM algorithm are used to assess such clusters, which provides ∗Corresponding author Email addresses: [email protected] (Hana RABBOUCH), [email protected] (Foued SAÂDAOUI), [email protected] (Rafaa MRAIHI) Preprint submitted to Journal of LTEX Templates April 21, 2017",
"title": ""
},
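The passage above fits finite mixture models by EM to recognise clusters corresponding to vehicles in the virtual-sensor signals. A minimal sketch of that clustering step is given below, using scikit-learn's GaussianMixture (which is fitted by EM) on a made-up one-dimensional activation signal; the feature, component count and the rule for picking the 'vehicle' component are assumptions, not the paper's pipeline.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Made-up virtual-sensor signal: foreground activation of a detection line over
# time; peaks (vehicles) stand out against a low-activation background.
rng = np.random.default_rng(42)
background = rng.normal(0.05, 0.02, size=400)
vehicles = rng.normal(0.75, 0.10, size=60)
signal = np.clip(np.concatenate([background, vehicles]), 0, 1)
rng.shuffle(signal)

# Fit a 2-component Gaussian mixture by EM and label each sample.
X = signal.reshape(-1, 1)
gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
labels = gmm.predict(X)

# The component with the larger mean is taken to be the 'vehicle' cluster.
vehicle_component = int(np.argmax(gmm.means_.ravel()))
print("estimated vehicle samples:", int((labels == vehicle_component).sum()))
print("component means:", gmm.means_.ravel())
```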
{
"docid": "4d84b8dbcd0d5922fa3b20287b75c449",
"text": "We investigate an efficient parallelization of the most common iterative sparse tensor decomposition algorithms on distributed memory systems. A key operation in each iteration of these algorithms is the matricized tensor times Khatri-Rao product (MTTKRP). This operation amounts to element-wise vector multiplication and reduction depending on the sparsity of the tensor. We investigate a fine and a coarse-grain task definition for this operation, and propose hypergraph partitioning-based methods for these task definitions to achieve the load balance as well as reduce the communication requirements. We also design a distributed memory sparse tensor library, HyperTensor, which implements a well-known algorithm for the CANDECOMP-/PARAFAC (CP) tensor decomposition using the task definitions and the associated partitioning methods. We use this library to test the proposed implementation of MTTKRP in CP decomposition context, and report scalability results up to 1024 MPI ranks. We observed up to 194 fold speedups using 512 MPI processes on a well-known real world data, and significantly better performance results with respect to a state of the art implementation.",
"title": ""
},
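For readers unfamiliar with the MTTKRP operation named in the passage above, here is a serial reference version for one mode of a third-order sparse tensor in COO form, cross-checked against the dense definition. It illustrates only the kernel itself; the paper's actual contribution (hypergraph-partitioned distributed execution) is not reproduced, and the toy sizes are arbitrary.

```python
import numpy as np

def mttkrp_mode0(indices, values, B, C, n_rows):
    """Serial reference MTTKRP for mode 0 of a 3-way sparse tensor in COO form:
    M(i, :) += X(i, j, k) * (B(j, :) * C(k, :)) for every nonzero X(i, j, k)."""
    M = np.zeros((n_rows, B.shape[1]))
    for (i, j, k), x in zip(indices, values):
        M[i] += x * B[j] * C[k]      # elementwise product of the two factor rows
    return M

# Toy sparse 4 x 5 x 6 tensor with 8 nonzeros and rank-3 factor matrices.
rng = np.random.default_rng(3)
nnz = 8
indices = np.c_[rng.integers(0, 4, nnz), rng.integers(0, 5, nnz), rng.integers(0, 6, nnz)]
values = rng.normal(size=nnz)
B, C = rng.normal(size=(5, 3)), rng.normal(size=(6, 3))
M = mttkrp_mode0(indices, values, B, C, n_rows=4)

# Cross-check against the dense definition M[i, r] = sum_{j,k} X[i,j,k] B[j,r] C[k,r].
X = np.zeros((4, 5, 6))
for (i, j, k), x in zip(indices, values):
    X[i, j, k] += x
print(np.allclose(M, np.einsum('ijk,jr,kr->ir', X, B, C)))
```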
{
"docid": "6c682f3412cc98eac5ae2a2356dccef7",
"text": "Since their inception, micro-size light emitting diode (µLED) arrays based on III-nitride semiconductors have emerged as a promising technology for a range of applications. This paper provides an overview on a decade progresses on realizing III-nitride µLED based high voltage single-chip AC/DC-LEDs without power converters to address the key compatibility issue between LEDs and AC power grid infrastructure; and high-resolution solid-state self-emissive microdisplays operating in an active driving scheme to address the need of high brightness, efficiency and robustness of microdisplays. These devices utilize the photonic integration approach by integrating µLED arrays on-chip. Other applications of nitride µLED arrays are also discussed.",
"title": ""
},
{
"docid": "14fe7deaece11b3d4cd4701199a18599",
"text": "\"Natively unfolded\" proteins occupy a unique niche within the protein kingdom in that they lack ordered structure under conditions of neutral pH in vitro. Analysis of amino acid sequences, based on the normalized net charge and mean hydrophobicity, has been applied to two sets of proteins: small globular folded proteins and \"natively unfolded\" ones. The results show that \"natively unfolded\" proteins are specifically localized within a unique region of charge-hydrophobicity phase space and indicate that a combination of low overall hydrophobicity and large net charge represent a unique structural feature of \"natively unfolded\" proteins.",
"title": ""
},
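The passage above separates natively unfolded from folded proteins in the plane of mean net charge versus mean hydrophobicity. The sketch below computes a crude version of both quantities from a sequence, using the Kyte-Doolittle hydropathy scale rescaled to [0, 1] and a commonly quoted boundary line; the scale choice, the lack of window averaging and the boundary constants are approximations of the original analysis and should be treated as placeholders.

```python
# Kyte-Doolittle hydropathy values, rescaled below to [0, 1].
KD = {
    'I': 4.5, 'V': 4.2, 'L': 3.8, 'F': 2.8, 'C': 2.5, 'M': 1.9, 'A': 1.8,
    'G': -0.4, 'T': -0.7, 'S': -0.8, 'W': -0.9, 'Y': -1.3, 'P': -1.6,
    'H': -3.2, 'E': -3.5, 'Q': -3.5, 'D': -3.5, 'N': -3.5, 'K': -3.9, 'R': -4.5,
}
KD_MIN, KD_MAX = min(KD.values()), max(KD.values())

def mean_hydrophobicity(seq):
    """Mean Kyte-Doolittle hydropathy rescaled to [0, 1] (no sliding window)."""
    vals = [(KD[a] - KD_MIN) / (KD_MAX - KD_MIN) for a in seq]
    return sum(vals) / len(vals)

def mean_net_charge(seq):
    """Absolute net charge per residue at neutral pH: D/E count -1, K/R count +1."""
    charge = sum(+1 if a in 'KR' else -1 if a in 'DE' else 0 for a in seq)
    return abs(charge) / len(seq)

def looks_natively_unfolded(seq, slope=2.785, intercept=1.151):
    """Crude check against a charge-hydrophobicity boundary <R> = slope*<H> - intercept
    (the constants are placeholders for the published boundary)."""
    return mean_net_charge(seq) > slope * mean_hydrophobicity(seq) - intercept

seq = "MDSKEQEERKKEEDKKDDEEENKKRS"  # invented sequence, highly charged and hydrophilic
print(mean_hydrophobicity(seq), mean_net_charge(seq), looks_natively_unfolded(seq))
```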
{
"docid": "041772bbad50a5bf537c0097e1331bdd",
"text": "As students read expository text, comprehension is improved by pausing to answer questions that reinforce the material. We describe an automatic question generator that uses semantic pattern recognition to create questions of varying depth and type for self-study or tutoring. Throughout, we explore how linguistic considerations inform system design. In the described system, semantic role labels of source sentences are used in a domain-independent manner to generate both questions and answers related to the source sentence. Evaluation results show a 44% reduction in the error rate relative to the best prior systems, averaging over all metrics, and up to 61% reduction in the error rate on grammaticality judgments.",
"title": ""
},
{
"docid": "d1eed1d7875930865944c98fbab5f7e1",
"text": "Optic disc (OD) and fovea locations are two important anatomical landmarks in automated analysis of retinal disease in color fundus photographs. This paper presents a new, fast, fully automatic optic disc and fovea localization algorithm developed for diabetic retinopathy (DR) screening. The optic disc localization methodology comprises of two steps. First, the OD location is identified using template matching and directional matched filter. To reduce false positives due to bright areas of pathology, we exploit vessel characteristics inside the optic disc. The location of the fovea is estimated as the point of lowest matched filter response within a search area determined by the optic disc location. Second, optic disc segmentation is performed. Based on the detected optic disc location, a fast hybrid level-set algorithm which combines the region information and edge gradient to drive the curve evolution is used to segment the optic disc boundary. Extensive evaluation was performed on 1200 images (Messidor) composed of 540 images of healthy retinas, 431 images with DR but no risk of macular edema (ME), and 229 images with DR and risk of ME. The OD location methodology obtained 98.3% success rate, while fovea location achieved 95% success rate. The average mean absolute distance (MAD) between the OD segmentation algorithm and “gold standard” is 10.5% of estimated OD radius. Qualitatively, 97% of the images achieved Excellent to Fair performance for OD segmentation. The segmentation algorithm performs well even on blurred images.",
"title": ""
}
] |
scidocsrr
|
8542d6e847a4522a40e735600bd2095a
|
An efficient data replication and load balancing technique for fog computing environment
|
[
{
"docid": "780f2a97da4f18fc3710fa0ca0489ef4",
"text": "MapReduce has gradually become the framework of choice for \"big data\". The MapReduce model allows for efficient and swift processing of large scale data with a cluster of compute nodes. However, the efficiency here comes at a price. The performance of widely used MapReduce implementations such as Hadoop suffers in heterogeneous and load-imbalanced clusters. We show the disparity in performance between homogeneous and heterogeneous clusters in this paper to be high. Subsequently, we present MARLA, a MapReduce framework capable of performing well not only in homogeneous settings, but also when the cluster exhibits heterogeneous properties. We address the problems associated with existing MapReduce implementations affecting cluster heterogeneity, and subsequently present through MARLA the components and trade-offs necessary for better MapReduce performance in heterogeneous cluster and cloud environments. We quantify the performance gains exhibited by our approach against Apache Hadoop and MARIANE in data intensive and compute intensive applications.",
"title": ""
}
] |
[
{
"docid": "8c3ecd27a695fef2d009bbf627820a0d",
"text": "This paper presents a novel attention mechanism to improve stereo-vision based object recognition systems in terms of recognition performance and computational efficiency at the same time. We utilize the Stixel World, a compact medium-level 3D representation of the local environment, as an early focus-of-attention stage for subsequent system modules. In particular, the search space of computationally expensive pattern classifiers is significantly narrowed down. We explicitly couple the 3D Stixel representation with prior knowledge about the object class of interest, i.e. 3D geometry and symmetry, to precisely focus processing on well-defined local regions that are consistent with the environment model. Experiments are conducted on large real-world datasets captured from a moving vehicle in urban traffic. In case of vehicle recognition as an experimental testbed, we demonstrate that the proposed Stixel-based attention mechanism significantly reduces false positive rates at constant sensitivity levels by up to a factor of 8 over state-of-the-art. At the same time, computational costs are reduced by more than an order of magnitude.",
"title": ""
},
{
"docid": "2c0b3b58da77cc217e4311142c0aa196",
"text": "In this paper, we show that the hinge loss can be interpreted as the neg-log-likelihood of a semi-parametric model of posterior probabilities. From this point of view, SVMs represent the parametric component of a semi-parametric model fitted by a maximum a posteriori estimation procedure. This connection enables to derive a mapping from SVM scores to estimated posterior probabilities. Unlike previous proposals, the suggested mapping is interval-valued, providing a set of posterior probabilities compatible with each SVM score. This framework offers a new way to adapt the SVM optimization problem to unbalanced classification, when decisions result in unequal (asymmetric) losses. Experiments show improvements over state-of-the-art procedures.",
"title": ""
},
{
"docid": "9c7f9ff55b02bd53e94df004dcc615b9",
"text": "Support Vector Machines (SVM) is among the most popular classification techniques in machine learning, hence designing fast primal SVM algorithms for large-scale datasets is a hot topic in recent years. This paper presents a new L2norm regularized primal SVM solver using Augmented Lagrange Multipliers, with linear computational cost for Lp-norm loss functions. The most computationally intensive steps (that determine the algorithmic complexity) of the proposed algorithm is purely and simply matrix-byvector multiplication, which can be easily parallelized on a multi-core server for parallel computing. We implement and integrate our algorithm into the interfaces and framework of the well-known LibLinear software toolbox. Experiments show that our algorithm is with stable performance and on average faster than the stateof-the-art solvers such as SVM perf , Pegasos and the LibLinear that integrates the TRON, PCD and DCD algorithms.",
"title": ""
},
{
"docid": "5d7dced0ed875fed0f11440dc26fffd1",
"text": "Different from conventional mobile networks designed to optimize the transmission efficiency of one particular service (e.g., streaming voice/ video) primarily, the industry and academia are reaching an agreement that 5G mobile networks are projected to sustain manifold wireless requirements, including higher mobility, higher data rates, and lower latency. For this purpose, 3GPP has launched the standardization activity for the first phase 5G system in Release 15 named New Radio (NR). To fully understand this crucial technology, this article offers a comprehensive overview of the state-of-the-art development of NR, including deployment scenarios, numerologies, frame structure, new waveform, multiple access, initial/random access procedure, and enhanced carrier aggregation (CA) for resource requests and data transmissions. The provided insights thus facilitate knowledge of design and practice for further features of NR.",
"title": ""
},
{
"docid": "96d8e375616a7ee137276d385c14a18a",
"text": "Constructivism is a theory of learning which claims that students construct knowledge rather than merely receive and store knowledge transmitted by the teacher. Constructivism has been extremely influential in science and mathematics education, but not in computer science education (CSE). This paper surveys constructivism in the context of CSE, and shows how the theory can supply a theoretical basis for debating issues and evaluating proposals.",
"title": ""
},
{
"docid": "70f0997789d4d61a6e5d44f15a6af32a",
"text": "This study reviewed the literature on cone-beam computerized tomography (CBCT) imaging of the oral and maxillofacial (OMF) region. A PUBMED search (National Library of Medicine, NCBI; revised 1 December 2007) from 1998 to December 2007 was conducted. This search revealed 375 papers, which were screened in detail. 176 papers were clinically relevant and were analyzed in detail. CBCT is used in OMF surgery and orthodontics for numerous clinical applications, particularly for its low cost, easy accessibility and low radiation compared with multi-slice computerized tomography. The results of this systematic review show that there is a lack of evidence-based data on the radiation dose for CBCT imaging. Terminology and technical device properties and settings were not consistent in the literature. An attempt was made to provide a minimal set of CBCT device-related parameters for dedicated OMF scanners as a guideline for future studies.",
"title": ""
},
{
"docid": "4d91850baa5995bc7d5e3d5e9e11fa58",
"text": "Drug risk management has many tools for minimizing risk and black-boxed warnings (BBWs) are one of those tools. Some serious adverse drug reactions (ADRs) emerge only after a drug is marketed and used in a larger population. In Thailand, additional legal warnings after drug approval, in the form of black-boxed warnings, may be applied. Review of their characteristics can assist in the development of effective risk mitigation. This study was a cross sectional review of all legal warnings imposed in Thailand after drug approval (2003-2012). Any boxed warnings for biological products and revised warnings which were not related to safety were excluded. Nine legal warnings were evaluated. Seven related to drugs classes and two to individual drugs. The warnings involved four main types of predictable ADRs: drug-disease interactions, side effects, overdose and drug-drug interactions. The average time from first ADRs reported to legal warnings implementation was 12 years. The triggers were from both safety signals in Thailand and regulatory measures in other countries outside Thailand.",
"title": ""
},
{
"docid": "dc71b53847d33e82c53f0b288da89bfa",
"text": "We explore the use of convolutional neural networks for the semantic classification of remote sensing scenes. Two recently proposed architectures, CaffeNet and GoogLeNet, are adopted, with three different learning modalities. Besides conventional training from scratch, we resort to pre-trained networks that are only fine-tuned on the target data, so as to avoid overfitting problems and reduce design time. Experiments on two remote sensing datasets, with markedly different characteristics, testify on the effectiveness and wide applicability of the proposed solution, which guarantees a significant performance improvement over all state-of-the-art references.",
"title": ""
},
{
"docid": "5e0921d158f0fa7b299fffba52f724d5",
"text": "Space syntax derives from a set of analytic measures of configuration that have been shown to correlate well with how people move through and use buildings and urban environments. Space syntax represents the open space of an environment in terms of the intervisibility of points in space. The measures are thus purely configurational, and take no account of attractors, nor do they make any assumptions about origins and destinations or path planning. Space syntax has found that, despite many proposed higher-level cognitive models, there appears to be a fundamental process that informs human and social usage of an environment. In this paper we describe an exosomatic visual architecture, based on space syntax visibility graphs, giving many agents simultaneous access to the same pre-processed information about the configuration of a space layout. Results of experiments in a simulated retail environment show that a surprisingly simple ‘random next step’ based rule outperforms a more complex ‘destination based’ rule in reproducing observed human movement behaviour. We conclude that the effects of spatial configuration on movement patterns that space syntax studies have found are consistent with a model of individual decision behaviour based on the spatial affordances offered by the morphology of the local visual field.",
"title": ""
},
{
"docid": "5910bcdd2dcacb42d47194a70679edb1",
"text": "Developing effective suspicious activity detection methods has become an increasingly critical problem for governments and financial institutions in their efforts to fight money laundering. Previous anti-money laundering (AML) systems were mostly rule-based systems which suffered from low efficiency and could can be easily learned and evaded by money launders. Recently researchers have begun to use machine learning methods to solve the suspicious activity detection problem. However nearly all these methods focus on detecting suspicious activities on accounts or individual level. In this paper we propose a sequence matching based algorithm to identify suspicious sequences in transactions. Our method aims to pick out suspicious transaction sequences using two kinds of information as reference sequences: 1) individual account’s transaction history and 2) transaction information from other accounts in a peer group. By introducing the reference sequences, we can combat those who want to evade regulations by simply learning and adapting reporting criteria, and easily detect suspicious patterns. The initial results show that our approach is highly accurate.",
"title": ""
},
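The entry above describes matching a transaction sequence against two kinds of reference sequences (the account's own history and a peer group) only at a high level; the paper's exact algorithm is not reproduced here. The following Python snippet is a purely illustrative, hypothetical sketch of that idea using dynamic time warping as the sequence-similarity measure; the function names, the absence of a calibrated threshold, and the toy numbers are assumptions, not the authors' implementation.

import numpy as np

def dtw_distance(a, b):
    # Classic dynamic time warping distance between two 1-D sequences.
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

def suspicion_score(candidate, own_history, peer_histories):
    # The further the candidate sequence is from BOTH its own history and
    # the closest peer-group sequence, the more suspicious it looks.
    d_own = dtw_distance(candidate, own_history)
    d_peer = min(dtw_distance(candidate, p) for p in peer_histories)
    return min(d_own, d_peer)

# Toy daily transaction totals (all values invented for illustration).
own_history = [120, 90, 100, 110, 95]
peer_histories = [[100, 105, 98, 110, 102], [90, 95, 88, 92, 99]]
candidate = [120, 9000, 8700, 110, 95]  # sudden large structured transfers
print(suspicion_score(candidate, own_history, peer_histories))

A real system would of course normalize amounts, work on richer features than raw totals, and calibrate a reporting threshold on labeled cases.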
{
"docid": "a0eb1b462d2169f5e7fa67690169591f",
"text": "In this paper, we present 3 different neural network-based methods to perform variable selection. OCD Optimal Cell Damage is a pruning method, which evaluates the usefulness of a variable and prunes the least useful ones (it is related to the Optimal Brain Damage method of J_.e Cun et al.). Regularization theory proposes to constrain estimators by adding a term to the cost function used to train a neural network. In the Bayesian framework, this additional term can be interpreted as the log prior to the weights distribution. We propose to use two priors (a Gaussian and a Gaussian mixture) and show that this regularization approach allows to select efficient subsets of variables. Our methods are compared to conventional statistical selection procedures and are shown to significantly improve on that.",
"title": ""
},
{
"docid": "6d3dbbf788255dfc137b1324e491fd9d",
"text": "Nowadays, a great number of healthcare data are generated every day from both medical institutions and individuals. Healthcare information exchange (HIE) has been proved to benefit the medical industry remarkably. To store and share such large amount of healthcare data is important while challenging. In this paper, we propose BlocHIE, a Blockchain-based platform for healthcare information exchange. First, we analyze the different requirements for sharing healthcare data from different sources. Based on the analysis, we employ two loosely-coupled Blockchains to handle different kinds of healthcare data. Second, we combine off-chain storage and on-chain verification to satisfy the requirements of both privacy and authenticability. Third, we propose two fairness-based packing algorithms to improve the system throughput and the fairness among users jointly. To demonstrate the practicability and effectiveness of BlocHIE, we implement BlocHIE in a minimal-viable-product way and evaluate the proposed packing algorithms extensively.",
"title": ""
},
{
"docid": "3714dabbe309545a1926e06e82f91975",
"text": "The automatic generation of anime characters offers an opportunity to bring a custom character into existence without professional skill. Besides, professionals may also take advantages of the automatic generation for inspiration on animation and game character design. however results from existing models [15, 18, 8, 22, 12] on anime image generation are blurred and distorted on an non-trivial frequency, thus generating industry-standard facial images for anime characters remains a challenge. In this paper, we propose a model that produces anime faces at high quality with promising rate of success with three-fold contributions: A clean dataset from Getchu, a suitable DRAGAN[10]-based SRResNet[11]like GAN model, and our general approach to training conditional model from image with estimated tags as conditions. We also make available a public accessible web interface.",
"title": ""
},
{
"docid": "22bb6af742b845dea702453b6b14ef3a",
"text": "Errors are prevalent in data sequences, such as GPS trajectories or sensor readings. Existing methods on cleaning sequential data employ a constraint on value changing speeds and perform constraint-based repairing. While such speed constraints are effective in identifying large spike errors, the small errors that do not significantly deviate from the truth and indeed satisfy the speed constraints can hardly be identified and repaired. To handle such small errors, in this paper, we propose a statistical based cleaning method. Rather than declaring a broad constraint of max/min speeds, we model the probability distribution of speed changes. The repairing problem is thus to maximize the likelihood of the sequence w.r.t. the probability of speed changes. We formalize the likelihood-based cleaning problem, show its NP-hardness, devise exact algorithms, and propose several approximate/heuristic methods to trade off effectiveness for efficiency. Experiments on real data sets (in various applications) demonstrate the superiority of our proposal.",
"title": ""
},
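The entry above replaces hard max/min speed constraints with a probability distribution over speed changes and repairs a sequence by maximizing its likelihood. The snippet below is a toy, hypothetical sketch of that idea only: it fits nothing, assumes a Gaussian over speed changes with given parameters, and greedily nudges single points, which is far cruder than the exact and approximate algorithms discussed in the entry.

import numpy as np

def speed_changes(x, t):
    # "Speed" is the value change per unit time; we model its first differences.
    v = np.diff(x) / np.diff(t)
    return np.diff(v)

def log_likelihood(x, t, mu, sigma):
    # Gaussian log-likelihood of the observed speed changes.
    u = speed_changes(x, t)
    return np.sum(-0.5 * ((u - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi)))

def greedy_repair(x, t, mu, sigma, step=0.5, iters=200):
    # Repeatedly nudge the single point whose adjustment most increases the
    # likelihood of the speed changes; stop when no nudge helps.
    x = np.asarray(x, dtype=float).copy()
    for _ in range(iters):
        base = log_likelihood(x, t, mu, sigma)
        best_gain, best_move = 0.0, None
        for i in range(1, len(x) - 1):
            for delta in (step, -step):
                x[i] += delta
                gain = log_likelihood(x, t, mu, sigma) - base
                x[i] -= delta
                if gain > best_gain:
                    best_gain, best_move = gain, (i, delta)
        if best_move is None:
            break
        i, delta = best_move
        x[i] += delta
    return x

t = np.arange(8.0)
dirty = np.array([10.0, 10.5, 11.0, 14.0, 12.0, 12.5, 13.0, 13.5])  # small spike at index 3
print(greedy_repair(dirty, t, mu=0.0, sigma=0.5))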
{
"docid": "658c7ae98ea4b0069a7a04af1e462307",
"text": "Exploiting packetspsila timing information for covert communication in the Internet has been explored by several network timing channels and watermarking schemes. Several of them embed covert information in the inter-packet delay. These channels, however, can be detected based on the perturbed traffic pattern, and their decoding accuracy could be degraded by jitter, packet loss and packet reordering events. In this paper, we propose a novel TCP-based timing channel, named TCPScript to address these shortcomings. TCPScript embeds messages in ldquonormalrdquo TCP data bursts and exploits TCPpsilas feedback and reliability service to increase the decoding accuracy. Our theoretical capacity analysis and extensive experiments have shown that TCPScript offers much higher channel capacity and decoding accuracy than an IP timing channel and JitterBug. On the countermeasure, we have proposed three new metrics to detect aggressive TCPScript channels.",
"title": ""
},
{
"docid": "0b7ed990d65be35f445d4243d627f9cd",
"text": "A middle-1x nm design rule multi-level NAND flash memory cell (M1X-NAND) has been successfully developed for the first time. 1) QSPT (Quad Spacer Patterning Technology) of ArF immersion lithography is used for patterning mid-1x nm rule wordline (WL). In order to achieve high performance and reliability, several integration technologies are adopted, such as 2) advanced WL air-gap process, 3) floating gate slimming process, and 4) optimized junction formation scheme. And also, by using 5) new N±1 WL Vpass scheme during programming, charge loss and program speed are greatly improved. As a result, mid-1x nm design rule NAND flash memories has been successfully realized.",
"title": ""
},
{
"docid": "17ed907c630ec22cbbb5c19b5971238d",
"text": "The fastest tools for network reachability queries use adhoc algorithms to compute all packets from a source S that can reach a destination D. This paper examines whether network reachability can be solved efficiently using existing verification tools. While most verification tools only compute reachability (“Can S reach D?”), we efficiently generalize them to compute all reachable packets. Using new and old benchmarks, we compare model checkers, SAT solvers and various Datalog implementations. The only existing verification method that worked competitively on all benchmarks in seconds was Datalog with a new composite Filter-Project operator and a Difference of Cubes representation. While Datalog is slightly slower than the Hassel C tool, it is far more flexible. We also present new results that more precisely characterize the computational complexity of network verification. This paper also provides a gentle introduction to program verification for the networking community.",
"title": ""
},
{
"docid": "db8b26229ced95bab2028d0b8eb8a43f",
"text": "OBJECTIVES\nThis study investigated isometric and isokinetic hip strength in individuals with and without symptomatic femoroacetabular impingement (FAI). The specific aims were to: (i) determine whether differences exist in isometric and isokinetic hip strength measures between groups; (ii) compare hip strength agonist/antagonist ratios between groups; and (iii) examine relationships between hip strength and self-reported measures of either hip pain or function in those with FAI.\n\n\nDESIGN\nCross-sectional.\n\n\nMETHODS\nFifteen individuals (11 males; 25±5 years) with symptomatic FAI (clinical examination and imaging (alpha angle >55° (cam FAI), and lateral centre edge angle >39° and/or positive crossover sign (combined FAI))) and 14 age- and sex-matched disease-free controls (no morphological FAI on magnetic resonance imaging) underwent strength testing. Maximal voluntary isometric contraction strength of hip muscle groups and isokinetic hip internal (IR) and external rotation (ER) strength (20°/s) were measured. Groups were compared with independent t-tests and Mann-Whitney U tests.\n\n\nRESULTS\nParticipants with FAI had 20% lower isometric abduction strength than controls (p=0.04). There were no significant differences in isometric strength for other muscle groups or peak isokinetic ER or IR strength. The ratio of isometric, but not isokinetic, ER/IR strength was significantly higher in the FAI group (p=0.01). There were no differences in ratios for other muscle groups. Angle of peak IR torque was the only feature correlated with symptoms.\n\n\nCONCLUSIONS\nIndividuals with symptomatic FAI demonstrate isometric hip abductor muscle weakness and strength imbalance in the hip rotators. Strength measurement, including agonist/antagonist ratios, may be relevant for clinical management of FAI.",
"title": ""
},
{
"docid": "d284fff9eed5e5a332bb3cfc612a081a",
"text": "This paper describes the NILC USP system that participated in SemEval-2013 Task 2: Sentiment Analysis in Twitter. Our system adopts a hybrid classification process that uses three classification approaches: rulebased, lexicon-based and machine learning approaches. We suggest a pipeline architecture that extracts the best characteristics from each classifier. Our system achieved an Fscore of 56.31% in the Twitter message-level subtask.",
"title": ""
}
] |
scidocsrr
|
70efe5abbfaba4e4e37050dc906b7a85
|
Maximum battery life routing to support ubiquitous mobile computing in wireless ad hoc networks
|
[
{
"docid": "bbdb676a2a813d29cd78facebc38a9b8",
"text": "In this paper we develop a new multiaccess protocol for ad hoc radio networks. The protocol is based on the original MACA protocol with the adition of a separate signalling channel. The unique feature of our protocol is that it conserves battery power at nodes by intelligently powering off nodes that are not actively transmitting or receiving packets. The manner in which nodes power themselves off does not influence the delay or throughput characteristics of our protocol. We illustrate the power conserving behavior of PAMAS via extensive simulations performed over ad hoc networks containing 10-20 nodes. Our results indicate that power savings of between 10% and 70% are attainable in most systems. Finally, we discuss how the idea of power awareness can be built into other multiaccess protocols as well.",
"title": ""
},
{
"docid": "b5da410382e8ad27f012f3adac17592e",
"text": "In this paper, we propose a new routing protocol, the Zone Routing Protocol (ZRP), for the Reconfigurable Wireless Networks, a large scale, highly mobile ad-hoc networking environment. The novelty of the ZRP protocol is that it is applicable to large flat-routed networks. Furthermore, through the use of the zone radius parameter, the scheme exhibits adjustable hybrid behavior of proactive and reactive routing schemes. We evaluate the performance of the protocol, showing the reduction in the number of control messages, as compared with other reactive schemes, such as flooding. INTRODUCTION Recently, there has been an increased interest in ad-hoc networking [1]. In general, ad-hoc networks are network architecture that can be rapidly deployed, without preexistence of any fixed infrastructure. A special case of ad-hoc networks, the Reconfigurable Wireless Networks (RWN), was previously introduced [2,3] to emphasize a number of special characteristics of the RWN communication environment: 3⁄4 large network coverage; large network radius, net r , 3⁄4 large number of network nodes, and 3⁄4 large range of nodal velocities (from stationary to highly mobile). In particular, the topology of the RWN is quite frequently changing, while self-adapting to the connectivity and propagation conditions and to the traffic and mobility patterns. Examples of the use of the RWNs are: • military (tactical) communication for fast establishment of communication infrastructure during deployment of forces in a foreign (hostile) terrain • rescue missions for communication in areas without adequate wireless coverage • national security for communication in times of national crisis, when the existing communication infrastructure is non-operational due to a natural disasters or a global war • law enforcement similar to tactical communication 1 For example, the maximal nodal velocity is such that the lifetime of a link can be between hundreds of milliseconds to few seconds only. • commercial use for setting up communication in exhibitions, conferences, or sale presentations • education for operation of virtual classrooms • sensor networks for communication between intelligent sensors (e.g., MEMS) mounted on mobile platforms. Basically, there are two approaching in providing ad-hoc network connectivity: flat-routed or hierarchical network architectures. An example of a flat-routed network is shown in Figure 1 and of a two-tiered hierarFigure 1: A flat-routed ad-hoc network chical network in Figure 2. In flat-routed networks, all the nodes are “equal” and the packet routing is done based on peer-to-peer connections, restricted only by the propagation conditions. In hierarchical networks, there are at least two tiers; on the lower tier, nodes in geographical proximity create peer-to-peer networks. In each one of these lower-tier networks, at least one node is designated to serve as a \"gateway” to the higher tier. These “gateway” nodes create the highertier network, which usually requires more powerful transmitters/receivers. Although routing between nodes that belong to the same lower-tier network is based on peer-to-peer routing, routing between nodes that belong to different lower-tier networks is through the gateway nodes. Figure 2: A two-tiered ad-hoc network tier-1 network tier-2 network tier-1 network tier-1 network tier-1 network cluster cluster head We will omit here the comparison of the two architectures. 
Nevertheless, we note that flat-routed networks are more suitable for a highly versatile communication environment such as the RWN. The reason is that the maintenance of the hierarchies (and the associated cluster heads) is too costly in network resources when the lifetime of the links is quite short. Thus, we chose to concentrate on the flat-routed network architecture in our study of the routing protocols for the RWN. PREVIOUS AND RELATED WORK The currently available routing protocols are inadequate for the RWN. The main problem is that either they do not support a fast-changing network architecture or they do not scale well with the size of the network (number of nodes). Surprisingly, these shortcomings are present even in some routing protocols that were proposed for ad-hoc networks. More specifically, the challenge stems from the fact that, on one hand, in order to route packets in a network, the network topology needs to be known to the traversed nodes. On the other hand, in an RWN, this topology may change quite often. Also, the number of nodes may be very large. Thus, a heavy update load is required, which conflicts with the fact that updates are expensive in the wireless communication environment. Furthermore, as the number of network nodes may be large, the potential number of destinations is also large, requiring large and frequent exchanges of data (e.g., routes, route updates, or routing tables) between network nodes. The wired Internet uses routing protocols based on topological broadcast, such as OSPF [4]. These protocols are not suitable for the RWN due to the relatively large bandwidth required for update messages. In the past, routing in multi-hop packet radio networks was based on shortest-path routing algorithms [5], such as the Distributed Bellman-Ford (DBF) algorithm. These algorithms suffer from very slow convergence (the “counting to infinity” problem). Besides, DBF-like algorithms incur a large update-message penalty. Protocols that attempted to cure some of the shortcomings of DBF, such as Destination-Sequenced Distance-Vector Routing (DSDV) [6], were proposed and studied. Nevertheless, synchronization problems and extra processing overhead are common in these protocols. Other protocols that rely on information from the predecessor on the shortest path solve the slow convergence problem of DBF (e.g., [7]). However, the processing requirements of these protocols may be quite high, because of the way they process the update messages. Use of a dynamic source routing protocol, which utilizes flooding to discover a route to a destination, is described in [8]. A number of optimization techniques, such as route caching, are also presented that reduce the route determination/maintenance overhead. In a highly dynamic environment such as the RWN, this type of protocol leads to large delays, and the techniques to reduce overhead may not perform well. A query-reply based routing protocol has been introduced recently in [9]. Practical implementation of this protocol in RWNs can lead, however, to high communication requirements. A new distance-vector routing protocol for packet radio networks (WRP) is presented in [10]. Upon a change in the network topology, WRP relies on communicating the change to its neighbors, which effectively propagates it throughout the whole network. The salient advantage of WRP is the considerable reduction in the probability of loops in the calculated routes.
The main disadvantage of WRP for the RWN is the fact that full routing information is constantly maintained in each network node, obtained at a relatively high cost in wireless resources. In [11], routing is based on temporary addresses assigned to nodes. These addresses are a concatenation of the node's addresses on a physical and a virtual network. However, routing requires full connectivity among all the physical network nodes. Furthermore, the routing may not be optimal, as it is based on addresses, which may not be related to the geographical locations, producing a long path for communication between two close-by nodes. The above routing protocols can be classified either as proactive or as reactive. Proactive protocols attempt to continuously evaluate the routes within the network, so that when a packet needs to be forwarded, the route is already known and can be immediately used. Reactive protocols, on the other hand, invoke the route determination procedures on demand only. Thus, when a route is needed, some sort of global search procedure is employed. The advantage of the proactive schemes is that, once a route is requested, there is little delay until the route is determined. In reactive protocols, because route information may not be available at the time a routing request is received, the delay to determine a route can be quite significant. Because of this long delay, pure reactive routing protocols may not be applicable to real-time communication. However, pure proactive schemes are likewise not appropriate for the RWN environment, as they continuously use a large portion of the network capacity to keep the routing information current. Since in an RWN nodes move quite fast, and as the changes may be more frequent than the routing requests, most of this routing information is never used! This results in an excessive waste of the network capacity. What is needed is a protocol that, on one hand, initiates the route-determination procedure on demand, but, on the other hand, limits the cost of the global search. The routing protocol introduced here, which is based on the notion of routing zones, incurs very low overhead in route determination. It requires maintaining a small amount of routing information in each node. There is no overhead of wireless resources to maintain routing information for inactive routes. Moreover, it identifies multiple routes with no looping problems. The ZONE ROUTING PROTOCOL (ZRP) Our approach to routing in the RWN is based on the notion of a routing zone, which is defined for each node and includes the nodes whose distance (e.g., in hops) is at most some predefined number. This distance is referred to here as the zone radius, r_zone. Each node is required to know the topology of the network within its routing zone only, and nodes are updated about topological changes only within their routing zone. Thus, even though a network can be quite large, the updates are only locally propagated. Since for a radius greater than 1 the routing zones heavily overlap, the routing tends to be extremely robust. The rout",
"title": ""
}
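The ZRP passage above defines a node's routing zone as all nodes within r_zone hops. As a small illustration (not taken from the paper), the sketch below computes such a zone with a depth-bounded breadth-first search over a hypothetical adjacency list; the names and the toy topology are assumptions for illustration only.

from collections import deque

def routing_zone(adjacency, source, zone_radius):
    # Depth-bounded BFS: every node reachable within zone_radius hops belongs
    # to the source's routing zone; nodes at exactly zone_radius hops form
    # its periphery.
    dist = {source: 0}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        if dist[node] == zone_radius:
            continue
        for neighbor in adjacency.get(node, ()):
            if neighbor not in dist:
                dist[neighbor] = dist[node] + 1
                queue.append(neighbor)
    zone = set(dist)
    periphery = {n for n, d in dist.items() if d == zone_radius}
    return zone, periphery

# Hypothetical 6-node topology.
adjacency = {
    "A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"],
    "D": ["B", "C", "E"], "E": ["D", "F"], "F": ["E"],
}
print(routing_zone(adjacency, "A", zone_radius=2))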
] |
[
{
"docid": "31a198040fed8ce96dae2968a4060e4d",
"text": "Recent research has indicated that the degree of strategic planning in organisations is likely to have a direct impact on business performance and business evaluation. However, these findings leave small and medium-sized businesses (SMEs) in particular, with the challenge of matching the requirement for an improved strategic planning processes with the competitive advantage associated with being a “simple” and highly responsive organisation. In response to that challenge this paper discusses the potential benefits to SMEs in adopting the Balanced Scorecard methodology and the underlying management processes most relevant to SMEs. It also makes observations about how use and value may differ between Balanced Scorecard application in large and smaller enterprises.",
"title": ""
},
{
"docid": "abbb08ccfac8a7fb3bfe92e950bd4186",
"text": "This paper presents how text summarization can be influenced by textual entailment. We show that if we use textual entailment recognition together with text summarization approach, we achieve good results for final summaries, obtaining an improvement of 6.78% with respect to the summarization approach only. We also compare the performance of this combined approach to two baselines (the one provided in DUC 2002 and ours based on word-frequency technique) and we discuss the preliminary results obtained in order to infer conclusions that can be useful for future research.",
"title": ""
},
{
"docid": "6f6ebcdc15339df87b9499c0760936ce",
"text": "This paper outlines the design, implementation and evaluation of CAPTURE - a novel automated, continuously working cyber attack forecast system. It uses a broad range of unconventional signals from various public and private data sources and a set of signals forecasted via the Auto-Regressive Integrated Moving Average (ARIMA) model. While generating signals, auto cross correlation is used to find out the optimum signal aggregation and lead times. Generated signals are used to train a Bayesian classifier against the ground truth of each attack type. We show that it is possible to forecast future cyber incidents using CAPTURE and the consideration of the lead time could improve forecast performance.",
"title": ""
},
{
"docid": "0070d6e21bdb8bac260178603cfbf67d",
"text": "Sound is a medium that conveys functional and emotional information in a form of multilayered streams. With the use of such advantage, robot sound design can open a way for being more efficient communication in human-robot interaction. As the first step of research, we examined how individuals perceived the functional and emotional intention of robot sounds and whether the perceived information from sound is associated with their previous experience with science fiction movies. The sound clips were selected based on the context of the movie scene (i.e., Wall-E, R2-D2, BB8, Transformer) and classified as functional (i.e., platform, monitoring, alerting, feedback) and emotional (i.e., positive, neutral, negative). A total of 12 participants were asked to identify the perceived properties for each of the 30 items. We found that the perceived emotional and functional messages varied from those originally intended and differed by previous experience.",
"title": ""
},
{
"docid": "e84b6bbb2eaee0edb6ac65d585056448",
"text": "As memory accesses become slower with respect to the processor and consume more power with increasing memory size, the focus of memory performance and power consumption has become increasingly important. With the trend to develop multi-threaded, multi-core processors, the demands on the memory system will continue to scale. However, determining the optimal memory system configuration is non-trivial. The memory system performance is sensitive to a large number of parameters. Each of these parameters take on a number of values and interact in fashions that make overall trends difficult to discern. A comparison of the memory system architectures becomes even harder when we add the dimensions of power consumption and manufacturing cost. Unfortunately, there is a lack of tools in the public-domain that support such studies. Therefore, we introduce DRAMsim, a detailed and highly-configurable C-based memory system simulator to fill this gap. DRAMsim implements detailed timing models for a variety of existing memories, including SDRAM, DDR, DDR2, DRDRAM and FB-DIMM, with the capability to easily vary their parameters. It also models the power consumption of SDRAM and its derivatives. It can be used as a standalone simulator or as part of a more comprehensive system-level model. We have successfully integrated DRAMsim into a variety of simulators including MASE [15], Sim-alpha [14], BOCHS[2] and GEMS[13]. The simulator can be downloaded from www.ece.umd.edu/dramsim.",
"title": ""
},
{
"docid": "51be236c79d1af7a2aff62a8049fba34",
"text": "BACKGROUND\nAs the number of children diagnosed with autism continues to rise, resources must be available to support parents of children with autism and their families. Parents need help as they assess their unique situations, reach out for help in their communities, and work to decrease their stress levels by using appropriate coping strategies that will benefit their entire family.\n\n\nMETHODS\nA descriptive, correlational, cross-sectional study was conducted with 75 parents/primary caregivers of children with autism. Using the McCubbin and Patterson model of family behavior, adaptive behaviors of children with autism, family support networks, parenting stress, and parent coping were measured.\n\n\nFINDINGS AND CONCLUSIONS\nAn association between low adaptive functioning in children with autism and increased parenting stress creates a need for additional family support as parents search for different coping strategies to assist the family with ongoing and new challenges. Professionals should have up-to-date knowledge of the supports available to families and refer families to appropriate resources to avoid overwhelming them with unnecessary and inappropriate referrals.",
"title": ""
},
{
"docid": "9c79105367f92ee1d6ac604af2105bf2",
"text": "Vector controlled motor drives are widely used in industry application areas, usually they contain two current sensors and a speed sensor. A fault diagnosis and reconfiguration structure is proposed in this paper including current sensor measurement errors and sensors open-circuit fault. Sliding windows and special features are designed to real-time detect the measurement errors, compensations are made according to detected offset and scaling values. When open-circuit faults occur, sensor outputs are constant-zero, the residuals between the Extended Kalman Filter (EKF) outputs and the sensors outputs are larger than pre-defined close-to-zero thresholds, under healthy condition, the residuals are equal to zero, as a result, the residuals can be used for open circuit fault detection. In this situation, the feedback signals immediately switch to EKF outputs to realize reconfiguration. Fair robustness are evaluated under disturbance such as load torque changes and variable speed. Simulation results show the effectiveness and merits of the proposed methods in this paper.",
"title": ""
},
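The entry above detects open-circuit faults by thresholding the residual between an estimator's output and the raw sensor output. The snippet below is a minimal, hypothetical sketch of that residual test; the threshold value, the persistence window, and the use of a noisy reference signal as a stand-in for a full Extended Kalman Filter are simplifying assumptions, not the authors' design.

import numpy as np

def detect_open_circuit(sensor, estimate, threshold=0.2, window=5):
    # Flag an open-circuit fault when the residual between the estimated and
    # measured signals stays above the threshold for a full window of
    # consecutive samples (a constant-zero sensor yields a persistent residual).
    residual = np.abs(np.asarray(estimate) - np.asarray(sensor))
    above = residual > threshold
    for start in range(len(above) - window + 1):
        if above[start:start + window].all():
            return True, start
    return False, None

true_current = np.sin(np.linspace(0, 4 * np.pi, 100))
faulty_sensor = true_current.copy()
faulty_sensor[60:] = 0.0                                 # sensor output stuck at zero
estimate = true_current + 0.01 * np.random.randn(100)    # stand-in for the EKF output
print(detect_open_circuit(faulty_sensor, estimate))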
{
"docid": "35a85bb270f1140d4dbb1090fd1e26cc",
"text": "English. The Citation Contexts of a cited entity can be seen as little tesserae that, fit together, can be exploited to follow the opinion of the scientific community towards that entity as well as to summarize its most important contents. This mosaic is an excellent resource of information also for identifying topic specific synonyms, indexing terms and citers’ motivations, i.e. the reasons why authors cite other works. Is a paper cited for comparison, as a source of data or just for additional info? What is the polarity of a citation? Different reasons for citing reveal also different weights of the citations and different impacts of the cited authors that go beyond the mere citation count metrics. Identifying the appropriate Citation Context is the first step toward a multitude of possible analysis and researches. So far, Citation Context have been defined in several ways in literature, related to different purposes, domains and applications. In this paper we present different dimensions of Citation Context investigated by researchers through the years in order to provide an introductory review of the topic to anyone approaching this subject. Italiano. Possiamo pensare ai Contesti Citazionali come tante tessere che, unite, possono essere sfruttate per seguire l’opinione della comunità scientifica riguardo ad un determinato lavoro o per riassumerne i contenuti più importanti. Questo mosaico di informazioni può essere utilizzato per identificare sinonimi specifici e Index Terms nonchè per individuare i motivi degli autori dietro le citazioni. Identificare il Contesto Citazionale ottimale è il primo passo per numerose analisi e ricerche. Il Contesto Citazionale è stato definito in diversi modi in letteratura, in relazione a differenti scopi, domini e applicazioni. In questo paper presentiamo le principali dimensioni testuali di Contesto Citazionale investigate dai ricercatori nel corso degli",
"title": ""
},
{
"docid": "5b7106a23930af7ccaeac561837c5154",
"text": "Recent years the number of vehicles increases tremendously. Because of that to identify the vehicle is significant task. Vehicle color and number plate recognition are various ways to identify the vehicle. So Vehicle color recognition essential part of an intelligent transportation system. There are several methods for recognizing the color of the vehicle like feature extract, template matching, convolutional neural network (CNN), etc. CNN is emerging technique within the field of Deep learning. The survey concludes that compared to other techniques CNN gives more accurate results with less training time even for large dataset. The images taken from roads or hill areas aren't visible because of haze. Consequently, removing haze may improve the color recognition. The proposed system combines both techniques and it adopts the dark channel prior technique to remove the haze, followed by feature learning using CNN. After feature learning, classification can be performed by effective classification technique like SVM.",
"title": ""
},
{
"docid": "6e97021a746cf7134d194f0ec58c3212",
"text": "Recently, medium-chain triglycerides (MCTs) containing a large fraction of lauric acid (LA) (C12)-about 30%-have been introduced commercially for use in salad oils and in cooking applications. As compared to the long-chain fatty acids found in other cooking oils, the medium-chain fats in MCTs are far less likely to be stored in adipose tissue, do not give rise to 'ectopic fat' metabolites that promote insulin resistance and inflammation, and may be less likely to activate macrophages. When ingested, medium-chain fatty acids are rapidly oxidised in hepatic mitochondria; the resulting glut of acetyl-coenzyme A drives ketone body production and also provokes a thermogenic response. Hence, studies in animals and humans indicate that MCT ingestion is less obesogenic than comparable intakes of longer chain oils. Although LA tends to raise serum cholesterol, it has a more substantial impact on high density lipoprotein (HDL) than low density lipoprotein (LDL) in this regard, such that the ratio of total cholesterol to HDL cholesterol decreases. LA constitutes about 50% of the fatty acid content of coconut oil; south Asian and Oceanic societies which use coconut oil as their primary source of dietary fat tend to be at low cardiovascular risk. Since ketone bodies can exert neuroprotective effects, the moderate ketosis induced by regular MCT ingestion may have neuroprotective potential. As compared to traditional MCTs featuring C6-C10, laurate-rich MCTs are more feasible for use in moderate-temperature frying and tend to produce a lower but more sustained pattern of blood ketone elevation owing to the more gradual hepatic oxidation of ingested laurate.",
"title": ""
},
{
"docid": "245371dccf75c8982f77c4d48d84d370",
"text": "This paper addresses the problem of streaming packetized media over a lossy packet network in a rate-distortion optimized way. We show that although the data units in a media presentation generally depend on each other according to a directed acyclic graph, the problem of rate-distortion optimized streaming of an entire presentation can be reduced to the problem of error-cost optimized transmission of an isolated data unit. We show how to solve the latter problem in a variety of scenarios, including the important common scenario of sender-driven streaming with feedback over a best-effort network, which we couch in the framework of Markov decision processes. We derive a fast practical algorithm for nearly optimal streaming in this scenario, and we derive a general purpose iterative descent algorithm for locally optimal streaming in arbitrary scenarios. Experimental results show that systems based on our algorithms have steady-state gains of 2-6 dB or more over systems that are not rate-distortion optimized. Furthermore, our systems essentially achieve the best possible performance: the operational distortion-rate function of the source at the capacity of the packet erasure channel.",
"title": ""
},
{
"docid": "2472a20493c3319cdc87057cc3d70278",
"text": "Traffic flow prediction is an essential function of traffic information systems. Conventional approaches, using artificial neural networks with narrow network architecture and poor training samples for supervised learning, have been only partially successful. In this paper, a deep-learning neural-network based on TensorFlow™ is suggested for the prediction traffic flow conditions, using real-time traffic data. Until now, no research has applied the TensorFlow™ deep learning neural network model to the estimation of traffic conditions. The suggested supervised model is trained by a deep learning algorithm, which uses real traffic data aggregated every five minutes. Results demonstrate that the model's accuracy rate is around 99%.",
"title": ""
},
{
"docid": "a448b5e4e4bd017049226f06ce32fa9d",
"text": "We present an approach to accelerating a wide variety of image processing operators. Our approach uses a fully-convolutional network that is trained on input-output pairs that demonstrate the operator’s action. After training, the original operator need not be run at all. The trained network operates at full resolution and runs in constant time. We investigate the effect of network architecture on approximation accuracy, runtime, and memory footprint, and identify a specific architecture that balances these considerations. We evaluate the presented approach on ten advanced image processing operators, including multiple variational models, multiscale tone and detail manipulation, photographic style transfer, nonlocal dehazing, and nonphoto- realistic stylization. All operators are approximated by the same model. Experiments demonstrate that the presented approach is significantly more accurate than prior approximation schemes. It increases approximation accuracy as measured by PSNR across the evaluated operators by 8.5 dB on the MIT-Adobe dataset (from 27.5 to 36 dB) and reduces DSSIM by a multiplicative factor of 3 com- pared to the most accurate prior approximation scheme, while being the fastest. We show that our models general- ize across datasets and across resolutions, and investigate a number of extensions of the presented approach.",
"title": ""
},
{
"docid": "896fa229bd0ffe9ef6da9fbe0e0866e6",
"text": "In this paper, a cascaded current-voltage control strategy is proposed for inverters to simultaneously improve the power quality of the inverter local load voltage and the current exchanged with the grid. It also enables seamless transfer of the operation mode from stand-alone to grid-connected or vice versa. The control scheme includes an inner voltage loop and an outer current loop, with both controllers designed using the H∞ repetitive control strategy. This leads to a very low total harmonic distortion in both the inverter local load voltage and the current exchanged with the grid at the same time. The proposed control strategy can be used to single-phase inverters and three-phase four-wire inverters. It enables grid-connected inverters to inject balanced clean currents to the grid even when the local loads (if any) are unbalanced and/or nonlinear. Experiments under different scenarios, with comparisons made to the current repetitive controller replaced with a current proportional-resonant controller, are presented to demonstrate the excellent performance of the proposed strategy.",
"title": ""
},
{
"docid": "767d0ad795eedc0109d3afe738dc9ce7",
"text": "We do not know a-priori what the normalised patch looks like. But we may know the transformation Tgt between the patches: Tgt = Tʹ T = Ψ(xʹ) Ψ(x) Learning Formulation Using synthetic translations of real patches: ǁTgt Ψ(xʹ) + Ψ(x)ǁ ≅ 0 General Local Feature Detectors A general covariant framework Generalization to Affine Covariant Detectors Features are oriented ellipses and transformations are affinities.",
"title": ""
},
{
"docid": "5228454ef59c012b079885b2cce0c012",
"text": "As a contribution to the HICSS 50 Anniversary Conference, we proposed a new mini-track on Text Mining in Big Data Analytics. This mini-track builds on the successful HICSS Workshop on Text Mining and recognizes the growing importance of unstructured text as a data source for descriptive and predictive analytics in research on collaboration systems and technologies. In this initial iteration of the mini-track, we have accepted three papers that cover conceptual issues, methodological approaches to social media, and the development of categorization models and dictionaries useful in a corporate context. The minitrack highlights the potential of an interdisciplinary research community within the HICSS collaboration systems and technologies track.",
"title": ""
},
{
"docid": "740666c9391668a1e4763a612776ad75",
"text": "Building user empathy in a tech organization is crucial to ensure that products are designed with an eye toward user needs and experiences. The Pokerface program is a Google internal user empathy campaign with 26 researchers that helped more than 1500 employees-including engineers, product managers, designers, analysts, and program managers across more than 15 sites-have first-hand experiences with their users. Here, we discuss the goals of the Pokerface program, some challenges that we have faced during execution, and the impact we have measured thus far.",
"title": ""
},
{
"docid": "95411969fcf7e2ba1eb506edf30d7c3e",
"text": "The increasing implementation of various platforms and technology of information systems also increases the complexity of integration when the integration system is needed. This is just like what has happened in most of government areas. As we have done from a case study in Sleman, a regency of Yogyakarta Indonesia, it has many departments that use different platform and technology on implementing information system. Integration services using point-to-point method is considered to be irrelevant whereas the number of services are growing up rapidly and more complex. So, in this paper we have proposed a service orchestration mechanism using enterprise service bus (ESB) to integrate many services from many departments which used their owned platform and technology of information system. ESB can be the solution of n-to-n integration problem and it strongly supports the implementation of service oriented architecture (SOA). This paper covers the analysis, design and implementation of integration system in government area. Then the result of this integration has been deployed as a single real time executive dashboard system that can be useful for the governance in order to support them on making decision or policy. Functional and performance testing are used to ensure that the implementation of integration does not disrupt other transaction processes.",
"title": ""
},
{
"docid": "d9ce8f84bfac52a9d7d8a2924cec7e3d",
"text": "Urban water quality is of great importance to our daily lives. Prediction of urban water quality help control water pollution and protect human health. In this work, we forecast the water quality of a station over the next few hours, using a multitask multi-view learning method to fuse multiple datasets from different domains. In particular, our learning model comprises two alignments. The first alignment is the spaio-temporal view alignment, which combines local spatial and temporal information of each station. The second alignment is the prediction alignment among stations, which captures their spatial correlations and performs copredictions by incorporating these correlations. Extensive experiments on real-world datasets demonstrate the effectiveness of our approach.",
"title": ""
},
{
"docid": "de96b6b43f68972faac8eec246e34c25",
"text": "The idea that chemotherapy can be used in combination with immunotherapy may seem somewhat counterproductive, as it can theoretically eliminate the immune cells needed for antitumour immunity. However, much preclinical work has now demonstrated that in addition to direct cytotoxic effects on cancer cells, a proportion of DNA damaging agents may actually promote immunogenic cell death, alter the inflammatory milieu of the tumour microenvironment and/or stimulate neoantigen production, thereby activating an antitumour immune response. Some notable combinations have now moved forward into the clinic, showing promise in phase I–III trials, whereas others have proven toxic, and challenging to deliver. In this review, we discuss the emerging data of how DNA damaging agents can enhance the immunogenic properties of malignant cells, focussing especially on immunogenic cell death, and the expansion of neoantigen repertoires. We discuss how best to strategically combine DNA damaging therapeutics with immunotherapy, and the challenges of successfully delivering these combination regimens to patients. With an overwhelming number of chemotherapy/immunotherapy combination trials in process, clear hypothesis-driven trials are needed to refine the choice of combinations, and determine the timing and sequencing of agents in order to stimulate antitumour immunological memory and improve maintained durable response rates, with minimal toxicity.",
"title": ""
}
] |
scidocsrr
|
62033235c6aa05b1442b204e73fd0aa3
|
Static analysis for probabilistic programs: inferring whole program properties from finitely many paths
|
[
{
"docid": "e49aa0d0f060247348f8b3ea0a28d3c6",
"text": "Over the past five years a new approach to privacy-preserving data analysis has born fruit [13, 18, 7, 19, 5, 37, 35, 8, 32]. This approach differs from much (but not all!) of the related literature in the statistics, databases, theory, and cryptography communities, in that a formal and ad omnia privacy guarantee is defined, and the data analysis techniques presented are rigorously proved to satisfy the guarantee. The key privacy guarantee that has emerged is differential privacy. Roughly speaking, this ensures that (almost, and quantifiably) no risk is incurred by joining a statistical database. In this survey, we recall the definition of differential privacy and two basic techniques for achieving it. We then show some interesting applications of these techniques, presenting algorithms for three specific tasks and three general results on differentially private learning.",
"title": ""
}
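The survey entry above mentions basic techniques for achieving differential privacy without spelling them out. As a small, hypothetical illustration of one classic technique, the sketch below adds Laplace noise calibrated to the sensitivity of a counting query; the epsilon value and the toy data are assumptions for illustration and are not tied to the survey.

import numpy as np

def laplace_count(data, predicate, epsilon):
    # Differentially private count: a counting query has sensitivity 1, so
    # adding Laplace(1/epsilon) noise yields epsilon-differential privacy.
    true_count = sum(1 for row in data if predicate(row))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Toy database of ages; query: how many people are over 40?
ages = [23, 45, 31, 52, 60, 18, 41]
print(laplace_count(ages, lambda age: age > 40, epsilon=0.5))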
] |
[
{
"docid": "36afb791436e95cec6167499bf4b0214",
"text": "Leveraging historical data from the movie industry, this study built a predictive model for movie success, deviating from past studies by predicting profit (as opposed to revenue) at early stages of production (as opposed to just prior to release) to increase investor certainty. Our work derived several groups of novel features for each movie, based on the cast and collaboration network (who’), content (‘what’), and time of release (‘when’).",
"title": ""
},
{
"docid": "b017fd773265c73c7dccad86797c17b8",
"text": "Active learning, which has a strong impact on processing data prior to the classification phase, is an active research area within the machine learning community, and is now being extended for remote sensing applications. To be effective, classification must rely on the most informative pixels, while the training set should be as compact as possible. Active learning heuristics provide capability to select unlabeled data that are the “most informative” and to obtain the respective labels, contributing to both goals. Characteristics of remotely sensed image data provide both challenges and opportunities to exploit the potential advantages of active learning. We present an overview of active learning methods, then review the latest techniques proposed to cope with the problem of interactive sampling of training pixels for classification of remotely sensed data with support vector machines (SVMs). We discuss remote sensing specific approaches dealing with multisource and spatially and time-varying data, and provide examples for high-dimensional hyperspectral imagery.",
"title": ""
},
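The review entry above describes heuristics that pick the "most informative" unlabeled pixels for labeling with an SVM. The snippet below shows one common such heuristic, margin-based uncertainty sampling, as a hypothetical illustration; it is not tied to any specific method surveyed in the entry, and the synthetic data are assumptions.

import numpy as np
from sklearn.svm import SVC

def most_uncertain(clf, unlabeled, batch_size=5):
    # Margin sampling: the smaller |decision_function|, the closer a sample
    # lies to the SVM decision boundary and the more informative its label.
    margins = np.abs(clf.decision_function(unlabeled))
    return np.argsort(margins)[:batch_size]

rng = np.random.default_rng(0)
X_labeled = rng.normal(size=(40, 3))
y_labeled = (X_labeled[:, 0] > 0).astype(int)
X_unlabeled = rng.normal(size=(200, 3))

clf = SVC(kernel="rbf", gamma="scale").fit(X_labeled, y_labeled)
query_idx = most_uncertain(clf, X_unlabeled)   # indices to send to an annotator
print(query_idx)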
{
"docid": "0d2ddb448c01172e53f19d9d5ac39f21",
"text": "Malicious Android applications are currently the biggest threat in the scope of mobile security. To cope with their exponential growth and with their deceptive and hideous behaviors, static analysis signature based approaches are not enough to timely detect and tackle brand new threats such as polymorphic and composition malware. This work presents BRIDEMAID, a novel framework for analysis of Android apps' behavior, which exploits both a static and dynamic approach to detect malicious apps directly on mobile devices. The static analysis is based on n-grams matching to statically recognize malicious app execution patterns. The dynamic analysis is instead based on multi-level monitoring of device, app and user behavior to detect and prevent at runtime malicious behaviors. The framework has been tested against 2794 malicious apps reporting a detection accuracy of 99,7% and a negligible false positive rate, tested on a set of 10k genuine apps.",
"title": ""
},
{
"docid": "23eb737d3930862326f81bac73c5e7f5",
"text": "O discussion communities have become a widely used medium for interaction, enabling conversations across a broad range of topics and contexts. Their success, however, depends on participants’ willingness to invest their time and attention in the absence of formal role and control structures. Why, then, would individuals choose to return repeatedly to a particular community and engage in the various behaviors that are necessary to keep conversation within the community going? Some studies of online communities argue that individuals are driven by self-interest, while others emphasize more altruistic motivations. To get beyond these inconsistent explanations, we offer a model that brings dissimilar rationales into a single conceptual framework and shows the validity of each rationale in explaining different online behaviors. Drawing on typologies of organizational commitment, we argue that members may have psychological bonds to a particular online community based on (a) need, (b) affect, and/or (c) obligation. We develop hypotheses that explain how each form of commitment to a community affects the likelihood that a member will engage in particular behaviors (reading threads, posting replies, moderating the discussion). Our results indicate that each form of community commitment has a unique impact on each behavior, with need-based commitment predicting thread reading, affect-based commitment predicting reply posting and moderating behaviors, and obligation-based commitment predicting only moderating behavior. Researchers seeking to understand how discussion-based communities function will benefit from this more precise theorizing of how each form of member commitment relates to different kinds of online behaviors. Community managers who seek to encourage particular behaviors may use our results to target the underlying form of commitment most likely to encourage the activities they wish to promote.",
"title": ""
},
{
"docid": "f2f5495973c560f15c307680bd5d3843",
"text": "The Bayesian analysis of neural networks is difficult because a simple prior over weights implies a complex prior distribution over functions . In this paper we investigate the use of Gaussian process priors over functions, which permit the predictive Bayesian analysis for fixed values of hyperparameters to be carried out exactly using matrix operations. Two methods, using optimization and averaging (via Hybrid Monte Carlo) over hyperparameters have been tested on a number of challenging problems and have produced excellent results.",
"title": ""
},
{
"docid": "91504378f63ba0c0d662180981f30f03",
"text": "Closely matching natural teeth with an artificial restoration can be one of the most challenging procedures in restorative dentistry. Natural teeth vary greatly in color and shape. They reveal ample information about patients' background and personality. Dentistry provides the opportunity to restore unique patient characteristics or replace them with alternatives. Whether one tooth or many are restored, the ability to assess and properly communicate information to the laboratory can be greatly improved by learning the language of color and light characteristics. It is only possible to duplicate in ceramic what has been distinguished, understood, and communicated in the shade-matching process of the natural dentition. This article will give the reader a better understanding of what happens when incident light hits the surface of a tooth and give strategies for best assessing and communicating this to the dental laboratory.",
"title": ""
},
{
"docid": "3f4d83525145a963c87167e3e02136a6",
"text": "Using the GTZAN Genre Collection [1], we start with a set of 1000 30 second song excerpts subdivided into 10 pre-classified genres: Blues, Classical, Country, Disco, Hip-Hop, Jazz, Metal, Pop, Reggae, and Rock. We downsampled to 4000 Hz, and further split each excerpt into 5-second clips For each clip, we compute a spectrogram using Fast Fourier Transforms, giving us 22 timestep vectors of dimensionality 513 for each clip. Spectrograms separate out component audio signals at different frequencies from a raw audio signal, and provide us with a tractable, loosely structured feature set for any given audio clip that is well-suited for deep learning techniques. (See, for example, the spectrogram produced by a jazz excerpt below) Models",
"title": ""
},
{
"docid": "a56650db0651fc0e76f9c0f383aec0e9",
"text": "Solid evidence of virtual reality's benefits has graduated from impressive visual demonstrations to producing results in practical applications. Further, a realistic experience is no longer immersion's sole asset. Empirical studies show that various components of immersion provide other benefits - full immersion is not always necessary. The goal of immersive virtual environments (VEs) was to let the user experience a computer-generated world as if it were real - producing a sense of presence, or \"being there,\" in the user's mind.",
"title": ""
},
{
"docid": "499fe7f6bf5c7d8fcfe690e7390a5d36",
"text": "Compressional or traumatic asphyxia is a well recognized entity to most forensic pathologists. The vast majority of reported cases have been accidental. The case reported here describes the apparent inflicted compressional asphyxia of a small child. A review of mechanisms and related controversy regarding proposed mechanisms is discussed.",
"title": ""
},
{
"docid": "2cc1373758f509c39275562f69b602c1",
"text": "This paper presents our solution for enabling a quadrotor helicopter to autonomously navigate unstructured and unknown indoor environments. We compare two sensor suites, specifically a laser rangefinder and a stereo camera. Laser and camera sensors are both well-suited for recovering the helicopter’s relative motion and velocity. Because they use different cues from the environment, each sensor has its own set of advantages and limitations that are complimentary to the other sensor. Our eventual goal is to integrate both sensors on-board a single helicopter platform, leading to the development of an autonomous helicopter system that is robust to generic indoor environmental conditions. In this paper, we present results in this direction, describing the key components for autonomous navigation using either of the two sensors separately.",
"title": ""
},
{
"docid": "fa2e8f411d74030bbec7937114f88f35",
"text": "We present a method for synthesizing a frontal, neutralexpression image of a person’s face given an input face photograph. This is achieved by learning to generate facial landmarks and textures from features extracted from a facial-recognition network. Unlike previous generative approaches, our encoding feature vector is largely invariant to lighting, pose, and facial expression. Exploiting this invariance, we train our decoder network using only frontal, neutral-expression photographs. Since these photographs are well aligned, we can decompose them into a sparse set of landmark points and aligned texture maps. The decoder then predicts landmarks and textures independently and combines them using a differentiable image warping operation. The resulting images can be used for a number of applications, such as analyzing facial attributes, exposure and white balance adjustment, or creating a 3-D avatar.",
"title": ""
},
{
"docid": "246cddf2c76383e82dab8f498b6974bb",
"text": "With the growing use of the Social Web, an increasing number of applications for exchanging opinions with other people are becoming available online. These applications are widely adopted with the consequence that the number of opinions about the debated issues increases. In order to cut in on a debate, the participants need first to evaluate the opinions in favour or against the debated issue. Argumentation theory proposes algorithms and semantics to evaluate the set of accepted arguments, given the conflicts among them. The main problem is how to automatically generate the arguments from the natural language formulation of the opinions used in these applications. Our paper addresses this problem by proposing and evaluating the use of natural language techniques to generate the arguments. In particular, we adopt the textual entailment approach, a generic framework for applied semantics, where linguistic objects are mapped by means of semantic inferences at a textual level. We couple textual entailment together with a Dung-like argumentation system which allows us to identify the arguments that are accepted in the considered online debate. The originality of the proposed framework lies in the following point: natural language debates are analyzed and the arguments are automatically extracted.",
"title": ""
},
{
"docid": "7dc7eaef334fc7678821fa66424421f1",
"text": "The present research complements extant variable-centered research that focused on the dimensions of autonomous and controlled motivation through adoption of a person-centered approach for identifying motivational profiles. Both in high school students (Study 1) and college students (Study 2), a cluster analysis revealed 4 motivational profiles: a good quality motivation group (i.e., high autonomous, low controlled); a poor quality motivation group (i.e., low autonomous, high controlled); a low quantity motivation group (i.e., low autonomous, low controlled); and a high quantity motivation group (i.e., high autonomous, high controlled). To compare the 4 groups, the authors derived predictions from qualitative and quantitative perspectives on motivation. Findings generally favored the qualitative perspective; compared with the other groups, the good quality motivation group displayed the most optimal learning pattern and scored highest on perceived need-supportive teaching. Theoretical and practical implications of the findings are discussed.",
"title": ""
},
{
"docid": "7f5af3806f0baa040a26f258944ad3f9",
"text": "Linear Discriminant Analysis (LDA) is a widely-used supervised dimensionality reduction method in computer vision and pattern recognition. In null space based LDA (NLDA), a well-known LDA extension, between-class distance is maximized in the null space of the within-class scatter matrix. However, there are some limitations in NLDA. Firstly, for many data sets, null space of within-class scatter matrix does not exist, thus NLDA is not applicable to those datasets. Secondly, NLDA uses arithmetic mean of between-class distances and gives equal consideration to all between-class distances, which makes larger between-class distances can dominate the result and thus limits the performance of NLDA. In this paper, we propose a harmonic mean based Linear Discriminant Analysis, Multi-Class Discriminant Analysis (MCDA), for image classification, which minimizes the reciprocal of weighted harmonic mean of pairwise between-class distance. More importantly, MCDA gives higher priority to maximize small between-class distances. MCDA can be extended to multi-label dimension reduction. Results on 7 single-label data sets and 4 multi-label data sets show that MCDA has consistently better performance than 10 other single-label approaches and 4 other multi-label approaches in terms of classification accuracy, macro and micro average F1 score.",
"title": ""
},
{
"docid": "8c47d9a93e3b9d9f31b77b724bf45578",
"text": "A high-sensitivity fully passive 868-MHz wake-up radio (WUR) front-end for wireless sensor network nodes is presented. The front-end does not have an external power source and extracts the entire energy from the radio-frequency (RF) signal received at the antenna. A high-efficiency differential RF-to-DC converter rectifies the incident RF signal and drives the circuit blocks including a low-power comparator and reference generators; and at the same time detects the envelope of the on-off keying (OOK) wake-up signal. The front-end is designed and simulated 0.13μm CMOS and achieves a sensitivity of -33 dBm for a 100 kbps wake-up signal.",
"title": ""
},
{
"docid": "5ae22c0209333125c61f66aafeeda139",
"text": "The author reports the development of a multi-finger robot hand with the mechatronics approach. The proposed robot hand has 4 fingers with 14 under-actuated joints driven by 10 linear actuators with linkages. Each of the 10 nodes in the distributed control system uses position and current feedback to monitor the contact stiffness and control the grasping force according to the motor current change rate. The combined force and position control loop enable the robot hand to grasp an object with the unknown shape. Pre-defined tasks, such as grasping and pinching are stored as scripts in the hand controller to provide a high-level programming interface for the upstream robot controller. The mechanical design, controller design and co-simulation are performed in an integrated model-based software environment, and also for the real time code generation and for mechanical parts manufacturing with a 3D printer. Based on the same model for design, a virtual robot hand interface is developed to provide off-line simulation tool and user interface to the robot hand to reduce the programming effort in fingers' motion planning. In the development of the robot hand, the mechatronics approach has been proven to be an indispensable tool for such a complex system.",
"title": ""
},
{
"docid": "3a948bb405b89376807a60a2a70ce7f7",
"text": "The objective of this research is to develop feature extraction and classification techniques for the task of acoustic event recognition (AER) in unstructured environments, which are those where adverse effects such as noise, distortion and multiple sources are likely to occur. The goal is to design a system that can achieve human-like sound recognition performance on a variety of hearing tasks in different environments. The research is important, as the field is commonly overshadowed by the more popular area of automatic speech recognition (ASR), and typical AER systems are often based on techniques taken directly from this. However, direct application presents difficulties, as the characteristics of acoustic events are less well defined than those of speech, and there is no sub-word dictionary available like the phonemes in speech. In addition, the performance of ASR systems typically degrades dramatically in such adverse, unstructured environments. Therefore, it is important to develop a system that can perform well for this challenging task. In this work, two novel feature extraction methods are proposed for recognition of environmental sounds in severe noisy conditions, based on the visual signature of the sounds. The first method is called the Spectrogram Image Feature (SIF), and is based on the timefrequency spectrogram of the sound. This is captured through an image-processing inspired quantisation and mapping of the dynamic range prior to feature extraction. Experimental results show that the feature based on the raw-power spectrogram has a good performance, and is particularly suited to severe mismatched conditions. The second proposed method is the Spectral Power Distribution Image Feature (SPD-IF), which uses the same image feature approach, but is based on an SPD image derived from the stochastic distribution of power over the sound clip. This is combined with a missing feature classification system, which marginalises the image regions containing only noise, and experiments show the method achieves the high accuracy of the baseline methods in clean conditions combined with robust results in mismatched noise.",
"title": ""
},
{
"docid": "eadc50aebc6b9c2fbd16f9ddb3094c00",
"text": "Instance segmentation is the problem of detecting and delineating each distinct object of interest appearing in an image. Current instance segmentation approaches consist of ensembles of modules that are trained independently of each other, thus missing opportunities for joint learning. Here we propose a new instance segmentation paradigm consisting in an end-to-end method that learns how to segment instances sequentially. The model is based on a recurrent neural network that sequentially finds objects and their segmentations one at a time. This net is provided with a spatial memory that keeps track of what pixels have been explained and allows occlusion handling. In order to train the model we designed a principled loss function that accurately represents the properties of the instance segmentation problem. In the experiments carried out, we found that our method outperforms recent approaches on multiple person segmentation, and all state of the art approaches on the Plant Phenotyping dataset for leaf counting.",
"title": ""
},
{
"docid": "378f0e528dddcb0319d0015ebc5f8ccb",
"text": "Specific and non specific cholinesterase activities were demonstrated in the ABRM of Mytilus edulis L. and Mytilus galloprovincialis L. by means of different techniques. The results were found identical for both species: neuromuscular junctions “en grappe”-type scarcely distributed within the ABRM, contain AChE. According to the histochemical inhibition tests, (a) the eserine inhibits AChE activity of the ABRM with a level of 5·10−5 M or higher, (b) the ChE non specific activities are inhibited by iso-OMPA level between 5·10−5 to 10−4 M. The histo- and cytochemical observations were completed by showing the existence of neuromuscular junctions containing small clear vesicles: they probably are the morphological support for ACh presence. Moreover, specific and non specific ChE activities were localized in the glio-interstitial cells. AChE precipitates were developped along the ABRM sarcolemma, some muscle mitochondria and in the intercellular spaces remain enigmatic.",
"title": ""
},
{
"docid": "301373338fe35426f5186f400f63dbd3",
"text": "OBJECTIVE\nThis paper describes state of the art, scientific publications and ongoing research related to the methods of analysis of respiratory sounds.\n\n\nMETHODS AND MATERIAL\nReview of the current medical and technological literature using Pubmed and personal experience.\n\n\nRESULTS\nThe study includes a description of the various techniques that are being used to collect auscultation sounds, a physical description of known pathologic sounds for which automatic detection tools were developed. Modern tools are based on artificial intelligence and on technics such as artificial neural networks, fuzzy systems, and genetic algorithms…\n\n\nCONCLUSION\nThe next step will consist in finding new markers so as to increase the efficiency of decision aid algorithms and tools.",
"title": ""
}
] |
scidocsrr
|
be3bde921a65f73375afbcdd6a19940a
|
Intergroup emotions: explaining offensive action tendencies in an intergroup context.
|
[
{
"docid": "59af1eb49108e672a35f7c242c5b4683",
"text": "“The value concept, more than any other, should occupy a central position . . . able to unify the apparently diverse interests of all the sciences concerned with human behavior.” These words, proclaiming the centrality of the value concept, were written by a psychologist (Rokeach, 1973, p. 3), but similar stands have been taken by sociologists (e.g., Williams, 1968) and anthropologists (e.g., Kluckhohn, 1951). These theorists view values as the criteria people use to select and justify actions and to evaluate people (including the self) and events. We, too, adopt this view of values as criteria rather than as qualities inherent in objects. This article discusses work that is part of a larger project intended to explore the importance of values in a wide variety of contexts. The project addresses three broad questions about values. First, how are the value priorities of individuals affected by their social experiences? That is, how do the common experiences people have, because of their shared locations in the social structure (their education, age, gender, occupation, etc.), influence their value priorities? And, how do individuals’ unique experiences (trauma, relations with parents, immigration, etc.) affect their value priorities? Second, how do the value priorities held by individuals affect their behavioral orientations and choices? That is, how do value priorities influence ideologies, attitudes, and actions in the political, religious, environmental, and other domains?",
"title": ""
}
] |
[
{
"docid": "bc57dfee1a00d7cfb025a1a5840623f8",
"text": "Production and consumption relationship shows that marketing plays an important role in enterprises. In the competitive market, it is very important to be able to sell rather than produce. Nowadays, marketing is customeroriented and aims to meet the needs and expectations of customers to increase their satisfaction. While creating a marketing strategy, an enterprise must consider many factors. Which is why, the process can and should be considered as a multi-criteria decision making (MCDM) case. In this study, marketing strategies and marketing decisions in the new-product-development process has been analyzed in a macro level. To deal quantitatively with imprecision or uncertainty, fuzzy sets theory has been used throughout the analysis.",
"title": ""
},
{
"docid": "f267f44fe9463ac0114335959f9739fa",
"text": "HTTP Adaptive Streaming (HAS) is today the number one video technology for over-the-top video distribution. In HAS, video content is temporally divided into multiple segments and encoded at different quality levels. A client selects and retrieves per segment the most suited quality version to create a seamless playout. Despite the ability of HAS to deal with changing network conditions, HAS-based live streaming often suffers from freezes in the playout due to buffer under-run, low average quality, large camera-to-display delay, and large initial/channel-change delay. Recently, IETF has standardized HTTP/2, a new version of the HTTP protocol that provides new features for reducing the page load time in Web browsing. In this paper, we present ten novel HTTP/2-based methods to improve the quality of experience of HAS. Our main contribution is the design and evaluation of a push-based approach for live streaming in which super-short segments are pushed from server to client as soon as they become available. We show that with an RTT of 300 ms, this approach can reduce the average server-to-display delay by 90.1% and the average start-up delay by 40.1%.",
"title": ""
},
{
"docid": "59c83aa2f97662c168316f1a4525fd4d",
"text": "Light field microscopy is a new technique for high-speed volumetric imaging of weakly scattering or fluorescent specimens. It employs an array of microlenses to trade off spatial resolution against angular resolution, thereby allowing a 4-D light field to be captured using a single photographic exposure without the need for scanning. The recorded light field can then be used to computationally reconstruct a full volume. In this paper, we present an optical model for light field microscopy based on wave optics, instead of previously reported ray optics models. We also present a 3-D deconvolution method for light field microscopy that is able to reconstruct volumes at higher spatial resolution, and with better optical sectioning, than previously reported. To accomplish this, we take advantage of the dense spatio-angular sampling provided by a microlens array at axial positions away from the native object plane. This dense sampling permits us to decode aliasing present in the light field to reconstruct high-frequency information. We formulate our method as an inverse problem for reconstructing the 3-D volume, which we solve using a GPU-accelerated iterative algorithm. Theoretical limits on the depth-dependent lateral resolution of the reconstructed volumes are derived. We show that these limits are in good agreement with experimental results on a standard USAF 1951 resolution target. Finally, we present 3-D reconstructions of pollen grains that demonstrate the improvements in fidelity made possible by our method.",
"title": ""
},
{
"docid": "765e766515c9c241ffd2d84572fd887f",
"text": "The cost of reconciling consistency and state management with high availability is highly magnified by the unprecedented scale and robustness requirements of today’s Internet applications. We propose two strategies for improving overall availability using simple mechanisms that scale over large applications whose output behavior tolerates graceful degradation. We characterize this degradation in terms of harvest and yield, and map it directly onto engineering mechanisms that enhance availability by improving fault isolation, and in some cases also simplify programming. By collecting examples of related techniques in the literature and illustrating the surprising range of applications that can benefit from these approaches, we hope to motivate a broader research program in this area. 1. Motivation, Hypothesis, Relevance Increasingly, infrastructure services comprise not only routing, but also application-level resources such as search engines [15], adaptation proxies [8], and Web caches [20]. These applications must confront the same operational expectations and exponentially-growing user loads as the routing infrastructure, and consequently are absorbing comparable amounts of hardware and software. The current trend of harnessing commodity-PC clusters for scalability and availability [9] is reflected in the largest web server installations. These sites use tens to hundreds of PC’s to deliver 100M or more read-mostly page views per day, primarily using simple replication or relatively small data sets to increase throughput. The scale of these applications is bringing the wellknown tradeoff between consistency and availability [4] into very sharp relief. In this paper we propose two general directions for future work in building large-scale robust systems. Our approaches tolerate partial failures by emphasizing simple composition mechanisms that promote fault containment, and by translating possible partial failure modes into engineering mechanisms that provide smoothlydegrading functionality rather than lack of availability of the service as a whole. The approaches were developed in the context of cluster computing, where it is well accepted [22] that one of the major challenges is the nontrivial software engineering required to automate partial-failure handling in order to keep system management tractable. 2. Related Work and the CAP Principle In this discussion, strong consistency means singlecopy ACID [13] consistency; by assumption a stronglyconsistent system provides the ability to perform updates, otherwise discussing consistency is irrelevant. High availability is assumed to be provided through redundancy, e.g. data replication; data is considered highly available if a given consumer of the data can always reach some replica. Partition-resilience means that the system as whole can survive a partition between data replicas. Strong CAP Principle. Strong Consistency, High Availability, Partition-resilience: Pick at most 2. The CAP formulation makes explicit the trade-offs in designing distributed infrastructure applications. It is easy to identify examples of each pairing of CAP, outlining the proof by exhaustive example of the Strong CAP Principle: CA without P: Databases that provide distributed transactional semantics can only do so in the absence of a network partition separating server peers. 
CP without A: In the event of a partition, further transactions to an ACID database may be blocked until the partition heals, to avoid the risk of introducing merge conflicts (and thus inconsistency). AP without C: HTTP Web caching provides clientserver partition resilience by replicating documents, but a client-server partition prevents verification of the freshness of an expired replica. In general, any distributed database problem can be solved with either expiration-based caching to get AP, or replicas and majority voting to get PC (the minority is unavailable). In practice, many applications are best described in terms of reduced consistency or availability. For example, weakly-consistent distributed databases such as Bayou [5] provide specific models with well-defined consistency/availability tradeoffs; disconnected filesystems such as Coda [16] explicitly argued for availability over strong consistency; and expiration-based consistency mechanisms such as leases [12] provide fault-tolerant consistency management. These examples suggest that there is a Weak CAP Principle which we have yet to characterize precisely: The stronger the guarantees made about any two of strong consistency, high availability, or resilience to partitions, the weaker the guarantees that can be made about the third. 3. Harvest, Yield, and the CAP Principle Both strategies we propose for improving availability with simple mechanisms rely on the ability to broaden our notion of “correct behavior” for the target application, and then exploit the tradeoffs in the CAP principle to improve availability at large scale. We assume that clients make queries to servers, in which case there are at least two metrics for correct behavior: yield, which is the probability of completing a request, and harvest, which measures the fraction of the data reflected in the response, i.e. the completeness of the answer to the query. Yield is the common metric and is typically measured in “nines”: “four-nines availability” means a completion probability of . In practice, good HA systems aim for four or five nines. In the presence of faults there is typically a tradeoff between providing no answer (reducing yield) and providing an imperfect answer (maintaining yield, but reducing harvest). Some applications do not tolerate harvest degradation because any deviation from the single well-defined correct behavior renders the result useless. For example, a sensor application that must provide a binary sensor reading (presence/absence) does not tolerate degradation of the output.1 On the other hand, some applications tolerate graceful degradation of harvest: online aggregation [14] allows a user to explicitly trade running time for precision and confidence in performing arithmetic aggregation queries over a large dataset, thereby smoothly trading harvest for response time, which is particularly useful for approximate answers and for avoiding work that looks unlikely to be worthwhile based on preliminary results. At first glance, it would appear that this kind of degradation applies only to queries and not to updates. However, the model can be applied in the case of “single-location” updates: those changes that are localized to a single node (or technically a single partition). In this case, updates that 1This is consistent with the use of the term yield in semiconductor manufacturing: typically, each die on a wafer is intolerant to harvest degradation, and yield is defined as the fraction of working dice on a wafer. 
affect reachable nodes occur correctly but have limited visibility (a form of reduced harvest), while those that require unreachable nodes fail (reducing yield). These localized changes are consistent exactly because the new values are not available everywhere. This model of updates fails for global changes, but it is still quite useful for many practical applications, including personalization databases and collaborative filtering. 4. Strategy 1: Trading Harvest for Yield— Probabilistic Availability Nearly all systems are probabilistic whether they realize it or not. In particular, any system that is 100% available under single faults is probabilistically available overall (since there is a non-zero probability of multiple failures), and Internet-based servers are dependent on the best-effort Internet for true availability. Therefore availability maps naturally to probabilistic approaches, and it is worth addressing probabilistic systems directly, so that we can understand and limit the impact of faults. This requires some basic decisions about what needs to be available and the expected nature of faults. For example, node faults in the Inktomi search engine remove a proportional fraction of the search database. Thus in a 100-node cluster a single-node fault reduces the harvest by 1% during the duration of the fault (the overall harvest is usually measured over a longer interval). Implicit in this approach is graceful degradation under multiple node faults, specifically, linear degradation in harvest. By randomly placing data on nodes, we can ensure that the 1% lost is a random 1%, which makes the average-case and worstcase fault behavior the same. In addition, by replicating a high-priority subset of data, we reduce the probability of losing that data. This gives us more precise control of harvest, both increasing it and reducing the practical impact of missing data. Of course, it is possible to replicate all data, but doing so may have relatively little impact on harvest and yield despite significant cost, and in any case can never ensure 100% harvest or yield because of the best-effort Internet protocols the service relies on. As a similar example, transformation proxies for thin clients [8] also trade harvest for yield, by degrading results on demand to match the capabilities of clients that might otherwise be unable to get results at all. Even when the 100%-harvest answer is useful to the client, it may still be preferable to trade response time for harvest when clientto-server bandwidth is limited, for example, by intelligent degradation to low-bandwidth formats [7]. 5. Strategy 2: Application Decomposition and Orthogonal Mechanisms Some large applications can be decomposed into subsystems that are independently intolerant to harvest degradation (i.e. they fail by reducing yield), but whose independent failure allows the overall application to continue functioning with reduced utility. The application as a whole is then tolerant of harvest degradation. A good decomposition has at least one actual benefit and one potential benefit. The actual benefi",
"title": ""
},
{
"docid": "227f23f0357e0cad280eb8e6dec4526b",
"text": "This paper presents an iterative and analytical approach to optimal synthesis of a multiplexer with a star-junction. Two types of commonly used lumped-element junction models, namely, nonresonant node (NRN) type and resonant type, are considered and treated in a uniform way. A new circuit equivalence called phased-inverter to frequency-invariant reactance inverter transformation is introduced. It allows direct adoption of the optimal synthesis theory of a bandpass filter for synthesizing channel filters connected to a star-junction by converting the synthesized phase shift to the susceptance compensation at the junction. Since each channel filter is dealt with individually and alternately, when synthesizing a multiplexer with a high number of channels, good accuracy can still be maintained. Therefore, the approach can be used to synthesize a wide range of multiplexers. Illustrative examples of synthesizing a diplexer with a common resonant type of junction and a triplexer with an NRN type of junction are given to demonstrate the effectiveness of the proposed approach. A prototype of a coaxial resonator diplexer according to the synthesized circuit model is fabricated to validate the synthesized result. Excellent agreement is obtained.",
"title": ""
},
{
"docid": "a8d6fe9d4670d1ccc4569aa322f665ee",
"text": "Abstract Improved feedback on electricity consumption may provide a tool for customers to better control their consumption and ultimately save energy. This paper asks which kind of feedback is most successful. For this purpose, a psychological model is presented that illustrates how and why feedback works. Relevant features of feedback are identified that may determine its effectiveness: frequency, duration, content, breakdown, medium and way of presentation, comparisons, and combination with other instruments. The paper continues with an analysis of international experience in order to find empirical evidence for which kinds of feedback work best. In spite of considerable data restraints and research gaps, there is some indication that the most successful feedback combines the following features: it is given frequently and over a long time, provides an appliance-specific breakdown, is presented in a clear and appealing way, and uses computerized and interactive tools.",
"title": ""
},
{
"docid": "6aa9eaad1024bf49e24eabc70d5d153d",
"text": "High-quality documentary photo series have a special place in rhinoplasty. The exact photographic reproduction of the nasal contours is an essential part of surgical planning, documentation and follow-up of one’s own work. Good photographs can only be achieved using suitable technology and with a good knowledge of photography. Standard operating procedures are also necessary. The photographic equipment should consist of a digital single-lens reflex camera, studio flash equipment and a suitable room for photography with a suitable backdrop. The high standards required cannot be achieved with simple photographic equipment. The most important part of the equipment is the optics. Fixed focal length lenses with a focal length of about 105 mm are especially suited to this type of work. Nowadays, even a surgeon without any photographic training is in a position to produce a complete series of clinical images. With digital technology, any of us can take good photographs. The correct exposure, the right depth of focus for the key areas of the nose and the right camera angle are the decisive factors in a good image series. Up to six standard images are recommended in the literature for the proper documentation of nasal surgery. The most important are frontal, three quarters and profile views. In special cases, close-up images may also be necessary. Preparing a professional image series is labour-intensive and very expensive. Large hospitals no longer employ professional photographers. Despite this, we must strive to maintain a high standard of photodocumenation for publications and to ensure that cases can be compared at congresses.",
"title": ""
},
{
"docid": "d0a6ca9838f8844077fdac61d1d75af1",
"text": "Depth-first search, as developed by Tarjan and coauthors, is a fundamental technique of efficient algorithm design for graphs [23]. This note presents depth-first search algorithms for two basic problems, strong and biconnected components. Previous algorithms either compute auxiliary quantities based on the depth-first search tree (e.g., LOWPOINT values) or require two passes. We present one-pass algorithms that only maintain a representation of the depth-first search path. This gives a simplified view of depth-first search without sacrificing efficiency. In greater detail, most depth-first search algorithms (e.g., [23,10,11]) compute so-called LOWPOINT values that are defined in terms of the depth-first search tree. Because of the success of this method LOWPOINT values have become almost synonymous with depth-first search. LOWPOINT values are regarded as crucial in the strong and biconnected component algorithms, e.g., [14, pp. 94, 514]. Tarjan’s LOWPOINT method for strong components is presented in texts [1, 7,14,16,17,21]. The strong component algorithm of Kosaraju and Sharir [22] is often viewed as conceptu-",
"title": ""
},
{
"docid": "82835828a7f8c073d3520cdb4b6c47be",
"text": "Simultaneous Localization and Mapping (SLAM) for mobile robots is a computationally expensive task. A robot capable of SLAM needs a powerful onboard computer, but this can limit the robot's mobility because of weight and power demands. We consider moving this task to a remote compute cloud, by proposing a general cloud-based architecture for real-time robotics computation, and then implementing a Rao-Blackwellized Particle Filtering-based SLAM algorithm in a multi-node cluster in the cloud. In our implementation, expensive computations are executed in parallel, yielding significant improvements in computation time. This allows the algorithm to increase the complexity and frequency of calculations, enhancing the accuracy of the resulting map while freeing the robot's onboard computer for other tasks. Our method for implementing particle filtering in the cloud is not specific to SLAM and can be applied to other computationally-intensive tasks.",
"title": ""
},
{
"docid": "48e917ffb0e5636f5ca17b3242c07706",
"text": "Two studies examined the influence of approach and avoidance social goals on memory for and evaluation of ambiguous social information. Study 1 found that individual differences in avoidance social goals were associated with greater memory of negative information, negatively biased interpretation of ambiguous social cues, and a more pessimistic evaluation of social actors. Study 2 experimentally manipulated social goals and found that individuals high in avoidance social motivation remembered more negative information and expressed more dislike for a stranger in the avoidance condition than in the approach condition. Results suggest that avoidance social goals are associated with emphasizing potential threats when making sense of the social environment.",
"title": ""
},
{
"docid": "9666ac68ee1aeb8ce18ccd2615cdabb2",
"text": "As the bring your own device (BYOD) to work trend grows, so do the network security risks. This fast-growing trend has huge benefits for both employees and employers. With malware, spyware and other malicious downloads, tricking their way onto personal devices, organizations need to consider their information security policies. Malicious programs can download onto a personal device without a user even knowing. This can have disastrous results for both an organization and the personal device. When this happens, it risks BYODs making unauthorized changes to policies and leaking sensitive information into the public domain. A privacy breach can cause a domino effect with huge financial and legal implications, and loss of productivity for organizations. This is a difficult challenge. Organizations need to consider user privacy and rights together with protecting networks from attacks. This paper evaluates a new architectural framework to control the risks that challenge organizations and the use of BYODs. After analysis of large volumes of research, the previous studies addressed single issues. We integrated parts of these single solutions into a new framework to develop a complete solution for access control. With too many organizations failing to implement and enforce adequate security policies, the process needs to be simpler. This framework reduces system restrictions while enforcing access control policies for BYOD and cloud environments using an independent platform. Primary results of the study are positive with the framework reducing access control issues. Keywords—Bring your own device; access control; policy; security",
"title": ""
},
{
"docid": "ec237c01100bf6afa26f3b01a62577f3",
"text": "Polyphenols are secondary metabolites of plants and are generally involved in defense against ultraviolet radiation or aggression by pathogens. In the last decade, there has been much interest in the potential health benefits of dietary plant polyphenols as antioxidant. Epidemiological studies and associated meta-analyses strongly suggest that long term consumption of diets rich in plant polyphenols offer protection against development of cancers, cardiovascular diseases, diabetes, osteoporosis and neurodegenerative diseases. Here we present knowledge about the biological effects of plant polyphenols in the context of relevance to human health.",
"title": ""
},
{
"docid": "61d8761f3c6a8974d0384faf9a084b53",
"text": "With many thyroid nodules being incidentally detected, it is important to identify as many malignant nodules as possible while excluding those that are highly likely to be benign from fine needle aspiration (FNA) biopsies or surgeries. This paper presents a computer-aided diagnosis (CAD) system for classifying thyroid nodules in ultrasound images. We use deep learning approach to extract features from thyroid ultrasound images. Ultrasound images are pre-processed to calibrate their scale and remove the artifacts. A pre-trained GoogLeNet model is then fine-tuned using the pre-processed image samples which leads to superior feature extraction. The extracted features of the thyroid ultrasound images are sent to a Cost-sensitive Random Forest classifier to classify the images into “malignant” and “benign” cases. The experimental results show the proposed fine-tuned GoogLeNet model achieves excellent classification performance, attaining 98.29% classification accuracy, 99.10% sensitivity and 93.90% specificity for the images in an open access database (Pedraza et al. 16), while 96.34% classification accuracy, 86% sensitivity and 99% specificity for the images in our local health region database.",
"title": ""
},
{
"docid": "9d0ea524b8f591d9ea337a8c789e51c1",
"text": "Abstract—The recent development of social media poses new challenges to the research community in analyzing online interactions between people. Social networking sites offer great opportunities for connecting with others, but also increase the vulnerability of young people to undesirable phenomena, such as cybervictimization. Recent research reports that on average, 20% to 40% of all teenagers have been victimized online. In this paper, we focus on cyberbullying as a particular form of cybervictimization. Successful prevention depends on the adequate detection of potentially harmful messages. However, given the massive information overload on the Web, there is a need for intelligent systems to identify potential risks automatically. We present the construction and annotation of a corpus of Dutch social media posts annotated with fine-grained cyberbullying-related text categories, such as insults and threats. Also, the specific participants (harasser, victim or bystander) in a cyberbullying conversation are identified to enhance the analysis of human interactions involving cyberbullying. Apart from describing our dataset construction and annotation, we present proof-of-concept experiments on the automatic identification of cyberbullying events and fine-grained cyberbullying categories.",
"title": ""
},
{
"docid": "458470e18ce2ab134841f76440cfdc2b",
"text": "Dependency trees help relation extraction models capture long-range relations between words. However, existing dependency-based models either neglect crucial information (e.g., negation) by pruning the dependency trees too aggressively, or are computationally inefficient because it is difficult to parallelize over different tree structures. We propose an extension of graph convolutional networks that is tailored for relation extraction, which pools information over arbitrary dependency structures efficiently in parallel. To incorporate relevant information while maximally removing irrelevant content, we further apply a novel pruning strategy to the input trees by keeping words immediately around the shortest path between the two entities among which a relation might hold. The resulting model achieves state-of-the-art performance on the large-scale TACRED dataset, outperforming existing sequence and dependency-based neural models. We also show through detailed analysis that this model has complementary strengths to sequence models, and combining them further improves the state of the art.",
"title": ""
},
{
"docid": "f407ea856f2d00dca1868373e1bd9e2f",
"text": "Software industry is heading towards centralized computin g. Due to this trend data and programs are being taken away from traditional desktop PCs and placed in compute clouds instead. Compute clouds are enormous server farms packed with computing power and storage space accessible through the Internet. Instead of having to manage one’s own infrastructure to run applications, server time and storage space can can be bought from an external service provider. From the customers’ point of view the benefit behind this idea is to be able to dynamically adjust computing power up or down to meet the demand for that power at a particular moment. This kind of flexibility not only ensures that no costs are incurred by excess processing capacity, but also enables hard ware infrastructure to scale up with business growth. Because of growing interest in taking advantage of cloud computing a number of service providers are working on providing cloud services. As stated in [7], Amazon, Salerforce.co m and Google are examples of firms that already have working solutions on the market. Recently also Microsoft released a preview version of its cloud platform called the Azure. Earl y adopters can test the platform and development tools free of charge.[2, 3, 4] The main purpose of this paper is to shed light on the internals of Microsoft’s Azure platform. In addition to examinin g how Azure platform works, the benefits of Azure platform are explored. The most important benefit in Microsoft’s solu tion is that it resembles existing Windows environment a lot . Developers can use the same application programming interfaces (APIs) and development tools they are already used to. The second benefit is that migrating applications to cloud is easy. This partially stems from the fact that Azure’s servic es can be exploited by an application whether it is run locally or in the cloud.",
"title": ""
},
{
"docid": "eec33c75a0ec9b055a857054d05bcf54",
"text": "We introduce a logical process of three distinct phases to begin the evaluation of a new 3D dosimetry array. The array under investigation is a hollow cylinder phantom with diode detectors fixed in a helical shell forming an \"O\" axial detector cross section (ArcCHECK), with comparisons drawn to a previously studied 3D array with diodes fixed in two crossing planes forming an \"X\" axial cross section (Delta⁴). Phase I testing of the ArcCHECK establishes: robust relative calibration (response equalization) of the individual detectors, minor field size dependency of response not present in a 2D predecessor, and uncorrected angular response dependence in the axial plane. Phase II testing reveals vast differences between the two devices when studying fixed-width full circle arcs. These differences are primarily due to arc discretization by the TPS that produces low passing rates for the peripheral detectors of the ArcCHECK, but high passing rates for the Delta⁴. Similar, although less pronounced, effects are seen for the test VMAT plans modeled after the AAPM TG119 report. The very different 3D detector locations of the two devices, along with the knock-on effect of different percent normalization strategies, prove that the analysis results from the devices are distinct and noninterchangeable; they are truly measuring different things. The value of what each device measures, namely their correlation with--or ability to predict--clinically relevant errors in calculation and/or delivery of dose is the subject of future Phase III work.",
"title": ""
},
{
"docid": "e985d20f75d29c24fda39135e0e54636",
"text": "Software testing is a highly complex and time consu ming activityIt is even difficult to say when tes ing is complete. The effective combination of black box (external) a nd white box (internal) testing is known as Gray-bo x testing. Gray box testing is a powerful idea if one knows something about how the product works on the inside; one can test it b etter, even from the outside. Gray box testing is not black box testing, because the tester does know some of the internal workings of the software under test. It is not to be confused with white box testing, testi ng approach that attempts to cover the internals of the product in detail. Gray box testing is a test strategy based partly on internal s. This paper will present all the three methodolog y Black-box, White-box, Graybox and how this method has been applied to validat e cri ical software systems. KeywordsBlack-box, White-box, Gray-box or Grey-box Introduction In most software projects, testing is not given the necessary attention. Statistics reveal that the ne arly 30-40% of the effort goes into testing irrespective of the type of project; h ardly any time is allocated for testing. The comput er industry is changing at a very rapid pace. In order to keep pace with a rapidly ch anging computer industry, software test must develo p methods to verify and validate software for all aspects of the product li fecycle. Test case design techniques can be broadly split into two main categories: Black box & White box. Black box + White box = Gray Box Spelling: Note that Gray is also spelt as Grey. Hence Gray Box Testing and Grey Box Testing mean the same. Gray Box testing is a technique to test the applica tion with limited knowledge of the internal working s of an application. In software testing, the term the more you know the be tter carries a lot of weight when testing an applic ation. Mastering the domain of a system always gives the t ester an edge over someone with limited domain know ledge. Unlike black box testing, where the tester only tests the applicatio n's user interface, in Gray box testing, the tester has access to design documents and the database. Having this knowledge, the tester is able to better prepare test data and test scena rios when making the test plan. The gray-box testing goes mainly with the testing of web applications b ecause it considers high-level development, operati ng environment, and compatibility conditions. During b lack-box or white-box analysis it is harder to iden tify problems, related to endto-end data flow. Context-specific problems, associ ated with web site testing are usually found during gray-box verifying. Bridge between Black Box and White Box – ISSN 2277-1956/V2N1-175-185 Testing Methods Fig 1: Classification 1. Black Box Testing Black box testing is a software testing techniques in which looking at the internal code structure, implementation details and knowledge of internal pa ths of the software. testing is based entirely on the software requireme nts and specifications. Black box testing is best suited for rapid test sce nario testing and quick Web Service Services provides quick feedback on the functional re diness of operations t better suited for operations that have enumerated necessary. It is used for finding the following errors: 1. Incorrect or missing functions 2. Interface errors 3. Errors in data structures or External database access 4. Performance errors 5. 
Initialization and termination errors Example A tester, without knowledge of the internal structu res of a website, tests the web pages by using a br owse ; providing inputs (clicks, keystrokes) and verifying the outputs agai nst the expected outcome. Levels Applicable To Black Box testing method is applicable to all levels of the software testing process: Testing, and Acceptance Testing. The higher the level, and hence the bigger and more c mplex the box, the mo method comes into use. Black Box Testing Techniques Following are some techniques that can be used for esigning black box tests. Equivalence partitioning Equivalence Partitioning is a software test design technique that involves selecting representative values from each partition as test data. Boundary Value Analysis Boundary Value Analysis is a software test design t echnique that involves determination of boundaries for selecting values that are at the boundaries and jus t inside/outside of the boundaries as test data. Cause Effect Graphing Cause Effect Graphing is a software test design tec hnique that involves identifying the cases (input c onditions) and conditions), producing a CauseEffect Graph, and generating test cases accordingly . Gray Box Testing Technique",
"title": ""
},
{
"docid": "7ad4f52279e85f8e20239e1ea6c85bbb",
"text": "One of the most exciting but challenging endeavors in music research is to develop a computational model that comprehends the affective content of music signals and organizes a music collection according to emotion. In this paper, we propose a novel acoustic emotion Gaussians (AEG) model that defines a proper generative process of emotion perception in music. As a generative model, AEG permits easy and straightforward interpretations of the model learning processes. To bridge the acoustic feature space and music emotion space, a set of latent feature classes, which are learned from data, is introduced to perform the end-to-end semantic mappings between the two spaces. Based on the space of latent feature classes, the AEG model is applicable to both automatic music emotion annotation and emotion-based music retrieval. To gain insights into the AEG model, we also provide illustrations of the model learning process. A comprehensive performance study is conducted to demonstrate the superior accuracy of AEG over its predecessors, using two emotion annotated music corpora MER60 and MTurk. Our results show that the AEG model outperforms the state-of-the-art methods in automatic music emotion annotation. Moreover, for the first time a quantitative evaluation of emotion-based music retrieval is reported.",
"title": ""
},
{
"docid": "4825e492dc1b7b645a5b92dde0c766cd",
"text": "This article shows how language processing is intimately tuned to input frequency. Examples are given of frequency effects in the processing of phonology, phonotactics, reading, spelling, lexis, morphosyntax, formulaic language, language comprehension, grammaticality, sentence production, and syntax. The implications of these effects for the representations and developmental sequence of SLA are discussed. Usage-based theories hold that the acquisition of language is exemplar based. It is the piecemeal learning of many thousands of constructions and the frequency-biased abstraction of regularities within them. Determinants of pattern productivity include the power law of practice, cue competition and constraint satisfaction, connectionist learning, and effects of type and token frequency. The regularities of language emerge from experience as categories and prototypical patterns. The typical route of emergence of constructions is from formula, through low-scope pattern, to construction. Frequency plays a large part in explaining sociolinguistic variation and language change. Learners’ sensitivity to frequency in all these domains has implications for theories of implicit and explicit learning and their interactions. The review concludes by considering the history of frequency as an explanatory concept in theoretical and applied linguistics, its 40 years of exile, and its necessary reinstatement as a bridging variable that binds the different schools of language acquisition research.",
"title": ""
}
] |
scidocsrr
|
ce75749e2f558ac953323ec5541b7b67
|
Analysis of the 802.11i 4-way handshake
|
[
{
"docid": "8dcb99721a06752168075e6d45ee64c7",
"text": "The convenience of 802.11-based wireless access networks has led to widespread deployment in the consumer, industrial and military sectors. However, this use is predicated on an implicit assumption of confidentiality and availability. While the secu rity flaws in 802.11’s basic confidentially mechanisms have been widely publicized, the threats to network availability are far less widely appreciated. In fact, it has been suggested that 802.11 is highly suscepti ble to malicious denial-of-service (DoS) attacks tar geting its management and media access protocols. This paper provides an experimental analysis of such 802.11-specific attacks – their practicality, their ef ficacy and potential low-overhead implementation changes to mitigate the underlying vulnerabilities.",
"title": ""
}
] |
[
{
"docid": "3653e29e71d70965317eb4c450bc28da",
"text": "This paper comprises an overview of different aspects for wire tension control devices and algorithms according to the state of industrial use and state of research. Based on a typical winding task of an orthocyclic winding scheme, possible new principles for an alternative piezo-electric actuator and an electromechanical tension control will be derived and presented.",
"title": ""
},
{
"docid": "3eebecff1cb89f5490602f43717902b7",
"text": "Radiation therapy (RT) is an integral part of prostate cancer treatment across all stages and risk groups. Immunotherapy using a live, attenuated, Listeria monocytogenes-based vaccines have been shown previously to be highly efficient in stimulating anti-tumor responses to impact on the growth of established tumors in different tumor models. Here, we evaluated the combination of RT and immunotherapy using Listeria monocytogenes-based vaccine (ADXS31-142) in a mouse model of prostate cancer. Mice bearing PSA-expressing TPSA23 tumor were divided to 5 groups receiving no treatment, ADXS31-142, RT (10 Gy), control Listeria vector and combination of ADXS31-142 and RT. Tumor growth curve was generated by measuring the tumor volume biweekly. Tumor tissue, spleen, and sera were harvested from each group for IFN-γ ELISpot, intracellular cytokine assay, tetramer analysis, and immunofluorescence staining. There was a significant tumor growth delay in mice that received combined ADXS31-142 and RT treatment as compared with mice of other cohorts and this combined treatment causes complete regression of their established tumors in 60 % of the mice. ELISpot and immunohistochemistry of CD8+ cytotoxic T Lymphocytes (CTL) showed a significant increase in IFN-γ production in mice with combined treatment. Tetramer analysis showed a fourfold and a greater than 16-fold increase in PSA-specific CTLs in animals receiving ADXS31-142 alone and combination treatment, respectively. A similar increase in infiltration of CTLs was observed in the tumor tissues. Combination therapy with RT and Listeria PSA vaccine causes significant tumor regression by augmenting PSA-specific immune response and it could serve as a potential treatment regimen for prostate cancer.",
"title": ""
},
{
"docid": "89fd46da8542a8ed285afb0cde9cc236",
"text": "Collaborative Filtering with Implicit Feedbacks (e.g., browsing or clicking records), named as CF-IF, is demonstrated to be an effective way in recommender systems. Existing works of CF-IF can be mainly classified into two categories, i.e., point-wise regression based and pairwise ranking based, where the latter one relaxes assumption and usually obtains better performance in empirical studies. In real applications, implicit feedback is often very sparse, causing CF-IF based methods to degrade significantly in recommendation performance. In this case, side information (e.g., item content) is usually introduced and utilized to address the data sparsity problem. Nevertheless, the latent feature representation learned from side information by topic model may not be very effective when the data is too sparse. To address this problem, we propose collaborative deep ranking (CDR), a hybrid pair-wise approach with implicit feedback, which leverages deep feature representation of item content into Bayesian framework of pair-wise ranking model in this paper. The experimental analysis on a real-world dataset shows CDR outperforms three state-of-art methods in terms of recall metric under different sparsity level.",
"title": ""
},
{
"docid": "06cc255e124702878e2106bf0e8eb47c",
"text": "Agent technology has been recognized as a promising paradigm for next generation manufacturing systems. Researchers have attempted to apply agent technology to manufacturing enterprise integration, enterprise collaboration (including supply chain management and virtual enterprises), manufacturing process planning and scheduling, shop floor control, and to holonic manufacturing as an implementation methodology. This paper provides an update review on the recent achievements in these areas, and discusses some key issues in implementing agent-based manufacturing systems such as agent encapsulation, agent organization, agent coordination and negotiation, system dynamics, learning, optimization, security and privacy, tools and standards. 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "f2492c40f98e3cccc3ac3ab7accf4af7",
"text": "Accurate detection of single-trial event-related potentials (ERPs) in the electroencephalogram (EEG) is a difficult problem that requires efficient signal processing and machine learning techniques. Supervised spatial filtering methods that enhance the discriminative information in EEG data are commonly used to improve single-trial ERP detection. We propose a convolutional neural network (CNN) with a layer dedicated to spatial filtering for the detection of ERPs and with training based on the maximization of the area under the receiver operating characteristic curve (AUC). The CNN is compared with three common classifiers: 1) Bayesian linear discriminant analysis; 2) multilayer perceptron (MLP); and 3) support vector machines. Prior to classification, the data were spatially filtered with xDAWN (for the maximization of the signal-to-signal-plus-noise ratio), common spatial pattern, or not spatially filtered. The 12 analytical techniques were tested on EEG data recorded in three rapid serial visual presentation experiments that required the observer to discriminate rare target stimuli from frequent nontarget stimuli. Classification performance discriminating targets from nontargets depended on both the spatial filtering method and the classifier. In addition, the nonlinear classifier MLP outperformed the linear methods. Finally, training based AUC maximization provided better performance than training based on the minimization of the mean square error. The results support the conclusion that the choice of the systems architecture is critical and both spatial filtering and classification must be considered together.",
"title": ""
},
{
"docid": "25e50a3e98b58f833e1dd47aec94db21",
"text": "Sharing knowledge for multiple related machine learning tasks is an effective strategy to improve the generalization performance. In this paper, we investigate knowledge sharing across categories for action recognition in videos. The motivation is that many action categories are related, where common motion pattern are shared among them (e.g. diving and high jump share the jump motion). We propose a new multi-task learning method to learn latent tasks shared across categories, and reconstruct a classifier for each category from these latent tasks. Compared to previous methods, our approach has two advantages: (1) The learned latent tasks correspond to basic motion patterns instead of full actions, thus enhancing discrimination power of the classifiers. (2) Categories are selected to share information with a sparsity regularizer, avoiding falsely forcing all categories to share knowledge. Experimental results on multiple public data sets show that the proposed approach can effectively transfer knowledge between different action categories to improve the performance of conventional single task learning methods.",
"title": ""
},
{
"docid": "3467f4be08c4b8d6cd556f04f324ce67",
"text": "Round robin arbiter (RRA) is a critical block in nowadays designs. It is widely found in System-on-chips and Network-on-chips. The need of an efficient RRA has increased extensively as it is a limiting performance block. In this paper, we deliver a comparative review between different RRA architectures found in literature. We also propose a novel efficient RRA architecture. The FPGA implementation results of the previous RRA architectures and our proposed one are given, that show the improvements of the proposed RRA.",
"title": ""
},
{
"docid": "c69e002a71132641947d8e30bb2e74f7",
"text": "In this paper, we investigate a new stealthy attack simultaneously compromising actuators and sensors. This attack is referred to as coordinated attack. We show that the coordinated attack is capable of deriving the system states far away from the desired without being detected. Furthermore, designing such an attack practically does not require knowledge on target systems, which makes the attack much more dangerous compared to the other known attacks. Also, we present a method to detect the coordinated attack. To validate the effect of the proposed attack, we carry out experiments using a quadrotor.",
"title": ""
},
{
"docid": "7f68d6a6432f55684ad79a4f79406dab",
"text": "Half of patients with heart failure (HF) have a preserved left ventricular ejection fraction (HFpEF). Morbidity and mortality in HFpEF are similar to values observed in patients with HF and reduced EF, yet no effective treatment has been identified. While early research focused on the importance of diastolic dysfunction in the pathophysiology of HFpEF, recent studies have revealed that multiple non-diastolic abnormalities in cardiovascular function also contribute. Diagnosis of HFpEF is frequently challenging and relies upon careful clinical evaluation, echo-Doppler cardiography, and invasive haemodynamic assessment. In this review, the principal mechanisms, diagnostic approaches, and clinical trials are reviewed, along with a discussion of novel treatment strategies that are currently under investigation or hold promise for the future.",
"title": ""
},
{
"docid": "3edf5d1cce2a26fbf5c2cc773649629b",
"text": "We conducted three experiments to investigate the mental images associated with idiomatic phrases in English. Our hypothesis was that people should have strong conventional images for many idioms and that the regularity in people's knowledge of their images for idioms is due to the conceptual metaphors motivating the figurative meanings of idioms. In the first study, subjects were asked to form and describe their mental images for different idiomatic expressions. Subjects were then asked a series of detailed questions about their images regarding the causes and effects of different events within their images. We found high consistency in subjects' images of idioms with similar figurative meanings despite differences in their surface forms (e.g., spill the beans and let the cat out of the bag). Subjects' responses to detailed questions about their images also showed a high degree of similarity in their answers. Further examination of subjects' imagery protocols supports the idea that the conventional images and knowledge associated with idioms are constrained by the conceptual metaphors (e.g., the MIND IS A CONTAINER and IDEAS ARE ENTITIES) which motivate the figurative meanings of idioms. The results of two control studies showed that the conventional images associated with idioms are not solely based on their figurative meanings (Experiment 2) and that the images associated with literal phrases (e.g., spill the peas) were quite varied and unlikely to be constrained by conceptual metaphor (Experiment 3). These findings support the view that idioms are not \"dead\" metaphors with their meanings being arbitrarily determined. Rather, the meanings of many idioms are motivated by speakers' tacit knowledge of the conceptual metaphors underlying the meanings of these figurative phrases.",
"title": ""
},
{
"docid": "69ced55a44876f7cc4e57f597fcd5654",
"text": "A wideband circularly polarized (CP) antenna with a conical radiation pattern is investigated. It consists of a feeding probe and parasitic dielectric parallelepiped elements that surround the probe. Since the structure of the antenna looks like a bird nest, it is named as bird-nest antenna. The probe, which protrudes from a circular ground plane, operates in its fundamental monopole mode that generates omnidirectional linearly polarized (LP) fields. The dielectric parallelepipeds constitute a wave polarizer that converts omnidirectional LP fields of the probe into omnidirectional CP fields. To verify the design, a prototype operating in C band was fabricated and measured. The reflection coefficient, axial ratio (AR), radiation pattern, and antenna gain are studied, and reasonable agreement between the measured and simulated results is observed. The prototype has a 10-dB impedance bandwidth of 41.0% and a 3-dB AR bandwidth of as wide as 54.9%. A parametric study was carried out to characterize the proposed antenna. Also, a design guideline is given to facilitate designs of the antenna.",
"title": ""
},
{
"docid": "db3abbca12b7a1c4e611aa3707f65563",
"text": "This paper describes the background and methods for the prod uction of CIDOC-CRM compliant data sets from diverse collec tions of source data. The construction of such data sets is based on data in column format, typically exported for databases, as well as free text, typically created through scanning and OCR proce ssing or transcription.",
"title": ""
},
{
"docid": "7db5807fc15aeb8dfe4669a8208a8978",
"text": "This document is an output from a project funded by the UK Department for International Development (DFID) for the benefit of developing countries. The views expressed are not necessarily those of DFID. Contents Contents i List of tables ii List of figures ii List of boxes ii Acronyms iii Acknowledgements iv Summary 1 1. Introduction: why worry about disasters? 7 Objectives of this Study 7 Global disaster trends 7 Why donors should be concerned 9 What donors can do 9 2. What makes a disaster? 11 Characteristics of a disaster 11 Disaster risk reduction 12 The diversity of hazards 12 Vulnerability and capacity, coping and adaptation 15 Resilience 16 Poverty and vulnerability: links and differences 16 'The disaster management cycle' 17 3. Why should disasters be a development concern? 19 3.1 Disasters hold back development 19 Disasters undermine efforts to achieve the Millennium Development Goals 19 Macroeconomic impacts of disasters 21 Reallocation of resources from development to emergency assistance 22 Disaster impact on communities and livelihoods 23 3.2 Disasters are rooted in development failures 25 Dominant development models and risk 25 Development can lead to disaster 26 Poorly planned attempts to reduce risk can make matters worse 29 Disaster responses can themselves exacerbate risk 30 3.3 'Disaster-proofing' development: what are the gains? 31 From 'vicious spirals' of failed development and disaster risk… 31 … to 'virtuous spirals' of risk reduction 32 Disaster risk reduction can help achieve the Millennium Development Goals 33 … and can be cost-effective 33 4. Why does development tend to overlook disaster risk? 36 4.1 Introduction 36 4.2 Incentive, institutional and funding structures 36 Political incentives and governance in disaster prone countries 36 Government-donor relations and moral hazard 37 Donors and multilateral agencies 38 NGOs 41 4.3 Lack of exposure to and information on disaster issues 41 4.4 Assumptions about the risk-reducing capacity of development 43 ii 5. Tools for better integrating disaster risk reduction into development 45 Introduction 45 Poverty Reduction Strategy Papers (PRSPs) 45 UN Development Assistance Frameworks (UNDAFs) 47 Country assistance plans 47 National Adaptation Programmes of Action (NAPAs) 48 Partnership agreements with implementing agencies and governments 49 Programme and project appraisal guidelines 49 Early warning and information systems 49 Risk transfer mechanisms 51 International initiatives and policy forums 51 Risk reduction performance targets and indicators for donors 52 6. Conclusions and recommendations 53 6.1 Main conclusions 53 6.2 Recommendations 54 Core recommendation …",
"title": ""
},
{
"docid": "4a9a53444a74f7125faa99d58a5b0321",
"text": "The new transformed read-write Web has resulted in a rapid growth of user generated content on the Web resulting into a huge volume of unstructured data. A substantial part of this data is unstructured text such as reviews and blogs. Opinion mining and sentiment analysis (OMSA) as a research discipline has emerged during last 15 years and provides a methodology to computationally process the unstructured data mainly to extract opinions and identify their sentiments. The relatively new but fast growing research discipline has changed a lot during these years. This paper presents a scientometric analysis of research work done on OMSA during 20 0 0–2016. For the scientometric mapping, research publications indexed in Web of Science (WoS) database are used as input data. The publication data is analyzed computationally to identify year-wise publication pattern, rate of growth of publications, types of authorship of papers on OMSA, collaboration patterns in publications on OMSA, most productive countries, institutions, journals and authors, citation patterns and an year-wise citation reference network, and theme density plots and keyword bursts in OMSA publications during the period. A somewhat detailed manual analysis of the data is also performed to identify popular approaches (machine learning and lexicon-based) used in these publications, levels (document, sentence or aspect-level) of sentiment analysis work done and major application areas of OMSA. The paper presents a detailed analytical mapping of OMSA research work and charts the progress of discipline on various useful parameters. © 2016 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "abc160fc578bb40935afa7aea93cf6ca",
"text": "This study investigates the effect of leader and follower behavior on employee voice, team task responsibility and team effectiveness. This study distinguishes itself by including both leader and follower behavior as predictors of team effectiveness. In addition, employee voice and team task responsibility are tested as potential mediators of the relationship between task-oriented behaviors (informing, directing, verifying) and team effectiveness as well as the relationship between relation-oriented behaviors (positive feedback, intellectual stimulation, individual consideration) and team effectiveness. This cross-sectional exploratory study includes four methods: 1) inter-reliable coding of leader and follower behavior during staff meetings; 2) surveys of 57 leaders; 3) surveys of643 followers; 4) survey of 56 lean coaches. Regression analyses showed that both leaders and followers display more task-oriented behaviors opposed to relation-oriented behaviors during staff meetings. Contrary to the hypotheses, none of the observed leader behaviors positively influences employee voice, team task responsibility or team effectiveness. However, all three task-oriented follower behaviors indirectly influence team effectiveness. The findings from this research illustrate that follower behaviors has more influence on team effectiveness compared to leader behavior. Practical implications, strengths and limitations of the research are discussed. Moreover, future research directions including the mediating role of culture and psychological safety are proposed as well.",
"title": ""
},
{
"docid": "e97c0bbb74534a16c41b4a717eed87d5",
"text": "This paper is discussing about the road accident severity survey using data mining, where different approaches have been considered. We have collected research work carried out by different researchers based on road accidents. Article describing the review work in context of road accident case’s using data mining approach. The article is consisting of collections of methods in different scenario with the aim to resolve the road accident. Every method is somewhere seeming to productive in some ways to decrease the no of causality. It will give a better edge to different country where the no of accidents is leading to fatality of life.",
"title": ""
},
{
"docid": "7539a738cad3a36336dc7019e2aabb21",
"text": "In this paper a compact antenna for ultrawideband applications is presented. The antenna is based on the biconical antenna design and has two identical elements. Each element is composed of a cone extended with a ring and an inner cylinder. The modification of the well-known biconical structure is made in order to reduce the influence of the radiation of the feeding cable. To obtain the optimum parameters leading to a less impact of the cable effect on the antenna performance, during the optimization process the antenna was coupled with a feeding coaxial cable. The proposed antenna covers the frequency range from 1.5 to 41 GHz with voltage standing wave ratio below 2 and has an omnidirectional radiation pattern. The realized total efficiency is above 85 % which indicates a good performance.",
"title": ""
},
{
"docid": "a87ba6d076c3c05578a6f6d9da22ac79",
"text": "Here we review and extend a new unitary model for the pathophysiology of involutional osteoporosis that identifies estrogen (E) as the key hormone for maintaining bone mass and E deficiency as the major cause of age-related bone loss in both sexes. Also, both E and testosterone (T) are key regulators of skeletal growth and maturation, and E, together with GH and IGF-I, initiate a 3- to 4-yr pubertal growth spurt that doubles skeletal mass. Although E is required for the attainment of maximal peak bone mass in both sexes, the additional action of T on stimulating periosteal apposition accounts for the larger size and thicker cortices of the adult male skeleton. Aging women undergo two phases of bone loss, whereas aging men undergo only one. In women, the menopause initiates an accelerated phase of predominantly cancellous bone loss that declines rapidly over 4-8 yr to become asymptotic with a subsequent slow phase that continues indefinitely. The accelerated phase results from the loss of the direct restraining effects of E on bone turnover, an action mediated by E receptors in both osteoblasts and osteoclasts. In the ensuing slow phase, the rate of cancellous bone loss is reduced, but the rate of cortical bone loss is unchanged or increased. This phase is mediated largely by secondary hyperparathyroidism that results from the loss of E actions on extraskeletal calcium metabolism. The resultant external calcium losses increase the level of dietary calcium intake that is required to maintain bone balance. Impaired osteoblast function due to E deficiency, aging, or both also contributes to the slow phase of bone loss. Although both serum bioavailable (Bio) E and Bio T decline in aging men, Bio E is the major predictor of their bone loss. Thus, both sex steroids are important for developing peak bone mass, but E deficiency is the major determinant of age-related bone loss in both sexes.",
"title": ""
},
{
"docid": "296705d6bfc09f58c8e732a469b17871",
"text": "Computer security incident response teams (CSIRTs) respond to a computer security incident when the need arises. Failure of these teams can have far-reaching effects for the economy and national security. CSIRTs often have to work on an ad hoc basis, in close cooperation with other teams, and in time constrained environments. It could be argued that under these working conditions CSIRTs would be likely to encounter problems. A needs assessment was done to see to which extent this argument holds true. We constructed an incident response needs model to assist in identifying areas that require improvement. We envisioned a model consisting of four assessment categories: Organization, Team, Individual and Instrumental. Central to this is the idea that both problems and needs can have an organizational, team, individual, or technical origin or a combination of these levels. To gather data we conducted a literature review. This resulted in a comprehensive list of challenges and needs that could hinder or improve, respectively, the performance of CSIRTs. Then, semi-structured in depth interviews were held with team coordinators and team members of five public and private sector Dutch CSIRTs to ground these findings in practice and to identify gaps between current and desired incident handling practices. This paper presents the findings of our needs assessment and ends with a discussion of potential solutions to problems with performance in incident response.",
"title": ""
},
{
"docid": "ac57fab046cfd02efa1ece262b07492f",
"text": "Interactive Narrative is an approach to interactive entertainment that enables the player to make decisions that directly affect the direction and/or outcome of the narrative experience being delivered by the computer system. Interactive narrative requires two seemingly conflicting requirements: coherent narrative and user agency. We present an interactive narrative system that uses a combination of narrative control and autonomous believable character agents to augment a story world simulation in which the user has a high degree of agency with narrative plot control. A drama manager called the Automated Story Director gives plot-based guidance to believable agents. The believable agents are endowed with the autonomy necessary to carry out directives in the most believable fashion possible. Agents also handle interaction with the user. When the user performs actions that change the world in such a way that the Automated Story Director can no longer drive the intended narrative forward, it is able to adapt the plot to incorporate the user’s changes and still achieve",
"title": ""
}
] |
scidocsrr
|
1d99c577fe448b1ec5f29a3367d0a504
|
Clustering of Vehicle Trajectories
|
[
{
"docid": "9d5593d89a206ac8ddb82921c2a68c43",
"text": "This paper presents an automatic traffic surveillance system to estimate important traffic parameters from video sequences using only one camera. Different from traditional methods that can classify vehicles to only cars and noncars, the proposed method has a good ability to categorize vehicles into more specific classes by introducing a new \"linearity\" feature in vehicle representation. In addition, the proposed system can well tackle the problem of vehicle occlusions caused by shadows, which often lead to the failure of further vehicle counting and classification. This problem is solved by a novel line-based shadow algorithm that uses a set of lines to eliminate all unwanted shadows. The used lines are devised from the information of lane-dividing lines. Therefore, an automatic scheme to detect lane-dividing lines is also proposed. The found lane-dividing lines can also provide important information for feature normalization, which can make the vehicle size more invariant, and thus much enhance the accuracy of vehicle classification. Once all features are extracted, an optimal classifier is then designed to robustly categorize vehicles into different classes. When recognizing a vehicle, the designed classifier can collect different evidences from its trajectories and the database to make an optimal decision for vehicle classification. Since more evidences are used, more robustness of classification can be achieved. Experimental results show that the proposed method is more robust, accurate, and powerful than other traditional methods, which utilize only the vehicle size and a single frame for vehicle classification.",
"title": ""
},
{
"docid": "c7d6e273065ce5ca82cd55f0ba5937cd",
"text": "Many environmental and socioeconomic time–series data can be adequately modeled using Auto-Regressive Integrated Moving Average (ARIMA) models. We call such time–series ARIMA time–series. We consider the problem of clustering ARIMA time–series. We propose the use of the Linear Predictive Coding (LPC) cepstrum of time–series for clustering ARIMA time–series, by using the Euclidean distance between the LPC cepstra of two time–series as their dissimilarity measure. We demonstrate that LPC cepstral coefficients have the desired features for accurate clustering and efficient indexing of ARIMA time–series. For example, few LPC cepstral coefficients are sufficient in order to discriminate between time–series that are modeled by different ARIMA models. In fact this approach requires fewer coefficients than traditional approaches, such as DFT and DWT. The proposed distance measure can be used for measuring the similarity between different ARIMA models as well. We cluster ARIMA time–series using the Partition Around Medoids method with various similarity measures. We present experimental results demonstrating that using the proposed measure we achieve significantly better clusterings of ARIMA time–series data as compared to clusterings obtained by using other traditional similarity measures, such as DFT, DWT, PCA, etc. Experiments were performed both on simulated as well as real data.",
"title": ""
}
] |
[
{
"docid": "242686291812095c5320c1c8cae6da27",
"text": "In the modern high-performance transceivers, mixers (both upand down-converters) are required to have large dynamic range in order to meet the system specifications. The lower end of the dynamic range is indicated by the noise floor which tells how small a signal may be processed while the high end is determined by the non-linearity which causes distortion, compression and saturation of the signal and thus limits the maximum signal amplitude input to the mixer for the undistorted output. Compared to noise, the linearity requirement is much higher in mixer design because it is generally the limiting factor to the transceiver’s linearity. Therefore, this paper will emphasize on the linearization techniques for analog multipliers and mixers, which have been a very active research area since 1960s.",
"title": ""
},
{
"docid": "43e90cd84394bd686303e07b3048e3ac",
"text": "A harlequin fetus seen at birth was treated with etretinate and more general measures, including careful attention to fluid balance, calorie intake and temperature control. She improved, continued to develop, and had survived to 5 months at the time of this report.",
"title": ""
},
{
"docid": "c2a307faaec42f3c05188a5153eade19",
"text": "A 28-year-old breastfeeding mother of term-born 3-month old twins contacted the Hospital Lactation consultant for advice. She had expressed milk at 2am and had stored the milk in the fridge. She fed some of that milk to one of the twins at 11am and further milk to both twins at 4pm. All three bottles were left on the bench until the next morning when the mother intended to clean the bottles. She found that the milk residue in all three feeding bottles had turned bright pink and had a strong earthy odour (see Fig. 1). The mother brought one of the bottles containing the bright pink milk with her to the hospital. The mother was in good health, with no symptoms of mastitis and no fever. Both twins were also healthy and continued to feed well and gain weight. What is the cause of the pink milk? (answer on page 82)",
"title": ""
},
{
"docid": "2194de791698f6a0180e6a1bca8714a7",
"text": "Several procedures have been utilized to elevate plasma free fatty acid (FFA) concentration and increase fatty acid (FA) delivery to skeletal muscle during exercise. These include fasting, caffeine ingestion, L-carnitine supplementation, ingestion of medium-chain and long-chain triglyceride (LCT) solutions, and intravenous infusion of intralipid emulsions. Studies in which both untrained and well-trained subjects have ingested LCT solutions or received an infusion of intralipid (in combination with an injection of heparin) before exercise have reported significant reductions in whole-body carbohydrate oxidation and decreased muscle glycogen utilization during both moderate and intense dynamic exercise lasting 15-60 min. The effects of increased FA provision on rates of muscle glucose uptake during exercise are, however, equivocal. Despite substantial muscle glycogen sparing (15-48% compared with control), exercise capacity is not systematically improved in the face of increased FA availability.",
"title": ""
},
{
"docid": "cc6161fd350ac32537dc704cbfef2155",
"text": "The contribution of cloud computing and mobile computing technologies lead to the newly emerging mobile cloud computing paradigm. Three major approaches have been proposed for mobile cloud applications: 1) extending the access to cloud services to mobile devices; 2) enabling mobile devices to work collaboratively as cloud resource providers; 3) augmenting the execution of mobile applications on portable devices using cloud resources. In this paper, we focus on the third approach in supporting mobile data stream applications. More specifically, we study how to optimize the computation partitioning of a data stream application between mobile and cloud to achieve maximum speed/throughput in processing the streaming data.\n To the best of our knowledge, it is the first work to study the partitioning problem for mobile data stream applications, where the optimization is placed on achieving high throughput of processing the streaming data rather than minimizing the makespan of executions as in other applications. We first propose a framework to provide runtime support for the dynamic computation partitioning and execution of the application. Different from existing works, the framework not only allows the dynamic partitioning for a single user but also supports the sharing of computation instances among multiple users in the cloud to achieve efficient utilization of the underlying cloud resources. Meanwhile, the framework has better scalability because it is designed on the elastic cloud fabrics. Based on the framework, we design a genetic algorithm for optimal computation partition. Both numerical evaluation and real world experiment have been performed, and the results show that the partitioned application can achieve at least two times better performance in terms of throughput than the application without partitioning.",
"title": ""
},
{
"docid": "e4427550b3d34557f073c3c16e1c61d9",
"text": "Despite the significant progress in multiagent teamwork, existing research does not address the optimality of its prescriptions nor the complexity of the teamwork problem. Thus, we cannot determine whether the assumptions and approximations made by a particular theory gain enough efficiency to justify the losses in overall performance. To provide a tool for evaluating this tradeoff, we present a unified framework, the COMmunicative Multiagent Team Decision Problem (COM-MTDP) model, which is general enough to subsume many existing models of multiagent systems. We analyze use the COM-MTDP model to provide a breakdown of the computational complexity of constructing optimal teams under problem domains divided along the dimensions of observability and communication cost. We then exploit the COM-MTDP's ability to encode existing teamwork theories and models to encode two instantiations of joint intentions theory, including STEAM. We then derive a domain-independent criterion for optimal communication and provide a comparative analysis of the two joint intentions instantiations. We have implemented a reusable, domain-independent software package based COM-MTDPs to analyze teamwork coordination strategies, and we demonstrate its use by encoding and evaluating the two joint intentions strategies within an example domain.",
"title": ""
},
{
"docid": "e4222dda5ecde102c0fdea0d48fb5baf",
"text": "The association of hematological malignancies with a mediastinal germ cell tumor (GCT) is very rare. We report one case of a young adult male with primary mediastinal GCT who subsequently developed acute megakaryoblastic leukemia involving isochromosome (12p). A 25-yr-old man had been diagnosed with a mediastinal GCT and underwent surgical resection and adjuvant chemotherapy. At 1 week after the last cycle of chemotherapy, his peripheral blood showed leukocytosis with blasts. A bone marrow study confirmed the acute megakaryoblastic leukemia. A cytogenetic study revealed a complex karyotype with i(12p). Although additional chemotherapy was administered, the patient could not attain remission and died of septic shock. This case was definitely distinct from therapy-related secondary leukemia in terms of clinical, morphologic, and cytogenetic features. To our knowledge, this is the first case report of a patient with mediastinal GCT subsequently developing acute megakaryoblastic leukemia involving i(12p) in Korea.",
"title": ""
},
{
"docid": "ac8d66a387f3c2b7fc6c579e33b27c64",
"text": "We revisit the relation between stock market volatility and macroeconomic activity using a new class of component models that distinguish short-run from long-run movements. We formulate models with the long-term component driven by inflation and industrial production growth that are in terms of pseudo out-of-sample prediction for horizons of one quarter at par or outperform more traditional time series volatility models at longer horizons. Hence, imputing economic fundamentals into volatility models pays off in terms of long-horizon forecasting. We also find that macroeconomic fundamentals play a significant role even at short horizons.",
"title": ""
},
{
"docid": "9779c9f4f15d9977a20592cabb777059",
"text": "Expert search or recommendation involves the retrieval of people (experts) in response to a query and on occasion, a given set of constraints. In this paper, we address expert recommendation in academic domains that are different from web and intranet environments studied in TREC. We propose and study graph-based models for expertise retrieval with the objective of enabling search using either a topic (e.g. \"Information Extraction\") or a name (e.g. \"Bruce Croft\"). We show that graph-based ranking schemes despite being \"generic\" perform on par with expert ranking models specific to topic-based and name-based querying.",
"title": ""
},
{
"docid": "05ea7a05b620c0dc0a0275f55becfbc3",
"text": "Automated story generation is the problem of automatically selecting a sequence of events, actions, or words that can be told as a story. We seek to develop a system that can generate stories by learning everything it needs to know from textual story corpora. To date, recurrent neural networks that learn language models at character, word, or sentence levels have had little success generating coherent stories. We explore the question of event representations that provide a midlevel of abstraction between words and sentences in order to retain the semantic information of the original data while minimizing event sparsity. We present a technique for preprocessing textual story data into event sequences. We then present a technique for automated story generation whereby we decompose the problem into the generation of successive events (event2event) and the generation of natural language sentences from events (event2sentence). We give empirical results comparing different event representations and their effects on event successor generation and the translation of events to natural language.",
"title": ""
},
{
"docid": "6efc8d18baa63945eac0c2394f29da19",
"text": "Deep learning subsumes algorithms that automatically learn compositional representations. The ability of these models to generalize well has ushered in tremendous advances in many fields such as natural language processing (NLP). Recent research in the software engineering (SE) community has demonstrated the usefulness of applying NLP techniques to software corpora. Hence, we motivate deep learning for software language modeling, highlighting fundamental differences between state-of-the-practice software language models and connectionist models. Our deep learning models are applicable to source code files (since they only require lexically analyzed source code written in any programming language) and other types of artifacts. We show how a particular deep learning model can remember its state to effectively model sequential data, e.g., streaming software tokens, and the state is shown to be much more expressive than discrete tokens in a prefix. Then we instantiate deep learning models and show that deep learning induces high-quality models compared to n-grams and cache-based n-grams on a corpus of Java projects. We experiment with two of the models' hyperparameters, which govern their capacity and the amount of context they use to inform predictions, before building several committees of software language models to aid generalization. Then we apply the deep learning models to code suggestion and demonstrate their effectiveness at a real SE task compared to state-of-the-practice models. Finally, we propose avenues for future work, where deep learning can be brought to bear to support model-based testing, improve software lexicons, and conceptualize software artifacts. Thus, our work serves as the first step toward deep learning software repositories.",
"title": ""
},
{
"docid": "082630a33c0cc0de0e60a549fc57d8e8",
"text": "Agricultural monitoring, especially in developing countries, can help prevent famine and support humanitarian efforts. A central challenge is yield estimation, i.e., predicting crop yields before harvest. We introduce a scalable, accurate, and inexpensive method to predict crop yields using publicly available remote sensing data. Our approach improves existing techniques in three ways. First, we forego hand-crafted features traditionally used in the remote sensing community and propose an approach based on modern representation learning ideas. We also introduce a novel dimensionality reduction technique that allows us to train a Convolutional Neural Network or Long-short Term Memory network and automatically learn useful features even when labeled training data are scarce. Finally, we incorporate a Gaussian Process component to explicitly model the spatio-temporal structure of the data and further improve accuracy. We evaluate our approach on county-level soybean yield prediction in the U.S. and show that it outperforms competing techniques.",
"title": ""
},
{
"docid": "833ec45dfe660377eb7367e179070322",
"text": "It was predicted that high self-esteem Ss (HSEs) would rationalize an esteem-threatening decision less than low self-esteem Ss (LSEs), because HSEs presumably had more favorable self-concepts with which to affirm, and thus repair, their overall sense of self-integrity. This prediction was supported in 2 experiments within the \"free-choice\" dissonance paradigm--one that manipulated self-esteem through personality feedback and the other that varied it through selection of HSEs and LSEs, but only when Ss were made to focus on their self-concepts. A 3rd experiment countered an alternative explanation of the results in terms of mood effects that may have accompanied the experimental manipulations. The results were discussed in terms of the following: (a) their support for a resources theory of individual differences in resilience to self-image threats--an extension of self-affirmation theory, (b) their implications for self-esteem functioning, and (c) their implications for the continuing debate over self-enhancement versus self-consistency motivation.",
"title": ""
},
{
"docid": "109a84ad1c1a541e2a0b4972b21caca2",
"text": "Our brain is a network. It consists of spatially distributed, but functionally linked regions that continuously share information with each other. Interestingly, recent advances in the acquisition and analysis of functional neuroimaging data have catalyzed the exploration of functional connectivity in the human brain. Functional connectivity is defined as the temporal dependency of neuronal activation patterns of anatomically separated brain regions and in the past years an increasing body of neuroimaging studies has started to explore functional connectivity by measuring the level of co-activation of resting-state fMRI time-series between brain regions. These studies have revealed interesting new findings about the functional connections of specific brain regions and local networks, as well as important new insights in the overall organization of functional communication in the brain network. Here we present an overview of these new methods and discuss how they have led to new insights in core aspects of the human brain, providing an overview of these novel imaging techniques and their implication to neuroscience. We discuss the use of spontaneous resting-state fMRI in determining functional connectivity, discuss suggested origins of these signals, how functional connections tend to be related to structural connections in the brain network and how functional brain communication may form a key role in cognitive performance. Furthermore, we will discuss the upcoming field of examining functional connectivity patterns using graph theory, focusing on the overall organization of the functional brain network. Specifically, we will discuss the value of these new functional connectivity tools in examining believed connectivity diseases, like Alzheimer's disease, dementia, schizophrenia and multiple sclerosis.",
"title": ""
},
{
"docid": "376c96bb9fc8c44e1489da94509116a6",
"text": "Predictive analytics techniques applied to a broad swath of student data can aid in timely intervention strategies to help prevent students from failing a course. This paper discusses a predictive analytic model that was created for the University of Phoenix. The purpose of the model is to identify students who are in danger of failing the course in which they are currently enrolled. Within the model's architecture, data from the learning management system (LMS), financial aid system, and student system are combined to calculate a likelihood of any given student failing the current course. The output can be used to prioritize students for intervention and referral to additional resources. The paper includes a discussion of the predictor and statistical tests used, validation procedures, and plans for implementation.",
"title": ""
},
{
"docid": "7019214df5d1f55b3ed6ce3405e648fc",
"text": "Cursive handwriting recognition is a challenging task for many real world applications such as document authentication, form processing, postal address recognition, reading machines for the blind, bank cheque recognition and interpretation of historical documents. Therefore, in the last few decades the researchers have put enormous effort to develop various techniques for handwriting segmentation and recognition. This review presents the segmentation strategies for automated recognition of off-line unconstrained cursive handwriting from static surfaces. This paper reviews many basic and advanced techniques and also compares the research results of various researchers in the domain of handwritten words segmentation.",
"title": ""
},
{
"docid": "5ff8d6415a2601afdc4a15c13819f5bb",
"text": "This paper studies the e ects of various types of online advertisements on purchase conversion by capturing the dynamic interactions among advertisement clicks themselves. It is motivated by the observation that certain advertisement clicks may not result in immediate purchases, but they stimulate subsequent clicks on other advertisements which then lead to purchases. We develop a stochastic model based on mutually exciting point processes, which model advertisement clicks and purchases as dependent random events in continuous time. We incorporate individual random e ects to account for consumer heterogeneity and cast the model in the Bayesian hierarchical framework. We propose a new metric of conversion probability to measure the conversion e ects of online advertisements. Simulation algorithms for mutually exciting point processes are developed to evaluate the conversion probability and for out-of-sample prediction. Model comparison results show the proposed model outperforms the benchmark model that ignores exciting e ects among advertisement clicks. We nd that display advertisements have relatively low direct e ect on purchase conversion, but they are more likely to stimulate subsequent visits through other advertisement formats. We show that the commonly used measure of conversion rate is biased in favor of search advertisements and underestimates the conversion e ect of display advertisements the most. Our model also furnishes a useful tool to predict future purchases and clicks on online",
"title": ""
},
{
"docid": "e095f0b15273dbf9abf3d03f3d6c49ff",
"text": "We study the problem of synthesizing a number of likely future frames from a single input image. In contrast to traditional methods that have tackled this problem in a deterministic or non-parametric way, we propose to model future frames in a probabilistic manner. Our probabilistic model makes it possible for us to sample and synthesize many possible future frames from a single input image. To synthesize realistic movement of objects, we propose a novel network structure, namely a Cross Convolutional Network; this network encodes image and motion information as feature maps and convolutional kernels, respectively. In experiments, our model performs well on synthetic data, such as 2D shapes and animated game sprites, and on real-world video frames. We present analyses of the learned network representations, showing it is implicitly learning a compact encoding of object appearance and motion. We also demonstrate a few of its applications, including visual analogy-making and video extrapolation.",
"title": ""
},
{
"docid": "4170ae2e077bde01f2cf1c80d60dfe63",
"text": "Y. WANG, E. GRANADOS, F. PEDACI, D. ALESSI, B. LUTHER, M. BERRILL AND J. J. ROCCA* National Science Foundation Engineering Research Center for Extreme Ultraviolet Science and Technology and Department of Electrical and Computer Engineering, Colorado State University, Fort Collins, Colorado 80523, USA Department of Physics, Colorado State University, Fort Collins, Colorado 80523, USA *e-mail: [email protected]",
"title": ""
}
] |
scidocsrr
|
dd4369e2ed1ed7d06ed03d47799d2d74
|
Fundamental frequency estimation by least-squares harmonic model fitting
|
[
{
"docid": "e104e306d90605a5bc9d853180567917",
"text": "An algorithm is presented for the estimation of the fundamental frequency (F0) of speech or musical sounds. It is based on the well-known autocorrelation method with a number of modifications that combine to prevent errors. The algorithm has several desirable features. Error rates are about three times lower than the best competing methods, as evaluated over a database of speech recorded together with a laryngograph signal. There is no upper limit on the frequency search range, so the algorithm is suited for high-pitched voices and music. The algorithm is relatively simple and may be implemented efficiently and with low latency, and it involves few parameters that must be tuned. It is based on a signal model (periodic signal) that may be extended in several ways to handle various forms of aperiodicity that occur in particular applications. Finally, interesting parallels may be drawn with models of auditory processing.",
"title": ""
}
] |
[
{
"docid": "89ead93b4f234e50b6d6e70ad4f54d67",
"text": "Clinical impressions of metabolic disease problems in dairy herds can be corroborated with herd-based metabolic testing. Ruminal pH should be evaluated in herds showing clinical signs associated with SARA (lame cows, thin cows, high herd removals or death loss across all stages of lactation, or milk fat depression). Testing a herd for the prevalence of SCK via blood BHB sampling in early lactation is useful in almost any dairy herd, and particularly if the herd is experiencing a high incidence of displaced abomasum or high removal rates of early lactation cows. If cows are experiencing SCK within the first 3 weeks of lactation, then consider NEFA testing of the prefresh cows to corroborate prefresh negative energy balance. Finally, monitoring cows on the day of calving for parturient hypocalcemia can provide early detection of diet-induced problems in calcium homeostasis. If hypocalcemia problems are present despite supplementing anionic salts before calving, then it may be helpful to evaluate mean urinary pH of a group of the prefresh cows. Quantitative testing strategies based on statistical analyses can be used to establish minimum sample sizes and interpretation guidelines for all of these tests.",
"title": ""
},
{
"docid": "4ef6adf0021e85d9bf94079d776d686d",
"text": "Recent years have brought a significant growth in the volume of research in sentiment analysis, mostly on highly subjective text types (movie or product reviews). The main difference these texts have with news articles is that their target is clearly defined and unique across the text. Following different annotation efforts and the analysis of the issues encountered, we realised that news opinion mining is different from that of other text types. We identified three subtasks that need to be addressed: definition of the target; separation of the good and bad news content from the good and bad sentiment expressed on the target; and analysis of clearly marked opinion that is expressed explicitly, not needing interpretation or the use of world knowledge. Furthermore, we distinguish three different possible views on newspaper articles – author, reader and text, which have to be addressed differently at the time of analysing sentiment. Given these definitions, we present work on mining opinions about entities in English language news, in which (a) we test the relative suitability of various sentiment dictionaries and (b) we attempt to separate positive or negative opinion from good or bad news. In the experiments described here, we tested whether or not subject domain-defining vocabulary should be ignored. Results showed that this idea is more appropriate in the context of news opinion mining and that the approaches taking this into consideration produce a better performance.",
"title": ""
},
{
"docid": "64b2edc9ea7a1542db72171f62bd4a6f",
"text": "Data stream management systems may be subject to higher input rates than their resources can handle. When overloaded, the system must shed load in order to maintain low-latency query results. In this paper, we describe a load shedding technique for queries consisting of one or more aggregate operators with sliding windows. We introduce a new type of drop operator, called a \"Window Drop\". This operator is aware of the window properties (i.e., window size and window slide) of its downstream aggregate operators in the query plan. Accordingly, it logically divides the input stream into windows and probabilistically decides which windows to drop. This decision is further encoded into tuples by marking the ones that are disallowed from starting new windows. Unlike earlier approaches, our approach preserves integrity of windows throughout a query plan, and always delivers subsets of original query answers with minimal degradation in result quality.",
"title": ""
},
{
"docid": "37484cdfa29c7021c07f307c695c0a77",
"text": "Deep neural networks have shown promising results for various clinical prediction tasks such as diagnosis, mortality prediction, predicting duration of stay in hospital, etc. However, training deep networks – such as those based on Recurrent Neural Networks (RNNs) – requires large labeled data, high computational resources, and significant hyperparameter tuning effort. In this work, we investigate as to what extent can transfer learning address these issues when using deep RNNs to model multivariate clinical time series. We consider transferring the knowledge captured in an RNN trained on several source tasks simultaneously using a large labeled dataset to build the model for a target task with limited labeled data. An RNN pre-trained on several tasks provides generic features, which are then used to build simpler linear models for new target tasks without training task-specific RNNs. For evaluation, we train a deep RNN to identify several patient phenotypes on time series from MIMIC-III database, and then use the features extracted using that RNN to build classifiers for identifying previously unseen phenotypes, and also for a seemingly unrelated task of in-hospital mortality. We demonstrate that (i) models trained on features extracted using pre-trained RNN outperform or, in the worst case, perform as well as task-specific RNNs; (ii) the models using features from pre-trained models are more robust to the size of labeled data than task-specific RNNs; and (iii) features extracted using pre-trained RNN are generic enough and perform better than typical statistical hand-crafted features.",
"title": ""
},
{
"docid": "ff49e2364503659cc520d7f2e5650906",
"text": "Linguists are increasingly using experiments to provide insight into linguistic representations and linguistic processing. But linguists are rarely trained to think experimentally, and designing a carefully controlled study is not trivial. This paper provides a practical introduction to experiments. We examine issues in experimental design and survey several methodologies. The goal is to provide readers with some tools for understanding and evaluating the rapidly growing literature using experimental methods, as well as for beginning to design experiments in their own research. © 2013 The Author. Language and Linguistics Compass © 2013 Blackwell Publishing Ltd.",
"title": ""
},
{
"docid": "0c61b8228c28c992746cc7b5cf3006c7",
"text": "Cytokinin phytohormones regulate a variety of developmental processes in the root such as meristem size, vascular pattern, and root architecture [1-3]. Long-distance transport of cytokinin is supported by the discovery of cytokinins in xylem and phloem sap [4] and by grafting experiments between wild-type and cytokinin biosynthesis mutants [5]. Acropetal transport of cytokinin (toward the shoot apex) has also been implicated in the control of shoot branching [6]. However, neither the mode of transport nor a developmental role has been shown for basipetal transport of cytokinin (toward the root apex). In this paper, we combine the use of a new technology that blocks symplastic connections in the phloem with a novel approach to visualize radiolabeled hormones in planta to examine the basipetal transport of cytokinin. We show that this occurs through symplastic connections in the phloem. The reduction of cytokinin levels in the phloem leads to a destabilization of the root vascular pattern in a manner similar to mutants affected in auxin transport or cytokinin signaling [7]. Together, our results demonstrate a role for long-distance basipetal transport of cytokinin in controlling polar auxin transport and maintaining the vascular pattern in the root meristem.",
"title": ""
},
{
"docid": "ec105642406ba9111485618e85f5b7cd",
"text": "We present simulations of evacuation processes using a recently introduced cellular automaton model for pedestrian dynamics. This model applies a bionics approach to describe the interaction between the pedestrians using ideas from chemotaxis. Here we study a rather simple situation, namely the evacuation from a large room with one or two doors. It is shown that the variation of the model parameters allows to describe different types of behaviour, from regular to panic. We find a nonmonotonic dependence of the evacuation times on the coupling constants. These times depend on the strength of the herding behaviour, with minimal evacuation times for some intermediate values of the couplings, i.e. a proper combination of herding and use of knowledge about the shortest way to the exit.",
"title": ""
},
{
"docid": "ee1688cc7c93f9880dca36bda7c1187a",
"text": "Drug abuse continues to be the major risk behaviour among youth and adolescents, with physical and mental health complications. Despite the known risks associated with the drugs, adolescents continue using these drugs. This paper reveals the prevalence of drug abuse among adolescent’s in Nigeria, problems associated with drugs abuse and reasons why adolescents are vulnerable to drugs abuse. Drug abuse causes a lot of risk among the adolescents; it results to gang formation, armed robbery, mental illness and cultism. Studies revealed that most of the drug addicts started smoking from their young age. As they grow older they seek new thrills and gradually go into hard drugs. There was an indication that 65 percent of high school students used drugs to have good time, 54 percent wanted to experiment to see what it is like, 20–40 percent used it to alter their moods. It concludes by prescribing some ways of curbing the menace arising from drug abuse.",
"title": ""
},
{
"docid": "2cc7e23666cdd2cd1ce13c7536269955",
"text": "Based on requirements of modern vehicle, invehicle Controller Area Network (CAN) architecture has been implemented. In order to reduce point to point wiring harness in vehicle automation, CAN is suggested as a means for data communication within the vehicle environment. The benefits of CAN bus based network over traditional point to point schemes will offer increased flexibility and expandability for future technology insertions. This paper describes the ARM7 based design and implementation of CAN Bus prototype for vehicle automation. It focus on hardware and software design of intelligent node. Hardware interface circuit mainly consists of MCP2515 stand alone CAN-Controller with SPI interface, LPC2148 microcontroller based on 32-bit ARM7 TDMI-S CPU and MCP2551 high speed CAN Transceiver. MCP2551 CAN Transceiver implements ISO-11898 standard physical layer requirements. The software design for CAN bus network are mainly the design of CAN bus data communication between nodes, and data processing for analog signals. The design of software communication module includes system initialization and CAN controller initialization unit, message sending unit, message receiving unit and the interrupt service unit. Keywords—Vehicle Automation, Controller Area Network (CAN), Electronic Control Unit (ECU), CANopen, LIN, SAE J1939.",
"title": ""
},
{
"docid": "95b9de761636ebc84ba2453791adaf05",
"text": "In this article, the term \"electric bicycle\" is used to describe \"electric-motor-powered bicycles,\" including both fully and partially motor-powered bicycles. Here, the electric bicycle market would benefit from further research both on the battery and on the drive technology and their use with electric bicycles. In the United States, electric bicycles are currently used most commonly for short trips to grocery stores or for leisurely rides.This article provides a systematic, comprehensive classification of electric bicycles that includes an overview of the state of the art of today's commercially available electric bicycles. The power requirements in different typical riding situations are also identified. The results are confirmed by experiments. From the results, the key parameters, needs, and challenges involved in improving the performance of electric bicycle are identified.The article gives the summary of the different results that can serve as a roadmap for such improvements. This summary includes both market trends and regulations and technical-science-related aspects.",
"title": ""
},
{
"docid": "f0e21ea25c795f110d3677e51835c099",
"text": "Objective: To assess the use of the Mini-Nutritional Assessment (MNA) in elderly orthopaedic patients.Design: An observation study assessing the nutritional status of female orthopaedic patients.Setting: The orthopaedic wards of the Royal Surrey County Hospital.Subjects: Forty-nine female patients aged 60–103 y; dietary records were obtained for 41 subjects and 36 subjects gave a blood sample for biochemical analysis.Major outcome methods: MNA questionnaire, anthropometry, plasma albumin, transferrin, C-reactive protein (CRP) levels and dietary analyses.Results: The group as a whole had low mean values for body weight, albumin and transferrin and high CRP levels. In addition, the group had mean energy intakes well below the estimated average requirement (EAR) and mean intakes of vitamin D, magnesium, potassium, selenium and non-starch polysaccharides (NSP) were below the lower reference nutrient intakes (LRNI). The MNA screening section categorized 69% of the patients as requiring a full assessment (scored 11 or below), but for the purposes of the study the MNA was completed on all patients. The MNA assessment categorized 16% of the group as ‘malnourished’ (scored<17 points), 47% as ‘at risk’ (scored 17.5–23.5) and 37% as ‘well nourished’ (scored>23.5). Significant differences were found between the malnourished and well nourished groups for body weight (P<0.001), body mass index (BMI) (P<0.001), demiquet (P<0.001) and mindex (P<0.001). Mean values for energy and nutrient intakes showed a clear stepwise increase across the three groups for all nutrients except sodium, with significant differences for protein (P<0.05), carbohydrate (P<0.05), riboflavin (P<0.05) niacin (P<0.05), pyridoxine (P<0.05), folate (P<0.05), calcium (P<0.05), selenium (P<0.05), iron (P<0.05) and NSP (P<0.05) intakes. Stepwise multiple regression analysis indicated that anthropometric assessments were the most predictive factors in the total MNA score. The sensitivity and specificity of the MNA was assessed in comparison with albumin levels, energy intake and mindex. The sensitivity of the MNA classification of those scoring less than 17 points in comparison with albumin levels, energy intake and mindex varied from 27 to 57% and the specificity was 66–100%. This was compared with the sensitivity and specificity of using a score of less than 23.5 on the MNA to predict malnourished individuals. Using this cut-off the sensitivity ranged from 75 to 100%, but the specificity declined to between 37 and 50%.Conclusions: The results suggest that the MNA is a useful diagnostic tool in the identification of elderly patients at risk from malnutrition and those who are malnourished in this hospital setting.Sponsorship: Nestlé Clinical Nutrition, Croydon, Surrey.European Journal of Clinical Nutrition (2000) 54, 555–562",
"title": ""
},
{
"docid": "9b05928e76a8ab764ea558947438694d",
"text": "Developing scalable solution algorithms is one of the central problems in computational game theory. We present an iterative algorithm for computing an exact Nash equilibrium for two-player zero-sum extensive-form games with imperfect information. Our approach combines two key elements: (1) the compact sequence-form representation of extensiveform games and (2) the algorithmic framework of double-oracle methods. The main idea of our algorithm is to restrict the game by allowing the players to play only selected sequences of available actions. After solving the restricted game, new sequences are added by finding best responses to the current solution using fast algorithms. We experimentally evaluate our algorithm on a set of games inspired by patrolling scenarios, board, and card games. The results show significant runtime improvements in games admitting an equilibrium with small support, and substantial improvement in memory use even on games with large support. The improvement in memory use is particularly important because it allows our algorithm to solve much larger game instances than existing linear programming methods. Our main contributions include (1) a generic sequence-form double-oracle algorithm for solving zero-sum extensive-form games; (2) fast methods for maintaining a valid restricted game model when adding new sequences; (3) a search algorithm and pruning methods for computing best-response sequences; (4) theoretical guarantees about the convergence of the algorithm to a Nash equilibrium; (5) experimental analysis of our algorithm on several games, including an approximate version of the algorithm.",
"title": ""
},
{
"docid": "79eafa032a3f0cb367a008e5a7345dd5",
"text": "Data Mining techniques are widely used in educational field to find new hidden patterns from student’s data. The hidden patterns that are discovered can be used to understand the problem arise in the educational field. This paper surveys the three elements needed to make prediction on Students’ Academic Performances which are parameters, methods and tools. This paper also proposes a framework for predicting the performance of first year bachelor students in computer science course. Naïve Bayes Classifier is used to extract patterns using the Data Mining Weka tool. The framework can be used as a basis for the system implementation and prediction of Students’ Academic Performance in Higher Learning Institutions.",
"title": ""
},
{
"docid": "494a0d57cb905f75428022ba030c225c",
"text": "Recent studies have demonstrated a relationship between fructose consumption and risk of developing metabolic syndrome. Mechanisms by which dietary fructose mediates metabolic changes are poorly understood. This study compared the effects of fructose, glucose and sucrose consumption on post-postprandial lipemia and low grade inflammation measured as hs-CRP. This was a randomized, single blinded, cross-over trial involving healthy subjects (n = 14). After an overnight fast, participants were given one of 3 different isocaloric drinks, containing 50 g of either fructose or glucose or sucrose dissolved in water. Blood samples were collected at baseline, 30, 60 and 120 minutes post intervention for the analysis of blood lipids, glucose, insulin and high sensitivity C-reactive protein (hs-CRP). Glucose and sucrose supplementation initially resulted in a significant increase in glucose and insulin levels compared to fructose supplementation and returned to near baseline values within 2 hours. Change in plasma cholesterol, LDL and HDL-cholesterol (measured as area under curve, AUC) was significantly higher when participants consumed fructose compared with glucose or sucrose (P < 0.05). AUC for plasma triglyceride levels however remained unchanged regardless of the dietary intervention. Change in AUC for hs-CRP was also significantly higher in subjects consuming fructose compared with those consuming glucose (P < 0.05), but not sucrose (P = 0.07). This study demonstrates that fructose as a sole source of energy modulates plasma lipids and hsCRP levels in healthy individuals. The significance of increase in HDL-cholesterol with a concurrent increase in LDL-cholesterol and elevated hs-CRP levels remains to be delineated when considering health effects of feeding fructose-rich diets. ACTRN 12614000431628",
"title": ""
},
{
"docid": "3820346d88cdd2186eb8493a456cff65",
"text": "This tutorial provides an overview of current evaluation techniques for schema matching and mapping tasks and tools, alongside existing and broadly used evaluation scenarios. The objective is to introduce the audience into the area of matching and mapping system evaluation, and to highlight the need for leveraging robust benchmarks and yardsticks for the comparison of the different matching and mapping tasks. Open research problems will be identified and presented. The tutorial is for both experienced researchers and unfamiliar investigators looking for a quick and complete introduction to the topic.",
"title": ""
},
{
"docid": "342a0f651fcced29849319eda07bd43c",
"text": "To test web applications, developers currently write test cases in frameworks such as Selenium. On the other hand, most web test generation techniques rely on a crawler to explore the dynamic states of the application. The first approach requires much manual effort, but benefits from the domain knowledge of the developer writing the test cases. The second one is automated and systematic, but lacks the domain knowledge required to be as effective. We believe combining the two can be advantageous. In this paper, we propose to (1) mine the human knowledge present in the form of input values, event sequences, and assertions, in the human-written test suites, (2) combine that inferred knowledge with the power of automated crawling, and (3) extend the test suite for uncovered/unchecked portions of the web application under test. Our approach is implemented in a tool called Testilizer. An evaluation of our approach indicates that Testilizer (1) outperforms a random test generator, and (2) on average, can generate test suites with improvements of up to 150% in fault detection rate and up to 30% in code coverage, compared to the original test suite.",
"title": ""
},
{
"docid": "141e9cfbbd4881a309edc3fe3e34b1f3",
"text": "OBJECTIVE\nTo evaluate the efficacy of neurodynamic techniques used as the sole therapeutic component compared with sham therapy in the treatment of mild and moderate carpal tunnel syndromes (CTS).\n\n\nDESIGN\nSingle-blinded, randomized placebo-controlled trial.\n\n\nSETTING\nSeveral medical clinics.\n\n\nPARTICIPANTS\nVolunteer sample of patients (N=250) diagnosed with CTS (n=150).\n\n\nINTERVENTIONS\nNeurodynamic techniques were used in the neurodynamic techniques group, and sham therapy was used in the sham therapy group. In the neurodynamic techniques group, neurodynamic sequences were used, and sliding and tension techniques were also used. In the sham therapy group, no neurodynamic sequences were used, and therapeutic procedures were performed in an intermediate position. Therapy was conducted twice weekly for a total of 20 therapy sessions.\n\n\nMAIN OUTCOME MEASURES\nSymptom severity (symptom severity scale) and functional status (functional status scale) of the Boston Carpal Tunnel Questionnaire.\n\n\nRESULTS\nA baseline assessment revealed no intergroup differences in all examined parameters (P>.05). After therapy, there was statistically significant intragroup improvement in nerve conduction study (sensory and motor conduction velocity and motor latency) only for the neurodynamic techniques group (P<.01). After therapy, intragroup statistically significant changes also occurred for the neurodynamic techniques group in pain assessment, 2-point discrimination sense, symptom severity scale, and functional status scale (in all cases P<.01). There were no group differences in assessment of grip and pinch strength (P>.05).\n\n\nCONCLUSIONS\nThe use of neurodynamic techniques has a better therapeutic effect than sham therapy in the treatment of mild and moderate forms of CTS.",
"title": ""
},
{
"docid": "87bd2fc53cbe92823af786e60e82f250",
"text": "Cyc is a bold attempt to assemble a massive knowledge base (on the order of 108 axioms) spanning human consensus knowledge. This article examines the need for such an undertaking and reviews the authos' efforts over the past five years to begin its construction. The methodology and history of the project are briefly discussed, followed by a more developed treatment of the current state of the representation language used (epistemological level), techniques for efficient inferencing and default reasoning (heuristic level), and the content and organization of the knowledge base.",
"title": ""
},
{
"docid": "1a7e2ca13d00b6476820ad82c2a68780",
"text": "To understand the dynamics of mental health, it is essential to develop measures for the frequency and the patterning of mental processes in every-day-life situations. The Experience-Sampling Method (ESM) is an attempt to provide a valid instrument to describe variations in self-reports of mental processes. It can be used to obtain empirical data on the following types of variables: a) frequency and patterning of daily activity, social interaction, and changes in location; b) frequency, intensity, and patterning of psychological states, i.e., emotional, cognitive, and conative dimensions of experience; c) frequency and patterning of thoughts, including quality and intensity of thought disturbance. The article reviews practical and methodological issues of the ESM and presents evidence for its short- and long-term reliability when used as an instrument for assessing the variables outlined above. It also presents evidence for validity by showing correlation between ESM measures on the one hand and physiological measures, one-time psychological tests, and behavioral indices on the other. A number of studies with normal and clinical populations that have used the ESM are reviewed to demonstrate the range of issues to which the technique can be usefully applied.",
"title": ""
}
] |
scidocsrr
|
f9f89d416dbb4afef830b1f35cbb4781
|
Joint Semantic Segmentation and Depth Estimation with Deep Convolutional Networks
|
[
{
"docid": "aef25b8bc64bb624fb22ce39ad7cad89",
"text": "Depth estimation and semantic segmentation are two fundamental problems in image understanding. While the two tasks are strongly correlated and mutually beneficial, they are usually solved separately or sequentially. Motivated by the complementary properties of the two tasks, we propose a unified framework for joint depth and semantic prediction. Given an image, we first use a trained Convolutional Neural Network (CNN) to jointly predict a global layout composed of pixel-wise depth values and semantic labels. By allowing for interactions between the depth and semantic information, the joint network provides more accurate depth prediction than a state-of-the-art CNN trained solely for depth prediction [6]. To further obtain fine-level details, the image is decomposed into local segments for region-level depth and semantic prediction under the guidance of global layout. Utilizing the pixel-wise global prediction and region-wise local prediction, we formulate the inference problem in a two-layer Hierarchical Conditional Random Field (HCRF) to produce the final depth and semantic map. As demonstrated in the experiments, our approach effectively leverages the advantages of both tasks and provides the state-of-the-art results.",
"title": ""
},
{
"docid": "92cc028267bc3f8d44d11035a8212948",
"text": "The limitations of current state-of-the-art methods for single-view depth estimation and semantic segmentations are closely tied to the property of perspective geometry, that the perceived size of the objects scales inversely with the distance. In this paper, we show that we can use this property to reduce the learning of a pixel-wise depth classifier to a much simpler classifier predicting only the likelihood of a pixel being at an arbitrarily fixed canonical depth. The likelihoods for any other depths can be obtained by applying the same classifier after appropriate image manipulations. Such transformation of the problem to the canonical depth removes the training data bias towards certain depths and the effect of perspective. The approach can be straight-forwardly generalized to multiple semantic classes, improving both depth estimation and semantic segmentation performance by directly targeting the weaknesses of independent approaches. Conditioning the semantic label on the depth provides a way to align the data to their physical scale, allowing to learn a more discriminative classifier. Conditioning depth on the semantic class helps the classifier to distinguish between ambiguities of the otherwise ill-posed problem. We tested our algorithm on the KITTI road scene dataset and NYU2 indoor dataset and obtained obtained results that significantly outperform current state-of-the-art in both single-view depth and semantic segmentation domain.",
"title": ""
}
] |
[
{
"docid": "9e37941d333338babef6a6e9e5ed5392",
"text": "--------------------------------------------------------------ABSTRACT------------------------------------------------------Using specialized knowledge and perspectives of a set in decision-makings about issues that are qualitative is very helpful. Delphi technique is a group knowledge acquisition method, which is also used for qualitative issue decision-makings. Delphi technique can be used for qualitative research that is exploratory and identifying the nature and fundamental elements of a phenomenon is a basis for study. It is a structured process for collecting data during the successive rounds and group consensus. Despite over a half century of using Delphi in scientific and academic studies, there are still several ambiguities about it. The main problem in using the Delphi technique is lack of a clear theoretical framework for using this technique. Therefore, this study aimed to present a comprehensive theoretical framework for the application of Delphi technique in qualitative research. In this theoretical framework, the application and consensus principles of Delphi technique in qualitative research were clearly explained.",
"title": ""
},
{
"docid": "49a538fc40d611fceddd589b0c9cb433",
"text": "Both intuition and creativity are associated with knowledge creation, yet a clear link between them has not been adequately established. First, the available empirical evidence for an underlying relationship between intuition and creativity is sparse in nature. Further, this evidence is arguable as the concepts are diversely operationalized and the measures adopted are often not validated sufficiently. Combined, these issues make the findings from various studies examining the link between intuition and creativity difficult to replicate. Nevertheless, the role of intuition in creativity should not be neglected as it is often reported to be a core component of the idea generation process, which in conjunction with idea evaluation are crucial phases of creative cognition. We review the prior research findings in respect of idea generation and idea evaluation from the view that intuition can be construed as the gradual accumulation of cues to coherence. Thus, we summarize the literature on what role intuitive processes play in the main stages of the creative problem-solving process and outline a conceptual framework of the interaction between intuition and creativity. Finally, we discuss the main challenges of measuring intuition as well as possible directions for future research.",
"title": ""
},
{
"docid": "c2b1dea961e3be5c4135f4eeba8c3495",
"text": "Background: Systematic literature reviews (SLRs) have become an established methodology in software engineering (SE) research however they can be very time consuming and error prone. Aim: The aims of this study are to identify and classify tools that can help to automate part or all of the SLR process within the SE domain. Method: A mapping study was performed using an automated search strategy plus snowballing to locate relevant papers. A set of known papers was used to validate the search string. Results: 14 papers were accepted into the final set. Eight presented text mining tools and six discussed the use of visualisation techniques. The stage most commonly targeted was study selection. Only two papers reported an independent evaluation of the tool presented. The majority were evaluated through small experiments and examples of their use. Conclusions: A variety of tools are available to support the SLR process although many are in the early stages of development and usage.",
"title": ""
},
{
"docid": "48fc7aabdd36ada053ebc2d2a1c795ae",
"text": "The Value-Based Software Engineering (VBSE) agenda described in the preceding article has the objectives of integrating value considerations into current and emerging software engineering principles and practices, and of developing an overall framework in which they compatibly reinforce each other. In this paper, we provide a case study illustrating some of the key VBSE practices, and focusing on a particular anomaly in the monitoring and control area: the \"Earned Value Management System.\" This is a most useful technique for monitoring and controlling the cost, schedule, and progress of a complex project. But it has absolutely nothing to say about the stakeholder value of the system being developed. The paper introduces an example order-processing software project, and shows how the use of Benefits Realization Analysis, stake-holder value proposition elicitation and reconciliation, and business case analysis provides a framework for stakeholder-earned-value monitoring and control.",
"title": ""
},
{
"docid": "7bda4b1ef78a70e651f74995b01c3c1e",
"text": "Given a graph, how can we extract good features for the nodes? For example, given two large graphs from the same domain, how can we use information in one to do classification in the other (i.e., perform across-network classification or transfer learning on graphs)? Also, if one of the graphs is anonymized, how can we use information in one to de-anonymize the other? The key step in all such graph mining tasks is to find effective node features. We propose ReFeX (Recursive Feature eXtraction), a novel algorithm, that recursively combines local (node-based) features with neighborhood (egonet-based) features; and outputs regional features -- capturing \"behavioral\" information. We demonstrate how these powerful regional features can be used in within-network and across-network classification and de-anonymization tasks -- without relying on homophily, or the availability of class labels. The contributions of our work are as follows: (a) ReFeX is scalable and (b) it is effective, capturing regional (\"behavioral\") information in large graphs. We report experiments on real graphs from various domains with over 1M edges, where ReFeX outperforms its competitors on typical graph mining tasks like network classification and de-anonymization.",
"title": ""
},
{
"docid": "0915e156af3bec6a401ec9bd10ab899f",
"text": "The ability to generalize from past experience to solve previously unseen tasks is a key research challenge in reinforcement learning (RL). In this paper, we consider RL tasks defined as a sequence of high-level instructions described by natural language and study two types of generalization: to unseen and longer sequences of previously seen instructions, and to sequences where the instructions themselves were previously not seen. We present a novel hierarchical deep RL architecture that consists of two interacting neural controllers: a meta controller that reads instructions and repeatedly communicates subtasks to a subtask controller that in turn learns to perform such subtasks. To generalize better to unseen instructions, we propose a regularizer that encourages to learn subtask embeddings that capture correspondences between similar subtasks. We also propose a new differentiable neural network architecture in the meta controller that learns temporal abstractions which makes learning more stable under delayed reward. Our architecture is evaluated on a stochastic 2D grid world and a 3D visual environment where the agent should execute a list of instructions. We demonstrate that the proposed architecture is able to generalize well over unseen instructions as well as longer lists of instructions.",
"title": ""
},
{
"docid": "bcda77a0de7423a2a4331ff87ce9e969",
"text": "Because of the increasingly competitive nature of the computer manufacturing industry, Compaq Computer Corporation has made some trend-setting changes in the way it does business. One of these changes is the extension of Compaq's call-logging sy ste problem-resolution component that assists customer support personnel in determining the resolution to a customer's questions and problems. Recently, Compaq extended its customer service to provide not only dealer support but also direct end user support; it is also accepting ownership of any Compaq customer's problems in a Banyan, Mi-crosoft, Novell, or SCO UNIX operating environment. One of the tools that makes this feat possible is SMART (support management automated reasoning technology). SMART is part of a Compaq strategy to increase the effectiveness of the customer support staff and reduce overall cost to the organization by retaining problem-solving knowledge and making it available to the entire support staff at the point it is needed.",
"title": ""
},
{
"docid": "25d14017403c96eceeafcbda1cbdfd2c",
"text": "We introduce a neural network model that marries together ideas from two prominent strands of research on domain adaptation through representation learning: structural correspondence learning (SCL, (Blitzer et al., 2006)) and autoencoder neural networks (NNs). Our model is a three-layer NN that learns to encode the non-pivot features of an input example into a lowdimensional representation, so that the existence of pivot features (features that are prominent in both domains and convey useful information for the NLP task) in the example can be decoded from that representation. The low-dimensional representation is then employed in a learning algorithm for the task. Moreover, we show how to inject pre-trained word embeddings into our model in order to improve generalization across examples with similar pivot features. We experiment with the task of cross-domain sentiment classification on 16 domain pairs and show substantial improvements over strong baselines.1",
"title": ""
},
{
"docid": "40fef2ba4ae0ecd99644cf26ed8fa37f",
"text": "Plant has plenty use in foodstuff, medicine and industry. And it is also vitally important for environmental protection. However, it is an important and difficult task to recognize plant species on earth. Designing a convenient and automatic recognition system of plants is necessary and useful since it can facilitate fast classifying plants, and understanding and managing them. In this paper, a leaf database from different plants is firstly constructed. Then, a new classification method, referred to as move median centers (MMC) hypersphere classifier, for the leaf database based on digital morphological feature is proposed. The proposed method is more robust than the one based on contour features since those significant curvature points are hard to find. Finally, the efficiency and effectiveness of the proposed method in recognizing different plants is demonstrated by experiments. 2006 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "3ce39c23ef5be4dd8fd10152ded95a6e",
"text": "Head pose and eye location for gaze estimation have been separately studied in numerous works in the literature. Previous research shows that satisfactory accuracy in head pose and eye location estimation can be achieved in constrained settings. However, in the presence of nonfrontal faces, eye locators are not adequate to accurately locate the center of the eyes. On the other hand, head pose estimation techniques are able to deal with these conditions; hence, they may be suited to enhance the accuracy of eye localization. Therefore, in this paper, a hybrid scheme is proposed to combine head pose and eye location information to obtain enhanced gaze estimation. To this end, the transformation matrix obtained from the head pose is used to normalize the eye regions, and in turn, the transformation matrix generated by the found eye location is used to correct the pose estimation procedure. The scheme is designed to enhance the accuracy of eye location estimations, particularly in low-resolution videos, to extend the operative range of the eye locators, and to improve the accuracy of the head pose tracker. These enhanced estimations are then combined to obtain a novel visual gaze estimation system, which uses both eye location and head information to refine the gaze estimates. From the experimental results, it can be derived that the proposed unified scheme improves the accuracy of eye estimations by 16% to 23%. Furthermore, it considerably extends its operating range by more than 15° by overcoming the problems introduced by extreme head poses. Moreover, the accuracy of the head pose tracker is improved by 12% to 24%. Finally, the experimentation on the proposed combined gaze estimation system shows that it is accurate (with a mean error between 2° and 5°) and that it can be used in cases where classic approaches would fail without imposing restraints on the position of the head.",
"title": ""
},
{
"docid": "c320b38a7a9181e13c07fc4da632cab5",
"text": "In this study, the authors provide a global assessment of the performance of different drought indices for monitoring drought impacts on several hydrological, agricultural, and ecological response variables. For this purpose, they compare the performance of several drought indices [the standardized precipitation index (SPI); four versions of the Palmer drought severity index (PDSI); and the standardized precipitation evapotranspiration index (SPEI)] to predict changes in streamflow, soil moisture, forest growth, and crop yield. The authors found a superior capability of the SPEI and the SPI drought * Corresponding author address: Sergio M. Vicente-Serrano, Instituto Pirenaico de Ecologı́a, Consejo Superior de Investigaciones Cientı́ficas (IPE-CSIC), Campus de Aula Dei, P.O. Box 13034, E-50059 Zaragoza, Spain. E-mail address: [email protected] Earth Interactions d Volume 16 (2012) d Paper No. 10 d Page 1 DOI: 10.1175/2012EI000434.1 Copyright 2012, Paper 16-010; 69313 words, 11 Figures, 0 Animations, 3 Tables. http://EarthInteractions.org indices, which are calculated on different time scales than the Palmer indices to capture the drought impacts on the aforementioned hydrological, agricultural, and ecological variables. They detected small differences in the comparative performance of the SPI and the SPEI indices, but the SPEI was the drought index that best captured the responses of the assessed variables to drought in summer, the season in which more drought-related impacts are recorded and in which drought monitoring is critical. Hence, the SPEI shows improved capability to identify drought impacts as compared with the SPI. In conclusion, it seems reasonable to recommend the use of the SPEI if the responses of the variables of interest to drought are not known a priori.",
"title": ""
},
{
"docid": "da1f5a7c5c39f50c70948eeba5cd9716",
"text": "Mushrooms have long been used not only as food but also for the treatment of various ailments. Although at its infancy, accumulated evidence suggested that culinary-medicinal mushrooms may play an important role in the prevention of many age-associated neurological dysfunctions, including Alzheimer's and Parkinson's diseases. Therefore, efforts have been devoted to a search for more mushroom species that may improve memory and cognition functions. Such mushrooms include Hericium erinaceus, Ganoderma lucidum, Sarcodon spp., Antrodia camphorata, Pleurotus giganteus, Lignosus rhinocerotis, Grifola frondosa, and many more. Here, we review over 20 different brain-improving culinary-medicinal mushrooms and at least 80 different bioactive secondary metabolites isolated from them. The mushrooms (either extracts from basidiocarps/mycelia or isolated compounds) reduced beta amyloid-induced neurotoxicity and had anti-acetylcholinesterase, neurite outgrowth stimulation, nerve growth factor (NGF) synthesis, neuroprotective, antioxidant, and anti-(neuro)inflammatory effects. The in vitro and in vivo studies on the molecular mechanisms responsible for the bioactive effects of mushrooms are also discussed. Mushrooms can be considered as useful therapeutic agents in the management and/or treatment of neurodegeneration diseases. However, this review focuses on in vitro evidence and clinical trials with humans are needed.",
"title": ""
},
{
"docid": "dd8fd90b433c3c260a04fe87ae548902",
"text": "Power control in a digital handset is practically implemented in a discrete fashion, and usually, such a discrete power control (DPC) scheme is suboptimal. In this paper, we first show that in a Poison-distributed ad hoc network, if DPC is properly designed with a certain condition satisfied, it can strictly work better than no power control (i.e., users use the same constant power) in terms of average signal-to-interference ratio, outage probability, and spatial reuse. This motivates us to propose an N-layer DPC scheme in a wireless clustered ad hoc network, where transmitters and their intended receivers in circular clusters are characterized by a Poisson cluster process on the plane ℝ2. The cluster of each transmitter is tessellated into N-layer annuli with transmit power Pi adopted if the intended receiver is located at the ith layer. Two performance metrics of transmission capacity (TC) and outage-free spatial reuse factor are redefined based on the N-layer DPC. The outage probability of each layer in a cluster is characterized and used to derive the optimal power scaling law Pi ∈ Θ(ηi-(α/2)), with ηi as the probability of selecting power Pi and α as the path loss exponent. Moreover, the specific design approaches to optimize Pi and N based on ηi are also discussed. Simulation results indicate that the proposed optimal N-layer DPC significantly outperforms other existing power control schemes in terms of TC and spatial reuse.",
"title": ""
},
{
"docid": "5552216832bb7315383d1c4f2bfe0635",
"text": "Semantic parsing maps sentences to formal meaning representations, enabling question answering, natural language interfaces, and many other applications. However, there is no agreement on what the meaning representation should be, and constructing a sufficiently large corpus of sentence-meaning pairs for learning is extremely challenging. In this paper, we argue that both of these problems can be avoided if we adopt a new notion of semantics. For this, we take advantage of symmetry group theory, a highly developed area of mathematics concerned with transformations of a structure that preserve its key properties. We define a symmetry of a sentence as a syntactic transformation that preserves its meaning. Semantically parsing a sentence then consists of inferring its most probable orbit under the language’s symmetry group, i.e., the set of sentences that it can be transformed into by symmetries in the group. The orbit is an implicit representation of a sentence’s meaning that suffices for most applications. Learning a semantic parser consists of discovering likely symmetries of the language (e.g., paraphrases) from a corpus of sentence pairs with the same meaning. Once discovered, symmetries can be composed in a wide variety of ways, potentially resulting in an unprecedented degree of immunity to syntactic variation.",
"title": ""
},
{
"docid": "8b50b28500a388d9913516e9dd5be719",
"text": "Scientific experiments and large-scale simulations produce massive amounts of data. Many of these scientific datasets are arrays, and are stored in file formats such as HDF5 and NetCDF. Although scientific data management systems, such as SciDB, are designed to manipulate arrays, there are challenges in integrating these systems into existing analysis workflows. Major barriers include the expensive task of preparing and loading data before querying, and converting the final results to a format that is understood by the existing post-processing and visualization tools. As a consequence, integrating a data management system into an existing scientific data analysis workflow is time-consuming and requires extensive user involvement. In this paper, we present the design of a new scientific data analysis system that efficiently processes queries directly over data stored in the HDF5 file format. This design choice eliminates the tedious and error-prone data loading process, and makes the query results readily available to the next processing steps of the analysis workflow. Our design leverages the increasing main memory capacities found in supercomputers through bitmap indexing and in-memory query execution. In addition, query processing over the HDF5 data format can be effortlessly parallelized to utilize the ample concurrency available in large-scale supercomputers and modern parallel file systems. We evaluate the performance of our system on a large supercomputing system and experiment with both a synthetic dataset and a real cosmology observation dataset. Our system frequently outperforms the relational database system that the cosmology team currently uses, and is more than 10X faster than Hive when processing data in parallel. Overall, by eliminating the data loading step, our query processing system is more effective in supporting in situ scientific analysis workflows.",
"title": ""
},
{
"docid": "8e099249047cb4e1550f8ddb287bddca",
"text": "Several arguments can be found in business intelligence literature that the use of business intelligence systems can bring multiple benefits, for example, via faster and easier access to information, savings in information technology (‘IT’) and greater customer satisfaction all the way through to the improved competitiveness of enterprises. Yet, most of these benefits are often very difficult to measure because of their indirect and delayed effects on business success. On top of the difficulties in justifying investments in information technology (‘IT’), particularly business intelligence (‘BI’), business executives generally want to know whether the investment is worth the money and if it can be economically justified. In looking for an answer to this question, various methods of evaluating investments can be employed. We can use the classic return on investment (‘ROI’) calculation, cost-benefit analysis, the net present value (‘NPV’) method, the internal rate of return (‘IRR’) and others. However, it often appears in business practice that the use of these methods alone is inappropriate, insufficient or unfeasible for evaluating an investment in business intelligence systems. Therefore, for this purpose, more appropriate methods are those based mainly on a qualitative approach, such as case studies, empirical analyses, user satisfaction analyses, and others that can be employed independently or can help us complete the whole picture in conjunction with the previously mentioned methods. Since there is no universal approach to the evaluation of an investment in information technology and business intelligence, it is necessary to approach each case in a different way based on the specific circumstances and purpose of the evaluation. This paper presents a case study in which the evaluation of an investment in on-line analytical processing (‘OLAP’) technology in the company Melamin was made through an",
"title": ""
},
{
"docid": "14ca9dfee206612e36cd6c3b3e0ca61e",
"text": "Radio-frequency identification (RFID) technology promises to revolutionize the way we track items in supply chain, retail store, and asset management applications. The size and different characteristics of RFID data pose many interesting challenges in the current data management systems. In this paper, we provide a brief overview of RFID technology and highlight a few of the data management challenges that we believe are suitable topics for exploratory research.",
"title": ""
},
{
"docid": "4a3ced0711361d3267745c2b29f78ee7",
"text": "Content delivery networks must balance a number of trade-offs when deciding how to direct a client to a CDN server. Whereas DNS-based redirection requires a complex global traffic manager, anycast depends on BGP to direct a client to a CDN front-end. Anycast is simple to operate, scalable, and naturally resilient to DDoS attacks. This simplicity, however, comes at the cost of precise control of client redirection. We examine the performance implications of using anycast in a global, latency-sensitive, CDN. We analyze millions of client-side measurements from the Bing search service to capture anycast versus unicast performance to nearby front-ends. We find that anycast usually performs well despite the lack of precise control but that it directs roughly 20% of clients to a suboptimal front-end. We also show that the performance of these clients can be improved through a simple history-based prediction scheme.",
"title": ""
},
{
"docid": "006793685095c0772a1fe795d3ddbd76",
"text": "Legislators, designers of legal information systems, as well as citizens face often problems due to the interdependence of the laws and the growing number of references needed to interpret them. In this paper, we introduce the ”Legislation Network” as a novel approach to address several quite challenging issues for identifying and quantifying the complexity inside the Legal Domain. We have collected an extensive data set of a more than 60-year old legislation corpus, as published in the Official Journal of the European Union, and we further analysed it as a complex network, thus gaining insight into its topological structure. Among other issues, we have performed a temporal analysis of the evolution of the Legislation Network, as well as a robust resilience test to assess its vulnerability under specific cases that may lead to possible breakdowns. Results are quite promising, showing that our approach can lead towards an enhanced explanation in respect to the structure and evolution of legislation properties.",
"title": ""
},
{
"docid": "3e63c8a5499966f30bd3e6b73494ff82",
"text": "Events can be understood in terms of their temporal structure. The authors first draw on several bodies of research to construct an analysis of how people use event structure in perception, understanding, planning, and action. Philosophy provides a grounding for the basic units of events and actions. Perceptual psychology provides an analogy to object perception: Like objects, events belong to categories, and, like objects, events have parts. These relationships generate 2 hierarchical organizations for events: taxonomies and partonomies. Event partonomies have been studied by looking at how people segment activity as it happens. Structured representations of events can relate partonomy to goal relationships and causal structure; such representations have been shown to drive narrative comprehension, memory, and planning. Computational models provide insight into how mental representations might be organized and transformed. These different approaches to event structure converge on an explanation of how multiple sources of information interact in event perception and conception.",
"title": ""
}
] |
scidocsrr
|
0493c7dd3082a6c60012cc065512d542
|
Lifting from the Deep: Convolutional 3D Pose Estimation from a Single Image
|
[
{
"docid": "79cffed53f36d87b89577e96a2b2e713",
"text": "Human pose estimation has made significant progress during the last years. However current datasets are limited in their coverage of the overall pose estimation challenges. Still these serve as the common sources to evaluate, train and compare different models on. In this paper we introduce a novel benchmark \"MPII Human Pose\" that makes a significant advance in terms of diversity and difficulty, a contribution that we feel is required for future developments in human body models. This comprehensive dataset was collected using an established taxonomy of over 800 human activities [1]. The collected images cover a wider variety of human activities than previous datasets including various recreational, occupational and householding activities, and capture people from a wider range of viewpoints. We provide a rich set of labels including positions of body joints, full 3D torso and head orientation, occlusion labels for joints and body parts, and activity labels. For each image we provide adjacent video frames to facilitate the use of motion information. Given these rich annotations we perform a detailed analysis of leading human pose estimation approaches and gaining insights for the success and failures of these methods.",
"title": ""
},
{
"docid": "ff39f9fdb98981137f93d156150e1b83",
"text": "We describe a method for recovering 3D human body pose from silhouettes. Our model is based on learning a latent space using the Gaussian Process Latent Variable Model (GP-LVM) [1] encapsulating both pose and silhouette features Our method is generative, this allows us to model the ambiguities of a silhouette representation in a principled way. We learn a dynamical model over the latent space which allows us to disambiguate between ambiguous silhouettes by temporal consistency. The model has only two free parameters and has several advantages over both regression approaches and other generative methods. In addition to the application shown in this paper the suggested model is easily extended to multiple observation spaces without constraints on type.",
"title": ""
}
] |
[
{
"docid": "033253834167cecbcc2658c8ba22aa18",
"text": "Semi-supervised image classification aims to classify a large quantity of unlabeled images by typically harnessing scarce labeled images. Existing semi-supervised methods often suffer from inadequate classification accuracy when encountering difficult yet critical images, such as outliers, because they treat all unlabeled images equally and conduct classifications in an imperfectly ordered sequence. In this paper, we employ the curriculum learning methodology by investigating the difficulty of classifying every unlabeled image. The reliability and the discriminability of these unlabeled images are particularly investigated for evaluating their difficulty. As a result, an optimized image sequence is generated during the iterative propagations, and the unlabeled images are logically classified from simple to difficult. Furthermore, since images are usually characterized by multiple visual feature descriptors, we associate each kind of features with a teacher, and design a multi-modal curriculum learning (MMCL) strategy to integrate the information from different feature modalities. In each propagation, each teacher analyzes the difficulties of the currently unlabeled images from its own modality viewpoint. A consensus is subsequently reached among all the teachers, determining the currently simplest images (i.e., a curriculum), which are to be reliably classified by the multi-modal learner. This well-organized propagation process leveraging multiple teachers and one learner enables our MMCL to outperform five state-of-the-art methods on eight popular image data sets.",
"title": ""
},
{
"docid": "bd8788c3d4adc5f3671f741e884c7f34",
"text": "We present a method for learning an embedding that places images of humans in similar poses nearby. This embedding can be used as a direct method of comparing images based on human pose, avoiding potential challenges of estimating body joint positions. Pose embedding learning is formulated under a triplet-based distance criterion. A deep architecture is used to allow learning of a representation capable of making distinctions between different poses. Experiments on human pose matching and retrieval from video data demonstrate the potential of the method.",
"title": ""
},
{
"docid": "bdb9f3822ef89276b1aa1d493d1f9379",
"text": "Individual performance is of high relevance for organizations and individuals alike. Showing high performance when accomplishing tasks results in satisfaction, feelings of selfefficacy and mastery (Bandura, 1997; Kanfer et aL, 2005). Moreover, high performing individuals get promoted, awarded and honored. Career opportunities for individuals who perform well are much better than those of moderate or low performing individuals (Van Scotter et aI., 2000). This chapter summarizes research on individual performance and addresses performance as a multi-dimensional and dynamic concept. First, we define the concept of performance, next we discuss antecedents of between-individual variation of performance, and describe intraindividual change and variability in performance, and finally, we present a research agenda for future research.",
"title": ""
},
{
"docid": "9d2ec490b7efb23909abdbf5f209f508",
"text": "Terrestrial Laser scanner (TLS) has been widely used in our recent architectural heritage projects and huge quantity of point cloud data was gotten. In order to process the huge quantity of point cloud data effectively and reconstruct their 3D models, more effective methods should be developed based on existing automatic or semiautomatic point cloud processing algorithms. Here introduce a new algorithm for rapid extracting the pillar features of Chinese ancient buildings from their point cloud data, the algorithm has the least human interaction in the data processing and is more efficient to extract pillars from point cloud data than existing feature extracting algorithms. With this algorithm we identify the pillar features by dividing the point cloud into slices firstly, and then get the projective parameters of pillar objects in selected slices, the next compare the local projective parameters in adjacent slices, the next combine them to get the global parameters of the pillars and at last reconstruct the 3d pillar models.",
"title": ""
},
{
"docid": "0d23946f8a94db5943deee81deb3f322",
"text": "The Spatial Semantic Hierarchy is a model of knowledge of large-scale space consisting of multiple interacting representations, both qualitative and quantitative. The SSH is inspired by the properties of the human cognitive map, and is intended to serve both as a model of the human cognitive map and as a method for robot exploration and map-building. The multiple levels of the SSH express states of partial knowledge, and thus enable the human or robotic agent to deal robustly with uncertainty during both learning and problem-solving. The control level represents useful patterns of sensorimotor interaction with the world in the form of trajectory-following and hill-climbing control laws leading to locally distinctive states. Local geometric maps in local frames of reference can be constructed at the control level to serve as observers for control laws in particular neighborhoods. The causal level abstracts continuous behavior among distinctive states into a discrete model consisting of states linked by actions. The topological level introduces the external ontology of places, paths and regions by abduction to explain the observed pattern of states and actions at the causal level. Quantitative knowledge at the control, causal and topological levels supports a “patchwork map” of local geometric frames of reference linked by causal and topological connections. The patchwork map can be merged into a single global frame of reference at the metrical level when sufficient information and computational resources are available. We describe the assumptions and guarantees behind the generality of the SSH across environments and sensorimotor systems. Evidence is presented from several partial implementations of the SSH on simulated and physical robots. 2000 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "6f95d8bcaefcc99209279dadb1beb0a6",
"text": "Public cloud software marketplaces already offer users a wealth of choice in operating systems, database management systems, financial software, and virtual networking, all deployable and configurable at the click of a button. Unfortunately, this level of customization has not extended to emerging hypervisor-level services, partly because traditional virtual machines (VMs) are fully controlled by only one hypervisor at a time. Currently, a VM in a cloud platform cannot concurrently use hypervisorlevel services from multiple third-parties in a compartmentalized manner. We propose the notion of a multihypervisor VM, which is an unmodified guest that can simultaneously use services from multiple coresident, but isolated, hypervisors. We present a new virtualization architecture, called Span virtualization, that leverages nesting to allow multiple hypervisors to concurrently control a guest’s memory, virtual CPU, and I/O resources. Our prototype of Span virtualization on the KVM/QEMU platform enables a guest to use services such as introspection, network monitoring, guest mirroring, and hypervisor refresh, with performance comparable to traditional nested VMs.",
"title": ""
},
{
"docid": "8dcb268612ba90ac420ebaa89becb879",
"text": "Recognition of a human's continuous emotional states in real time plays an important role in machine emotional intelligence and human-machine interaction. Existing real-time emotion recognition systems use stimuli with low ecological validity (e.g., picture, sound) to elicit emotions and to recognise only valence and arousal. To overcome these limitations, in this paper, we construct a standardised database of 16 emotional film clips that were selected from over one thousand film excerpts. Based on emotional categories that are induced by these film clips, we propose a real-time movie-induced emotion recognition system for identifying an individual's emotional states through the analysis of brain waves. Thirty participants took part in this study and watched 16 standardised film clips that characterise real-life emotional experiences and target seven discrete emotions and neutrality. Our system uses a 2-s window and a 50 percent overlap between two consecutive windows to segment the EEG signals. Emotional states, including not only the valence and arousal dimensions but also similar discrete emotions in the valence-arousal coordinate space, are predicted in each window. Our real-time system achieves an overall accuracy of 92.26 percent in recognising high-arousal and valenced emotions from neutrality and 86.63 percent in recognising positive from negative emotions. Moreover, our system classifies three positive emotions (joy, amusement, tenderness) with an average of 86.43 percent accuracy and four negative emotions (anger, disgust, fear, sadness) with an average of 65.09 percent accuracy. These results demonstrate the advantage over the existing state-of-the-art real-time emotion recognition systems from EEG signals in terms of classification accuracy and the ability to recognise similar discrete emotions that are close in the valence-arousal coordinate space.",
"title": ""
},
{
"docid": "225e7b608d06d218144853b900d40fd1",
"text": "Deep neural networks require a large amount of labeled training data during supervised learning. However, collecting and labeling so much data might be infeasible in many cases. In this paper, we introduce a deep transfer learning scheme, called selective joint fine-tuning, for improving the performance of deep learning tasks with insufficient training data. In this scheme, a target learning task with insufficient training data is carried out simultaneously with another source learning task with abundant training data. However, the source learning task does not use all existing training data. Our core idea is to identify and use a subset of training images from the original source learning task whose low-level characteristics are similar to those from the target learning task, and jointly fine-tune shared convolutional layers for both tasks. Specifically, we compute descriptors from linear or nonlinear filter bank responses on training images from both tasks, and use such descriptors to search for a desired subset of training samples for the source learning task. Experiments demonstrate that our deep transfer learning scheme achieves state-of-the-art performance on multiple visual classification tasks with insufficient training data for deep learning. Such tasks include Caltech 256, MIT Indoor 67, and fine-grained classification problems (Oxford Flowers 102 and Stanford Dogs 120). In comparison to fine-tuning without a source domain, the proposed method can improve the classification accuracy by 2% - 10% using a single model. Codes and models are available at https://github.com/ZYYSzj/Selective-Joint-Fine-tuning.",
"title": ""
},
{
"docid": "71a262b1c91c89f379527b271e45e86e",
"text": "Geospatial object detection from high spatial resolution (HSR) remote sensing imagery is a heated and challenging problem in the field of automatic image interpretation. Despite convolutional neural networks (CNNs) having facilitated the development in this domain, the computation efficiency under real-time application and the accurate positioning on relatively small objects in HSR images are two noticeable obstacles which have largely restricted the performance of detection methods. To tackle the above issues, we first introduce semantic segmentation-aware CNN features to activate the detection feature maps from the lowest level layer. In conjunction with this segmentation branch, another module which consists of several global activation blocks is proposed to enrich the semantic information of feature maps from higher level layers. Then, these two parts are integrated and deployed into the original single shot detection framework. Finally, we use the modified multi-scale feature maps with enriched semantics and multi-task training strategy to achieve end-to-end detection with high efficiency. Extensive experiments and comprehensive evaluations on a publicly available 10-class object detection dataset have demonstrated the superiority of the presented method.",
"title": ""
},
{
"docid": "99d84e588208ac09629a02a8349c560a",
"text": "Psilocybin (4-phosphoryloxy-N,N-dimethyltryptamine) is the major psychoactive alkaloid of some species of mushrooms distributed worldwide. These mushrooms represent a growing problem regarding hallucinogenic drug abuse. Despite its experimental medical use in the 1960s, only very few pharmacological data about psilocybin were known until recently. Because of its still growing capacity for abuse and the widely dispersed data this review presents all the available pharmacological data about psilocybin.",
"title": ""
},
{
"docid": "31bbb42b7b1a8723f5e37c1f93fef7be",
"text": "Future 5G and Internet of Things (IoT) applications will heavily rely on long-range communication technologies such as low-power wireless area networks (LPWANs). In particular, LoRaWAN built on LoRa physical layer is gathering increasing interests, both from academia and industries, for enabling low-cost energy efficient IoT wireless sensor networks for, e.g., environmental monitoring over wide areas. While its communication range may go up to 20 kilometers, the achievable bit rates in LoRaWAN are limited to a few kilobits per second. In the event of collisions, the perceived rate is further reduced due to packet loss and retransmissions. Firstly, to alleviate the harmful impacts of collisions, we propose a decoding algorithm that enables to resolve several superposed LoRa signals. Our proposed method exploits the slight desynchronization of superposed signals and specific features of LoRa physical layer. Secondly, we design a full MAC protocol enabling collision resolution. The simulation results demonstrate that the proposed method outperforms conventional LoRaWAN jointly in terms of system throughput, energy efficiency as well as delay. These results show that our scheme is well suited for 5G and IoT systems, as one of their major goals is to provide the best trade-off among these performance objectives.",
"title": ""
},
{
"docid": "6c29473469f392079fa8406419190116",
"text": "The five-factor model of personality is a hierarchical organization of personality traits in terms of five basic dimensions: Extraversion, Agreeableness, Conscientiousness, Neuroticism, and Openness to Experience. Research using both natural language adjectives and theoretically based personality questionnaires supports the comprehensiveness of the model and its applicability across observers and cultures. This article summarizes the history of the model and its supporting evidence; discusses conceptions of the nature of the factors; and outlines an agenda for theorizing about the origins and operation of the factors. We argue that the model should prove useful both for individual assessment and for the elucidation of a number of topics of interest to personality psychologists.",
"title": ""
},
{
"docid": "6b0bb5e87efacf0008918380f98cd5ae",
"text": "This paper discusses Low Power Wide Area Network technologies. The purpose of this work is a presentation of these technologies in a mutual context in order to analyse their coexistence. In this work there are described Low Power Wide Area Network terms and their representatives LoRa, Sigfox and IQRF, of which characteristics, topology and some significant technics are inspected. The technologies are also compared together in a frequency spectrum in order to detect risk bands causing collisions. A potential increased risk of collisions is found around 868.2 MHz. The main contribution of this paper is a summary of characteristics, which have an influence on the resulting coexistence.",
"title": ""
},
{
"docid": "c2baa873bc2850b14b3868cdd164019f",
"text": "It is expensive to obtain labeled real-world visual data for use in training of supervised algorithms. Therefore, it is valuable to leverage existing databases of labeled data. However, the data in the source databases is often obtained under conditions that differ from those in the new task. Transfer learning provides techniques for transferring learned knowledge from a source domain to a target domain by finding a mapping between them. In this paper, we discuss a method for projecting both source and target data to a generalized subspace where each target sample can be represented by some combination of source samples. By employing a low-rank constraint during this transfer, the structure of source and target domains are preserved. This approach has three benefits. First, good alignment between the domains is ensured through the use of only relevant data in some subspace of the source domain in reconstructing the data in the target domain. Second, the discriminative power of the source domain is naturally passed on to the target domain. Third, noisy information will be filtered out during knowledge transfer. Extensive experiments on synthetic data, and important computer vision problems such as face recognition application and visual domain adaptation for object recognition demonstrate the superiority of the proposed approach over the existing, well-established methods.",
"title": ""
},
{
"docid": "7359729fe4bb369798c05c8c7c258111",
"text": "By considering various situations of climatologically phenomena affecting local weather conditions in various parts of the world. These weather conditions have a direct effect on crop yield. Various researches have been done exploring the connections between large-scale climatologically phenomena and crop yield. Artificial neural networks have been demonstrated to be powerful tools for modeling and prediction, to increase their effectiveness. Crop prediction methodology is used to predict the suitable crop by sensing various parameter of soil and also parameter related to atmosphere. Parameters like type of soil, PH, nitrogen, phosphate, potassium, organic carbon, calcium, magnesium, sulphur, manganese, copper, iron, depth, temperature, rainfall, humidity. For that purpose we are used artificial neural network (ANN).",
"title": ""
},
{
"docid": "578696bf921cc5d4e831786c67845346",
"text": "Identifying and monitoring multiple disease biomarkers and other clinically important factors affecting the course of a disease, behavior or health status is of great clinical relevance. Yet conventional statistical practice generally falls far short of taking full advantage of the information available in multivariate longitudinal data for tracking the course of the outcome of interest. We demonstrate a method called multi-trajectory modeling that is designed to overcome this limitation. The method is a generalization of group-based trajectory modeling. Group-based trajectory modeling is designed to identify clusters of individuals who are following similar trajectories of a single indicator of interest such as post-operative fever or body mass index. Multi-trajectory modeling identifies latent clusters of individuals following similar trajectories across multiple indicators of an outcome of interest (e.g., the health status of chronic kidney disease patients as measured by their eGFR, hemoglobin, blood CO2 levels). Multi-trajectory modeling is an application of finite mixture modeling. We lay out the underlying likelihood function of the multi-trajectory model and demonstrate its use with two examples.",
"title": ""
},
{
"docid": "4ade01af5fd850722fd690a5d8f938f4",
"text": "IT may appear blasphemous to paraphrase the title of the classic article of Vannevar Bush but it may be a mitigating factor that it is done to pay tribute to another legendary scientist, Eugene Garfield. His ideas of citationbased searching, resource discovery and quantitative evaluation of publications serve as the basis for many of the most innovative and powerful online information services these days. Bush 60 years ago contemplated – among many other things – an information workstation, the Memex. A researcher would use it to annotate, organize, link, store, and retrieve microfilmed documents. He is acknowledged today as the forefather of the hypertext system, which in turn, is the backbone of the Internet. He outlined his thoughts in an essay published in the Atlantic Monthly. Maybe because of using a nonscientific outlet the paper was hardly quoted and cited in scholarly and professional journals for 30 years. Understandably, the Atlantic Monthly was not covered by the few, specialized abstracting and indexing databases of scientific literature. Such general interest magazines are not source journals in either the Web of Science (WoS), or Scopus databases. However, records for items which cite the ‘As We May Think’ article of Bush (also known as the ‘Memex’ paper) are listed with appropriate bibliographic information. Google Scholar (G-S) lists the records for the Memex paper and many of its citing papers. It is a rather confusing list with many dead links or otherwise dysfunctional links, and a hodge-podge of information related to Bush. It is quite telling that (based on data from the 1945– 2005 edition of WoS) the article of Bush gathered almost 90% of all its 712 citations in WoS between 1975 and 2005, peaking in 1999 with 45 citations in that year alone. Undoubtedly, this proportion is likely to be distorted because far fewer source articles from far fewer journals were processed by the Institute for Scientific Information for 1945–1974 than for 1975–2005. Scopus identifies 267 papers citing the Bush article. The main reason for the discrepancy is that Scopus includes cited references only from 1995 onward, while WoS does so from 1945. Bush’s impatience with the limitations imposed by the traditional classification and indexing tools and practices of the time is palpable. It is worth to quote it as a reminder. Interestingly, he brings up the terms ‘web of trails’ and ‘association of thoughts’ which establishes the link between him and Garfield.",
"title": ""
},
{
"docid": "7435d1591725bbcd86fe93c607d5683c",
"text": "This study evaluated the role of breast magnetic resonance (MR) imaging in the selective study breast implant integrity. We retrospectively analysed the signs of breast implant rupture observed at breast MR examinations of 157 implants and determined the sensitivity and specificity of the technique in diagnosing implant rupture by comparing MR data with findings at surgical explantation. The linguine and the salad-oil signs were statistically the most significant signs for diagnosing intracapsular rupture; the presence of siliconomas/seromas outside the capsule and/or in the axillary lymph nodes calls for immediate explantation. In agreement with previous reports, we found a close correlation between imaging signs and findings at explantation. Breast MR imaging can be considered the gold standard in the study of breast implants. Scopo del nostro lavoro è stato quello di valutare il ruolo della risonanza magnetica (RM) mammaria nello studio selettivo dell’integrità degli impianti protesici. è stata eseguita una valutazione retrospettiva dei segni di rottura documentati all’esame RM effettuati su 157 protesi mammarie, al fine di stabilire la sensibilità e specificità nella diagnosi di rottura protesica, confrontando tali dati RM con i reperti riscontrati in sala operatoria dopo la rimozione della protesi stessa. Il linguine sign e il salad-oil sign sono risultati i segni statisticamente più significativi nella diagnosi di rottura protesica intracapsulare; la presenza di siliconomi/sieromi extracapsulari e/o nei linfonodi ascellari impone l’immediato intervento chirurgico di rimozione della protesi rotta. I dati ottenuti dimostrano, in accordo con la letteratura, una corrispondenza tra i segni dell’imaging e i reperti chirurgici, confermando il ruolo di gold standard della RM nello studio delle protesi mammarie.",
"title": ""
},
{
"docid": "390ebc9975960ff7a817efc8412bd8da",
"text": "OBJECTIVE\nPhysical activity is critical for health, yet only about half of the U.S. adult population meets basic aerobic physical activity recommendations and almost a third are inactive. Mindfulness meditation is gaining attention for its potential to facilitate health-promoting behavior and may address some limitations of existing interventions for physical activity. However, little evidence exists on mindfulness meditation and physical activity. This study assessed whether mindfulness meditation is uniquely associated with physical activity in a nationally representative sample.\n\n\nMETHOD\nCross-sectional data from the adult sample (N = 34,525) of the 2012 National Health Interview Survey were analyzed. Logistic regression models tested whether past-year use of mindfulness meditation was associated with (a) inactivity and (b) meeting aerobic physical activity recommendations, after accounting for sociodemographics, another health-promoting behavior, and 2 other types of meditation. Data were weighted to represent the U.S. civilian, noninstitutionalized adult population.\n\n\nRESULTS\nAccounting for covariates, U.S. adults who practiced mindfulness meditation in the past year were less likely to be inactive and more likely to meet physical activity recommendations. Mindfulness meditation showed stronger associations with these indices of physical activity than the 2 other types of meditation.\n\n\nCONCLUSIONS\nThese results suggest that mindfulness meditation specifically, beyond meditation in general, is associated with physical activity in U.S adults. Future research should test whether intervening with mindfulness meditation-either as an adjunctive component or on its own-helps to increase or maintain physical activity. (PsycINFO Database Record",
"title": ""
}
] |
scidocsrr
|
6dc25cce5e69a89a3b8e06723b61693b
|
Predictive translation memory: a mixed-initiative system for human language translation
|
[
{
"docid": "90fc941f6db85dd24b47fa06dd0bb0aa",
"text": "Recent debate has centered on the relative promise of focusinguser-interface research on developing new metaphors and tools thatenhance users abilities to directly manipulate objects versusdirecting effort toward developing interface agents that provideautomation. In this paper, we review principles that show promisefor allowing engineers to enhance human-computer interactionthrough an elegant coupling of automated services with directmanipulation. Key ideas will be highlighted in terms of the Lookoutsystem for scheduling and meeting management.",
"title": ""
}
] |
[
{
"docid": "d2d4b51e3d7d0172946140dacad82db8",
"text": "The integration of supply chains offers many benefits; yet, it may also render organisations more vulnerable to electronic fraud (e-fraud). E-fraud can drain on organisations’ financial resources, and can have a significant adverse effect on the ability to achieve their strategic objectives. Therefore, efraud control should be part of corporate board-level due diligence, and should be integrated into organisations’ practices and business plans. Management is responsible for taking into consideration the relevant cultural, strategic and implementation elements that inter-relate with each other and to coordinating the human, technological and financial resources necessary to designing and implementing policies and procedures for controlling e-fraud. Due to the characteristics of integrated supply chains, a move from the traditional vertical approach to a systemic, horizontal-vertical approach is necessary. Although the e-fraud risk cannot be eliminated, risk mitigation policies and processes tailored to an organisation’s particular vulnerabilities can significantly reduce the risk and may even preclude certain classes of frauds. In this paper, a conceptual framework of e-fraud control in an integrated supply chain is proposed. The proposed conceptual framework can help managers and practitioners better understand the issues and plan the activities involved in a systemic, horizontal-vertical approach to e-fraud control in an integrated supply chain, and can be a basis upon which empirical studies can be build.",
"title": ""
},
{
"docid": "0a31ab53b887cf231d7ca1a286763e5f",
"text": "Humans acquire their most basic physical concepts early in development, but continue to enrich and expand their intuitive physics throughout life as they are exposed to more and varied dynamical environments. We introduce a hierarchical Bayesian framework to explain how people can learn physical theories across multiple timescales and levels of abstraction. In contrast to previous Bayesian models of theory acquisition (Tenenbaum, Kemp, Griffiths, & Goodman, 2011), we work with more expressive probabilistic program representations suitable for learning the forces and properties that govern how objects interact in dynamic scenes unfolding over time. We compare our model and human learners on a challenging task of inferring novel physical laws in microworlds given short movies. People are generally able to perform this task and behave in line with model predictions. Yet they also make systematic errors suggestive of how a top-down Bayesian approach to learning might be complemented by a more bottomup feature-based approximate inference scheme, to best explain theory learning at an algorithmic level.",
"title": ""
},
{
"docid": "79cdd24d14816f45b539f31606a3d5ee",
"text": "The huge increase in type 2 diabetes is a burden worldwide. Many marketed compounds do not address relevant aspects of the disease; they may already compensate for defects in insulin secretion and insulin action, but loss of secreting cells (β-cell destruction), hyperglucagonemia, gastric emptying, enzyme activation/inhibition in insulin-sensitive cells, substitution or antagonizing of physiological hormones and pathways, finally leading to secondary complications of diabetes, are not sufficiently addressed. In addition, side effects for established therapies such as hypoglycemias and weight gain have to be diminished. At present, nearly 1000 compounds have been described, and approximately 180 of these are going to be developed (already in clinical studies), some of them directly influencing enzyme activity, influencing pathophysiological pathways, and some using G-protein-coupled receptors. In addition, immunological approaches and antisense strategies are going to be developed. Many compounds are derived from physiological compounds (hormones) aiming at improving their kinetics and selectivity, and others are chemical compounds that were obtained by screening for a newly identified target in the physiological or pathophysiological machinery. In some areas, great progress is observed (e.g., incretin area); in others, no great progress is obvious (e.g., glucokinase activators), and other areas are not recommended for further research. For all scientific areas, conclusions with respect to their impact on diabetes are given. Potential targets for which no chemical compound has yet been identified as a ligand (agonist or antagonist) are also described.",
"title": ""
},
{
"docid": "4acc30bade98c1257ab0a904f3695f3d",
"text": "Manoeuvre assistance is currently receiving increasing attention from the car industry. In this article we focus on the implementation of a reverse parking assistance and more precisely, a reverse parking manoeuvre planner. This paper is based on a manoeuvre planning technique presented in previous work and specialised in planning reverse parking manoeuvre. Since a key part of the previous method was not explicited, our goal in this paper is to present a practical and reproducible way to implement a reverse parking manoeuvre planner. Our implementation uses a database engine to search for the elementary movements that will make the complete parking manoeuvre. Our results have been successfully tested on a real platform: the CSIRO Autonomous Tractor.",
"title": ""
},
{
"docid": "045a56e333b1fe78677b8f4cc4c20ecc",
"text": "Swarm robotics is an approach to collective robotics that takes inspiration from the self-organized behaviors of social animals. Through simple rules and local interactions, swarm robotics aims at designing robust, scalable, and flexible collective behaviors for the coordination of large numbers of robots. In this paper, we analyze the literature from the point of view of swarm engineering: we focus mainly on ideas and concepts that contribute to the advancement of swarm robotics as an engineering field and that could be relevant to tackle real-world applications. Swarm engineering is an emerging discipline that aims at defining systematic and well founded procedures for modeling, designing, realizing, verifying, validating, operating, and maintaining a swarm robotics system. We propose two taxonomies: in the first taxonomy, we classify works that deal with design and analysis methods; in the second taxonomy, we classify works according to the collective behavior studied. We conclude with a discussion of the current limits of swarm robotics as an engineering discipline and with suggestions for future research directions.",
"title": ""
},
{
"docid": "c798c5c19dddb968f15f7bc7734ac2e4",
"text": "Information extraction relevant to the user queries is the challenging task in the ontology environment due to data varieties such as image, video, and text. The utilization of appropriate semantic entities enables the content-based search on annotated text. Recently, the automatic extraction of textual content in the audio-visual content is an advanced research area in a multimedia (MM) environment. The annotation of the video includes several tags and comments. This paper proposes the Collaborative Tagging (CT) model based on the Block Acquiring Page Segmentation (BAPS) method to retrieve the tag-based information. The information extraction in this model includes the Ontology-Based Information Extraction (OBIE) based on the single ontology utilization. The semantic annotation phase in the proposed work inserts the metadata with limited machine-readable terms. The insertion process is split into two major processes such as database uploading to server and extraction of images/web pages based on the results of semantic phase. Novel weight-based novel clustering algorithms are introduced to extract knowledge from MM contents. The ranking based on the weight value in the semantic annotation phase supports the image/web page retrieval process effectively. The comparative analysis of the proposed BAPS-CT with the existing information retrieval (IR) models regarding the average precision rate, time cost, and storage space rate assures the effectiveness of BAPS-CT in OMIR.",
"title": ""
},
{
"docid": "87835d75704f493639744abbf0119bdb",
"text": "Developers of cloud-scale applications face a difficult decision of which kind of storage to use, summarised by the CAP theorem. Currently the choice is between classical CP databases, which provide strong guarantees but are slow, expensive, and unavailable under partition, and NoSQL-style AP databases, which are fast and available, but too hard to program against. We present an alternative: Cure provides the highest level of guarantees that remains compatible with availability. These guarantees include: causal consistency (no ordering anomalies), atomicity (consistent multi-key updates), and support for high-level data types (developer friendly API) with safe resolution of concurrent updates (guaranteeing convergence). These guarantees minimise the anomalies caused by parallelism and distribution, thus facilitating the development of applications. This paper presents the protocols for highly available transactions, and an experimental evaluation showing that Cure is able to achieve scalability similar to eventually-consistent NoSQL databases, while providing stronger guarantees.",
"title": ""
},
{
"docid": "b9720d1350bf89c8a94bb30276329ce2",
"text": "Generative concept representations have three major advantages over discriminative ones: they can represent uncertainty, they support integration of learning and reasoning, and they are good for unsupervised and semi-supervised learning. We discuss probabilistic and generative deep learning, which generative concept representations are based on, and the use of variational autoencoders and generative adversarial networks for learning generative concept representations, particularly for concepts whose data are sequences, structured data or graphs.",
"title": ""
},
{
"docid": "259972cd20a1f763b07bef4619dc7f70",
"text": "This paper proposes an Interactive Chinese Character Learning System (ICCLS) based on pictorial evolution as an edutainment concept in computer-based learning of language. The advantage of the language origination itself is taken as a learning platform due to the complexity in Chinese language as compared to other types of languages. Users especially children enjoy more by utilize this learning system because they are able to memories the Chinese Character easily and understand more of the origin of the Chinese character under pleasurable learning environment, compares to traditional approach which children need to rote learning Chinese Character under un-pleasurable environment. Skeletonization is used as the representation of Chinese character and object with an animated pictograph evolution to facilitate the learning of the language. Shortest skeleton path matching technique is employed for fast and accurate matching in our implementation. User is required to either write a word or draw a simple 2D object in the input panel and the matched word and object will be displayed as well as the pictograph evolution to instill learning. The target of computer-based learning system is for pre-school children between 4 to 6 years old to learn Chinese characters in a flexible and entertaining manner besides utilizing visual and mind mapping strategy as learning methodology.",
"title": ""
},
{
"docid": "4161b52b832c0b80d0815b9e80a5dda0",
"text": "Machine Comprehension (MC) is a challenging task in Natural Language Processing field, which aims to guide the machine to comprehend a passage and answer the given question. Many existing approaches on MC task are suffering the inefficiency in some bottlenecks, such as insufficient lexical understanding, complex question-passage interaction, incorrect answer extraction and so on. In this paper, we address these problems from the viewpoint of how humans deal with reading tests in a scientific way. Specifically, we first propose a novel lexical gating mechanism to dynamically combine the words and characters representations. We then guide the machines to read in an interactive way with attention mechanism and memory network. Finally we add a checking layer to refine the answer for insurance. The extensive experiments on two popular datasets SQuAD and TriviaQA show that our method exceeds considerable performance than most stateof-the-art solutions at the time of submission.",
"title": ""
},
{
"docid": "abbafaaf6a93e2a49a692690d4107c9a",
"text": "Virtual teams have become a ubiquitous form of organizing, but the impact of social structures within and between teams on group performance remains understudied. This paper uses the case study of a massively multiplayer online game and server log data from over 10,000 players to examine the connection between group social capital (operationalized through guild network structure measures) and team effectiveness, given a variety of in-game social networks. Three different networks, social, task, and exchange networks, are compared and contrasted while controlling for group size, group age, and player experience. Team effectiveness is maximized at a roughly moderate level of closure across the networks, suggesting that this is the optimal level of the groupâs network density. Guilds with high brokerage, meaning they have diverse connections with other groups, were more effective in achievement-oriented networks. In addition, guilds with central leaders were more effective when they teamed up with other guild leaders.",
"title": ""
},
{
"docid": "69a6cfb649c3ccb22f7a4467f24520f3",
"text": "We propose a two-stage neural model to tackle question generation from documents. First, our model estimates the probability that word sequences in a document are ones that a human would pick when selecting candidate answers by training a neural key-phrase extractor on the answers in a question-answering corpus. Predicted key phrases then act as target answers and condition a sequence-tosequence question-generation model with a copy mechanism. Empirically, our keyphrase extraction model significantly outperforms an entity-tagging baseline and existing rule-based approaches. We further demonstrate that our question generation system formulates fluent, answerable questions from key phrases. This twostage system could be used to augment or generate reading comprehension datasets, which may be leveraged to improve machine reading systems or in educational settings.",
"title": ""
},
{
"docid": "8dc3ba4784ea55183e96b466937d050b",
"text": "One of the major problems that clinical neuropsychology has had in memory clinics is to apply ecological, easily administrable and sensitive tests that can make the diagnosis of dementia both precocious and reliable. Often the choice of the best neuropsychological test is hard because of a number of variables that can influence a subject’s performance. In this regard, tests originally devised to investigate cognitive functions in healthy adults are not often appropriate to analyze cognitive performance in old subjects with low education because of their intrinsically complex nature. In the present paper, we present normative values for the Rey–Osterrieth Complex Figure B Test (ROCF-B) a simple test that explores constructional praxis and visuospatial memory. We collected normative data of copy, immediate and delayed recall of the ROCF-B in a group of 346 normal Italian subjects above 40 years. A multiple regression analysis was performed to evaluate the potential effect of age, sex, and education on the three tasks administered to the subjects. Age and education had a significant effect on copying, immediate recall, and delayed recall as well as on the rate of forgetting. Correction grids and equivalent scores with cut-off values relative to each task are available. The availability of normative values can make the ROCF-B a valid instrument to assess non-verbal memory in adults and in the elderly for whom the commonly used ROCF-A is too demanding.",
"title": ""
},
{
"docid": "65eb604a2d45f29923ba24976130adc1",
"text": "The recognition of boundaries, e.g., between chorus and verse, is an important task in music structure analysis. The goal is to automatically detect such boundaries in audio signals so that the results are close to human annotation. In this work, we apply Convolutional Neural Networks to the task, trained directly on mel-scaled magnitude spectrograms. On a representative subset of the SALAMI structural annotation dataset, our method outperforms current techniques in terms of boundary retrieval F -measure at different temporal tolerances: We advance the state-of-the-art from 0.33 to 0.46 for tolerances of±0.5 seconds, and from 0.52 to 0.62 for tolerances of ±3 seconds. As the algorithm is trained on annotated audio data without the need of expert knowledge, we expect it to be easily adaptable to changed annotation guidelines and also to related tasks such as the detection of song transitions.",
"title": ""
},
{
"docid": "5dec0745ee631ec4ffbed6402093e35b",
"text": "BACKGROUND\nAdolescent breast hypertrophy can have long-term negative medical and psychological impacts. In select patients, breast reduction surgery is the best treatment. Unfortunately, many in the general and medical communities hold certain misconceptions regarding the indications and timing of this procedure. Several etiologies of adolescent breast hypertrophy, including juvenile gigantomastia, adolescent macromastia, and obesity-related breast hypertrophy, complicate the issue. It is our hope that this paper will clarify these misconceptions through a combined retrospective and literature review.\n\n\nMETHODS\nA retrospective review was conducted looking at adolescent females (≤18 years old) who had undergone bilateral breast reduction surgery. Their preoperative comorbidities, BMI, reduction volume, postoperative complications, and subjective satisfaction were recorded. In addition, a literature review was completed.\n\n\nRESULTS\n34 patients underwent bilateral breast reduction surgery. The average BMI was 29.5 kg/m(2). The average volume resected during bilateral breast reductions was 1820.9 g. Postoperative complications include dehiscence (9%), infection (3%), and poor scarring (6%). There were no cases of recurrence or need for repeat operation. Self-reported patient satisfaction was 97%. All patients described significant improvements in self body-image and participation in social activities. The literature review yielded 25 relevant reported articles, 24 of which are case studies.\n\n\nCONCLUSION\nReduction mammaplasty is safe and effective. It is the preferred treatment method for breast hypertrophy in the adolescent female and may be the only way to alleviate the increased social, psychological, and physical strain caused by this condition.",
"title": ""
},
{
"docid": "353fae3edb830aa86db682f28f64fd90",
"text": "The penetration of renewable resources in power system has been increasing in recent years. Many of these resources are uncontrollable and variable in nature, wind in particular, are relatively unpredictable. At high penetration levels, volatility of wind power production could cause problems for power system to maintain system security and reliability. One of the solutions being proposed to improve reliability and performance of the system is to integrate energy storage devices into the network. In this paper, unit commitment and dispatch schedule in power system with and without energy storage is examined for different level of wind penetration. Battery energy storage (BES) is considered as an alternative solution to store energy. The SCUC formulation and solution technique with wind power and BES is presented. The proposed formulation and model is validated with eight-bus system case study. Further, a discussion on the role of BES on locational pricing, economic, peak load shaving, and transmission congestion management had been made.",
"title": ""
},
{
"docid": "260e574e9108e05b98df7e4ed489e5fc",
"text": "Why are we not living yet with robots? If robots are not common everyday objects, it is maybe because we have looked for robotic applications without considering with sufficient attention what could be the experience of interacting with a robot. This article introduces the idea of a value profile, a notion intended to capture the general evolution of our experience with different kinds of objects. After discussing value profiles of commonly used objects, it offers a rapid outline of the challenging issues that must be investigated concerning immediate, short-term and long-term experience with robots. Beyond science-fiction classical archetypes, the picture emerging from this analysis is the one of versatile everyday robots, autonomously developing in interaction with humans, communicating with one another, changing shape and body in order to be adapted to their various context of use. To become everyday objects, robots will not necessary have to be useful, but they will have to be at the origins of radically new forms of experiences.",
"title": ""
},
{
"docid": "60ff841b0b13442c2afd5dd73178145a",
"text": "Detecting inferences in documents is critical for ensuring privacy when sharing information. In this paper, we propose a refined and practical model of inference detection using a reference corpus. Our model is inspired by association rule mining: inferences are based on word co-occurrences. Using the model and taking the Web as the reference corpus, we can find inferences and measure their strength through web-mining algorithms that leverage search engines such as Google or Yahoo!.\n Our model also includes the important case of private corpora, to model inference detection in enterprise settings in which there is a large private document repository. We find inferences in private corpora by using analogues of our Web-mining algorithms, relying on an index for the corpus rather than a Web search engine.\n We present results from two experiments. The first experiment demonstrates the performance of our techniques in identifying all the keywords that allow for inference of a particular topic (e.g. \"HIV\") with confidence above a certain threshold. The second experiment uses the public Enron e-mail dataset. We postulate a sensitive topic and use the Enron corpus and the Web together to find inferences for the topic.\n These experiments demonstrate that our techniques are practical, and that our model of inference based on word co-occurrence is well-suited to efficient inference detection.",
"title": ""
},
{
"docid": "f82a49434548e1aa09792877d84b296c",
"text": "Rats and mice have a tendency to interact more with a novel object than with a familiar object. This tendency has been used by behavioral pharmacologists and neuroscientists to study learning and memory. A popular protocol for such research is the object-recognition task. Animals are first placed in an apparatus and allowed to explore an object. After a prescribed interval, the animal is returned to the apparatus, which now contains the familiar object and a novel object. Object recognition is distinguished by more time spent interacting with the novel object. Although the exact processes that underlie this 'recognition memory' requires further elucidation, this method has been used to study mutant mice, aging deficits, early developmental influences, nootropic manipulations, teratological drug exposure and novelty seeking.",
"title": ""
},
{
"docid": "b42cd71b23c933f7b07d270edc1ce53b",
"text": "We propose a modification of the cost function of the Hopfield model whose salient features shine in its Taylor expansion and result in more than pairwise interactions with alternate signs, suggesting a unified framework for handling both with deep learning and network pruning. In our analysis, we heavily rely on the Hamilton-Jacobi correspondence relating the statistical model with a mechanical system. In this picture, our model is nothing but the relativistic extension of the original Hopfield model (whose cost function is a quadratic form in the Mattis magnetization which mimics the non-relativistic Hamiltonian for a free particle). We focus on the low-storage regime and solve the model analytically by taking advantage of the mechanical analogy, thus obtaining a complete characterization of the free energy and the associated self-consistency equations in the thermodynamic limit. On the numerical side, we test the performances of our proposal with MC simulations, showing that the stability of spurious states (limiting the capabilities of the standard Hebbian construction) is sensibly reduced due to presence of unlearning contributions in this extended framework.",
"title": ""
}
] |
scidocsrr
|
089a32ca1f138c1934cbdcd560a04a76
|
RelTextRank: An Open Source Framework for Building Relational Syntactic-Semantic Text Pair Representations
|
[
{
"docid": "50648acbc0ec1d4a8c3c86f2456f4d14",
"text": "We present DKPro Similarity, an open source framework for text similarity. Our goal is to provide a comprehensive repository of text similarity measures which are implemented using standardized interfaces. DKPro Similarity comprises a wide variety of measures ranging from ones based on simple n-grams and common subsequences to high-dimensional vector comparisons and structural, stylistic, and phonetic measures. In order to promote the reproducibility of experimental results and to provide reliable, permanent experimental conditions for future studies, DKPro Similarity additionally comes with a set of full-featured experimental setups which can be run out-of-the-box and be used for future systems to built upon.",
"title": ""
}
] |
[
{
"docid": "3a9d639e87d6163c18dd52ef5225b1a6",
"text": "A variety of approaches have been recently proposed to automatically infer users’ personality from their user generated content in social media. Approaches differ in terms of the machine learning algorithms and the feature sets used, type of utilized footprint, and the social media environment used to collect the data. In this paper, we perform a comparative analysis of state-of-the-art computational personality recognition methods on a varied set of social media ground truth data from Facebook, Twitter and YouTube. We answer three questions: (1) Should personality prediction be treated as a multi-label prediction task (i.e., all personality traits of a given user are predicted at once), or should each trait be identified separately? (2) Which predictive features work well across different on-line environments? and (3) What is the decay in accuracy when porting models trained in one social media environment to another?",
"title": ""
},
{
"docid": "32ae0b0c5b3ca3a7ede687872d631d29",
"text": "Background—The benefit of catheter-based reperfusion for acute myocardial infarction (MI) is limited by a 5% to 15% incidence of in-hospital major ischemic events, usually caused by infarct artery reocclusion, and a 20% to 40% need for repeat percutaneous or surgical revascularization. Platelets play a key role in the process of early infarct artery reocclusion, but inhibition of aggregation via the glycoprotein IIb/IIIa receptor has not been prospectively evaluated in the setting of acute MI. Methods and Results —Patients with acute MI of,12 hours’ duration were randomized, on a double-blind basis, to placebo or abciximab if they were deemed candidates for primary PTCA. The primary efficacy end point was death, reinfarction, or any (urgent or elective) target vessel revascularization (TVR) at 6 months by intention-to-treat (ITT) analysis. Other key prespecified end points were early (7 and 30 days) death, reinfarction, or urgent TVR. The baseline clinical and angiographic variables of the 483 (242 placebo and 241 abciximab) patients were balanced. There was no difference in the incidence of the primary 6-month end point (ITT analysis) in the 2 groups (28.1% and 28.2%, P50.97, of the placebo and abciximab patients, respectively). However, abciximab significantly reduced the incidence of death, reinfarction, or urgent TVR at all time points assessed (9.9% versus 3.3%, P50.003, at 7 days; 11.2% versus 5.8%, P50.03, at 30 days; and 17.8% versus 11.6%, P50.05, at 6 months). Analysis by actual treatment with PTCA and study drug demonstrated a considerable effect of abciximab with respect to death or reinfarction: 4.7% versus 1.4%, P50.047, at 7 days; 5.8% versus 3.2%, P50.20, at 30 days; and 12.0% versus 6.9%, P50.07, at 6 months. The need for unplanned, “bail-out” stenting was reduced by 42% in the abciximab group (20.4% versus 11.9%, P50.008). Major bleeding occurred significantly more frequently in the abciximab group (16.6% versus 9.5%, P 0.02), mostly at the arterial access site. There was no intracranial hemorrhage in either group. Conclusions—Aggressive platelet inhibition with abciximab during primary PTCA for acute MI yielded a substantial reduction in the acute (30-day) phase for death, reinfarction, and urgent target vessel revascularization. However, the bleeding rates were excessive, and the 6-month primary end point, which included elective revascularization, was not favorably affected.(Circulation. 1998;98:734-741.)",
"title": ""
},
{
"docid": "9422f8c85859aca10e7d2a673b0377ba",
"text": "Many adolescents are experiencing a reduction in sleep as a consequence of a variety of behavioral factors (e.g., academic workload, social and employment opportunities), even though scientific evidence suggests that the biological need for sleep increases during maturation. Consequently, the ability to effectively interact with peers while learning and processing novel information may be diminished in many sleepdeprived adolescents. Furthermore, sleep deprivation may account for reductions in cognitive efficiency in many children and adolescents with special education needs. In response to recognition of this potential problem by parents, educators, and scientists, some school districts have implemented delayed bus schedules and school start times to allow for increased sleep duration for high school students, in an effort to increase academic performance and decrease behavioral problems. The long-term effects of this change are yet to be determined; however, preliminary studies suggest that the short-term impact on learning and behavior has been beneficial. Thus, many parents, teachers, and scientists are supporting further consideration of this information to formulate policies that may maximize learning and developmental opportunities for children. Although changing school start times may be an effective method to combat sleep deprivation in most adolescents, some adolescents experience sleep deprivation and consequent diminished daytime performance because of common underlying sleep disorders (e.g., asthma or sleep apnea). In such cases, surgical, pharmaceutical, or respiratory therapy, or a combination of the three, interventions are required to restore normal sleep and daytime performance.",
"title": ""
},
{
"docid": "b17015641d4ae89767bedf105802d838",
"text": "We propose prefix constraints, a novel method to enforce constraints on target sentences in neural machine translation. It places a sequence of special tokens at the beginning of target sentence (target prefix), while side constraints (Sennrich et al., 2016) places a special token at the end of source sentence (source suffix). Prefix constraints can be predicted from source sentence jointly with target sentence, while side constraints must be provided by the user or predicted by some other methods. In both methods, special tokens are designed to encode arbitrary features on target-side or metatextual information. We show that prefix constraints are more flexible than side constraints and can be used to control the behavior of neural machine translation, in terms of output length, bidirectional decoding, domain adaptation, and unaligned target word generation.",
"title": ""
},
{
"docid": "215d3a65099a39f5489ef05a48dd7344",
"text": "In this paper an automated video surveillance system for human posture recognition using active contours and neural networks is presented. Localization of moving objects in the scene and human posture estimation are key features of the proposed architecture. The system architecture consists of five sequential modules that include the moving target detection process, two levels of segmentation process for interested element localization, features extraction of the object shape and a human posture classification system based on the radial basis functions neural network. Moving objects are detected by using an adaptive background subtraction method with an automatic background adaptation speed parameter and a new fast gradient vector flow snake algorithm for the elements segmentation is proposed. The developed system has been tested for the classification of three different postures such as standing, bending and squatting considering different kinds of feature. Results are promising and the architecture is also useful for the discrimination of human activities.",
"title": ""
},
{
"docid": "334cc321181669085ef1aa83e69ec475",
"text": "The energy required to crush rocks is proportional to the amount of new surface area that is created; hence, a very important percentage of the energy consumed to produce construction aggregates is spent in producing non-commercial fines. Data gathered during visits to quarries, an extensive survey and laboratory experiments are used to explore the role of mineralogy and fracture mode in fines production during the crushing of single aggregates and aggregates within granular packs. Results show that particle-level loading conditions determine the failure mode, resulting particle shape and fines generation. Point loading (both single particles and grains in loose packings) produces clean fractures and a small percentage of fines. In choked operations, high inter-particle coordination controls particle-level loading conditions, causesmicro-fractures on new aggregate faces and generates a large amount of fines. The generation of fines increases when shear is imposed during crushing. Aggregates produced in current crushing operations show the effects of multiple loading conditions and fracture modes. Results support the producers' empirical observations that the desired cubicity of aggregates is obtained at the expense of increased fines generation when standard equipment is used. © 2006 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "4fa1b8c7396e636216d0c1af0d1adf15",
"text": "Modern smartphone platforms have millions of apps, many of which request permissions to access private data and resources, like user accounts or location. While these smartphone platforms provide varying degrees of control over these permissions, the sheer number of decisions that users are expected to manage has been shown to be unrealistically high. Prior research has shown that users are often unaware of, if not uncomfortable with, many of their permission settings. Prior work also suggests that it is theoretically possible to predict many of the privacy settings a user would want by asking the user a small number of questions. However, this approach has neither been operationalized nor evaluated with actual users before. We report on a field study (n=72) in which we implemented and evaluated a Personalized Privacy Assistant (PPA) with participants using their own Android devices. The results of our study are encouraging. We find that 78.7% of the recommendations made by the PPA were adopted by users. Following initial recommendations on permission settings, participants were motivated to further review and modify their settings with daily “privacy nudges.” Despite showing substantial engagement with these nudges, participants only changed 5.1% of the settings previously adopted based on the PPA’s recommendations. The PPA and its recommendations were perceived as useful and usable. We discuss the implications of our results for mobile permission management and the design of personalized privacy assistant solutions.",
"title": ""
},
{
"docid": "f638a8691d79874f4440aa349e28cbfa",
"text": "Semantic segmentation requires a detailed labeling of image pixels by object category. Information derived from local image patches is necessary to describe the detailed shape of individual objects. However, this information is ambiguous and can result in noisy labels. Global inference of image content can instead capture the general semantic concepts present. We advocate that holistic inference of image concepts provides valuable information for detailed pixel labeling. We propose a generic framework to leverage holistic information in the form of a LabelBank for pixellevel segmentation. We show the ability of our framework to improve semantic segmentation performance in a variety of settings. We learn models for extracting a holistic LabelBank from visual cues, attributes, and/or textual descriptions. We demonstrate improvements in semantic segmentation accuracy on standard datasets across a range of state-of-the-art segmentation architectures and holistic inference approaches.",
"title": ""
},
{
"docid": "f698eb36fb75c6eae220cf02e41bdc44",
"text": "In this paper, an enhanced hierarchical control structure with multiple current loop damping schemes for voltage unbalance and harmonics compensation (UHC) in ac islanded microgrid is proposed to address unequal power sharing problems. The distributed generation (DG) is properly controlled to autonomously compensate voltage unbalance and harmonics while sharing the compensation effort for the real power, reactive power, and unbalance and harmonic powers. The proposed control system of the microgrid mainly consists of the positive sequence real and reactive power droop controllers, voltage and current controllers, the selective virtual impedance loop, the unbalance and harmonics compensators, the secondary control for voltage amplitude and frequency restoration, and the auxiliary control to achieve a high-voltage quality at the point of common coupling. By using the proposed unbalance and harmonics compensation, the auxiliary control, and the virtual positive/negative-sequence impedance loops at fundamental frequency, and the virtual variable harmonic impedance loop at harmonic frequencies, an accurate power sharing is achieved. Moreover, the low bandwidth communication (LBC) technique is adopted to send the compensation command of the secondary control and auxiliary control from the microgrid control center to the local controllers of DG unit. Finally, the hardware-in-the-loop results using dSPACE 1006 platform are presented to demonstrate the effectiveness of the proposed approach.",
"title": ""
},
{
"docid": "310076f963d9591a083edac1734c38cb",
"text": "The ganglion impar is an unpaired sympathetic structure located at the level of the sacrococcygeal joint. Blockade of this structure has been utilised to treat chronic perineal pain. Methods to achieve this block often involve the use of fluoroscopy which is associated with radiation exposure of staff involved in providing these procedures. We report a combined loss of resistance injection technique in association with ultrasound guidance to achieve the block. Ultrasound was used to identify the sacrococcygeal joint and a needle was shown to enter this region. Loss of resistance was then used to demonstrate that the needle tip lies in a presacral space. The implication being that any injectate would be located in an adequate position. The potential exception would be a neurodestructive procedure as radiographic control of needle tip in relation to the rectum should be performed and recorded. However when aiming for a diagnostic or local anaesthetic based treatment option we feel that this may become an accepted method.",
"title": ""
},
{
"docid": "107960c3c2e714804133f5918ac03b74",
"text": "This paper reports on a data-driven motion planning approach for interaction-aware, socially-compliant robot navigation among human agents. Autonomous mobile robots navigating in workspaces shared with human agents require motion planning techniques providing seamless integration and smooth navigation in such. Smooth integration in mixed scenarios calls for two abilities of the robot: predicting actions of others and acting predictably for them. The former requirement requests trainable models of agent behaviors in order to accurately forecast their actions in the future, taking into account their reaction on the robot's decisions. A human-like navigation style of the robot facilitates other agents-most likely not aware of the underlying planning technique applied-to predict the robot motion vice versa, resulting in smoother joint navigation. The approach presented in this paper is based on a feature-based maximum entropy model and is able to guide a robot in an unstructured, real-world environment. The model is trained to predict joint behavior of heterogeneous groups of agents from onboard data of a mobile platform. We evaluate the benefit of interaction-aware motion planning in a realistic public setting with a total distance traveled of over 4 km. Interestingly the motion models learned from human-human interaction did not hold for robot-human interaction, due to the high attention and interest of pedestrians in testing basic braking functionality of the robot.",
"title": ""
},
{
"docid": "5a4aa3f4ff68fab80d7809ff04a25a3b",
"text": "OBJECTIVE\nThe technique of short segment pedicle screw fixation (SSPSF) has been widely used for stabilization in thoracolumbar burst fractures (TLBFs), but some studies reported high rate of kyphosis recurrence or hardware failure. This study was to evaluate the results of SSPSF including fractured level and to find the risk factors concerned with the kyphosis recurrence in TLBFs.\n\n\nMETHODS\nThis study included 42 patients, including 25 males and 17 females, who underwent SSPSF for stabilization of TLBFs between January 2003 and December 2010. For radiologic assessments, Cobb angle (CA), vertebral wedge angle (VWA), vertebral body compression ratio (VBCR), and difference between VWA and Cobb angle (DbVC) were measured. The relationships between kyphosis recurrence and radiologic parameters or demographic features were investigated. Frankel classification and low back outcome score (LBOS) were used for assessment of clinical outcomes.\n\n\nRESULTS\nThe mean follow-up period was 38.6 months. CA, VWA, and VBCR were improved after SSPSF, and these parameters were well maintained at the final follow-up with minimal degree of correction loss. Kyphosis recurrence showed a significant increase in patients with Denis burst type A, load-sharing classification (LSC) score >6 or DbVC >6 (p<0.05). There were no patients who worsened to clinical outcome, and there was no significant correlation between kyphosis recurrence and clinical outcome in this series.\n\n\nCONCLUSION\nSSPSF including the fractured vertebra is an effective surgical method for restoration and maintenance of vertebral column stability in TLBFs. However, kyphosis recurrence was significantly associated with Denis burst type A fracture, LSC score >6, or DbVC >6.",
"title": ""
},
{
"docid": "32b2cd6b63c6fc4de5b086772ef9d319",
"text": "Link prediction for knowledge graphs is the task of predicting missing relationships between entities. Previous work on link prediction has focused on shallow, fast models which can scale to large knowledge graphs. However, these models learn less expressive features than deep, multi-layer models – which potentially limits performance. In this work we introduce ConvE, a multi-layer convolutional network model for link prediction, and report state-of-the-art results for several established datasets. We also show that the model is highly parameter efficient, yielding the same performance as DistMult and R-GCN with 8x and 17x fewer parameters. Analysis of our model suggests that it is particularly effective at modelling nodes with high indegree – which are common in highlyconnected, complex knowledge graphs such as Freebase and YAGO3. In addition, it has been noted that the WN18 and FB15k datasets suffer from test set leakage, due to inverse relations from the training set being present in the test set – however, the extent of this issue has so far not been quantified. We find this problem to be severe: a simple rule-based model can achieve state-of-the-art results on both WN18 and FB15k. To ensure that models are evaluated on datasets where simply exploiting inverse relations cannot yield competitive results, we investigate and validate several commonly used datasets – deriving robust variants where necessary. We then perform experiments on these robust datasets for our own and several previously proposed models, and find that ConvE achieves state-of-the-art Mean Reciprocal Rank across all datasets.",
"title": ""
},
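The ConvE scoring step described in the record above (reshape the subject and relation embeddings to 2D, convolve, project back to the embedding dimension, and take a dot product with every entity embedding) can be sketched compactly. This is a hedged, minimal reconstruction, not the authors' reference implementation: the class name, embedding sizes, filter count, and the omission of dropout, batch normalization, and the output bias are my simplifications.

```python
import torch
import torch.nn as nn

class ConvEScorer(nn.Module):
    """Minimal ConvE-style scorer: reshape to 2D, convolve, project, dot with all entities."""
    def __init__(self, n_entities: int, n_relations: int, dim: int = 200, h: int = 10, w: int = 20):
        super().__init__()
        assert h * w == dim
        self.h, self.w = h, w
        self.ent = nn.Embedding(n_entities, dim)
        self.rel = nn.Embedding(n_relations, dim)
        self.conv = nn.Conv2d(1, 32, kernel_size=3)
        conv_out = 32 * (2 * h - 2) * (w - 2)        # size after stacking subject and relation maps
        self.fc = nn.Linear(conv_out, dim)

    def forward(self, subj_idx: torch.Tensor, rel_idx: torch.Tensor) -> torch.Tensor:
        e_s = self.ent(subj_idx).view(-1, 1, self.h, self.w)
        e_r = self.rel(rel_idx).view(-1, 1, self.h, self.w)
        x = torch.cat([e_s, e_r], dim=2)             # stack the two 2D "images"
        x = torch.relu(self.conv(x)).flatten(1)
        x = torch.relu(self.fc(x))
        return x @ self.ent.weight.t()               # one score per candidate object entity

scores = ConvEScorer(n_entities=1000, n_relations=50)(torch.tensor([0]), torch.tensor([3]))
print(scores.shape)                                  # torch.Size([1, 1000])
```

Scoring against all entities in one matrix product is what makes the 1-N training scheme mentioned in the ConvE literature cheap; the parameter count stays low because the only large matrices are the embedding tables themselves.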
{
"docid": "e13d935c4950323a589dce7fd5bce067",
"text": "Worker reliability is a longstanding issue in crowdsourcing, and the automatic discovery of high quality workers is an important practical problem. Most previous work on this problem mainly focuses on estimating the quality of each individual worker jointly with the true answer of each task. However, in practice, for some tasks, worker quality could be associated with some explicit characteristics of the worker, such as education level, major and age. So the following question arises: how do we automatically discover related worker attributes for a given task, and further utilize the findings to improve data quality? In this paper, we propose a general crowd targeting framework that can automatically discover, for a given task, if any group of workers based on their attributes have higher quality on average; and target such groups, if they exist, for future work on the same task. Our crowd targeting framework is complementary to traditional worker quality estimation approaches. Furthermore, an advantage of our framework is that it is more budget efficient because we are able to target potentially good workers before they actually do the task. Experiments on real datasets show that the accuracy of final prediction can be improved significantly for the same budget (or even less budget in some cases). Our framework can be applied to many real word tasks and can be easily integrated in current crowdsourcing platforms.",
"title": ""
},
{
"docid": "b0a206b80b63c509cbad8e60701a3760",
"text": "For most businesses there are costs involved when acquiring new customers and having longer relationships with customers is therefore often more profitable. Predicting if an individual is prone to leave the business is then a useful tool to help any company take actions to mitigate this cost. The event when a person ends their relationship with a business is called attrition or churn. Predicting peoples actions is however hard and many different factors can affect their choices. This paper investigates different machine learning methods for predicting attrition in the customer base of a bank. Four different methods are chosen based on the results they have shown in previous research and these are then tested and compared to find which works best for predicting these events. Four different datasets from two different products and with two different applications are created from real world data from a European bank. All methods are trained and tested on each dataset. The results of the tests are then evaluated and compared to find what works best. The methods found in previous research to most reliably achieve good results in predicting churn in banking customers are the Support Vector Machine, Neural Network, Balanced Random Forest, and the Weighted Random Forest. The results show that the Balanced Random Forest achieves the best results with an average AUC of 0.698 and an average F-score of 0.376. The accuracy and precision of the model are concluded to not be enough to make definite decisions but can be used with other factors such as profitability estimations to improve the effectiveness of any actions taken to prevent the negative effects of churn.",
"title": ""
},
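The record above reports that a balanced random forest performed best for churn prediction. As a loose illustration of that kind of setup, and nothing more, the sketch below uses synthetic data and sklearn's class_weight="balanced_subsample" as a stand-in for per-tree balanced bootstrap sampling; none of this reproduces the bank's data, features, or exact models.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic, imbalanced stand-in for a churn dataset (roughly 5% churners).
X, y = make_classification(n_samples=20000, n_features=20, weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# class_weight="balanced_subsample" reweights classes within each bootstrap sample,
# approximating the balanced-random-forest idea of equalizing classes per tree.
clf = RandomForestClassifier(n_estimators=300, class_weight="balanced_subsample", random_state=0)
clf.fit(X_tr, y_tr)

proba = clf.predict_proba(X_te)[:, 1]
print("AUC:", roc_auc_score(y_te, proba))
print("F1 :", f1_score(y_te, (proba > 0.5).astype(int)))
```

As the abstract notes, AUC and F-score at this level are better used for ranking customers by churn risk than for hard yes/no decisions.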
{
"docid": "a5e52fc842c9b1780282efc071d87b0e",
"text": "The highly influential framework of conceptual spaces provides a geometric way of representing knowledge. Instances are represented by points and concepts are represented by regions in a (potentially) high-dimensional space. Based on our recent formalization, we present a comprehensive implementation of the conceptual spaces framework that is not only capable of representing concepts with inter-domain correlations, but that also offers a variety of operations on these concepts.",
"title": ""
},
{
"docid": "89db58eb8793baf03bb86d382d76326e",
"text": "Embedded phishing exercises, which send test phishing emails, are utilized by organizations to reduce the susceptibility of its employees to this type of attack. Research studies seeking to evaluate the effectiveness of these exercises have generally been limited by small sample sizes. These studies have not been able to measure possible factors that might bias results. As a result, companies have had to create their own design and evaluation methods, with no framework to guide their efforts. Lacking such guidelines, it can often be difficult to determine whether these types of exercises are truly effective, and if reported results are statistically reliable. In this paper, we conduct a systematic analysis of data from a large real world embedded phishing exercise that involved 19,180 participants from a single organization, and utilized 115,080 test phishing emails. The first part of our study focuses on developing methodologies to correct some sources of bias, enabling sounder evaluations of the efficacy of embedded phishing exercises and training. We then use these methods to perform an analysis of the effectiveness of this embedded phishing exercise, and through our analysis, identify how the design of these exercises might be improved.",
"title": ""
},
{
"docid": "d3d58715498167d3fbf863b9f6423fcd",
"text": "In this paper, we focus on online detection and isolation of erroneous values reported by medical wireless sensors. We propose a lightweight approach for online anomaly detection in collected data, able to raise alarms only when patients enter in emergency situation and to discard faulty measurements. The proposed approach is based on Haar wavelet decomposition and Hampel filter for spatial analysis, and on boxplot for temporal analysis. Our objective is to reduce false alarms resulted from unreliable measurements. We apply our proposed approach on real physiological data set. Our experimental results prove the effectiveness of our approach to achieve good detection accuracy with low false alarm rate.",
"title": ""
},
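The Hampel-filter and boxplot rules named in the record above are easy to state concretely. Below is a minimal numpy sketch with window sizes and thresholds chosen for illustration rather than taken from the paper; the Haar wavelet stage is omitted.

```python
import numpy as np

def hampel_outliers(x, window=5, n_sigmas=3.0):
    """Flag points deviating from the local median by more than n_sigmas robust stds."""
    x = np.asarray(x, dtype=float)
    flags = np.zeros(len(x), dtype=bool)
    k = 1.4826                                   # scale factor relating MAD to the std
    for i in range(len(x)):
        lo, hi = max(0, i - window), min(len(x), i + window + 1)
        med = np.median(x[lo:hi])
        mad = k * np.median(np.abs(x[lo:hi] - med))
        flags[i] = mad > 0 and abs(x[i] - med) > n_sigmas * mad
    return flags

def boxplot_outliers(x, k=1.5):
    """Classic Tukey boxplot rule: outside [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    return (x < q1 - k * iqr) | (x > q3 + k * iqr)

hr = np.array([72, 74, 73, 75, 180, 74, 73, 72, 71, 76], dtype=float)  # one faulty reading
print(hampel_outliers(hr), boxplot_outliers(hr))
```

Both rules flag only the spurious 180 bpm sample in this toy series, which is the behavior the paper relies on to separate sensor faults from genuine physiological change.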
{
"docid": "e9353d465c5dfd8af684d4e09407ea28",
"text": "An overview of the main contributions that introduced the use of nonresonating modes for the realization of pseudoelliptic narrowband waveguide filters is presented. The following are also highlighted: early work using asymmetric irises; oversized H-plane cavity; transverse magnetic cavity; TM dual-mode cavity; and multiple cavity filters.",
"title": ""
},
{
"docid": "ca8c13c0a7d637234460f20caaa15df5",
"text": "This paper presents a nonlinear control law for an automobile to autonomously track a trajectory, provided in real-time, on rapidly varying, off-road terrain. Existing methods can suffer from a lack of global stability, a lack of tracking accuracy, or a dependence on smooth road surfaces, any one of which could lead to the loss of the vehicle in autonomous off-road driving. This work treats automobile trajectory tracking in a new manner, by considering the orientation of the front wheels - not the vehicle's body - with respect to the desired trajectory, enabling collocated control of the system. A steering control law is designed using the kinematic equations of motion, for which global asymptotic stability is proven. This control law is then augmented to handle the dynamics of pneumatic tires and of the servo-actuated steering wheel. To control vehicle speed, the brake and throttle are actuated by a switching proportional integral (PI) controller. The complete control system consumes a negligible fraction of a computer's resources. It was implemented on a Volkswagen Touareg, \"Stanley\", the Stanford Racing Team's entry in the DARPA Grand Challenge 2005, a 132 mi autonomous off-road race. Experimental results from Stanley demonstrate the ability of the controller to track trajectories between obstacles, over steep and wavy terrain, through deep mud puddles, and along cliff edges, with a typical root mean square (RMS) crosstrack error of under 0.1 m. In the DARPA National Qualification Event 2005, Stanley was the only vehicle out of 40 competitors to not hit an obstacle or miss a gate, and in the DARPA Grand Challenge 2005 Stanley had the fastest course completion time.",
"title": ""
}
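The steering law behind the controller described in the record above is commonly written as the heading error plus an arctangent term in the cross-track error scaled by speed. A minimal sketch follows, with an illustrative gain and steering limit rather than Stanley's tuned values.

```python
import math

def stanley_steering(heading_error: float, crosstrack_error: float,
                     speed: float, k: float = 2.5,
                     max_steer: float = math.radians(30)) -> float:
    """Front-wheel steering angle [rad] from heading error [rad], cross-track error [m], speed [m/s]."""
    delta = heading_error + math.atan2(k * crosstrack_error, max(speed, 0.1))
    return max(-max_steer, min(max_steer, delta))

# 0.2 m of cross-track error, 5 degrees of heading error, driving at 10 m/s.
print(math.degrees(stanley_steering(math.radians(5), 0.2, 10.0)))
```

Scaling the cross-track term by speed is what keeps the law stable at high velocity while still pulling the front wheels firmly onto the path at low speed.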
] |
scidocsrr
|
ec43b1b7a7ead9699dd1ffe663e8e08c
|
Active Learning to Rank using Pairwise Supervision
|
[
{
"docid": "14838947ee3b95c24daba5a293067730",
"text": "In this paper we address the issue of learning to rank for document retrieval. In the task, a model is automatically created with some training data and then is utilized for ranking of documents. The goodness of a model is usually evaluated with performance measures such as MAP (Mean Average Precision) and NDCG (Normalized Discounted Cumulative Gain). Ideally a learning algorithm would train a ranking model that could directly optimize the performance measures with respect to the training data. Existing methods, however, are only able to train ranking models by minimizing loss functions loosely related to the performance measures. For example, Ranking SVM and RankBoost train ranking models by minimizing classification errors on instance pairs. To deal with the problem, we propose a novel learning algorithm within the framework of boosting, which can minimize a loss function directly defined on the performance measures. Our algorithm, referred to as AdaRank, repeatedly constructs 'weak rankers' on the basis of reweighted training data and finally linearly combines the weak rankers for making ranking predictions. We prove that the training process of AdaRank is exactly that of enhancing the performance measure used. Experimental results on four benchmark datasets show that AdaRank significantly outperforms the baseline methods of BM25, Ranking SVM, and RankBoost.",
"title": ""
},
{
"docid": "f1a162f64838817d78e97a3c3087fae4",
"text": "Most literature on support vector machines (SVMs) concentrates on the dual optimization problem. In this letter, we point out that the primal problem can also be solved efficiently for both linear and nonlinear SVMs and that there is no reason for ignoring this possibility. On the contrary, from the primal point of view, new families of algorithms for large-scale SVM training can be investigated.",
"title": ""
}
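The point of the record above is that the primal SVM problem can be attacked directly. One simple way to do that for a linear SVM is subgradient descent on the regularized hinge loss; the sketch below is only illustrative (learning rate, regularization strength, and the toy data are assumptions, not the letter's algorithms).

```python
import numpy as np

def primal_linear_svm(X, y, lam=1e-2, epochs=50, lr=0.1):
    """Subgradient descent on the primal objective lam/2 * ||w||^2 + mean hinge loss."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        active = margins < 1                          # points violating the margin
        grad_w = lam * w - (y[active][:, None] * X[active]).sum(axis=0) / n
        grad_b = -y[active].sum() / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y = np.array([-1] * 100 + [1] * 100)
w, b = primal_linear_svm(X, y)
print("training accuracy:", np.mean(np.sign(X @ w + b) == y))
```

Working in the primal keeps the number of variables equal to the feature dimension rather than the number of training points, which is exactly the scaling argument the abstract makes for large training sets.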
] |
[
{
"docid": "252b8722acd43c9f61a6b10019715392",
"text": "Semantic segmentation is an important step of visual scene understanding for autonomous driving. Recently, Convolutional Neural Network (CNN) based methods have successfully applied in semantic segmentation using narrow-angle or even wide-angle pinhole camera. However, in urban traffic environments, autonomous vehicles need wider field of view to perceive surrounding things and stuff, especially at intersections. This paper describes a CNN-based semantic segmentation solution using fisheye camera which covers a large field of view. To handle the complex scene in the fisheye image, Overlapping Pyramid Pooling (OPP) module is proposed to explore local, global and pyramid local region context information. Based on the OPP module, a network structure called OPP-net is proposed for semantic segmentation. The net is trained and evaluated on a fisheye image dataset for semantic segmentation which is generated from an existing dataset of urban traffic scenes. In addition, zoom augmentation, a novel data augmentation policy specially designed for fisheye image, is proposed to improve the net's generalization performance. Experiments demonstrate the outstanding performance of the OPP-net for urban traffic scenes and the effectiveness of the zoom augmentation.",
"title": ""
},
{
"docid": "b5097e718754c02cddd02a1c147c6398",
"text": "Semi-automatic parking system is a driver convenience system automating steering control required during parking operation. This paper proposes novel monocular-vision based target parking-slot recognition by recognizing parking-slot markings when driver designates a seed-point inside the target parking-slot with touch screen. Proposed method compensates the distortion of fisheye lens and constructs a bird’s eye view image using homography. Because adjacent vehicles are projected along the outward direction from camera in the bird’s eye view image, if marking line-segment distinguishing parking-slots from roadway and front-ends of marking linesegments dividing parking-slots are observed, proposed method successfully recognizes the target parking-slot marking. Directional intensity gradient, utilizing the width of marking line-segment and the direction of seed-point with respect to camera position as a prior knowledge, can detect marking linesegments irrespective of noise and illumination variation. Making efficient use of the structure of parking-slot markings in the bird’s eye view image, proposed method simply recognizes the target parking-slot marking. It is validated by experiments that proposed method can successfully recognize target parkingslot under various situations and illumination conditions.",
"title": ""
},
{
"docid": "cc17b3548d2224b15090ead8c398f808",
"text": "Malaria is a global health problem that threatens 300–500 million people and kills more than one million people annually. Disease control is hampered by the occurrence of multi-drug-resistant strains of the malaria parasite Plasmodium falciparum. Synthetic antimalarial drugs and malarial vaccines are currently being developed, but their efficacy against malaria awaits rigorous clinical testing. Artemisinin, a sesquiterpene lactone endoperoxide extracted from Artemisia annua L (family Asteraceae; commonly known as sweet wormwood), is highly effective against multi-drug-resistant Plasmodium spp., but is in short supply and unaffordable to most malaria sufferers. Although total synthesis of artemisinin is difficult and costly, the semi-synthesis of artemisinin or any derivative from microbially sourced artemisinic acid, its immediate precursor, could be a cost-effective, environmentally friendly, high-quality and reliable source of artemisinin. Here we report the engineering of Saccharomyces cerevisiae to produce high titres (up to 100 mg l-1) of artemisinic acid using an engineered mevalonate pathway, amorphadiene synthase, and a novel cytochrome P450 monooxygenase (CYP71AV1) from A. annua that performs a three-step oxidation of amorpha-4,11-diene to artemisinic acid. The synthesized artemisinic acid is transported out and retained on the outside of the engineered yeast, meaning that a simple and inexpensive purification process can be used to obtain the desired product. Although the engineered yeast is already capable of producing artemisinic acid at a significantly higher specific productivity than A. annua, yield optimization and industrial scale-up will be required to raise artemisinic acid production to a level high enough to reduce artemisinin combination therapies to significantly below their current prices.",
"title": ""
},
{
"docid": "b4978b2fbefc79fba6e69ad8fd55ebf9",
"text": "This paper proposes an approach based on Least Squares Suppo rt Vect r Machines (LS-SVMs) for solving second order parti al differential equations (PDEs) with variable coe fficients. Contrary to most existing techniques, the proposed m thod provides a closed form approximate solution. The optimal representat ion of the solution is obtained in the primal-dual setting. T he model is built by incorporating the initial /boundary conditions as constraints of an optimization prob lem. The developed method is well suited for problems involving singular, variable and const a t coefficients as well as problems with irregular geometrical domai ns. Numerical results for linear and nonlinear PDEs demonstrat e he efficiency of the proposed method over existing methods.",
"title": ""
},
{
"docid": "9516cf7ea68b16380669d47d6aee472b",
"text": "In this paper, we survey the work that has been done in threshold concepts in computing since they were first discussed in 2005: concepts that have been identified, methodologies used, and issues discussed. Based on this survey, we then identify some promising unexplored areas for future work.",
"title": ""
},
{
"docid": "c9fc05c0587a15a63b325ef6095aa0cb",
"text": "Background:Recent epidemiological results suggested an increase of cancer risk after receiving computed tomography (CT) scans in childhood or adolescence. Their interpretation is questioned due to the lack of information about the reasons for examination. Our objective was to estimate the cancer risk related to childhood CT scans, and examine how cancer-predisposing factors (PFs) affect assessment of the radiation-related risk.Methods:The cohort included 67 274 children who had a first scan before the age of 10 years from 2000 to 2010 in 23 French departments. Cumulative X-rays doses were estimated from radiology protocols. Cancer incidence was retrieved through the national registry of childhood cancers; PF from discharge diagnoses.Results:During a mean follow-up of 4 years, 27 cases of tumours of the central nervous system, 25 of leukaemia and 21 of lymphoma were diagnosed; 32% of them among children with PF. Specific patterns of CT exposures were observed according to PFs. Adjustment for PF reduced the excess risk estimates related to cumulative doses from CT scans. No significant excess risk was observed in relation to CT exposures.Conclusions:This study suggests that the indication for examinations, whether suspected cancer or PF management, should be considered to avoid overestimation of the cancer risks associated with CT scans.",
"title": ""
},
{
"docid": "807564cfc2e90dee21a3efd8dc754ba3",
"text": "The present paper reports two studies designed to test the Dualistic Model of Passion with regard to performance attainment in two fields of expertise. Results from both studies supported the Passion Model. Harmonious passion was shown to be a positive source of activity investment in that it directly predicted deliberate practice (Study 1) and positively predicted mastery goals which in turn positively predicted deliberate practice (Study 2). In turn, deliberate practice had a direct positive impact on performance attainment. Obsessive passion was shown to be a mixed source of activity investment. While it directly predicted deliberate practice (Study 1) and directly predicted mastery goals (which predicted deliberate practice), it also predicted performance-avoidance and performance-approach goals, with the former having a tendency to facilitate performance directly, and the latter to directly negatively impact on performance attainment (Study 2). Finally, harmonious passion was also positively related to subjective well-being (SWB) in both studies, while obsessive passion was either unrelated (Study 1) or negatively related to SWB (Study 2). The conceptual and applied implications of the differential influences of harmonious and obsessive passion in performance are discussed.",
"title": ""
},
{
"docid": "ce404452a843d18e4673d0dcf6cf01b1",
"text": "We propose a formal mathematical model for sparse representations in neocortex based on a neuron model and associated operations. The design of our model neuron is inspired by recent experimental findings on active dendritic processing and NMDA spikes in pyramidal neurons. We derive a number of scaling laws that characterize the accuracy of such neurons in detecting activation patterns in a neuronal population under adverse conditions. We introduce the union property which shows that synapses for multiple patterns can be randomly mixed together within a segment and still lead to highly accurate recognition. We describe simulation results that provide overall insight into sparse representations as well as two primary results. First we show that pattern recognition by a neuron can be extremely accurate and robust with high dimensional sparse inputs even when using a tiny number of synapses to recognize large patterns. Second, equations representing recognition accuracy of a dendrite predict optimal NMDA spiking thresholds under a generous set of assumptions. The prediction tightly matches NMDA spiking thresholds measured in the literature. Our model neuron matches many of the known properties of pyramidal neurons. As such the theory provides a unified and practical mathematical framework for understanding the benefits and limits of sparse representations in cortical networks.",
"title": ""
},
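The scaling laws mentioned in the record above boil down to hypergeometric-style counting: the chance that a random sparse activity pattern overlaps a dendritic segment's synapses at or above the spiking threshold. A worked example follows, with illustrative numbers (the population size, activity level, synapse count, and threshold are assumptions, not the paper's exact parameters).

```python
from math import comb

def false_match_probability(n: int, a: int, s: int, theta: int) -> float:
    """P(a random pattern with a of n cells active overlaps a segment's s synapses in >= theta places)."""
    total = comb(n, a)
    hits = sum(comb(s, b) * comb(n - s, a - b) for b in range(theta, min(s, a) + 1))
    return hits / total

# 2048 cells with 40 active, a segment sampling 20 synapses, and a spike threshold of 10.
print(false_match_probability(n=2048, a=40, s=20, theta=10))
```

The probability comes out vanishingly small, which is the quantitative sense in which a tiny number of synapses can recognize large patterns robustly.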
{
"docid": "44b7ed6c8297b6f269c8b872b0fd6266",
"text": "vii",
"title": ""
},
{
"docid": "b8b2d68955d6ed917900d30e4e15f71e",
"text": "Due to the explosive growth of wireless devices and wireless traffic, the spectrum scarcity problem is becoming more urgent in numerous Radio Frequency (RF) systems. At the same time, many studies have shown that spectrum resources allocated to various existing RF systems are largely underutilized. As a potential solution to this spectrum scarcity problem, spectrum sharing among multiple, potentially dissimilar RF systems has been proposed. However, such spectrum sharing solutions are challenging to develop due to the lack of efficient coordination schemes and potentially different PHY/MAC properties. In this paper, we investigate existing spectrum sharing methods facilitating coexistence of various RF systems. The cognitive radio technique, which has been the subject of various surveys, constitutes a subset of our wider scope. We study more general coexistence scenarios and methods such as coexistence of communication systems with similar priorities, utilizing similar or different protocols or standards, as well as the coexistence of communication and non-communication systems using the same spectral resources. Finally, we explore open research issues on the spectrum sharing methods as well as potential approaches to resolving these issues. © 2016 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "b2c299e13eff8776375c14357019d82e",
"text": "This paper is focused on the application of complementary split-ring resonators (CSRRs) to the suppression of the common (even) mode in microstrip differential transmission lines. By periodically and symmetrically etching CSRRs in the ground plane of microstrip differential lines, the common mode can be efficiently suppressed over a wide band whereas the differential signals are not affected. Throughout the paper, we present and discuss the principle for the selective common-mode suppression, the circuit model of the structure (including the models under even- and odd-mode excitation), the strategies for bandwidth enhancement of the rejected common mode, and a methodology for common-mode filter design. On the basis of the dispersion relation for the common mode, it is shown that the maximum achievable rejection bandwidth can be estimated. Finally, theory is validated by designing and measuring a differential line and a balanced bandpass filter with common-mode suppression, where double-slit CSRRs (DS-CSRRs) are used in order to enhance the common-mode rejection bandwidth. Due to the presence of DS-CSRRs, the balanced filter exhibits more than 40 dB of common-mode rejection within a 34% bandwidth around the filter pass band.",
"title": ""
},
{
"docid": "c043e7a5d5120f5a06ef6decc06c184a",
"text": "Entities are further categorized into those that are the object of the measurement (‘assayed components’) and those, if any, that are subjected to targeted and controlled experimental interventions (‘perturbations/interventions’). These two core categories are related to the concepts ‘perturbagen’ and ‘target’ in the Bioassay Ontology (BAO2) and capture an important aspect of the design of experiments where multiple conditions are compared with each other in order to test whether a given perturbation (e.g., the presence or absence of a drug), causes a given response (e.g., a change in gene expression). Additional categories include ‘experimental variables’, ‘reporters’, ‘normalizing components’ and generic ‘biological components’ (Supplementary Data). We developed a web-based tool with a graphical user interface that allows computer-assisted manual extraction of the metadata model described above at the level of individual figure panels based on the information provided in figure legends and in the images. Files that contain raw or minimally processed data, when available, can furthermore be linked or uploaded and attached to the figure. As proof of principle, we have curated a compendium of over 18,000 experiments published across 23 journals. From the 721 papers processed, 381 papers were related to the field of autophagy, and the rest were annotated during the publication process of accepted manuscripts at four partner molecular biology journals. Both sets of papers were processed identically. Out of the 18,157 experimental panels annotated, 77% included at least one ‘intervention/assayed component’ pair, and this supported the broad applicability of the perturbation-centric SourceData model. We provide a breakdown of entities by categories in Supplementary Figure 1. We note that the presence of a perturbation is not a requirement for the model. As such, the SourceData model is also applicable in cases such as correlative observations. The SourceData model is independent of data type (i.e., image-based or numerical values) and is well suited for cell and molecular biology experiments. 77% of the processed entities were explicitly mentioned in the text of the legend. For the remaining entities, curators added the terms based on the labels directly displayed on the image of the figure. SourceData: a semantic platform for curating and searching figures",
"title": ""
},
{
"docid": "e0f6878845e02e966908311e6818dbe9",
"text": "Smart Home is one of emerging application domains of The Internet of things which following the computer and Internet. Although home automation technologies have been commercially available already, they are basically designed for signal-family smart homes with a high cost, and along with the constant growth of digital appliances in smart home, we merge smart home into smart-home-oriented Cloud to release the stress on the smart home system which mostly installs application software on their local computers. In this paper, we present a framework for Cloud-based smart home for enabling home automation, household mobility and interconnection which easy extensible and fit for future demands. Through subscribing services of the Cloud, smart home consumers can easily enjoy smart home services without purchasing computers which owns strong power and huge storage. We focus on the overall Smart Home framework, the features and architecture of the components of Smart Home, the interaction and cooperation between them in detail.",
"title": ""
},
{
"docid": "cccecb08c92f8bcec4a359373a20afcb",
"text": "To solve the problem of the false matching and low robustness in detecting copy-move forgeries, a new method was proposed in this study. It involves the following steps: first, establish a Gaussian scale space; second, extract the orientated FAST key points and the ORB features in each scale space; thirdly, revert the coordinates of the orientated FAST key points to the original image and match the ORB features between every two different key points using the hamming distance; finally, remove the false matched key points using the RANSAC algorithm and then detect the resulting copy-move regions. The experimental results indicate that the new algorithm is effective for geometric transformation, such as scaling and rotation, and exhibits high robustness even when an image is distorted by Gaussian blur, Gaussian white noise and JPEG recompression; the new algorithm even has great detection on the type of hiding object forgery.",
"title": ""
},
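The matching-and-filtering core of the pipeline above maps closely onto standard OpenCV calls. The following is a hedged sketch only: OpenCV (4.x) is an assumed dependency, the ratio and distance thresholds are illustrative, and the paper's Gaussian scale-space construction and final region localization are not reproduced.

```python
import cv2
import numpy as np

def copy_move_matches(image_path: str, ratio: float = 0.75, min_dist: float = 40.0):
    """Return RANSAC-filtered keypoint pairs suggesting a duplicated region within one image."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create(nfeatures=5000)
    kps, desc = orb.detectAndCompute(img, None)
    if desc is None or len(kps) < 4:
        return []

    # Match the image against itself with Hamming distance; the best hit is the trivial
    # self-match, so keep the second-best and ratio-test it against the third.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    src, dst = [], []
    for ms in matcher.knnMatch(desc, desc, k=3):
        if len(ms) < 3:
            continue
        _, m1, m2 = ms
        p, q = np.array(kps[m1.queryIdx].pt), np.array(kps[m1.trainIdx].pt)
        if m1.distance < ratio * m2.distance and np.linalg.norm(p - q) > min_dist:
            src.append(p)
            dst.append(q)
    if len(src) < 4:
        return []

    # RANSAC keeps only geometrically consistent pairs, discarding accidental matches.
    _, mask = cv2.findHomography(np.float32(src), np.float32(dst), cv2.RANSAC, 5.0)
    if mask is None:
        return []
    return [(tuple(s), tuple(d)) for s, d, ok in zip(src, dst, mask.ravel()) if ok]
```

The minimum-distance check discards matches between a keypoint and its immediate neighborhood, which would otherwise dominate when an image is matched against itself.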
{
"docid": "fb63ab21fa40b125c1a85b9c3ed1dd8d",
"text": "The two central topics of information theory are the compression and the transmission of data. Shannon, in his seminal work, formalized both these problems and determined their fundamental limits. Since then the main goal of coding theory has been to find practical schemes that approach these limits. Polar codes, recently invented by Arıkan, are the first “practical” codes that are known to achieve the capacity for a large class of channels. Their code construction is based on a phenomenon called “channel polarization”. The encoding as well as the decoding operation of polar codes can be implemented with O(N log N) complexity, where N is the blocklength of the code. We show that polar codes are suitable not only for channel coding but also achieve optimal performance for several other important problems in information theory. The first problem we consider is lossy source compression. We construct polar codes that asymptotically approach Shannon’s rate-distortion bound for a large class of sources. We achieve this performance by designing polar codes according to the “test channel”, which naturally appears in Shannon’s formulation of the rate-distortion function. The encoding operation combines the successive cancellation algorithm of Arıkan with a crucial new ingredient called “randomized rounding”. As for channel coding, both the encoding as well as the decoding operation can be implemented with O(N log N) complexity. This is the first known “practical” scheme that approaches the optimal rate-distortion trade-off. We also construct polar codes that achieve the optimal performance for the Wyner-Ziv and the Gelfand-Pinsker problems. Both these problems can be tackled using “nested” codes and polar codes are naturally suited for this purpose. We further show that polar codes achieve the capacity of asymmetric channels, multi-terminal scenarios like multiple access channels, and degraded broadcast channels. For each of these problems, our constructions are the first known “practical” schemes that approach the optimal performance. The original polar codes of Arıkan achieve a block error probability decaying exponentially in the square root of the block length. For source coding, the gap between the achieved distortion and the limiting distortion also vanishes exponentially in the square root of the blocklength. We explore other polarlike code constructions with better rates of decay. With this generalization,",
"title": ""
},
{
"docid": "460d6a8a5f78e6fa5c42fb6c219b3254",
"text": "Generative Adversarial Networks (GANs) have been successfully applied to the problem of policy imitation in a model-free setup. However, the computation graph of GANs, that include a stochastic policy as the generative model, is no longer differentiable end-to-end, which requires the use of high-variance gradient estimation. In this paper, we introduce the Modelbased Generative Adversarial Imitation Learning (MGAIL) algorithm. We show how to use a forward model to make the computation fully differentiable, which enables training policies using the exact gradient of the discriminator. The resulting algorithm trains competent policies using relatively fewer expert samples and interactions with the environment. We test it on both discrete and continuous action domains and report results that surpass the state-of-the-art.",
"title": ""
},
{
"docid": "4753ea589bd7dd76d3fb08ba8dce65ff",
"text": "Frequent Patterns are very important in knowledge discovery and data mining process such as mining of association rules, correlations etc. Prefix-tree based approach is one of the contemporary approaches for mining frequent patterns. FP-tree is a compact representation of transaction database that contains frequency information of all relevant Frequent Patterns (FP) in a dataset. Since the introduction of FP-growth algorithm for FP-tree construction, three major algorithms have been proposed, namely AFPIM, CATS tree, and CanTree, that have adopted FP-tree for incremental mining of frequent patterns. All of the three methods perform incremental mining by processing one transaction of the incremental database at a time and updating it to the FP-tree of the initial (original) database. Here in this paper we propose a novel method to take advantage of FP-tree representation of incremental transaction database for incremental mining. We propose “Batch Incremental Tree (BIT)” algorithm to merge two small consecutive duration FP-trees to obtain a FP-tree that is equivalent of FP-tree obtained when the entire database is processed at once from the beginning of the first duration",
"title": ""
},
{
"docid": "6052c0f2adfe4b75f96c21a5ee128bf5",
"text": "I present a new Markov chain sampling method appropriate for distributions with isolated modes. Like the recently-developed method of \\simulated tempering\", the \\tempered transition\" method uses a series of distributions that interpolate between the distribution of interest and a distribution for which sampling is easier. The new method has the advantage that it does not require approximate values for the normalizing constants of these distributions, which are needed for simulated tempering, and can be tedious to estimate. Simulated tempering performs a random walk along the series of distributions used. In contrast, the tempered transitions of the new method move systematically from the desired distribution, to the easily-sampled distribution, and back to the desired distribution. This systematic movement avoids the ineeciency of a random walk, an advantage that unfortunately is cancelled by an increase in the number of interpolating distributions required. Because of this, the sampling eeciency of the tempered transition method in simple problems is similar to that of simulated tempering. On more complex distributions, however, simulated tempering and tempered transitions may perform differently. Which is better depends on the ways in which the interpolating distributions are \\deceptive\".",
"title": ""
},
{
"docid": "1acc97afa9facf77289ddf1015b1e110",
"text": "This short note presents a new formal language, lambda dependency-based compositional semantics (lambda DCS) for representing logical forms in semantic parsing. By eliminating variables and making existential quantification implicit, lambda DCS logical forms are generally more compact than those in lambda calculus.",
"title": ""
},
{
"docid": "322141533594ed1927f36b850b8d963f",
"text": "Microelectrodes are widely used in the physiological recording of cell field potentials. As microelectrode signals are generally in the μV range, characteristics of the cell-electrode interface are important to the recording accuracy. Although the impedance of the microelectrode-solution interface has been well studied and modeled in the past, no effective model has been experimentally verified to estimate the noise of the cell-electrode interface. Also in existing interface models, spectral information is largely disregarded. In this work, we developed a model for estimating the noise of the cell-electrode interface from interface impedances. This model improves over existing noise models by including the cell membrane capacitor and frequency dependent impedances. With low-noise experiment setups, this model is verified by microelectrode array (MEA) experiments with mouse muscle myoblast cells. Experiments show that the noise estimated from this model has <;10% error, which is much less than estimations from existing models. With this model, noise of the cell-electrode interface can be estimated by simply measuring interface impedances. This model also provides insights for micro- electrode design to achieve good recording signal-to-noise ratio.",
"title": ""
}
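The record above is about predicting interface noise from measured impedance. A common first-order way to do that, which may or may not match the paper's full model, is to integrate the thermal-noise density 4·k·T·Re{Z(f)} over the recording band; the parallel R-C interface values below are purely illustrative.

```python
import numpy as np

K_B, T = 1.380649e-23, 310.0      # Boltzmann constant [J/K] and body temperature [K]

def thermal_noise_vrms(freqs: np.ndarray, z_real: np.ndarray) -> float:
    """RMS thermal noise [V] from the real part of the interface impedance over a band."""
    psd = 4 * K_B * T * z_real                                      # one-sided PSD [V^2/Hz]
    power = np.sum(0.5 * (psd[1:] + psd[:-1]) * np.diff(freqs))     # trapezoidal integration
    return float(np.sqrt(power))

# Illustrative interface: 1 Mohm resistance in parallel with a 10 nF double-layer
# capacitance, evaluated over a 10 Hz - 10 kHz recording band.
f = np.logspace(1, 4, 500)
R, C = 1e6, 10e-9
z = 1 / (1 / R + 1j * 2 * np.pi * f * C)
print(f"{thermal_noise_vrms(f, z.real) * 1e6:.2f} uV rms")
```

Working from Re{Z(f)} rather than a single resistance value is what lets the spectral shape of the interface, and not just its magnitude, enter the noise estimate.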
] |
scidocsrr
|
5f5b949a4f90253e6585c69ecc2325e1
|
Four Principles of Memory Improvement : A Guide to Improving Learning Efficiency
|
[
{
"docid": "660d47a9ffc013f444954f3f210de05e",
"text": "Taking tests enhances learning. But what happens when one cannot answer a test question-does an unsuccessful retrieval attempt impede future learning or enhance it? The authors examined this question using materials that ensured that retrieval attempts would be unsuccessful. In Experiments 1 and 2, participants were asked fictional general-knowledge questions (e.g., \"What peace treaty ended the Calumet War?\"). In Experiments 3-6, participants were shown a cue word (e.g., whale) and were asked to guess a weak associate (e.g., mammal); the rare trials on which participants guessed the correct response were excluded from the analyses. In the test condition, participants attempted to answer the question before being shown the answer; in the read-only condition, the question and answer were presented together. Unsuccessful retrieval attempts enhanced learning with both types of materials. These results demonstrate that retrieval attempts enhance future learning; they also suggest that taking challenging tests-instead of avoiding errors-may be one key to effective learning.",
"title": ""
},
{
"docid": "4d7cd44f2bbe9896049a7868165bd415",
"text": "Testing previously studied information enhances long-term memory, particularly when the information is successfully retrieved from memory. The authors examined the effect of unsuccessful retrieval attempts on learning. Participants in 5 experiments read an essay about vision. In the test condition, they were asked about embedded concepts before reading the passage; in the extended study condition, they were given a longer time to read the passage. To distinguish the effects of testing from attention direction, the authors emphasized the tested concepts in both conditions, using italics or bolded keywords or, in Experiment 5, by presenting the questions but not asking participants to answer them before reading the passage. Posttest performance was better in the test condition than in the extended study condition in all experiments--a pretesting effect--even though only items that were not successfully retrieved on the pretest were analyzed. The testing effect appears to be attributable, in part, to the role unsuccessful tests play in enhancing future learning.",
"title": ""
},
{
"docid": "3faeedfe2473dc837ab0db9eb4aefc4b",
"text": "The spacing effect—that is, the benefit of spacing learning events apart rather than massing them together—has been demonstrated in hundreds of experiments, but is not well known to educators or learners. I investigated the spacing effect in the realistic context of flashcard use. Learners often divide flashcards into relatively small stacks, but compared to a large stack, small stacks decrease the spacing between study trials. In three experiments, participants used a web-based study programme to learn GRE-type word pairs. Studying one large stack of flashcards (i.e. spacing) was more effective than studying four smaller stacks of flashcards separately (i.e. massing). Spacing was also more effective than cramming—that is, massing study on the last day before the test. Across experiments, spacing was more effective than massing for 90% of the participants, yet after the first study session, 72% of the participants believed that massing had been more effective than spacing. Copyright # 2009 John Wiley & Sons, Ltd.",
"title": ""
}
] |
[
{
"docid": "42d5712d781140edbc6a35703d786e15",
"text": "This paper considers control and estimation problems where the sensor signals and the actuator signals are transmitted to various subsystems over a network. In contrast to traditional control and estimation problems, here the observation and control packets may be lost or delayed. The unreliability of the underlying communication network is modeled stochastically by assigning probabilities to the successful transmission of packets. This requires a novel theory which generalizes classical control/estimation paradigms. The paper offers the foundations of such a novel theory. The central contribution is to characterize the impact of the network reliability on the performance of the feedback loop. Specifically, it is shown that for network protocols where successful transmissions of packets is acknowledged at the receiver (e.g., TCP-like protocols), there exists a critical threshold of network reliability (i.e., critical probabilities for the successful delivery of packets), below which the optimal controller fails to stabilize the system. Further, for these protocols, the separation principle holds and the optimal LQG controller is a linear function of the estimated state. In stark contrast, it is shown that when there is no acknowledgement of successful delivery of control packets (e.g., UDP-like protocols), the LQG optimal controller is in general nonlinear. Consequently, the separation principle does not hold in this circumstance",
"title": ""
},
{
"docid": "244745da710e8c401173fe39359c7c49",
"text": "BACKGROUND\nIntegrating information from the different senses markedly enhances the detection and identification of external stimuli. Compared with unimodal inputs, semantically and/or spatially congruent multisensory cues speed discrimination and improve reaction times. Discordant inputs have the opposite effect, reducing performance and slowing responses. These behavioural features of crossmodal processing appear to have parallels in the response properties of multisensory cells in the superior colliculi and cerebral cortex of non-human mammals. Although spatially concordant multisensory inputs can produce a dramatic, often multiplicative, increase in cellular activity, spatially disparate cues tend to induce a profound response depression.\n\n\nRESULTS\nUsing functional magnetic resonance imaging (fMRI), we investigated whether similar indices of crossmodal integration are detectable in human cerebral cortex, and for the synthesis of complex inputs relating to stimulus identity. Ten human subjects were exposed to varying epochs of semantically congruent and incongruent audio-visual speech and to each modality in isolation. Brain activations to matched and mismatched audio-visual inputs were contrasted with the combined response to both unimodal conditions. This strategy identified an area of heteromodal cortex in the left superior temporal sulcus that exhibited significant supra-additive response enhancement to matched audio-visual inputs and a corresponding sub-additive response to mismatched inputs.\n\n\nCONCLUSIONS\nThe data provide fMRI evidence of crossmodal binding by convergence in the human heteromodal cortex. They further suggest that response enhancement and depression may be a general property of multisensory integration operating at different levels of the neuroaxis and irrespective of the purpose for which sensory inputs are combined.",
"title": ""
},
{
"docid": "bd3374fefa94fbb11d344d651c0f55bc",
"text": "Extensive study has been conducted in the detection of license plate for the applications in intelligent transportation system (ITS). However, these results are all based on images acquired at a resolution of 640 times 480. In this paper, a new method is proposed to extract license plate from the surveillance video which is shot at lower resolution (320 times 240) as well as degraded by video compression. Morphological operations of bottom-hat and morphology gradient are utilized to detect the LP candidates, and effective schemes are applied to select the correct one. The average rates of correct extraction and false alarms are 96.62% and 1.77%, respectively, based on the experiments using more than four hours of video. The experimental results demonstrate the effectiveness and robustness of the proposed method",
"title": ""
},
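The bottom-hat and morphological-gradient steps named in the record above translate directly into OpenCV. The sketch below is rough and assumed, not the paper's algorithm: the kernel sizes, Otsu thresholding, and aspect-ratio filter are illustrative choices, and the paper's candidate-selection schemes are not reproduced.

```python
import cv2

def plate_candidates(frame_path: str):
    """Rough license-plate candidate boxes from bottom-hat + morphological gradient."""
    gray = cv2.imread(frame_path, cv2.IMREAD_GRAYSCALE)          # e.g. a 320x240 frame

    # Bottom-hat emphasizes dark characters sitting on a brighter plate background.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (17, 5))
    bothat = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, kernel)

    # Morphological gradient emphasizes the dense edges of plate text.
    grad = cv2.morphologyEx(bothat, cv2.MORPH_GRADIENT,
                            cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3)))
    _, binary = cv2.threshold(grad, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    binary = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)   # merge characters into blobs

    boxes = []
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if h > 0 and 2.0 < w / h < 7.0 and w > 40:               # plate-like aspect ratio
            boxes.append((x, y, w, h))
    return boxes
```

Morphology-only candidate generation is attractive at low resolution precisely because it does not depend on fine character detail surviving video compression.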
{
"docid": "e776c87ec35d67c6acbdf79d8a5cac0a",
"text": "Continuous deployment speeds up the process of existing agile methods, such as Scrum, and Extreme Programming (XP) through the automatic deployment of software changes to end-users upon passing of automated tests. Continuous deployment has become an emerging software engineering process amongst numerous software companies, such as Facebook, Github, Netflix, and Rally Software. A systematic analysis of software practices used in continuous deployment can facilitate a better understanding of continuous deployment as a software engineering process. Such analysis can also help software practitioners in having a shared vocabulary of practices and in choosing the software practices that they can use to implement continuous deployment. The goal of this paper is to aid software practitioners in implementing continuous deployment through a systematic analysis of software practices that are used by software companies. We studied the continuous deployment practices of 19 software companies by performing a qualitative analysis of Internet artifacts and by conducting follow-up inquiries. In total, we found 11 software practices that are used by 19 software companies. We also found that in terms of use, eight of the 11 software practices are common across 14 software companies. We observe that continuous deployment necessitates the consistent use of sound software engineering practices such as automated testing, automated deployment, and code review.",
"title": ""
},
{
"docid": "512d29a398f51041466884f4decec84a",
"text": "Deep reinforcement learning algorithms can learn complex behavioral skills, but real-world application of these methods requires a large amount of experience to be collected by the agent. In practical settings, such as robotics, this involves repeatedly attempting a task, resetting the environment between each attempt. However, not all tasks are easily or automatically reversible. In practice, this learning process requires extensive human intervention. In this work, we propose an autonomous method for safe and efficient reinforcement learning that simultaneously learns a forward and reset policy, with the reset policy resetting the environment for a subsequent attempt. By learning a value function for the reset policy, we can automatically determine when the forward policy is about to enter a non-reversible state, providing for uncertainty-aware safety aborts. Our experiments illustrate that proper use of the reset policy can greatly reduce the number of manual resets required to learn a task, can reduce the number of unsafe actions that lead to non-reversible states, and can automatically induce a curriculum.2",
"title": ""
},
{
"docid": "113b8cfda23cf7e8b3d7b4821d549bf7",
"text": "A load dependent zero-current detector is proposed in this paper for speeding up the transient response when load current changes from heavy to light loads. The fast transient control signal determines how long the reversed inductor current according to sudden load variations. At the beginning of load variation from heavy to light loads, the sensed voltage compared with higher voltage to discharge the overshoot output voltage for achieving fast transient response. Besides, for an adaptive reversed current period, the fast transient mechanism is turned off since the output voltage is rapidly regulated back to the acceptable level. Simulation results demonstrate that the ZCD circuit permits the reverse current flowing back into n-type power MOSFET at the beginning of load variations. The settling time is decreased to about 35 mus when load current suddenly changes from 500mA to 10 mA.",
"title": ""
},
{
"docid": "dc5bb80426556e3dd9090a705d3e17b4",
"text": "OBJECTIVES\nThe aim of this study was to locate the scientific literature dealing with addiction to the Internet, video games, and cell phones and to characterize the pattern of publications in these areas.\n\n\nMETHODS\nOne hundred seventy-nine valid articles were retrieved from PubMed and PsycINFO between 1996 and 2005 related to pathological Internet, cell phone, or video game use.\n\n\nRESULTS\nThe years with the highest numbers of articles published were 2004 (n = 42) and 2005 (n = 40). The most productive countries, in terms of number of articles published, were the United States (n = 52), China (n = 23), the United Kingdom (n = 17), Taiwan (n = 13), and South Korea (n = 9). The most commonly used language was English (65.4%), followed by Chinese (12.8%) and Spanish (4.5%). Articles were published in 96 different journals, of which 22 published 2 or more articles. The journal that published the most articles was Cyberpsychology & Behavior (n = 41). Addiction to the Internet was the most intensely studied (85.3%), followed by addiction to video games (13.6%) and cell phones (2.1%).\n\n\nCONCLUSIONS\nThe number of publications in this area is growing, but it is difficult to conduct precise searches due to a lack of clear terminology. To facilitate retrieval, bibliographic databases should include descriptor terms referring specifically to Internet, video games, and cell phone addiction as well as to more general addictions involving communications and information technologies and other behavioral addictions.",
"title": ""
},
{
"docid": "b240041ea6a885151fd39d863b9217dc",
"text": "Engaging in a test over previously studied information can serve as a potent learning event, a phenomenon referred to as the testing effect. Despite a surge of research in the past decade, existing theories have not yet provided a cohesive account of testing phenomena. The present study uses meta-analysis to examine the effects of testing versus restudy on retention. Key results indicate support for the role of effortful processing as a contributor to the testing effect, with initial recall tests yielding larger testing benefits than recognition tests. Limited support was found for existing theoretical accounts attributing the testing effect to enhanced semantic elaboration, indicating that consideration of alternative mechanisms is warranted in explaining testing effects. Future theoretical accounts of the testing effect may benefit from consideration of episodic and contextually derived contributions to retention resulting from memory retrieval. Additionally, the bifurcation model of the testing effect is considered as a viable framework from which to characterize the patterns of results present across the literature.",
"title": ""
},
{
"docid": "43ef67c897e7f998b1eb7d3524d514f4",
"text": "This brief proposes a delta-sigma modulator that operates at extremely low voltage without using a clock boosting technique. To maintain the advantages of a discrete-time integrator in oversampled data converters, a mixed differential difference amplifier (DDA) integrator is developed that removes the input sampling switch in a switched-capacitor integrator. Conventionally, many low-voltage delta-sigma modulators have used high-voltage generating circuits to boost the clock voltage levels. A mixed DDA integrator with both a switched-resistor and a switched-capacitor technique is developed to implement a discrete-time integrator without clock boosted switches. The proposed mixed DDA integrator is demonstrated by a third-order delta-sigma modulator with a feedforward topology. The fabricated modulator shows a 68-dB signal-to-noise-plus-distortion ratio for a 20-kHz signal bandwidth with an oversampling ratio of 80. The chip consumes 140 μW of power at a true 0.4-V power supply, which is the lowest voltage without a clock boosting technique among the state-of-the-art modulators in this signal band.",
"title": ""
},
{
"docid": "106fefb169c7e95999fb411b4e07954e",
"text": "Additional contents in web pages, such as navigation panels, advertisements, copyrights and disclaimer notices, are typically not related to the main subject and may hamper the performance of Web data mining. They are traditionally taken as noises and need to be removed properly. To achieve this, two intuitive and crucial kinds of information—the textual information and the visual information of web pages—is considered in this paper. Accordingly, Text Density and Visual Importance are defined for the Document Object Model (DOM) nodes of a web page. Furthermore, a content extraction method with these measured values is proposed. It is a fast, accurate and general method for extracting content from diverse web pages. And with the employment of DOM nodes, the original structure of the web page can be preserved. Evaluated with the CleanEval benchmark and with randomly selected pages from well-known Web sites, where various web domains and styles are tested, the effect of the method is demonstrated. The average F1-scores with our method were 8.7 % higher than the best scores among several alternative methods.",
"title": ""
},
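The central quantity in the record above, text density per DOM node, is simple to compute. Below is a minimal lxml-based sketch (lxml is an assumed dependency); the paper's visual importance measure and full extraction algorithm are not reproduced, and the density here is a simplified characters-per-descendant-tag version.

```python
from lxml import html

def densest_node(html_source: str):
    """Return the DOM node with the highest text density (characters per descendant tag)."""
    root = html.fromstring(html_source)
    best, best_density = root, 0.0
    for node in root.iter():
        if not isinstance(node.tag, str):          # skip comments / processing instructions
            continue
        text_len = len(" ".join(node.itertext()).strip())
        n_tags = sum(1 for _ in node.iter()) or 1
        density = text_len / n_tags
        if density > best_density:
            best, best_density = node, density
    return best, best_density

page = "<html><body><div id='nav'><a>Home</a><a>About</a></div>" \
       "<div id='main'><p>" + "Main article text. " * 30 + "</p></div></body></html>"
node, density = densest_node(page)
print(node.get("id") or node.tag, round(density, 1))
```

Link-heavy boilerplate such as navigation bars has many tags and little text, so its density stays low, which is the intuition the paper builds on before layering visual importance on top.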
{
"docid": "e797fbf7b53214df32d5694527ce5ba3",
"text": "One key task of fine-grained sentiment analysis of product reviews is to extract product aspects or features that users have expressed opinions on. This paper focuses on supervised aspect extraction using deep learning. Unlike other highly sophisticated supervised deep learning models, this paper proposes a novel and yet simple CNN model 1 employing two types of pre-trained embeddings for aspect extraction: general-purpose embeddings and domain-specific embeddings. Without using any additional supervision, this model achieves surprisingly good results, outperforming state-of-the-art sophisticated existing methods. To our knowledge, this paper is the first to report such double embeddings based CNN model for aspect extraction and achieve very good results.",
"title": ""
},
{
"docid": "2f17160c9f01aa779b1745a57e34e1aa",
"text": "OBJECTIVE\nTo report an ataxic variant of Alzheimer disease expressing a novel molecular phenotype.\n\n\nDESIGN\nDescription of a novel phenotype associated with a presenilin 1 mutation.\n\n\nSETTING\nThe subject was an outpatient who was diagnosed at the local referral center.\n\n\nPATIENT\nA 28-year-old man presented with psychiatric symptoms and cerebellar signs, followed by cognitive dysfunction. Severe beta-amyloid (Abeta) deposition was accompanied by neurofibrillary tangles and cell loss in the cerebral cortex and by Purkinje cell dendrite loss in the cerebellum. A presenilin 1 gene (PSEN1) S170F mutation was detected.\n\n\nMAIN OUTCOME MEASURES\nWe analyzed the processing of Abeta precursor protein in vitro as well as the Abeta species in brain tissue.\n\n\nRESULTS\nThe PSEN1 S170F mutation induced a 3-fold increase of both secreted Abeta(42) and Abeta(40) species and a 60% increase of secreted Abeta precursor protein in transfected cells. Soluble and insoluble fractions isolated from brain tissue showed a prevalence of N-terminally truncated Abeta species ending at both residues 40 and 42.\n\n\nCONCLUSION\nThese findings define a new Alzheimer disease molecular phenotype and support the concept that the phenotypic variability associated with PSEN1 mutations may be dictated by the Abeta aggregates' composition.",
"title": ""
},
{
"docid": "0b5f0cd5b8d49d57324a0199b4925490",
"text": "Deep brain stimulation (DBS) has an increasing role in the treatment of idiopathic Parkinson's disease. Although, the subthalamic nucleus (STN) is the commonly chosen target, a number of groups have reported that the most effective contact lies dorsal/dorsomedial to the STN (region of the pallidofugal fibres and the rostral zona incerta) or at the junction between the dorsal border of the STN and the latter. We analysed our outcome data from Parkinson's disease patients treated with DBS between April 2002 and June 2004. During this period we moved our target from the STN to the region dorsomedial/medial to it and subsequently targeted the caudal part of the zona incerta nucleus (cZI). We present a comparison of the motor outcomes between these three groups of patients with optimal contacts within the STN (group 1), dorsomedial/medial to the STN (group 2) and in the cZI nucleus (group 3). Thirty-five patients with Parkinson's disease underwent MRI directed implantation of 64 DBS leads into the STN (17), dorsomedial/medial to STN (20) and cZI (27). The primary outcome measure was the contralateral Unified Parkinson's Disease Rating Scale (UPDRS) motor score (off medication/off stimulation versus off medication/on stimulation) measured at follow-up (median time 6 months). The secondary outcome measures were the UPDRS III subscores of tremor, bradykinesia and rigidity. Dyskinesia score, L-dopa medication reduction and stimulation parameters were also recorded. The mean adjusted contralateral UPDRS III score with cZI stimulation was 3.1 (76% reduction) compared to 4.9 (61% reduction) in group 2 and 5.7 (55% reduction) in the STN (P-value for trend <0.001). There was a 93% improvement in tremor with cZI stimulation versus 86% in group 2 versus 61% in group 1 (P-value = 0.01). Adjusted 'off-on' rigidity scores were 1.0 for the cZI group (76% reduction), 2.0 for group 2 (52% reduction) and 2.1 for group 1 (50% reduction) (P-value for trend = 0.002). Bradykinesia was more markedly improved in the cZI group (65%) compared to group 2 (56%) or STN group (59%) (P-value for trend = 0.17). There were no statistically significant differences in the dyskinesia scores, L-dopa medication reduction and stimulation parameters between the three groups. Stimulation related complications were seen in some group 2 patients. High frequency stimulation of the cZI results in greater improvement in contralateral motor scores in Parkinson's disease patients than stimulation of the STN. We discuss the implications of this finding and the potential role played by the ZI in Parkinson's disease.",
"title": ""
},
{
"docid": "06502355f6db37b73806e9e57476e749",
"text": "BACKGROUND\nBecause the trend of pharmacotherapy is toward controlling diet rather than administration of drugs, in our study we examined the probable relationship between Creatine (Cr) or Whey (Wh) consumption and anesthesia (analgesia effect of ketamine). Creatine and Wh are among the most favorable supplements in the market. Whey is a protein, which is extracted from milk and is a rich source of amino acids. Creatine is an amino acid derivative that can change to ATP in the body. Both of these supplements result in Nitric Oxide (NO) retention, which is believed to be effective in N-Methyl-D-aspartate (NMDA) receptor analgesia.\n\n\nOBJECTIVES\nThe main question of this study was whether Wh and Cr are effective on analgesic and anesthetic characteristics of ketamine and whether this is related to NO retention or amino acids' features.\n\n\nMATERIALS AND METHODS\nWe divided 30 male Wistar rats to three (n = 10) groups; including Cr, Wh and sham (water only) groups. Each group was administered (by gavage) the supplements for an intermediate dosage during 25 days. After this period, they became anesthetized using a Ketamine-Xylazine (KX) and their time to anesthesia and analgesia, and total sleep time were recorded.\n\n\nRESULTS\nData were analyzed twice using the SPSS 18 software with Analysis of Variance (ANOVA) and post hoc test; first time we expunged the rats that didn't become anesthetized and the second time we included all of the samples. There was a significant P-value (P < 0.05) for total anesthesia time in the second analysis. Bonferroni multiple comparison indicated that the difference was between Cr and Sham groups (P < 0.021).\n\n\nCONCLUSIONS\nThe data only indicated that there might be a significant relationship between Cr consumption and total sleep time. Further studies, with rats of different gender and different dosage of supplement and anesthetics are suggested.",
"title": ""
},
{
"docid": "5bf2c4a187b35ad5c4e69aef5eb9ffea",
"text": "In the last decade, the research of the usability of mobile phones has been a newly evolving area with few established methodologies and realistic practices that ensure capturing usability in evaluation. Thus, there exists growing demand to explore appropriate evaluation methodologies that evaluate the usability of mobile phones quickly as well as comprehensively. This study aims to develop a task-based usability checklist based on heuristic evaluations in views of mobile phone user interface (UI) practitioners. A hierarchical structure of UI design elements and usability principles related to mobile phones were developed and then utilized to develop the checklist. To demonstrate the practical effectiveness of the proposed checklist, comparative experiments were conducted on the usability checklist and usability testing. The majority of usability problems found by usability testing and additional problems were discovered by the proposed checklist. It is expected that the usability checklist proposed in this study could be used quickly and efficiently by usability practitioners to evaluate the mobile phone UI in the middle of the mobile phone development process.",
"title": ""
},
{
"docid": "35ae4e59fd277d57c2746dfccf9b26b0",
"text": "In the field of saliency detection, many graph-based algorithms heavily depend on the accuracy of the pre-processed superpixel segmentation, which leads to significant sacrifice of detail information from the input image. In this paper, we propose a novel bottom-up saliency detection approach that takes advantage of both region-based features and image details. To provide more accurate saliency estimations, we first optimize the image boundary selection by the proposed erroneous boundary removal. By taking the image details and region-based estimations into account, we then propose the regularized random walks ranking to formulate pixel-wised saliency maps from the superpixel-based background and foreground saliency estimations. Experiment results on two public datasets indicate the significantly improved accuracy and robustness of the proposed algorithm in comparison with 12 state-of-the-art saliency detection approaches.",
"title": ""
},
{
"docid": "cd3d9bb066729fc7107c0fef89f664fe",
"text": "The extended contact hypothesis proposes that knowledge that an in-group member has a close relationship with an out-group member can lead to more positive intergroup attitudes. Proposed mechanisms are the in-group or out-group member serving as positive exemplars and the inclusion of the out-group member's group membership in the self. In Studies I and 2, respondents knowing an in-group member with an out-group friend had less negative attitudes toward that out-group, even controlling for disposition.il variables and direct out-group friendships. Study 3, with constructed intergroup-conflict situations (on the robbers cave model). found reduced negative out-group attitudes after participants learned of cross-group friendships. Study 4, a minimal group experiment, showed less negative out-group attitudes for participants observing an apparent in-group-out-group friendship.",
"title": ""
},
{
"docid": "f04682957e97b8ccb4f40bf07dde2310",
"text": "This paper introduces a dataset gathered entirely in urban scenarios with a car equipped with one stereo camera and five laser scanners, among other sensors. One distinctive feature of the present dataset is the existence of high-resolution stereo images grabbed at high rate (20 fps) during a 36.8 km trajectory, which allows the benchmarking of a variety of computer vision techniques. We describe the employed sensors and highlight some applications which could be benchmarked with the presented work. Both plain text and binary files are provided, as well as open source tools for working with the binary versions. The dataset is available for download in http://www.mrpt.org/MalagaUrbanDataset.",
"title": ""
},
{
"docid": "644d2fcc7f2514252c2b9da01bb1ef42",
"text": "We now described an interesting application of SVD to text do cuments. Suppose we represent documents as a bag of words, soXij is the number of times word j occurs in document i, for j = 1 : W andi = 1 : D, where W is the number of words and D is the number of documents. To find a document that contains a g iven word, we can use standard search procedures, but this can get confuse d by ynonomy (different words with the same meaning) andpolysemy (same word with different meanings). An alternative approa ch is to assume that X was generated by some low dimensional latent representation X̂ ∈ IR, whereK is the number of latent dimensions. If we compare documents in the latent space, we should get improved retrie val performance, because words of similar meaning get mapped to similar low dimensional locations. We can compute a low dimensional representation of X by computing the SVD, and then taking the top k singular values/ vectors: 1",
"title": ""
},
{
"docid": "e289d20455fd856ce4cf72589b3e206b",
"text": "Transfer learning is a vital technique that generalizes models trained for one setting or task to other settings or tasks. For example in speech recognition, an acoustic model trained for one language can be used to recognize speech in another language, with little or no re-training data. Transfer learning is closely related to multi-task learning (cross-lingual vs. multilingual), and is traditionally studied in the name of `model adaptation'. Recent advance in deep learning shows that transfer learning becomes much easier and more effective with high-level abstract features learned by deep models, and the `transfer' can be conducted not only between data distributions and data types, but also between model structures (e.g., shallow nets and deep nets) or even model types (e.g., Bayesian models and neural models). This review paper summarizes some recent prominent research towards this direction, particularly for speech and language processing. We also report some results from our group and highlight the potential of this very interesting research field1.",
"title": ""
}
] |
scidocsrr
|
94bc3baaf884e3038c21f1fe51cdd7ae
|
Sample Compression, Learnability, and the Vapnik-Chervonenkis Dimension
|
[
{
"docid": "b74e8a911368384ccf7126c0dcbf55fd",
"text": "Valiant's learnability model is extended to learning classes of concepts defined by regions in Euclidean space En. The methods in this paper lead to a unified treatment of some of Valiant's results, along with previous results on distribution-free convergence of certain pattern recognition algorithms. It is shown that the essential condition for distribution-free learnability is finiteness of the Vapnik-Chervonenkis dimension, a simple combinatorial parameter of the class of concepts to be learned. Using this parameter, the complexity and closure properties of learnable classes are analyzed, and the necessary and sufficient conditions are provided for feasible learnability.",
"title": ""
}
] |
[
{
"docid": "c10d33abc6ed1d47c11bf54ed38e5800",
"text": "The past decade has seen a steady growth of interest in statistical language models for information retrieval, and much research work has been conducted on this subject. This book by ChengXiang Zhai summarizes most of this research. It opens with an introduction covering the basic concepts of information retrieval and statistical languagemodels, presenting the intuitions behind these concepts. This introduction is then followed by a chapter providing an overview of:",
"title": ""
},
{
"docid": "5339554b6f753b69b5ace705af0263cd",
"text": "We explore several oversampling techniques for an imbalanced multi-label classification problem, a setting often encountered when developing models for Computer-Aided Diagnosis (CADx) systems. While most CADx systems aim to optimize classifiers for overall accuracy without considering the relative distribution of each class, we look into using synthetic sampling to increase perclass performance when predicting the degree of malignancy. Using low-level image features and a random forest classifier, we show that using synthetic oversampling techniques increases the sensitivity of the minority classes by an average of 7.22% points, with as much as a 19.88% point increase in sensitivity for a particular minority class. Furthermore, the analysis of low-level image feature distributions for the synthetic nodules reveals that these nodules can provide insights on how to preprocess image data for better classification performance or how to supplement the original datasets when more data acquisition is feasible.",
"title": ""
},
{
"docid": "4a51fa781609c0fab79fff536a14aa43",
"text": "Recently end-to-end speech recognition has obtained much attention. One of the popular models to achieve end-to-end speech recognition is attention based encoder-decoder model, which usually generating output sequences iteratively by attending the whole representations of the input sequences. However, predicting outputs until receiving the whole input sequence is not practical for online or low time latency speech recognition. In this paper, we present a simple but effective attention mechanism which can make the encoder-decoder model generate outputs without attending the entire input sequence and can apply to online speech recognition. At each prediction step, the attention is assumed to be a time-moving gaussian window with variable size and can be predicted by using previous input and output information instead of the content based computation on the whole input sequence. To further improve the online performance of the model, we employ deep convolutional neural networks as encoder. Experiments show that the gaussian prediction based attention works well and under the help of deep convolutional neural networks the online model achieves 19.5% phoneme error rate in TIMIT ASR task.",
"title": ""
},
{
"docid": "d999bb4717dd07b2560a85c7c775eb0e",
"text": "We present a new algorithm for removing motion blur from a single image. Our method computes a deblurred image using a unified probabilistic model of both blur kernel estimation and unblurred image restoration. We present an analysis of the causes of common artifacts found in current deblurring methods, and then introduce several novel terms within this probabilistic model that are inspired by our analysis. These terms include a model of the spatial randomness of noise in the blurred image, as well a new local smoothness prior that reduces ringing artifacts by constraining contrast in the unblurred image wherever the blurred image exhibits low contrast. Finally, we describe an effficient optimization scheme that alternates between blur kernel estimation and unblurred image restoration until convergence. As a result of these steps, we are able to produce high quality deblurred results in low computation time. We are even able to produce results of comparable quality to techniques that require additional input images beyond a single blurry photograph, and to methods that require additional hardware.",
"title": ""
},
{
"docid": "94014090d66c6dc4ec46da2c1de2a605",
"text": "Sentence pair modeling is critical for many NLP tasks, such as paraphrase identification, semantic textual similarity, and natural language inference. Most state-of-the-art neural models for these tasks rely on pretrained word embedding and compose sentence-level semantics in varied ways; however, few works have attempted to verify whether we really need pretrained embeddings in these tasks. In this paper, we study how effective subwordlevel (character and character n-gram) representations are in sentence pair modeling. Though it is well-known that subword models are effective in tasks with single sentence input, including language modeling and machine translation, they have not been systematically studied in sentence pair modeling tasks where the semantic and string similarities between texts matter. Our experiments show that subword models without any pretrained word embedding can achieve new state-of-the-art results on two social media datasets and competitive results on news data for paraphrase identification.",
"title": ""
},
{
"docid": "be68222ba029a46cf9c7463b0f233db2",
"text": "Solar panels have been improving in efficiency and dropping in price, and are therefore becoming more common and economically viable. However, the performance of solar panels depends not only on the weather, but also on other external factors such as shadow, dirt, dust, etc. In this paper, we describe a simple and practical data-driven method for classifying anomalies in the power output of solar panels. In particular, we propose and experimentally verify (using two solar panel arrays in Ontario, Canada) a simple classification rule based on physical properties of solar radiation that can distinguish between shadows and direct covering of the panel, e.g,. by dirt or snow.",
"title": ""
},
{
"docid": "4e2c466fac826f5e32a51f09355d7585",
"text": "Congested networks involve complex traffic dynamics that can be accurately captured with detailed simulation models. However, when performing optimization of such networks the use of simulators is limited due to their stochastic nature and their relatively high evaluation cost. This has lead to the use of general-purpose analytical metamodels, that are cheaper to evaluate and easier to integrate within a classical optimization framework, but do not capture the specificities of the underlying congested conditions. In this paper, we argue that to perform efficient optimization for congested networks it is important to develop analytical surrogates specifically tailored to the context at hand so that they capture the key components of congestion (e.g. its sources, its propagation, its impact) while achieving a good tradeoff between realism and tractability. To demonstrate this, we present a surrogate that provides a detailed description of congestion by capturing the main interactions between the different network components while preserving analytical tractable. In particular, we consider the optimization of vehicle traffic in an urban road network. The proposed surrogate model is an approximate queueing network model that resorts to finite capacity queueing theory to account for congested conditions. Existing analytic queueing models for urban networks are formulated for a single intersection, and thus do not take into account the interactions between queues. The proposed model considers a set of intersections and analytically captures these interactions. We show that this level of detail is sufficient for optimization in the context of signal control for peak hour traffic. Although there is a great variety of signal control methodologies in the literature, there is still a need for solutions that are appropriate and efficient under saturated conditions, where the performance of signal control strategies and the formation and propagation of queues are strongly related. We formulate a fixed-time signal control problem where the network model is included as a set of constraints. We apply this methodology to a subnetwork of the Lausanne city center and use a microscopic traffic simulator to validate its performance. We also compare it with several other methods. As congestion increases, the new method leads to improved average performance measures. The results highlight the importance of taking the interaction between consecutive roads into account when deriving signal plans for congested urban road networks.",
"title": ""
},
{
"docid": "31404322fb03246ba2efe451191e29fa",
"text": "OBJECTIVES\nThe aim of this study is to report an unusual form of penile cancer presentation associated with myiasis infestation, treatment options and outcomes.\n\n\nMATERIALS AND METHODS\nWe studied 10 patients with suspected malignant neoplasm of the penis associated with genital myiasis infestation. Diagnostic assessment was conducted through clinical history, physical examination, penile biopsy, larvae identification and computerized tomography scan of the chest, abdomen and pelvis. Clinical and pathological staging was done according to 2002 TNM classification system. Radical inguinal lymphadenectomy was conducted according to the primary penile tumor pathology and clinical lymph nodes status.\n\n\nRESULTS\nPatients age ranged from 41 to 77 years (mean=62.4). All patients presented squamous cell carcinoma of the penis in association with myiasis infestation caused by Psychoda albipennis. Tumor size ranged from 4cm to 12cm (mean=5.3). Circumcision was conducted in 1 (10%) patient, while penile partial penectomy was performed in 5 (50%). Total penectomy was conducted in 2 (20%) patients, while emasculation was the treatment option for 2 (20%). All patients underwent radical inguinal lymphadenectomy. Prophylactic lymphadenectomy was performed on 3 (30%) patients, therapeutic on 5 (50%), and palliative lymphadenectomy on 2 (20%) patients. Time elapsed from primary tumor treatment to radical inguinal lymphadenectomy was 2 to 6 weeks. The mean follow-up was 34.3 months.\n\n\nCONCLUSION\nThe occurrence of myiasis in the genitalia is more common in patients with precarious hygienic practices and low socio-economic level. The treatment option varied according to the primary tumor presentation and clinical lymph node status.",
"title": ""
},
{
"docid": "f0d17b259b699bc7fb7e8f525ec64db0",
"text": "Developing Intelligent Systems involves artificial intelligence approaches including artificial neural networks. Here, we present a tutorial of Deep Neural Networks (DNNs), and some insights about the origin of the term “deep”; references to deep learning are also given. Restricted Boltzmann Machines, which are the core of DNNs, are discussed in detail. An example of a simple two-layer network, performing unsupervised learning for unlabeled data, is shown. Deep Belief Networks (DBNs), which are used to build networks with more than two layers, are also described. Moreover, examples for supervised learning with DNNs performing simple prediction and classification tasks, are presented and explained. This tutorial includes two intelligent pattern recognition applications: handwritten digits (benchmark known as MNIST) and speech recognition.",
"title": ""
},
{
"docid": "dbe5661d99798b24856c61b93ddb2392",
"text": "Traditionally, appearance models for recognition, reacquisition and tracking problems have been evaluated independently using metrics applied to a complete system. It is shown that appearance models for these three problems can be evaluated using a cumulative matching curve on a standardized dataset, and that this one curve can be converted to a synthetic reacquisition or disambiguation rate for tracking. A challenging new dataset for viewpoint invariant pedestrian recognition (VIPeR) is provided as an example. This dataset contains 632 pedestrian image pairs from arbitrary viewpoints. Several baseline methods are tested on this dataset and the results are presented as a benchmark for future appearance models and matchin methods.",
"title": ""
},
{
"docid": "f514d5177f234e786b9bfc295359c852",
"text": "Biological sequence comparison is a very important operation in Bioinformatics. Even though there do exist exact methods to compare biological sequences, these methods are often neglected due to their quadratic time and space complexity. In order to accelerate these methods, many GPU algorithms were proposed in the literature. Nevertheless, all of them restrict the size of the smallest sequence in such a way that Megabase genome comparison is prevented. In this paper, we propose and evaluate CUDAlign, a GPU algorithm that is able to compare Megabase biological sequences with an exact Smith-Waterman affine gap variant. CUDAlign was implemented in CUDA and tested in two GPU boards, separately. For real sequences whose size range from 1MBP (Megabase Pairs) to 47MBP, a close to uniform GCUPS (Giga Cells Updates per Second) was obtained, showing the potential scalability of our approach. Also, CUDAlign was able to compare the human chromosome 21 and the chimpanzee chromosome 22. This operation took 21 hours on GeForce GTX 280, resulting in a peak performance of 20.375 GCUPS. As far as we know, this is the first time such huge chromosomes are compared with an exact method.",
"title": ""
},
{
"docid": "7bbffa53f71207f0f218a09f18586541",
"text": "Myelotoxicity induced by chemotherapy may become life-threatening. Neutropenia may be prevented by granulocyte colony-stimulating factors (GCSF), and epoetin may prevent anemia, but both cause substantial side effects and increased costs. According to non-established data, wheat grass juice (WGJ) may prevent myelotoxicity when applied with chemotherapy. In this prospective matched control study, 60 patients with breast carcinoma on chemotherapy were enrolled and assigned to an intervention or control arm. Those in the intervention arm (A) were given 60 cc of WGJ orally daily during the first three cycles of chemotherapy, while those in the control arm (B) received only regular supportive therapy. Premature termination of treatment, dose reduction, and starting GCSF or epoetin were considered as \"censoring events.\" Response rate to chemotherapy was calculated in patients with evaluable disease. Analysis of the results showed that five censoring events occurred in Arm A and 15 in Arm B (P = 0.01). Of the 15 events in Arm B, 11 were related to hematological events. No reduction in response rate was observed in patients who could be assessed for response. Side effects related to WGJ were minimal, including worsening of nausea in six patients, causing cessation of WGJ intake. In conclusion, it was found that WGJ taken during FAC chemotherapy may reduce myelotoxicity, dose reductions, and need for GCSF support, without diminishing efficacy of chemotherapy. These preliminary results need confirmation in a phase III study.",
"title": ""
},
{
"docid": "60a7e9be448a0ac4e25d1eed5b075de9",
"text": "Prepositional phrase (PP) attachment disambiguation is a known challenge in syntactic parsing. The lexical sparsity associated with PP attachments motivates research in word representations that can capture pertinent syntactic and semantic features of the word. One promising solution is to use word vectors induced from large amounts of raw text. However, state-of-the-art systems that employ such representations yield modest gains in PP attachment accuracy. In this paper, we show that word vector representations can yield significant PP attachment performance gains. This is achieved via a non-linear architecture that is discriminatively trained to maximize PP attachment accuracy. The architecture is initialized with word vectors trained from unlabeled data, and relearns those to maximize attachment accuracy. We obtain additional performance gains with alternative representations such as dependency-based word vectors. When tested on both English and Arabic datasets, our method outperforms both a strong SVM classifier and state-of-the-art parsers. For instance, we achieve 82.6% PP attachment accuracy on Arabic, while the Turbo and Charniak self-trained parsers obtain 76.7% and 80.8% respectively.",
"title": ""
},
{
"docid": "92fcc4d21872dca232c624a11eb3988c",
"text": "Most automobile manufacturers maintain many vehicle types to keep a successful position on the market. Through the further development all vehicle types gain a diverse amount of new functionality. Additional features have to be supported by the car’s software. For time efficient accomplishment, usually the existing electronic control unit (ECU) code is extended. In the majority of cases this evolutionary development process is accompanied by a constant decay of the software architecture. This effect known as software erosion leads to an increasing deviation from the requirements specifications. To counteract the erosion it is necessary to continuously restore the architecture in respect of the specification. Automobile manufacturers cope with the erosion of their ECU software with varying degree of success. Successfully we applied a methodical and structured approach of architecture restoration in the specific case of the brake servo unit (BSU). Software product lines from existing BSU variants were extracted by explicit projection of the architecture variability and decomposition of the original architecture. After initial application, this approach was capable to restore the BSU architecture recurrently.",
"title": ""
},
{
"docid": "6ac8d9cfe3c1f6e6a6a2fd32b675c89a",
"text": "Each discrete cosine transform (DCT) uses N real basis vectors whose components are cosines. In the DCT-4, for example, the jth component of vk is cos(j+ 2 )(k+ 1 2 ) π N . These basis vectors are orthogonal and the transform is extremely useful in image processing. If the vector x gives the intensities along a row of pixels, its cosine series ∑ ckvk has the coefficients ck = (x,vk)/N . They are quickly computed from a Fast Fourier Transform. But a direct proof of orthogonality, by calculating inner products, does not reveal how natural these cosine vectors are. We prove orthogonality in a different way. Each DCT basis contains the eigenvectors of a symmetric “second difference” matrix. By varying the boundary conditions we get the established transforms DCT-1 through DCT-4. Other combinations lead to four additional cosine transforms. The type of boundary condition (Dirichlet or Neumann, centered at a meshpoint or a midpoint) determines the applications that are appropriate for each transform. The centering also determines the period: N − 1 or N in the established transforms, N− 2 or N+ 1 2 in the other four. The key point is that all these “eigenvectors of cosines” come from simple and familiar matrices.",
"title": ""
},
{
"docid": "ab07b74740f5353f006e93547a7931c8",
"text": "Separation of business logic from any technical platform is an important principle to cope with complexity, and to achieve the required engineering quality factors such as adaptability, maintainability, and reusability. In this context, Model Driven Architecture (MDA) is a framework defined by the OMG for designing high quality software systems. In this paper we are going to present a model-driven approach to the development of the MVC2 web applications especially Spring MVC based on the uml class diagramme. The transformation language is ATL (Atlas Transformation Language). The transformation rules defined in this paper can generate from, the class diagramme, an XML file respecting the architecture MVC2 (Model-View-Controller), this file can be used to generate the end-to-end necessary Spring MVC code of a web application.",
"title": ""
},
{
"docid": "d158d2d0b24fe3766b6ddb9bff8e8010",
"text": "We introduce an online learning approach for multitarget tracking. Detection responses are gradually associated into tracklets in multiple levels to produce final tracks. Unlike most previous approaches which only focus on producing discriminative motion and appearance models for all targets, we further consider discriminative features for distinguishing difficult pairs of targets. The tracking problem is formulated using an online learned CRF model, and is transformed into an energy minimization problem. The energy functions include a set of unary functions that are based on motion and appearance models for discriminating all targets, as well as a set of pairwise functions that are based on models for differentiating corresponding pairs of tracklets. The online CRF approach is more powerful at distinguishing spatially close targets with similar appearances, as well as in dealing with camera motions. An efficient algorithm is introduced for finding an association with low energy cost. We evaluate our approach on three public data sets, and show significant improvements compared with several state-of-art methods.",
"title": ""
},
{
"docid": "5894fd2d3749df78afb49b27ad26f459",
"text": "Information security policy compliance (ISP) is one of the key concerns that face organizations today. Although technical and procedural measures help improve information security, there is an increased need to accommodate human, social and organizational factors. Despite the plethora of studies that attempt to identify the factors that motivate compliance behavior or discourage abuse and misuse behaviors, there is a lack of studies that investigate the role of ethical ideology per se in explaining compliance behavior. The purpose of this research is to investigate the role of ethics in explaining Information Security Policy (ISP) compliance. In that regard, a model that integrates behavioral and ethical theoretical perspectives is developed and tested. Overall, analyses indicate strong support for the validation of the proposed theoretical model.",
"title": ""
},
{
"docid": "2c2ae81ab314b39dd6523e4b6c546d3f",
"text": "The China Brain Project covers both basic research on neural mechanisms underlying cognition and translational research for the diagnosis and intervention of brain diseases as well as for brain-inspired intelligence technology. We discuss some emerging themes, with emphasis on unique aspects.",
"title": ""
},
{
"docid": "93314112049e3bccd7853e63afc97f73",
"text": "In this paper, we address the challenging task of scene segmentation. In order to capture the rich contextual dependencies over image regions, we propose Directed Acyclic Graph-Recurrent Neural Networks (DAG-RNN) to perform context aggregation over locally connected feature maps. More specifically, DAG-RNN is placed on top of pre-trained CNN (feature extractor) to embed context into local features so that their representative capability can be enhanced. In comparison with plain CNN (as in Fully Convolutional Networks-FCN), DAG-RNN is empirically found to be significantly more effective at aggregating context. Therefore, DAG-RNN demonstrates noticeably performance superiority over FCNs on scene segmentation. Besides, DAG-RNN entails dramatically less parameters as well as demands fewer computation operations, which makes DAG-RNN more favorable to be potentially applied on resource-constrained embedded devices. Meanwhile, the class occurrence frequencies are extremely imbalanced in scene segmentation, so we propose a novel class-weighted loss to train the segmentation network. The loss distributes reasonably higher attention weights to infrequent classes during network training, which is essential to boost their parsing performance. We evaluate our segmentation network on three challenging public scene segmentation benchmarks: Sift Flow, Pascal Context and COCO Stuff. On top of them, we achieve very impressive segmentation performance.",
"title": ""
}
] |
scidocsrr
|
359eb65bdd0ebf6d9cc212b42f53cbba
|
Virtual Network Function placement for resilient Service Chain provisioning
|
[
{
"docid": "182bb07fb7dbbaf17b6c7a084f1c4fb2",
"text": "Network Functions Virtualization (NFV) is an upcoming paradigm where network functionality is virtualized and split up into multiple building blocks that can be chained together to provide the required functionality. This approach increases network flexibility and scalability as these building blocks can be allocated and reallocated at runtime depending on demand. The success of this approach depends on the existence and performance of algorithms that determine where, and how these building blocks are instantiated. In this paper, we present and evaluate a formal model for resource allocation of virtualized network functions within NFV environments, a problem we refer to as Virtual Network Function Placement (VNF-P). We focus on a hybrid scenario where part of the services may be provided by dedicated physical hardware, and where part of the services are provided using virtualized service instances. We evaluate the VNF-P model using a small service provider scenario and two types of service chains, and evaluate its execution speed. We find that the algorithms finish in 16 seconds or less for a small service provider scenario, making it feasible to react quickly to changing demand.",
"title": ""
},
{
"docid": "cbe9729b403a07386a76447c4339c5f3",
"text": "Network appliances perform different functions on network flows and constitute an important part of an operator's network. Normally, a set of chained network functions process network flows. Following the trend of virtualization of networks, virtualization of the network functions has also become a topic of interest. We define a model for formalizing the chaining of network functions using a context-free language. We process deployment requests and construct virtual network function graphs that can be mapped to the network. We describe the mapping as a Mixed Integer Quadratically Constrained Program (MIQCP) for finding the placement of the network functions and chaining them together considering the limited network resources and requirements of the functions. We have performed a Pareto set analysis to investigate the possible trade-offs between different optimization objectives.",
"title": ""
}
] |
[
{
"docid": "2a09d97b350fa249fc6d4bbf641697e2",
"text": "The goal of this study was to investigate the effect of lead and the influence of chelating agents,meso 2, 3-dimercaptosuccinic acid (DMSA) and D-Penicillamine, on the biochemical contents of the brain tissues of Catla catla fingerlings by Fourier Transform Infrared Spectroscopy. FT-IR spectra revealed significant differences in absorbance intensities between control and lead-intoxicated brain tissues, reflecting a change in protein and lipid contents in the brain tissues due to lead toxicity. In addition, the administration of chelating agents, DMSA and D-Penicillamine, improved the protein and lipid contents in the brain tissues compared to lead-intoxicated tissues. Further, DMSA was more effective in reducing the body burden of lead. The protein secondary structure analysis revealed that lead intoxication causes an alteration in protein profile with a decrease in α-helix and an increase in β-sheet structure of Catla catla brain. In conclusion, the study demonstrated that FT-IR spectroscopy could differentiate the normal and lead-intoxicated brain tissues due to intrinsic differences in intensity.",
"title": ""
},
{
"docid": "0612db6f5e30d37122d37b26e2a2bb0a",
"text": "This paper presents a novel approach to procedural generation of urban maps for First Person Shooter (FPS) games. A multi-agent evolutionary system is employed to place streets, buildings and other items inside the Unity3D game engine, resulting in playable video game levels. A computational agent is trained using machine learning techniques to capture the intent of the game designer as part of the multi-agent system, and to enable a semi-automated aesthetic selection for the underlying genetic algorithm.",
"title": ""
},
{
"docid": "7844d2e53deba7bcfef03f06a6bced59",
"text": "In power line communications (PLCs), the multipath-induced dispersion and the impulsive noise are the two fundamental impediments in the way of high-integrity communications. The conventional orthogonal frequency-division multiplexing (OFDM) system is capable of mitigating the multipath effects in PLCs, but it fails to suppress the impulsive noise effects. Therefore, in order to mitigate both the multipath effects and the impulsive effects in PLCs, in this paper, a compressed impairment sensing (CIS)-assisted and interleaved-double-FFT (IDFFT)-aided system is proposed for indoor broadband PLC. Similar to classic OFDM, data symbols are transmitted in the time-domain, while the equalization process is employed in the frequency domain in order to achieve the maximum attainable multipath diversity gain. In addition, a specifically designed interleaver is employed in the frequency domain in order to mitigate the impulsive noise effects, which relies on the principles of compressed sensing (CS). Specifically, by taking advantage of the interleaving process, the impairment impulsive samples can be estimated by exploiting the principle of CS and then cancelled. In order to improve the estimation performance of CS, we propose a beneficial pilot design complemented by a pilot insertion scheme. Finally, a CIS-assisted detector is proposed for the IDFFT system advocated. Our simulation results show that the proposed CIS-assisted IDFFT system is capable of achieving a significantly improved performance compared with the conventional OFDM. Furthermore, the tradeoffs to be struck in the design of the CIS-assisted IDFFT system are also studied.",
"title": ""
},
{
"docid": "3f7c6490ccb6d95bd22644faef7f452f",
"text": "A blockchain is a distributed, decentralised database of records of digital events (transactions) that took place and were shared among the participating parties. Each transaction in the public ledger is verified by consensus of a majority of the participants in the system. Bitcoin may not be that important in the future, but blockchain technology's role in Financial and Non-financial world can't be undermined. In this paper, we provide a holistic view of how Blockchain technology works, its strength and weaknesses, and its role to change the way the business happens today and tomorrow.",
"title": ""
},
{
"docid": "5ebdda11fbba5d0633a86f2f52c7a242",
"text": "What is index modulation (IM)? This is an interesting question that we have started to hear more and more frequently over the past few years. The aim of this paper is to answer this question in a comprehensive manner by covering not only the basic principles and emerging variants of IM, but also reviewing the most recent as well as promising advances in this field toward the application scenarios foreseen in next-generation wireless networks. More specifically, we investigate three forms of IM: spatial modulation, channel modulation and orthogonal frequency division multiplexing (OFDM) with IM, which consider the transmit antennas of a multiple-input multiple-output system, the radio frequency mirrors (parasitic elements) mounted at a transmit antenna and the subcarriers of an OFDM system for IM techniques, respectively. We present the up-to-date advances in these three promising frontiers and discuss possible future research directions for IM-based schemes toward low-complexity, spectrum- and energy-efficient next-generation wireless networks.",
"title": ""
},
{
"docid": "76a9799863bd944fb969539e8817cccd",
"text": "This paper investigates the application of non-orthogonal multiple access (NOMA) in millimeter wave (mm-Wave) communications by exploiting beamforming, user scheduling, and power allocation. Random beamforming is invoked for reducing the feedback overhead of the considered system. A non-convex optimization problem for maximizing the sum rate is formulated, which is proved to be NP-hard. The branch and bound approach is invoked to obtain the $\\epsilon$ -optimal power allocation policy, which is proved to converge to a global optimal solution. To elaborate further, a low-complexity suboptimal approach is developed for striking a good computational complexity-optimality tradeoff, where the matching theory and successive convex approximation techniques are invoked for tackling the user scheduling and power allocation problems, respectively. Simulation results reveal that: 1) the proposed low complexity solution achieves a near-optimal performance and 2) the proposed mm-Wave NOMA system is capable of outperforming conventional mm-Wave orthogonal multiple access systems in terms of sum rate and the number of served users.",
"title": ""
},
{
"docid": "8b12c633e6c9fb177459bb9609afeb1a",
"text": "Chronic osteomyelitis of the jaw is a rare entity in the healthy population of the developed world. It is normally associated with radiation and bisphosphonates ingestion and occurs in immunosuppressed individuals such as alcoholics or diabetics. Two cases are reported of chronic osteomyelitis in healthy individuals with no adverse medical conditions. The management of these cases are described.",
"title": ""
},
{
"docid": "4dbbcaf264cc9beda8644fa926932d2e",
"text": "It is relatively stress-free to write about computer games as nothing too much has been said yet, and almost anything goes. The situation is pretty much the same when it comes to writing about games and gaming in general. The sad fact with alarming cumulative consequences is that they are undertheorized; there are Huizinga, Caillois and Ehrmann of course, and libraries full of board game studies,in addition to game theory and bits and pieces of philosophy—most notably those of Wittgenstein— but they won’t get us very far with computer games. So if there already is or soon will be a legitimate field for computer game studies, this field is also very open to intrusions and colonisations from the already organized scholarly tribes. Resisting and beating them is the goal of our first survival game in this paper, as what these emerging studies need is independence, or at least relative independence.",
"title": ""
},
{
"docid": "385922d94a35c37776ba816645e964c7",
"text": "In this paper, we develop a unified vision system for small-scale aircraft, known broadly as Micro Air Vehicl es (MAVs), that not only addresses basic flight stability and control, but also enables more intelligent missions, such as ground o bject recognition and moving-object tracking. The proposed syst em defines a framework for real-time image feature extraction, horizon detection and sky/ground segmentation, and contex tual ground object detection. Multiscale Linear Discriminant Analysis (MLDA) defines the first stage of the vision system, and generates a multiscale description of images, incorporati ng both color and texture through a dynamic representation of image details. This representation is ideally suited for horizondetection and sky/ground segmentation of images, which we accomplish through the probabilistic representation of tree-structured belief networks (TSBN). Specifically, we propose incomplete meta TSBNs (IMTSBN) to accommodate the properties of our MLDA representation and to enhance the descriptive component of these statistical models. In the last stage of the vision processi ng, we seamlessly extend this probabilistic framework to perfo rm computationally efficient detection and recognition of obj ects in the segmented ground region, through the idea of visual contexts. By exploiting the concept of visual contexts, we c an quickly focus on candidate regions, where objects of intere st may be found, and then compute additional features through the Complex Wavelet Transform (CWT) and HSI color space for those regions, only. These additional features, while n ot necessary for global regions, are useful in accurate detect ion and recognition of smaller objects. Throughout, our approach is heavily influenced by real-time constraints and robustne ss to transient video noise.",
"title": ""
},
{
"docid": "4520316ecef3051305e547d50fadbb7a",
"text": "The increasing complexity and size of digital designs, in conjunction with the lack of a potent verification methodology that can effectively cope with this trend, continue to inspire engineers and academics in seeking ways to further automate design verification. In an effort to increase performance and to decrease engineering effort, research has turned to artificial intelligence (AI) techniques for effective solutions. The generation of tests for simulation-based verification can be guided by machine-learning techniques. In fact, recent advances demonstrate that embedding machine-learning (ML) techniques into a coverage-directed test generation (CDG) framework can effectively automate the test generation process, making it more effective and less error-prone. This article reviews some of the most promising approaches in this field, aiming to evaluate the approaches and to further stimulate more directed research in this area.",
"title": ""
},
{
"docid": "9afc8df23892162a220b1804fe415a36",
"text": "Social entrepreneurship is gradually becoming a crucial element in the worldwide discussion on volunteerism and civic commitment. It interleaves the passion of a common cause with industrial ethics and is notable and different from the present other types of entrepreneurship models due to its quest for mission associated influence. The previous few years have noticed a striking and surprising progress in the field of social entrepreneurship and has amplified attention ranging throughout all the diverse sectors. The critical difference between social and traditional entrepreneurship can be seen in the founding mission of the venture and the market impressions. Social entrepreneurs emphasize on ways to relieve or eradicate societal pressures and produce progressive externalities or public properties. This study focuses mainly on the meaning of social entrepreneurship to different genres and where does it stand in respect to other forms of entrepreneurship in today’s times.",
"title": ""
},
{
"docid": "b51a1df32ce34ae3f1109a9053b4bc1f",
"text": "Nowadays many automobile manufacturers are switching to Electric Power Steering (EPS) for its advantages on performance and cost. In this paper, a mathematical model of a column type EPS system is established, and its state-space expression is constructed. Then three different control methods are implemented and performance, robustness and disturbance rejection properties of the EPS control systems are investigated. The controllers are tested via simulation and results show a modified Linear Quadratic Gaussian (LQG) controller can track the characteristic curve well and effectively attenuate external disturbances.",
"title": ""
},
{
"docid": "f513a112b7fe4ffa2599a0f144b2e112",
"text": "A defined software process is needed to provide organizations with a consistent framework for performing their work and improving the way they do it. An overall framework for modeling simplifies the task of producing process models, permits them to be tailored to individual needs, and facilitates process evolution. This paper outlines the principles of entity process models and suggests ways in which they can help to address some of the problems with more conventional approaches to modeling software processes.",
"title": ""
},
{
"docid": "fce6ac500501d0096aac3513639c2627",
"text": "Recent technological advances made necessary the use of the robots in various types of applications. Currently, the traditional robot-like scenarios dedicated to industrial applications with repetitive tasks, were replaced by applications which require human interaction. The main field of such applications concerns the rehabilitation and aid of elderly persons. In this study, we present a state-of-the-art of the main research advances in lower limbs actuated orthosis/wearable robots in the literature. This will include a review on researches covering full limb exoskeletons, lower limb exoskeletons and particularly the knee joint orthosis. Rehabilitation using treadmill based device and use of Functional Electrical Stimulation (FES) are also investigated. We discuss finally the challenges not yet solved such as issues related to portability, energy consumption, social constraints and high costs of theses devices.",
"title": ""
},
{
"docid": "e79e94549bca30e3a4483f7fb9992932",
"text": "The use of semantic technologies and Semantic Web ontologies in particular have enabled many recent developments in information integration, search engines, and reasoning over formalised knowledge. Ontology Design Patterns have been proposed to be useful in simplifying the development of Semantic Web ontologies by codifying and reusing modelling best practices. This thesis investigates the quality of Ontology Design Patterns. The main contribution of the thesis is a theoretically grounded and partially empirically evaluated quality model for such patterns including a set of quality characteristics, indicators, measurement methods and recommendations. The quality model is based on established theory on information system quality, conceptual model quality, and ontology evaluation. It has been tested in a case study setting and in two experiments. The main findings of this thesis are that the quality of Ontology Design Patterns can be identified, formalised and measured, and furthermore, that these qualities interact in such a way that ontology engineers using patterns need to make tradeoffs regarding which qualities they wish to prioritise. The developed model may aid them in making these choices. This work has been supported by Jönköping University. Department of Computer and Information Science Linköping University SE-581 83 Linköping, Sweden",
"title": ""
},
{
"docid": "bd882f762be5a9cb67191a7092fc88e3",
"text": "This study tested the criterion validity of the inventory, Mental Toughness 48, by assessing the correlation between mental toughness and physical endurance for 41 male undergraduate sports students. A significant correlation of .34 was found between scores for overall mental toughness and the time a relative weight could be held suspended. Results support the criterion-related validity of the Mental Toughness 48.",
"title": ""
},
{
"docid": "fa604c528539ac5cccdbd341a9aebbf7",
"text": "BACKGROUND\nAn understanding of p-values and confidence intervals is necessary for the evaluation of scientific articles. This article will inform the reader of the meaning and interpretation of these two statistical concepts.\n\n\nMETHODS\nThe uses of these two statistical concepts and the differences between them are discussed on the basis of a selective literature search concerning the methods employed in scientific articles.\n\n\nRESULTS/CONCLUSIONS\nP-values in scientific studies are used to determine whether a null hypothesis formulated before the performance of the study is to be accepted or rejected. In exploratory studies, p-values enable the recognition of any statistically noteworthy findings. Confidence intervals provide information about a range in which the true value lies with a certain degree of probability, as well as about the direction and strength of the demonstrated effect. This enables conclusions to be drawn about the statistical plausibility and clinical relevance of the study findings. It is often useful for both statistical measures to be reported in scientific articles, because they provide complementary types of information.",
"title": ""
},
{
"docid": "0d6165524d748494a5c4d0d2f0675c42",
"text": "In Saudi Arabia, breast cancer is diagnosed at advanced stage compared to Western countries. Nevertheless, the perceived barriers to delayed presentation have been poorly examined. Additionally, available breast cancer awareness data are lacking validated measurement tool. The aim of this study is to evaluate the level of breast cancer awareness and perceived barriers to seeking medical care among Saudi women, using internationally validated tool. A cross-sectional study was conducted among adult Saudi women attending a primary care center in Riyadh during February 2014. Data were collected using self-administered questionnaire based on the Breast Cancer Awareness Measure (CAM-breast). Out of 290 women included, 30 % recognized five or more (out of nine) non-lump symptoms of breast cancer, 31 % correctly identified the risky age of breast cancer (set as 50 or 70 years), 28 % reported frequent (at least once a month) breast checking. Considering the three items of the CAM-breast, only 5 % were completely aware while 41 % were completely unaware of breast cancer. The majority (94 %) reported one or more barriers. The most frequently reported barrier was the difficulty of getting a doctor appointment (39 %) followed by worries about the possibility of being diagnosed with breast cancer (31 %) and being too busy to seek medical help (26 %). We are reporting a major gap in breast cancer awareness and several logistic and emotional barriers to seeking medical care among adult Saudi women. The current findings emphasized the critical need for an effective national breast cancer education program to increase public awareness and early diagnosis.",
"title": ""
},
{
"docid": "660f957b70e53819724e504ed3de0776",
"text": "We propose several econometric measures of connectedness based on principalcomponents analysis and Granger-causality networks, and apply them to the monthly returns of hedge funds, banks, broker/dealers, and insurance companies. We find that all four sectors have become highly interrelated over the past decade, likely increasing the level of systemic risk in the finance and insurance industries through a complex and time-varying network of relationships. These measures can also identify and quantify financial crisis periods, and seem to contain predictive power in out-of-sample tests. Our results show an asymmetry in the degree of connectedness among the four sectors, with banks playing a much more important role in transmitting shocks than other financial institutions. & 2011 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "dd9f40db5e52817b25849282ffdafe26",
"text": "Pattern classification methods based on learning-from-examples have been widely applied to character recognition from the 1990s and have brought forth significant improvements of recognition accuracies. This kind of methods include statistical methods, artificial neural networks, support vector machines, multiple classifier combination, etc. In this chapter, we briefly review the learning-based classification methods that have been successfully applied to character recognition, with a special section devoted to the classification of large category set. We then discuss the characteristics of these methods, and discuss the remaining problems in character recognition that can be potentially solved by machine learning methods.",
"title": ""
}
] |
scidocsrr
|
93cb75342bfe9ae9a2e6faea0f043b3e
|
Chimera: Large-Scale Classification using Machine Learning, Rules, and Crowdsourcing
|
[
{
"docid": "cf7c5ae92a0514808232e4e9d006024a",
"text": "We present an interactive, hybrid human-computer method for object classification. The method applies to classes of objects that are recognizable by people with appropriate expertise (e.g., animal species or airplane model), but not (in general) by people without such expertise. It can be seen as a visual version of the 20 questions game, where questions based on simple visual attributes are posed interactively. The goal is to identify the true class while minimizing the number of questions asked, using the visual content of the image. We introduce a general framework for incorporating almost any off-the-shelf multi-class object recognition algorithm into the visual 20 questions game, and provide methodologies to account for imperfect user responses and unreliable computer vision algorithms. We evaluate our methods on Birds-200, a difficult dataset of 200 tightly-related bird species, and on the Animals With Attributes dataset. Our results demonstrate that incorporating user input drives up recognition accuracy to levels that are good enough for practical applications, while at the same time, computer vision reduces the amount of human interaction required.",
"title": ""
},
{
"docid": "d5f2cb3839a8e129253e3433b9e9a5bc",
"text": "Product classification in Commerce search (\\eg{} Google Product Search, Bing Shopping) involves associating categories to offers of products from a large number of merchants. The categorized offers are used in many tasks including product taxonomy browsing and matching merchant offers to products in the catalog. Hence, learning a product classifier with high precision and recall is of fundamental importance in order to provide high quality shopping experience. A product offer typically consists of a short textual description and an image depicting the product. Traditional approaches to this classification task is to learn a classifier using only the textual descriptions of the products. In this paper, we show that the use of images, a weaker signal in our setting, in conjunction with the textual descriptions, a more discriminative signal, can considerably improve the precision of the classification task, irrespective of the type of classifier being used. We present a novel classification approach, \\Cross Adapt{} (\\CrossAdaptAcro{}), that is cognizant of the disparity in the discriminative power of different types of signals and hence makes use of the confusion matrix of dominant signal (text in our setting) to prudently leverage the weaker signal (image), for an improved performance. Our evaluation performed on data from a major Commerce search engine's catalog shows a 12\\% (absolute) improvement in precision at 100\\% coverage, and a 16\\% (absolute) improvement in recall at 90\\% precision compared to classifiers that only use textual description of products. In addition, \\CrossAdaptAcro{} also provides a more accurate classifier based only on the dominant signal (text) that can be used in situations in which only the dominant signal is available during application time.",
"title": ""
}
] |
[
{
"docid": "97e2077fc8b801656f046f8619fe6647",
"text": "In this paper we present a fairy tale corpus that was semantically organized and tagged. The proposed method uses latent semantic mapping to represent the stories and a top-n item-to-item recommendation algorithm to define clusters of similar stories. Each story can be placed in more than one cluster and stories in the same cluster are related to the same concepts. The results were manually evaluated regarding the groupings as perceived by human judges. The evaluation resulted in a precision of 0.81, a recall of 0.69, and an f-measure of 0.75 when using tf*idf for word frequency. Our method is topicand language-independent, and, contrary to traditional clustering methods, automatically defines the number of clusters based on the set of documents. This method can be used as a setup for traditional clustering or classification. The resulting corpus will be used for recommendation purposes, although it can also be used for emotion extraction, semantic role extraction, meaning extraction, text classification, among others.",
"title": ""
},
{
"docid": "dbc7e759ce30307475194adb4ca37f1f",
"text": "Pharyngeal arches appear in the 4th and 5th weeks of development of the human embryo. The 1st pharyngeal arch develops into the incus and malleus, premaxilla, maxilla, zygomatic bone; part of the temporal bone, the mandible and it contributes to the formation of bones of the middle ear. The musculature of the 1st pharyngeal arch includes muscles of mastication, anterior belly of the digastric mylohyoid, tensor tympani and tensor palatini. The second pharyngeal arch gives rise to the stapes, styloid process of the temporal bone, stylohyoid ligament, the lesser horn and upper part of the body of the hyoid bone. The stapedius muscle, stylohyoid, posterior belly of the digastric, auricular and muscles of facial expressional all derive from the 2nd pharyngeal arch. Otocephaly has been classified as a defect of blastogenesis, with structural defects primarily involving the first and second branchial arch derivatives. It may also result in dysmorphogenesis of other midline craniofacial field structures, such as the forebrain and axial body structures.",
"title": ""
},
{
"docid": "a91ba04903c584a1165867c7215385d0",
"text": "The INLA approach for approximate Bayesian inference for latent Gaussian models has been shown to give fast and accurate estimates of posterior marginals and also to be a valuable tool in practice via the R-package R-INLA. In this paper we formalize new developments in the R-INLA package and show how these features greatly extend the scope of models that can be analyzed by this interface. We also discuss the current default method in R-INLA to approximate posterior marginals of the hyperparameters using only a modest number of evaluations of the joint posterior distribution of the hyperparameters, without any need for numerical integration.",
"title": ""
},
{
"docid": "54314e448a1dd146289c6c4859ab9791",
"text": "The article investigates how the difficulties caused by the flexibility of the endoscope shaft could be solved and to provide a categorized overview of designs that potentially provide a solution. The following are discussed: paradoxical problem of flexible endoscopy; NOTES or hybrid endoscopy surgery; design challenges; shaft-guidance: guiding principles; virtual track guidance; physical track guidance; shaft-guidance: rigidity control; material stiffening; structural stiffening; and hybrid stiffening.",
"title": ""
},
{
"docid": "3a322129019eed67686018404366fe0b",
"text": "Scientists and casual users need better ways to query RDF databases or Linked Open Data. Using the SPARQL query language requires not only mastering its syntax and semantics but also understanding the RDF data model, the ontology used, and URIs for entities of interest. Natural language query systems are a powerful approach, but current techniques are brittle in addressing the ambiguity and complexity of natural language and require expensive labor to supply the extensive domain knowledge they need. We introduce a compromise in which users give a graphical \"skeleton\" for a query and annotates it with freely chosen words, phrases and entity names. We describe a framework for interpreting these \"schema-agnostic queries\" over open domain RDF data that automatically translates them to SPARQL queries. The framework uses semantic textual similarity to find mapping candidates and uses statistical approaches to learn domain knowledge for disambiguation, thus avoiding expensive human efforts required by natural language interface systems. We demonstrate the feasibility of the approach with an implementation that performs well in an evaluation on DBpedia data.",
"title": ""
},
{
"docid": "ade0742bcb8fa3a195b142ba39d245ce",
"text": "We describe a new approach to solving the click-through rate (CTR) prediction problem in sponsored search by means of MatrixNet, the proprietary implementation of boosted trees. This problem is of special importance for the search engine, because choosing the ads to display substantially depends on the predicted CTR and greatly affects the revenue of the search engine and user experience. We discuss different issues such as evaluating and tuning MatrixNet algorithm, feature importance, performance, accuracy and training data set size. Finally, we compare MatrixNet with several other methods and present experimental results from the production system.",
"title": ""
},
{
"docid": "435307df5495b497ff9065e9d98af044",
"text": "Recent breakthroughs in word representation methods have generated a new spark of enthusiasm amidst the computational linguistic community, with methods such as Word2Vec have indeed shown huge potential to compress insightful information on words’ contextual meaning in lowdimensional vectors. While the success of these representations has mainly been harvested for traditional NLP tasks such as word prediction or sentiment analysis, recent studies have begun using these representations to track the dynamics of language and meaning over time. However, recent works have also shown these embeddings to be extremely noisy and training-set dependent, thus considerably restricting the scope and significance of this potential application. In this project, building upon the work presented by [1] in 2015, we thus propose to investigate ways of defining interpretable embeddings, and as well as alternative ways of assessing the dynamics of semantic changes so as to endow more statistical power to the analysis. 1 Problem Statement, Motivation and Prior Work The recent success of Neural-Network-generated word embeddings (word2vec, Glove, etc.) for traditional NLP tasks such as word prediction or text sentiment analysis has motivated the scientific community to use these representations as a way to analyze language itself. Indeed, if these low-dimensional word representations have proven to successfully carry both semantic and syntactic information, such a successful information compression could thus potentially be harvested to tackle more complex linguistic problems, such as monitoring language dynamics over time or space. In particular, in [1], [5], and [7], word embeddings are used to capture drifts of word meanings over time through the analysis of the temporal evolution of any given word’ closest neighbors. Other studies [6] use them to relate semantic shifts to geographical considerations. However, as highlighted by Hahn and Hellrich in [3], the inherent randomness of the methods used to encode these representations results in the high variability of any given word’s closest neighbors, thus considerably narrowing the statistical power of the study: how can we detect real semantic changes from the ambient jittering inherent to the embeddings’ representations? Can we try to provide a perhaps more interpretable and sounder basis of comparison than the neighborhoods to detect these changes? Building upon the methodology developed by Hamilton and al [1] to study language dynamics and the observations made by Hahn and Hellrich [3], we propose to tackle this problem from a mesoscopic scale: the intuition would be that if local neighborhoods are too unstable, we should thus look at information contained in the overall embedding matrix to build our statistical framework. In particular, a first idea is that we should try to evaluate the existence of a potentially ”backbone” structure of the embeddings. Indeed, it would seem intuitive that if certain words –such as “gay” or “asylum” (as observed by Hamilton et al) have exhibited important drifts in meaning throughout the 20th century, another large set of words – such as “food”,“house” or “people” – have undergone very little semantic change over time. As such, we should expect the relative distance between atoms in this latter set (as defined by the acute angle between their respective embeddings) to remain relatively constant from decade to decade. 
Hence, one could try to use this stable backbone graph as a way to triangulate the movement of the other word vectors over time, thus hopefully inducing more interpretable changes over time. Such an approach could also be used to answer the question of assessing the validity of our embeddings for linguistic purposes: how well do these embeddings capture similarity and nuances between words? A generally",
"title": ""
},
{
"docid": "001b3155f0d67fd153173648cd483ac2",
"text": "A new approach to the problem of multimodality medical image registration is proposed, using a basic concept from information theory, mutual information (MI), or relative entropy, as a new matching criterion. The method presented in this paper applies MI to measure the statistical dependence or information redundancy between the image intensities of corresponding voxels in both images, which is assumed to be maximal if the images are geometrically aligned. Maximization of MI is a very general and powerful criterion, because no assumptions are made regarding the nature of this dependence and no limiting constraints are imposed on the image content of the modalities involved. The accuracy of the MI criterion is validated for rigid body registration of computed tomography (CT), magnetic resonance (MR), and photon emission tomography (PET) images by comparison with the stereotactic registration solution, while robustness is evaluated with respect to implementation issues, such as interpolation and optimization, and image content, including partial overlap and image degradation. Our results demonstrate that subvoxel accuracy with respect to the stereotactic reference solution can be achieved completely automatically and without any prior segmentation, feature extraction, or other preprocessing steps which makes this method very well suited for clinical applications.",
"title": ""
},
{
"docid": "a35efadff207d320af4ae6a5be2e1689",
"text": "Human-Robot interaction brings new challenges to motion planning. The human, who is generally considered as an obstacle for the robot, needs to be considered as a separate entity that has a position, a posture, a field of view and an activity. These properties can be represented as new constraints to the motion generation mechanisms. In this paper we present three human related constraints to the motion planning for object hand over scenarios. We also describe a new planning method to consider these constraints. The resulting system automatically computes where the object should be transferred to the human, and the motion of the whole robot considering human’s comfort.",
"title": ""
},
{
"docid": "7ec2f6b720cdcabbcdfb7697dbdd25ae",
"text": "To help marketers to build and manage their brands in a dramatically changing marketing communications environment, the customer-based brand equity model that emphasizes the importance of understanding consumer brand knowledge structures is put forth. Specifically, the brand resonance pyramid is reviewed as a means to track how marketing communications can create intense, active loyalty relationships and affect brand equity. According to this model, integrating marketing communications involves mixing and matching different communication options to establish the desired awareness and image in the minds of consumers. The versatility of on-line, interactive marketing communications to marketers in brand building is also addressed.",
"title": ""
},
{
"docid": "43233e45f07b80b8367ac1561356888d",
"text": "Current Zero-Shot Learning (ZSL) approaches are restricted to recognition of a single dominant unseen object category in a test image. We hypothesize that this setting is ill-suited for real-world applications where unseen objects appear only as a part of a complex scene, warranting both the ‘recognition’ and ‘localization’ of an unseen category. To address this limitation, we introduce a new ‘Zero-Shot Detection’ (ZSD) problem setting, which aims at simultaneously recognizing and locating object instances belonging to novel categories without any training examples. We also propose a new experimental protocol for ZSD based on the highly challenging ILSVRC dataset, adhering to practical issues, e.g., the rarity of unseen objects. To the best of our knowledge, this is the first end-to-end deep network for ZSD that jointly models the interplay between visual and semantic domain information. To overcome the noise in the automatically derived semantic descriptions, we utilize the concept of meta-classes to design an original loss function that achieves synergy between max-margin class separation and semantic space clustering. Furthermore, we present a baseline approach extended from recognition to detection setting. Our extensive experiments show significant performance boost over the baseline on the imperative yet difficult ZSD problem.",
"title": ""
},
{
"docid": "c274b4396b73d076e38cb79a0799c943",
"text": "This paper addresses the development of a model that reproduces the dynamic behaviour of a redundant, 7 degrees of freedom robotic manipulator, namely the Kuka Lightweight Robot IV, in the Robotic Surgery Laboratory of the Instituto Superior Técnico. For this purpose, the control architecture behind the Lightweight Robot (LWR) is presented, as well as, the joint and the Cartesian level impedance control aspects. Then, the manipulator forward and inverse kinematic models are addressed, in which the inverse kinematics relies on the Closed Loop Inverse Kinematic method (CLIK). Redundancy resolution methods are used to ensure that the joint angle values remain bounded considering their physical limits. The joint level model is the first presented, followed by the Cartesian level model. The redundancy inherent to the Cartesian model is compensated by a null space controller, developed by employing the impedance superposition method. Finally, the effect of possible faults occurring in the system are simulated using the derived model.",
"title": ""
},
{
"docid": "523677ed6d482ab6551f6d87b8ad761e",
"text": "To enable information integration, schema matching is a critical step for discovering semantic correspondences of attributes across heterogeneous sources. While complex matchings are common, because of their far more complex search space, most existing techniques focus on simple 1:1 matchings. To tackle this challenge, this article takes a conceptually novel approach by viewing schema matching as correlation mining, for our task of matching Web query interfaces to integrate the myriad databases on the Internet. On this “deep Web ” query interfaces generally form complex matchings between attribute groups (e.g., {author} corresponds to {first name, last name} in the Books domain). We observe that the co-occurrences patterns across query interfaces often reveal such complex semantic relationships: grouping attributes (e.g., {first name, last name}) tend to be co-present in query interfaces and thus positively correlated. In contrast, synonym attributes are negatively correlated because they rarely co-occur. This insight enables us to discover complex matchings by a correlation mining approach. In particular, we develop the DCM framework, which consists of data preprocessing, dual mining of positive and negative correlations, and finally matching construction. We evaluate the DCM framework on manually extracted interfaces and the results show good accuracy for discovering complex matchings. Further, to automate the entire matching process, we incorporate automatic techniques for interface extraction. Executing the DCM framework on automatically extracted interfaces, we find that the inevitable errors in automatic interface extraction may significantly affect the matching result. To make the DCM framework robust against such “noisy” schemas, we integrate it with a novel “ensemble” approach, which creates an ensemble of DCM matchers, by randomizing the schema data into many trials and aggregating their ranked results by taking majority voting. As a principled basis, we provide analytic justification of the robustness of the ensemble approach. Empirically, our experiments show that the “ensemblization” indeed significantly boosts the matching accuracy, over automatically extracted and thus noisy schema data. By employing the DCM framework with the ensemble approach, we thus complete an automatic process of matchings Web query interfaces.",
"title": ""
},
{
"docid": "eacfd15ac85517311bca0c3706fc55d9",
"text": "Numerous applications require a self-contained personal navigation system that works in indoor and outdoor environments, does not require any infrastructure support, and is not susceptible to jamming. Posture tracking with an array of inertial/magnetic sensors attached to individual human limb segments has been successfully demonstrated. The \"sourceless\" nature of this technique makes possible full body posture tracking in an area of unlimited size with no supporting infrastructure. Such sensor modules contain three orthogonally mounted angular rate sensors, three orthogonal linear accelerometers and three orthogonal magnetometers. This paper describes a method for using accelerometer data combined with orientation estimates from the same modules to calculate position during walking and running. The periodic nature of these motions includes short periods of zero foot velocity when the foot is in contact with the ground. This pattern allows for precise drift error correction. Relative position is calculated through double integration of drift corrected accelerometer data. Preliminary experimental results for various types of motion including walking, side stepping, and running document accuracy of distance and position estimates.",
"title": ""
},
{
"docid": "704c62beaf6b9b09265c0daacde69abc",
"text": "This paper investigates discrimination capabilities in the texture of fundus images to differentiate between pathological and healthy images. For this purpose, the performance of local binary patterns (LBP) as a texture descriptor for retinal images has been explored and compared with other descriptors such as LBP filtering and local phase quantization. The goal is to distinguish between diabetic retinopathy (DR), age-related macular degeneration (AMD), and normal fundus images analyzing the texture of the retina background and avoiding a previous lesion segmentation stage. Five experiments (separating DR from normal, AMD from normal, pathological from normal, DR from AMD, and the three different classes) were designed and validated with the proposed procedure obtaining promising results. For each experiment, several classifiers were tested. An average sensitivity and specificity higher than 0.86 in all the cases and almost of 1 and 0.99, respectively, for AMD detection were achieved. These results suggest that the method presented in this paper is a robust algorithm for describing retina texture and can be useful in a diagnosis aid system for retinal disease screening.",
"title": ""
},
{
"docid": "2951dc312799671c8feaf6d5086d5564",
"text": "There has been significant interest of late in generating behavior of agents that is interpretable to the human (observer) in the loop. However, the work in this area has typically lacked coherence on the topic, with proposed solutions for “explicable”, “legible”, “predictable” and “transparent” planning with overlapping, and sometimes conflicting, semantics all aimed at some notion of understanding what intentions the observer will ascribe to an agent by observing its behavior. This is also true for the recent works on “security” and “privacy” of plans which are also trying to answer the same question, but from the opposite point of view – i.e. when the agent is trying to hide instead of reveal its intentions. This paper attempts to provide a workable taxonomy of relevant concepts in this exciting and emerging field of inquiry.",
"title": ""
},
{
"docid": "65aa93b6ca41fe4ca54a4a7dee508db2",
"text": "The field of deep learning has seen significant advancement in recent years. However, much of the existing work has been focused on real-valued numbers. Recent work has shown that a deep learning system using the complex numbers can be deeper for a fixed parameter budget compared to its real-valued counterpart. In this work, we explore the benefits of generalizing one step further into the hyper-complex numbers, quaternions specifically, and provide the architecture components needed to build deep quaternion networks. We develop the theoretical basis by reviewing quaternion convolutions, developing a novel quaternion weight initialization scheme, and developing novel algorithms for quaternion batch-normalization. These pieces are tested in a classification model by end-to-end training on the CIFAR −10 and CIFAR −100 data sets and a segmentation model by end-to-end training on the KITTI Road Segmentation data set. These quaternion networks show improved convergence compared to real-valued and complex-valued networks, especially on the segmentation task, while having fewer parameters.",
"title": ""
},
{
"docid": "e76a82bcf7ff1a151c438d16640ae286",
"text": "Bioinformaticists use the Basic Local Alignment Search Tool (BLAST) to characterize an unknown sequence by comparing it against a database of known sequences, thus detecting evolutionary relationships and biological properties. mpiBLAST is a widely-used, high-performance, open-source parallelization of BLAST that runs on a computer cluster delivering super-linear speedups. However, the Achilles heel of mpiBLAST is its lack of modularity, thus adversely affecting maintainability and extensibility. Alleviating this shortcoming requires an architectural refactoring to improve maintenance and extensibility while preserving high performance. Toward that end, this paper evaluates five different software architectures and details how each satisfies our design objectives. In addition, we introduce a novel approach to using mixin layers to enable mixing-and-matching of modules in constructing sequence-search applications for a variety of high-performance computing systems. Our design, which we call \"mixin layers with refined roles\", utilizes mixin layers to separate functionality into complementary modules and the refined roles in each layer improve the inherently modular design by precipitating flexible and structured parallel development, a necessity for an open-source application. We believe that this new software architecture for mpiBLAST-2.0 will benefit both the users and developers of the package and that our evaluation of different software architectures will be of value to other software engineers faced with the challenges of creating maintainable and extensible, high-performance, bioinformatics software.",
"title": ""
},
{
"docid": "9292f1925de5d6df9eb89b2157842e5c",
"text": "According to Breast Cancer Institute (BCI), Breast Cancer is one of the most dangerous type of diseases that is very effective for women in the world. As per clinical expert detecting this cancer in its first stage helps in saving lives. As per cancer.net offers individualized guides for more than 120 types of cancer and related hereditary syndromes. For detecting breast cancer mostly machine learning techniques are used. In this paper we proposed adaptive ensemble voting method for diagnosed breast cancer using Wisconsin Breast Cancer database. The aim of this work is to compare and explain how ANN and logistic algorithm provide better solution when its work with ensemble machine learning algorithms for diagnosing breast cancer even the variables are reduced. In this paper we used the Wisconsin Diagnosis Breast Cancer dataset. When compared to related work from the literature. It is shown that the ANN approach with logistic algorithm is achieved 98.50% accuracy from another machine learning algorithm.",
"title": ""
},
{
"docid": "2e0262fce0a7ba51bd5ccf9e1397b0ca",
"text": "We present a topology detection method combining smart meter sensor information and sparse line measurements. The problem is formulated as a spanning tree identification problem over a graph given partial nodal and edge power flow information. In the deterministic case of known nodal power consumption and edge power flow we provide sensor placement criterion which guarantees correct identification of all spanning trees. We then present a detection method which is polynomial in complexity to the size of the graph. In the stochastic case where loads are given by forecasts derived from delayed smart meter data, we provide a combinatorial complexity MAP detector and a polynomial complexity approximate MAP detector which is shown to work near optimum in all numerical cases.",
"title": ""
}
] |
scidocsrr
|
9eeac0fa8aacf08b2adf89d5eacb302c
|
Information Hiding Techniques: A Tutorial Review
|
[
{
"docid": "efc1a6efe55805609ffc5c0fb6e3115b",
"text": "A Note to All Readers This is not an original electronic copy of the master's thesis, but a reproduced version of the authentic hardcopy of the thesis. I lost the original electronic copy during transit from India to USA in December 1999. I could get hold of some of the older version of the files and figures. Some of the missing figures have been scanned from the photocopy version of the hardcopy of the thesis. The scanned figures have been earmarked with an asterisk. Acknowledgement I would like to profusely thank my guide Prof. K. R. Ramakrishnan for is timely advice and encouragement throughout my project work. I would also like to acknowledge Prof. M. Kankanhalli for reviewing my work from time to time. A special note of gratitude goes to Dr. S. H. Srinivas for the support he extended to this work. I would also like to thank all who helped me during my project work.",
"title": ""
}
] |
[
{
"docid": "a0c1f5a7e283e1deaff38edff2d8a3b2",
"text": "BACKGROUND\nEarly detection of abused children could help decrease mortality and morbidity related to this major public health problem. Several authors have proposed tools to screen for child maltreatment. The aim of this systematic review was to examine the evidence on accuracy of tools proposed to identify abused children before their death and assess if any were adapted to screening.\n\n\nMETHODS\nWe searched in PUBMED, PsycINFO, SCOPUS, FRANCIS and PASCAL for studies estimating diagnostic accuracy of tools identifying neglect, or physical, psychological or sexual abuse of children, published in English or French from 1961 to April 2012. We extracted selected information about study design, patient populations, assessment methods, and the accuracy parameters. Study quality was assessed using QUADAS criteria.\n\n\nRESULTS\nA total of 2 280 articles were identified. Thirteen studies were selected, of which seven dealt with physical abuse, four with sexual abuse, one with emotional abuse, and one with any abuse and physical neglect. Study quality was low, even when not considering the lack of gold standard for detection of abused children. In 11 studies, instruments identified abused children only when they had clinical symptoms. Sensitivity of tests varied between 0.26 (95% confidence interval [0.17-0.36]) and 0.97 [0.84-1], and specificity between 0.51 [0.39-0.63] and 1 [0.95-1]. The sensitivity was greater than 90% only for three tests: the absence of scalp swelling to identify children victims of inflicted head injury; a decision tool to identify physically-abused children among those hospitalized in a Pediatric Intensive Care Unit; and a parental interview integrating twelve child symptoms to identify sexually-abused children. When the sensitivity was high, the specificity was always smaller than 90%.\n\n\nCONCLUSIONS\nIn 2012, there is low-quality evidence on the accuracy of instruments for identifying abused children. Identified tools were not adapted to screening because of low sensitivity and late identification of abused children when they have already serious consequences of maltreatment. Development of valid screening instruments is a pre-requisite before considering screening programs.",
"title": ""
},
{
"docid": "c6b6b7c1955cafa70c4a0c2498591934",
"text": "In all Fitzgerald’s fiction women characters are decorative figures of seemingly fragile beauty, though in fact they are often vain, egoistical, even destructive and ruthless and thus frequently the survivors. As prime consumers, they are never capable of idealism or intellectual or artistic interests, nor do they experience passion. His last novel, The Last Tycoon, shows some development; for the first time the narrator is a young woman bent on trying to find the truth about the ruthless social and economic complexity of 1920s Hollywood, but she has no adult role to play in its sexual, artistic or political activities. Women characters are marginalized into purely personal areas of experience.",
"title": ""
},
{
"docid": "b3166dafafda819052f1d40ef04cc304",
"text": "Convolutional neural networks (CNNs) have been widely deployed in the fields of computer vision and pattern recognition because of their high accuracy. However, large convolution operations are computing intensive and often require a powerful computing platform such as a graphics processing unit. This makes it difficult to apply CNNs to portable devices. The state-of-the-art CNNs, such as MobileNetV2 and Xception, adopt depthwise separable convolution to replace the standard convolution for embedded platforms, which significantly reduces operations and parameters with only limited loss in accuracy. This highly structured model is very suitable for field-programmable gate array (FPGA) implementation. In this brief, a scalable high performance depthwise separable convolution optimized CNN accelerator is proposed. The accelerator can be fit into an FPGA of different sizes, provided the balancing between hardware resources and processing speed. As an example, MobileNetV2 is implemented on Arria 10 SoC FPGA, and the results show this accelerator can classify each picture from ImageNet in 3.75 ms, which is about 266.6 frames per second. The FPGA design achieves 20x speedup if compared to CPU.",
"title": ""
},
{
"docid": "5454fbb1a924f3360a338c11a88bea89",
"text": "PURPOSE OF REVIEW\nThis review describes the most common motor neuron disease, ALS. It discusses the diagnosis and evaluation of ALS and the current understanding of its pathophysiology, including new genetic underpinnings of the disease. This article also covers other motor neuron diseases, reviews how to distinguish them from ALS, and discusses their pathophysiology.\n\n\nRECENT FINDINGS\nIn this article, the spectrum of cognitive involvement in ALS, new concepts about protein synthesis pathology in the etiology of ALS, and new genetic associations will be covered. This concept has changed over the past 3 to 4 years with the discovery of new genes and genetic processes that may trigger the disease. As of 2014, two-thirds of familial ALS and 10% of sporadic ALS can be explained by genetics. TAR DNA binding protein 43 kDa (TDP-43), for instance, has been shown to cause frontotemporal dementia as well as some cases of familial ALS, and is associated with frontotemporal dysfunction in ALS.\n\n\nSUMMARY\nThe anterior horn cells control all voluntary movement: motor activity, respiratory, speech, and swallowing functions are dependent upon signals from the anterior horn cells. Diseases that damage the anterior horn cells, therefore, have a profound impact. Symptoms of anterior horn cell loss (weakness, falling, choking) lead patients to seek medical attention. Neurologists are the most likely practitioners to recognize and diagnose damage or loss of anterior horn cells. ALS, the prototypical motor neuron disease, demonstrates the impact of this class of disorders. ALS and other motor neuron diseases can represent diagnostic challenges. Neurologists are often called upon to serve as a \"medical home\" for these patients: coordinating care, arranging for durable medical equipment, and leading discussions about end-of-life care with patients and caregivers. It is important for neurologists to be able to identify motor neuron diseases and to evaluate and treat patients affected by them.",
"title": ""
},
{
"docid": "1bd9cedbbbd26d670dd718fe47c952e7",
"text": "Recent advances in conversational systems have changed the search paradigm. Traditionally, a user poses a query to a search engine that returns an answer based on its index, possibly leveraging external knowledge bases and conditioning the response on earlier interactions in the search session. In a natural conversation, there is an additional source of information to take into account: utterances produced earlier in a conversation can also be referred to and a conversational IR system has to keep track of information conveyed by the user during the conversation, even if it is implicit. We argue that the process of building a representation of the conversation can be framed as a machine reading task, where an automated system is presented with a number of statements about which it should answer questions. The questions should be answered solely by referring to the statements provided, without consulting external knowledge. The time is right for the information retrieval community to embrace this task, both as a stand-alone task and integrated in a broader conversational search setting. In this paper, we focus on machine reading as a stand-alone task and present the Attentive Memory Network (AMN), an end-to-end trainable machine reading algorithm. Its key contribution is in efficiency, achieved by having an hierarchical input encoder, iterating over the input only once. Speed is an important requirement in the setting of conversational search, as gaps between conversational turns have a detrimental effect on naturalness. On 20 datasets commonly used for evaluating machine reading algorithms we show that the AMN achieves performance comparable to the state-of-theart models, while using considerably fewer computations.",
"title": ""
},
{
"docid": "9c37d9388908cd15c2e4d639de686371",
"text": "In this paper, novel small-signal averaged models for dc-dc converters operating at variable switching frequency are derived. This is achieved by separately considering the on-time and the off-time of the switching period. The derivation is shown in detail for a synchronous buck converter and the model for a boost converter is also presented. The model for the buck converter is then used for the design of two digital feedback controllers, which exploit the additional insight in the converter dynamics. First, a digital multiloop PID controller is implemented, where the design is based on loop-shaping of the proposed frequency-domain transfer functions. And second, the design and the implementation of a digital LQG state-feedback controller, based on the proposed time-domain state-space model, is presented for the same converter topology. Experimental results are given for the digital multiloop PID controller integrated on an application-specified integrated circuit in a 0.13 μm CMOS technology, as well as for the state-feedback controller implemented on an FPGA. Tight output voltage regulation and an excellent dynamic performance is achieved, as the dynamics of the converter under variable frequency operation are considered during the design of both implementations.",
"title": ""
},
{
"docid": "0f71e64aaf081b6624f442cb95b2220c",
"text": "Objective\nElectronic health record (EHR)-based phenotyping infers whether a patient has a disease based on the information in his or her EHR. A human-annotated training set with gold-standard disease status labels is usually required to build an algorithm for phenotyping based on a set of predictive features. The time intensiveness of annotation and feature curation severely limits the ability to achieve high-throughput phenotyping. While previous studies have successfully automated feature curation, annotation remains a major bottleneck. In this paper, we present PheNorm, a phenotyping algorithm that does not require expert-labeled samples for training.\n\n\nMethods\nThe most predictive features, such as the number of International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) codes or mentions of the target phenotype, are normalized to resemble a normal mixture distribution with high area under the receiver operating curve (AUC) for prediction. The transformed features are then denoised and combined into a score for accurate disease classification.\n\n\nResults\nWe validated the accuracy of PheNorm with 4 phenotypes: coronary artery disease, rheumatoid arthritis, Crohn's disease, and ulcerative colitis. The AUCs of the PheNorm score reached 0.90, 0.94, 0.95, and 0.94 for the 4 phenotypes, respectively, which were comparable to the accuracy of supervised algorithms trained with sample sizes of 100-300, with no statistically significant difference.\n\n\nConclusion\nThe accuracy of the PheNorm algorithms is on par with algorithms trained with annotated samples. PheNorm fully automates the generation of accurate phenotyping algorithms and demonstrates the capacity for EHR-driven annotations to scale to the next level - phenotypic big data.",
"title": ""
},
{
"docid": "4a86a0707e6ac99766f89e81cccc5847",
"text": "Magnetic core loss is an emerging concern for integrated POL converters. As switching frequency increases, core loss is comparable to or even higher than winding loss. Accurate measurement of core loss is important for magnetic design and converter loss estimation. And exploring new high frequency magnetic materials need a reliable method to evaluate their losses. However, conventional method is limited to low frequency due to sensitivity to phase discrepancy. In this paper, a new method is proposed for high frequency (1MHz∼50MHz) core loss measurement. The new method reduces the phase induced error from over 100% to <5%. So with the proposed methods, the core loss can be accurately measured.",
"title": ""
},
{
"docid": "b5d54f10aebd99d898dfb52d75e468e6",
"text": "As the technology to secure information improves, hackers will employ less technical means to get access to unauthorized data. The use of Social Engineering as a non tech method of hacking has been increasingly used during the past few years. There are different types of social engineering methods reported but what is lacking is a unifying effort to understand these methods in the aggregate. This paper aims to classify these methods through taxonomy so that organizations can gain a better understanding of these attack methods and accordingly be vigilant against them.",
"title": ""
},
{
"docid": "3a9bba31f77f4026490d7a0faf4aeaa4",
"text": "We explore several different document representation models and two query expansion models for the task of recommending blogs to a user in response to a query. Blog relevance ranking differs from traditional document ranking in ad-hoc information retrieval in several ways: (1) the unit of output (the blog) is composed of a collection of documents (the blog posts) rather than a single document, (2) the query represents an ongoing – and typically multifaceted – interest in the topic rather than a passing ad-hoc information need and (3) due to the propensity of spam, splogs, and tangential comments, the blogosphere is particularly challenging to use as a source for high-quality query expansion terms. We address these differences at the document representation level, by comparing retrieval models that view either the blog or its constituent posts as the atomic units of retrieval, and at the query expansion level, by making novel use of the links and anchor text in Wikipedia to expand a user’s initial query. We develop two complementary models of blog retrieval that perform at comparable levels of precision and recall. We also show consistent and significant improvement across all models using our Wikipedia expansion strategy.",
"title": ""
},
{
"docid": "26deedfae0fd167d35df79f28c75e09c",
"text": "In content-based image retrieval, SIFT feature and the feature from deep convolutional neural network (CNN) have demonstrated promising performance. To fully explore both visual features in a unified framework for effective and efficient retrieval, we propose a collaborative index embedding method to implicitly integrate the index matrices of them. We formulate the index embedding as an optimization problem from the perspective of neighborhood sharing and solve it with an alternating index update scheme. After the iterative embedding, only the embedded CNN index is kept for on-line query, which demonstrates significant gain in retrieval accuracy, with very economical memory cost. Extensive experiments have been conducted on the public datasets with million-scale distractor images. The experimental results reveal that, compared with the recent state-of-the-art retrieval algorithms, our approach achieves competitive accuracy performance with less memory overhead and efficient query computation.",
"title": ""
},
{
"docid": "710febdd18f40c9fc82f8a28039362cc",
"text": "The paper deals with engineering an electric wheelchair from a common wheelchair and then developing a Brain Computer Interface (BCI) between the electric wheelchair and the human brain. A portable EEG headset and firmware signal processing together facilitate the movement of the wheelchair integrating mind activity and frequency of eye blinks of the patient sitting on the wheelchair with the help of Microcontroller Unit (MCU). The target population for the mind controlled wheelchair is the patients who are paralyzed below the neck and are unable to use conventional wheelchair interfaces. This project aims at creating a cost efficient solution, later intended to be distributed as an add-on conversion unit for a common manual wheelchair. A Neurosky mind wave headset is used to pick up EEG signals from the brain. This is a commercialized version of the Open-EEG Project. The signal obtained from EEG sensor is processed by the ARM microcontroller FRDM KL-25Z, a Freescale board. The microcontroller takes decision for determining the direction of motion of wheelchair based on floor detection and obstacle avoidance sensors mounted on wheelchair’s footplate. The MCU shows real time information on a color LCD interfaced to it. Joystick control of the wheelchair is also provided as an additional interface option that can be chosen from the menu system of the project.",
"title": ""
},
{
"docid": "49f42fd1e0b684f24714bd9c1494fe4a",
"text": "We propose a transition-based model for joint word segmentation, POS tagging and text normalization. Different from previous methods, the model can be trained on standard text corpora, overcoming the lack of annotated microblog corpora. To evaluate our model, we develop an annotated corpus based on microblogs. Experimental results show that our joint model can help improve the performance of word segmentation on microblogs, giving an error reduction in segmentation accuracy of 12.02%, compared to the traditional approach.",
"title": ""
},
{
"docid": "9071d7349dccb07a5c3f93075e8d9458",
"text": "AIM\nA discussion on how nurse leaders are using social media and developing digital leadership in online communities.\n\n\nBACKGROUND\nSocial media is relatively new and how it is used by nurse leaders and nurses in a digital space is under explored.\n\n\nDESIGN\nDiscussion paper.\n\n\nDATA SOURCES\nSearches used CINAHL, the Royal College of Nursing webpages, Wordpress (for blogs) and Twitter from 2000-2015. Search terms used were Nursing leadership + Nursing social media.\n\n\nIMPLICATIONS FOR NURSING\nUnderstanding the development and value of nursing leadership in social media is important for nurses in formal and informal (online) leadership positions. Nurses in formal leadership roles in organizations such as the National Health Service are beginning to leverage social media. Social media has the potential to become a tool for modern nurse leadership, as it is a space where can you listen on a micro level to each individual. In addition to listening, leadership can be achieved on a much larger scale through the use of social media monitoring tools and exploration of data and crowd sourcing. Through the use of data and social media listening tools nursing leaders can seek understanding and insight into a variety of issues. Social media also places nurse leaders in a visible and accessible position as role models.\n\n\nCONCLUSION\nSocial media and formal nursing leadership do not have to be against each other, but they can work in harmony as both formal and online leadership possess skills that are transferable. If used wisely social media has the potential to become a tool for modern nurse leadership.",
"title": ""
},
{
"docid": "5876bb91b0cbe851b8af677c93c5e708",
"text": "This paper proposes an effective end-to-end face detection and recognition framework based on deep convolutional neural networks for home service robots. We combine the state-of-the-art region proposal based deep detection network with the deep face embedding network into an end-to-end system, so that the detection and recognition networks can share the same deep convolutional layers, enabling significant reduction of computation through sharing convolutional features. The detection network is robust to large occlusion, and scale, pose, and lighting variations. The recognition network does not require explicit face alignment, which enables an effective training strategy to generate a unified network. A practical robot system is also developed based on the proposed framework, where the system automatically asks for a minimum level of human supervision when needed, and no complicated region-level face annotation is required. Experiments are conducted over WIDER and LFW benchmarks, as well as a personalized dataset collected from an office setting, which demonstrate state-of-the-art performance of our system.",
"title": ""
},
{
"docid": "0528bc602b9a48e30fbce70da345c0ee",
"text": "The power system is a dynamic system and it is constantly being subjected to disturbances. It is important that these disturbances do not drive the system to unstable conditions. For this purpose, additional signal derived from deviation, excitation deviation and accelerating power are injected into voltage regulators. The device to provide these signals is referred as power system stabilizer. The use of power system stabilizer has become very common in operation of large electric power systems. The conventional PSS which uses lead-lag compensation, where gain setting designed for specific operating conditions, is giving poor performance under different loading conditions. Therefore, it is very difficult to design a stabilizer that could present good performance in all operating points of electric power systems. In an attempt to cover a wide range of operating conditions, Fuzzy logic control has been suggested as a possible solution to overcome this problem, thereby using linguist information and avoiding a complex system mathematical model, while giving good performance under different operating conditions.",
"title": ""
},
{
"docid": "6d56e0db0ebdfe58152cb0faa73453c4",
"text": "Chatbot is a computer application that interacts with users using natural language in a similar way to imitate a human travel agent. A successful implementation of a chatbot system can analyze user preferences and predict collective intelligence. In most cases, it can provide better user-centric recommendations. Hence, the chatbot is becoming an integral part of the future consumer services. This paper is an implementation of an intelligent chatbot system in travel domain on Echo platform which would gather user preferences and model collective user knowledge base and recommend using the Restricted Boltzmann Machine (RBM) with Collaborative Filtering. With this chatbot based on DNN, we can improve human to machine interaction in the travel domain.",
"title": ""
},
{
"docid": "81780f32d64eb7c5e3662268f48a67ec",
"text": "Mobile ad hoc network (MANET) is a group of mobile nodes which communicates with each other without any supporting infrastructure. Routing in MANET is extremely challenging because of MANETs dynamic features, its limited bandwidth and power energy. Nature-inspired algorithms (swarm intelligence) such as ant colony optimization (ACO) algorithms have shown to be a good technique for developing routing algorithms for MANETs. Swarm intelligence is a computational intelligence technique that involves collective behavior of autonomous agents that locally interact with each other in a distributed environment to solve a given problem in the hope of finding a global solution to the problem. In this paper, we propose a hybrid routing algorithm for MANETs based on ACO and zone routing framework of bordercasting. The algorithm, HOPNET, based on ants hopping from one zone to the next, consists of the local proactive route discovery within a node’s neighborhood and reactive communication between the neighborhoods. The algorithm has features extracted from ZRP and DSR protocols and is simulated on GlomoSim and is compared to AODV routing protocol. The algorithm is also compared to the well known hybrid routing algorithm, AntHocNet, which is not based on zone routing framework. Results indicate that HOPNET is highly scalable for large networks compared to AntHocNet. The results also indicate that the selection of the zone radius has considerable impact on the delivery packet ratio and HOPNET performs significantly better than AntHocNet for high and low mobility. The algorithm has been compared to random way point model and random drunken model and the results show the efficiency and inefficiency of bordercasting. Finally, HOPNET is compared to ZRP and the strength of nature-inspired algorithm",
"title": ""
},
{
"docid": "404a662b55baea9402d449fae6192424",
"text": "Emotion is expressed in multiple modalities, yet most research has considered at most one or two. This stems in part from the lack of large, diverse, well-annotated, multimodal databases with which to develop and test algorithms. We present a well-annotated, multimodal, multidimensional spontaneous emotion corpus of 140 participants. Emotion inductions were highly varied. Data were acquired from a variety of sensors of the face that included high-resolution 3D dynamic imaging, high-resolution 2D video, and thermal (infrared) sensing, and contact physiological sensors that included electrical conductivity of the skin, respiration, blood pressure, and heart rate. Facial expression was annotated for both the occurrence and intensity of facial action units from 2D video by experts in the Facial Action Coding System (FACS). The corpus further includes derived features from 3D, 2D, and IR (infrared) sensors and baseline results for facial expression and action unit detection. The entire corpus will be made available to the research community.",
"title": ""
},
{
"docid": "456fd41267a82663fee901b111ff7d47",
"text": "The tagging of Named Entities, the names of particular things or classes, is regarded as an important component technology for many NLP applications. The first Named Entity set had 7 types, organization, location, person, date, time, money and percent expressions. Later, in the IREX project artifact was added and ACE added two, GPE and facility, to pursue the generalization of the technology. However, 7 or 8 kinds of NE are not broad enough to cover general applications. We proposed about 150 categories of NE (Sekine et al. 2002) and now we have extended it again to 200 categories. Also we have developed dictionaries and an automatic tagger for NEs in Japanese.",
"title": ""
}
] |
scidocsrr
|
7e380c297fa3bd050b8775eb5853f45a
|
Addressing vital sign alarm fatigue using personalized alarm thresholds
|
[
{
"docid": "913b3e09f6b12744a8044d95a67d8dc7",
"text": "Research has demonstrated that 72% to 99% of clinical alarms are false. The high number of false alarms has led to alarm fatigue. Alarm fatigue is sensory overload when clinicians are exposed to an excessive number of alarms, which can result in desensitization to alarms and missed alarms. Patient deaths have been attributed to alarm fatigue. Patient safety and regulatory agencies have focused on the issue of alarm fatigue, and it is a 2014 Joint Commission National Patient Safety Goal. Quality improvement projects have demonstrated that strategies such as daily electrocardiogram electrode changes, proper skin preparation, education, and customization of alarm parameters have been able to decrease the number of false alarms. These and other strategies need to be tested in rigorous clinical trials to determine whether they reduce alarm burden without compromising patient safety.",
"title": ""
}
] |
[
{
"docid": "f7d535f9a5eeae77defe41318d642403",
"text": "On-line learning in domains where the target concept depends on some hidden context poses serious problems. A changing context can induce changes in the target concepts, producing what is known as concept drift. We describe a family of learning algorithms that flexibly react to concept drift and can take advantage of situations where contexts reappear. The general approach underlying all these algorithms consists of (1) keeping only a window of currently trusted examples and hypotheses; (2) storing concept descriptions and re-using them when a previous context re-appears; and (3) controlling both of these functions by a heuristic that constantly monitors the system's behavior. The paper reports on experiments that test the systems' performance under various conditions such as different levels of noise and different extent and rate of concept drift.",
"title": ""
},
{
"docid": "596949afaabdbcc68cd8bda175400f30",
"text": "We propose improved Deep Neural Network (DNN) training loss functions for more accurate single keyword spotting on resource-constrained embedded devices. The loss function modifications consist of a combination of multi-task training and weighted cross entropy. In the multi-task architecture, the keyword DNN acoustic model is trained with two tasks in parallel the main task of predicting the keyword-specific phone states, and an auxiliary task of predicting LVCSR senones. We show that multi-task learning leads to comparable accuracy over a previously proposed transfer learning approach where the keyword DNN training is initialized by an LVCSR DNN of the same input and hidden layer sizes. The combination of LVCSRinitialization and Multi-task training gives improved keyword detection accuracy compared to either technique alone. We also propose modifying the loss function to give a higher weight on input frames corresponding to keyword phone targets, with a motivation to balance the keyword and background training data. We show that weighted cross-entropy results in additional accuracy improvements. Finally, we show that the combination of 3 techniques LVCSR-initialization, multi-task training and weighted cross-entropy gives the best results, with significantly lower False Alarm Rate than the LVCSR-initialization technique alone, across a wide range of Miss Rates.",
"title": ""
},
{
"docid": "78e8d8b0508e011f5dc0e63fa1f0a1ee",
"text": "This paper proposes chordal surface transform for representation and discretization of thin section solids, such as automobile bodies, plastic injection mold components and sheet metal parts. A multiple-layered all-hex mesh with a high aspect ratio is a typical requirement for mold flow simulation of thin section objects. The chordal surface transform reduces the problem of 3D hex meshing to 2D quad meshing on the chordal surface. The chordal surface is generated by cutting a tet mesh of the input CAD model at its mid plane. Radius function and curvature of the chordal surface are used to provide sizing function for quad meshing. Two-way mapping between the chordal surface and the boundary is used to sweep the quad elements from the chordal surface onto the boundary, resulting in a layered all-hex mesh. The algorithm has been tested on industrial models, whose chordal surface is 2-manifold. The graphical results of the chordal surface and the multiple-layered all-hex mesh are presented along with the quality measures. The results show geometrically adaptive high aspect ratio all-hex mesh, whose average scaled Jacobean, is close to 1.0.",
"title": ""
},
{
"docid": "ea048488791219be809072862a061444",
"text": "Our object oriented programming approach have great ability to improve the programming behavior for modern system and software engineering but it does not give the proper interaction of real world .In real world , programming required powerful interlinking among properties and characteristics towards the various objects. Basically this approach of programming gives the better presentation of object with real world and provide the better relationship among the objects. I have explained the new concept of my neuro object oriented approach .This approach contains many new features like originty , new concept of inheritance , new concept of encapsulation , object relation with dimensions , originty relation with dimensions and time , category of NOOPA like high order thinking object and low order thinking object , differentiation model for achieving the various requirements from the user and a rotational model .",
"title": ""
},
{
"docid": "20fbb79c467e70dccf28f438e3c99efb",
"text": "Surface water is a source of drinking water in most rural communities in Nigeria. This study evaluated the total heterotrophic bacteria (THB) counts and some physico-chemical characteristics of Rivers surrounding Wilberforce Island, Nigeria.Samples were collected in July 2007 and analyzed using standard procedures. The result of the THB ranged from 6.389 – 6.434Log cfu/ml. The physico-chemical parameters results ranged from 6.525 – 7.105 (pH), 56.075 – 64.950μS/cm (Conductivity), 0.010 – 0.050‰ (Salinity), 103.752 – 117.252 NTU (Turbidity), 27.250 – 27.325 oC (Temperature), 10.200 – 14.225 mg/l (Dissolved oxygen), 28.180 – 32.550 mg/l (Total dissolved solid), 0.330 – 0.813 mg/l (Nitrate), 0.378 – 0.530 mg/l (Ammonium). Analysis of variance showed that there were significant variation (P<0.05) in the physicochemical properties except for Salinity and temperature between the two rivers. Also no significant different (P>0.05) exist in the THB density of both rivers; upstream (Agudama-Ekpetiama) and downstream (Akaibiri) of River Nun with regard to ammonium and nitrate. Significant positive correlation (P<0.01) exist between dissolved oxygen with ammonium, Conductivity with salinity and total dissolved solid, salinity with total dissolved solid, turbidity with nitrate, and pH with nitrate. The positive correlation (P<0.05) also exist between pH with turbidity. High turbidity and bacteria density in the water samples is an indication of pollution and contamination respectively. Hence, the consumption of these surface water without treatment could cause health related effects. Keyword: Drinking water sources, microorganisms, physico-chemistry, surface water, Wilberforce Island",
"title": ""
},
{
"docid": "a310039e0fd3f732805a6088ad3d1777",
"text": "Unsupervised learning of visual similarities is of paramount importance to computer vision, particularly due to lacking training data for fine-grained similarities. Deep learning of similarities is often based on relationships between pairs or triplets of samples. Many of these relations are unreliable and mutually contradicting, implying inconsistencies when trained without supervision information that relates different tuples or triplets to each other. To overcome this problem, we use local estimates of reliable (dis-)similarities to initially group samples into compact surrogate classes and use local partial orders of samples to classes to link classes to each other. Similarity learning is then formulated as a partial ordering task with soft correspondences of all samples to classes. Adopting a strategy of self-supervision, a CNN is trained to optimally represent samples in a mutually consistent manner while updating the classes. The similarity learning and grouping procedure are integrated in a single model and optimized jointly. The proposed unsupervised approach shows competitive performance on detailed pose estimation and object classification.",
"title": ""
},
{
"docid": "d73b277bf829a3295dfa86b33ad19c4a",
"text": "Biodiesel is a renewable and environmentally friendly liquid fuel. However, the feedstock, predominantly crop oil, is a limited and expensive food resource which prevents large scale application of biodiesel. Development of non-food feedstocks are therefore, needed to fully utilize biodiesel’s potential. In this study, the larvae of a high fat containing insect, black soldier fly (Hermetia illucens) (BSFL), was evaluated for biodiesel production. Specifically, the BSFL was grown on organic wastes for 10 days and used for crude fat extraction by petroleum ether. The extracted crude fat was then converted into biodiesel by acid-catalyzed (1% H2SO4) esterification and alkaline-catalyzed (0.8% NaOH) transesterification, resulting in 35.5 g, 57.8 g and 91.4 g of biodiesel being produced from 1000 BSFL growing on 1 kg of cattle manure, pig manure and chicken manure, respectively. The major ester components of the resulting biodiesel were lauric acid methyl ester (35.5%), oleinic acid methyl ester (23.6%) and palmitic acid methyl ester (14.8%). Fuel properties of the BSFL fat-based biodiesel, such as density (885 kg/m), viscosity (5.8 mm/s), ester content (97.2%), flash point (123 C), and cetane number (53) were comparable to those of rapeseed-oil-based biodiesel. These results demonstrated that the organic waste-grown BSFL could be a feasible non-food feedstock for biodiesel production. 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "dfc26119288ee136d00c6306377b93f6",
"text": "Part-of-speech tagging is a basic step in Natural Language Processing that is often essential. Labeling the word forms of a text with fine-grained word-class information adds new value to it and can be a prerequisite for downstream processes like a dependency parser. Corpus linguists and lexicographers also benefit greatly from the improved search options that are available with tagged data. The Albanian language has some properties that pose difficulties for the creation of a part-of-speech tagset. In this paper, we discuss those difficulties and present a proposal for a part-of-speech tagset that can adequately represent the underlying linguistic phenomena.",
"title": ""
},
{
"docid": "62999806021ff2533ddf7f06117f7d1a",
"text": "In response to the new challenges in the design and operation of communication networks, and taking inspiration from how living beings deal with complexity and scalability, in this paper we introduce an innovative system concept called COgnition-BAsed NETworkS (COBANETS). The proposed approach develops around the systematic application of advanced machine learning techniques and, in particular, unsupervised deep learning and probabilistic generative models for system-wide learning, modeling, optimization, and data representation. Moreover, in COBANETS, we propose to combine this learning architecture with the emerging network virtualization paradigms, which make it possible to actuate automatic optimization and reconfiguration strategies at the system level, thus fully unleashing the potential of the learning approach. Compared with the past and current research efforts in this area, the technical approach outlined in this paper is deeply interdisciplinary and more comprehensive, calling for the synergic combination of expertise of computer scientists, communications and networking engineers, and cognitive scientists, with the ultimate aim of breaking new ground through a profound rethinking of how the modern understanding of cognition can be used in the management and optimization of telecommunication networks.",
"title": ""
},
{
"docid": "bd3cc8370fd8669768f62d465f2c5531",
"text": "Cognitive radio technology has been proposed to improve spectrum efficiency by having the cognitive radios act as secondary users to opportunistically access under-utilized frequency bands. Spectrum sensing, as a key enabling functionality in cognitive radio networks, needs to reliably detect signals from licensed primary radios to avoid harmful interference. However, due to the effects of channel fading/shadowing, individual cognitive radios may not be able to reliably detect the existence of a primary radio. In this paper, we propose an optimal linear cooperation framework for spectrum sensing in order to accurately detect the weak primary signal. Within this framework, spectrum sensing is based on the linear combination of local statistics from individual cognitive radios. Our objective is to minimize the interference to the primary radio while meeting the requirement of opportunistic spectrum utilization. We formulate the sensing problem as a nonlinear optimization problem. By exploiting the inherent structures in the problem formulation, we develop efficient algorithms to solve for the optimal solutions. To further reduce the computational complexity and obtain solutions for more general cases, we finally propose a heuristic approach, where we instead optimize a modified deflection coefficient that characterizes the probability distribution function of the global test statistics at the fusion center. Simulation results illustrate significant cooperative gain achieved by the proposed strategies. The insights obtained in this paper are useful for the design of optimal spectrum sensing in cognitive radio networks.",
"title": ""
},
{
"docid": "1e30732092d2bcdeff624364c27e4c9c",
"text": "Beliefs that individuals hold about whether emotions are malleable or fixed, also referred to as emotion malleability beliefs, may play a crucial role in individuals' emotional experiences and their engagement in changing their emotions. The current review integrates affective science and clinical science perspectives to provide a comprehensive review of how emotion malleability beliefs relate to emotionality, emotion regulation, and specific clinical disorders and treatment. Specifically, we discuss how holding more malleable views of emotion could be associated with more active emotion regulation efforts, greater motivation to engage in active regulatory efforts, more effort expended regulating emotions, and lower levels of pathological distress. In addition, we explain how extending emotion malleability beliefs into the clinical domain can complement and extend current conceptualizations of major depressive disorder, social anxiety disorder, and generalized anxiety disorder. This may prove important given the increasingly central role emotion dysregulation has been given in conceptualization and intervention for these psychiatric conditions. Additionally, discussion focuses on how emotion beliefs could be more explicitly addressed in existing cognitive therapies. Promising future directions for research are identified throughout the review.",
"title": ""
},
{
"docid": "0382ad43b6d31a347d9826194a7261ce",
"text": "In this paper, we present a representation for three-dimensional geometric animation sequences. Different from standard key-frame techniques, this approach is based on the determination of principal animation components and decouples the animation from the underlying geometry. The new representation supports progressive animation compression with spatial, as well as temporal, level-of-detail and high compression ratios. The distinction of animation and geometry allows for mapping animations onto other objects.",
"title": ""
},
{
"docid": "faf0e45405b3c31135a20d7bff6e7a5a",
"text": "Law enforcement is in a perpetual race with criminals in the application of digital technologies, and requires the development of tools to systematically search digital devices for pertinent evidence. Another part of this race, and perhaps more crucial, is the development of a methodology in digital forensics that encompasses the forensic analysis of all genres of digital crime scene investigations. This paper explores the development of the digital forensics process, compares and contrasts four particular forensic methodologies, and finally proposes an abstract model of the digital forensic procedure. This model attempts to address some of the shortcomings of previous methodologies, and provides the following advantages: a consistent and standardized framework for digital forensic tool development; a mechanism for applying the framework to future digital technologies; a generalized methodology that judicial members can use to relate technology to non-technical observers; and, the potential for incorporating nondigital electronic technologies within the abstractionmodel of the digital forensic procedure. This model attempts to address some of the shortcomings of previous methodologies, and provides the following advantages: a consistent and standardized framework for digital forensic tool development; a mechanism for applying the framework to future digital technologies; a generalized methodology that judicial members can use to relate technology to non-technical observers; and, the potential for incorporating nondigital electronic technologies within the abstraction Introduction The digital age can be characterized as the application of computer technology as a tool that enhances traditional methodologies. The incorporation of computer systems as a tool into private, commercial, educational, governmental, and other facets of modern life has improved",
"title": ""
},
{
"docid": "3ca2d95885303f1ab395bd31d32df0c2",
"text": "Curiosity to predict personality, behavior and need for this is not as new as invent of social media. Personality prediction to better accuracy could be very useful for society. There are many papers and researches conducted on usefulness of the data for various purposes like in marketing, dating suggestions, organization development, personalized recommendations and health care to name a few. With the introduction and extreme popularity of Online Social Networking Sites like Facebook, Twitter and LinkedIn numerous researches were conducted based on public data available, online social networking applications and social behavior towards friends and followers to predict the personality. Structured mining of the social media content can provide us the ability to predict some personality traits. This survey aims at providing researchers with an overview of various strategies used for studies and research concentrating on predicting user personality and behavior using online social networking site content. There positives, limitations are well summarized as reported in the literature. Finally, a brief discussion including open issues for further research in the area of social networking site based personality prediction preceding conclusion.",
"title": ""
},
{
"docid": "0ca477c017da24940bb5af79b2c8826a",
"text": "Code comprehension is critical in software maintenance. Towards providing tools and approaches to support maintenance tasks, researchers have investigated various research lines related to how software code can be described in an abstract form. So far, studies on change pattern mining, code clone detection, or semantic patch inference have mainly adopted text-, tokenand tree-based representations as the basis for computing similarity among code fragments. Although, in general, existing techniques form clusters of “similar” code, our experience in patch mining has revealed that clusters of patches formed by such techniques do not usually carry explainable semantics that can be associated to bug-fixing patterns. In this paper, we propose a novel, automated approach for mining semantically-relevant fix patterns based on an iterative, three-fold, clustering strategy. Our technique, FixMiner, leverages different tree representations for each round of clustering: the Abstract syntax tree, the edit actions tree, and the code context tree. We have evaluated FixMiner on thousands of software patches collected from open source projects. Preliminary results show that we are able to mine accurate patterns, efficiently exploiting change information in AST diff trees. Eventually, FixMiner yields patterns which can be associated to the semantics of the bugs that the associated patches address. We further leverage the mined patterns to implement an automated program repair pipeline with which we are able to correctly fix 25 bugs from the Defects4J benchmark. Beyond this quantitative performance, we show that the mined fix patterns are sufficiently relevant to produce patches with a high probability of correctness: 80% of FixMiner’s A. Koyuncu, K. Liu, T. F. Bissyandé, D. Kim, J. Klein, K. Kim, and Y. Le Traon SnT, University of Luxembourg E-mail: {firstname.lastname}@uni.lu M. Monperrus KTH Royal Institute of Technology E-mail: [email protected] ar X iv :1 81 0. 01 79 1v 1 [ cs .S E ] 3 O ct 2 01 8 2 Anil Koyuncu et al. generated plausible patches are correct, while the closest related works, namely HDRepair and SimFix, achieve respectively 26% and 70% of correctness.",
"title": ""
},
{
"docid": "79eafa032a3f0cb367a008e5a7345dd5",
"text": "Data Mining techniques are widely used in educational field to find new hidden patterns from student’s data. The hidden patterns that are discovered can be used to understand the problem arise in the educational field. This paper surveys the three elements needed to make prediction on Students’ Academic Performances which are parameters, methods and tools. This paper also proposes a framework for predicting the performance of first year bachelor students in computer science course. Naïve Bayes Classifier is used to extract patterns using the Data Mining Weka tool. The framework can be used as a basis for the system implementation and prediction of Students’ Academic Performance in Higher Learning Institutions.",
"title": ""
},
{
"docid": "3e70a22831b064bff3ff784a932d068b",
"text": "An ultrawideband (UWB) antenna that rejects extremely sharply the two narrow and closely-spaced U.S. WLAN 802.11a bands is presented. The antenna is designed on a single surface (it is uniplanar) and uses only linear sections for easy scaling and fine-tuning. Distributed-element and lumped-element equivalent circuit models of this dual band-reject UWB antenna are presented and used to support the explanation of the physical principles of operation of the dual band-rejection mechanism thoroughly. The circuits are evaluated by comparing with the response of the presented UWB antenna that has very high selectivity and achieves dual-frequency rejection of the WLAN 5.25 GHz and 5.775 GHz bands, while it receives signal from the intermediate band between 5.35-5.725 GHz. The rejection is achieved using double open-circuited stubs, which is uncommon and are chosen based on their narrowband performance. The antenna was fabricated on a single side of a thin, flexible, LCP substrate. The measured achieved rejection is the best reported for a dual-band reject antenna with so closely-spaced rejected bands. The measured group delay of the antenna validates its suitability for UWB links. Such antennas improve both UWB and WLAN communication links at practically zero cost.",
"title": ""
},
{
"docid": "77ce917536f59d5489d0d6f7000c7023",
"text": "In this supplementary document, we present additional results to complement the paper. First, we provide the detailed configurations and parameters of the generator and discriminator in the proposed Generative Adversarial Network. Second, we present the qualitative comparisons with the state-ofthe-art CNN-based optical flow methods. The complete results and source code are publicly available on http://vllab.ucmerced.edu/wlai24/semiFlowGAN.",
"title": ""
},
{
"docid": "cc4458a843a2a6ffa86b4efd1956ffca",
"text": "There is a growing interest in the use of chronic deep brain stimulation (DBS) for the treatment of medically refractory movement disorders and other neurological and psychiatric conditions. Fundamental questions remain about the physiologic effects and safety of DBS. Previous basic research studies have focused on the direct polarization of neuronal membranes by electrical stimulation. The goal of this paper is to provide information on the thermal effects of DBS using finite element models to investigate the magnitude and spatial distribution of DBS induced temperature changes. The parameters investigated include: stimulation waveform, lead selection, brain tissue electrical and thermal conductivity, blood perfusion, metabolic heat generation during the stimulation. Our results show that clinical deep brain stimulation protocols will increase the temperature of surrounding tissue by up to 0.8degC depending on stimulation/tissue parameters",
"title": ""
},
{
"docid": "5d9112213e6828d5668ac4a33d4582f9",
"text": "This paper describes four patients whose chief symptoms were steatorrhoea and loss of weight. Despite the absence of a history of abdominal pain investigations showed that these patients had chronic pancreatitis, which responded to medical treatment. The pathological findings in two of these cases and in six which came to necropsy are reported.",
"title": ""
}
] |
scidocsrr
|
e6d071e6e5af864fea7e80e4f0cde8a5
|
The Hidden Geometry of Deformed Grids
|
[
{
"docid": "cd068158b6bebadfb8242b6412ec5bbb",
"text": "artefacts, 65–67 built environments and, 67–69 object artefacts, 65–66 structuralism and, 66–67 See also Non–discursive technique Asymmetry, 88–89, 91 Asynchronous systems, 187 Autonomous architecture, 336–338",
"title": ""
}
] |
[
{
"docid": "d7bbccdf4b93cc9722b1efcbb8013024",
"text": "OBJECTIVE\nThe aim of the study was to develop and validate, by consensus, the construct and content of an observations chart for nurses incorporating a modified early warning scoring (MEWS) system for physiological parameters to be used for bedside monitoring on general wards in a public hospital in South Africa.\n\n\nMETHODS\nDelphi and modified face-to-face nominal group consensus methods were used to develop and validate a prototype observations chart that incorporated an existing UK MEWS. This informed the development of the Cape Town ward MEWS chart.\n\n\nPARTICIPANTS\nOne specialist anaesthesiologist, one emergency medicine specialist, two critical care nurses and eight senior ward nurses with expertise in bedside monitoring (N = 12) were purposively sampled for consensus development of the MEWS. One general surgeon declined and one neurosurgeon replaced the emergency medicine specialist in the final round.\n\n\nRESULTS\nFive consensus rounds achieved ≥70% agreement for cut points in five of seven physiological parameters respiratory and heart rates, systolic BP, temperature and urine output. For conscious level and oxygen saturation a relaxed rule of <70% agreement was applied. A reporting algorithm was established and incorporated in the MEWS chart representing decision rules determining the degree of urgency. Parameters and cut points differed from those in MEWS used in developed countries.\n\n\nCONCLUSIONS\nA MEWS for developing countries should record at least seven parameters. Experts from developing countries are best placed to stipulate cut points in physiological parameters. Further research is needed to explore the ability of the MEWS chart to identify physiological and clinical deterioration.",
"title": ""
},
{
"docid": "6cdd6ff86c085cad630ae278ca964ecd",
"text": "Parametric statistical models of continuous or discrete valued data are often not properly normalized, that is, they do not integrate or sum to unity. The normalization is essential for maximum likelihood estimation. While in principle, models can always be normalized by dividing them by their integral or sum (their partition function), this can in practice be extremely difficult. We have been developing methods for the estimation of unnormalized models which do not approximate the partition function using numerical integration. We review these methods, score matching and noise-contrastive estimation, point out extensions and connections both between them and methods by other authors, and discuss their pros and cons.",
"title": ""
},
{
"docid": "f7c4b71b970b7527cd2650ce1e05ab1b",
"text": "BACKGROUND\nPhysician burnout has reached epidemic levels, as documented in national studies of both physicians in training and practising physicians. The consequences are negative effects on patient care, professionalism, physicians' own care and safety, and the viability of health-care systems. A more complete understanding than at present of the quality and outcomes of the literature on approaches to prevent and reduce burnout is necessary.\n\n\nMETHODS\nIn this systematic review and meta-analysis, we searched MEDLINE, Embase, PsycINFO, Scopus, Web of Science, and the Education Resources Information Center from inception to Jan 15, 2016, for studies of interventions to prevent and reduce physician burnout, including single-arm pre-post comparison studies. We required studies to provide physician-specific burnout data using burnout measures with validity support from commonly accepted sources of evidence. We excluded studies of medical students and non-physician health-care providers. We considered potential eligibility of the abstracts and extracted data from eligible studies using a standardised form. Outcomes were changes in overall burnout, emotional exhaustion score (and high emotional exhaustion), and depersonalisation score (and high depersonalisation). We used random-effects models to calculate pooled mean difference estimates for changes in each outcome.\n\n\nFINDINGS\nWe identified 2617 articles, of which 15 randomised trials including 716 physicians and 37 cohort studies including 2914 physicians met inclusion criteria. Overall burnout decreased from 54% to 44% (difference 10% [95% CI 5-14]; p<0·0001; I2=15%; 14 studies), emotional exhaustion score decreased from 23·82 points to 21·17 points (2·65 points [1·67-3·64]; p<0·0001; I2=82%; 40 studies), and depersonalisation score decreased from 9·05 to 8·41 (0·64 points [0·15-1·14]; p=0·01; I2=58%; 36 studies). High emotional exhaustion decreased from 38% to 24% (14% [11-18]; p<0·0001; I2=0%; 21 studies) and high depersonalisation decreased from 38% to 34% (4% [0-8]; p=0·04; I2=0%; 16 studies).\n\n\nINTERPRETATION\nThe literature indicates that both individual-focused and structural or organisational strategies can result in clinically meaningful reductions in burnout among physicians. Further research is needed to establish which interventions are most effective in specific populations, as well as how individual and organisational solutions might be combined to deliver even greater improvements in physician wellbeing than those achieved with individual solutions.\n\n\nFUNDING\nArnold P Gold Foundation Research Institute.",
"title": ""
},
{
"docid": "299242a092512f0e9419ab6be13f9b93",
"text": "In this paper, we present ForeCache, a general-purpose tool for exploratory browsing of large datasets. ForeCache utilizes a client-server architecture, where the user interacts with a lightweight client-side interface to browse datasets, and the data to be browsed is retrieved from a DBMS running on a back-end server. We assume a detail-on-demand browsing paradigm, and optimize the back-end support for this paradigm by inserting a separate middleware layer in front of the DBMS. To improve response times, the middleware layer fetches data ahead of the user as she explores a dataset.\n We consider two different mechanisms for prefetching: (a) learning what to fetch from the user's recent movements, and (b) using data characteristics (e.g., histograms) to find data similar to what the user has viewed in the past. We incorporate these mechanisms into a single prediction engine that adjusts its prediction strategies over time, based on changes in the user's behavior. We evaluated our prediction engine with a user study, and found that our dynamic prefetching strategy provides: (1) significant improvements in overall latency when compared with non-prefetching systems (430% improvement); and (2) substantial improvements in both prediction accuracy (25% improvement) and latency (88% improvement) relative to existing prefetching techniques.",
"title": ""
},
{
"docid": "d4563e034ae0fb98f037625ca1b5b50a",
"text": "This book focuses on the super resolution of images and video. The authors’ use of the term super resolution (SR) is used to describe the process of obtaining a high resolution (HR) image, or a sequence of HR images, from a set of low resolution (LR) observations. This process has also been referred to in the literature as resolution enhancement (RE). SR has been applied primarily to spatial and temporal RE, but also to hyperspectral image enhancement. This book concentrates on motion based spatial RE, although the authors also describe motion free and hyperspectral image SR problems. Also examined is the very recent research area of SR for compression, which consists of the intentional downsampling, during pre-processing, of a video sequence to be compressed and the application of SR techniques, during post-processing, on the compressed sequence. It is clear that there is a strong interplay between the tools and techniques developed for SR and a number of other inverse problems encountered in signal processing (e.g., image restoration, motion estimation). SR techniques are being applied to a variety of fields, such as obtaining improved still images from video sequences (video printing), high definition television, high performance color Liquid Crystal Display (LCD) screens, improvement of the quality of color images taken by one CCD, video surveillance, remote sensing, and medical imaging. The authors believe that the SR/RE area has matured enough to develop a body of knowledge that can now start to provide useful and practical solutions to challenging real problems and that SR techniques can be an integral part of an image and video codec and can drive the development of new coder-decoders (codecs) and standards.",
"title": ""
},
{
"docid": "0619308f0a79fb33d91a3a8db2a0db14",
"text": "FPGA CAD tool parameters controlling synthesis optimizations, place and route effort, mapping criteria along with user-supplied physical constraints can affect timing results of the circuit by as much as 70% without any change in original source code. A correct selection of these parameters across a diverse set of benchmarks with varying characteristics and design goals is challenging. The sheer number of parameters and option values that can be selected is large (thousands of combinations for modern CAD tools) with often conflicting interactions. In this paper, we present InTime, a machine-learning approach supported by a cloud-based (or cluster-based) compilation infrastructure for automating the selection of these parameters effectively to minimize timing costs. InTime builds a database of results from a series of preliminary runs based on canned configurations of CAD options. It then learns from these runs to predict the next series of CAD tool options to improve timing results. Towards the end, we rely on a limited degree of statistical sampling of certain options like placer and synthesis seeds to further tighten results. Using our approach, we show 70% reduction in final timing results across industrial benchmark problems for the Altera CAD flow. This is 30% better than vendor-supplied design space exploration tools that attempts a similar optimization using canned heuristics.",
"title": ""
},
{
"docid": "508fb3c75f0d92ae27b9c735c02d66d6",
"text": "The remarkable developmental potential and replicative capacity of human embryonic stem (ES) cells promise an almost unlimited supply of specific cell types for transplantation therapies. Here we describe the in vitro differentiation, enrichment, and transplantation of neural precursor cells from human ES cells. Upon aggregation to embryoid bodies, differentiating ES cells formed large numbers of neural tube–like structures in the presence of fibroblast growth factor 2 (FGF-2). Neural precursors within these formations were isolated by selective enzymatic digestion and further purified on the basis of differential adhesion. Following withdrawal of FGF-2, they differentiated into neurons, astrocytes, and oligodendrocytes. After transplantation into the neonatal mouse brain, human ES cell–derived neural precursors were incorporated into a variety of brain regions, where they differentiated into both neurons and astrocytes. No teratoma formation was observed in the transplant recipients. These results depict human ES cells as a source of transplantable neural precursors for possible nervous system repair.",
"title": ""
},
{
"docid": "9b167e23bbe72f8ff0da12d43f55b33c",
"text": "Appropriately planned vegan diets can satisfy nutrient needs of infants. The American Dietetic Association and The American Academy of Pediatrics state that vegan diets can promote normal infant growth. It is important for parents to provide appropriate foods for vegan infants, using guidelines like those in this article. Key considerations when working with vegan families include composition of breast milk from vegan women, appropriate breast milk substitutes, supplements, type and amount of dietary fat, and solid food introduction. Growth of vegan infants appears adequate with post-weaning growth related to dietary adequacy. Breast milk composition is similar to that of non-vegetarians except for fat composition. For the first 4 to 6 months, breast milk should be the sole food with soy-based infant formula as an alternative. Commercial soymilk should not be the primary beverage until after age 1 year. Breastfed vegan infants may need supplements of vitamin B-12 if maternal diet is inadequate; older infants may need zinc supplements and reliable sources of iron and vitamins D and B-12. Timing of solid food introduction is similar to that recommended for non-vegetarians. Tofu, dried beans, and meat analogs are introduced as protein sources around 7-8 months. Vegan diets can be planned to be nutritionally adequate and support growth for infants.",
"title": ""
},
{
"docid": "dcece9a321b4483de7327de29a641fd2",
"text": "A class of optimal control problems for quasilinear elliptic equations is considered, where the coefficients of the elliptic differential operator depend on the state function. Firstand second-order optimality conditions are discussed for an associated control-constrained optimal control problem. In particular, the Pontryagin maximum principle and second-order sufficient optimality conditions are derived. One of the main difficulties is the non-monotone character of the state equation.",
"title": ""
},
{
"docid": "51ece87cfa463cd76c6fd60e2515c9f4",
"text": "In a 1998 speech before the California Science Center in Los Angeles, then US VicePresident Al Gore called for a global undertaking to build a multi-faceted computing system for education and research, which he termed “Digital Earth.” The vision was that of a system providing access to what is known about the planet and its inhabitants’ activities – currently and for any time in history – via responses to queries and exploratory tools. Furthermore, it would accommodate modeling extensions for predicting future conditions. Organized efforts towards realizing that vision have diminished significantly since 2001, but progress on key requisites has been made. As the 10 year anniversary of that influential speech approaches, we re-examine it from the perspective of a systematic software design process and find the envisioned system to be in many respects inclusive of concepts of distributed geolibraries and digital atlases. A preliminary definition for a particular digital earth system as: “a comprehensive, distributed geographic information and knowledge organization system,” is offered and discussed. We suggest that resumption of earlier design and focused research efforts can and should be undertaken, and may prove a worthwhile “Grand Challenge” for the GIScience community.",
"title": ""
},
{
"docid": "2019018e22e8ebc4c1546c87f36e31e2",
"text": "Many alternative modulation schemes have been investigated to replace OFDM for radio systems. But they all have some weak points. In this paper, we present a novel modulation scheme, which minimizes the predecessors' drawbacks, while still keeping their advantages.",
"title": ""
},
{
"docid": "260f7258c3739efec1910028ec429471",
"text": "Cryptography is considered to be a disciple of science of achieving security by converting sensitive information to an un-interpretable form such that it cannot be interpreted by anyone except the transmitter and intended recipient. An innumerable set of cryptographic schemes persist in which each of it has its own affirmative and feeble characteristics. In this paper we have we have developed a traditional or character oriented Polyalphabetic cipher by using a simple algebraic equation. In this we made use of iteration process and introduced a key K0 obtained by permuting the elements of a given key seed value. This key strengthens the cipher and it does not allow the cipher to be broken by the known plain text attack. The cryptanalysis performed clearly indicates that the cipher is a strong one.",
"title": ""
},
{
"docid": "46613dd249ed10d84b7be8c1b46bf5b4",
"text": "Today, a predictive controller becomes one of the state of the art in power electronics control techniques. The performance of this powerful control approach will be pushed forward by simplifying the main control criterion and objective function, and decreasing the number of calculations per sampling time. Recently, predictive control has been incorporated in the Z-source inverter (ZSI) family. For example, in quasi ZSI, the inverter capacitor voltage, inductor current, and output load currents are controlled to their setting points through deciding the required state; active or shoot through. The proposed algorithm reduces the number of calculations, where it decides the shoot-through (ST) case without checking the other possible states. The ST case is roughly optimized every two sampling periods. Through the proposed strategy, about 50% improvement in the computational power has been achieved as compared with the previous algorithm. Also, the objective function for the proposed algorithm consists of one weighting factor for the capacitor voltage without involving the inductor current term in the main objective function. The proposed algorithm is investigated with the simulation results based on MATLAB/SIMULINK software. A prototype of qZSI is constructed in the laboratory to obtain the experimental results using the Digital Signal Processor F28335.",
"title": ""
},
{
"docid": "ca834698dfca01d82e9ac4d0fd69eb59",
"text": "*Correspondence: Aryadeep Roychoudhury, Post Graduate Department of Biotechnology, St. Xavier’s College (Autonomous), 30, Mother Teresa Sarani, Kolkata 700016, West Bengal, India e-mail: [email protected] Reactive oxygen species (ROS) were initially recognized as toxic by-products of aerobic metabolism. In recent years, it has become apparent that ROS plays an important signaling role in plants, controlling processes such as growth, development and especially response to biotic and abiotic environmental stimuli. The major members of the ROS family include free radicals like O•− 2 , OH • and non-radicals like H2O2 and O2. The ROS production in plants is mainly localized in the chloroplast, mitochondria and peroxisomes. There are secondary sites as well like the endoplasmic reticulum, cell membrane, cell wall and the apoplast. The role of the ROS family is that of a double edged sword; while they act as secondary messengers in various key physiological phenomena, they also induce oxidative damages under several environmental stress conditions like salinity, drought, cold, heavy metals, UV irradiation etc., when the delicate balance between ROS production and elimination, necessary for normal cellular homeostasis, is disturbed. The cellular damages are manifested in the form of degradation of biomolecules like pigments, proteins, lipids, carbohydrates, and DNA, which ultimately amalgamate in plant cellular death. To ensure survival, plants have developed efficient antioxidant machinery having two arms, (i) enzymatic components like superoxide dismutase (SOD), catalase (CAT), ascorbate peroxidase (APX), guaiacol peroxidase (GPX), glutathione reductase (GR), monodehydroascorbate reductase (MDHAR), and dehydroascorbate reductase (DHAR); (ii) non-enzymatic antioxidants like ascorbic acid (AA), reduced glutathione (GSH), α-tocopherol, carotenoids, flavonoids, and the osmolyte proline. These two components work hand in hand to scavenge ROS. In this review, we emphasize on the different types of ROS, their cellular production sites, their targets, and their scavenging mechanism mediated by both the branches of the antioxidant systems, highlighting the potential role of antioxidants in abiotic stress tolerance and cellular survival. Such a comprehensive knowledge of ROS action and their regulation on antioxidants will enable us to develop strategies to genetically engineer stress-tolerant plants.",
"title": ""
},
{
"docid": "d676b25f9704fe89d5d8fe929c639829",
"text": "The landscape of cloud computing has significantly changed over the last decade. Not only have more providers and service offerings crowded the space, but also cloud infrastructure that was traditionally limited to single provider data centers is now evolving. In this paper, we firstly discuss the changing cloud infrastructure and consider the use of infrastructure from multiple providers and the benefit of decentralising computing away from data centers. These trends have resulted in the need for a variety of new computing architectures that will be offered by future cloud infrastructure. These architectures are anticipated to impact areas, such as connecting people and devices, data-intensive computing, the service space and self-learning systems. Finally, we lay out a roadmap of challenges thatwill need to be addressed for realising the potential of next generation cloud systems. © 2017 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "52017fa7d6cf2e6a18304b121225fc6f",
"text": "In comparison to dense matrices multiplication, sparse matrices multiplication real performance for CPU is roughly 5–100 times lower when expressed in GFLOPs. For sparse matrices, microprocessors spend most of the time on comparing matrices indices rather than performing floating-point multiply and add operations. For 16-bit integer operations, like indices comparisons, computational power of the FPGA significantly surpasses that of CPU. Consequently, this paper presents a novel theoretical study how matrices sparsity factor influences the indices comparison to floating-point operation workload ratio. As a result, a novel FPGAs architecture for sparse matrix-matrix multiplication is presented for which indices comparison and floating-point operations are separated. We also verified our idea in practice, and the initial implementations results are very promising. To further decrease hardware resources required by the floating-point multiplier, a reduced width multiplication is proposed in the case when IEEE-754 standard compliance is not required.",
"title": ""
},
{
"docid": "10b857d497759f7b49d35155e79734f9",
"text": "Disclaimer Mention of any company or product does not constitute endorsement by the National Institute for Occupational Safety and Health (NIOSH). In addition, citations to Web sites external to NIOSH do not constitute NIOSH endorsement of the sponsoring organizations or their programs or products. Furthermore, NIOSH is not responsible for the content of these Web sites. All Web addresses referenced in this document were accessible as of the publication date. To receive documents or other information about occupational safety and health topics, contact NIOSH at ACPH air changes per hour ACGIH American Conference of Governmental Industrial Hygienists CT computed tomography HEPA high efficiency particulate air HVAC heating, ventilation, and air conditioning IARC International Agency for Research on Cancer LEV local exhaust ventilation LHD load-haul-dump MSHA Mine Safety and Health Administration NIOSH National Institute for Occupational Safety and Health OASIS overhead air supply island system PDM personal dust monitor pDR personal DataRAM PEL permissible exposure limit PMF progressive massive fibrosis PPE personal protective equipment PVC poly vinyl chloride TEOM tapered-element oscillating microbalance TMVS total mill ventilation system XRD X-ray diffraction UNIT OF MEASURE ABBREVIATIONS USED IN THIS REPORT cfm cubic foot per minute fpm foot per minute gpm gallon per minute in w.g. inches water gauge lpm liter per minute mg/m 3 milligram per cubic meter mm millimeter mph miles per hour µg/m 3 microgram per cubic meter psi pound-force per square inch INTRODUCTION Respirable silica dust exposure has long been known to be a serious health threat to workers in many industries. Overexposure to respirable silica dust can lead to the development of silicosis— a lung disease that can be disabling and fatal in its most severe form. Once contracted, there is no cure for silicosis so the goal must be to prevent development by limiting a worker's exposure to respirable silica dust. In addition, the International Agency for Research on Cancer (IARC) has concluded that there is sufficient evidence to classify silica as a human carcinogen.",
"title": ""
},
{
"docid": "f3cd5e9a47f5a693fa29c7f03afe8ecf",
"text": "Cloud computing provides a revolutionary model for the deployment of enterprise applications and Web services alike. In this new model, cloud users save on the cost of purchasing and managing base infrastructure, while the cloud providers save on the cost of maintaining underutilized CPU, memory, and network resources. In migrating to this new model, users face a variety of issues. Commercial clouds provide several support models to aide users in resolving the reported issues This paper arises from our quest to understand how to design IaaS support models for more efficient user troubleshooting. Using a data driven approach, we start our exploration into this issue with an investigation into the problems encountered by users and the methods utilized by the cloud support’s staff to resolve these problems. We examine message threads appearing in the forum of a large IaaS provider over a 3 year period. We argue that the lessons derived from this study point to a set of principles that future IaaS offerings can implement to provide users with a more efficient support model. This data driven approach enables us to propose a set of principles that are pertinent to the experiences of users and that we believe could vastly improve the SLA observed by the users.",
"title": ""
},
{
"docid": "c24e523997eac6d1be9e2a2f38150fc0",
"text": "We address the assessment and improvement of the software maintenance function by proposing improvements to the software maintenance standards and introducing a proposed maturity model for daily software maintenance activities: Software Maintenance Maturity Model (SM). The software maintenance function suffers from a scarcity of management models to facilitate its evaluation, management, and continuous improvement. The SM addresses the unique activities of software maintenance while preserving a structure similar to that of the CMMi4 maturity model. It is designed to be used as a complement to this model. The SM is based on practitioners experience, international standards, and the seminal literature on software maintenance. We present the models purpose, scope, foundation, and architecture, followed by its initial validation.",
"title": ""
},
{
"docid": "751e95c13346b18714c5ce5dcb4d1af2",
"text": "Purpose – The purpose of this paper is to propose how to minimize the risks of implementing business process reengineering (BPR) by measuring readiness. For this purpose, the paper proposes an assessment approach for readiness in BPR efforts based on the critical success and failure factors. Design/methodology/approach – A relevant literature review, which investigates success and failure indicators in BPR efforts is carried out and a new categorized list of indicators are proposed. This is a base for conducting a survey to measure the BPR readiness, which has been run in two companies and compared based on a diamond model. Findings – In this research, readiness indicators are determined based on critical success and failure factors. The readiness indicators include six categories. The first five categories, egalitarian leadership, collaborative working environment, top management commitment, supportive management, and use of information technology are positive indicators. The sixth category, resistance to change has a negative role. This paper reports survey results indicating BPR readiness in two Iranian companies. After comparing the position of the two cases, the paper offers several guidelines for amplifying the success points and decreasing failure points and hence, increasing the rate of success. Originality/value – High-failure rate of BPR has been introduced as a main barrier in reengineering processes. In addition, it makes a fear, which in turn can be a failure factor. This paper tries to fill the gap in the literature on decreasing risk in BPR projects by introducing a BPR readiness assessment approach. In addition, the proposed questionnaire is generic and can be utilized in a facilitated manner.",
"title": ""
}
] |
scidocsrr
|
4a18fff6dc9be4eb4aba0d0c8816ea1b
|
Implantable RF Medical Devices: The Benefits of High-Speed Communication and Much Greater Communication Distances in Biomedical Applications
|
[
{
"docid": "f29ed3c9f3de56bd3e8ec7a24860043b",
"text": "Antennas implanted in a human body are largely applicable to hyperthermia and biotelemetry. To make practical use of antennas inside a human body, resonance characteristics of the implanted antennas and their radiation signature outside the body must be evaluated through numerical analysis and measurement setup. Most importantly, the antenna must be designed with an in-depth consideration given to its surrounding environment. In this paper, the spherical dyadic Green's function (DGF) expansions and finite-difference time-domain (FDTD) code are applied to analyze the electromagnetic characteristics of dipole antennas and low-profile patch antennas implanted in the human head and body. All studies to characterize and design the implanted antennas are performed at the biomedical frequency band of 402-405 MHz. By comparing the results from two numerical methodologies, the accuracy of the spherical DGF application for a dipole antenna at the center of the head is evaluated. We also consider how much impact a shoulder has on the performance of the dipole inside the head using FDTD. For the ease of the design of implanted low-profile antennas, simplified planar geometries based on a real human body are proposed. Two types of low-profile antennas, i.e., a spiral microstrip antenna and a planar inverted-F antenna, with superstrate dielectric layers are initially designed for medical devices implanted in the chest of the human body using FDTD simulations. The radiation performances of the designed low-profile antennas are estimated in terms of radiation patterns, radiation efficiency, and specific absorption rate. Maximum available power calculated to characterize the performance of a communication link between the designed antennas and an exterior antenna show how sensitive receivers are required to build a reliable telemetry link.",
"title": ""
},
{
"docid": "fbb164c5c0b4db853b92e0919c260331",
"text": "The dielectric properties of tissues have been extracted from the literature of the past five decades and presented in a graphical format. The purpose is to assess the current state of knowledge, expose the gaps there are and provide a basis for the evaluation and analysis of corresponding data from an on-going measurement programme.",
"title": ""
}
] |
[
{
"docid": "881dab0a689c9ee094f9953ec56e82f8",
"text": "In Games of Empire, Nick Dyer-Witheford and Greig de Peuter expand an earlier study of “the video game industry as an aspect of an emerging postindustrial, post-Fordist capitalism” (xxix) to argue that videogames are “exemplary media of Empire” (xxix). Their notion of “Empire” is based on Michael Hardt and Antonio Negri’s Empire (2000), which characterizes the contemporary world order as a decentralized system of global economic, political, and economic power that transcends national boundaries. In the view of the authors, Hardt and Negri’s account sets itself apart from other analyses of international politics by offering a “comprehensive account of conditions of work, forms of subjectivity, and types of struggle in contemporary capital” (xx). Dyer-Witheford and de Peuter thus divide Games of Empire into three sections to reflect each of these aspects of Empire’s argument, seeing their work as the first account to explore “virtual games within a system of global ownership, privatized property, coercive class relations, military operations, and radical struggle” (xxix). Books in the University of Minnesota Press’s Electronic Mediations series “explore the humanistic and social implications” of new technologies that spark “significant changes in society and culture, politics and economics, thinking and being.” That’s an abstract goal, and one that Games of Empire surely meets. Yet at times it can be difficult to understand exactly which specific intervention this book seeks to make. It is unclear whether the authors of Games of Empire see their work as an indictment of game scholars’ lack of attention to political economy, or whether it’s a work of political theory hoping to show that other disciplines should be paying more attention to the production and play of videogames. Because Dyer-Witheford and de Peuter don’t directly criticize previous political critiques of gaming—such as Edward Castronova’s exploration of the “porous membrane” allowing two-way traffic between virtual economies and those of the real world, or Alexander Galloway’s unraveling of control protocols in algorithmic design, or Ian Bogost’s idea of political procedural rhetoric—they sometimes leave the reader wondering how to situate the book in the broader discourse on games and politics, either separately or together. Thus, Games of Empire might best be seen as an introduction to the writings of Hardt and Negri for game players and designers, rather than as a text for communicating the basic concepts of Games of Empire:",
"title": ""
},
{
"docid": "61070d45e6e72c0f03cae9cafdcd862f",
"text": "Brain extraction is a fundamental step for most brain imaging studies. In this paper, we investigate the problem of skull stripping and propose complementary segmentation networks (CompNets) to accurately extract the brain from T1-weighted MRI scans, for both normal and pathological brain images. The proposed networks are designed in the framework of encoder-decoder networks and have two pathways to learn features from both the brain tissue and its complementary part located outside of the brain. The complementary pathway extracts the features in the non-brain region and leads to a robust solution to brain extraction from MRIs with pathologies, which do not exist in our training dataset. We demonstrate the effectiveness of our networks by evaluating them on the OASIS dataset, resulting in the state of the art performance under the two-fold cross-validation setting. Moreover, the robustness of our networks is verified by testing on images with introduced pathologies and by showing its invariance to unseen brain pathologies. In addition, our complementary network design is general and can be extended to address other image segmentation problems with better generalization.",
"title": ""
},
{
"docid": "099bd9e751b8c1e3a07ee06f1ba4b55b",
"text": "This paper presents a robust stereo-vision-based drivable road detection and tracking system that was designed to navigate an intelligent vehicle through challenging traffic scenarios and increment road safety in such scenarios with advanced driver-assistance systems (ADAS). This system is based on a formulation of stereo with homography as a maximum a posteriori (MAP) problem in a Markov random held (MRF). Under this formulation, we develop an alternating optimization algorithm that alternates between computing the binary labeling for road/nonroad classification and learning the optimal parameters from the current input stereo pair itself. Furthermore, online extrinsic camera parameter reestimation and automatic MRF parameter tuning are performed to enhance the robustness and accuracy of the proposed system. In the experiments, the system was tested on our experimental intelligent vehicles under various real challenging scenarios. The results have substantiated the effectiveness and the robustness of the proposed system with respect to various challenging road scenarios such as heterogeneous road materials/textures, heavy shadows, changing illumination and weather conditions, and dynamic vehicle movements.",
"title": ""
},
{
"docid": "040e2a1bb9f8cc3717e4dca33d01b4ab",
"text": "The Commission Internationale d'Eclairage system of colorimetry is a method of measuring colours that has been standardized, and is widely used by industries involved with colour. Knowing the CIE coordinates of a colour allows it to be reproduced easily and exactly in many different media. For this reason graphics installations which utilize colour extensively ought to have the capability of knowing the CIE coordinates of displayed colours, and of displaying colours of given CIE coordinates. Such a capability requires a function which transforms video monitor gun voltages (RGB colour space) into CIE coordinates (XYZ colour space), and vice versa. The function incorporates certain monitor parameters. The purpose of this paper is to demonstrate the form that such a function takes, and to show how the necessary monitor parameters can be measured using little more than a simple light meter. Because space is limited, and because each user is likely to implement the calibration differently, few technical details are given, but principles and methods are discussed in sufficient depth to allow the full use of the system. In addition, several visual checks which can be used for quick verification of the integrity of the calibration are described.\n The paper begins with an overview of the CIE system of colorimetry. It continues with a general discussion of transformations from RGB colour space to XYZ colour space, after which a detailed step-by-step procedure for monitor calibration is presented.",
"title": ""
},
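The abstract above (docid 040e2a1b...) describes transforming monitor gun voltages (RGB) into CIE XYZ coordinates and back using measured monitor parameters. The sketch below is only a generic illustration of that kind of transform, not the paper's calibration procedure: it assumes the per-gun gamma exponents and the XYZ tristimulus values of each gun at full drive have already been measured, and the numbers shown are hypothetical placeholders.

```python
import numpy as np

# Hypothetical calibration data: measured XYZ of each gun at full drive
# (columns: red, green, blue) and per-gun gamma exponents.
PRIMARIES_XYZ = np.array([[41.2, 35.8, 18.0],
                          [21.3, 71.5,  7.2],
                          [ 1.9, 11.9, 95.0]])
GAMMA = np.array([2.2, 2.2, 2.2])

def rgb_to_xyz(rgb):
    """Map normalized gun voltages (0..1) to CIE XYZ."""
    linear = np.clip(rgb, 0.0, 1.0) ** GAMMA   # gamma correction per gun
    return PRIMARIES_XYZ @ linear              # additive mixing of the primaries

def xyz_to_rgb(xyz):
    """Invert the transform: CIE XYZ back to normalized gun voltages."""
    linear = np.linalg.solve(PRIMARIES_XYZ, xyz)
    return np.clip(linear, 0.0, 1.0) ** (1.0 / GAMMA)

if __name__ == "__main__":
    xyz = rgb_to_xyz(np.array([0.5, 0.5, 0.5]))
    print("XYZ:", xyz)
    print("round-trip RGB:", xyz_to_rgb(xyz))
```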
{
"docid": "f11a88cad05210e26940e79700b0ca11",
"text": "Agile software development methods provide great flexibility to adapt to changing requirements and rapidly market products. Sri Lankan software organizations too are embracing these methods to develop software products. Being an iterative an incremental software engineering methodology, agile philosophy promotes working software over comprehensive documentation and heavily relies on continuous customer collaboration throughout the life cycle of the product. Hence characteristics of the people involved with the project and their working environment plays an important role in the success of an agile project compared to any other software engineering methodology. This study investigated the factors that lead to the success of a project that adopts agile methodology in Sri Lanka. An online questionnaire was used to collect data to identify people and organizational factors that lead to project success. The sample consisted of Sri Lankan software professionals with several years of industry experience in developing projects using agile methods. According to the statistical data analysis, customer satisfaction, customer commitment, team size, corporate culture, technical competency, decision time, customer commitment and training and learning have a influence on the success of the project.",
"title": ""
},
{
"docid": "b6a045abb9881abafae097e29f866745",
"text": "AIMS AND OBJECTIVES\nUnderstanding the processes by which nurses administer medication is critical to the minimization of medication errors. This study investigates nurses' views on the factors contributing to medication errors in the hope of facilitating improvements to medication administration processes.\n\n\nDESIGN AND METHODS\nA focus group of nine Registered Nurses discussed medication errors with which they were familiar as a result of both their own experiences and of literature review. The group, along with other researchers, then developed a semi-structured questionnaire consisting of three parts: narrative description of the error, the nurse's background and contributing factors. After the contributing factors had been elicited and verified with eight categories and 34 conditions, additional Registered Nurses were invited to participate by recalling one of the most significant medication errors that they had experienced and identifying contributing factors from those listed on the questionnaire. Identities of the hospital, patient and participants involved in the study remain confidential.\n\n\nRESULTS\nOf the 72 female nurses who responded, 55 (76.4%) believed more than one factor contributed to medication errors. 'Personal neglect' (86.1%), 'heavy workload' (37.5%) and 'new staff' (37.5%) were the three main factors in the eight categories. 'Need to solve other problems while administering drugs,''advanced drug preparation without rechecking,' and 'new graduate' were the top three of the 34 conditions. Medical wards (36.1%) and intensive care units (33.3%) were the two most error-prone places. The errors common to the two were 'wrong dose' (36.1%) and 'wrong drug' (26.4%). Antibiotics (38.9%) were the most commonly misadministered drugs.\n\n\nCONCLUSIONS\nAlthough the majority of respondents considered nurse's personal neglect as the leading factor in medication errors, analysis indicated that additional factors involving the health care system, patients' conditions and doctors' prescriptions all contributed to administration errors.\n\n\nRELEVANCE TO CLINICAL PRACTICE\nIdentification of the main factors and conditions contributing to medication errors allows clinical nurses and administration systems to eliminate situations that promote errors and to incorporate changes that minimize them, creating a safer patient environment.",
"title": ""
},
{
"docid": "e9223a6ef6dec79724f59f2f5214becc",
"text": "JavaScript is a powerful and flexible prototype-based scripting language that is increasingly used by developers to create interactive web applications. The language is interpreted, dynamic, weakly-typed, and has first-class functions. In addition, it interacts with other web languages such as CSS and HTML at runtime. All these characteristics make JavaScript code particularly error-prone and challenging to write and maintain. Code smells are patterns in the source code that can adversely influence program comprehension and maintainability of the program in the long term. We propose a set of 13 JavaScript code smells, collected from various developer resources. We present a JavaScript code smell detection technique called JSNOSE. Our metric-based approach combines static and dynamic analysis to detect smells in client-side code. This automated technique can help developers to spot code that could benefit from refactoring. We evaluate the smell finding capabilities of our technique through an empirical study. By analyzing 11 web applications, we investigate which smells detected by JSNOSE are more prevalent.",
"title": ""
},
{
"docid": "b672aa84da41b3887664562cc4334d56",
"text": "Wearable health monitoring systems have gained considerable interest in recent years owing to their tremendous promise for personal portable health watching and remote medical practices. The sensors with excellent flexibility and stretchability are crucial components that can provide health monitoring systems with the capability of continuously tracking physiological signals of human body without conspicuous uncomfortableness and invasiveness. The signals acquired by these sensors, such as body motion, heart rate, breath, skin temperature and metabolism parameter, are closely associated with personal health conditions. This review attempts to summarize the recent progress in flexible and stretchable sensors, concerning the detected health indicators, sensing mechanisms, functional materials, fabrication strategies, basic and desired features. The potential challenges and future perspectives of wearable health monitoring system are also briefly discussed.",
"title": ""
},
{
"docid": "6ce2991a68c7d4d6467ff2007badbaf0",
"text": "This paper investigates acoustic models for automatic speech recognition (ASR) using deep neural networks (DNNs) whose input is taken directly from windowed speech waveforms (WSW). After demonstrating the ability of these networks to automatically acquire internal representations that are similar to mel-scale filter-banks, an investigation into efficient DNN architectures for exploiting WSW features is performed. First, a modified bottleneck DNN architecture is investigated to capture dynamic spectrum information that is not well represented in the time domain signal. Second,the redundancies inherent in WSW based DNNs are considered. The performance of acoustic models defined over WSW features is compared to that obtained from acoustic models defined over mel frequency spectrum coefficient (MFSC) features on the Wall Street Journal (WSJ) speech corpus. It is shown that using WSW features results in a 3.0 percent increase in WER relative to that resulting from MFSC features on the WSJ corpus. However, when combined with MFSC features, a reduction in WER of 4.1 percent is obtained with respect to the best evaluated MFSC based DNN acoustic model.",
"title": ""
},
{
"docid": "6e8d30f3eaaf6c88dddb203c7b703a92",
"text": "searching existing data sources, gathering and maintaining the data needed, and completing and reviewing the collection of information. Send comments regarding this burden estimate or any other aspect of this collection of information, including suggesstions for reducing this burden, to Washington Headquarters Services, Directorate for Information Operations and Reports, 1215 Jefferson Davis Highway, Suite 1204, Arlington VA, 22202-4302. Respondents should be aware that notwithstanding any other provision of law, no person shall be subject to any oenalty for failing to comply with a collection of information if it does not display a currently valid OMB control number. PLEASE DO NOT RETURN YOUR FORM TO THE ABOVE ADDRESS.",
"title": ""
},
{
"docid": "b49e8f14c2c592e8abfed0e64f66bb5e",
"text": "Loan portfolio problems have historically been the major cause of bank losses because of inherent risk of possible loan losses (credit risk). The study of Bank Loan Fraud Detection and IT-Based Combat Strategies in Nigeria which focused on analyzing the loan assessment system was carried out purposely to overcome the challenges of high incidence of NonPerforming Loan (NPL) that are currently being experienced as a result of lack of good decision making mechanisms in disbursing loans. NPL has led to failures of some banks in the past, contributed to shareholders losing their investment in the banks and inaccessibility of bank loans to the public. Information Technology (IT) is a critical component in creating value in banking industries. It provides decision makers with an efficient means to store, calculate, and report information about risk, profitability, collateral analysis, and precedent conditions for loan. This results in a quicker response for client and efficient JIBC August 2011, Vol. 16, No.2 2 identification of appropriate risk controls to enable the financial institution realize a profit. In this paper we discussed the values of various applications of information technology in mitigating the problems of loan fraud in Nigeria financial Institutions.",
"title": ""
},
{
"docid": "310e525bc7a78da2987d8c6d6a0ff46b",
"text": "This tutorial provides an overview of the data mining process. The tutorial also provides a basic understanding of how to plan, evaluate and successfully refine a data mining project, particularly in terms of model building and model evaluation. Methodological considerations are discussed and illustrated. After explaining the nature of data mining and its importance in business, the tutorial describes the underlying machine learning and statistical techniques involved. It describes the CRISP-DM standard now being used in industry as the standard for a technology-neutral data mining process model. The paper concludes with a major illustration of the data mining process methodology and the unsolved problems that offer opportunities for research. The approach is both practical and conceptually sound in order to be useful to both academics and practitioners.",
"title": ""
},
{
"docid": "82d3217331a70ead8ec3064b663de451",
"text": "The idea of computer vision as the Bayesian inverse problem to computer graphics has a long history and an appealing elegance, but it has proved difficult to directly implement. Instead, most vision tasks are approached via complex bottom-up processing pipelines. Here we show that it is possible to write short, simple probabilistic graphics programs that define flexible generative models and to automatically invert them to interpret real-world images. Generative probabilistic graphics programs consist of a stochastic scene generator, a renderer based on graphics software, a stochastic likelihood model linking the renderer’s output and the data, and latent variables that adjust the fidelity of the renderer and the tolerance of the likelihood model. Representations and algorithms from computer graphics, originally designed to produce high-quality images, are instead used as the deterministic backbone for highly approximate and stochastic generative models. This formulation combines probabilistic programming, computer graphics, and approximate Bayesian computation, and depends only on general-purpose, automatic inference techniques. We describe two applications: reading sequences of degraded and adversarially obscured alphanumeric characters, and inferring 3D road models from vehicle-mounted camera images. Each of the probabilistic graphics programs we present relies on under 20 lines of probabilistic code, and supports accurate, approximately Bayesian inferences about ambiguous real-world images.",
"title": ""
},
{
"docid": "a2c60bde044287457ade061a0e9370c0",
"text": "Despite modern advances in the prevention of dental caries and an increased understanding of the importance of maintaining the natural dentition in children, many abscessed and infected primary teeth, especially the deciduous molars, are still being prematurely lost through extractions. This report describes a simple, quick and effective technique that has been successfully used to manage infected, abscessed primary teeth. Results indicate that the non-vital primary pulp therapy technique is both reliable and effective. Not only is the procedure painless, it also helps to relieve the child of his immediate pain and achieves the primary goals of elimination of infection and retention of the tooth in a functional state without endangering the developing permanent tooth germ.",
"title": ""
},
{
"docid": "05bc0aa39909125e0350cbe5bac656ac",
"text": "This paper describes an antenna array configuration for the implementation in a UWB monopulse radar. The measurement results of the gain in the sum and difference mode are presented. Next the transformation of the monopulse technique into the time domain by the evaluation of the impulse response is shown. A look-up table with very high dynamic of over 25 dB and flat characteristic is obtained. The unambiguous range of sensing is approx. 40° in the angular direction. This novel combination of UWB technology and the monopulse radar principle allows for very precise sensing, where UWB assures high precision in the range direction and monopulse principle in the angular direction.",
"title": ""
},
{
"docid": "9c7afcb568fab9551886174c3f4a329b",
"text": "Automatic semantic annotation of data from databases or the web is an important pre-process for data cleansing and record linkage. It can be used to resolve the problem of imperfect field alignment in a database or identify comparable fields for matching records from multiple sources. The annotation process is not trivial because data values may be noisy, such as abbreviations, variations or misspellings. In particular, overlapping features usually exist in a lexicon-based approach. In this work, we present a probabilistic address parser based on linear-chain conditional random fields (CRFs), which allow more expressive token-level features compared to hidden Markov models (HMMs). In additions, we also proposed two general enhancement techniques to improve the performance. One is taking original semi-structure of the data into account. Another is post-processing of the output sequences of the parser by combining its conditional probability and a score function, which is based on a learned stochastic regular grammar (SRG) that captures segment-level dependencies. Experiments were conducted by comparing the CRF parser to a HMM parser and a semi-Markov CRF parser in two real-world datasets. The CRF parser out-performed the HMM parser and the semi-Markov CRF in both datasets in terms of classification accuracy. Leveraging the structure of the data and combining the linear-chain CRF with the SRG further improved the parser to achieve an accuracy of 97% on a postal dataset and 96% on a company dataset.",
"title": ""
},
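The abstract above (docid 9c7afcb5...) parses addresses with a linear-chain CRF over token-level features. As a rough sketch of that setup, assuming the widely used sklearn-crfsuite package as a stand-in for the authors' implementation, and with an illustrative (not the paper's) feature set and label scheme:

```python
import sklearn_crfsuite

def token_features(tokens, i):
    """Illustrative token-level features for a linear-chain CRF."""
    tok = tokens[i]
    feats = {
        "lower": tok.lower(),
        "is_digit": tok.isdigit(),
        "has_digit": any(c.isdigit() for c in tok),
        "length": len(tok),
        "prefix2": tok[:2].lower(),
    }
    if i > 0:
        feats["prev_lower"] = tokens[i - 1].lower()
    else:
        feats["BOS"] = True
    if i == len(tokens) - 1:
        feats["EOS"] = True
    return feats

def featurize(tokens):
    return [token_features(tokens, i) for i in range(len(tokens))]

# Tiny, made-up training set: token sequences with per-token address labels.
train_tokens = [["12", "Baker", "Street", "2000"],
                ["7", "High", "Rd", "3050"]]
train_labels = [["number", "street", "street", "postcode"],
                ["number", "street", "street", "postcode"]]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=100)
crf.fit([featurize(t) for t in train_tokens], train_labels)
print(crf.predict([featurize(["99", "Station", "St", "4000"])]))
```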
{
"docid": "218e80c55d0d184b5c699b3df7d3377d",
"text": "In the state-of-the-art video-based smoke detection methods, the representation of smoke mainly depends on the visual information in the current image frame. In the case of light smoke, the original background can be still seen and may deteriorate the characterization of smoke. The core idea of this paper is to demonstrate the superiority of using smoke component for smoke detection. In order to obtain smoke component, a blended image model is constructed, which basically is a linear combination of background and smoke components. Smoke opacity which represents a weighting of the smoke component is also defined. Based on this model, an optimization problem is posed. An algorithm is devised to solve for smoke opacity and smoke component, given an input image and the background. The resulting smoke opacity and smoke component are then used to perform the smoke detection task. The experimental results on both synthesized and real image data verify the effectiveness of the proposed method.",
"title": ""
},
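The abstract above (docid 218e80c5...) models each observed frame as a blend of a background component and a smoke component weighted by a per-pixel smoke opacity. The snippet below is a toy illustration of that blending idea rather than the paper's optimization: it assumes the background is known and approximates the smoke component by a constant grey, then recovers the opacity by a least-squares projection per pixel.

```python
import numpy as np

def estimate_opacity(frame, background, smoke_colour=0.8):
    """Per-pixel opacity a for the blend frame = a*smoke + (1-a)*background.

    frame, background: float arrays in [0, 1] with shape (H, W, 3).
    smoke_colour: assumed constant grey level of the smoke component.
    """
    smoke = np.full_like(background, smoke_colour)
    diff_obs = frame - background      # change caused by the smoke
    diff_smk = smoke - background      # change full-opacity smoke would cause
    num = np.sum(diff_obs * diff_smk, axis=-1)
    den = np.sum(diff_smk * diff_smk, axis=-1) + 1e-8
    return np.clip(num / den, 0.0, 1.0)

def smoke_component(frame, background, alpha):
    """Recover the smoke component wherever opacity is non-negligible."""
    a = np.clip(alpha, 1e-3, 1.0)[..., None]
    return np.clip((frame - (1.0 - a) * background) / a, 0.0, 1.0)

if __name__ == "__main__":
    bg = np.random.rand(4, 4, 3)
    true_alpha = 0.4
    frame = true_alpha * 0.8 + (1.0 - true_alpha) * bg
    print(estimate_opacity(frame, bg).round(2))   # roughly 0.4 everywhere
```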
{
"docid": "edd8ac16c7eaebf5b5b06964eacb6e8c",
"text": "The authors examined White and Black participants' emotional, physiological, and behavioral responses to same-race or different-race evaluators, following rejecting social feedback or accepting social feedback. As expected, in ingroup interactions, the authors observed deleterious responses to social rejection and benign responses to social acceptance. Deleterious responses included cardiovascular (CV) reactivity consistent with threat states and poorer performance, whereas benign responses included CV reactivity consistent with challenge states and better performance. In intergroup interactions, however, a more complex pattern of responses emerged. Social rejection from different-race evaluators engendered more anger and activational responses, regardless of participants' race. In contrast, social acceptance produced an asymmetrical race pattern--White participants responded more positively than did Black participants. The latter appeared vigilant and exhibited threat responses. Discussion centers on implications for attributional ambiguity theory and potential pathways from discrimination to health outcomes.",
"title": ""
},
{
"docid": "e12410e92e3f4c0f9c78bc5988606c93",
"text": "Semiarid environments are known for climate extremes such as high temperatures, low humidity, irregular precipitations, and apparent resource scarcity. We aimed to investigate how a small neotropical primate (Callithrix jacchus; the common marmoset) manages to survive under the harsh conditions that a semiarid environment imposes. The study was carried out in a 400-ha area of Caatinga in the northeast of Brazil. During a 6-month period (3 months of dry season and 3 months of wet season), we collected data on the diet of 19 common marmosets (distributed in five groups) and estimated their behavioral time budget during both the dry and rainy seasons. Resting significantly increased during the dry season, while playing was more frequent during the wet season. No significant differences were detected regarding other behaviors. In relation to the diet, we recorded the consumption of prey items such as insects, spiders, and small vertebrates. We also observed the consumption of plant items, including prickly cladodes, which represents a previously undescribed food item for this species. Cladode exploitation required perceptual and motor skills to safely access the food resource, which is protected by sharp spines. Our findings show that common marmosets can survive under challenging conditions in part because of adjustments in their behavior and in part because of changes in their diet.",
"title": ""
},
{
"docid": "66fce3b6c516a4fa4281d19d6055b338",
"text": "This paper presents the mechatronic design and experimental validation of a novel powered knee-ankle orthosis for testing torque-driven rehabilitation control strategies. The modular actuator of the orthosis is designed with a torque dense motor and a custom low-ratio transmission (24:1) to provide mechanical transparency to the user, allowing them to actively contribute to their joint kinematics during gait training. The 4.88 kg orthosis utilizes frameless components and light materials, such as aluminum alloy and carbon fiber, to reduce its mass. A human subject experiment demonstrates accurate torque control with high output torque during stance and low backdrive torque during swing at fast walking speeds. This work shows that backdrivability, precise torque control, high torque output, and light weight can be achieved in a powered orthosis without the high cost and complexity of variable transmissions, clutches, and/or series elastic components.",
"title": ""
}
] |
scidocsrr
|
fd59e2a3e5b50e8bc5f1d5b27ca76021
|
Estimating Time Models for News Article Excerpts
|
[
{
"docid": "444ce710b4c6a161ae5f801ed0ae8bec",
"text": "This paper investigates a machine learning approach for temporally ordering and anchoring events in natural language texts. To address data sparseness, we used temporal reasoning as an oversampling method to dramatically expand the amount of training data, resulting in predictive accuracy on link labeling as high as 93% using a Maximum Entropy classifier on human annotated data. This method compared favorably against a series of increasingly sophisticated baselines involving expansion of rules derived from human intuitions.",
"title": ""
}
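The passage above uses temporal reasoning to oversample sparse training links for event ordering. A minimal sketch of that kind of expansion, assuming only BEFORE links and simple transitivity (a much-reduced version of full temporal closure), is shown below.

```python
def close_before_links(pairs):
    """Expand BEFORE(a, b) annotations by transitivity.

    pairs: iterable of (a, b) meaning event a happens before event b.
    Returns the transitive closure as a set of (a, b) pairs, which can be
    used as additional (oversampled) training links.
    """
    closure = set(pairs)
    changed = True
    while changed:
        changed = False
        for a, b in list(closure):
            for c, d in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

annotated = {("wake", "breakfast"), ("breakfast", "commute"), ("commute", "meeting")}
expanded = close_before_links(annotated)
print(len(annotated), "annotated links ->", len(expanded), "links after closure")
# The three annotated links yield six: e.g. ("wake", "meeting") is inferred.
```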
] |
[
{
"docid": "0a8a6f668329175012a28be0b8fe8335",
"text": "In recent years, convolutional neural networks (CNN) have played an important role in the field of deep learning. Variants of CNN’s have proven to be very successful in classification tasks across different domains. However, there are two big drawbacks to CNN’s: their failure to take into account of important spatial hierarchies between features, and their lack of rotational invariance [1]. As long as certain key features of an object are present in the test data, CNN’s classify the test data as the object, disregarding features’ relative spatial orientation to each other. This causes false positives. The lack of rotational invariance in CNN’s would cause the network to incorrectly assign the object another label, causing false negatives. To address this concern, Hinton et al. propose a novel type of neural network using the concept of capsules in a recent paper. With the use of dynamic routing and reconstruction regularization, the capsule network model would be both rotation invariant and spatially aware. [1]",
"title": ""
},
{
"docid": "af956aac653d1da6c7cf658640ab82a8",
"text": "In this study, we successfully developed a high signal-to-noise ratio (SNR) rangefinder based on a piezoelectric micromachined ultrasonic transducer (pMUT). A monocrystalline Pb(Mn<inf>1/3</inf>, Nb<inf>2</inf>/3)O<inf>3</inf>-Pb(Zr, Ti)O<inf>3</inf> (PMnN-PZT) thin film was used because it has large figures-of-merit (FOM) for SNR due to its high piezoelectric coefficient and small relative permittivity (typical values: e<inf>31,f</inf> = −14 C/m<sup>2</sup>, ε<inf>r</inf> = 200∼300). The rangefinding ability of the monocrystalline PMnN-PZT pMUT was evaluated using a pair of the devices as transmitter and receiver. The maximum range was estimated to be over 2 m at a low actuating voltage of 1 V<inf>p-p</inf>, when 12 dB was set as the threshold SNR for reliable rangefinding. The energy consumption of the transmitter was as small as ∼55 pJ for the generation of an ultrasonic burst. This performance is suitable for rangefinding applications in consumer electronics.",
"title": ""
},
{
"docid": "7654ada6aabee2f8abf411dba5383d96",
"text": "In the past decade, Convolutional Neural Networks (CNNs) have been demonstrated successful for object detections. However, the size of network input is limited by the amount of memory available on GPUs. Moreover, performance degrades when detecting small objects. To alleviate the memory usage and improve the performance of detecting small traffic signs, we proposed an approach for detecting small traffic signs from large images under real world conditions. In particular, large images are broken into small patches as input to a Small-Object-Sensitive-CNN (SOS-CNN) modified from a Single Shot Multibox Detector (SSD) framework with a VGG-16 network as the base network to produce patch-level object detection results. Scale invariance is achieved by applying the SOS-CNN on an image pyramid. Then, image-level object detection is obtained by projecting all the patch-level detection results to the image at the original scale. Experimental results on a real-world conditioned traffic sign dataset have demonstrated the effectiveness of the proposed method in terms of detection accuracy and recall, especially for those with small sizes.",
"title": ""
},
{
"docid": "34f015544c91489bedae1e9bb28fff4e",
"text": "Soft robots have received an increasing attention due to their advantages of high flexibility and safety for human operators but the fabrication is a challenge. Recently, 3D printing has been used as a key technology to fabricate soft robots because of high quality and printing multiple materials at the same time. Functional soft materials are particularly well suited for soft robotics due to a wide range of stimulants and sensitive demonstration of large deformations, high motion complexities and varied multi-functionalities. This review comprises a detailed survey of 3D printing in soft robotics. The development of key 3D printing technologies and new materials along with composites for soft robotic applications is investigated. A brief summary of 3D-printed soft devices suitable for medical to industrial applications is also included. The growing research on both 3D printing and soft robotics needs a summary of the major reported studies and the authors believe that this review article serves the purpose.",
"title": ""
},
{
"docid": "0508e896f25f8e801f98e5efcc74bd17",
"text": "In this work, we proposed an efficient system for animal recognition and classification based on texture features which are obtained from the local appearance and texture of animals. The classification of animals are done by training and subsequently testing two different machine learning techniques, namely k-Nearest Neighbors (k-NN) and Support Vector Machines (SVM). Computer-assisted technique when applied through parallel computing makes the work efficient by reducing the time taken for the task of animal recognition and classification. Here we propose a parallel algorithm for the same. Experimentation is done for about 30 different classes of animals containing more than 3000 images. Among the different classifiers, k-Nearest Neighbor classifiers have achieved a better accuracy.",
"title": ""
},
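The abstract above (docid 0508e896...) compares k-NN and SVM classifiers on texture features for animal recognition. Purely as an illustrative sketch of that comparison, using synthetic feature vectors instead of the paper's texture descriptors and scikit-learn instead of the authors' code, the two classifiers could be trained and scored as follows.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Synthetic stand-in for texture feature vectors: 30 classes, 10 samples each.
rng = np.random.default_rng(0)
n_classes, per_class, dim = 30, 10, 16
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(per_class, dim))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), per_class)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0, stratify=y)

for name, clf in [("k-NN", KNeighborsClassifier(n_neighbors=3)),
                  ("SVM", SVC(kernel="rbf", C=10.0, gamma="scale"))]:
    clf.fit(X_tr, y_tr)
    acc = accuracy_score(y_te, clf.predict(X_te))
    print(f"{name}: accuracy = {acc:.3f}")
```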
{
"docid": "d99747fb44a839a2ab8765c1176e4c77",
"text": "The aim of this paper is to explore text topic influence in authorship attribution. Specifically, we test the widely accepted belief that stylometric variables commonly used in authorship attribution are topic-neutral and can be used in multi-topic corpora. In order to investigate this hypothesis, we created a special corpus, which was controlled for topic and author simultaneously. The corpus consists of 200 Modern Greek newswire articles written by two authors in two different topics. Many commonly used stylometric variables were calculated and for each one we performed a two-way ANOVA test, in order to estimate the main effects of author, topic and the interaction between them. The results showed that most of the variables exhibit considerable correlation with the text topic and their exploitation in authorship analysis should be done with caution.",
"title": ""
},
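The abstract above (docid d99747fb...) tests author and topic effects on stylometric variables with two-way ANOVA. As a hedged sketch of that analysis, with made-up measurements and statsmodels standing in for whatever software the authors used, a single variable could be tested like this.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Made-up measurements of one stylometric variable (e.g. mean sentence length)
# for two authors writing on two topics.
df = pd.DataFrame({
    "author": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "topic":  ["politics", "politics", "sports", "sports"] * 2,
    "value":  [21.3, 20.8, 18.2, 17.9, 25.1, 24.6, 24.9, 25.4],
})

# Two-way ANOVA with interaction: main effects of author and topic,
# plus the author x topic interaction term.
model = ols("value ~ C(author) * C(topic)", data=df).fit()
table = sm.stats.anova_lm(model, typ=2)
print(table)
```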
{
"docid": "cfa85e7abbbef02fcdb3323ba65fe801",
"text": "Recently, the long short-term memory neural network (LSTM) has attracted wide interest due to its success in many tasks. LSTM architecture consists of a memory cell and three gates, which looks similar to the neuronal networks in the brain. However, there still lacks the evidence of the cognitive plausibility of LSTM architecture as well as its working mechanism. In this paper, we study the cognitive plausibility of LSTM by aligning its internal architecture with the brain activity observed via fMRI when the subjects read a story. Experiment results show that the artificial memory vector in LSTM can accurately predict the observed sequential brain activities, indicating the correlation between LSTM architecture and the cognitive process of story reading.",
"title": ""
},
{
"docid": "71fe65e31364d831214e308d6ef7814d",
"text": "As aggregators, online news portals face great challenges in continuously selecting a pool of candidate articles to be shown to their users. Typically, those candidate articles are recommended manually by platform editors from a much larger pool of articles aggregated from multiple sources. Such a hand-pick process is labor intensive and time-consuming. In this paper, we study the editor article selection behavior and propose a learning by demonstration system to automatically select a subset of articles from the large pool. Our data analysis shows that (i) editors' selection criteria are non-explicit, which are less based only on the keywords or topics, but more depend on the quality and attractiveness of the writing from the candidate article, which is hard to capture based on traditional bag-of-words article representation. And (ii) editors' article selection behaviors are dynamic: articles with different data distribution come into the pool everyday and the editors' preference varies, which are driven by some underlying periodic or occasional patterns. To address such problems, we propose a meta-attention model across multiple deep neural nets to (i) automatically catch the editors' underlying selection criteria via the automatic representation learning of each article and its interaction with the meta data and (ii) adaptively capture the change of such criteria via a hybrid attention model. The attention model strategically incorporates multiple prediction models, which are trained in previous days. The system has been deployed in a commercial article feed platform. A 9-day A/B testing has demonstrated the consistent superiority of our proposed model over several strong baselines.",
"title": ""
},
{
"docid": "4156c9e17390659ec7a1c3f20d9b6e1e",
"text": "An e-commerce catalog typically comprises of specifications for millions of products. The search engine receives millions of sales offers from thousands of independent merchants that must be matched to the right products. We describe the challenges that a system for matching unstructured offers to structured product descriptions must address, drawing upon our experience from building such a system for Bing Shopping. The heart of our system is a data-driven component that learns the matching function off-line, which is then applied at run-time for matching offers to products. We provide the design of this and other critical components of the system as well as the details of the extensive experiments we performed to assess the readiness of the system. This system is currently deployed in an experimental Commerce Search Engine and is used to match all the offers received by Bing Shopping to the Bing product catalog.",
"title": ""
},
{
"docid": "7cbea1103832c97b22bfe8d1c174bd64",
"text": "Large amount of user generated data is present on web as blogs, reviews tweets, comments etc. This data involve user’s opinion, view, attitude, sentiment towards particular product, topic, event, news etc. Opinion mining (sentiment analysis) is a process of finding users’ opinion from user-generated content. Opinion summarization is useful in feedback analysis, business decision making and recommendation systems. In recent years opinion mining is one of the popular topics in Text mining and Natural Language Processing. This paper presents the methods for opinion extraction, classification, and summarization. This paper also explains different approaches, methods and techniques used in process of opinion mining and summarization, and comparative study of these different methods. Keywords— Natural Language Processing, Opinion Mining, Opinion Summarization.",
"title": ""
},
{
"docid": "38bae8a1273f102a5ffb3f15df0f2c35",
"text": "This paper investigates the application of the H∞proportional-integral-derivative (PID) control synthesis method to tip position control of a flexible-link manipulator. To achieve high performance of PID control, this particular control design problem is cast into the H∞framework. Based on the recently proposed H∞PID control synthesis method, a set of admissible controllers is then obtained to be robust against uncertainty introduced by neglecting the higher-order modes of the link and to achieve the desired time-response specifications. The most important feature of the H∞PID control synthesis method is the ability to provide the knowledge of the entire admissible PID controller gain space which can facilitate controller fine tuning. Finally, experimental results are given to demonstrate the effectiveness of H∞PID control.",
"title": ""
},
{
"docid": "f10438293c046a86515e303e39b6607b",
"text": "Remote measurement of the blood volume pulse via photoplethysmography (PPG) using digital cameras and ambient light has great potential for healthcare and affective computing. However, traditional RGB cameras have limited frequency resolution. We present results of PPG measurements from a novel five band camera and show that alternate frequency bands, in particular an orange band, allowed physiological measurements much more highly correlated with an FDA approved contact PPG sensor. In a study with participants (n = 10) at rest and under stress, correlations of over 0.92 (p <; 0.01) were obtained for heart rate, breathing rate, and heart rate variability measurements. In addition, the remotely measured heart rate variability spectrograms closely matched those from the contact approach. The best results were obtained using a combination of cyan, green, and orange (CGO) bands; incorporating red and blue channel observations did not improve performance. In short, RGB is not optimal for this problem: CGO is better. Incorporating alternative color channel sensors should not increase the cost of such cameras dramatically.",
"title": ""
},
{
"docid": "89aa13fe76bf48c982e44b03acb0dd3d",
"text": "Stock trading strategy plays a crucial role in investment companies. However, it is challenging to obtain optimal strategy in the complex and dynamic stock market. We explore the potential of deep reinforcement learning to optimize stock trading strategy and thus maximize investment return. 30 stocks are selected as our trading stocks and their daily prices are used as the training and trading market environment. We train a deep reinforcement learning agent and obtain an adaptive trading strategy. The agent’s performance is evaluated and compared with Dow Jones Industrial Average and the traditional min-variance portfolio allocation strategy. The proposed deep reinforcement learning approach is shown to outperform the two baselines in terms of both the Sharpe ratio and cumulative returns.",
"title": ""
},
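The abstract above (docid 89aa13fe...) evaluates trading strategies by their Sharpe ratio and cumulative return. The helpers below are only a generic illustration of those two metrics computed from daily returns; the 252-day annualization factor and the zero risk-free rate are assumptions, not details taken from the paper.

```python
import numpy as np

def cumulative_return(daily_returns):
    """Total growth of one unit of capital over the period."""
    return float(np.prod(1.0 + np.asarray(daily_returns)) - 1.0)

def sharpe_ratio(daily_returns, risk_free_daily=0.0, periods_per_year=252):
    """Annualized Sharpe ratio computed from a series of daily returns."""
    excess = np.asarray(daily_returns) - risk_free_daily
    return float(np.sqrt(periods_per_year) * excess.mean()
                 / (excess.std(ddof=1) + 1e-12))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    returns = rng.normal(0.0005, 0.01, size=252)   # one simulated trading year
    print("cumulative return:", round(cumulative_return(returns), 4))
    print("annualized Sharpe:", round(sharpe_ratio(returns), 2))
```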
{
"docid": "53b6315bfb8fcfef651dd83138b11378",
"text": "We illustrate the correspondence between uncertainty sets in robust optimization and some popular risk measures in finance, and show how robust optimization can be used to generalize the concepts of these risk measures. We also show that by using properly defined uncertainty sets in robust optimization models, one can construct coherent risk measures. Our results have implications for efficient portfolio optimization under different measures of risk. Department of Mathematics, National University of Singapore, Singapore 117543. Email: [email protected]. The research of the author was partially supported by Singapore-MIT Alliance, NUS Risk Management Institute and NUS startup grants R-146-050-070-133 & R146-050-070-101. Division of Mathematics and Sciences, Babson College, Babson Park, MA 02457, USA. E-mail: [email protected]. Research supported by the Gill grant from the Babson College Board of Research. NUS Business School, National University of Singapore. Email: [email protected]. The research of the author was partially supported by Singapore-MIT Alliance, NUS Risk Management Institute and NUS academic research grant R-314-000-066-122 and R-314-000-068-122.",
"title": ""
},
{
"docid": "ba2710c7df05b149f6d2befa8dbc37ee",
"text": "This work proposes a method for blind equalization of possibly non-minimum phase channels using particular infinite impulse response (IIR) filters. In this context, the transfer function of the equalizer is represented by a linear combination of specific rational basis functions. This approach estimates separately the coefficients of the linear expansion and the poles of the rational basis functions by alternating iteratively between an adaptive (fixed pole) estimation of the coefficients and a pole placement method. The focus of the work is mainly on the issue of good pole placement (initialization and updating).",
"title": ""
},
{
"docid": "41a16f3eb3ff59d34e04ffa77bf1ae86",
"text": "Windows Azure Storage (WAS) is a cloud storage system that provides customers the ability to store seemingly limitless amounts of data for any duration of time. WAS customers have access to their data from anywhere at any time and only pay for what they use and store. In WAS, data is stored durably using both local and geographic replication to facilitate disaster recovery. Currently, WAS storage comes in the form of Blobs (files), Tables (structured storage), and Queues (message delivery). In this paper, we describe the WAS architecture, global namespace, and data model, as well as its resource provisioning, load balancing, and replication systems.",
"title": ""
},
{
"docid": "94d66ffd9d9c2ccb08be7059075cd018",
"text": "Query expansion is generally a useful technique in improving search performance. However, some expanded query terms obtained by traditional statistical methods (e.g., pseudo-relevance feedback) may not be relevant to the user’s information need, while some relevant terms may not be contained in the feedback documents at all. Recent studies utilize external resources to detect terms that are related to the query, and then adopt these terms in query expansion. In this paper, we present a study in the use of Freebase [6], which is an open source general-purpose ontology, as a source for deriving expansion terms. FreeBase provides a graphbased model of human knowledge, from which a rich and multi-step structure of instances related to the query concept can be extracted, as a complement to the traditional statistical approaches to query expansion. We propose a novel method, based on the well-principled DempsterShafer’s (D-S) evidence theory, to measure the certainty of expansion terms from the Freebase structure. The expanded query model is then combined with a state of the art statistical query expansion model – the Relevance Model (RM3). Experiments show that the proposed method achieves significant improvements over RM3.",
"title": ""
},
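The abstract above (docid 94d66ffd...) measures the certainty of expansion terms with Dempster-Shafer evidence theory. The function below implements only the standard Dempster rule of combination for two mass functions over a small frame of discernment; how the paper actually constructs its mass functions from the Freebase structure is not reproduced here, and the two evidence sources in the example are hypothetical.

```python
def dempster_combine(m1, m2):
    """Combine two mass functions given as {frozenset: mass} dictionaries."""
    combined, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb            # mass assigned to disjoint sets
    if conflict >= 1.0:
        raise ValueError("total conflict: evidence cannot be combined")
    norm = 1.0 - conflict
    return {k: v / norm for k, v in combined.items()}

# Two hypothetical sources of evidence about whether a candidate expansion
# term is relevant (R), irrelevant (I), or undecided ({R, I}).
R, I = frozenset({"R"}), frozenset({"I"})
either = R | I
m_ontology = {R: 0.6, either: 0.4}
m_statistics = {R: 0.5, I: 0.2, either: 0.3}
print(dempster_combine(m_ontology, m_statistics))
```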
{
"docid": "788c9479bc5eb1a7bb36bfd774280f45",
"text": "The low-density parity-check (LDPC) codes are used to achieve excellent performance with low encoding and decoding complexity. One major criticism concerning LDPC codes has been their apparent high encoding complexity and memory inefficient nature due to large parity check matrix. More generally, we consider the encoding problem for codes specified by sparse parity-check matrices. We show how to exploit the sparseness of the parity-check matrix to obtain efficient encoders. A new technique for efficient encoding of LDP Codes based on the known concept of approximate lower triangulation (ALT) is introduced. The algorithm computes parity check symbols by solving a set of sparse equations, and the triangular factorization is employed to solve the equations efficiently. The key of the encoding method is to get the systematic approximate lower triangular (SALT) form of the Parity Check Matrix with minimum gap g, because the smaller the gap is, the more efficient encoding will be obtained. The functions are to be coded in MATLAB.",
"title": ""
},
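The abstract above (docid 788c9479...) computes parity symbols by solving sparse equations once the parity-check matrix is in (approximate) lower-triangular form. The snippet below is a simplified, hedged illustration: it assumes the gap g is zero, i.e. H = [A | T] with T exactly lower triangular and unit diagonal, so the parity bits follow from forward substitution over GF(2). The general ALT construction with a non-zero gap is not shown.

```python
import numpy as np

def encode_triangular(A, T, s):
    """Solve T p = A s (mod 2) by forward substitution.

    A: (m, k) binary matrix applied to the k message bits s.
    T: (m, m) binary lower-triangular matrix with ones on the diagonal.
    Returns the m parity bits p so that the codeword [s | p] satisfies H c = 0.
    """
    b = A @ s % 2
    m = T.shape[0]
    p = np.zeros(m, dtype=int)
    for i in range(m):
        p[i] = (b[i] + T[i, :i] @ p[:i]) % 2   # mod-2 subtraction == addition
    return p

# Toy example: 3 message bits, 3 parity bits.
A = np.array([[1, 0, 1],
              [0, 1, 1],
              [1, 1, 0]])
T = np.array([[1, 0, 0],
              [1, 1, 0],
              [0, 1, 1]])
s = np.array([1, 0, 1])
p = encode_triangular(A, T, s)
H = np.hstack([A, T])
print("parity bits:", p, "check:", H @ np.concatenate([s, p]) % 2)
```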
{
"docid": "cf20ffac349478b3fc5753624eb17c7f",
"text": "Knowledge stickiness often impedes knowledge transfer. When knowledge is complex and the knowledge seeker lacks intimacy with the knowledge source, knowledge sticks in its point of origin because the knowledge seeker faces ambiguity about the best way to acquire the needed knowledge. We theorize that, given the extent of that ambiguity, knowledge seekers will make a choice to either ask for needed knowledge immediately after deciding it is needed, or wait and ask for it at a later date. We hypothesize that when knowledge is sticky, knowledge seekers will delay asking for knowledge and, in the interim period, use an enterprise social networking site to gather information that can lubricate stuck knowledge, such as how, when, and in what way to ask for the desired knowledge. We propose that by doing this, knowledge seekers can increase their ultimate satisfaction with the knowledge once they ask for it. Data describing specific instances of knowledge transfer occurring in a large telecommunications firm supported these hypotheses, showing that knowledge transfer is made easier by the fact that enterprise social networking sites make other peoples’ communications visible to casual observers such that knowledge seekers can gather information about the knowledge and its source simply by watching his or her actions through the technology, even if they never interacted with the source directly themselves. The findings show that simple awareness of others’ communications (what we call ambient awareness) played a pivotal role in helping knowledge seekers to obtain interpersonal and knowledge-related material with which to lubricate their interactions with knowledge sources. 1University of California, Santa Barbara, CA, USA 2Northwestern University, Evanston, IL, USA Corresponding Author: Paul M. Leonardi, Phelps Hall, University of California, Santa Barbara, CA, USA, 93106-5129. Email: [email protected] 540509 ABSXXX10.1177/0002764214540509American Behavioral ScientistLeonardi and Meyer research-article2014 at UNIV CALIFORNIA SANTA BARBARA on December 9, 2014 abs.sagepub.com Downloaded from Leonardi and Meyer 11",
"title": ""
}
] |
scidocsrr
|
c6f4fe1bd022436423d7bbc633305233
|
Building Rapport with Extraverted and Introverted Agents
|
[
{
"docid": "a579a45a917999f48846a29cd09a92f4",
"text": "Over the last fifty years, the “Big Five” model of personality traits has become a standard in psychology, and research has systematically documented correlations between a wide range of linguistic variables and the Big Five traits. A distinct line of research has explored methods for automatically generating language that varies along personality dimensions. We present PERSONAGE (PERSONAlity GEnerator), the first highly parametrizable language generator for extraversion, an important aspect of personality. We evaluate two personality generation methods: (1) direct generation with particular parameter settings suggested by the psychology literature; and (2) overgeneration and selection using statistical models trained from judge’s ratings. Results show that both methods reliably generate utterances that vary along the extraversion dimension, according to human judges.",
"title": ""
},
{
"docid": "a60d79008bfb7cccee262667b481d897",
"text": "It is well known that utterances convey a great deal of information about the speaker in addition to their semantic content. One such type of information consists of cues to the speaker’s personality traits, the most fundamental dimension of variation between humans. Recent work explores the automatic detection of other types of pragmatic variation in text and conversation, such as emotion, deception, speaker charisma, dominance, point of view, subjectivity, opinion and sentiment. Personality affects these other aspects of linguistic production, and thus personality recognition may be useful for these tasks, in addition to many other potential applications. However, to date, there is little work on the automatic recognition of personality traits. This article reports experimental results for recognition of all Big Five personality traits, in both conversation and text, utilising both self and observer ratings of personality. While other work reports classification results, we experiment with classification, regression and ranking models. For each model, we analyse the effect of different feature sets on accuracy. Results show that for some traits, any type of statistical model performs significantly better than the baseline, but ranking models perform best overall. We also present an experiment suggesting that ranking models are more accurate than multi-class classifiers for modelling personality. In addition, recognition models trained on observed personality perform better than models trained using selfreports, and the optimal feature set depends on the personality trait. A qualitative analysis of the learned models confirms previous findings linking language and personality, while revealing many new linguistic markers.",
"title": ""
}
] |
[
{
"docid": "e10bc0666fc95f27d0b961067b44917c",
"text": "The tetraspanins are a superfamily of four-span membrane proteins that are widely expressed in mammalian cells. Although they have been implicated in important cell processes and some diseases, it has been difficult to characterise their exact functions. Tspan-11 is a recently identified member of the tetraspanin family. Little is known about Tspan-11; however sequence evidence indicates that Tspan-11 most closely resembles tetraspanin CD151. Recently, Tspan-11 GFP transfected CHO cells were used to immunize mice for generation of monoclonal antibodies (mAbs) and found that these mAbs bound to the surface of Tspan-11 GFP transfected CHO cells, indicating that they are able to recognize native forms of the tetraspanin. Surprisingly, one of cancer cell lines tested, the A549 lung adenocarcinoma cell line, was strongly positive for Tspan-11 mAbs, showing both intracellular and plasma membrane staining. Some preliminary investigations have been carried out on normal cells/tissues; these indicate that Tspan-11 is only expressed weakly, if at all on white blood cells, and may be present on epithelial cells in the colon. Taken together, these findings suggest that Tspan-11 has low, or restricted expression, or that the mAbs characterised so far recognize only some forms of the protein.",
"title": ""
},
{
"docid": "c1943f443b0e7be72091250b34262a8f",
"text": "We survey recent approaches to noise reduction in distant supervision learning for relation extraction. We group them according to the principles they are based on: at-least-one constraints, topic-based models, or pattern correlations. Besides describing them, we illustrate the fundamental differences and attempt to give an outlook to potentially fruitful further research. In addition, we identify related work in sentiment analysis which could profit from approaches to noise reduction.",
"title": ""
},
{
"docid": "c1e26dd8a32c53de1549fd609aae4c06",
"text": "CONTEXT\nAdolescent drinking is a major public health concern. The federal government does not restrict alcohol advertising to adolescents, but relies on the alcohol industry for self-regulation.\n\n\nOBJECTIVES\nTo investigate recent alcohol advertising in magazines and to determine whether advertising frequency is associated with adolescent readership.\n\n\nDESIGN, SETTING, AND SUBJECTS\nAll alcohol advertisements were counted that appeared from 1997-2001 in 35 of 48 major US magazines, which tracked their adolescent readership (3 refused all alcohol advertisements; and advertisement counts were unavailable for 10). Variation was assessed in the advertisement placement frequency for each major category of alcohol (beer, wine and wine coolers, and distilled liquor) by a magazine's adolescent readership (age 12-19 years), young adult readership (age 20-24 years), and older adult readership (age > or =25 years); readership demographics (sex, race, and income); year; frequency of publication; and cost per advertisement.\n\n\nMAIN OUTCOME MEASURE\nVariation in alcohol advertising frequency by adolescent readership.\n\n\nRESULTS\nAdolescent readership ranged from 1.0 to 7.1 million. The alcohol industry placed 9148 advertisements at a cost of 696 million dollars. Of the 9148 advertisements, 1201 (13%) were for beer, 443 (5%) for wine, and 7504 (82%) for liquor. After adjustment for other magazine characteristics, the advertisement rate ratio was 1.6 times more for beer (95% confidence interval [CI], 1.0-2.6; P =.05) and liquor (95% CI, 1.1-2.3; P =.01) for every additional million adolescent readers. Wine industry advertising was not associated with adolescent readership.\n\n\nCONCLUSIONS\nMagazine advertising by the beer and liquor industries is associated with adolescent readership. Industry and federal policymakers should examine ways to regulate advertising that reaches large numbers of adolescents.",
"title": ""
},
{
"docid": "db68d13711b9e5c2ab95296c0cae8e97",
"text": "OBJECTIVES\nAcral lentiginous melanoma (ALM) is a defined histopathological entity with peculiar clinical-pathological features and is the most common subtype of malignant melanoma in acral locations. The 5-year survival rate is lower than that for all cutaneous malignant melanoma overall (80.3% versus 91.3%). Controversy exists in the literature as to whether this worse prognosis is attributable to a more aggressive biological nature or to difficult-to-see sites and consequent advanced stage at the time of diagnosis. The main purpose of the study was to explore any prognostic difference according to upper limb or lower limb localizations, based on the hypothesis that upper limb localizations might receive attention sooner than lower limb localizations.\n\n\nPATIENTS AND METHODS\nA cohort longitudinal study was performed through a retrospective review of all patients consecutively referred to our Unit with histological confirmation of ALM. Data were collected from a 10 year period between 1996 and 2006 to allow determination of 5 year survival statistics.\n\n\nRESULTS\nOut of 87 patients included in the study, 32 were men (37%) and 55 were women 63%. The average number of months it took for patients to present was 62 months with a mode of 12 months. Overall 5 year survival was 80% and a multivariate analysis showed that the most reliable prognostic indicators are the Breslow's thickness and the margins of complete excision. When controlling the survival rates for Breslow thickness, the values were similar to the reported rates indicated in the recent literature for cutaneous malignant melanoma.\n\n\nCONCLUSIONS\nThe higher aggressiveness of ALM was noticed to be attributable to a later stage and more advanced thickness at diagnosis. No significant difference was found between upper and lower limb localization in terms of prognosis.",
"title": ""
},
{
"docid": "723cf2a8b6142a7e52a0ff3fb74c3985",
"text": "The Internet of Mobile Things (IoMT) requires support for a data lifecycle process ranging from sorting, cleaning and monitoring data streams to more complex tasks such as querying, aggregation, and analytics. Current solutions for stream data management in IoMT have been focused on partial aspects of a data lifecycle process, with special emphasis on sensor networks. This paper aims to address this problem by developing an offline and real-time data lifecycle process that incorporates a layered, data-flow centric, and an edge/cloud computing approach that is needed for handling heterogeneous, streaming and geographicallydispersed IoMT data streams. We propose an end to end architecture to support an instant intra-layer communication that establishes a stream data flow in real-time to respond to immediate data lifecycle tasks at the edge layer of the system. Our architecture also provides offline functionalities for later analytics and visualization of IoMT data streams at the core layer of the system. Communication and process are thus the defining factors in the design of our stream data management solution for IoMT. We describe and evaluate our prototype implementation using real-time transit data feeds and a commercial edge-based platform. Preliminary results are showing the advantages of running data lifecycle tasks at the edge of the network for reducing the volume of data streams that are redundant and should not be transported to the cloud. Keywords—stream data lifecycle, edge computing, cloud computing, Internet of Mobile Things, end to end architectures",
"title": ""
},
{
"docid": "a8ae6f14a7e308b70804e7f898c34876",
"text": "Find the secret to improve the quality of life by reading this architecting dependable systems. This is a kind of book that you need now. Besides, it can be your favorite book to read after having this book. Do you ask why? Well, this is a book that has different characteristic with others. You may not need to know who the author is, how wellknown the work is. As wise word, never judge the words from who speaks, but make the words as your good value to your life.",
"title": ""
},
{
"docid": "b72c8a92e8d0952970a258bb43f5d1da",
"text": "Neural networks excel in detecting regular patterns but are less successful in representing and manipulating complex data structures, possibly due to the lack of an external memory. This has led to the recent development of a new line of architectures known as Memory-Augmented Neural Networks (MANNs), each of which consists of a neural network that interacts with an external memory matrix. However, this RAM-like memory matrix is unstructured and thus does not naturally encode structured objects. Here we design a new MANN dubbed Relational Dynamic Memory Network (RDMN) to bridge the gap. Like existing MANNs, RDMN has a neural controller but its memory is structured as multi-relational graphs. RDMN uses the memory to represent and manipulate graph-structured data in response to query; and as a neural network, RDMN is trainable from labeled data. Thus RDMN learns to answer queries about a set of graph-structured objects without explicit programming. We evaluate the capability of RDMN on several important prediction problems, including software vulnerability, molecular bioactivity and chemical-chemical interaction. Results demonstrate the efficacy of the proposed model.",
"title": ""
},
{
"docid": "452c9eb3b5d411b1f32d6cf6a230b3e2",
"text": "The core vector machine (CVM) is a recent approach for scaling up kernel methods based on the notion of minimum enclosing ball (MEB). Though conceptually simple, an efficient implementation still requires a sophisticated numerical solver. In this paper, we introduce the enclosing ball (EB) problem where the ball's radius is fixed and thus does not have to be minimized. We develop efficient (1 + e)-approximation algorithms that are simple to implement and do not require any numerical solver. For the Gaussian kernel in particular, a suitable choice of this (fixed) radius is easy to determine, and the center obtained from the (1 + e)-approximation of this EB problem is close to the center of the corresponding MEB. Experimental results show that the proposed algorithm has accuracies comparable to the other large-scale SVM implementations, but can handle very large data sets and is even faster than the CVM in general.",
"title": ""
},
{
"docid": "28877487175f704ea3c56d8b69863018",
"text": "In this paper, we attempt to make a formal analysis of the performance in automatic part of speech tagging. Lower and upper bounds in tagging precision using existing taggers or their combination are provided. Since we show that with existing taggers, automatic perfect tagging is not possible, we offer two solutions for applications requiring very high precision: (1) a solution involving minimum human intervention for a precision of over 98.7%, and (2) a combination of taggers using a memory based learning algorithm that succeeds in reducing the error rate with 11.6% with respect to the best tagger involved.",
"title": ""
},
{
"docid": "46200c35a82b11d989c111e8398bd554",
"text": "A physics-based compact gallium nitride power semiconductor device model is presented in this work, which is the first of its kind. The model derivation is based on the classical drift-diffusion model of carrier transport, which expresses the channel current as a function of device threshold voltage and externally applied electric fields. The model is implemented in the Saber® circuit simulator using the MAST hardware description language. The model allows the user to extract the parameters from the dc I-V and C-V characteristics that are also available in the device datasheets. A commercial 80 V EPC GaN HEMT is used to demonstrate the dynamic validation of the model against the transient device characteristics in a double-pulse test and a boost converter circuit configuration. The simulated versus measured device characteristics show good agreement and validate the model for power electronics design and applications using the next generation of GaN HEMT devices.",
"title": ""
},
{
"docid": "9e10ca5f3776df0fe0ca41a8046adb27",
"text": "The availability of smartphone and wearable sensor technology is leading to a rapid accumulation of human subject data, and machine learning is emerging as a technique to map that data into clinical predictions. As machine learning algorithms are increasingly used to support clinical decision making, it is important to reliably quantify their prediction accuracy. Cross-validation is the standard approach for evaluating the accuracy of such algorithms; however, several cross-validations methods exist and only some of them are statistically meaningful. Here we compared two popular cross-validation methods: record-wise and subject-wise. Using both a publicly available dataset and a simulation, we found that record-wise cross-validation often massively overestimates the prediction accuracy of the algorithms. We also found that this erroneous method is used by almost half of the retrieved studies that used accelerometers, wearable sensors, or smartphones to predict clinical outcomes. As we move towards an era of machine learning based diagnosis and treatment, using proper methods to evaluate their accuracy is crucial, as erroneous results can mislead both clinicians and data scientists.",
"title": ""
},
{
"docid": "02c2c8df7a4343d10c482025d07c4995",
"text": "taking data about a user’s likes and dislikes and generating a general profile of the user. These profiles can be used to retrieve documents matching user interests; recommend music, movies, or other similar products; or carry out other tasks in a specialized fashion. This article presents a fundamentally new method for generating user profiles that takes advantage of a large-scale database of demographic data. These data are used to generalize user-specified data along the patterns common across the population, including areas not represented in the user’s original data. I describe the method in detail and present its implementation in the LIFESTYLE FINDER agent, an internet-based experiment testing our approach on more than 20,000 users worldwide.",
"title": ""
},
{
"docid": "3d4f6ba4239854a91cee61bded978057",
"text": "OBJECTIVE\nThe aim of this study is to analyze and visualize the polymorbidity associated with chronic kidney disease (CKD). The study shows diseases associated with CKD before and after CKD diagnosis in a time-evolutionary type visualization.\n\n\nMATERIALS AND METHODS\nOur sample data came from a population of one million individuals randomly selected from the Taiwan National Health Insurance Database, 1998 to 2011. From this group, those patients diagnosed with CKD were included in the analysis. We selected 11 of the most common diseases associated with CKD before its diagnosis and followed them until their death or up to 2011. We used a Sankey-style diagram, which quantifies and visualizes the transition between pre- and post-CKD states with various lines and widths. The line represents groups and the width of a line represents the number of patients transferred from one state to another.\n\n\nRESULTS\nThe patients were grouped according to their states: that is, diagnoses, hemodialysis/transplantation procedures, and events such as death. A Sankey diagram with basic zooming and planning functions was developed that temporally and qualitatively depicts they had amid change of comorbidities occurred in pre- and post-CKD states.\n\n\nDISCUSSION\nThis represents a novel visualization approach for temporal patterns of polymorbidities associated with any complex disease and its outcomes. The Sankey diagram is a promising method for visualizing complex diseases and exploring the effect of comorbidities on outcomes in a time-evolution style.\n\n\nCONCLUSIONS\nThis type of visualization may help clinicians foresee possible outcomes of complex diseases by considering comorbidities that the patients have developed.",
"title": ""
},
{
"docid": "31abdea5ff0fc543ddfd382249602cda",
"text": "Named Entity Recognition (NER), an information extraction task, is typically applied to spoken documents by cascading a large vocabulary continuous speech recognizer (LVCSR) and a named entity tagger. Recognizing named entities in automatically decoded speech is difficult since LVCSR errors can confuse the tagger. This is especially true of out-of-vocabulary (OOV) words, which are often named entities and always produce transcription errors. In this work, we improve speech NER by including features indicative of OOVs based on a OOV detector, allowing for the identification of regions of speech containing named entities, even if they are incorrectly transcribed. We construct a new speech NER data set and demonstrate significant improvements for this task.",
"title": ""
},
{
"docid": "da4a50c5539bb26ae917d294c83eea18",
"text": "An ultra-wide-band (UWB), stripline-fed Vivaldi antenna is characterized both numerically and experimentally. Three-dimensional far-field measurements are conducted and accurate antenna gain and efficiency as well as gain variation versus frequency in the boresight direction are measured. Using two Vivaldi antennas, a free-space communication link is set up. The impulse response of the cascaded antenna system is obtained using full-wave numerical electromagnetic time-domain simulations. These results are compared with frequency-domain measurements using a network analyzer. Full-wave numerical simulation of the free-space channel is performed using a two step process to circumvent the computationally intense simulation problem. Vector transfer function concept is used to obtain the overall system transfer function and the impulse response.",
"title": ""
},
{
"docid": "342b72bf32937104ae80ae275c8c9585",
"text": "In this paper, we introduce a Radio Frequency IDentification (RFID) based smart shopping system, KONARK, which helps users to checkout items faster and to track purchases in real-time. In parallel, our solution also provides the shopping mall owner with information about user interest on particular items. The central component of KONARK system is a customized shopping cart having a RFID reader which reads RFID tagged items. To provide check-out facility, our system detects in-cart items with almost 100% accuracy within 60s delay by exploiting the fact that the physical level information (RSSI, phase, doppler, read rate etc.) of in-cart RFID tags are different than outside tags. KONARK also detects user interest with 100% accuracy by exploiting the change in physical level parameters of RFID tag on the object user interacted with. In general, KONARK has been shown to perform with reasonably high accuracy in different mobility speeds in a mock-up of a shopping mall isle.",
"title": ""
},
{
"docid": "32f6db1bf35da397cd61d744a789d49c",
"text": "Mushroom poisoning is the main cause of mortality in food poisoning incidents in China. Although some responsible mushroom species have been identified, some were identified inaccuratly. This study investigated and analyzed 102 mushroom poisoning cases in southern China from 1994 to 2012, which involved 852 patients and 183 deaths, with an overall mortality of 21.48 %. The results showed that 85.3 % of poisoning cases occurred from June to September, and involved 16 species of poisonous mushroom: Amanita species (A. fuliginea, A. exitialis, A. subjunquillea var. alba, A. cf. pseudoporphyria, A. kotohiraensis, A. neoovoidea, A. gymnopus), Galerina sulciceps, Psilocybe samuiensis, Russula subnigricans, R. senecis, R. japonica, Chlorophyllum molybdites, Paxillus involutus, Leucocoprinus cepaestipes and Pulveroboletus ravenelii. Six species (A. subjunquillea var. alba, A. cf. pseudoporphyria, A. gymnopus, R. japonica, Psilocybe samuiensis and Paxillus involutus) are reported for the first time in poisoning reports from China. Psilocybe samuiensis is a newly recorded species in China. The genus Amanita was responsible for 70.49 % of fatalities; the main lethal species were A. fuliginea and A. exitialis. Russula subnigricans caused 24.59 % of fatalities, and five species showed mortality >20 % (A. fuliginea, A. exitialis, A. subjunquillea var. alba, R. subnigricans and Paxillus involutus). Mushroom poisoning symptoms were classified from among the reported clinical symptoms. Seven types of mushroom poisoning symptoms were identified for clinical diagnosis and treatment in China, including gastroenteritis, acute liver failure, acute renal failure, psychoneurological disorder, hemolysis, rhabdomyolysis and photosensitive dermatitis.",
"title": ""
},
{
"docid": "a55bda062d2f374ffd425c42e0d8d1ba",
"text": "We propose an optical computational device which uses light rays for solving the subset-sum problem. The device has a graph-like representation and the light is traversing it by following the routes given by the connections between nodes. The nodes are connected by arcs in a special way which lets us to generate all possible subsets of the given set. To each arc we assign either a number from the given set or a predefined constant. When the light is passing through an arc it is delayed by the amount of time indicated by the number placed in that arc. At the destination node we will check if there is a ray whose total delay is equal to the target value of the subset sum problem (plus some constants). The proposed optical solution solves a NP-complete problem in time proportional with the target sum, but requires an exponential amount of energy.",
"title": ""
},
{
"docid": "8c6c0a1bd17cf5cf0b84693fdfc776d9",
"text": "This paper deals with the unification of local and non-local signal processing on graphs within a single convolutional neural network (CNN) framework. Building upon recent works on graph CNNs, we propose to use convolutional layers that take as inputs two variables, a signal and a graph, allowing the network to adapt to changes in the graph structure. This also allows us to learn through training the optimal mixing of locality and non-locality, in cases where the graph is built on the input signal itself. We demonstrate the versatility and the effectiveness of our framework on several types of signals (greyscale and color images, color palettes and speech signals) and on several applications (style transfer, color transfer, and denoising).",
"title": ""
},
{
"docid": "b40129a15767189a7a595db89c066cf8",
"text": "To increase reliability of face recognition system, the system must be able to distinguish real face from a copy of face such as a photograph. In this paper, we propose a fast and memory efficient method of live face detection for embedded face recognition system, based on the analysis of the movement of the eyes. We detect eyes in sequential input images and calculate variation of each eye region to determine whether the input face is a real face or not. Experimental results show that the proposed approach is competitive and promising for live face detection. Keywords—Liveness Detection, Eye detection, SQI.",
"title": ""
}
] |
scidocsrr
|
45496e802019324e75a7495fe0651307
|
The Berlin brain-computer interface: EEG-based communication without subject training
|
[
{
"docid": "5d247482bb06e837bf04c04582f4bfa2",
"text": "This paper provides an introduction to support vector machines, kernel Fisher discriminant analysis, and kernel principal component analysis, as examples for successful kernel-based learning methods. We first give a short background about Vapnik-Chervonenkis theory and kernel feature spaces and then proceed to kernel based learning in supervised and unsupervised scenarios including practical and algorithmic considerations. We illustrate the usefulness of kernel algorithms by discussing applications such as optical character recognition and DNA analysis.",
"title": ""
}
] |
[
{
"docid": "06abf2a7c6d0c25cfe54422268300e58",
"text": "The purpose of the present study is to provide useful data that could be applied to various types of periodontal plastic surgery by detailing the topography of the greater palatine artery (GPA), looking in particular at its depth from the palatal masticatory mucosa (PMM) and conducting a morphometric analysis of the palatal vault. Forty-three hemisectioned hard palates from embalmed Korean adult cadavers were used in this study. The morphometry of the palatal vault was analyzed, and then the specimens were decalcified and sectioned. Six parameters were measured using an image-analysis system after performing a standard calibration. In one specimen, the PMM was separated from the hard palate and subjected to a partial Sihler's staining technique, allowing the branching pattern of the GPA to be observed in a new method. The distances between the GPA and the gingival margin, and between the GPA and the cementoenamel junction were greatest at the maxillary second premolar. The shortest vertical distance between the GPA and the PMM decreased gradually as it proceeded anteriorly. The GPA was located deeper in the high-vault group than in the low-vault group. The premolar region should be recommended as the optimal donor site for tissue grafting, and in particular the second premolar region. The maximum size and thickness of tissue that can be harvested from the region were 9.3 mm and 4.0 mm, respectively.",
"title": ""
},
{
"docid": "dfdf2581010777e51ff3e29c5b9aee7f",
"text": "This paper proposes a parallel architecture with resistive crosspoint array. The design of its two essential operations, read and write, is inspired by the biophysical behavior of a neural system, such as integrate-and-fire and local synapse weight update. The proposed hardware consists of an array with resistive random access memory (RRAM) and CMOS peripheral circuits, which perform matrix-vector multiplication and dictionary update in a fully parallel fashion, at the speed that is independent of the matrix dimension. The read and write circuits are implemented in 65 nm CMOS technology and verified together with an array of RRAM device model built from experimental data. The overall system exploits array-level parallelism and is demonstrated for accelerated dictionary learning tasks. As compared to software implementation running on a 8-core CPU, the proposed hardware achieves more than 3000 × speedup, enabling high-speed feature extraction on a single chip.",
"title": ""
},
{
"docid": "d9789c6dc7febc25732617f0d57a43a1",
"text": "When a binary or ordinal regression model incorrectly assumes that error variances are the same for all cases, the standard errors are wrong and (unlike OLS regression) the parameter estimates are biased. Heterogeneous choice (also known as location-scale or heteroskedastic ordered) models explicitly specify the determinants of heteroskedasticity in an attempt to correct for it. Such models are also useful when the variance itself is of substantive interest. This paper illustrates how the author’s Stata program oglm (Ordinal Generalized Linear Models) can be used to estimate heterogeneous choice and related models. It shows that two other models that have appeared in the literature (Allison’s model for group comparisons and Hauser and Andrew’s logistic response model with proportionality constraints) are special cases of a heterogeneous choice model and alternative parameterizations of it. The paper further argues that heterogeneous choice models may sometimes be an attractive alternative to other ordinal regression models, such as the generalized ordered logit model estimated by gologit2. Finally, the paper offers guidelines on how to interpret, test and modify heterogeneous choice models.",
"title": ""
},
{
"docid": "6c106d560d8894d941851386d96afe2b",
"text": "Cooperative vehicular networks require the exchange of positioning and basic status information between neighboring nodes to support higher layer protocols and applications, including active safety applications. The information exchange is based on the periodic transmission/reception of 1-hop broadcast messages on the so called control channel. The dynamic adaptation of the transmission parameters of such messages will be key for the reliable and efficient operation of the system. On one hand, congestion control protocols need to be applied to control the channel load, typically through the adaptation of the transmission parameters based on certain channel load metrics. On the other hand, awareness control protocols are also required to adequately support cooperative vehicular applications. Such protocols typically adapt the transmission parameters of periodic broadcast messages to ensure each vehicle's capacity to detect, and possibly communicate, with the relevant vehicles and infrastructure nodes present in its local neighborhood. To date, congestion and awareness control protocols have been normally designed and evaluated separately, although both will be required for the reliable and efficient operation of the system. To this aim, this paper proposes and evaluates INTERN, a new control protocol that integrates two congestion and awareness control processes. The simulation results obtained demonstrate that INTERN is able to satisfy the application's requirements of all vehicles, while effectively controlling the channel load.",
"title": ""
},
{
"docid": "645395d46f653358d942742711d50c0b",
"text": "Feature descriptors play a crucial role in a wide range of geometry analysis and processing applications, including shape correspondence, retrieval, and segmentation. In this paper, we propose ShapeNet, a generalization of the popular convolutional neural networks (CNN) paradigm to non-Euclidean manifolds. Our construction is based on a local geodesic system of polar coordinates to extract “patches”, which are then passed through a cascade of filters and linear and non-linear operators. The coefficients of the filters and linear combination weights are optimization variables that are learned to minimize a task-specific cost function. We use ShapeNet to learn invariant shape feature descriptors that significantly outperform recent state-of-the-art methods, and show that previous approaches such as heat and wave kernel signatures, optimal spectral descriptors, and intrinsic shape contexts can be obtained as particular configurations of ShapeNet. CR Categories: I.2.6 [Artificial Intelligence]: Learning— Connectionism and neural nets",
"title": ""
},
{
"docid": "24ac33300d3ea99441068c20761e8305",
"text": "Purpose – The purpose of this research is to examine the critical success factors of mobile web site adoption. Design/methodology/approach – Based on the valid responses collected from a questionnaire survey, the structural equation modelling technique was employed to examine the research model. Findings – The results indicate that system quality is the main factor affecting perceived ease of use, whereas information quality is the main factor affecting perceived usefulness. Service quality has significant effects on trust and perceived ease of use. Perceived usefulness, perceived ease of use and trust determine user satisfaction. Practical implications – Mobile service providers need to improve the system quality, information quality and service quality of mobile web sites to enhance user satisfaction. Originality/value – Previous research has mainly focused on e-commerce web site success and seldom examined the factors affecting mobile web site success. This research fills the gap. The research draws on information systems success theory, the technology acceptance model and trust theory as the theoretical bases.",
"title": ""
},
{
"docid": "b92d89fec6f0e1cfd869290b015a7be5",
"text": "Vertex-centric graph processing is employed by many popular algorithms (e.g., PageRank) due to its simplicity and efficient use of asynchronous parallelism. The high compute power provided by SIMT architecture presents an opportunity for accelerating these algorithms using GPUs. Prior works of graph processing on a GPU employ Compressed Sparse Row (CSR) form for its space-efficiency; however, CSR suffers from irregular memory accesses and GPU underutilization that limit its performance. In this paper, we present CuSha, a CUDA-based graph processing framework that overcomes the above obstacle via use of two novel graph representations: G-Shards and Concatenated Windows (CW). G-Shards uses a concept recently introduced for non-GPU systems that organizes a graph into autonomous sets of ordered edges called shards. CuSha's mapping of GPU hardware resources on to shards allows fully coalesced memory accesses. CW is a novel representation that enhances the use of shards to achieve higher GPU utilization for processing sparse graphs. Finally, CuSha fully utilizes the GPU power by processing multiple shards in parallel on GPU's streaming multiprocessors. For ease of programming, CuSha allows the user to define the vertex-centric computation and plug it into its framework for parallel processing of large graphs. Our experiments show that CuSha provides significant speedups over the state-of-the-art CSR-based virtual warp-centric method for processing graphs on GPUs.",
"title": ""
},
{
"docid": "8fe823702191b4a56defaceee7d19db6",
"text": "We propose a method of stacking multiple long short-term memory (LSTM) layers for modeling sentences. In contrast to the conventional stacked LSTMs where only hidden states are fed as input to the next layer, our architecture accepts both hidden and memory cell states of the preceding layer and fuses information from the left and the lower context using the soft gating mechanism of LSTMs. Thus the proposed stacked LSTM architecture modulates the amount of information to be delivered not only in horizontal recurrence but also in vertical connections, from which useful features extracted from lower layers are effectively conveyed to upper layers. We dub this architecture Cell-aware Stacked LSTM (CAS-LSTM) and show from experiments that our models achieve state-of-the-art results on benchmark datasets for natural language inference, paraphrase detection, and sentiment classification.",
"title": ""
},
{
"docid": "77ec15fd35f9bceee4537afc63c82079",
"text": "Grapheme-to-phoneme conversion plays an important role in text-to-speech applications and other fields of computational linguistics. Although Korean uses a phonemic writing system, it must have a grapheme-to-phoneme conversion for speech synthesis because Korean writing system does not always reflect its actual pronunciations. This paper describes a grapheme-to-phoneme conversion method based on sound patterns to convert Korean text strings into phonemic representations. In the experiment with Korean news broadcasting evaluation set of 20 sentences, the accuracy of our system achieve as high as 98.70% on conversion. The performance of our rule-based system shows that the rule-based sound patterns are effective on Korean grapheme-to-phoneme conversion.",
"title": ""
},
{
"docid": "617db9b325e211b45571db6fb8dc6c87",
"text": "This paper gives a review of acoustic and ultrasonic optical fiber sensors (OFSs). The review covers optical fiber sensing methods for detecting dynamic strain signals, including general sound and acoustic signals, high-frequency signals, i.e., ultrasonic/ultrasound, and other signals such as acoustic emissions, and impact induced dynamic strain. Several optical fiber sensing methods are included, in an attempted to summarize the majority of optical fiber sensing methods used to date. The OFS include single fiber sensors and optical fiber devices, fiber-optic interferometers, and fiber Bragg gratings (FBGs). The single fiber and fiber device sensors include optical fiber couplers, microbend sensors, refraction-based sensors, and other extrinsic intensity sensors. The optical fiber interferometers include Michelson, Mach-Zehnder, Fabry-Perot, Sagnac interferometers, as well as polarization and model interference. The specific applications addressed in this review include optical fiber hydrophones, biomedical sensors, and sensors for nondestructive evaluation and structural health monitoring. Future directions are outlined and proposed for acousto-ultrasonic OFS.",
"title": ""
},
{
"docid": "368e72277a5937cb8ee94cea3fa11758",
"text": "Monoclinic Gd2O3:Eu(3+) nanoparticles (NPs) possess favorable magnetic and optical properties for biomedical application. However, how to obtain small enough NPs still remains a challenge. Here we combined the standard solid-state reaction with the laser ablation in liquids (LAL) technique to fabricate sub-10 nm monoclinic Gd2O3:Eu(3+) NPs and explained their formation mechanism. The obtained Gd2O3:Eu(3+) NPs exhibit bright red fluorescence emission and can be successfully used as fluorescence probe for cells imaging. In vitro and in vivo magnetic resonance imaging (MRI) studies show that the product can also serve as MRI good contrast agent. Then, we systematically investigated the nanotoxicity including cell viability, apoptosis in vitro, as well as the immunotoxicity and pharmacokinetics assays in vivo. This investigation provides a platform for the fabrication of ultrafine monoclinic Gd2O3:Eu(3+) NPs and evaluation of their efficiency and safety in preclinical application.",
"title": ""
},
{
"docid": "3dc3e680c68aefb6968fbe120d203cdf",
"text": "A procedure for reflection and discourse on the behavior of bots in the context of law, deception, and societal norms.",
"title": ""
},
{
"docid": "49e5f9e36efb6b295868a307c1486c60",
"text": "This paper reviews ultrasound segmentation methods, in a broad sense, focusing on techniques developed for medical B-mode ultrasound images. First, we present a review of articles by clinical application to highlight the approaches that have been investigated and degree of validation that has been done in different clinical domains. Then, we present a classification of methodology in terms of use of prior information. We conclude by selecting ten papers which have presented original ideas that have demonstrated particular clinical usefulness or potential specific to the ultrasound segmentation problem",
"title": ""
},
{
"docid": "e7d5dd2926238db52cf406f20947f90e",
"text": "The development of the capital markets is changing the relevance and empirical validity of the efficient market hypothesis. The dynamism of capital markets determines the need for efficiency research. The authors analyse the development and the current status of the efficient market hypothesis with an emphasis on the Baltic stock market. Investors often fail to earn an excess profit, but yet stock market anomalies are observed and market prices often deviate from their intrinsic value. The article presents an analysis of the concept of efficient market. Also, the market efficiency evolution is reviewed and its current status is analysed. This paper presents also an examination of stock market efficiency in the Baltic countries. Finally, the research methods are reviewed and the methodology of testing the weak-form efficiency in a developing market is suggested.",
"title": ""
},
{
"docid": "059583d1d8a6f99bae3736d900008caa",
"text": "Ultraviolet disinfection is a frequent option for eliminating viable organisms in ballast water to fulfill international and national regulations. The objective of this work is to evaluate the reduction of microalgae able to reproduce after UV irradiation, based on their growth features. A monoculture of microalgae Tisochrysis lutea was irradiated with different ultraviolet doses (UV-C 254 nm) by a flow-through reactor. A replicate of each treated sample was held in the dark for 5 days simulating a treatment during the ballasting; another replicate was incubated directly under the light, corresponding to the treatment application during de-ballasting. Periodic measurements of cell density were taken in order to obtain the corresponding growth curves. Irradiated samples depicted a regrowth following a logistic curve in concordance with the applied UV dose. By modeling these curves, it is possible to obtain the initial concentration of organisms able to reproduce for each applied UV dose, thus obtaining the dose-survival profiles, needed to determine the disinfection kinetics. These dose-survival profiles enable detection of a synergic effect between the ultraviolet irradiation and a subsequent dark period; in this sense, the UV dose applied during the ballasting operation and subsequent dark storage exerts a strong influence on microalgae survival. The proposed methodology, based on growth modeling, established a framework for comparing the UV disinfection by different devices and technologies on target organisms. This procedure may also assist the understanding of the evolution of treated organisms in more complex assemblages such as those that exist in natural ballast water.",
"title": ""
},
{
"docid": "2c95ebadb6544904b791cdbbbd70dc1c",
"text": "This report describes a small heartbeat monitoring system using capacitively coupled ECG sensors. Capacitively coupled sensors using an insulated electrode have been proposed to obtain ECG signals without pasting electrodes directly onto the skin. Although the sensors have better usability than conventional ECG sensors, it is difficult to remove noise contamination. Power-line noise can be a severe noise source that increases when only a single electrode is used. However, a multiple electrode system degrades usability. To address this problem, we propose a noise cancellation technique using an adaptive noise feedback approach, which can improve the availability of the capacitive ECG sensor using a single electrode. An instrumental amplifier is used in the proposed method for the first stage amplifier instead of voltage follower circuits. A microcontroller predicts the noise waveform from an ADC output. To avoid saturation caused by power-line noise, the predicted noise waveform is fed back to an amplifier input through a DAC. We implemented the prototype sensor system to evaluate the noise reduction performance. Measurement results using a prototype board show that the proposed method can suppress 28-dB power-line noise.",
"title": ""
},
{
"docid": "1dd4bed5dd52b18f39c0e96c0a14c153",
"text": "Understanding the generalization of deep learning has raised lots of concerns recently, where the learning algorithms play an important role in generalization performance, such as stochastic gradient descent (SGD). Along this line, we particularly study the anisotropic noise introduced by SGD, and investigate its importance for the generalization in deep neural networks. Through a thorough empirical analysis, it is shown that the anisotropic diffusion of SGD tends to follow the curvature information of the loss landscape, and thus is beneficial for escaping from sharp and poor minima effectively, towards more stable and flat minima. We verify our understanding through comparing this anisotropic diffusion with full gradient descent plus isotropic diffusion (i.e. Langevin dynamics) and other types of positiondependent noise.",
"title": ""
},
{
"docid": "6f242ee8418eebdd9fdce50ca1e7cfa2",
"text": "HAL is a multidisciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers. L'archive ouverte pluridisciplinaire HAL, est destinée au dépôt età la diffusion de documents scientifiques de niveau recherche, publiés ou non, ´ emanant desétablissements d'enseignement et de recherche français oú etrangers, des laboratoires publics ou privés. Summary. This paper describes the construction and functionality of an Autonomous Fruit Picking Machine (AFPM) for robotic apple harvesting. The key element for the success of the AFPM is the integrated approach which combines state of the art industrial components with the newly designed flexible gripper. The gripper consist of a silicone funnel with a camera mounted inside. The proposed concepts guarantee adequate control of the autonomous fruit harvesting operation globally and of the fruit picking cycle particularly. Extensive experiments in the field validate the functionality of the AFPM.",
"title": ""
},
{
"docid": "aa4b36c95058177167c58d4e192c8c1d",
"text": "Face detection is a prominent research domain in the field of digital image processing. Out of various algorithms developed so far, Viola–Jones face detection has been highly successful. However, because of its complex nature, there is need to do more exploration in its various phases including training as well as actual face detection to find the scope of further improvement in terms of efficiency as well as accuracy under various constraints so as to detect and process the faces in real time. Its training phase for the screening of large amount of Haar features and generation of cascade classifiers is quite tedious and computationally intensive task. Any modification for improvement in its features or cascade classifiers requires re-training of all the features through example images, which are very large in number. Therefore, there is need to enhance the computational efficiency of training process of Viola–Jones face detection algorithm so that further enhancement in this framework is made easy. There are three main contributions in this research work. Firstly, we have achieved a considerable speedup by parallelizing the training as well as detection of rectangular Haar features based upon Viola–Jones framework on GPU. Secondly, the analysis of features selected through AdaBoost has been done, which can give intuitiveness in developing more innovative and efficient techniques for selecting competitive classifiers for the task of face detection, which can further be generalized for any type of object detection. Thirdly, implementation of parallelization techniques of modified version of Viola–Jones face detection algorithm in combination with skin color filtering to reduce the search space has been done. We have been able to achieve considerable reduction in the search space and time cost by using the skin color filtering in conjunction with the Viola–Jones algorithm. Time cost reduction of the order of 54.31% at the image resolution of 640*480 of GPU time versus CPU time has been achieved by the proposed parallelized algorithm.",
"title": ""
},
{
"docid": "45ec93ccf4b2f6a6b579a4537ca73e9c",
"text": "Concurrent collections provide thread-safe, highly-scalable operations, and are widely used in practice. However, programmers can misuse these concurrent collections when composing two operations where a check on the collection (such as non-emptiness) precedes an action (such as removing an entry). Unless the whole composition is atomic, the program contains an atomicity violation bug. In this paper we present the first empirical study of CHECK-THEN-ACT idioms of Java concurrent collections in a large corpus of open-source applications. We catalog nine commonly misused CHECK-THEN-ACT idioms and show the correct usage. We quantitatively and qualitatively analyze 28 widely-used open source Java projects that use Java concurrency collections - comprising 6.4M lines of code. We classify the commonly used idioms, the ones that are the most error-prone, and the evolution of the programs with respect to misused idioms. We implemented a tool, CTADetector, to detect and correct misused CHECK-THEN-ACT idioms. Using CTADetector we found 282 buggy instances. We reported 155 to the developers, who examined 90 of them. The developers confirmed 60 as new bugs and accepted our patch. This shows that CHECK-THEN-ACT idioms are commonly misused in practice, and correcting them is important.",
"title": ""
}
] |
scidocsrr
|
3567c9cc9f656de0f5175228f411c082
|
Towards real-time Speech Emotion Recognition using deep neural networks
|
[
{
"docid": "d1cde8ce9934723224ecf21c3cab6615",
"text": "Deep Neural Networks (DNNs) denote multilayer artificial neural networks with more than one hidden layer and millions of free parameters. We propose a Generalized Discriminant Analysis (GerDA) based on DNNs to learn discriminative features of low dimension optimized with respect to a fast classification from a large set of acoustic features for emotion recognition. On nine frequently used emotional speech corpora, we compare the performance of GerDA features and their subsequent linear classification with previously reported benchmarks obtained using the same set of acoustic features classified by Support Vector Machines (SVMs). Our results impressively show that low-dimensional GerDA features capture hidden information from the acoustic features leading to a significantly raised unweighted average recall and considerably raised weighted average recall.",
"title": ""
},
{
"docid": "eba5ef77b594703c96c0e2911fcce7b0",
"text": "Deep Neural Network Hidden Markov Models, or DNN-HMMs, are recently very promising acoustic models achieving good speech recognition results over Gaussian mixture model based HMMs (GMM-HMMs). In this paper, for emotion recognition from speech, we investigate DNN-HMMs with restricted Boltzmann Machine (RBM) based unsupervised pre-training, and DNN-HMMs with discriminative pre-training. Emotion recognition experiments are carried out on these two models on the eNTERFACE'05 database and Berlin database, respectively, and results are compared with those from the GMM-HMMs, the shallow-NN-HMMs with two layers, as well as the Multi-layer Perceptrons HMMs (MLP-HMMs). Experimental results show that when the numbers of the hidden layers as well hidden units are properly set, the DNN could extend the labeling ability of GMM-HMM. Among all the models, the DNN-HMMs with discriminative pre-training obtain the best results. For example, for the eNTERFACE'05 database, the recognition accuracy improves 12.22% from the DNN-HMMs with unsupervised pre-training, 11.67% from the GMM-HMMs, 10.56% from the MLP-HMMs, and even 17.22% from the shallow-NN-HMMs, respectively.",
"title": ""
}
] |
[
{
"docid": "40d7847859a974d2a91cccab55ba625b",
"text": "Programming question and answer (Q&A) websites, such as Stack Overflow, leverage the knowledge and expertise of users to provide answers to technical questions. Over time, these websites turn into repositories of software engineering knowledge. Such knowledge repositories can be invaluable for gaining insight into the use of specific technologies and the trends of developer discussions. Previous work has focused on analyzing the user activities or the social interactions in Q&A websites. However, analyzing the actual textual content of these websites can help the software engineering community to better understand the thoughts and needs of developers. In the article, we present a methodology to analyze the textual content of Stack Overflow discussions. We use latent Dirichlet allocation (LDA), a statistical topic modeling technique, to automatically discover the main topics present in developer discussions. We analyze these discovered topics, as well as their relationships and trends over time, to gain insights into the development community. Our analysis allows us to make a number of interesting observations, including: the topics of interest to developers range widely from jobs to version control systems to C# syntax; questions in some topics lead to discussions in other topics; and the topics gaining the most popularity over time are web development (especially jQuery), mobile applications (especially Android), Git, and MySQL.",
"title": ""
},
{
"docid": "b6e8311f7f1a01d508a7320c956f8362",
"text": "The evolution of traditional electricity grid into a state-of-the-art Smart Grid will need innovation in a number of dimensions: seamless integration of renewable energy sources, management of intermittent power supplies, realtime demand response, energy pricing strategy etc. The grid configuration will change from the central broadcasting network into a more distributed and dynamic network with two-way energy transmission. Information network is another necessary component that will be built on the power grid, which will measure the status of the whole power grid and control the energy flow. In this perspective of unsolved problems, we have designed SmartGridLab, an efficient Smart Grid testbed to help the research community analyze their designs and protocols in lab environment. This will foster the Smart Grid researchers to develop, analyze and compare different designs conveniently and efficiently. Our designed testbed consists of following major components: Intelligent Power Switch, power supply (main supply and renewable energy supply), energy demander (e.g. appliance), and an information network containing Power Meter. We have validated the usage of our designed testbed for greater research problems in Smart Grid.",
"title": ""
},
{
"docid": "9775396477ccfde5abdd766588655539",
"text": "The use of hand gestures offers an alternative to the commonly used human computer interfaces, providing a more intuitive way of navigating among menus and multimedia applications. This paper presents a system for hand gesture recognition devoted to control windows applications. Starting from the images captured by a time-of-flight camera (a camera that produces images with an intensity level inversely proportional to the depth of the objects observed) the system performs hand segmentation as well as a low-level extraction of potentially relevant features which are related to the morphological representation of the hand silhouette. Classification based on these features discriminates between a set of possible static hand postures which results, combined with the estimated motion pattern of the hand, in the recognition of dynamic hand gestures. The whole system works in real-time, allowing practical interaction between user and application.",
"title": ""
},
{
"docid": "0fd5256c319f7be353d57ed336d94587",
"text": "As a precious part of the human cultural heritage, Chinese poetry has influenced people for generations. Automatic poetry composition is a challenge for AI. In recent years, significant progress has been made in this area benefiting from the development of neural networks. However, the coherence in meaning, theme or even artistic conception for a generated poem as a whole still remains a big problem. In this paper, we propose a novel Salient-Clue mechanism for Chinese poetry generation. Different from previous work which tried to exploit all the context information, our model selects the most salient characters automatically from each so-far generated line to gradually form a salient clue, which is utilized to guide successive poem generation process so as to eliminate interruptions and improve coherence. Besides, our model can be flexibly extended to control the generated poem in different aspects, for example, poetry style, which further enhances the coherence. Experimental results show that our model is very effective, outperforming three strong baselines.",
"title": ""
},
{
"docid": "f03f84dd248d06049a177768f0fc8671",
"text": "We propose a framework that infers mid-level visual properties of an image by learning about ordinal relationships. Instead of estimating metric quantities directly, the system proposes pairwise relationship estimates for points in the input image. These sparse probabilistic ordinal measurements are globalized to create a dense output map of continuous metric measurements. Estimating order relationships between pairs of points has several advantages over metric estimation: it solves a simpler problem than metric regression, humans are better at relative judgements, so data collection is easier, ordinal relationships are invariant to monotonic transformations of the data, thereby increasing the robustness of the system and providing qualitatively different information. We demonstrate that this frame-work works well on two important mid-level vision tasks: intrinsic image decomposition and depth from an RGB image. We train two systems with the same architecture on data from these two modalities. We provide an analysis of the resulting models, showing that they learn a number of simple rules to make ordinal decisions. We apply our algorithm to depth estimation, with good results, and intrinsic image decomposition, with state-of-the-art results.",
"title": ""
},
{
"docid": "54ecdc8d482dd68a3fde4b5c6a11cca2",
"text": "this paper presents how we can achieve the state-of-the-art accuracy in multi-scale objection detection task while adopting and combining the recent technical innovation in deep learning. Following the common pipeline of CNN feature extraction, we mainly design the architecture of feature extraction which exploits the idea of feature pyramid. We further add an extra 1*1 convolution layer to benefit feature extraction, via the batch normalization. In addition, the designed network architecture for feature extraction combines low-resolution and high-resolution feature layers to predict the category of the object in images. The new architecture is trained with the help of batch normalization, mean pooling based on plateau detection. The proposed architecture shows competitive results compared to some state-of-the-art algorithms both in accuracy and in speed on some datasets.",
"title": ""
},
{
"docid": "2baf55123171c6e2110b19b1583c3d17",
"text": "A novel three-way power divider using tapered lines is presented. It has several strip resistors which are formed like a ladder between the tapered-line conductors to achieve a good output isolation. The equivalent circuits are derived with the EE/OE/OO-mode analysis based on the fundamental propagation modes in three-conductor coupled lines. The fabricated three-way power divider shows a broadband performance in input return loss which is greater than 20 dB over a 3:1 bandwidth in the C-Ku bands.",
"title": ""
},
{
"docid": "ab56ccc071c2eb59168266eae67c4b1f",
"text": "Word embeddings have attracted much attention recently. Different from alphabetic writing systems, Chinese characters are often composed of subcharacter components which are also semantically informative. In this work, we propose an approach to jointly embed Chinese words as well as their characters and fine-grained subcharacter components. We use three likelihoods to evaluate whether the context words, characters, and components can predict the current target word, and collected 13,253 subcharacter components to demonstrate the existing approaches of decomposing Chinese characters are not enough. Evaluation on both word similarity and word analogy tasks demonstrates the superior performance of our model.",
"title": ""
},
{
"docid": "f4c51f4790114c42bef19ff421c83f0d",
"text": "Real-time systems are growing in complexity and realtime and soft real-time applications are becoming common in general-purpose computing environments. Thus, there is a growing need for scheduling solutions that simultaneously support processes with a variety of different timeliness constraints. Toward this goal we have developed the Resource Allocation/Dispatching (RAD) integrated scheduling model and the Rate-Based Earliest Deadline (RBED) integrated multi-class real-time scheduler based on this model. We present RAD and the RBED scheduler and formally prove the correctness of the operations that RBED employs. We then describe our implementation of RBED and present results demonstrating how RBED simultaneously and seamlessly supports hard real-time, soft real-time, and best-effort processes.",
"title": ""
},
{
"docid": "180e1eb6c7c9c752de5cfca2c2149d1d",
"text": "State-of-the-art CNN models for Image recognition use deep networks with small filters instead of shallow networks with large filters, because the former requires fewer weights. In the light of above trend, we present a fast and efficient FPGA based convolution engine to accelerate CNN models over small filters. The convolution engine implements Winograd minimal filtering algorithm to reduce the number of multiplications by 38% to 55% for state-of-the-art CNNs. We exploit the parallelism of the Winograd convolution engine to scale the overall performance. We show that our overall design sustains the peak throughput of the convolution engines. We propose a novel data layout to reduce the required memory bandwidth of our design by half. One noteworthy feature of our Winograd convolution engine is that it hides the computation latency of the pooling layer. As a case study we implement VGG16 CNN model and compare it with previous approaches. Compared with the state-of-the-art reduced precision VGG16 implementation, our implementation achieves 1.2× improvement in throughput by using 3× less multipliers and 2× less on-chip memory without impacting the classification accuracy. The improvements in throughput per multiplier and throughput per unit on-chip memory are 3.7× and 2.47× respectively, compared with the state-of-the-art design.",
"title": ""
},
{
"docid": "ed2ad5cd12eb164a685a60dc0d0d4a06",
"text": "Explainable Recommendation refers to the personalized recommendation algorithms that address the problem of why they not only provide users with the recommendations, but also provide explanations to make the user or system designer aware of why such items are recommended. In this way, it helps to improve the effectiveness, efficiency, persuasiveness, and user satisfaction of recommendation systems. In recent years, a large number of explainable recommendation approaches – especially model-based explainable recommendation algorithms – have been proposed and adopted in real-world systems. In this survey, we review the work on explainable recommendation that has been published in or before the year of 2018. We first highlight the position of explainable recommendation in recommender system research by categorizing recommendation problems into the 5W, i.e., what, when, who, where, and why. We then conduct a comprehensive survey of explainable recommendation itself in terms of three aspects: 1) We provide a chronological research line of explanations in recommender systems, including the user study approaches in the early years, as well as the more recent model-based approaches. 2) We provide a taxonomy for explainable recommendation algorithms, including user-based, item-based, model-based, and post-model explanations. 3) We summarize the application of explainable recommendation in different recommendation tasks, including product recommendation, social recommendation, POI recommendation, etc. We devote a section to discuss the explanation perspectives in the broader IR and machine learning settings, as well as their relationship with explainable recommendation research. We end the survey by discussing potential future research directions to promote the explainable recommendation research area. now Publishers Inc.. Explainable Recommendation: A Survey and New Perspectives. Foundations and Trends © in Information Retrieval, vol. XX, no. XX, pp. 1–87, 2018. DOI: 10.1561/XXXXXXXXXX.",
"title": ""
},
{
"docid": "4306cc9072c5b53f6fc7b79574dac117",
"text": "It is popular to use real-world data to evaluate data mining techniques. However, there are some disadvantages to use real-world data for such purposes. Firstly, real-world data in most domains is difficult to obtain for several reasons, such as budget, technical or ethical. Secondly, the use of many of the real-world data is restricted, those data sets do either not contain specific patterns that are easy to mine or the data needs special preparation and the algorithm needs very specific settings in order to find patterns in it. The solution to this could be the generation of synthetic, \"meaningful data\" (data with intrinsic patterns). This paper presents a novel approach for generating synthetic data by developing a tool, including novel algorithms for specific data mining patterns, and a user-friendly interface, which is able to create large data sets with predefined classification rules, multilinear regression patterns. A preliminary run of the prototype proves that the generation of large amounts of such \"meaningful data\" is possible. Also the proposed approach could be extended to a further development for generating synthetic data with other intrinsic patterns.",
"title": ""
},
{
"docid": "ff3f051b9fde8a8e1a877e998851c9ec",
"text": "We present an overview and evaluation of a new, systematic approach for generation of highly realistic, annotated synthetic data for training of deep neural networks in computer vision tasks. The main contribution is a procedural world modeling approach enabling high variability coupled with physically accurate image synthesis, and is a departure from the hand-modeled virtual worlds and approximate image synthesis methods used in real-time applications. The benefits of our approach include flexible, physically accurate and scalable image synthesis, implicit wide coverage of classes and features, and complete data introspection for annotations, which all contribute to quality and cost efficiency. To evaluate our approach and the efficacy of the resulting data, we use semantic segmentation for autonomous vehicles and robotic navigation as the main application, and we train multiple deep learning architectures using synthetic data with and without fine tuning on organic (i.e. real-world) data. The evaluation shows that our approach improves the neural network’s performance and that even modest implementation efforts produce state-of-the-art results. ∗[email protected] †[email protected] ‡[email protected]",
"title": ""
},
{
"docid": "647c9d35a4f26df723125bc0f751cd6a",
"text": "Convolutional Neural Networks (CNNs) have become the state-of-the-art in supervised learning vision tasks. Their convolutional filters are of paramount importance for they allow to learn patterns while disregarding their locations in input images. When facing highly irregular domains, generalized convolutional operators based on an underlying graph structure have been proposed. However, these operators do not exactly match standard ones on grid graphs, and introduce unwanted additional invariance (e.g. with regards to rotations). We propose a novel approach to generalize CNNs to irregular domains using weight sharing and graph-based operators. Using experiments, we show that these models resemble CNNs on regular domains and offer better performance than multilayer perceptrons on distorded ones.",
"title": ""
},
{
"docid": "1045117f9e6e204ff51ef67a1aff031f",
"text": "Application of models to data is fraught. Data-generating collaborators often only have a very basic understanding of the complications of collating, processing and curating data. Challenges include: poor data collection practices, missing values, inconvenient storage mechanisms, intellectual property, security and privacy. All these aspects obstruct the sharing and interconnection of data, and the eventual interpretation of data through machine learning or other approaches. In project reporting, a major challenge is in encapsulating these problems and enabling goals to be built around the processing of data. Project overruns can occur due to failure to account for the amount of time required to curate and collate. But to understand these failures we need to have a common language for assessing the readiness of a particular data set. This position paper proposes the use of data readiness levels: it gives a rough outline of three stages of data preparedness and speculates on how formalisation of these levels into a common language for data readiness could facilitate project management.",
"title": ""
},
{
"docid": "9adb3374f58016ee9bec1daf7392a64e",
"text": "To develop a less genotype-dependent maize-transformation procedure, we used 10-month-old Type I callus as target tissue for microprojectile bombardment. Twelve transgenic callus lines were obtained from two of the three anther-culture-derived callus cultures representing different gentic backgrounds. Multiple fertile transgenic plants (T0) were regenerated from each transgenic callus line. Transgenic leaves treated with the herbicide Basta showed no symptoms, indicating that one of the two introduced genes, bar, was functionally expressing. Data from DNA hybridization analysis confirmed that the introduced genes (bar and uidA) were integrated into the plant genome and that all lines derived from independent transformation events. Transmission of the introduced genes and the functional expression of bar in T1 progeny was also confirmed. Germination of T1 immature embryos in the presence of bialaphos was used as a screen for functional expression of bar; however, leaf painting of T1 plants proved a more accurate predictor of bar expression in plants. This study suggests that maize Type I callus can be transformed efficiently through microprojectile bombardment and that fertile transgenic plants can be recovered. This system should facilitate the direct introduction of agronomically important genes in to commercial genotypes.",
"title": ""
},
{
"docid": "ad68a9ecf4ba36ec924ec22afaafd9f3",
"text": "The convergence rate and final performance of common deep learning models have significantly benefited from heuristics such as learning rate schedules, knowledge distillation, skip connections, and normalization layers. In the absence of theoretical underpinnings, controlled experiments aimed at explaining these strategies can aid our understanding of deep learning landscapes and the training dynamics. Existing approaches for empirical analysis rely on tools of linear interpolation and visualizations with dimensionality reduction, each with their limitations. Instead, we revisit such analysis of heuristics through the lens of recently proposed methods for loss surface and representation analysis, viz., mode connectivity and canonical correlation analysis (CCA), and hypothesize reasons for the success of the heuristics. In particular, we explore knowledge distillation and learning rate heuristics of (cosine) restarts and warmup using mode connectivity and CCA. Our empirical analysis suggests that: (a) the reasons often quoted for the success of cosine annealing are not evidenced in practice; (b) that the effect of learning rate warmup is to prevent the deeper layers from creating training instability; and (c) that the latent knowledge shared by the teacher is primarily disbursed to the deeper layers.",
"title": ""
},
{
"docid": "fcea8882b303897fd47cbece47271512",
"text": "Inference in the presence of outliers is an important field of research as outliers are ubiquitous and may arise across a variety of problems and domains. Bayesian optimization is method that heavily relies on probabilistic inference. This allows outstanding sample efficiency because the probabilistic machinery provides a memory of the whole optimization process. However, that virtue becomes a disadvantage when the memory is populated with outliers, inducing bias in the estimation. In this paper, we present an empirical evaluation of Bayesian optimization methods in the presence of outliers. The empirical evidence shows that Bayesian optimization with robust regression often produces suboptimal results. We then propose a new algorithm which combines robust regression (a Gaussian process with Student-t likelihood) with outlier diagnostics to classify data points as outliers or inliers. By using an scheduler for the classification of outliers, our method is more efficient and has better convergence over the standard robust regression. Furthermore, we show that even in controlled situations with no expected outliers, our method is able to produce better results.",
"title": ""
},
{
"docid": "019c2d5927e54ae8ce3fc7c5b8cff091",
"text": "In this paper, we present Affivir, a video browsing system that recommends Internet videos that match a user’s affective preference. Affivir models a user’s watching behavior as sessions, and dynamically adjusts session parameters to cater to the user’s current mood. In each session, Affivir discovers a user’s affective preference through user interactions, such as watching or skipping videos. Affivir uses video affective features (motion, shot change rate, sound energy, and audio pitch average) to retrieve videos that have similar affective responses. To efficiently search videos of interest from our video repository, all videos in the repository are pre-processed and clustered. Our experimental results shows that Affivir has made a significant improvement in user satisfaction and enjoyment, compared with several other popular baseline approaches.",
"title": ""
},
{
"docid": "b9b0a18b26f563ffc75179e372742636",
"text": "Purpose – The purpose of this paper is to survey, explore and inform researchers about the previous methodologies applied, target audience and coverage of previous assessment of cybersecurity awareness by capturing, summarizing, synthesizing and critically comment on it. It is also conducted to identify the gaps in the cybersecurity awareness assessment research which warrants the future work. Design/methodology/approach – The authors used a systematic literature review technique to search the relevant online databases by using pre-defined keywords. The authors limited the search to retrieve only English language academic articles published from 2005 to 2014. Relevant information was extracted from the retrieved articles, and the ensuing discussion centres on providing the answers to the research questions. Findings – From the online searches, 23 studies that matched the search criteria were retrieved, and the information extracted from each study includes the authors, publication year, assessment method used, target audiences, coverage of assessment and assessment goals. Originality/value – The review of the retrieved articles indicates that no previous research was conducted in the assessment of the cybersecurity awareness using a programme evaluation technique. It was also found that few studies focused on youngsters and on the issue of safeguarding personal information.",
"title": ""
}
] |
scidocsrr
|
8eaf4f6e40e4a0c9585c8d572cd77814
|
A Horizontal Fragmentation Algorithm for the Fact Relation in a Distributed Data Warehouse
|
[
{
"docid": "cd892dec53069137c1c2cfe565375c62",
"text": "Optimal application performance on a Distributed Object Based System (DOBS) requires class fragmentation and the development of allocation schemes to place fragments at distributed sites so data transfer is minimized. Fragmentation enhances application performance by reducing the amount of irrelevant data accessed and the amount of data transferred unnecessarily between distributed sites. Algorithms for effecting horizontal and vertical fragmentation ofrelations exist, but fragmentation techniques for class objects in a distributed object based system are yet to appear in the literature. This paper first reviews a taxonomy of the fragmentation problem in a distributed object base. The paper then contributes by presenting a comprehensive set of algorithms for horizontally fragmenting the four realizable class models on the taxonomy. The fundamental approach is top-down, where the entity of fragmentation is the class object. Our approach consists of first generating primary horizontal fragments of a class based on only applications accessing this class, and secondly generating derived horizontal fragments of the class arising from primary fragments of its subclasses, its complex attributes (contained classes), and/or its complex methods classes. Finally, we combine the sets of primary and derived fragments of each class to produce the best possible fragments. Thus, these algorithms account for inheritance and class composition hierarchies as well as method nesting among objects, and are shown to be polynomial time.",
"title": ""
}
] |
[
{
"docid": "d1114f1ced731a700d40dd97fe62b82b",
"text": "Agricultural sector is playing vital role in Indian economy, in which irrigation mechanism is of key concern. Our paper aims to find the exact field condition and to control the wastage of water in the field and to provide exact controlling of field by using the drip irrigation, atomizing the agricultural environment by using the components and building the necessary hardware. For the precisely monitoring and controlling of the agriculture filed, different types of sensors were used. To implement the proposed system ARM LPC2148 Microcontroller is used. The irrigation mechanism is monitored and controlled more efficiently by the proposed system, which is a real time feedback control system. GSM technology is used to inform the end user about the exact field condition. Actually this method of irrigation system has been proposed primarily to save resources, yield of crops and farm profitability.",
"title": ""
},
{
"docid": "80c21770ada160225e17cb9673fff3b3",
"text": "This paper describes a model to address the task of named-entity recognition on Indonesian microblog messages due to its usefulness for higher-level tasks or text mining applications on Indonesian microblogs. We view our task as a sequence labeling problem using machine learning approach. We also propose various word-level and orthographic features, including the ones that are specific to the Indonesian language. Finally, in our experiment, we compared our model with a baseline model previously proposed for Indonesian formal documents, instead of microblog messages. Our contribution is two-fold: (1) we developed NER tool for Indonesian microblog messages, which was never addressed before, (2) we developed NER corpus containing around 600 Indonesian microblog messages available for future development.",
"title": ""
},
{
"docid": "aed80386c32e16f70fff3cbc44b07d97",
"text": "The vision for the \"Web of Things\" (WoT) aims at bringing physical objects of the world into the World Wide Web. The Web is constantly evolving and has changed over the last couple of decades and the changes have spurted new areas of growth. The primary focus of the WoT is to bridge the gap between physical and digital worlds over a common and widely used platform, which is the Web. Everyday physical \"things\", which are not Web-enabled, and have limited or zero computing capability, can be accommodated within the Web. As a step towards this direction, this work focuses on the specification of a thing, its descriptors and functions that could participate in the process of its discovery and operations. Besides, in this model for the WoT, we also propose a semantic Web-based architecture to integrate these things as Web resources to further demystify the realization of the WoT vision.",
"title": ""
},
{
"docid": "c3c5931200ff752d8285cc1068e779ee",
"text": "Speech-driven facial animation is the process which uses speech signals to automatically synthesize a talking character. The majority of work in this domain creates a mapping from audio features to visual features. This often requires post-processing using computer graphics techniques to produce realistic albeit subject dependent results. We present a system for generating videos of a talking head, using a still image of a person and an audio clip containing speech, that doesn’t rely on any handcrafted intermediate features. To the best of our knowledge, this is the first method capable of generating subject independent realistic videos directly from raw audio. Our method can generate videos which have (a) lip movements that are in sync with the audio and (b) natural facial expressions such as blinks and eyebrow movements 1. We achieve this by using a temporal GAN with 2 discriminators, which are capable of capturing different aspects of the video. The effect of each component in our system is quantified through an ablation study. The generated videos are evaluated based on their sharpness, reconstruction quality, and lip-reading accuracy. Finally, a user study is conducted, confirming that temporal GANs lead to more natural sequences than a static GAN-based approach.",
"title": ""
},
{
"docid": "812c41737bb2a311d45c5566f773a282",
"text": "Acceleration, sprint and agility performance are crucial in sports like soccer. There are few studies regarding the effect of training on youth soccer players in agility performance and in sprint distances shorter than 30 meter. Therefore, the aim of the recent study was to examine the effect of a high-intensity sprint and plyometric training program on 13-year-old male soccer players. A training group of 14 adolescent male soccer players, mean age (±SD) 13.5 years (±0.24) followed an eight week intervention program for one hour per week, and a group of 12 adolescent male soccer players of corresponding age, mean age 13.5 years (±0.23) served as control a group. Preand post-tests assessed 10-m linear sprint, 20-m linear sprint and agility performance. Results showed a significant improvement in agility performance, pre 8.23 s (±0.34) to post 7.69 s (± 0.34) (p<0.01), and a significant improvement in 0-20m linear sprint, pre 3.54s (±0.17) to post 3.42s (±0.18) (p<0.05). In 0-10m sprint the participants also showed an improvement, pre 2.02s (±0.11) to post 1.96s (± 0.11), however this was not significant. The correlation between 10-m sprint and agility was r = 0.53 (p<0.01), and between 20-m linear sprint and agility performance, r = 0.67 (p<0.01). The major finding in the study is the significant improvement in agility performance and in 0-20 m linear sprint in the intervention group. These findings suggest that organizing the training sessions with short-burst high-intensity sprint and plyometric exercises interspersed with adequate recovery time, may result in improvements in both agility and in linear sprint performance in adolescent male soccer players. Another finding is the correlation between linear sprint and agility performance, indicating a difference when compared to adults. 4 | Mathisen: EFFECT OF HIGH-SPEED...",
"title": ""
},
{
"docid": "ccff1c7fa149a033b49c3a6330d4e0f3",
"text": "Stroke is the leading cause of permanent adult disability in the U.S., frequently resulting in chronic motor impairments. Rehabilitation of the upper limb, particularly the hand, is especially important as arm and hand deficits post-stroke limit the performance of activities of daily living and, subsequently, functional independence. Hand rehabilitation is challenging due to the complexity of motor control of the hand. New instrumentation is needed to facilitate examination of the hand. Thus, a novel actuated exoskeleton for the index finger, the FingerBot, was developed to permit the study of finger kinetics and kinematics under a variety of conditions. Two such novel environments, one applying a spring-like extension torque proportional to angular displacement at each finger joint and another applying a constant extension torque at each joint, were compared in 10 stroke survivors with the FingerBot. Subjects attempted to reach targets located throughout the finger workspace. The constant extension torque assistance resulted in a greater workspace area (p < 0.02) and a larger active range of motion for the metacarpophalangeal joint (p < 0.01) than the spring-like assistance. Additionally, accuracy in terms of reaching the target was greater with the constant extension assistance as compared to no assistance. The FingerBot can be a valuable tool in assessing various hand rehabilitation paradigms following stroke.",
"title": ""
},
{
"docid": "177c5969917e04ea94773d1c545fae82",
"text": "Attitudes toward global warming are influenced by various heuristics, which may distort policy away from what is optimal for the well-being of people. These possible distortions, or biases, include: a focus on harms that we cause, as opposed to those that we can remedy more easily; a feeling that those who cause a problem should fix it; a desire to undo a problem rather than compensate for its presence; parochial concern with one’s own group (nation); and neglect of risks that are not available. Although most of these biases tend to make us attend relatively too much to global warming, other biases, such as wishful thinking, cause us to attend too little. I discuss these possible effects and illustrate some of them with an experiment conducted on the World Wide Web.",
"title": ""
},
{
"docid": "34382f9716058d727f467716350788a7",
"text": "The structure of the brain and the nature of evolution suggest that, despite its uniqueness, language likely depends on brain systems that also subserve other functions. The declarative/procedural (DP) model claims that the mental lexicon of memorized word-specific knowledge depends on the largely temporal-lobe substrates of declarative memory, which underlies the storage and use of knowledge of facts and events. The mental grammar, which subserves the rule-governed combination of lexical items into complex representations, depends on a distinct neural system. This system, which is composed of a network of specific frontal, basal-ganglia, parietal and cerebellar structures, underlies procedural memory, which supports the learning and execution of motor and cognitive skills, especially those involving sequences. The functions of the two brain systems, together with their anatomical, physiological and biochemical substrates, lead to specific claims and predictions regarding their roles in language. These predictions are compared with those of other neurocognitive models of language. Empirical evidence is presented from neuroimaging studies of normal language processing, and from developmental and adult-onset disorders. It is argued that this evidence supports the DP model. It is additionally proposed that \"language\" disorders, such as specific language impairment and non-fluent and fluent aphasia, may be profitably viewed as impairments primarily affecting one or the other brain system. Overall, the data suggest a new neurocognitive framework for the study of lexicon and grammar.",
"title": ""
},
{
"docid": "b741698d7e4d15cb7f4e203f2ddbce1d",
"text": "This study examined the process of how socioeconomic status, specifically parents' education and income, indirectly relates to children's academic achievement through parents' beliefs and behaviors. Data from a national, cross-sectional study of children were used for this study. The subjects were 868 8-12-year-olds, divided approximately equally across gender (436 females, 433 males). This sample was 49% non-Hispanic European American and 47% African American. Using structural equation modeling techniques, the author found that the socioeconomic factors were related indirectly to children's academic achievement through parents' beliefs and behaviors but that the process of these relations was different by racial group. Parents' years of schooling also was found to be an important socioeconomic factor to take into consideration in both policy and research when looking at school-age children.",
"title": ""
},
{
"docid": "8ba9439094fae89d6ff14d03476878b9",
"text": "In this paper we present a framework for the real-time control of lightweight autonomous vehicles which comprehends a proposed hardand software design. The system can be used for many kinds of vehicles and offers high computing power and flexibility in respect of the control algorithms and additional application dependent tasks. It was originally developed to control a small quad-rotor UAV where stringent restrictions in weight and size of the hardware components exist, but has been transfered to a fixed-wing UAV and a ground vehicle for inand outdoor search and rescue missions. The modular structure and the use of a standard PC architecture at an early stage simplifies reuse of components and fast integration of new features. Figure 1: Quadrotor UAV controlled by the proposed system",
"title": ""
},
{
"docid": "5f96b65c7facf35cd0b2e629a2e98662",
"text": "Effectively evaluating visualization techniques is a difficult task often assessed through feedback from user studies and expert evaluations. This work presents an alternative approach to visualization evaluation in which brain activity is passively recorded using electroencephalography (EEG). These measurements are used to compare different visualization techniques in terms of the burden they place on a viewer’s cognitive resources. In this paper, EEG signals and response times are recorded while users interpret different representations of data distributions. This information is processed to provide insight into the cognitive load imposed on the viewer. This paper describes the design of the user study performed, the extraction of cognitive load measures from EEG data, and how those measures are used to quantitatively evaluate the effectiveness of visualizations.",
"title": ""
},
{
"docid": "9ae370847ec965a3ce9c7636f8d6a726",
"text": "In this paper we present a wearable device for control of home automation systems via hand gestures. This solution has many advantages over traditional home automation interfaces in that it can be used by those with loss of vision, motor skills, and mobility. By combining other sources of context with the pendant we can reduce the number and complexity of gestures while maintaining functionality. As users input gestures, the system can also analyze their movements for pathological tremors. This information can then be used for medical diagnosis, therapy, and emergency services.Currently, the Gesture Pendant can recognize control gestures with an accuracy of 95% and userdefined gestures with an accuracy of 97% It can detect tremors above 2HZ within .1 Hz.",
"title": ""
},
{
"docid": "3d9e279afe4ba8beb1effd4f26550f67",
"text": "We propose and demonstrate a scheme for boosting the efficiency of entanglement distribution based on a decoherence-free subspace over lossy quantum channels. By using backward propagation of a coherent light, our scheme achieves an entanglement-sharing rate that is proportional to the transmittance T of the quantum channel in spite of encoding qubits in multipartite systems for the decoherence-free subspace. We experimentally show that highly entangled states, which can violate the Clauser-Horne-Shimony-Holt inequality, are distributed at a rate proportional to T.",
"title": ""
},
{
"docid": "97561632e9d87093a5de4f1e4b096df7",
"text": "Recommender systems are now popular both commercially and in the research community, where many approaches have been suggested for providing recommendations. In many cases a system designer that wishes to employ a recommendation system must choose between a set of candidate approaches. A first step towards selecting an appropriate algorithm is to decide which properties of the application to focus upon when making this choice. Indeed, recommendation systems have a variety of properties that may affect user experience, such as accuracy, robustness, scalability, and so forth. In this paper we discuss how to compare recommenders based on a set of properties that are relevant for the application. We focus on comparative studies, where a few algorithms are compared using some evaluation metric, rather than absolute benchmarking of algorithms. We describe experimental settings appropriate for making choices between algorithms. We review three types of experiments, starting with an offline setting, where recommendation approaches are compared without user interaction, then reviewing user studies, where a small group of subjects experiment with the system and report on the experience, and finally describe large scale online experiments, where real user populations interact with the system. In each of these cases we describe types of questions that can be answered, and suggest protocols for experimentation. We also discuss how to draw trustworthy conclusions from the conducted experiments. We then review a large set of properties, and explain how to evaluate systems given relevant properties. We also survey a large set of evaluation metrics in the context of the property that they evaluate. Guy Shani Microsoft Research, One Microsoft Way, Redmond, WA, e-mail: [email protected] Asela Gunawardana Microsoft Research, One Microsoft Way, Redmond, WA, e-mail: [email protected]",
"title": ""
},
{
"docid": "5c469bbeb053c187c2d14fd9f27c4426",
"text": "Fatigue damage increases with applied load cycles in a cumulative manner. Cumulative fatigue damage analysis plays a key role in life prediction of components and structures subjected to field load histories. Since the introduction of damage accumulation concept by Palmgren about 70 years ago and ‘linear damage rule’ by Miner about 50 years ago, the treatment of cumulative fatigue damage has received increasingly more attention. As a result, many damage models have been developed. Even though early theories on cumulative fatigue damage have been reviewed by several researchers, no comprehensive report has appeared recently to review the considerable efforts made since the late 1970s. This article provides a comprehensive review of cumulative fatigue damage theories for metals and their alloys. emphasizing the approaches developed between the early 1970s to the early 1990s. These theories are grouped into six categories: linear damage rules; nonlinear damage curve and two-stage linearization approaches; life curve modification methods; approaches based on crack growth concepts: continuum damage mechanics models: and energy-based theories.",
"title": ""
},
{
"docid": "b0bcd65de1841474dba09e9b1b5c2763",
"text": "Modern web clickstream data consists of long, high-dimensional sequences of multivariate events, making it difficult to analyze. Following the overarching principle that the visual interface should provide information about the dataset at multiple levels of granularity and allow users to easily navigate across these levels, we identify four levels of granularity in clickstream analysis: patterns, segments, sequences and events. We present an analytic pipeline consisting of three stages: pattern mining, pattern pruning and coordinated exploration between patterns and sequences. Based on this approach, we discuss properties of maximal sequential patterns, propose methods to reduce the number of patterns and describe design considerations for visualizing the extracted sequential patterns and the corresponding raw sequences. We demonstrate the viability of our approach through an analysis scenario and discuss the strengths and limitations of the methods based on user feedback.",
"title": ""
},
{
"docid": "c31ffcb1514f437313c2f3f0abaf3a88",
"text": "Identifying temporal relations between events is an essential step towards natural language understanding. However, the temporal relation between two events in a story depends on, and is often dictated by, relations among other events. Consequently, effectively identifying temporal relations between events is a challenging problem even for human annotators. This paper suggests that it is important to take these dependencies into account while learning to identify these relations and proposes a structured learning approach to address this challenge. As a byproduct, this provides a new perspective on handling missing relations, a known issue that hurts existing methods. As we show, the proposed approach results in significant improvements on the two commonly used data sets for this problem.",
"title": ""
},
{
"docid": "2a68d57f8d59205122dd11461accecab",
"text": "A resistive methanol sensor based on ZnO hexagonal nanorods having average diameter (60–70 nm) and average length of <formula formulatype=\"inline\"><tex Notation=\"TeX\">${\\sim}{\\rm 500}~{\\rm nm}$</tex></formula>, is reported in this paper. A low temperature chemical bath deposition technique is employed to deposit vertically aligned ZnO hexagonal nanorods using zinc acetate dihydrate and hexamethylenetetramine (HMT) precursors at 100<formula formulatype=\"inline\"><tex Notation=\"TeX\">$^{\\circ}{\\rm C}$</tex></formula> on a <formula formulatype=\"inline\"><tex Notation=\"TeX\">${\\rm SiO}_{2}$</tex></formula> substrate having Sol-Gel grown ZnO seed layer. After structural (XRD, FESEM) and electrical (Hall effect) characterizations, four types of sensors structures incorporating the effect of catalytic metal electrode (Pd-Ag) and Pd nanoparticle sensitization, are fabricated and tested for sensing methanol vapor in the temperature range of 27<formula formulatype=\"inline\"><tex Notation=\"TeX\">$^{\\circ}{\\rm C}$</tex> </formula>–300<formula formulatype=\"inline\"><tex Notation=\"TeX\">$^{\\circ}{\\rm C}$</tex></formula>. The as deposited ZnO nanorods with Pd-Ag catalytic contact offered appreciably high dynamic range (190–3040 ppm) at moderately lower temperature (200<formula formulatype=\"inline\"><tex Notation=\"TeX\">$^{\\circ}{\\rm C}$</tex></formula>) compared to the sensors with noncatalytic electrode (Au). Surface modification of nanorods by Pd nanoparticles offered faster response and recovery with increased response magnitude for both type of electrodes, but at the cost of lower dynamic range (190–950 ppm). The possible sensing mechanism has also been discussed briefly.",
"title": ""
},
{
"docid": "ef1f34e7bc08b78bfbf7317cd102c89e",
"text": "Most modern trackers typically employ a bounding box given in the first frame to track visual objects, where their tracking results are often sensitive to the initialization. In this paper, we propose a new tracking method, Reliable Patch Trackers (RPT), which attempts to identify and exploit the reliable patches that can be tracked effectively through the whole tracking process. Specifically, we present a tracking reliability metric to measure how reliably a patch can be tracked, where a probability model is proposed to estimate the distribution of reliable patches under a sequential Monte Carlo framework. As the reliable patches distributed over the image, we exploit the motion trajectories to distinguish them from the background. Therefore, the visual object can be defined as the clustering of homo-trajectory patches, where a Hough voting-like scheme is employed to estimate the target state. Encouraging experimental results on a large set of sequences showed that the proposed approach is very effective and in comparison to the state-of-the-art trackers. The full source code of our implementation will be publicly available.",
"title": ""
},
{
"docid": "90084e7b31e89f5eb169a0824dde993b",
"text": "In this work, we present a novel way of using neural network for graph-based dependency parsing, which fits the neural network into a simple probabilistic model and can be furthermore generalized to high-order parsing. Instead of the sparse features used in traditional methods, we utilize distributed dense feature representations for neural network, which give better feature representations. The proposed parsers are evaluated on English and Chinese Penn Treebanks. Compared to existing work, our parsers give competitive performance with much more efficient inference.",
"title": ""
}
] |
scidocsrr
|
8448fb512ca5914104577a33dfa0868c
|
Exploring Shared Structures and Hierarchies for Multiple NLP Tasks
|
[
{
"docid": "7f74c519207e469c39f81d52f39438a0",
"text": "Automatic sentiment classification has been extensively studied and applied in recent years. However, sentiment is expressed differently in different domains, and annotating corpora for every possible domain of interest is impractical. We investigate domain adaptation for sentiment classifiers, focusing on online reviews for different types of products. First, we extend to sentiment classification the recently-proposed structural correspondence learning (SCL) algorithm, reducing the relative error due to adaptation between domains by an average of 30% over the original SCL algorithm and 46% over a supervised baseline. Second, we identify a measure of domain similarity that correlates well with the potential for adaptation of a classifier from one domain to another. This measure could for instance be used to select a small set of domains to annotate whose trained classifiers would transfer well to many other domains.",
"title": ""
},
{
"docid": "9c4a7d9a313472d9d321578937ca1015",
"text": "Multi-task learning (MTL) with neural networks leverages commonalities in tasks to improve performance, but often suffers from task interference which reduces the benefits of transfer. To address this issue we introduce the routing network paradigm, a novel neural network and training algorithm. A routing network is a kind of self-organizing neural network consisting of two components: a router and a set of one or more function blocks. A function block may be any neural network – for example a fully-connected or a convolutional layer. Given an input the router makes a routing decision, choosing a function block to apply and passing the output back to the router recursively, terminating when a fixed recursion depth is reached. In this way the routing network dynamically composes different function blocks for each input. We employ a collaborative multi-agent reinforcement learning (MARL) approach to jointly train the router and function blocks. We evaluate our model against cross-stitch networks and shared-layer baselines on multi-task settings of the MNIST, mini-imagenet, and CIFAR-100 datasets. Our experiments demonstrate a significant improvement in accuracy, with sharper convergence. In addition, routing networks have nearly constant per-task training cost while cross-stitch networks scale linearly with the number of tasks. On CIFAR100 (20 tasks) we obtain cross-stitch performance levels with an 85% reduction in training time.",
"title": ""
},
{
"docid": "2aa74c713a646e28f7cf8fb2e4c40364",
"text": "Semantic composition functions have been playing a pivotal role in neural representation learning of text sequences. In spite of their success, most existing models suffer from the underfitting problem: they use the same shared compositional function on all the positions in the sequence, thereby lacking expressive power due to incapacity to capture the richness of compositionality. Besides, the composition functions of different tasks are independent and learned from scratch. In this paper, we propose a new sharing scheme of composition function across multiple tasks. Specifically, we use a shared meta-network to capture the meta-knowledge of semantic composition and generate the parameters of the taskspecific semantic composition models. We conduct extensive experiments on two types of tasks, text classification and sequence tagging, which demonstrate the benefits of our approach. Besides, we show that the shared meta-knowledge learned by our proposed model can be regarded as off-theshelf knowledge and easily transferred to new tasks.",
"title": ""
},
{
"docid": "dadd12e17ce1772f48eaae29453bc610",
"text": "Publications Learning Word Vectors for Sentiment Analysis. Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. The 49 th Annual Meeting of the Association for Computational Linguistics (ACL 2011). Spectral Chinese Restaurant Processes: Nonparametric Clustering Based on Similarities. Richard Socher, Andrew Maas, and Christopher D. Manning. The 15 th International Conference on Artificial Intelligence and Statistics (AISTATS 2010). A Probabilistic Model for Semantic Word Vectors. Andrew L. Maas and Andrew Y. Ng. NIPS 2010 Workshop on Deep Learning and Unsupervised Feature Learning. One-Shot Learning with Bayesian Networks. Andrew L. Maas and Charles Kemp. Proceedings of the 31 st",
"title": ""
}
] |
[
{
"docid": "101e93562935c799c3c3fa62be98bf09",
"text": "This paper presents a technical approach to robot learning of motor skills which combines active intrinsically motivated learning with imitation learning. Our architecture, called SGIM-D, allows efficient learning of high-dimensional continuous sensorimotor inverse models in robots, and in particular learns distributions of parameterised motor policies that solve a corresponding distribution of parameterised goals/tasks. This is made possible by the technical integration of imitation learning techniques within an algorithm for learning inverse models that relies on active goal babbling. After reviewing social learning and intrinsic motivation approaches to action learning, we describe the general framework of our algorithm, before detailing its architecture. In an experiment where a robot arm has to learn to use a flexible fishing line , we illustrate that SGIM-D efficiently combines the advantages of social learning and intrinsic motivation and benefits from human demonstration properties to learn how to produce varied outcomes in the environment, while developing more precise control policies in large spaces.",
"title": ""
},
{
"docid": "d58b5ba27042645edb91fdf9f463d078",
"text": "Coastal ecosystems and the services they provide are adversely affected by a wide variety of human activities. In particular, seagrass meadows are negatively affected by impacts accruing from the billion or more people who live within 50 km of them. Seagrass meadows provide important ecosystem services, including an estimated $1.9 trillion per year in the form of nutrient cycling; an order of magnitude enhancement of coral reef fish productivity; a habitat for thousands of fish, bird, and invertebrate species; and a major food source for endangered dugong, manatee, and green turtle. Although individual impacts from coastal development, degraded water quality, and climate change have been documented, there has been no quantitative global assessment of seagrass loss until now. Our comprehensive global assessment of 215 studies found that seagrasses have been disappearing at a rate of 110 km(2) yr(-1) since 1980 and that 29% of the known areal extent has disappeared since seagrass areas were initially recorded in 1879. Furthermore, rates of decline have accelerated from a median of 0.9% yr(-1) before 1940 to 7% yr(-1) since 1990. Seagrass loss rates are comparable to those reported for mangroves, coral reefs, and tropical rainforests and place seagrass meadows among the most threatened ecosystems on earth.",
"title": ""
},
{
"docid": "c37296d4b2673e69ecbe78a3fb1d4440",
"text": "Deep learning-based techniques have achieved stateof-the-art performance on a wide variety of recognition and classification tasks. However, these networks are typically computationally expensive to train, requiring weeks of computation on many GPUs; as a result, many users outsource the training procedure to the cloud or rely on pre-trained models that are then fine-tuned for a specific task. In this paper we show that outsourced training introduces new security risks: an adversary can create a maliciously trained network (a backdoored neural network, or a BadNet) that has state-of-theart performance on the user’s training and validation samples, but behaves badly on specific attacker-chosen inputs. We first explore the properties of BadNets in a toy example, by creating a backdoored handwritten digit classifier. Next, we demonstrate backdoors in a more realistic scenario by creating a U.S. street sign classifier that identifies stop signs as speed limits when a special sticker is added to the stop sign; we then show in addition that the backdoor in our US street sign detector can persist even if the network is later retrained for another task and cause a drop in accuracy of 25% on average when the backdoor trigger is present. These results demonstrate that backdoors in neural networks are both powerful and—because the behavior of neural networks is difficult to explicate— stealthy. This work provides motivation for further research into techniques for verifying and inspecting neural networks, just as we have developed tools for verifying and debugging",
"title": ""
},
{
"docid": "958fea977cf31ddabd291da68754367d",
"text": "Recently, learning based hashing techniques have attracted broad research interests because they can support efficient storage and retrieval for high-dimensional data such as images, videos, documents, etc. However, a major difficulty of learning to hash lies in handling the discrete constraints imposed on the pursued hash codes, which typically makes hash optimizations very challenging (NP-hard in general). In this work, we propose a new supervised hashing framework, where the learning objective is to generate the optimal binary hash codes for linear classification. By introducing an auxiliary variable, we reformulate the objective such that it can be solved substantially efficiently by employing a regularization algorithm. One of the key steps in this algorithm is to solve a regularization sub-problem associated with the NP-hard binary optimization. We show that the sub-problem admits an analytical solution via cyclic coordinate descent. As such, a high-quality discrete solution can eventually be obtained in an efficient computing manner, therefore enabling to tackle massive datasets. We evaluate the proposed approach, dubbed Supervised Discrete Hashing (SDH), on four large image datasets and demonstrate its superiority to the state-of-the-art hashing methods in large-scale image retrieval.",
"title": ""
},
{
"docid": "1999df76f8d2a8ee35f6b76b59954a37",
"text": "Humans, infants and adults alike, automatically mimic a variety of behaviors. Such mimicry facilitates social functioning, including establishment of interpersonal rapport and understanding of other minds. This fundamental social process may thus be impaired in disorders such as autism characterized by socio-emotional and communicative deficits. We examined automatic and voluntary mimicry of emotional facial expression among adolescents and adults with autistic spectrum disorders (ASD) and a typical sample matched on age, gender and verbal intelligence. Participants viewed pictures of happy and angry expressions while the activity over their cheek and brow muscle region was monitored with electromyography (EMG). ASD participants did not automatically mimic facial expressions whereas the typically developing participants did. However, both groups showed evidence of successful voluntary mimicry. The data suggest that autism is associated with an impairment of a basic automatic social-emotion process. Results have implications for understanding typical and atypical social cognition.",
"title": ""
},
{
"docid": "7a5e65dde7af8fe05654ea9d5c3b7861",
"text": "The objective of this paper is to provide a comparison among permanent magnet (PM) wind generators of different topologies. Seven configurations are chosen for the comparison, consisting of both radial-flux and axial-flux machines. The comparison is done at seven power levels ranging from 1 to 200 kW. The basis for the comparison is discussed and implemented in detail in the design procedure. The criteria used for comparison are considered to be critical for the efficient deployment of PM wind generators. The design data are optimized and verified by finite-element analysis and commercial generator test results. For a given application, the results provide an indication of the best-suited machine.",
"title": ""
},
{
"docid": "334e29faadafff9a0d6e0017ea1d2fef",
"text": "OBJECTIVES\nTo provide typical examples of biomedical ontologies in action, emphasizing the role played by biomedical ontologies in knowledge management, data integration and decision support.\n\n\nMETHODS\nBiomedical ontologies selected for their practical impact are examined from a functional perspective. Examples of applications are taken from operational systems and the biomedical literature, with a bias towards recent journal articles.\n\n\nRESULTS\nThe ontologies under investigation in this survey include SNOMED CT, the Logical Observation Identifiers, Names, and Codes (LOINC), the Foundational Model of Anatomy, the Gene Ontology, RxNorm, the National Cancer Institute Thesaurus, the International Classification of Diseases, the Medical Subject Headings (MeSH) and the Unified Medical Language System (UMLS). The roles played by biomedical ontologies are classified into three major categories: knowledge management (indexing and retrieval of data and information, access to information, mapping among ontologies); data integration, exchange and semantic interoperability; and decision support and reasoning (data selection and aggregation, decision support, natural language processing applications, knowledge discovery).\n\n\nCONCLUSIONS\nOntologies play an important role in biomedical research through a variety of applications. While ontologies are used primarily as a source of vocabulary for standardization and integration purposes, many applications also use them as a source of computable knowledge. Barriers to the use of ontologies in biomedical applications are discussed.",
"title": ""
},
{
"docid": "e7f49d6abd6aaaf105a8949ade40aa65",
"text": "Study Term Definition Boudreau and Hagiu (2009) Multisided platform, e.g. App store Platforms are products, services, or technologies that serve as foundations upon which other parties can build complementary products, services, or technologies A multisided platform is both a platform and a market intermediary. Distinct groups of consumers and “complementors” interact through multisided platforms. Boudreau (2012) Handheld computer platforms Computer platforms are a particular type of multisided platforms, which support interactions across multiple sets of actors and can facilitate technical development. Network effects result from a large number of independent software producers creating applications. Ceccagnoli et al. (2012) Platform A platform refers to the components used in common across a product family whose functionality can be extended by applications Fichman (2004) IT platform An IT platform is broadly defined as a general-purpose technology that enables a family of applications and related business opportunities. This includes computing platforms (e.g., Palm OS), infrastructure platforms (e.g., wireless networking), software development platforms (e.g., Java), and enterprise application platforms (e.g., ERP). Tiwana et al. (2010) Software based platform Software based platform is the extensible codebase of a software-based system that provides core functionality shared by the modules that interoperate with it and the interfaces through which they interoperate. Module is an add-on software subsystem that connects to the platform to add functionality to it (e.g., iOS apps, modular innovation). the collection of the platform and the modules specific to that platform as that platform’s ecosystem .",
"title": ""
},
{
"docid": "acf55a4257dab1e79e8b7d8e4a6bcf96",
"text": "Using the suffix tree of a string S, decision queries of the type “Is P a substring of S?” can be answered in O(|P |) time and enumeration queries of the type “Where are all z occurrences of P in S?” can be answered in O(|P |+z) time, totally independent of the size of S. However, in large scale applications as genome analysis, the space requirements of the suffix tree are a severe drawback. The suffix array is a more space economical index structure. Using it and an additional table, Manber and Myers (1993) showed that decision queries and enumeration queries can be answered in O(|P |+log |S|) and O(|P |+log |S|+z) time, respectively, but no optimal time algorithms are known. In this paper, we show how to achieve the optimal O(|P |) and O(|P |+ z) time bounds for the suffix array. Our approach is not confined to exact pattern matching. In fact, it can be used to efficiently solve all problems that are usually solved by a top-down traversal of the suffix tree. Experiments show that our method is not only of theoretical interest but also of practical relevance.",
"title": ""
},
{
"docid": "ce5fc5fbb3cb0fb6e65ca530bfc097b1",
"text": "The Bulgarian electricity market rules require from the transmission system operator, to procure electricity for covering transmission grid losses on hourly base before day-ahead gate closure. In this paper is presented a software solution for day-ahead forecasting of hourly transmission losses that is based on statistical approach of the impacting factors correlations and uses as inputs numerical weather predictions.",
"title": ""
},
{
"docid": "ee65be73600a223c6d4b1a7a2773228c",
"text": "In this paper a highly configurable, real-time analysis system to automatically record, analyze and visualize high level aggregated information of user interventions in Twitter is described. The system is designed to provide public entities with a powerful tool to rapidly and easily understand what the citizen behavior trends are, what their opinion about city services, events, etc. is, and also may used as a primary alert system that may improve the efficiency of emergency systems. The citizen is here observed as a proactive city sensor capable of generating huge amounts of very rich, high-level and valuable data through social media platforms, which, after properly processed, summarized and annotated, allows city administrators to better understand citizen necessities. The architecture and component blocks are described and some key details of the design, implementation and scenarios of application are discussed.",
"title": ""
},
{
"docid": "00d512bce77790afd830ffc4fa49c317",
"text": "How can we find data for quality prediction? Early in the life cycle, projects may lack the data needed to build such predictors. Prior work assumed that relevant training data was found nearest to the local project. But is this the best approach? This paper introduces the Peters filter which is based on the following conjecture: When local data is scarce, more information exists in other projects. Accordingly, this filter selects training data via the structure of other projects. To assess the performance of the Peters filter, we compare it with two other approaches for quality prediction. Within-company learning and cross-company learning with the Burak filter (the state-of-the-art relevancy filter). This paper finds that: 1) within-company predictors are weak for small data-sets; 2) the Peters filter+cross-company builds better predictors than both within-company and the Burak filter+cross-company; and 3) the Peters filter builds 64% more useful predictors than both within-company and the Burak filter+cross-company approaches. Hence, we recommend the Peters filter for cross-company learning.",
"title": ""
},
{
"docid": "724799864fe2556a8239436edf1c87f3",
"text": "Efficient shadowing algorithms have been sought for decades, but most shadow research focuses on quickly identifying shadows on surfaces. This paper introduces a novel algorithm to efficiently sample light visibility at points inside a volume. These voxelized shadow volumes (VSVs) extend shadow maps to allow efficient, simultaneous queries of visibility along view rays, or can alternately be seen as a discretized shadow volume. We voxelize the scene into a binary, epipolar-space grid where we apply a fast parallel scan to identify shadowed voxels. Using a view-dependent grid, our GPU implementation looks up 128 visibility samples along any eye ray with a single texture fetch. We demonstrate our algorithm in the context of interactive shadows in homogeneous, single-scattering participating media.",
"title": ""
},
{
"docid": "32699147f4915dc4e2d7708ade19ea5b",
"text": "Occlusions, complex backgrounds, scale variations and non-uniform distributions present great challenges for crowd counting in practical applications. In this paper, we propose a novel method using an attention model to exploit head locations which are the most important cue for crowd counting. The attention model estimates a probability map in which high probabilities indicate locations where heads are likely to be present. The estimated probability map is used to suppress nonhead regions in feature maps from several multi-scale feature extraction branches of a convolutional neural network for crowd density estimation, which makes our method robust to complex backgrounds, scale variations and non-uniform distributions. In addition, we introduce a relative deviation loss to compensate a commonly used training loss, Euclidean distance, to improve the accuracy of sparse crowd density estimation. Experiments on ShanghaiTech, UCF CC 50 and WorldExpo’10 datasets demonstrate the effectiveness of our method.",
"title": ""
},
{
"docid": "9d1d02358e32a5c40c35e573f63d5366",
"text": "Ensuring communications security in Wireless Sensor Networks (WSNs) indeed is critical; due to the criticality of the resources in the sensor nodes as well as due to their ubiquitous and pervasive deployment, with varying attributes and degrees of security required. The proliferation of the next generation sensor nodes, has not solved this problem, because of the greater emphasis on low-cost deployment. In addition, the WSNs use data-centric multi-hop communication that in turn, necessitates the security support to be devised at the link layer (increasing the cost of security related operations), instead of being at the application layer, as in general networks. Therefore, an energy-efficient link layer security framework is necessitated. There do exists a number of link layer security architectures that offer some combinations of the security attributes desired by different WSN applications. However, as we show in this paper, none of them is responsive to the actual security demands of the applications. Therefore, we believe that there is a need for investigating the feasibility of a configurable software-based link layer security architecture wherein an application can be compiled flexibly, with respect to its actual security demands. In this paper, we analyze, propose and experiment with the basic design of such configurable link layer security architecture for WSNs. We also experimentally evaluate various aspects related to our scheme viz. configurable block ciphers, configurable block cipher modes of operations, configurable MAC sizes and configurable replay protection. The architecture proposed is aimed to offer the optimal level of security at the minimal overhead, thus saving the precious resources in the WSNs.",
"title": ""
},
{
"docid": "1655080e43831fa11643fd6d6a478a2a",
"text": "A novel topology for a soft-switching buck dc– dc converter with a coupled inductor is proposed. The softswitching buck converter has advantages over the traditional hardswitching converters. The most significant advantage is that it offers a lower switching loss. This converter operates under a zero-current switching condition at turn on and a zerovoltage switching condition at turn off. It presents the circuit configuration with a least components for realizing soft switching. Because of soft switching, the proposed converter can attain a high efficiency under heavy load conditions. Likewise, a high efficiency is also attained under light load conditions, which is significantly different from other soft switching buck converters. Keywords— Buck converter, coupled inductor, soft switching, zero-current switching (ZCS), zero-voltage switching (ZVS).",
"title": ""
},
{
"docid": "3bd2bfd1c7652f8655d009c085d6ed5c",
"text": "The past decade has witnessed the boom of human-machine interactions, particularly via dialog systems. In this paper, we study the task of response generation in open-domain multi-turn dialog systems. Many research efforts have been dedicated to building intelligent dialog systems, yet few shed light on deepening or widening the chatting topics in a conversational session, which would attract users to talk more. To this end, this paper presents a novel deep scheme consisting of three channels, namely global, wide, and deep ones. The global channel encodes the complete historical information within the given context, the wide one employs an attention-based recurrent neural network model to predict the keywords that may not appear in the historical context, and the deep one trains a Multi-layer Perceptron model to select some keywords for an in-depth discussion. Thereafter, our scheme integrates the outputs of these three channels to generate desired responses. To justify our model, we conducted extensive experiments to compare our model with several state-of-the-art baselines on two datasets: one is constructed by ourselves and the other is a public benchmark dataset. Experimental results demonstrate that our model yields promising performance by widening or deepening the topics of interest.",
"title": ""
},
{
"docid": "25af730c2a44b96e95058942b498dd32",
"text": "We introduce a manually-created, multireference dataset for abstractive sentence and short paragraph compression. First, we examine the impact of singleand multi-sentence level editing operations on human compression quality as found in this corpus. We observe that substitution and rephrasing operations are more meaning preserving than other operations, and that compressing in context improves quality. Second, we systematically explore the correlations between automatic evaluation metrics and human judgments of meaning preservation and grammaticality in the compression task, and analyze the impact of the linguistic units used and precision versus recall measures on the quality of the metrics. Multi-reference evaluation metrics are shown to offer significant advantage over single reference-based metrics.",
"title": ""
}
] |
scidocsrr
|
57139a5ab5e2e62b1e007675e58aa65a
|
Boosting Trees for Anti-Spam Email Filtering
|
[
{
"docid": "b45aae55cc4e7bdb13463eff7aaf6c60",
"text": "Text retrieval systems typically produce a ranking of documents and let a user decide how far down that ranking to go. In contrast, programs that filter text streams, software that categorizes documents, agents which alert users, and many other IR systems must make decisions without human input or supervision. It is important to define what constitutes good effectiveness for these autonomous systems, tune the systems to achieve the highest possible effectiveness, and estimate how the effectiveness changes as new data is processed. We show how to do this for binary text classification systems, emphasizing that different goals for the system lead to different optimal behaviors. Optimizing and estimating effectiveness is greatly aided if classifiers that explicitly estimate the probability of class membership are used.",
"title": ""
}
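The passage above argues that autonomous filters should be tuned to an explicit effectiveness goal using classifiers that estimate class-membership probabilities. A minimal illustration of that idea is sketched below under assumed misclassification costs; the cost numbers and function names are mine, not from the paper.

```python
# Hedged sketch: turning probability estimates plus a cost goal into a decision
# rule. The cost values are illustrative assumptions, not from the paper.

def optimal_threshold(cost_fp, cost_fn):
    """Classify as positive (e.g. spam) when P(positive) exceeds this threshold.

    Deciding positive costs cost_fp with probability (1 - P); deciding negative
    costs cost_fn with probability P. Minimising expected cost gives the classic
    threshold cost_fp / (cost_fp + cost_fn).
    """
    return cost_fp / (cost_fp + cost_fn)

def decide(prob_positive, cost_fp, cost_fn):
    return prob_positive >= optimal_threshold(cost_fp, cost_fn)

# A goal where false positives (losing a legitimate mail) are 9x worse than
# false negatives pushes the threshold up to 0.9.
print(optimal_threshold(cost_fp=9.0, cost_fn=1.0))   # 0.9
print(decide(0.85, cost_fp=9.0, cost_fn=1.0))        # False: not confident enough
```

Different system goals simply change the cost matrix, which changes the threshold and hence the optimal behavior, as the abstract emphasizes.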
] |
[
{
"docid": "bac20e8ec46424ecb5e8211ebd369283",
"text": "Learning sparse feature representations is a useful instrument for solving an unsupervised learning problem. In this paper, we present three labeled handwritten digit datasets, collectively called n-MNIST by adding noise to the MNIST dataset, and three labeled datasets formed by adding noise to the offline Bangla numeral database. Then we propose a novel framework for the classification of handwritten digits that learns sparse representations using probabilistic quadtrees and Deep Belief Nets. On the MNIST, n-MNIST and noisy Bangla datasets, our framework shows promising results and outperforms traditional Deep Belief Networks.",
"title": ""
},
{
"docid": "022a5d9fc952e093ccb835bb63310862",
"text": "Converting a sentence to a meaningful vector representation has uses in many NLP tasks, however very few methods allow that representation to be restored to a human readable sentence. Being able to generate sentences from the vector representations demonstrates the level of information maintained by the embedding representation – in this case a simple sum of word embeddings. We introduce such a method for moving from this vector representation back to the original sentences. This is done using a two stage process, first a greedy algorithm is utilised to convert the vector to a bag of words, and second a simple probabilistic language model is used to order the words to get back the sentence. To the best of our knowledge this is the first work to demonstrate quantitatively the ability to reproduce text from a large corpus based directly on its sentence embeddings.",
"title": ""
},
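The abstract above describes a two-stage reconstruction: a greedy step that turns a sum-of-word-embeddings vector back into a bag of words, followed by a language-model ordering step. The snippet below sketches only the first, greedy stage under my own assumptions (toy vocabulary, residual-norm criterion); the paper's exact criterion and stopping rule may differ.

```python
# Hedged sketch of the greedy stage: repeatedly add the vocabulary word whose
# embedding most reduces the residual between the target vector and the sum of
# the words chosen so far. Vocabulary, embeddings and the stopping rule are toy
# assumptions for illustration only.
import numpy as np

def greedy_bag_of_words(target, vocab, emb, max_words=20):
    residual = target.astype(float).copy()
    bag = []
    for _ in range(max_words):
        # Norm of the residual after hypothetically adding each word.
        gains = np.linalg.norm(residual[None, :] - emb, axis=1)
        best = int(np.argmin(gains))
        if gains[best] >= np.linalg.norm(residual):   # no word helps any more
            break
        bag.append(vocab[best])
        residual -= emb[best]
    return bag

rng = np.random.default_rng(1)
vocab = ["the", "cat", "sat", "on", "mat", "dog"]
emb = rng.normal(size=(len(vocab), 8))
sentence = ["the", "cat", "sat"]
target = sum(emb[vocab.index(w)] for w in sentence)
print(greedy_bag_of_words(target, vocab, emb))   # typically recovers the 3 words
```

The second stage described in the abstract (ordering the recovered bag with a simple probabilistic language model) is independent of this step and is not sketched here.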
{
"docid": "5b9b13de031badc5fb7bc230821f4b6b",
"text": "This paper aims to establish a new optimization paradigm to efficiently execute distributed learning tasks on wireless edge nodes with heterogeneous computing and communication capacities. We will refer to this new paradigm as “Mobile Edge Learning (MEL)”. The problem of adaptive task allocation for MEL is considered in this paper with the aim to maximize the learning accuracy, while guaranteeing that the total times of data distribution/aggregation over heterogeneous channels, and local computation on heterogeneous nodes, are bounded by a preset duration. The problem is first formulated as a quadraticallyconstrained integer linear problem. Being NP-hard, the paper relaxes it into a non-convex problem over real variables. We then propose a solution based on deriving analytical upper bounds on the optimal solution of this relaxed problem using KKT conditions. The merits of this proposed solution is exhibited by comparing its performances to both numerical approaches and the equal task allocation approach.",
"title": ""
},
{
"docid": "85713bc895a5477e9e99bd4884d01d3c",
"text": "Recently, Fan-out Wafer Level Packaging (FOWLP) has been emerged as a promising technology to meet the ever increasing demands of the consumer electronic products. However, conventional FOWLP technology is limited to small size packages with single chip and Low to Mid-range Input/ Output (I/O) count due to die shift, warpage and RDL scaling issues. In this paper, we are presenting new RDL-First FOWLP approach which enables RDL scaling, overcomes the die shift, die protrusion and warpage challenges of conventional FOWLP, and extend the FOWLP technology for multi-chip and high I/O count package applications. RDL-First FOWLP process integration flow was demonstrated and fabricated test vehicles of large multi-chip package of 20 x 20 mm2 with 3 layers fine pitch RDL of LW/LS of 2μm/2μm and ~2400 package I/Os. Two Through Mold Interconnections (TMI) fabrication approaches (tall Cu pillar and vertical Cu wire) were evaluated on this platform for Package-on-Package (PoP) application. Backside RDL process on over molded Chip-to-Wafer (C2W) with carrier wafer was demonstrated for PoP applications. Laser de-bonding and sacrificial release layer material cleaning processes were established, and successfully used in the integration flow to fabricate the test vehicles. Assembly processes were optimized and successfully demonstrated large multi-chip RDL-first FOWLP package and PoP assembly on test boards. The large multi-chip FOWLP packages samples were passed JEDEC component level test Moisture Sensitivity Test Level 1 & Level 3 (MST L1 & MST L3) and 30 drops of board level drop test, and results will be presented.",
"title": ""
},
{
"docid": "24d0d2a384b2f9cefc6e5162cdc52c45",
"text": "Food classification from images is a fine-grained classification problem. Manual curation of food images is cost, time and scalability prohibitive. On the other hand, web data is available freely but contains noise. In this paper, we address the problem of classifying food images with minimal data curation. We also tackle a key problems with food images from the web where they often have multiple cooccuring food types but are weakly labeled with a single label. We first demonstrate that by sequentially adding a few manually curated samples to a larger uncurated dataset from two web sources, the top-1 classification accuracy increases from 50.3% to 72.8%. To tackle the issue of weak labels, we augment the deep model with Weakly Supervised learning (WSL) that results in an increase in performance to 76.2%. Finally, we show some qualitative results to provide insights into the performance improvements using the proposed ideas.",
"title": ""
},
{
"docid": "0e672586c4be2e07c3e794ed1bb3443d",
"text": "In this thesis, the multi-category dataset has been incorporated with the robust feature descriptor using the scale invariant feature transform (SIFT), SURF and FREAK along with the multi-category enabled support vector machine (mSVM). The multi-category support vector machine (mSVM) has been designed with the iterative phases to make it able to work with the multi-category dataset. The mSVM represents the training samples of main class as the primary class in every iterative phase and all other training samples are categorized as the secondary class for the support vector machine classification. The proposed model is made capable of working with the variations in the indoor scene image dataset, which are noticed in the form of the color, texture, light, image orientation, occlusion and color illuminations. Several experiments have been conducted over the proposed model for the performance evaluation of the indoor scene recognition system in the proposed model. The results of the proposed model have been obtained in the form of the various performance parameters of statistical errors, precision, recall, F1-measure and overall accuracy. The proposed model has clearly outperformed the existing models in the terms of the overall accuracy. The proposed model improvement has been recorded higher than ten percent for all of the evaluated parameters against the existing models based upon SURF, FREAK, etc.",
"title": ""
},
{
"docid": "0899cfa62ccd036450c079eb3403902a",
"text": "Manual editing of a metro map is essential because many aesthetic and readability demands in map generation cannot be achieved by using a fully automatic method. In addition, a metro map should be updated when new metro lines are developed in a city. Considering that manually designing a metro map is time-consuming and requires expert skills, we present an interactive editing system that considers human knowledge and adjusts the layout to make it consistent with user expectations. In other words, only a few stations are controlled and the remaining stations are relocated by our system. Our system supports both curvilinear and octilinear layouts when creating metro maps. It solves an optimization problem, in which even spaces, route straightness, and maximum included angles at junctions are considered to obtain a curvilinear result. The system then rotates each edge to extend either vertically, horizontally, or diagonally while approximating the station positions provided by users to generate an octilinear layout. Experimental results, quantitative and qualitative evaluations, and user studies show that our editing system is easy to use and allows even non-professionals to design a metro map.",
"title": ""
},
{
"docid": "72d863c7e323cd9b3ab4368a51743319",
"text": "STUDY DESIGN\nThis study is a retrospective review of the initial enrollment data from a prospective multicentered study of adult spinal deformity.\n\n\nOBJECTIVES\nThe purpose of this study is to correlate radiographic measures of deformity with patient-based outcome measures in adult scoliosis.\n\n\nSUMMARY OF BACKGROUND DATA\nPrior studies of adult scoliosis have attempted to correlate radiographic appearance and clinical symptoms, but it has proven difficult to predict health status based on radiographic measures of deformity alone. The ability to correlate radiographic measures of deformity with symptoms would be useful for decision-making and surgical planning.\n\n\nMETHODS\nThe study correlates radiographic measures of deformity with scores on the Short Form-12, Scoliosis Research Society-29, and Oswestry profiles. Radiographic evaluation was performed according to an established positioning protocol for anteroposterior and lateral 36-inch standing radiographs. Radiographic parameters studied were curve type, curve location, curve magnitude, coronal balance, sagittal balance, apical rotation, and rotatory subluxation.\n\n\nRESULTS\nThe 298 patients studied include 172 with no prior surgery and 126 who had undergone prior spine fusion. Positive sagittal balance was the most reliable predictor of clinical symptoms in both patient groups. Thoracolumbar and lumbar curves generated less favorable scores than thoracic curves in both patient groups. Significant coronal imbalance of greater than 4 cm was associated with deterioration in pain and function scores for unoperated patients but not in patients with previous surgery.\n\n\nCONCLUSIONS\nThis study suggests that restoration of a more normal sagittal balance is the critical goal for any reconstructive spine surgery. The study suggests that magnitude of coronal deformity and extent of coronal correction are less critical parameters.",
"title": ""
},
{
"docid": "353bbc5e68ec1d53b3cd0f7c352ee699",
"text": "• A submitted manuscript is the author's version of the article upon submission and before peer-review. There can be important differences between the submitted version and the official published version of record. People interested in the research are advised to contact the author for the final version of the publication, or visit the DOI to the publisher's website. • The final author version and the galley proof are versions of the publication after peer review. • The final published version features the final layout of the paper including the volume, issue and page numbers.",
"title": ""
},
{
"docid": "212e8f1ce4871e8534a42ba6665ee16d",
"text": "Since the tragic events of September 11, security research has become critically important for the entire world. Academics in fields such as computational science, information systems, social sciences, engineering, and medicine have been called on to help enhance our ability to fight violence, terrorism, and other crimes. The US 2002 National Strategy for Homeland Security report identified science and technology as the keys to winning this international security war.1 It is widely believed that information technology will play an indispensable role in making the world safer2 by supporting intelligence and knowledge discovery through collecting, processing, analyzing, and utilizing terrorismand crime-related data.",
"title": ""
},
{
"docid": "381446e2f352dcae3afb19b70e94f227",
"text": "A longstanding goal of SSD virtualization has been to provide performance isolation between multiple tenants sharing the device. Virtualizing SSDs, however, has traditionally been a challenge because of the fundamental tussle between resource isolation and the lifetime of the device – existing SSDs aim to uniformly age all the regions of flash and this hurts isolation. We propose utilizing flash parallelism to improve isolation between virtual SSDs by running them on dedicated channels and dies. Furthermore, we offer a complete solution by also managing the wear. We propose allowing the wear of different channels and dies to diverge at fine time granularities in favor of isolation and adjusting that imbalance at a coarse time granularity in a principled manner. Our experiments show that the new SSD wears uniformly while the 99th percentile latencies of storage operations in a variety of multi-tenant settings are reduced by up to 3.1x compared to software isolated virtual SSDs.",
"title": ""
},
{
"docid": "771100d86f7bebba569f84e6bbb0b89f",
"text": "The business model concept is characterized by numerous fields of application which are promising in business practice. Consequently, research on business models has attracted increasing attention in the scientific world. However, for a successful utilization, the widely-criticized lack of theoretical consensus in this field of research has to be overcome. Thus, this paper conducted a comprehensive and up-to-date literature analysis examining 30 relevant literature sources focusing mainly on business model research. To achieve this, the analysis was based on a classification framework containing 17 evaluation criteria. Hereby, a systematic and objective penetration of the research area could be achieved. Moreover, existing research gaps as well as the most important fields to be addressed in future research could be revealed.",
"title": ""
},
{
"docid": "71d5fba169222eaab6a7fcb5a7417c90",
"text": "Melanoma is amongst most aggressive types of cancer. However, it is highly curable if detected in its early stages. Prescreening of suspicious moles and lesions for malignancy is of great importance. Detection can be done by images captured by standard cameras, which are more preferable due to low cost and availability. One important step in computerized evaluation of skin lesions is accurate detection of lesion’s region, i.e. segmentation of an image into two regions as lesion and normal skin. Accurate segmentation can be challenging due to burdens such as illumination variation and low contrast between lesion and healthy skin. In this paper, a method based on deep neural networks is proposed for accurate extraction of a lesion region. The input image is preprocessed and then its patches are fed to a convolutional neural network (CNN). Local texture and global structure of the patches are processed in order to assign pixels to lesion or normal classes. A method for effective selection of training patches is used for more accurate detection of a lesion’s border. The output segmentation mask is refined by some post processing operations. The experimental results of qualitative and quantitative evaluations demonstrate that our method can outperform other state-of-the-art algorithms exist in the literature.",
"title": ""
},
{
"docid": "929c3c0bd01056851952660ffd90673a",
"text": "SUMMARY: The Food and Drug Administration (FDA) is issuing this proposed rule to amend the 1994 tentative final monograph or proposed rule (the 1994 TFM) for over-the-counter (OTC) antiseptic drug products. In this proposed rule, we are proposing to establish conditions under which OTC antiseptic products intended for use by health care professionals in a hospital setting or other health care situations outside the hospital are generally recognized as safe and effective. In the 1994 TFM, certain antiseptic active ingredients were proposed as being generally recognized as safe for use in health care settings based on safety data evaluated by FDA as part of its ongoing review of OTC antiseptic drug products. However, in light of more recent scientific developments, we are now proposing that additional safety data are necessary to support the safety of antiseptic active ingredients for these uses. We also are proposing that all health care antiseptic active ingredients have in vitro data characterizing the ingredient's antimicrobial properties and in vivo clinical simulation studies showing that specified log reductions in the amount of certain bacteria are achieved using the ingredient. DATES: Submit electronic or written comments by October 28, 2015. See section VIII of this document for the proposed effective date of a final rule based on this proposed rule. ADDRESSES: You may submit comments by any of the following methods: Electronic Submissions Submit electronic comments in the following way: • Federal eRulemaking Portal: http:// www.regulations.gov. Follow the instructions for submitting comments.",
"title": ""
},
{
"docid": "4dd61afa86e13270599d4193f8b9bb70",
"text": "The paper deals with the definition of urban mobility, assuming that inside the city, the flow of mobility is deeply connected with the distribution, quality and use of the urban activities that “polarize” different users (residents, commuters, tourists and city users). In this vision, ICT assume a strategic role, but the need to reconsider their role emerges in respect to the concept of a smart city. The consideration that “urban smartness” does not depend exclusively on the ICT component or on the quantitative presence of technologies in the city, in fact, represents a shared opinion within the current scientific debate on the subject of the smart city. The paper assumes that, for the present urban contexts, the smart vision has to be related to an integrated approach, which considers the city as a complex system. Inside the urban system, the networks for both material and immaterial mobility interact with the urban activities that play a supporting role and have characteristics that affect the levels of urban smartness. Changes in urban systems greatly depend on the sorts of innovation technology that have intensely modified the social component to a far greater extent than others. Big Data, for instance, can help with knowledge of urban processes, provided they have to be well-interpreted and managed, and this will be of interest within the interactions among urban systems and the functioning of the system as a whole. Town planning has to take on responsibility in regard to approaching cities according to a different vision and updating its tools in order to steer the urban system steadfastly into a smartness state. In a systemic vision, this transition must be framed within the context of a process of governmental transformation that is carefully oriented towards the individuation of interactions among the different subsystems composing the city. According to this vision, the study of urban mobility can be related to the attractiveness generated by the different urban functions. The formalization of the degree of polarization, activated by urban functions, represents the main objective of this study. Among the urban functions, the study considers tourism as one of the most significant in the formalization of urban mobility flow inside the smart city.",
"title": ""
},
{
"docid": "152d1db97d048e1e9d0be1ab2ffe9e7d",
"text": "Recently, distributed processing of large dynamic graphs has become very popular, especially in certain domains such as social network analysis, Web graph analysis and spatial network analysis. In this context, many distributed/parallel graph processing systems have been proposed, such as Pregel, GraphLab, and Trinity. These systems can be divided into two categories: (1) vertex-centric and (2) block-centric approaches. In vertex-centric approaches, each vertex corresponds to a process, and message are exchanged among vertices. In block-centric approaches, the unit of computation is a block, a connected subgraph of the graph, and message exchanges occur among blocks. In this paper, we are considering the issues of scale and dynamism in the case of block-centric approaches. We present BLADYG, a block-centric framework that addresses the issue of dynamism in large-scale graphs. We present an implementation of BLADYG on top of AKKA framework. We experimentally evaluate the performance of the proposed framework.",
"title": ""
},
{
"docid": "420fa81c2dbe77622108c978d5c6c019",
"text": "Reasoning about a scene's thermal signature, in addition to its visual appearance and spatial configuration, would facilitate significant advances in perceptual systems. Applications involving the segmentation and tracking of persons, vehicles, and other heat-emitting objects, for example, could benefit tremendously from even coarsely accurate relative temperatures. With the increasing affordability of commercially available thermal cameras, as well as the imminent introduction of new, mobile form factors, such data will be readily and widely accessible. However, in order for thermal processing to complement existing methods in RGBD, there must be an effective procedure for calibrating RGBD and thermal cameras to create RGBDT (red, green, blue, depth, and thermal) data. In this paper, we present an automatic method for the synchronization and calibration of RGBD and thermal cameras in arbitrary environments. While traditional calibration methods fail in our multimodal setting, we leverage invariant features visible by both camera types. We first synchronize the streams with a simple optimization procedure that aligns their motion statistic time series. We then find the relative poses of the cameras by minimizing an objective that measures the alignment between edge maps from the two streams. In contrast to existing methods that use special calibration targets with key points visible to both cameras, our method requires nothing more than some edges visible to both cameras, such as those arising from humans. We evaluate our method and demonstrate that it consistently converges to the correct transform and that it results in high-quality RGBDT data.",
"title": ""
},
{
"docid": "a4d45c12ecc459ea6564fb0df8d13bd3",
"text": "Amazon’s Mechanical Turk (AMT) has revolutionized data processing and collection in both research and industry and remains one of the most prominent paid crowd work platforms today (Kittur et al., 2013). Unfortunately, it also remains in beta nine years after its launch with many of the same limitations as when it was launched: lack of worker profi indicating skills or experience, inability to post worker or employer ratings and reviews, minimal infrastructure for eff ely managing workers or collecting analytics, etc. Difficulty accomplishing quality, complex work with AMT continues to drive active research. Fortunately, many other alternative platforms now exist and off a wide range of features and workflow models for accomplishing quality work (crowdsortium.org). Despite this, research on crowd work has continued to focus on AMT near-exclusively. By analogy, if one had only ever programmed in Basic, how might this limit one’s conception of programming? What if the only search engine we knew was AltaVista? Adar (2011) opined that prior research has often been envisioned too narrowly for AMT, “...writing the user’s manual for MTurk ... struggl[ing] against the limits of the platform...”. Such narrow focus risks AMT’s particular vagaries and limitations unduly shape research questions, methodology, and imagination. To assess the extent of AMT’s infl upon research questions and use, we review its impact on prior work, assess what functionality and workflows other platforms off and consider what light other platforms’ diverse capabilities may shed on current research practices and future directions. To this end, we present a qualitative content analysis (Mayring, 2000) of ClickWorker, CloudFactory, CrowdComputing Systems, CrowdFlower, CrowdSource, MobileWorks, and oDesk. To characterize and diff entiate crowd work platforms, we identify several key categories for analysis. Our qualitative content analysis assesses each platform by drawing upon a variety of information sources: Webpages, blogs, news articles, white papers, and research papers. We also shared our analyses with platform representatives and incorporated their feedback. Contributions. Our content analysis of crowd work platforms represents the first such study we know of by researchers for researchers, with categories of analysis chosen based on research relevance. Contributions include our review of how AMT assumptions and limitations have influenced prior research, the detailed criteria we developed for characterizing crowd work platforms, and our analysis. Findings inform",
"title": ""
},
{
"docid": "9f16cb2dd8c4a95d5faed112779ee041",
"text": "This paper deals with the problem of measuring the wheel-rail interaction quality in real time using a suitably designed and realized railway measurement system. More specifically, the measured parameter is the equivalent conicity, as defined in the international union of railways UIC 518 Standard, and the measurement system is based on suitable processing of geometric data that is acquired by a contactless optical unit. The measurement system has been verified according to the test procedures described in the UIC 519 Standard. This paper shows how it is possible to obtain, in real time and with comparatively simple algorithms, measurements that are perfectly compliant with the UIC 519 Standard, with regard to the required measurement uncertainty as well.",
"title": ""
}
] |
scidocsrr
|
2b3fb61ba8c0b8c5d5df64c9fe2c86cb
|
Fast Group Recommendations by Applying User Clustering
|
[
{
"docid": "3663322ebe405b5e9d588ccdf305da02",
"text": "In this demonstration paper, we present gRecs, a system for group recommendations that follows a collaborative strategy. We enhance recommendations with the notion of support to model the confidence of the recommendations. Moreover, we propose partitioning users into clusters of similar ones. This way, recommendations for users are produced with respect to the preferences of their cluster members without extensively searching for similar users in the whole user base. Finally, we leverage the power of a top-k algorithm for locating the top-k group recommendations.",
"title": ""
},
{
"docid": "455a6fe5862e3271ac00057d1b569b11",
"text": "Personalization technologies and recommender systems help online consumers avoid information overload by making suggestions regarding which information is most relevant to them. Most online shopping sites and many other applications now use recommender systems. Two new recommendation techniques leverage multicriteria ratings and improve recommendation accuracy as compared with single-rating recommendation approaches. Taking full advantage of multicriteria ratings in personalization applications requires new recommendation techniques. In this article, we propose several new techniques for extending recommendation technologies to incorporate and leverage multicriteria rating information.",
"title": ""
}
] |
[
{
"docid": "c591881de09c709ae2679cacafe24008",
"text": "This paper discusses a technique to estimate the position of a sniper using a spatial microphone array placed on elevated platforms. The shooter location is obtained from the exact location of the microphone array, from topographic information of the area and from an estimated direction of arrival (DoA) of the acoustic wave related to the explosion in the gun barrel, which is known as muzzle blast. The estimation of the DOA is based on the time differences the sound wavefront arrives at each pair of microphones, employing a technique known as Generalized Cross Correlation (GCC) with phase transform. The main idea behind the localization procedure used herein is that, based on the DoA, the acoustical path of the muzzle blast (from the weapon to the microphone) can be marked as a straight line on a terrain profile obtained from an accurate digital map, allowing the estimation of the shooter location whenever the microphone array is located on an dominant position. In addition, a new approach to improve the DoA estimation from a cognitive selection of microphones is introduced. In this technique, the microphones selected must form a consistent (sum of delays equal to zero) fundamental loop. The results obtained after processing muzzle blast gunshot signals recorded in a typical scenario, show the effectiveness of the proposed method.",
"title": ""
},
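Since the passage above relies on GCC with the phase transform (GCC-PHAT) to estimate time differences of arrival, a compact illustration may help. The snippet below is a generic GCC-PHAT time-delay estimator between two microphone signals; it is not the authors' implementation, and the synthetic-signal setup is an assumption.

```python
# Hedged sketch: GCC-PHAT time-delay estimation between two microphone channels.
# The phase transform whitens the cross-spectrum so the peak of the inverse FFT
# marks the time difference of arrival (TDOA). Synthetic data for illustration.
import numpy as np

def gcc_phat(sig, ref, fs, max_tau=None):
    n = sig.size + ref.size                      # zero-pad to avoid wrap-around
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    R = SIG * np.conj(REF)
    R /= np.abs(R) + 1e-15                       # phase transform (PHAT) weighting
    cc = np.fft.irfft(R, n=n)
    max_shift = n // 2 if max_tau is None else min(int(fs * max_tau), n // 2)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / float(fs)                     # TDOA in seconds

fs = 48000
rng = np.random.default_rng(2)
pulse = rng.normal(size=2048)                    # stand-in for a muzzle-blast burst
delay = 37                                       # samples (~0.77 ms)
mic_ref = np.concatenate((pulse, np.zeros(delay)))
mic_sig = np.concatenate((np.zeros(delay), pulse))
print(gcc_phat(mic_sig, mic_ref, fs) * 1000, "ms")   # ≈ 0.77 ms
```

Converting such pairwise TDOAs into a DoA, and then intersecting that bearing with the terrain profile, are the further steps described in the abstract and are not sketched here.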
{
"docid": "52844cb9280029d5ddec869945b28be2",
"text": "In this work, a new fast dynamic community detection algorithm for large scale networks is presented. Most of the previous community detection algorithms are designed for static networks. However, large scale social networks are dynamic and evolve frequently over time. To quickly detect communities in dynamic large scale networks, we proposed dynamic modularity optimizer framework (DMO) that is constructed by modifying well-known static modularity based community detection algorithm. The proposed framework is tested using several different datasets. According to our results, community detection algorithms in the proposed framework perform better than static algorithms when large scale dynamic networks are considered.",
"title": ""
},
{
"docid": "6f9ffe5e1633046418ca0bc4f7089b2f",
"text": "This paper presents a new motion planning primitive to be used for the iterative steering of vision-based autonomous vehicles. This primitive is a parameterized quintic spline, denoted as -spline, that allows interpolating an arbitrary sequence of points with overall second-order geometric ( -) continuity. Issues such as completeness, minimality, regularity, symmetry, and flexibility of these -splines are addressed in the exposition. The development of the new primitive is tightly connected to the inversion control of nonholonomic car-like vehicles. The paper also exposes a supervisory strategy for iterative steering that integrates feedback vision data processing with the feedforward inversion control.",
"title": ""
},
{
"docid": "f267da735820809d9c93672299db43f5",
"text": "The Feigenbaum constants arise in the theory of iteration of real functions. We calculate here to high precision the constants a and S associated with period-doubling bifurcations for maps with a single maximum of order z , for 2 < z < 12. Multiple-precision floating-point techniques are used to find a solution of Feigenbaum's functional equation, and hence the constants. 1. History Consider the iteration of the function (1) fßZ(x) = l-p\\x\\z, z>0; that is, the sequence (2) *(+i =/„,*(*/)> i'=l,2,...; x0 = 0. In 1979 Feigenbaum [8] observed that there exist bifurcations in the set of limit points of (2) (that is, in the set of all points which are the limit of some infinite subsequence) as the parameter p is increased for fixed z. Roughly speaking, if the sequence (2) is asymptotically periodic with period p for a particular parameter value p (that is, there exists a stable p-cycle), then as p is increased, the period will be observed to double, so that a stable 2/>cycle appears. We denote the critical /¿-value at which the 2J cycle first appears by Pj. Feigenbaum also conjectured that there exist certain \"universal\" scaling constants associated with these bifurcations. Specifically, (3) «5 = lim ZlZJhzi 7-00 pJ+x ftj exists, and ô2 is about 4.669. Similarly, if rf. is the value of the nearest cycle element to 0 in the 2J cycle, then (4) az = lim y;-oo dJ+x exists, and a2 is about -2.503 . Received November 22, 1989; revised September 10, 1990. 1980 Mathematics Subject Classification (1985 Revision). Primary 11Y60, 26A18, 39A10, 65Q05. ©1991 American Mathematical Society 0025-5718/91 $1.00 + $.25 per page",
"title": ""
},
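Because the record above defines δ_z through the parameter values μ_j, a small numerical illustration may be useful. The sketch below estimates δ for z = 2 from superstable parameters (where the critical point x = 0 lies on the 2^j cycle) rather than the bifurcation points themselves, a standard shortcut that yields the same limit, and it uses ordinary double precision, so it only reproduces a few digits of 4.669; the paper's multiple-precision functional-equation method is far more accurate.

```python
# Hedged sketch: estimate Feigenbaum's delta for z = 2 by locating superstable
# parameters mu_j of f_mu(x) = 1 - mu*|x|**z (where f_mu^(2^j)(0) = 0) and
# taking ratios of successive gaps. Double precision only; a rough estimate.

def f_iter(mu, n, z=2.0):
    """n-fold iterate of x -> 1 - mu*|x|**z starting from the critical point 0."""
    x = 0.0
    for _ in range(n):
        x = 1.0 - mu * abs(x) ** z
    return x

def superstable_mu(j, guess, z=2.0, h=1e-9):
    """Newton's method (numerical derivative) for the root of f_mu^(2^j)(0) = 0."""
    n, mu = 2 ** j, guess
    for _ in range(60):
        g = f_iter(mu, n, z)
        dg = (f_iter(mu + h, n, z) - f_iter(mu - h, n, z)) / (2.0 * h)
        step = g / dg
        mu -= step
        if abs(step) < 1e-14:
            break
    return mu

mus = [1.0, superstable_mu(2, 1.31)]          # period-2 value is exactly 1 for z = 2
for j in range(3, 10):                        # extrapolate a seed, then refine
    seed = mus[-1] + (mus[-1] - mus[-2]) / 4.669
    mus.append(superstable_mu(j, seed))

for a, b, c in zip(mus, mus[1:], mus[2:]):
    print((b - a) / (c - b))                  # ratios approach delta_2 ≈ 4.6692
```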
{
"docid": "55fef695aadc5d524e2d858345dc325f",
"text": "The number of offboard fast charging stations is increasing as plug-in electric vehicles (PEVs) are more widespread in the world. Additional features on the operation of chargers will result in more benefits for investors, utility companies, and PEV owners. This paper investigates reactive power support operation using offboard PEV charging stations while charging a PEV battery. The topology consists of a three-phase ac-dc boost rectifier that is capable of operating in all four quadrants. The operation modes that are of interest are power-factor-corrected charging operation, and charging and capacitive/inductive reactive power operation. This paper also presents a control system for the PQ command following of a bidirectional offboard charger. The controller only receives the charging power command from a user and the reactive power command (when needed) from a utility, and it adjusts the line current and the battery charging current correspondingly. The vehicle's battery is not affected during the reactive power operation. A simulation study is developed utilizing PSIM, and the control system is experimentally tested using a 12.5-kVA charging station design.",
"title": ""
},
{
"docid": "320947783c6a43fe858e3ab97f231d9f",
"text": "Almost all orthopaedic surgeons come across acute compartment syndrome (ACS) in their clinical practice. Diagnosis of ACS mostly relies on clinical findings. If the diagnosis is missed and left untreated, it can lead to serious consequences which can endanger limb and life of the patient and also risk the clinician to face lawsuits. This review article highlights the characteristic features of ACS which will help an orthopaedic surgeon to understand the pathophysiology, natural history, high risk patients, diagnosis, and surgical management of the condition.",
"title": ""
},
{
"docid": "c056fa934bbf9bc6a286cd718f3a7217",
"text": "The advent of deep sub-micron technology has exacerbated reliability issues in on-chip interconnects. In particular, single event upsets, such as soft errors, and hard faults are rapidly becoming a force to be reckoned with. This spiraling trend highlights the importance of detailed analysis of these reliability hazards and the incorporation of comprehensive protection measures into all network-on-chip (NoC) designs. In this paper, we examine the impact of transient failures on the reliability of on-chip interconnects and develop comprehensive counter-measures to either prevent or recover from them. In this regard, we propose several novel schemes to remedy various kinds of soft error symptoms, while keeping area and power overhead at a minimum. Our proposed solutions are architected to fully exploit the available infrastructures in an NoC and enable versatile reuse of valuable resources. The effectiveness of the proposed techniques has been validated using a cycle-accurate simulator",
"title": ""
},
{
"docid": "642b98bf1ea22958411514cb7f01ef68",
"text": "This paper studies the problems of vehicle make & model classification. Some of the main challenges are reaching high classification accuracy and reducing the annotation time of the images. To address these problems, we have created a fine-grained database using online vehicle marketplaces of Turkey. A pipeline is proposed to combine an SSD (Single Shot Multibox Detector) model with a CNN (Convolutional Neural Network) model to train on the database. In the pipeline, we first detect the vehicles by following an algorithm which reduces the time for annotation. Then, we feed them into the CNN model. It is reached approximately 4% better classification accuracy result than using a conventional CNN model. Next, we propose to use the detected vehicles as ground truth bounding box (GTBB) of the images and feed them into an SSD model in another pipeline. At this stage, it is reached reasonable classification accuracy result without using perfectly shaped GTBB. Lastly, an application is implemented in a use case by using our proposed pipelines. It detects the unauthorized vehicles by comparing their license plate numbers and make & models. It is assumed that license plates are readable.",
"title": ""
},
{
"docid": "a8c1224f291df5aeb655a2883b16bcfb",
"text": "We present a scalable approach to automatically suggest relevant clothing products, given a single image without metadata. We formulate the problem as cross-scenario retrieval: the query is a real-world image, while the products from online shopping catalogs are usually presented in a clean environment. We divide our approach into two main stages: a) Starting from articulated pose estimation, we segment the person area and cluster promising image regions in order to detect the clothing classes present in the query image. b) We use image retrieval techniques to retrieve visually similar products from each of the detected classes. We achieve clothing detection performance comparable to the state-of-the-art on a very recent annotated dataset, while being more than 50 times faster. Finally, we present a large scale clothing suggestion scenario, where the product database contains over one million products.",
"title": ""
},
{
"docid": "361e874cccb263b202155ef92e502af3",
"text": "String similarity join is an important operation in data integration and cleansing that finds similar string pairs from two collections of strings. More than ten algorithms have been proposed to address this problem in the recent two decades. However, existing algorithms have not been thoroughly compared under the same experimental framework. For example, some algorithms are tested only on specific datasets. This makes it rather difficult for practitioners to decide which algorithms should be used for various scenarios. To address this problem, in this paper we provide a comprehensive survey on a wide spectrum of existing string similarity join algorithms, classify them into different categories based on their main techniques, and compare them through extensive experiments on a variety of real-world datasets with different characteristics. We also report comprehensive findings obtained from the experiments and provide new insights about the strengths and weaknesses of existing similarity join algorithms which can guide practitioners to select appropriate algorithms for various scenarios.",
"title": ""
},
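As background to the survey above (and not one of the algorithms it evaluates), the snippet below shows the simplest possible string similarity join: tokenise each string into q-grams and report pairs whose Jaccard similarity clears a threshold. Real similarity-join algorithms add filtering (prefix, length, positional) to avoid the quadratic comparison; the threshold and q value here are illustrative.

```python
# Hedged sketch: a brute-force string similarity join with q-gram Jaccard
# similarity. Surveyed algorithms prune the quadratic pair space with prefix/
# length/positional filters; this baseline only illustrates the semantics.

def qgrams(s, q=2):
    s = "#" * (q - 1) + s.lower() + "#" * (q - 1)   # pad so short strings work
    return {s[i:i + q] for i in range(len(s) - q + 1)}

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 1.0

def similarity_join(R, S, threshold=0.5, q=2):
    R_sets = [(r, qgrams(r, q)) for r in R]
    S_sets = [(s, qgrams(s, q)) for s in S]
    return [(r, s, round(jaccard(ra, sa), 2))
            for r, ra in R_sets
            for s, sa in S_sets
            if jaccard(ra, sa) >= threshold]

R = ["apple inc", "microsoft corp", "amazon"]
S = ["appel inc", "microsft corporation", "ebay"]
print(similarity_join(R, S, threshold=0.4))
```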
{
"docid": "4ead8caeea4143b8c5deb2ea91e0a141",
"text": "The statistical discrimination and clustering literature has studied the problem of identifying similarities in time series data. Some studies use non-parametric approaches for splitting a set of time series into clusters by looking at their Euclidean distances in the space of points. A new measure of distance between time series based on the normalized periodogram is proposed. Simulation results comparing this measure with others parametric and non-parametric metrics are provided. In particular, the classification of time series as stationary or as non-stationary is discussed. The use of both hierarchical and non-hierarchical clustering algorithms is considered. An illustrative example with economic time series data is also presented. © 2005 Elsevier B.V. All rights reserved.",
"title": ""
},
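The entry above proposes a distance between time series based on the normalized periodogram. One plausible concrete form (my assumption of the details, not necessarily the paper's exact metric) is the Euclidean distance between normalized periodogram ordinates, sketched below.

```python
# Hedged sketch: dissimilarity between two series as the Euclidean distance
# between their normalized periodograms (periodogram ordinates divided by the
# sample variance). The exact metric in the paper may differ in its details.
import numpy as np

def normalized_periodogram(x):
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = x.size
    spec = np.abs(np.fft.rfft(x)) ** 2 / n        # raw periodogram ordinates
    return spec[1:] / x.var()                     # drop frequency 0, normalize

def np_distance(x, y):
    px, py = normalized_periodogram(x), normalized_periodogram(y)
    return np.linalg.norm(px - py)

rng = np.random.default_rng(3)
n = 256
white = rng.normal(size=n)
ar1 = np.zeros(n)                                 # a persistent AR(1) series
for t in range(1, n):
    ar1[t] = 0.8 * ar1[t - 1] + rng.normal()
print(np_distance(white, rng.normal(size=n)))     # two white-noise series: usually smaller
print(np_distance(white, ar1))                    # different dynamics: usually larger
```

A hierarchical or non-hierarchical clustering algorithm, as mentioned in the abstract, can then be run directly on the resulting pairwise distance matrix.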
{
"docid": "421e3e91c92c10485c6da9b29d37521d",
"text": "STUDY OBJECTIVES\nThe psychomotor vigilance test (PVT) is among the most widely used measures of behavioral alertness, but there is large variation among published studies in PVT performance outcomes and test durations. To promote standardization of the PVT and increase its sensitivity and specificity to sleep loss, we determined PVT metrics and task durations that optimally discriminated sleep deprived subjects from alert subjects.\n\n\nDESIGN\nRepeated-measures experiments involving 10-min PVT assessments every 2 h across both acute total sleep deprivation (TSD) and 5 days of chronic partial sleep deprivation (PSD).\n\n\nSETTING\nControlled laboratory environment.\n\n\nPARTICIPANTS\n74 healthy subjects (34 female), aged 22-45 years.\n\n\nINTERVENTIONS\nTSD experiment involving 33 h awake (N = 31 subjects) and a PSD experiment involving 5 nights of 4 h time in bed (N = 43 subjects).\n\n\nMEASUREMENTS AND RESULTS\nIn a paired t-test paradigm and for both TSD and PSD, effect sizes of 10 different PVT performance outcomes were calculated. Effect sizes were high for both TSD (1.59-1.94) and PSD (0.88-1.21) for PVT metrics related to lapses and to measures of psychomotor speed, i.e., mean 1/RT (response time) and mean slowest 10% 1/RT. In contrast, PVT mean and median RT outcomes scored low to moderate effect sizes influenced by extreme values. Analyses facilitating only portions of the full 10-min PVT indicated that for some outcomes, high effect sizes could be achieved with PVT durations considerably shorter than 10 min, although metrics involving lapses seemed to profit from longer test durations in TSD.\n\n\nCONCLUSIONS\nDue to their superior conceptual and statistical properties and high sensitivity to sleep deprivation, metrics involving response speed and lapses should be considered primary outcomes for the 10-min PVT. In contrast, PVT mean and median metrics, which are among the most widely used outcomes, should be avoided as primary measures of alertness. Our analyses also suggest that some shorter-duration PVT versions may be sensitive to sleep loss, depending on the outcome variable selected, although this will need to be confirmed in comparative analyses of separate duration versions of the PVT. Using both sensitive PVT metrics and optimal test durations maximizes the sensitivity of the PVT to sleep loss and therefore potentially decreases the sample size needed to detect the same neurobehavioral deficit. We propose criteria to better standardize the 10-min PVT and facilitate between-study comparisons and meta-analyses.",
"title": ""
},
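Given that the recommended primary outcomes above are response speed and lapses, a small computation sketch may be helpful. The 500 ms lapse cutoff and the 100 ms false-start cutoff used below are conventional PVT choices that I am assuming; they are not values quoted from this abstract.

```python
# Hedged sketch: compute the PVT outcomes the study recommends (response speed
# and lapses) from a vector of reaction times in milliseconds. The 500 ms lapse
# threshold and 100 ms false-start cutoff are conventional assumptions.
import numpy as np

def pvt_metrics(rt_ms):
    rt = np.asarray(rt_ms, dtype=float)
    rt = rt[rt >= 100]                         # drop false starts (< 100 ms)
    lapses = int(np.sum(rt >= 500))            # lapses: RT >= 500 ms
    speed = 1000.0 / rt                        # response speed in 1/s
    slowest = np.sort(rt)[-max(1, rt.size // 10):]
    return {
        "lapses": lapses,
        "mean_speed": speed.mean(),            # mean 1/RT
        "slowest10pct_speed": (1000.0 / slowest).mean(),
        "median_rt": float(np.median(rt)),     # reported, but not a recommended primary outcome
    }

rested = np.random.default_rng(4).normal(250, 40, 200).clip(150, 2000)
sleepy = np.random.default_rng(5).normal(320, 140, 200).clip(150, 2000)
for label, rt in [("rested", rested), ("sleep-deprived", sleepy)]:
    print(label, pvt_metrics(rt))
```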
{
"docid": "4f278f699b587f01191bc7f06839a548",
"text": "This paper describes the design and the realization of a low-frequency ac magnetic-field-based indoor positioning system (PS). The system operation is based on the principle of inductive coupling between wire loop antennas. Specifically, due to the characteristics of the ac artificially generated magnetic fields, the relation between the induced voltage and the distance is modeled with a linear behavior in a bilogarithmic scale when a configuration with coplanar, thus equally oriented, antennas is used. In this case, the distance between a transmitting antenna and a receiving one is estimated using measurements of the induced voltage in the latter. For a high operational range, the system makes use of resonant antennas tuned at the same nominal resonant frequency. The quality factors act as antenna gain increasing the amplitude of the induced voltage. The low-operating frequency is the key factor for improving robustness against nonline-of-sight (NLOS) conditions and environment influences with respect to other existing solutions. The realized prototype, which is implemented using off-the-shelf components, exhibits an average and maximum positioning error, respectively, lower than 0.3 and 0.9 m in an indoor environment over a large area of 15 m × 12 m in NLOS conditions. Similar performance is obtained in an outdoor environment over an area of 30 m × 14 m. Furthermore, the system does not require any type of synchronization between the nodes and can accommodate an arbitrary number of users without additional infrastructure.",
"title": ""
},
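The key modelling statement above is that, for coplanar resonant loop antennas, induced voltage versus distance is linear in a log-log scale. A minimal illustration of how such a model could be calibrated and inverted is sketched below; the coefficients, noise level and function names are assumptions for illustration, not values from the paper.

```python
# Hedged sketch: fit log(V) = a + b*log(d) on calibration pairs, then invert the
# model to turn a measured induced voltage into a distance estimate. All numbers
# are synthetic; for ideal magnetic dipole coupling the slope b would be near -3.
import numpy as np

def fit_loglog(distances_m, voltages_v):
    b, a = np.polyfit(np.log(distances_m), np.log(voltages_v), 1)
    return a, b

def estimate_distance(voltage_v, a, b):
    return float(np.exp((np.log(voltage_v) - a) / b))

rng = np.random.default_rng(6)
d_cal = np.array([1.0, 2.0, 4.0, 8.0, 12.0])            # calibration distances (m)
v_cal = 2e-3 * d_cal ** -3 * np.exp(rng.normal(0, 0.05, d_cal.size))
a, b = fit_loglog(d_cal, v_cal)
print("slope b ≈", round(b, 2))                          # close to -3 here
print("distance for 5 µV:", round(estimate_distance(5e-6, a, b), 2), "m")
```

Combining several such range estimates from different transmitting antennas into a 2D position is the further step performed by the system described in the abstract.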
{
"docid": "a041c18f97eb9b5b2ed2e5315d542b96",
"text": "While 360° cameras offer tremendous new possibilities in vision, graphics, and augmented reality, the spherical images they produce make core feature extraction non-trivial. Convolutional neural networks (CNNs) trained on images from perspective cameras yield “flat\" filters, yet 360° images cannot be projected to a single plane without significant distortion. A naive solution that repeatedly projects the viewing sphere to all tangent planes is accurate, but much too computationally intensive for real problems. We propose to learn a spherical convolutional network that translates a planar CNN to process 360° imagery directly in its equirectangular projection. Our approach learns to reproduce the flat filter outputs on 360° data, sensitive to the varying distortion effects across the viewing sphere. The key benefits are 1) efficient feature extraction for 360° images and video, and 2) the ability to leverage powerful pre-trained networks researchers have carefully honed (together with massive labeled image training sets) for perspective images. We validate our approach compared to several alternative methods in terms of both raw CNN output accuracy as well as applying a state-of-the-art “flat\" object detector to 360° data. Our method yields the most accurate results while saving orders of magnitude in computation versus the existing exact reprojection solution.",
"title": ""
},
{
"docid": "054ed84aa377673d1327dedf26c06c59",
"text": "App Stores, such as Google Play or the Apple Store, allow users to provide feedback on apps by posting review comments and giving star ratings. These platforms constitute a useful electronic mean in which application developers and users can productively exchange information about apps. Previous research showed that users feedback contains usage scenarios, bug reports and feature requests, that can help app developers to accomplish software maintenance and evolution tasks. However, in the case of the most popular apps, the large amount of received feedback, its unstructured nature and varying quality can make the identification of useful user feedback a very challenging task. In this paper we present a taxonomy to classify app reviews into categories relevant to software maintenance and evolution, as well as an approach that merges three techniques: (1) Natural Language Processing, (2) Text Analysis and (3) Sentiment Analysis to automatically classify app reviews into the proposed categories. We show that the combined use of these techniques allows to achieve better results (a precision of 75% and a recall of 74%) than results obtained using each technique individually (precision of 70% and a recall of 67%).",
"title": ""
},
{
"docid": "cbb5d9269067ad2bbdb2c9823338d752",
"text": "This Paper reveals the information about Deep Neural Network (DNN) and concept of deep learning in field of natural language processing i.e. machine translation. Now day's DNN is playing major role in machine leaning technics. Recursive recurrent neural network (R2NN) is a best technic for machine learning. It is the combination of recurrent neural network and recursive neural network (such as Recursive auto encoder). This paper presents how to train the recurrent neural network for reordering for source to target language by using Semi-supervised learning methods. Word2vec tool is required to generate word vectors of source language and Auto encoder helps us in reconstruction of the vectors for target language in tree structure. Results of word2vec play an important role in word alignment of the input vectors. RNN structure is very complicated and to train the large data file on word2vec is also a time-consuming task. Hence, a powerful hardware support (GPU) is required. GPU improves the system performance by decreasing training time period.",
"title": ""
},
{
"docid": "fd97b7130c7d1828566422f49c857db5",
"text": "The phase noise of phase/frequency detectors can significantly raise the in-band phase noise of frequency synthesizers, corrupting the modulated signal. This paper analyzes the phase noise mechanisms in CMOS phase/frequency detectors and applies the results to two different topologies. It is shown that an octave increase in the input frequency raises the phase noise by 6 dB if flicker noise is dominant and by 3 dB if white noise is dominant. An optimization methodology is also proposed that lowers the phase noise by 4 to 8 dB for a given power consumption. Simulation and analytical results agree to within 3.1 dB for the two topologies at different frequencies.",
"title": ""
},
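The quantitative claim in the abstract above, 6 dB per octave of input frequency when flicker noise dominates and 3 dB per octave when white noise dominates, can be written compactly. The LaTeX fragment below simply restates that scaling; the reference level L_0 and reference frequency f_0 are placeholders, not values from the paper.

```latex
% Restating the quoted scaling of PFD-limited in-band phase noise with input
% frequency f_in; L_0 is the noise at an arbitrary reference input frequency f_0.
\[
  L(f_{\mathrm{in}}) \approx
  \begin{cases}
    L_0 + 20\log_{10}\!\left(\dfrac{f_{\mathrm{in}}}{f_0}\right), &
      \text{flicker noise dominant (6 dB/octave)},\\[1.5ex]
    L_0 + 10\log_{10}\!\left(\dfrac{f_{\mathrm{in}}}{f_0}\right), &
      \text{white noise dominant (3 dB/octave)}.
  \end{cases}
\]
```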
{
"docid": "119215115226e0bd3ee4c2762433aad5",
"text": "Super-coiled polymer (SCP) artificial muscles have many attractive properties, such as high energy density, large contractions, and good dynamic range. To fully utilize them for robotic applications, it is necessary to determine how to scale them up effectively. Bundling of SCP actuators, as though they are individual threads in woven textiles, can demonstrate the versatility of SCP actuators and artificial muscles in general. However, this versatility comes with a need to understand how different bundling techniques can be achieved with these actuators and how they may trade off in performance. This letter presents the first quantitative comparison, analysis, and modeling of bundled SCP actuators. By exploiting weaving and braiding techniques, three new types of bundled SCP actuators are created: woven bundles, two-dimensional, and three-dimensional braided bundles. The bundle performance is adjustable by employing different numbers of individual actuators. Experiments are conducted to characterize and compare the force, strain, and speed of different bundles, and a linear model is proposed to predict their performance. This work lays the foundation for model-based SCP-actuated textiles, and physically scaling robots that employ SCP actuators as the driving mechanism.",
"title": ""
},
{
"docid": "3c7d25c85b837a3337c93ca2e1e54af4",
"text": "BACKGROUND\nThe treatment of acne scars with fractional CO(2) lasers is gaining increasing impact, but has so far not been compared side-by-side to untreated control skin.\n\n\nOBJECTIVE\nIn a randomized controlled study to examine efficacy and adverse effects of fractional CO(2) laser resurfacing for atrophic acne scars compared to no treatment.\n\n\nMETHODS\nPatients (n = 13) with atrophic acne scars in two intra-individual areas of similar sizes and appearances were randomized to (i) three monthly fractional CO(2) laser treatments (MedArt 610; 12-14 W, 48-56 mJ/pulse, 13% density) and (ii) no treatment. Blinded on-site evaluations were performed by three physicians on 10-point scales. Endpoints were change in scar texture and atrophy, adverse effects, and patient satisfaction.\n\n\nRESULTS\nPreoperatively, acne scars appeared with moderate to severe uneven texture (6.15 ± 1.23) and atrophy (5.72 ± 1.45) in both interventional and non-interventional control sites, P = 1. Postoperatively, lower scores of scar texture and atrophy were obtained at 1 month (scar texture 4.31 ± 1.33, P < 0.0001; atrophy 4.08 ± 1.38, P < 0.0001), at 3 months (scar texture 4.26 ± 1.97, P < 0.0001; atrophy 3.97 ± 2.08, P < 0.0001), and at 6 months (scar texture 3.89 ± 1.7, P < 0.0001; atrophy 3.56 ± 1.76, P < 0.0001). Patients were satisfied with treatments and evaluated scar texture to be mild or moderately improved. Adverse effects were minor.\n\n\nCONCLUSIONS\nIn this single-blinded randomized controlled trial we demonstrated that moderate to severe atrophic acne scars can be safely improved by ablative fractional CO(2) laser resurfacing. The use of higher energy levels might have improved the results and possibly also induced significant adverse effects.",
"title": ""
}
] |
scidocsrr
|
dc8b756609bc8b19762000be733a3968
|
Morphological Analysis for Japanese Noisy Text based on Character-level and Word-level Normalization
|
[
{
"docid": "a16139b8924fc4468086c41fedeef3d9",
"text": "Grapheme-to-phoneme conversion is the task of finding the pronunciation of a word given its written form. It has important applications in text-to-speech and speech recognition. Joint-sequence models are a simple and theoretically stringent probabilistic framework that is applicable to this problem. This article provides a selfcontained and detailed description of this method. We present a novel estimation algorithm and demonstrate high accuracy on a variety of databases. Moreover we study the impact of the maximum approximation in training and transcription, the interaction of model size parameters, n-best list generation, confidence measures, and phoneme-to-grapheme conversion. Our software implementation of the method proposed in this work is available under an Open Source license.",
"title": ""
},
{
"docid": "571c73de53da3ed4d9a465325c9e9746",
"text": "Twitter provides access to large volumes of data in real time, but is notoriously noisy, hampering its utility for NLP. In this paper, we target out-of-vocabulary words in short text messages and propose a method for identifying and normalising ill-formed words. Our method uses a classifier to detect ill-formed words, and generates correction candidates based on morphophonemic similarity. Both word similarity and context are then exploited to select the most probable correction candidate for the word. The proposed method doesn’t require any annotations, and achieves state-of-the-art performance over an SMS corpus and a novel dataset based on Twitter.",
"title": ""
}
] |
[
{
"docid": "085155ebfd2ac60ed65293129cb0bfee",
"text": "Today, Convolution Neural Networks (CNN) is adopted by various application areas such as computer vision, speech recognition, and natural language processing. Due to a massive amount of computing for CNN, CNN running on an embedded platform may not meet the performance requirement. In this paper, we propose a system-on-chip (SoC) CNN architecture synthesized by high level synthesis (HLS). HLS is an effective hardware (HW) synthesis method in terms of both development effort and performance. However, the implementation should be optimized carefully in order to achieve a satisfactory performance. Thus, we apply several optimization techniques to the proposed CNN architecture to satisfy the performance requirement. The proposed CNN architecture implemented on a Xilinx's Zynq platform has achieved 23% faster and 9.05 times better throughput per energy consumption than an implementation on an Intel i7 Core processor.",
"title": ""
},
{
"docid": "280acc4e653512fabf7b181be57b31e2",
"text": "BACKGROUND\nHealth care workers incur frequent injuries resulting from patient transfer and handling tasks. Few studies have evaluated the effectiveness of mechanical lifts in preventing injuries and time loss due to these injuries.\n\n\nMETHODS\nWe examined injury and lost workday rates before and after the introduction of mechanical lifts in acute care hospitals and long-term care (LTC) facilities, and surveyed workers regarding lift use.\n\n\nRESULTS\nThe post-intervention period showed decreased rates of musculoskeletal injuries (RR = 0.82, 95% CI: 0.68-1.00), lost workday injuries (RR = 0.56, 95% CI: 0.41-0.78), and total lost days due to injury (RR = 0.42). Larger reductions were seen in LTC facilities than in hospitals. Self-reported frequency of lift use by registered nurses and by nursing aides were higher in the LTC facilities than in acute care hospitals. Observed reductions in injury and lost day injury rates were greater on nursing units that reported greater use of the lifts.\n\n\nCONCLUSIONS\nImplementation of patient lifts can be effective in reducing occupational musculoskeletal injuries to nursing personnel in both LTC and acute care settings. Strategies to facilitate greater use of mechanical lifting devices should be explored, as further reductions in injuries may be possible with increased use.",
"title": ""
},
{
"docid": "d7cc1619647d83911ad65fac9637ef03",
"text": "We analyze the increasing threats against IoT devices. We show that Telnet-based attacks that target IoT devices have rocketed since 2014. Based on this observation, we propose an IoT honeypot and sandbox, which attracts and analyzes Telnet-based attacks against various IoT devices running on different CPU architectures such as ARM, MIPS, and PPC. By analyzing the observation results of our honeypot and captured malware samples, we show that there are currently at least 4 distinct DDoS malware families targeting Telnet-enabled IoT devices and one of the families has quickly evolved to target more devices with as many as 9 different CPU architectures.",
"title": ""
},
{
"docid": "bd121443b5a1dfb16687001c72b22199",
"text": "We review the nosological criteria and functional neuroanatomical basis for brain death, coma, vegetative state, minimally conscious state, and the locked-in state. Functional neuroimaging is providing new insights into cerebral activity in patients with severe brain damage. Measurements of cerebral metabolism and brain activations in response to sensory stimuli with PET, fMRI, and electrophysiological methods can provide information on the presence, degree, and location of any residual brain function. However, use of these techniques in people with severe brain damage is methodologically complex and needs careful quantitative analysis and interpretation. In addition, ethical frameworks to guide research in these patients must be further developed. At present, clinical examinations identify nosological distinctions needed for accurate diagnosis and prognosis. Neuroimaging techniques remain important tools for clinical research that will extend our understanding of the underlying mechanisms of these disorders.",
"title": ""
},
{
"docid": "fb4f4d1762535b8afe7feec072f1534e",
"text": "Recently, evaluation of a recommender system has been beyond evaluating just the algorithm. In addition to accuracy of algorithms, user-centric approaches evaluate a system’s e↵ectiveness in presenting recommendations, explaining recommendations and gaining users’ confidence in the system. Existing research focuses on explaining recommendations that are related to user’s current task. However, explaining recommendations can prove useful even when recommendations are not directly related to user’s current task. Recommendations of development environment commands to software developers is an example of recommendations that are not related to the user’s current task, which is primarily focussed on programming, rather than inspecting recommendations. In this dissertation, we study three di↵erent kinds of explanations for IDE commands recommended to software developers. These explanations are inspired by the common approaches based on literature in the domain. We describe a lab-based experimental study with 24 participants where they performed programming tasks on an open source project. Our results suggest that explanations a↵ect users’ trust of recommendations, and explanations reporting the system’s confidence in recommendation a↵ects their trust more. The explanation with system’s confidence rating of the recommendations resulted in more recommendations being investigated. However, explanations did not a↵ect the uptake of the commands. Our qualitative results suggest that recommendations, when not user’s primary focus, should be in context of his task to be accepted more readily.",
"title": ""
},
{
"docid": "8890f9ab4ba7164194474d9bba7b5c47",
"text": "Support Vector Machines (SVMs) are among the most popular classification techniques adopted in security applications like malware detection, intrusion detection, and spam filtering. However, if SVMs are to be incorporated in real-world security systems, they must be able to cope with attack patterns that can either mislead the learning algorithm (poisoning), evade detection (evasion), or gain information about their internal parameters (privacy breaches). The main contributions of this chapter are twofold. First, we introduce a formal general framework for the empirical evaluation of the security of machine-learning systems. Second, according to our framework, we demonstrate the feasibility of evasion, poisoning and privacy attacks against SVMs in real-world security problems. For each attack technique, we evaluate its impact and discuss whether (and how) it can be countered through an adversary-aware design of SVMs. Our experiments are easily reproducible thanks to open-source code that we have made available, together with all the employed datasets, on a public repository. Battista Biggio, Igino Corona, Davide Maiorca, Giorgio Fumera, Giorgio Giacinto, and Fabio Roli Department of Electrical and Electronic Engineering, University of Cagliari, Piazza d’Armi 09123, Cagliari, Italy. e-mail: {battista.biggio,igino.corona,davide.maiorca}@diee.unica.it e-mail: {fumera,giacinto,roli}@diee.unica.it Blaine Nelson Institut für Informatik, Universität Potsdam, August-Bebel-Straße 89, 14482 Potsdam, Germany. e-mail: [email protected] Benjamin I. P. Rubinstein IBM Research, Lvl 5 / 204 Lygon Street, Carlton, VIC 3053, Australia. e-mail: [email protected] 1 ar X iv :1 40 1. 77 27 v1 [ cs .L G ] 3 0 Ja n 20 14 2 Biggio, Corona, Nelson, Rubinstein, Maiorca, Fumera, Giacinto, Roli",
"title": ""
},
{
"docid": "da26ae25feebea6fbe63dacea03e0742",
"text": "A classic result of Johnson and Lindenstrauss asserts that any set of n points in d-dimensional Euclidean space can be embedded into k-dimensional Euclidean space—where k is logarithmic in n and independent of d—so that all pairwise distances are maintained within an arbitrarily small factor. All known constructions of such embeddings involve projecting the n points onto a spherically random k-dimensional hyperplane through the origin. We give two constructions of such embeddings with the property that all elements of the projection matrix belong in f 1; 0;þ1g: Such constructions are particularly well suited for database environments, as the computation of the embedding reduces to evaluating a single aggregate over k random partitions of the attributes. r 2003 Elsevier Science (USA). All rights reserved.",
"title": ""
},
{
"docid": "919f42363fed69dc38eba0c46be23612",
"text": "Large amounts of heterogeneous medical data have become available in various healthcare organizations (payers, providers, pharmaceuticals). Those data could be an enabling resource for deriving insights for improving care delivery and reducing waste. The enormity and complexity of these datasets present great challenges in analyses and subsequent applications to a practical clinical environment. In this tutorial, we introduce the characteristics and related mining challenges on dealing with big medical data. Many of those insights come from medical informatics community, which is highly related to data mining but focuses on biomedical specifics. We survey various related papers from data mining venues as well as medical informatics venues to share with the audiences key problems and trends in healthcare analytics research, with different applications ranging from clinical text mining, predictive modeling, survival analysis, patient similarity, genetic data analysis, and public health. The tutorial will include several case studies dealing with some of the important healthcare applications.",
"title": ""
},
{
"docid": "858f15a9fc0e014dd9ffa953ac0e70f7",
"text": "Canny (IEEE Trans. Pattern Anal. Image Proc. 8(6):679-698, 1986) suggested that an optimal edge detector should maximize both signal-to-noise ratio and localization, and he derived mathematical expressions for these criteria. Based on these criteria, he claimed that the optimal step edge detector was similar to a derivative of a gaussian. However, Canny’s work suffers from two problems. First, his derivation of localization criterion is incorrect. Here we provide a more accurate localization criterion and derive the optimal detector from it. Second, and more seriously, the Canny criteria yield an infinitely wide optimal edge detector. The width of the optimal detector can however be limited by considering the effect of the neighbouring edges in the image. If we do so, we find that the optimal step edge detector, according to the Canny criteria, is the derivative of an ISEF filter, proposed by Shen and Castan (Graph. Models Image Proc. 54:112–133, 1992). In addition, if we also consider detecting blurred (or non-sharp) gaussian edges of different widths, we find that the optimal blurred-edge detector is the above optimal step edge detector convolved with a gaussian. This implies that edge detection must be performed at multiple scales to cover all the blur widths in the image. We derive a simple scale selection procedure for edge detection, and demonstrate it in one and two dimensions.",
"title": ""
},
{
"docid": "35d942882cbf5351bb0465cf51db1fdb",
"text": "A Proposed Definition Computers are special technology and they raise some special ethical issues. In this essay I will discuss what makes computers different from other technology and how this difference makes a difference in ethical considerations. In particular, I want to characterize computer ethics and show why this emerging field is both intellectually interesting and enormously important. On my view, computer ethics is the analysis of the nature and social impact of computer technology and the corresponding formulation and justification of policies for the ethical use of such technology. I use the phrase “computer technology” because I take the subject matter of the field broadly to include computers and associated technology. For instance, I include concerns about software as well as hardware and concerns about networks connecting computers as well as computers themselves. A typical problem in computer ethics arises because there is a policy vacuum about how computer technology should be used. Computers provide us with new capabilities and these in turn give us new choices for action. Often, either no policies for conduct in these situations exist or existing policies seem inadequate. A central task of computer ethics is to determine what we should do in such cases, i.e., to formulate policies to guide our actions. Of course, some ethical situations confront us as individuals and some as a society. Computer ethics includes consideration of both personal and social policies for the ethical use of computer technology. Now it may seem that all that needs to be done is the mechanical application of an ethical theory to generate the appropriate policy. But this is usually not possible. A difficulty is that along with a policy vacuum there is often a conceptual vacuum. Although a problem in computer ethics may seem clear initially, a little reflection reveals a conceptual muddle. What is needed in such cases is an analysis which provides a coherent conceptual framework within which to formulate a policy for action. Indeed, much of the important work in computer ethics is devoted to proposing conceptual frameworks for understanding ethical problems involving computer technology. An example may help to clarify the kind of conceptual work that is required. Let’s suppose we are trying to formulate a policy for protecting computer programs. Initially, the idea may seem clear enough. We are looking for a policy for protecting a kind of intellectual property. But then a",
"title": ""
},
{
"docid": "050c67f963f0a6968e951d689eb6e2ef",
"text": "Detection and preventing Distributed Denial of Service Attack (DDoS) becomes a crucial process for the commercial organization that using the internet these days. Different approaches have been adopted to process traffic information collected by a monitoring stations (Routers and Servers) to distinguish the misbehaving of malicious traffic of DDoS attacks in Intrusion Detection Systems (IDS). In general, data mining techniques can be designed and implemented with the intrusion systems to protect the organizations from malicious. Specifically, unsupervised data mining clustering techniques allow to effectively distinguish the normal traffic from malicious traffic in a good accuracy. In this paper, we present a hybrid approach called centroid-based rules to detect and prevent a real-world DDoS attacks collected from “CAIDA UCSD \" DDoS Attack 2007 Dataset” and normal traffic traces from “CAIDA Anonymized Internet Traces 2008 Dataset” using unsupervised k-means data mining clustering techniques with proactive rules method. Centroid-based rules are used to effectively detect the DDoS attack in an efficient time. The Result of experiments shows that the centroid-based rules method perform better than the centroid-based method in term of accuracy and detection rate. In term of false alarm rates, the proposed solution obtains very low false positive rate in the training process and testing phases. Results of accuracy were more than 99% in training and testing processes. The proposed centroid-based rules method can be used in a real-time monitoring as DDoS defense system.",
"title": ""
},
{
"docid": "377cab312d5e262a5363e6cf5b5c64de",
"text": "Electroencephalography (EEG) has been instrumental in making discoveries about cognition, brain function, and dysfunction. However, where do EEG signals come from and what do they mean? The purpose of this paper is to argue that we know shockingly little about the answer to this question, to highlight what we do know, how important the answers are, and how modern neuroscience technologies that allow us to measure and manipulate neural circuits with high spatiotemporal accuracy might finally bring us some answers. Neural oscillations are perhaps the best feature of EEG to use as anchors because oscillations are observed and are studied at multiple spatiotemporal scales of the brain, in multiple species, and are widely implicated in cognition and in neural computations.",
"title": ""
},
{
"docid": "a8ac2bab8abbee070dc2ae929714a801",
"text": "Measuring word relatedness is an important ingredient of many NLP applications. Several datasets have been developed in order to evaluate such measures. The main drawback of existing datasets is the focus on single words, although natural language contains a large proportion of multiword terms. We propose the new TR9856 dataset which focuses on multi-word terms and is significantly larger than existing datasets. The new dataset includes many real world terms such as acronyms and named entities, and further handles term ambiguity by providing topical context for all term pairs. We report baseline results for common relatedness methods over the new data, and exploit its magnitude to demonstrate that a combination of these methods outperforms each individual method.",
"title": ""
},
{
"docid": "e75b7c2fcdfc19a650d7da4e6ae643a2",
"text": "With the proliferation of high-throughput technologies, genome-level data analysis has become common in molecular biology. Bioinformaticians are developing extensive resources to annotate and mine biological features from high-throughput data. The underlying database management systems for most bioinformatics software are based on a relational model. Modern non-relational databases offer an alternative that has flexibility, scalability, and a non-rigid design schema. Moreover, with an accelerated development pace, non-relational databases like CouchDB can be ideal tools to construct bioinformatics utilities. We describe CouchDB by presenting three new bioinformatics resources: (a) geneSmash, which collates data from bioinformatics resources and provides automated gene-centric annotations, (b) drugBase, a database of drug-target interactions with a web interface powered by geneSmash, and (c) HapMap-CN, which provides a web interface to query copy number variations from three SNP-chip HapMap datasets. In addition to the web sites, all three systems can be accessed programmatically via web services.",
"title": ""
},
{
"docid": "28b5b038cfaecab90b07683c2eabbb5b",
"text": "In this work, we devise a chaos-based secret key cryptography scheme for digital communication where the encryption is realized at the physical level, that is, the encrypting transformations are applied to the wave signal instead to the symbolic sequence. The encryption process consists of transformations applied to a two-dimensional signal composed of the message carrying signal and an encrypting signal that has to be a chaotic one. The secret key, in this case, is related to the number of times the transformations are applied. Furthermore, we show that due to its chaotic nature, the encrypting signal is able to hide the statistics of the original signal. 2004 Elsevier Ltd. All rights reserved. In this letter, we present a chaos-based cryptography scheme designed for digital communication. We depart from the traditional approach where encrypting transformations are applied to the binary sequence (the symbolic sequence) into which the wave signal is encoded [1]. In this work, we devise a scheme where the encryption is realized at the physical level, that is, a scheme that encrypts the wave signal itself. Our chaos-based cryptographic scheme takes advantage of the complexity of a chaotic transformation. This complexity is very desirable for cryptographic schemes, since security increases with the number of possibilities of encryption for a given text unit (a letter for example). One advantage of using a chaotic transformation is that it can be implemented at the physical level by means of a low power deterministic electronic circuit which can be easily etched on a chip. Another advantage is that, contrary to a stochastic transformation, a chaotic one allows an straightforward decryption. Moreover, as has been shown elsewhere [2], chaotic transformations for cryptography, enables one to introduce powerful analytical methods to analyze the method performance, besides satisfying the design axioms that guarantees security. In order to clarify our goal and the scheme devised, in what follows, we initially outline the basic ideas of our method. Given a message represented by a sequence fy i g l i1⁄41, and a chaotic encrypting signal fxi g l i1⁄41, with yi and xi 2 R and xiþ1 1⁄4 GðxiÞ, where G is a chaotic transformation, we construct an ordered pair ðxi ; y i Þ. The ith element of the sequence representing the encrypted message is the y component of the ordered pair ðxi ; yn i Þ, obtained from F n c ðxi ; y i Þ. The function Fc : R 2 ! R is a chaotic transformation and n is the number of times we apply it to the ordered pair. The nth iteration of ðxi ; y i Þ, has no inverse if n and x0i are unknown, that is, y i can not be recovered if one knows only F n c ðxi; yiÞ. As it will be clear further, this changing of initial condition is one of the factors responsible for the security of the method. Now we describe how to obtain the sequence fyigli1⁄41 by means of the sampling and quantization methods. These methods play an essential role in the field of digital communication, since they allow us to treat signals varying continuously in time as discrete signals. One instance of the use of continuous in time signals is the encoding of music or * Corresponding author. E-mail address: [email protected] (R.F. Machado). 0960-0779/$ see front matter 2004 Elsevier Ltd. All rights reserved. doi:10.1016/j.chaos.2003.12.094 1266 R.F. Machado et al. 
/ Chaos, Solitons and Fractals 21 (2004) 1265–1269 speech where variations in the pressure of the air are represented by a continuous signal such as the voltage in an electric circuit. In the sampling process, a signal varying continuously in time is replaced by a set of measurements (samples) taken at instants separated by a suitable time interval provided by the sampling theorem [3,4]. The signals to which the sampling theorem applies are the band limited ones. By a band limited signal, we mean a function of time whose Fourier transform is null for frequencies f such that jf jPW . According to the sampling theorem, it is possible to reconstruct the original signal from samples taken at times multiple of the sampling interval TS 6 1=2W . Thus, at the end of the sampling process, the signal is converted to a sequence fs1; s02; . . . ; slg of real values, which we refer to as the s sequence. After being sampled the signal is quantized. In this process, the amplitude range of the signal, say the interval 1⁄2a; b , is divided into N subintervals Rk 1⁄4 1⁄2ak ; akþ1Þ, 16 k6N , with a1 1⁄4 a, akþ1 1⁄4 ak þ dk , aNþ1 1⁄4 b, where dk is the length of the kth subinterval. To each Rk one assigns an appropriate real amplitude value qk 2 Rk , its middle point for example. A new sequence, the y sequence, is generated by replacing each s0i by the qk associated to the Rk region to which it belongs. So, the y sequence fy 1 ; y 2 ; . . . ; y l g is a sequence where each y i 2 R takes on values from the set fq1; . . . ; qNg. In traditional digital communication, each member of the y sequence is encoded into a binary sequence of length log2 N . Thus, traditional cryptographic schemes, and even recent proposed chaotic ones [1], transforms this binary sequence (or any other discrete alphabet) into another binary sequence, which is then modulated and transmitted. In our proposed scheme, we transform the real y into another real value, and then modulate this new y value in order to transmit it. This scheme deals with signals rather than with symbols, which implies that the required transformations are performed at the hardware or physical level. Instead of applying the encrypting transformations to the binary sequence, we apply them to the y sequence, the sequence obtained by sampling and quantizing the original wave signal. Suppose, now, that the amplitude of the wave signal is restricted to the interval [0,1]. The first step of the process is to obtain the encrypting signal, a sequence fx1; x02; . . . ; xlg, 0 < x0i < 1. As we will show, this signal is obtained by either sampling a chaotic one or by a chaotic mapping. The pair ðxi ; y i Þ localizes a point in the unit square. In order to encrypt y i , we apply the baker map to the point ðxi ; y i Þ to obtain ðxi ; y i Þ 1⁄4 ð2xi b2xi c; 0:5ðy i þ b2xi cÞÞ, where b2xi c is the largest integer equal to or less than 2x0i . The encrypted signal is given by y 1 i , that is, 0:5ðy i þ b2xi cÞ. It is important to notice that y i can take 2N different values instead of N , since each y 0 i may be encoded as either 0:5 ðy i Þ < 0:5 or 0:5 ðy i þ 1Þ > 0:5, depending on whether x0i falls below or above 0:5. So, in order to digitally modulate the encrypted signal for transmission, 2N pulse amplitudes are necessary, with each binary block being encoded by two different pulses. Therefore, our method has an output format that can be straightforwardly used in digital transmissions. Suppose, for example, that N 1⁄4 2, and we have q1 1⁄4 0:25 and q2 1⁄4 0:75. 
If s0i < 0:5 then y i 1⁄4 0:25 and if we use n 1⁄4 1, we have y i 1⁄4 0:125 if x0i < 0:5 or y i 1⁄4 0:625 if x0i P 0:5. On the other hand, if s0i > 0:5, then y i 1⁄4 0:75 and we have y i 1⁄4 0:375, if x0i < 0:5 or y i 1⁄4 0:875 if x0i P 0:5. So, the encrypted signal takes on values from the set f0:125; 0:375; 0:625; 0:875g, where the first and third values can be decrypted as 0.25 in the non-encrypted signal while the second and the forth as 0.75. In a general case, where we apply n iterations of the mapping, y i can assume 2nN different values. In this case, if one wants to digitally transmit the cipher text, one can encode every cipher text unit using a binary block of length log2ð2NÞ and then modulate this binary stream using 2nN pulse amplitudes. Thus, the decryption is straightforward if one knows how many times the baker map was applied during the encryption. If the baker transformation (function Fc) is applied n times, there are, for each plain text unit, 2nN possible cipher text units. In this case, the complexity of the ciphertext, that is, its security, can have its upper bound estimated by the Shannon complexity Hs which is the logarithm of the possible number of ciphertext units, produced after the baker’s map have been applied n times. So, Hs 1⁄4 n logð2Þ þ logðNÞ. We see that n is much more important for security reasons than N . So, if one wishes to improve security, one could implement a dynamical secret key schedule for n. By this we mean that, based on some information of the encrypted trajectory ðxi ; y i Þ, the value of n could be changed whenever a plain text unit is encrypted. If one allows only m values for n, the number of possible cipher text units would be given by Nm Qm j1⁄41 2 nj and the complexity of the cipher text would be Pm j1⁄41 nj log 2þ m logN , which can be very high, even for small m. Thus, without knowing the number n of applications of the baker map during the encryption, the decryption renders highly improbable. In fact, n is the secret key of our cryptographic scheme and we can think of the sequence fxi g as a dynamical secret key schedule for the x-component in the initial condition represented by the ordered pair ðxi ; y i Þ. The tools necessary to perform the security analysis are provided by the information theory. In this context, information sources are modelled by random processes whose outcome may be either discrete or continuous in time. Since major interest, and ours too, is in band limited signals, we restrict ourselves to the discrete case, where the source is modelled by a discrete time random process. This is a sequence fy i g l i1⁄41 in which each y 0 i assumes values within the set A 1⁄4 fq1; q2; . . . ; qNg. This set is called the alphabet and its elements are the letters. To each letter is assigned a probability mass function pðqjÞ 1⁄4 P ðy i 1⁄4 qjÞ, that gives the probability with which the letter is selected for transmission. R.F. Machado et al. / Chaos, Solitons and Fractals 21 (2004) 1265–1269 1267 In cryptography, one deals with two messages: the plai",
"title": ""
},
{
"docid": "a288a610a6cd4ff32b3fff4e2124aee0",
"text": "According to the survey done by IBM business consulting services in 2006, global CEOs stated that business model innovation will have a greater impact on operating margin growth, than product or service innovation. We also noticed that some enterprises in China's real estate industry have improved their business models for sustainable competitive advantage and surplus profit in recently years. Based on the case studies of Shenzhen Vanke, as well as literature review, a framework for business model innovation has been developed. The framework provides an integrated means of making sense of new business model. These include critical dimensions of new customer value propositions, technological innovation, collaboration of the business infrastructure and the economic feasibility of a new business model.",
"title": ""
},
{
"docid": "3229ceebb2534f9da93981b5de3b7928",
"text": "Tarantula is an aggressive floating point machine targeted at technical, scientific and bioinformatics workloads, originally planned as a follow-on candidate to the EV8 processor [6, 5]. Tarantula adds to the EV8 core a vector unit capable of 32 double-precision flops per cycle. The vector unit fetches data directly from a 16 MByte second level cache with a peak bandwidth of sixty four 64-bit values per cycle. The whole chip is backed by a memory controller capable of delivering over 64 GBytes/s of raw band- width. Tarantula extends the Alpha ISA with new vector instructions that operate on new architectural state. Salient features of the architecture and implementation are: (1) it fully integrates into a virtual-memory cache-coherent system without changes to its coherency protocol, (2) provides high bandwidth for non-unit stride memory accesses, (3) supports gather/scatter instructions efficiently, (4) fully integrates with the EV8 core with a narrow, streamlined interface, rather than acting as a co-processor, (5) can achieve a peak of 104 operations per cycle, and (6) achieves excellent \"real-computation\" per transistor and per watt ratios. Our detailed simulations show that Tarantula achieves an average speedup of 5X over EV8, out of a peak speedup in terms of flops of 8X. Furthermore, performance on gather/scatter intensive benchmarks such as Radix Sort is also remarkable: a speedup of almost 3X over EV8 and 15 sustained operations per cycle. Several benchmarks exceed 20 operations per cycle.",
"title": ""
},
{
"docid": "97c9d91709c98cd6dd803ffc9810d88f",
"text": "Relying entirely on an attention mechanism, the Transformer introduced by Vaswani et al. (2017) achieves state-of-the-art results for machine translation. In contrast to recurrent and convolutional neural networks, it does not explicitly model relative or absolute position information in its structure. Instead, it requires adding representations of absolute positions to its inputs. In this work we present an alternative approach, extending the self-attention mechanism to efficiently consider representations of the relative positions, or distances between sequence elements. On the WMT 2014 English-to-German and English-to-French translation tasks, this approach yields improvements of 1.3 BLEU and 0.3 BLEU over absolute position representations, respectively. Notably, we observe that combining relative and absolute position representations yields no further improvement in translation quality. We describe an efficient implementation of our method and cast it as an instance of relation-aware self-attention mechanisms that can generalize to arbitrary graphlabeled inputs.",
"title": ""
},
{
"docid": "e0724c87fd4344e01cb9260fdd36856c",
"text": "In this paper we introduce a multi-objective auto-tuning framework comprising compiler and runtime components. Focusing on individual code regions, our compiler uses a novel search technique to compute a set of optimal solutions, which are encoded into a multi-versioned executable. This enables the runtime system to choose specifically tuned code versions when dynamically adjusting to changing circumstances.\n We demonstrate our method by tuning loop tiling in cache-sensitive parallel programs, optimizing for both runtime and efficiency. Our static optimizer finds solutions matching or surpassing those determined by exhaustively sampling the search space on a regular grid, while using less than 4% of the computational effort on average. Additionally, we show that parallelism-aware multi-versioning approaches like our own gain a performance improvement of up to 70% over solutions tuned for only one specific number of threads.",
"title": ""
},
{
"docid": "b3352b90c84bb7e85cdb09ed95981231",
"text": "We present a toolbox for high-throughput screening of image-based Caenorhabditis elegans phenotypes. The image analysis algorithms measure morphological phenotypes in individual worms and are effective for a variety of assays and imaging systems. This WormToolbox is available through the open-source CellProfiler project and enables objective scoring of whole-worm high-throughput image-based assays of C. elegans for the study of diverse biological pathways that are relevant to human disease.",
"title": ""
}
] |
scidocsrr
|
7603863e232d4524ad77241726ab3950
|
Probabilistic text analytics framework for information technology service desk tickets
|
[
{
"docid": "ef08ef786fd759b33a7d323c69be19db",
"text": "Language modeling approaches to information retrieval are attractive and promising because they connect the problem of retrieval with that of language model estimation, which has been studied extensively in other application areas such as speech recognition. The basic idea of these approaches is to estimate a language model for each document, and then rank documents by the likelihood of the query according to the estimated language model. A core problem in language model estimation is smoothing, which adjusts the maximum likelihood estimator so as to correct the inaccuracy due to data sparseness. In this paper, we study the problem of language model smoothing and its influence on retrieval performance. We examine the sensitivity of retrieval performance to the smoothing parameters and compare several popular smoothing methods on different test collection.",
"title": ""
}
] |
[
{
"docid": "579333c5b2532b0ad04d0e3d14968a54",
"text": "We present a learning to rank approach to classify folktales, such as fairy tales and urban legends, according to their story type, a concept that is widely used by folktale researchers to organize and classify folktales. A story type represents a collection of similar stories often with recurring plot and themes. Our work is guided by two frequently used story type classification schemes. Contrary to most information retrieval problems, the text similarity in this problem goes beyond topical similarity. We experiment with approaches inspired by distributed information retrieval and features that compare subject-verb-object triplets. Our system was found to be highly effective compared with a baseline system.",
"title": ""
},
{
"docid": "b3de5fa8a61042c486ca9819448a444d",
"text": "This paper proposes a novel optimization algorithm called Hyper-Spherical Search (HSS) algorithm. Like other evolutionary algorithms, the proposed algorithm starts with an initial population. Population individuals are of two types: particles and hyper-sphere centers that all together form particle sets. Searching the hyper-sphere inner space made by the hyper-sphere center and its particle is the basis of the proposed evolutionary algorithm. The HSS algorithm hopefully converges to a state at which there exists only one hyper-sphere center, and its particles are at the same position and have the same cost function value as the hyper-sphere center. Applying the proposed algorithm to some benchmark cost functions shows its ability in dealing with different types of optimization problems. The proposed method is compared with the genetic algorithm (GA), particle swarm optimization (PSO) and harmony search algorithm (HSA). The results show that the HSS algorithm has faster convergence and results in better solutions than GA, PSO and HSA.",
"title": ""
},
{
"docid": "83adebddcfd162922e55d89bf2dea9e6",
"text": "In this paper, we present an orientation inference framework for reconstructing implicit surfaces from unoriented point clouds. The proposed method starts from building a surface approximation hierarchy comprising of a set of unoriented local surfaces, which are represented as a weighted combination of radial basis functions. We formulate the determination of the globally consistent orientation as a graph optimization problem by treating the local implicit patches as nodes. An energy function is defined to penalize inconsistent orientation changes by checking the sign consistency between neighboring local surfaces. An optimal labeling of the graph nodes indicating the orientation of each local surface can, thus, be obtained by minimizing the total energy defined on the graph. The local inference results are propagated over the model in a front-propagation fashion to obtain the global solution. The reconstructed surfaces are consolidated by a simple and effective inspection procedure to locate the erroneously fitted local surfaces. A progressive reconstruction algorithm that iteratively includes more oriented points to improve the fitting accuracy and efficiently updates the RBF coefficients is proposed. We demonstrate the performance of the proposed method by showing the surface reconstruction results on some real-world 3-D data sets with comparison to those by using the previous methods.",
"title": ""
},
{
"docid": "897434ecb3fbf9ea6aae02aeca9cc267",
"text": "The three stage design of a microstrip slotted holy shaped patch structure intended to serve high frequency applications in the frequency range between 19.52 GHz to 31.5 GHz is proposed in this paper. The geometrical stages use FR4 epoxy substrate with small dimensions of 10 mm × 8.7 mm × 1.6 mm and employ coaxial feeding technique. An analysis of the three design stages has been done over HFSS-15to obtain the corresponding reflection coefficient, bandwidth, radiation pattern, gain and VSWR. The graphical as well as tabulated comparison of the standard parameters has been included in the results section.",
"title": ""
},
{
"docid": "a0e14f5c359de4aa8e7640cf4ff5effa",
"text": "In speech translation, we are faced with the problem of how to couple the speech recognition process and the translation process. Starting from the Bayes decision rule for speech translation, we analyze how the interaction between the recognition process and the translation process can be modelled. In the light of this decision rule, we discuss the already existing approaches to speech translation. None of the existing approaches seems to have addressed this direct interaction. We suggest two new methods, the local averaging approximation and the monotone alignments.",
"title": ""
},
{
"docid": "9a5137b87e70af421d93aa7dd70bfacd",
"text": "The human immune system has numerous properties that make it ripe for exploitation in the computational domain, such as robustness and fault tolerance, and many different algorithms, collectively termed Artificial Immune Systems (AIS), have been inspired by it. Two generations of AIS are currently in use, with the first generation relying on simplified immune models and the second generation utilising interdisciplinary collaboration to develop a deeper understanding of the immune system and hence produce more complex models. Both generations of algorithms have been successfully applied to a variety of problems, including anomaly detection, pattern recognition, optimisation and robotics. In this chapter an overview of AIS is presented, its evolution is discussed, and it is shown that the diversification of the field is linked to the diversity of the immune system itself, leading to a number of algorithms as opposed to one archetypal system. Two case studies are also presented to help provide insight into the mechanisms of AIS; these are the idiotypic network approach and the Dendritic Cell Algorithm.",
"title": ""
},
{
"docid": "8543e4cd67ef3f23efabd0b130bfe9f9",
"text": "A promising way of software reuse is Component-Based Software Development (CBSD). There is an increasing number of OSS products available that can be freely used in product development. However, OSS communities themselves have not yet taken full advantage of the “reuse mechanism”. Many OSS projects duplicate effort and code, even when sharing the same application domain and topic. One successful counter-example is the FFMpeg multimedia project, since several of its components are widely and consistently reused into other OSS projects. This paper documents the history of the libavcodec library of components from the FFMpeg project, which at present is reused in more than 140 OSS projects. Most of the recipients use it as a blackbox component, although a number of OSS projects keep a copy of it in their repositories, and modify it as such. In both cases, we argue that libavcodec is a successful example of reusable OSS library of compo-",
"title": ""
},
{
"docid": "7fa9bacbb6b08065ecfe0530f082a391",
"text": "This paper considers the task of articulated human pose estimation of multiple people in real world images. We propose an approach that jointly solves the tasks of detection and pose estimation: it infers the number of persons in a scene, identifies occluded body parts, and disambiguates body parts between people in close proximity of each other. This joint formulation is in contrast to previous strategies, that address the problem by first detecting people and subsequently estimating their body pose. We propose a partitioning and labeling formulation of a set of body-part hypotheses generated with CNN-based part detectors. Our formulation, an instance of an integer linear program, implicitly performs non-maximum suppression on the set of part candidates and groups them to form configurations of body parts respecting geometric and appearance constraints. Experiments on four different datasets demonstrate state-of-the-art results for both single person and multi person pose estimation.",
"title": ""
},
{
"docid": "c9e5a1b9c18718cc20344837e10b08f7",
"text": "Reconnaissance is the initial and essential phase of a successful advanced persistent threat (APT). In many cases, attackers collect information from social media, such as professional social networks. This information is used to select members that can be exploited to penetrate the organization. Detecting such reconnaissance activity is extremely hard because it is performed outside the organization premises. In this paper, we propose a framework for management of social network honeypots to aid in detection of APTs at the reconnaissance phase. We discuss the challenges that such a framework faces, describe its main components, and present a case study based on the results of a field trial conducted with the cooperation of a large European organization. In the case study, we analyze the deployment process of the social network honeypots and their maintenance in real social networks. The honeypot profiles were successfully assimilated into the organizational social network and received suspicious friend requests and mail messages that revealed basic indications of a potential forthcoming attack. In addition, we explore the behavior of employees in professional social networks, and their resilience and vulnerability toward social network infiltration.",
"title": ""
},
{
"docid": "682921e4e2f000384fdcb9dc6fbaa61a",
"text": "The use of Cloud Computing for computation offloading in the robotics area has become a field of interest today. The aim of this work is to demonstrate the viability of cloud offloading in a low level and intensive computing task: a vision-based navigation assistance of a service mobile robot. In order to do so, a prototype, running over a ROS-based mobile robot (Erratic by Videre Design LLC) is presented. The information extracted from on-board stereo cameras will be used by a private cloud platform consisting of five bare-metal nodes with AMD Phenom 965 × 4 CPU, with the cloud middleware Openstack Havana. The actual task is the shared control of the robot teleoperation, that is, the smooth filtering of the teleoperated commands with the detected obstacles to prevent collisions. All the possible offloading models for this case are presented and analyzed. Several performance results using different communication technologies and offloading models are explained as well. In addition to this, a real navigation case in a domestic circuit was done. The tests demonstrate that offloading computation to the Cloud improves the performance and navigation results with respect to the case where all processing is done by the robot.",
"title": ""
},
{
"docid": "6deaeb7d3fdb3a9ffce007af333061ac",
"text": "This paper proposes a simple CMOS exponential current circuit that is capable to control a Variable Gain Amplifier with a linear-in-dB manner. The proposed implementation is based on a Taylor's series approximation of the exponential function. A simple VGA architecture has been designed in a CMOS 90nm technology, in order to validate the theoretical analysis. The approximation achieves a 17dB linear range with less than 0.5dB approximation error, while the overall power consumption is less than 300μW.",
"title": ""
},
{
"docid": "977efac2809f4dc455e1289ef54008b0",
"text": "A novel 3-D NAND flash memory device, VSAT (Vertical-Stacked-Array-Transistor), has successfully been achieved. The VSAT was realized through a cost-effective and straightforward process called PIPE (planarized-Integration-on-the-same-plane). The VSAT combined with PIPE forms a unique 3-D vertical integration method that may be exploited for ultra-high-density Flash memory chip and solid-state-drive (SSD) applications. The off-current level in the polysilicon-channel transistor dramatically decreases by five orders of magnitude by using an ultra-thin body of 20 nm thick and a double-gate-in-series structure. In addition, hydrogen annealing improves the subthreshold swing and the mobility of the polysilicon-channel transistor.",
"title": ""
},
{
"docid": "3dfe5099c72f3ef3341c2d053ee0d2c2",
"text": "In this paper, the authors introduce a type of transverse flux reluctance machines. These machines work without permanent magnets or electric rotor excitation and hold several advantages, including a high power density, high torque, and compact design. Disadvantages are a high fundamental frequency and a high torque ripple that complicates the control of the motor. The device uses soft magnetic composites (SMCs) for the magnetic circuit, which allows complex stator geometries with 3-D magnetic flux paths. The winding is made from hollow copper tubes, which also form the main heat sink of the machine by using water as a direct copper coolant. Models concerning the design and computation of the magnetic circuit, torque, and the power output are described. A crucial point in this paper is the determination of hysteresis and eddy-current losses in the SMC and the calculation of power losses and current displacement in the copper winding. These are calculated with models utilizing a combination of analytic approaches and finite-element method simulations. Finally, a thermal model based on lumped parameters is introduced, and calculated temperature rises are presented.",
"title": ""
},
{
"docid": "8fd79b51fd744b675751c45cc0256787",
"text": "New grid codes demand the wind turbine systems to ride through recurring grid faults. In this paper, the performance of the doubly Ffed induction generator (DFIG) wind turbine system under recurring symmetrical grid faults is analyzed. The mathematical model of the DFIG under recurring symmetrical grid faults is established. The analysis is based on the DFIG wind turbine system with the typical low-voltage ride-through strategy-with rotor-side crowbar. The stator natural flux produced by the voltage recovery after the first grid fault may be superposed on the stator natural flux produced by the second grid fault, so that the transient rotor and stator current and torque fluctuations under the second grid fault may be influenced by the characteristic of the first grid fault, including the voltage dips level and the grid fault angle, as well as the duration between two faults. The mathematical model of the DFIG under recurring grid faults is verified by simulations on a 1.5-MW DFIG wind turbine system model and experiments on a 30-kW reduced scale DFIG test system.",
"title": ""
},
{
"docid": "ca4e2cff91621bca4018ce1eca5450e2",
"text": "Decentralized optimization algorithms have received much attention due to the recent advances in network information processing. However, conventional decentralized algorithms based on projected gradient descent are incapable of handling high-dimensional constrained problems, as the projection step becomes computationally prohibitive. To address this problem, this paper adopts a projection-free optimization approach, a.k.a. the Frank–Wolfe (FW) or conditional gradient algorithm. We first develop a decentralized FW (DeFW) algorithm from the classical FW algorithm. The convergence of the proposed algorithm is studied by viewing the decentralized algorithm as an <italic>inexact </italic> FW algorithm. Using a diminishing step size rule and letting <inline-formula><tex-math notation=\"LaTeX\">$t$ </tex-math></inline-formula> be the iteration number, we show that the DeFW algorithm's convergence rate is <inline-formula><tex-math notation=\"LaTeX\">${\\mathcal O}(1/t)$</tex-math></inline-formula> for convex objectives; is <inline-formula><tex-math notation=\"LaTeX\">${\\mathcal O}(1/t^2)$</tex-math></inline-formula> for strongly convex objectives with the optimal solution in the interior of the constraint set; and is <inline-formula> <tex-math notation=\"LaTeX\">${\\mathcal O}(1/\\sqrt{t})$</tex-math></inline-formula> toward a stationary point for smooth but nonconvex objectives. We then show that a consensus-based DeFW algorithm meets the above guarantees with two communication rounds per iteration. We demonstrate the advantages of the proposed DeFW algorithm on low-complexity robust matrix completion and communication efficient sparse learning. Numerical results on synthetic and real data are presented to support our findings.",
"title": ""
},
{
"docid": "d8c45560377ac2774b1bbe8b8a61b1fb",
"text": "Markov logic uses weighted formulas to compactly encode a probability distribution over possible worlds. Despite the use of logical formulas, Markov logic networks (MLNs) can be difficult to interpret, due to the often counter-intuitive meaning of their weights. To address this issue, we propose a method to construct a possibilistic logic theory that exactly captures what can be derived from a given MLN using maximum a posteriori (MAP) inference. Unfortunately, the size of this theory is exponential in general. We therefore also propose two methods which can derive compact theories that still capture MAP inference, but only for specific types of evidence. These theories can be used, among others, to make explicit the hidden assumptions underlying an MLN or to explain the predictions it makes.",
"title": ""
},
{
"docid": "70ea4bbe03f2f733ff995dc4e8fea920",
"text": "The spread of malicious or accidental misinformation in social media, especially in time-sensitive situations, such as real-world emergencies, can have harmful effects on individuals and society. In this work, we developed models for automated verification of rumors (unverified information) that propagate through Twitter. To predict the veracity of rumors, we identified salient features of rumors by examining three aspects of information spread: linguistic style used to express rumors, characteristics of people involved in propagating information, and network propagation dynamics. The predicted veracity of a time series of these features extracted from a rumor (a collection of tweets) is generated using Hidden Markov Models. The verification algorithm was trained and tested on 209 rumors representing 938,806 tweets collected from real-world events, including the 2013 Boston Marathon bombings, the 2014 Ferguson unrest, and the 2014 Ebola epidemic, and many other rumors about various real-world events reported on popular websites that document public rumors. The algorithm was able to correctly predict the veracity of 75% of the rumors faster than any other public source, including journalists and law enforcement officials. The ability to track rumors and predict their outcomes may have practical applications for news consumers, financial markets, journalists, and emergency services, and more generally to help minimize the impact of false information on Twitter.",
"title": ""
},
{
"docid": "26a9fb64389a5dbbbd8afdc6af0b6f07",
"text": "specifications of the essential structure of a system. Models in the analysis or preliminary design stages focus on the key concepts and mechanisms of the eventual system. They correspond in certain ways with the final system. But details are missing from the model, which must be added explicitly during the design process. The purpose of the abstract models is to get the high-level pervasive issues correct before tackling the more localized details. These models are intended to be evolved into the final models by a careful process that guarantees that the final system correctly implements the intent of the earlier models. There must be traceability from these essential models to the full models; otherwise, there is no assurance that the final system correctly incorporates the key properties that the essential model sought to show. Essential models focus on semantic intent. They do not need the full range of implementation options. Indeed, low-level performance distinctions often obscure the logical semantics. The path from an essential model to a complete implementation model must be clear and straightforward, however, whether it is generated automatically by a code generator or evolved manually by a designer. Full specifications of a final system. An implementation model includes enough information to build the system. It must include not only the logical semantics of the system and the algorithms, data structures, and mechanisms that ensure proper performance, but also organizational decisions about the system artifacts that are necessary for cooperative work by humans and processing by tools. This kind of model must include constructs for packaging the model for human understanding and for computer convenience. These are not properties of the target application itself. Rather, they are properties of the construction process. Exemplars of typical or possible systems. Well-chosen examples can give insight to humans and can validate system specifications and implementations. Even a large Chapter 2 • The Nature and Purpose of Models 17 collection of examples, however, necessarily falls short of a definitive description. Ultimately, we need models that specify the general case; that is what a program is, after all. Examples of typical data structures, interaction sequences, or object histories can help a human trying to understand a complicated situation, however. Examples must be used with some care. It is logically impossible to induce the general case from a set of examples, but well-chosen prototypes are the way most people think. An example model includes instances rather than general descriptors. It therefore tends to have a different feel than a generic descriptive model. Example models usually use only a subset of the UML constructs, those that deal with instances. Both descriptive models and exemplar models are useful in modeling a system. Complete or partial descriptions of systems. A model can be a complete description of a single system with no outside references. More often, it is organized as a set of distinct, discrete units, each of which may be stored and manipulated separately as a part of the entire description. Such models have “loose ends” that must be bound to other models in a complete system. Because the pieces have coherence and meaning, they can be combined with other pieces in various ways to produce many different systems. Achieving reuse is an important goal of good modeling. Models evolve over time. 
Models with greater degrees of detail are derived from more abstract models, and more concrete models are derived from more logical models. For example, a model might start as a high-level view of the entire system, with a few key services in brief detail and no embellishments. Over time, much more detail is added and variations are introduced. Also over time, the focus shifts from a front-end, user-centered logical view to a back-end, implementation-centered physical view. As the developers work with a system and understand it better, the model must be iterated at all levels to capture that understanding; it is impossible to understand a large system in a single, linear pass. There is no one “right” form for a model.",
"title": ""
},
{
"docid": "762197e61c90492d2d405fe2a832092f",
"text": "This paper proposes a methodology to design and optimize the footprint of miniaturized 3-dB branch-line hybrid couplers, which consists of high-impedance transmission lines and distributed capacitors. To minimize the physical size of the coupler, the distributed capacitors are placed within the empty space of the hybrid. The proposed design methodology calls for the joint optimization of the length of the reduced high-impedance transmission lines and the area of the distributed capacitors. A prototype at S-band was designed and built to validate the approach. It showed a size reduction by 62% compared with the conventional 3-dB branch-line hybrid coupler while providing similar performance and bandwidth.",
"title": ""
},
{
"docid": "cca9972ce9d49d1347274b446e6be00b",
"text": "Miura folding is famous all over the world. It is an element of the ancient Japanese tradition of origami and reaches as far as astronautical engineering through the construction of solar panels. This article explains how to achieve the Miura folding, and describes its application to maps. The author also suggests in this context that nature may abhor the right angle, according to observation of the wing base of a dragonfly. AMS Subject Classification: 51M05, 00A09, 97A20",
"title": ""
}
] |
scidocsrr
|
cbc2c0f62b7501d1880d4f27128d399d
|
Salient Structure Detection by Context-Guided Visual Search
|
[
{
"docid": "c0dbb410ebd6c84bd97b5f5e767186b3",
"text": "A new hypothesis about the role of focused attention is proposed. The feature-integration theory of attention suggests that attention must be directed serially to each stimulus in a display whenever conjunctions of more than one separable feature are needed to characterize or distinguish the possible objects presented. A number of predictions were tested in a variety of paradigms including visual search, texture segregation, identification and localization, and using both separable dimensions (shape and color) and local elements or parts of figures (lines, curves, etc. in letters) as the features to be integrated into complex wholes. The results were in general consistent with the hypothesis. They offer a new set of criteria for distinguishing separable from integral features and a new rationale for predicting which tasks will show attention limits and which will not.",
"title": ""
}
] |
[
{
"docid": "b42f3575dad9615a40f491291661e7c5",
"text": "Novel neural models have been proposed in recent years for learning under domain shift. Most models, however, only evaluate on a single task, on proprietary datasets, or compare to weak baselines, which makes comparison of models difficult. In this paper, we re-evaluate classic general-purpose bootstrapping approaches in the context of neural networks under domain shifts vs. recent neural approaches and propose a novel multi-task tri-training method that reduces the time and space complexity of classic tri-training. Extensive experiments on two benchmarks are negative: while our novel method establishes a new state-of-the-art for sentiment analysis, it does not fare consistently the best. More importantly, we arrive at the somewhat surprising conclusion that classic tri-training, with some additions, outperforms the state of the art. We conclude that classic approaches constitute an important and strong baseline.",
"title": ""
},
{
"docid": "f84de5ba61de555c2d90afc2c8c2b465",
"text": "Visual sensor networks have emerged as an important class of sensor-based distributed intelligent systems, with unique performance, complexity, and quality of service challenges. Consisting of a large number of low-power camera nodes, visual sensor networks support a great number of novel vision-based applications. The camera nodes provide information from a monitored site, performing distributed and collaborative processing of their collected data. Using multiple cameras in the network provides different views of the scene, which enhances the reliability of the captured events. However, the large amount of image data produced by the cameras combined with the network’s resource constraints require exploring new means for data processing, communication, and sensor management. Meeting these challenges of visual sensor networks requires interdisciplinary approaches, utilizing vision processing, communications and networking, and embedded processing. In this paper, we provide an overview of the current state-of-the-art in the field of visual sensor networks, by exploring several relevant research directions. Our goal is to provide a better understanding of current research problems in the different research fields of visual sensor networks, and to show how these different research fields should interact to solve the many challenges of visual sensor networks.",
"title": ""
},
{
"docid": "520de9b576c112171ce0d08650a25093",
"text": "Figurative language represents one of the most difficult tasks regarding natural language processing. Unlike literal language, figurative language takes advantage of linguistic devices such as irony, humor, sarcasm, metaphor, analogy, and so on, in order to communicate indirect meanings which, usually, are not interpretable by simply decoding syntactic or semantic information. Rather, figurative language reflects patterns of thought within a communicative and social framework that turns quite challenging its linguistic representation, as well as its computational processing. In this Ph. D. thesis we address the issue of developing a linguisticbased framework for figurative language processing. In particular, our efforts are focused on creating some models capable of automatically detecting instances of two independent figurative devices in social media texts: humor and irony. Our main hypothesis relies on the fact that language reflects patterns of thought; i.e. to study language is to study patterns of conceptualization. Thus, by analyzing two specific domains of figurative language, we aim to provide arguments concerning how people mentally conceive humor and irony, and how they verbalize each device in social media platforms. In this context, we focus on showing how fine-grained knowledge, which relies on shallow and deep linguistic layers, can be translated into valuable patterns to automatically identify figurative uses of language. Contrary to most researches that deal with figurative language, we do not support our arguments on prototypical examples neither of humor nor of irony. Rather, we try to find patterns in texts such as blogs, web comments, tweets, etc., whose intrinsic characteristics are quite different to the characteristics described in the specialized literature. Apart from providing a linguistic inventory for detecting humor and irony at textual level, in this investigation we stress out the importance of considering user-generated tags in order to automatically build resources for figurative language processing, such as ad hoc corpora in which human annotation is not necessary. Finally, each model is evaluated in terms of its relevance to properly identify instances of humor and irony, respectively. To this end, several experiments are carried out taking into consideration different data sets and applicability scenarios. Our findings point out that figurative language processing (especially humor and irony) can provide fine-grained knowledge in tasks as diverse as sentiment analysis, opinion mining, information retrieval, or trend discovery.",
"title": ""
},
{
"docid": "62f4c947cae38cc7071b87597b54324a",
"text": "A bugbear of uncalibrated stereo reconstruction is that cameras which deviate from the pinhole model have to be pre-calibrated in order to correct for nonlinear lens distortion. If they are not, and point correspondence is attempted using the uncorrected images, the matching constraints provided by the fundamental matrix must be set so loose that point matching is significantly hampered. This paper shows how linear estimation of the fundamental matrix from two-view point correspondences may be augmented to include one term of radial lens distortion. This is achieved by (1) changing from the standard radiallens model to another which (as we show) has equivalent power, but which takes a simpler form in homogeneous coordinates, and (2) expressing fundamental matrix estimation as a Quadratic Eigenvalue Problem (QEP), for which efficient algorithms are well known. I derive the new estimator, and compare its performance against bundle-adjusted calibration-grid data. The new estimator is fast enough to be included in a RANSAC-based matching loop, and we show cases of matching being rendered possible by its use. I show how the same lens can be calibrated in a natural scene where the lack of straight lines precludes most previous techniques. The modification when the multi-view relation is a planar homography or trifocal tensor is described.",
"title": ""
},
{
"docid": "d061ac8a6c312c768a9dfc6e59cfe6a8",
"text": "The assessment of crop yield losses is needed for the improvement of production systems that contribute to the incomes of rural families and food security worldwide. However, efforts to quantify yield losses and identify their causes are still limited, especially for perennial crops. Our objectives were to quantify primary yield losses (incurred in the current year of production) and secondary yield losses (resulting from negative impacts of the previous year) of coffee due to pests and diseases, and to identify the most important predictors of coffee yields and yield losses. We established an experimental coffee parcel with full-sun exposure that consisted of six treatments, which were defined as different sequences of pesticide applications. The trial lasted three years (2013-2015) and yield components, dead productive branches, and foliar pests and diseases were assessed as predictors of yield. First, we calculated yield losses by comparing actual yields of specific treatments with the estimated attainable yield obtained in plots which always had chemical protection. Second, we used structural equation modeling to identify the most important predictors. Results showed that pests and diseases led to high primary yield losses (26%) and even higher secondary yield losses (38%). We identified the fruiting nodes and the dead productive branches as the most important and useful predictors of yields and yield losses. These predictors could be added in existing mechanistic models of coffee, or can be used to develop new linear mixed models to estimate yield losses. Estimated yield losses can then be related to production factors to identify corrective actions that farmers can implement to reduce losses. The experimental and modeling approaches of this study could also be applied in other perennial crops to assess yield losses.",
"title": ""
},
{
"docid": "abdc445e498c6d04e8f046e9c2610f9f",
"text": "Ontologies have recently received popularity in the area of knowledge management and knowledge sharing, especially after the evolution of the Semantic Web and its supporting technologies. An ontology defines the terms and concepts (meaning) used to describe and represent an area of knowledge.The aim of this paper is to identify all possible existing ontologies and ontology management tools (Protégé 3.4, Apollo, IsaViz & SWOOP) that are freely available and review them in terms of: a) interoperability, b) openness, c) easiness to update and maintain, d) market status and penetration. The results of the review in ontologies are analyzed for each application area, such as transport, tourism, personal services, health and social services, natural languages and other HCI-related domains. Ontology Building/Management Tools are used by different groups of people for performing diverse tasks. Although each tool provides different functionalities, most of the users just use only one, because they are not able to interchange their ontologies from one tool to another. In addition, we considered the compatibility of different ontologies with different development and management tools. The paper is also concerns the detection of commonalities and differences between the examined ontologies, both on the same domain (application area) and among different domains.",
"title": ""
},
{
"docid": "376911fb47b9954a35f9910326f9b97e",
"text": "Immunotherapy enhances a patient’s immune system to fight disease and has recently been a source of promising new cancer treatments. Among the many immunotherapeutic strategies, immune checkpoint blockade has shown remarkable benefit in the treatment of a range of cancer types. Immune checkpoint blockade increases antitumor immunity by blocking intrinsic downregulators of immunity, such as cytotoxic T-lymphocyte antigen 4 (CTLA-4) and programmed cell death 1 (PD-1) or its ligand, programmed cell death ligand 1 (PD-L1). Several immune checkpoint–directed antibodies have increased overall survival for patients with various cancers and are approved by the Food and Drug Administration (Table 1). By increasing the activity of the immune system, immune checkpoint blockade can have inflammatory side effects, which are often termed immune-related adverse events. Although any organ system can be affected, immune-related adverse events most commonly involve the gastrointestinal tract, endocrine glands, skin, and liver.1 Less often, the central nervous system and cardiovascular, pulmonary, musculoskeletal, and hematologic systems are involved. The wide range of potential immune-related adverse events requires multidisciplinary, collaborative management by providers across the clinical spectrum (Fig. 1). No prospective trials have defined strategies for effectively managing specific immune-related adverse events; thus, clinical practice remains variable. Nevertheless, several professional organizations are working to harmonize expert consensus on managing specific immune-related adverse events. In this review, we focus on 10 essential questions practitioners will encounter while caring for the expanding population of patients with cancer who are being treated with immune checkpoint blockade (Table 2).",
"title": ""
},
{
"docid": "cb00e564a81ace6b75e776f1fe41fb8f",
"text": "INDIVIDUAL PROCESSES IN INTERGROUP BEHAVIOR ................................ 3 From Individual to Group Impressions ...................................................................... 3 GROUP MEMBERSHIP AND INTERGROUP BEHAVIOR .................................. 7 The Scope and Range of Ethnocentrism .................................................................... 8 The Development of Ethnocentrism .......................................................................... 9 Intergroup Conflict and Competition ........................................................................ 12 Interpersonal and intergroup behavior ........................................................................ 13 Intergroup conflict and group cohesion ........................................................................ 15 Power and status in intergroup behavior ...................................................................... 16 Social Categorization and Intergroup Behavior ........................................................ 20 Social categorization: cognitions, values, and groups ...................................................... 20 Social categorization a d intergroup discrimination ...................................................... 23 Social identity and social comparison .......................................................................... 24 THE REDUCTION FINTERGROUP DISCRIMINATION ................................ 27 Intergroup Cooperation and Superordinate Goals \" 28 Intergroup Contact. .... ................................................................................................ 28 Multigroup Membership and \"lndividualizat~’on\" of the Outgroup .......................... 29 SUMMARY .................................................................................................................... 30",
"title": ""
},
{
"docid": "fb941f03dd02f1d7fc7ded54ae462afd",
"text": "In this paper we discuss the development and implementation of an Arabic automatic speech recognition engine. The engine can recognize both continuous speech and isolated words. The system was developed using the Hidden Markov Model Toolkit. First, an Arabic dictionary was built by composing the words to its phones. Next, Mel Frequency Cepstral Coefficients (MFCC) of the speech samples are derived to extract the speech feature vectors. Then, the training of the engine based on triphones is developed to estimate the parameters for a Hidden Markov Model. To test the engine, the database consisting of speech utterance from thirteen Arabian native speakers is used which is divided into ten speaker-dependent and three speaker-independent samples. The experimental results showed that the overall system performance was 90.62%, 98.01 % and 97.99% for sentence correction, word correction and word accuracy respectively.",
"title": ""
},
{
"docid": "e95fa624bb3fd7ea45650213088a43b0",
"text": "In recent years, much research has been conducted on image super-resolution (SR). To the best of our knowledge, however, few SR methods were concerned with compressed images. The SR of compressed images is a challenging task due to the complicated compression artifacts, while many images suffer from them in practice. The intuitive solution for this difficult task is to decouple it into two sequential but independent subproblems, i.e., compression artifacts reduction (CAR) and SR. Nevertheless, some useful details may be removed in CAR stage, which is contrary to the goal of SR and makes the SR stage more challenging. In this paper, an end-to-end trainable deep convolutional neural network is designed to perform SR on compressed images (CISRDCNN), which reduces compression artifacts and improves image resolution jointly. Experiments on compressed images produced by JPEG (we take the JPEG as an example in this paper) demonstrate that the proposed CISRDCNN yields state-of-the-art SR performance on commonly used test images and imagesets. The results of CISRDCNN on real low quality web images are also very impressive, with obvious quality enhancement. Further, we explore the application of the proposed SR method in low bit-rate image coding, leading to better rate-distortion performance than JPEG.",
"title": ""
},
{
"docid": "a73da9191651ae5d0330d6f64f838f67",
"text": "Language selection (or control) refers to the cognitive mechanism that controls which language to use at a given moment and context. It allows bilinguals to selectively communicate in one target language while minimizing the interferences from the nontarget language. Previous studies have suggested the participation in language control of different brain areas. However, the question remains whether the selection of one language among others relies on a language-specific neural module or general executive regions that also allow switching between different competing behavioral responses including the switching between various linguistic registers. In this functional magnetic resonance imaging study, we investigated the neural correlates of language selection processes in German-French bilingual subjects during picture naming in different monolingual and bilingual selection contexts. We show that naming in the first language in the bilingual context (compared with monolingual contexts) increased activation in the left caudate and anterior cingulate cortex. Furthermore, the activation of these areas is even more extended when the subjects are using a second weaker language. These findings show that language control processes engaged in contexts during which both languages must remain active recruit the left caudate and the anterior cingulate cortex (ACC) in a manner that can be distinguished from areas engaged in intralanguage task switching.",
"title": ""
},
{
"docid": "e67b75e11ca6dd9b4e6c77b3cb92cceb",
"text": "The incidence of malignant melanoma continues to increase worldwide. This cancer can strike at any age; it is one of the leading causes of loss of life in young persons. Since this cancer is visible on the skin, it is potentially detectable at a very early stage when it is curable. New developments have converged to make fully automatic early melanoma detection a real possibility. First, the advent of dermoscopy has enabled a dramatic boost in clinical diagnostic ability to the point that melanoma can be detected in the clinic at the very earliest stages. The global adoption of this technology has allowed accumulation of large collections of dermoscopy images of melanomas and benign lesions validated by histopathology. The development of advanced technologies in the areas of image processing and machine learning have given us the ability to allow distinction of malignant melanoma from the many benign mimics that require no biopsy. These new technologies should allow not only earlier detection of melanoma, but also reduction of the large number of needless and costly biopsy procedures. Although some of the new systems reported for these technologies have shown promise in preliminary trials, widespread implementation must await further technical progress in accuracy and reproducibility. In this paper, we provide an overview of computerized detection of melanoma in dermoscopy images. First, we discuss the various aspects of lesion segmentation. Then, we provide a brief overview of clinical feature segmentation. Finally, we discuss the classification stage where machine learning algorithms are applied to the attributes generated from the segmented features to predict the existence of melanoma.",
"title": ""
},
{
"docid": "b898d7a2da7a10ef756317bc7f44f37c",
"text": "Cellulosomes are multienzyme complexes that are produced by anaerobic cellulolytic bacteria for the degradation of lignocellulosic biomass. They comprise a complex of scaffoldin, which is the structural subunit, and various enzymatic subunits. The intersubunit interactions in these multienzyme complexes are mediated by cohesin and dockerin modules. Cellulosome-producing bacteria have been isolated from a large variety of environments, which reflects their prevalence and the importance of this microbial enzymatic strategy. In a given species, cellulosomes exhibit intrinsic heterogeneity, and between species there is a broad diversity in the composition and configuration of cellulosomes. With the development of modern technologies, such as genomics and proteomics, the full protein content of cellulosomes and their expression levels can now be assessed and the regulatory mechanisms identified. Owing to their highly efficient organization and hydrolytic activity, cellulosomes hold immense potential for application in the degradation of biomass and are the focus of much effort to engineer an ideal microorganism for the conversion of lignocellulose to valuable products, such as biofuels.",
"title": ""
},
{
"docid": "ddd353b5903f12c14cc3af1163ac617c",
"text": "Unmanned Aerial Vehicles (UAVs) have recently received notable attention because of their wide range of applications in urban civilian use and in warfare. With air traffic densities increasing, it is more and more important for UAVs to be able to predict and avoid collisions. The main goal of this research effort is to adjust real-time trajectories for cooperative UAVs to avoid collisions in three-dimensional airspace. To explore potential collisions, predictive state space is utilized to present the waypoints of UAVs in the upcoming situations, which makes the proposed method generate the initial collision-free trajectories satisfying the necessary constraints in a short time. Further, a rolling optimization algorithm (ROA) can improve the initial waypoints, minimizing its total distance. Several scenarios are illustrated to verify the proposed algorithm, and the results show that our algorithm can generate initial collision-free trajectories more efficiently than other methods in the common airspace.",
"title": ""
},
{
"docid": "cbcdc411e22786dcc1b3655c5e917fae",
"text": "Local intracellular Ca(2+) transients, termed Ca(2+) sparks, are caused by the coordinated opening of a cluster of ryanodine-sensitive Ca(2+) release channels in the sarcoplasmic reticulum of smooth muscle cells. Ca(2+) sparks are activated by Ca(2+) entry through dihydropyridine-sensitive voltage-dependent Ca(2+) channels, although the precise mechanisms of communication of Ca(2+) entry to Ca(2+) spark activation are not clear in smooth muscle. Ca(2+) sparks act as a positive-feedback element to increase smooth muscle contractility, directly by contributing to the global cytoplasmic Ca(2+) concentration ([Ca(2+)]) and indirectly by increasing Ca(2+) entry through membrane potential depolarization, caused by activation of Ca(2+) spark-activated Cl(-) channels. Ca(2+) sparks also have a profound negative-feedback effect on contractility by decreasing Ca(2+) entry through membrane potential hyperpolarization, caused by activation of large-conductance, Ca(2+)-sensitive K(+) channels. In this review, the roles of Ca(2+) sparks in positive- and negative-feedback regulation of smooth muscle function are explored. We also propose that frequency and amplitude modulation of Ca(2+) sparks by contractile and relaxant agents is an important mechanism to regulate smooth muscle function.",
"title": ""
},
{
"docid": "31e052aaf959a4c5d6f1f3af6587d6cd",
"text": "We introduce a learning framework called learning using privileged information (LUPI) to the computer vision field. We focus on the prototypical computer vision problem of teaching computers to recognize objects in images. We want the computers to be able to learn faster at the expense of providing extra information during training time. As additional information about the image data, we look at several scenarios that have been studied in computer vision before: attributes, bounding boxes and image tags. The information is privileged as it is available at training time but not at test time. We explore two maximum-margin techniques that are able to make use of this additional source of information, for binary and multiclass object classification. We interpret these methods as learning easiness and hardness of the objects in the privileged space and then transferring this knowledge to train a better classifier in the original space. We provide a thorough analysis and comparison of information transfer from privileged to the original data spaces for both LUPI methods. Our experiments show that incorporating privileged information can improve the classification accuracy. Finally, we conduct user studies to understand which samples are easy and which are hard for human learning, and explore how this information is related to easy and hard samples when learning a classifier.",
"title": ""
},
{
"docid": "72be75e973b6a843de71667566b44929",
"text": "We think that hand pose estimation technologies with a camera should be developed for character conversion systems from sign languages with a not so high performance terminal. Fingernail positions can be used for getting finger information which can’t be obtained from outline information. Therefore, we decided to construct a practical fingernail detection system. The previous fingernail detection method, using distribution density of strong nail-color pixels, was not good at removing some skin areas having gloss like finger side area. Therefore, we should use additional information to remove them. We thought that previous method didn’t use boundary information and this information would be available. Color continuity information is available for getting it. In this paper, therefore, we propose a new fingernail detection method using not only distribution density but also color continuity to improve accuracy. We investigated the relationship between wrist rotation angles and percentages of correct detection. The number of users was three. As a result, we confirmed that our proposed method raised accuracy compared with previous method and could detect only fingernails with at least 85% probability from -90 to 40 degrees and from 40 to 90 degrees. Therefore, we concluded that our proposed method was effective.",
"title": ""
},
{
"docid": "56f18b39a740dd65fc2907cdef90ac99",
"text": "This paper describes a dynamic artificial neural network based mobile robot motion and path planning system. The method is able to navigate a robot car on flat surface among static and moving obstacles, from any starting point to any endpoint. The motion controlling ANN is trained online with an extended backpropagation through time algorithm, which uses potential fields for obstacle avoidance. The paths of the moving obstacles are predicted with other ANNs for better obstacle avoidance. The method is presented through the realization of the navigation system of a mobile robot.",
"title": ""
},
{
"docid": "262c11ab9f78e5b3f43a31ad22cf23c5",
"text": "Responding to threats in the environment is crucial for survival. Certain types of threat produce defensive responses without necessitating previous experience and are considered innate, whereas other threats are learned by experiencing aversive consequences. Two important innate threats are whether an encountered stimulus is a member of the same species (social threat) and whether a stimulus suddenly appears proximal to the body (proximal threat). These threats are manifested early in human development and robustly elicit defensive responses. Learned threat, on the other hand, enables adaptation to threats in the environment throughout the life span. A well-studied form of learned threat is fear conditioning, during which a neutral stimulus acquires the ability to eliciting defensive responses through pairings with an aversive stimulus. If innate threats can facilitate fear conditioning, and whether different types of innate threats can enhance each other, is largely unknown. We developed an immersive virtual reality paradigm to test how innate social and proximal threats are related to each other and how they influence conditioned fear. Skin conductance responses were used to index the autonomic component of the defensive response. We found that social threat modulates proximal threat, but that neither proximal nor social threat modulates conditioned fear. Our results suggest that distinct processes regulate autonomic activity in response to proximal and social threat on the one hand, and conditioned fear on the other.",
"title": ""
},
{
"docid": "d7a620c961341e35fc8196b331fb0e68",
"text": "Software vulnerabilities have had a devastating effect on the Internet. Worms such as CodeRed and Slammer can compromise hundreds of thousands of hosts within hours or even minutes, and cause millions of dollars of damage [32, 51]. To successfully combat these fast automatic Internet attacks, we need fast automatic attack detection and filtering mechanisms. In this paper we propose dynamic taint analysis for automatic detection and analysis of overwrite attacks, which include most types of exploits. This approach does not need source code or special compilation for the monitored program, and hence works on commodity software. To demonstrate this idea, we have implemented TaintCheck, a mechanism that can perform dynamic taint analysis by performing binary rewriting at run time. We show that TaintCheck reliably detects most types of exploits. We found that TaintCheck produced no false positives for any of the many different programs that we tested. Further, we show how we can use a two-tiered approach to build a hybrid exploit detector that enjoys the same accuracy as TaintCheck but have extremely low performance overhead. Finally, we propose a new type of automatic signature generation—semanticanalysis based signature generation. We show that by backtracing the chain of tainted data structure rooted at the detection point, TaintCheck can automatically identify which original flow and which part of the original flow have caused the attack and identify important invariants of the payload that can be used as signatures. Semantic-analysis based signature generation can be more accurate, resilient against polymorphic worms, and robust to attacks exploiting polymorphism than the pattern-extraction based signature generation methods.",
"title": ""
}
] |
scidocsrr
|
2f1a5e3459587e0c087e498679e2b507
|
How to Combine Homomorphic Encryption and Garbled Circuits Improved Circuits and Computing the Minimum Distance Efficiently
|
[
{
"docid": "3afa5356d956e2a525836b873442aa6b",
"text": "The problem of secure data processing by means of a neural network (NN) is addressed. Secure processing refers to the possibility that the NN owner does not get any knowledge about the processed data since they are provided to him in encrypted format. At the same time, the NN itself is protected, given that its owner may not be willing to disclose the knowledge embedded within it. The considered level of protection ensures that the data provided to the network and the network weights and activation functions are kept secret. Particular attention is given to prevent any disclosure of information that could bring a malevolent user to get access to the NN secrets by properly inputting fake data to any point of the proposed protocol. With respect to previous works in this field, the interaction between the user and the NN owner is kept to a minimum with no resort to multiparty computation protocols.",
"title": ""
}
] |
[
{
"docid": "2c56891c1c9f128553bab35d061049b8",
"text": "RISC vs. CISC wars raged in the 1980s when chip area and processor design complexity were the primary constraints and desktops and servers exclusively dominated the computing landscape. Today, energy and power are the primary design constraints and the computing landscape is significantly different: growth in tablets and smartphones running ARM (a RISC ISA) is surpassing that of desktops and laptops running x86 (a CISC ISA). Further, the traditionally low-power ARM ISA is entering the high-performance server market, while the traditionally high-performance x86 ISA is entering the mobile low-power device market. Thus, the question of whether ISA plays an intrinsic role in performance or energy efficiency is becoming important, and we seek to answer this question through a detailed measurement based study on real hardware running real applications. We analyze measurements on the ARM Cortex-A8 and Cortex-A9 and Intel Atom and Sandybridge i7 microprocessors over workloads spanning mobile, desktop, and server computing. Our methodical investigation demonstrates the role of ISA in modern microprocessors' performance and energy efficiency. We find that ARM and x86 processors are simply engineering design points optimized for different levels of performance, and there is nothing fundamentally more energy efficient in one ISA class or the other. The ISA being RISC or CISC seems irrelevant.",
"title": ""
},
{
"docid": "362b1a5119733eba058d1faab2d23ebf",
"text": "§ Mission and structure of the project. § Overview of the Stone Man version of the Guide to the SWEBOK. § Status and development process of the Guide. § Applications of the Guide in the fields of education, human resource management, professional development and licensing and certification. § Class exercise in applying the Guide to defining the competencies needed to support software life cycle process deployment. § Strategy for uptake and promotion of the Guide. § Discussion of promotion, trial usage and experimentation. Workshop Leaders:",
"title": ""
},
{
"docid": "8e3366b6102ad6420972d4daee40d2a8",
"text": "Containers are increasingly gaining popularity and becoming one of the major deployment models in cloud environments. To evaluate the performance of scheduling and allocation policies in containerized cloud data centers, there is a need for evaluation environments that support scalable and repeatable experiments. Simulation techniques provide repeatable and controllable environments, and hence, they serve as a powerful tool for such purpose. This paper introduces ContainerCloudSim, which provides support for modeling and simulation of containerized cloud computing environments. We developed a simulation architecture for containerized clouds and implemented it as an extension of CloudSim. We described a number of use cases to demonstrate how one can plug in and compare their container scheduling and provisioning policies in terms of energy efficiency and SLA compliance. Our system is highly scalable as it supports simulation of large number of containers, given that there are more containers than virtual machines in a data center. Copyright © 2016 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "ed77ce10f448cb58568a63089903a4a8",
"text": "Sentence representation at the semantic level is a challenging task for Natural Language Processing and Artificial Intelligence. Despite the advances in word embeddings (i.e. word vector representations), capturing sentence meaning is an open question due to complexities of semantic interactions among words. In this paper, we present an embedding method, which is aimed at learning unsupervised sentence representations from unlabeled text. We propose an unsupervised method that models a sentence as a weighted series of word embeddings. The weights of the word embeddings are fitted by using Shannon’s word entropies provided by the Term Frequency–Inverse Document Frequency (TF–IDF) transform. The hyperparameters of the model can be selected according to the properties of data (e.g. sentence length and textual gender). Hyperparameter selection involves word embedding methods and dimensionalities, as well as weighting schemata. Our method offers advantages over existing methods: identifiable modules, short-term training, online inference of (unseen) sentence representations, as well as independence from domain, external knowledge and language resources. Results showed that our model outperformed the state of the art in well-known Semantic Textual Similarity (STS) benchmarks. Moreover, our model reached state-of-the-art performance when compared to supervised and knowledge-based STS systems. Corresponding author Email addresses: [email protected] (Ignacio Arroyo-Fernández), [email protected] (Carlos-Francisco Méndez-Cruz), [email protected] (Gerardo Sierra), [email protected] (Juan-Manuel Torres-Moreno), [email protected] (Grigori Sidorov) 1Av. Universidad s/n Col. Chamilpa 62210, Cuernavaca, Morelos 2AV. Universidad No. 3000, Ciudad universitaria, Coyoacán 04510, Ciudad de México 3Université d’Avignon et des Pays de Vaucluse. 339 chemin des Meinajaries 84911, Avignon cedex 9, France 4Instituto Politécnico Nacional. Av. Juan de Dios Bátiz, Esq. Miguel Othón de Mendizábal, Col. Nueva Industrial Vallejo, Gustavo A. Madero 07738, Ciudad de México Preprint submitted to Journal October 23, 2017",
"title": ""
},
{
"docid": "6f4e5448f956017c39c1727e0eb5de7b",
"text": "Recently, community search over graphs has attracted significant attention and many algorithms have been developed for finding dense subgraphs from large graphs that contain given query nodes. In applications such as analysis of protein protein interaction (PPI) networks, citation graphs, and collaboration networks, nodes tend to have attributes. Unfortunately, most previously developed community search algorithms ignore these attributes and result in communities with poor cohesion w.r.t. their node attributes. In this paper, we study the problem of attribute-driven community search, that is, given an undirected graph G where nodes are associated with attributes, and an input query Q consisting of nodes Vq and attributes Wq , find the communities containing Vq , in which most community members are densely inter-connected and have similar attributes. We formulate our problem of finding attributed truss communities (ATC), as finding all connected and close k-truss subgraphs containing Vq, that are locally maximal and have the largest attribute relevance score among such subgraphs. We design a novel attribute relevance score function and establish its desirable properties. The problem is shown to be NP-hard. However, we develop an efficient greedy algorithmic framework, which finds a maximal k-truss containing Vq, and then iteratively removes the nodes with the least popular attributes and shrinks the graph so as to satisfy community constraints. We also build an elegant index to maintain the known k-truss structure and attribute information, and propose efficient query processing algorithms. Extensive experiments on large real-world networks with ground-truth communities shows the efficiency and effectiveness of our proposed methods.",
"title": ""
},
{
"docid": "a973ed3011d9c07ddab4c15ef82fe408",
"text": "OBJECTIVES\nTo assess the efficacy of a 6-week interdisciplinary treatment that combines coordinated psychological, medical, educational, and physiotherapeutic components (PSYMEPHY) over time compared to standard pharmacologic care.\n\n\nMETHODS\nRandomised controlled trial with follow-up at 6 months for the PSYMEPHY and control groups and 12 months for the PSYMEPHY group. Participants were 153 outpatients with FM recruited from a hospital pain management unit. Patients randomly allocated to the control group (CG) received standard pharmacologic therapy. The experimental group (EG) received an interdisciplinary treatment (12 sessions). The main outcome was changes in quality of life, and secondary outcomes were pain, physical function, anxiety, depression, use of pain coping strategies, and satisfaction with treatment as measured by the Fibromyalgia Impact Questionnaire, the Hospital Anxiety and Depression Scale, the Coping with Chronic Pain Questionnaire, and a question regarding satisfaction with the treatment.\n\n\nRESULTS\nSix months after the intervention, significant improvements in quality of life (p=0.04), physical function (p=0.01), and pain (p=0.03) were seen in the PSYMEPHY group (n=54) compared with controls (n=56). Patients receiving the intervention reported greater satisfaction with treatment. Twelve months after the intervention, patients in the PSYMEPHY group (n=58) maintained statistically significant improvements in quality of life, physical functioning, pain, and symptoms of anxiety and depression, and were less likely to use maladaptive passive coping strategies compared to baseline.\n\n\nCONCLUSIONS\nAn interdisciplinary treatment for FM was associated with improvements in quality of life, pain, physical function, anxiety and depression, and pain coping strategies up to 12 months after the intervention.",
"title": ""
},
{
"docid": "e2817500683f4eea7e4ed9e0484b303a",
"text": "This paper presents the Transport Disruption ontology, a formal framework for modelling travel and transport related events that have a disruptive impact on traveller’s journeys. We discuss related models, describe how transport events and their impacts are captured, and outline use of the ontology within an interlinked repository of the travel information to support intelligent transport systems.",
"title": ""
},
{
"docid": "e7d955c48e5bdd86ae21a61fcd130ae2",
"text": "We present Deep Graph Infomax (DGI), a general approach for learning node representations within graph-structured data in an unsupervised manner. DGI relies on maximizing mutual information between patch representations and corresponding high-level summaries of graphs—both derived using established graph convolutional network architectures. The learnt patch representations summarize subgraphs centered around nodes of interest, and can thus be reused for downstream node-wise learning tasks. In contrast to most prior approaches to unsupervised learning with GCNs, DGI does not rely on random walk objectives, and is readily applicable to both transductive and inductive learning setups. We demonstrate competitive performance on a variety of node classification benchmarks, which at times even exceeds the performance of supervised learning.",
"title": ""
},
{
"docid": "9d5ba6f0beb2c9f03ea29f8fc35d51bb",
"text": "Independent component analysis (ICA) is a promising analysis method that is being increasingly applied to fMRI data. A principal advantage of this approach is its applicability to cognitive paradigms for which detailed models of brain activity are not available. Independent component analysis has been successfully utilized to analyze single-subject fMRI data sets, and an extension of this work would be to provide for group inferences. However, unlike univariate methods (e.g., regression analysis, Kolmogorov-Smirnov statistics), ICA does not naturally generalize to a method suitable for drawing inferences about groups of subjects. We introduce a novel approach for drawing group inferences using ICA of fMRI data, and present its application to a simple visual paradigm that alternately stimulates the left or right visual field. Our group ICA analysis revealed task-related components in left and right visual cortex, a transiently task-related component in bilateral occipital/parietal cortex, and a non-task-related component in bilateral visual association cortex. We address issues involved in the use of ICA as an fMRI analysis method such as: (1) How many components should be calculated? (2) How are these components to be combined across subjects? (3) How should the final results be thresholded and/or presented? We show that the methodology we present provides answers to these questions and lay out a process for making group inferences from fMRI data using independent component analysis.",
"title": ""
},
{
"docid": "868f6c927cf500aed70cfb921b0564b2",
"text": "The battery management system (BMS) is a critical component of electric and hybrid electric vehicles. The purpose of the BMS is to guarantee safe and reliable battery operation. To maintain the safety and reliability of the battery, state monitoring and evaluation, charge control, and cell balancing are functionalities that have been implemented in BMS. As an electrochemical product, a battery acts differently under different operational and environmental conditions. The uncertainty of a battery’s performance poses a challenge to the implementation of these functions. This paper addresses concerns for current BMSs. State evaluation of a battery, including state of charge, state of health, and state of life, is a critical task for a BMS. Through reviewing the latest methodologies for the state evaluation of batteries, the future challenges for BMSs are presented and possible solutions are proposed as well.",
"title": ""
},
{
"docid": "c64cfef80a4d49870894cd5f910896b6",
"text": "Digital music has become prolific in the web in recent decades. Automated recommendation systems are essential for users to discover music they love and for artists to reach appropriate audience. When manual annotations and user preference data is lacking (e.g. for new artists) these systems must rely on content based methods. Besides powerful machine learning tools for classification and retrieval, a key component for successful recommendation is the audio content representation. Good representations should capture informative musical patterns in the audio signal of songs. These representations should be concise, to enable efficient (low storage, easy indexing, fast search) management of huge music repositories, and should also be easy and fast to compute, to enable real-time interaction with a user supplying new songs to the system. Before designing new audio features, we explore the usage of traditional local features, while adding a stage of encoding with a pre-computed codebook and a stage of pooling to get compact vectorial representations. We experiment with different encoding methods, namely the LASSO, vector quantization (VQ) and cosine similarity (CS). We evaluate the representations' quality in two music information retrieval applications: query-by-tag and query-by-example. Our results show that concise representations can be used for successful performance in both applications. We recommend using top-τ VQ encoding, which consistently performs well in both applications, and requires much less computation time than the LASSO.",
"title": ""
},
{
"docid": "6b6790a92cb4dafb816648cdd5f51aa1",
"text": "An algebraic nonlinear analysis of the switched reluctance drive system is described. The analysis is intended to provide an understanding of the factors that determine the kVA requirements of the electronic power converter and to determine the fundamental nature of the torque/speed characteristics. The effect of saturation is given special attention. It is shown that saturation has the two main effects of increasing the motor size required for a given torque, and at the same time decreasing the kVA per horsepower (i.e., increasing the effective power factor by analogy with an ac machine). The kVA per horsepower is lower than predicted by simple linear analysis that neglects saturation. Necessary conditions are also developed for a flat-topped current waveform by correctly determining the motor back-EMF. The reason why it is desirable to allow the phase current to continue (though with much reduced magnitude) even after the poles have passed the aligned position is explained. The theory provides a formula for determining the required commutation angle for the phase current. The basis is provided for an estimation of the kVA requirements of the switched reluctance (SR) drive. These requirements have been measured and also calculated by a computer simulation program.",
"title": ""
},
{
"docid": "0df2ca944dcdf79369ef5a7424bf3ffe",
"text": "This article first presents two theories representing distinct approaches to the field of stress research: Selye's theory of `systemic stress' based in physiology and psychobiology, and the `psychological stress' model developed by Lazarus. In the second part, the concept of coping is described. Coping theories may be classified according to two independent parameters: traitoriented versus state-oriented, and microanalytic versus macroanalytic approaches. The multitude of theoretical conceptions is based on the macroanalytic, trait-oriented approach. Examples of this approach that are presented in this article are `repression–sensitization,' `monitoringblunting,' and the `model of coping modes.' The article closes with a brief outline of future perspectives in stress and coping research.",
"title": ""
},
{
"docid": "043b51b50f17840508b0dfb92c895fc9",
"text": "Over the years, several security measures have been employed to combat the menace of insecurity of lives and property. This is done by preventing unauthorized entrance into buildings through entrance doors using conventional and electronic locks, discrete access code, and biometric methods such as the finger prints, thumb prints, the iris and facial recognition. In this paper, a prototyped door security system is designed to allow a privileged user to access a secure keyless door where valid smart card authentication guarantees an entry. The model consists of hardware module and software which provides a functionality to allow the door to be controlled through the authentication of smart card by the microcontroller unit. (",
"title": ""
},
{
"docid": "f60426bdd66154a7d2cb6415abd8f233",
"text": "In the rapidly expanding field of parallel processing, job schedulers are the “operating systems” of modern big data architectures and supercomputing systems. Job schedulers allocate computing resources and control the execution of processes on those resources. Historically, job schedulers were the domain of supercomputers, and job schedulers were designed to run massive, long-running computations over days and weeks. More recently, big data workloads have created a need for a new class of computations consisting of many short computations taking seconds or minutes that process enormous quantities of data. For both supercomputers and big data systems, the efficiency of the job scheduler represents a fundamental limit on the efficiency of the system. Detailed measurement and modeling of the performance of schedulers are critical for maximizing the performance of a large-scale computing system. This paper presents a detailed feature analysis of 15 supercomputing and big data schedulers. For big data workloads, the scheduler latency is the most important performance characteristic of the scheduler. A theoretical model of the latency of these schedulers is developed and used to design experiments targeted at measuring scheduler latency. Detailed benchmarking of four of the most popular schedulers (Slurm, Son of Grid Engine, Mesos, and Hadoop YARN) are conducted. The theoretical model is compared with data and demonstrates that scheduler performance can be characterized by two key parameters: the marginal latency of the scheduler ts and a nonlinear exponent αs. For all four schedulers, the utilization of the computing system decreases to <10% for computations lasting only a few seconds. Multi-level schedulers (such as LLMapReduce) that transparently aggregate short computations can improve utilization for these short computations to >90% for all four of the schedulers that were tested.",
"title": ""
},
{
"docid": "75246f1ef21d4ce739e8b27753c52ee1",
"text": "The ship control system for the U.S. Navy's newest attack submarine, Seawolf; incorporates hardware modular redundancy both in its core processing and its input/output system. This paper provides a practical experience report on the redundancy management software services developed for this system. Introductory material is presented to provide contextual information regarding the overall ship control system. An overview of the system's processing platform is presented in sufficient detail to define the problems associated with redundancy management and to describe hardware functionality which supports the software services. Policies and procedures for detection and isolation of faults are discussed as are reconfiguration responses to faults.",
"title": ""
},
{
"docid": "e7d36dc01a3e20c3fb6d2b5245e46705",
"text": "A gender gap in mathematics achievement persists in some nations but not in others. In light of the underrepresentation of women in careers in science, technology, mathematics, and engineering, increasing research attention is being devoted to understanding gender differences in mathematics achievement, attitudes, and affect. The gender stratification hypothesis maintains that such gender differences are closely related to cultural variations in opportunity structures for girls and women. We meta-analyzed 2 major international data sets, the 2003 Trends in International Mathematics and Science Study and the Programme for International Student Assessment, representing 493,495 students 14-16 years of age, to estimate the magnitude of gender differences in mathematics achievement, attitudes, and affect across 69 nations throughout the world. Consistent with the gender similarities hypothesis, all of the mean effect sizes in mathematics achievement were very small (d < 0.15); however, national effect sizes showed considerable variability (ds = -0.42 to 0.40). Despite gender similarities in achievement, boys reported more positive math attitudes and affect (ds = 0.10 to 0.33); national effect sizes ranged from d = -0.61 to 0.89. In contrast to those of previous tests of the gender stratification hypothesis, our results point to specific domains of gender equity responsible for gender gaps in math. Gender equity in school enrollment, women's share of research jobs, and women's parliamentary representation were the most powerful predictors of cross-national variability in gender gaps in math. Results are situated within the context of existing research demonstrating apparently paradoxical effects of societal gender equity and highlight the significance of increasing girls' and women's agency cross-nationally.",
"title": ""
},
{
"docid": "b249fe89bcfc985fcb4f9128d12c28b3",
"text": "Prevalent matrix completion methods capture only the low-rank property which gives merely a constraint that the data points lie on some low-dimensional subspace, but generally ignore the extra structures (beyond low-rank) that specify in more detail how the data points lie on the subspace. Whenever the data points are not uniformly distributed on the low-dimensional subspace, the row-coherence of the target matrix to recover could be considerably high and, accordingly, prevalent methods might fail even if the target matrix is fairly low-rank. To relieve this challenge, we suggest to consider a model termed low-rank factor decomposition (LRFD), which imposes an additional restriction that the data points must be represented as linear, compressive combinations of the bases in a given dictionary. We show that LRFD can effectively mitigate the challenges of high row-coherence, provided that its dictionary is configured properly. Namely, it is mathematically proven that if the dictionary is well-conditioned and low-rank, then LRFD can weaken the dependence on the row-coherence. In particular, if the dictionary itself is low-rank, then the dependence on the row-coherence can be entirely removed. Subsequently, we devise two practical algorithms to obtain proper dictionaries in unsupervised environments: one uses the existing matrix completion methods to construct the dictionary in LRFD, and the other tries to learn a proper dictionary from the data given. Experiments on randomly generated matrices and motion datasets show superior performance of our proposed algorithms.",
"title": ""
},
{
"docid": "ed351364658a99d4d9c10dd2b9be3c92",
"text": "Information technology continues to provide opportunities to alter the decisionmaking behavior of individuals, groups and organizations. Two related changes that are emerging are social media and Web 2.0 technologies. These technologies can positively and negatively impact the rationality and effectiveness of decision-making. For example, changes that help marketing managers alter consumer decision behavior may result in poorer decisions by consumers. Also, managers who heavily rely on a social network rather than expert opinion and facts may make biased decisions. A number of theories can help explain how social media may impact decision-making and the consequences.",
"title": ""
},
{
"docid": "16f5686c1675d0cf2025cf812247ab45",
"text": "This paper presents the system analysis and implementation of a soft switching Sepic-Cuk converter to achieve zero voltage switching (ZVS). In the proposed converter, the Sepic and Cuk topologies are combined together in the output side. The features of the proposed converter are to reduce the circuit components (share the power components in the transformer primary side) and to share the load current. Active snubber is connected in parallel with the primary side of transformer to release the energy stored in the leakage inductor of transformer and to limit the peak voltage stress of switching devices when the main switch is turned off. The active snubber can achieve ZVS turn-on for power switches. Experimental results, taken from a laboratory prototype rated at 300W, are presented to verify the effectiveness of the proposed converter. I. Introduction Modern",
"title": ""
}
] |
scidocsrr
|
3e3245e4472042e11325e56f1119c801
|
Analyzing the Blogosphere for Predicting the Success of Music and Movie Products
|
[
{
"docid": "e033eddbc92ee813ffcc69724e55aa84",
"text": "Over the past few years, weblogs have emerged as a new communication and publication medium on the Internet. In this paper, we describe the application of data mining, information extraction and NLP algorithms for discovering trends across our subset of approximately 100,000 weblogs. We publish daily lists of key persons, key phrases, and key paragraphs to a public web site, BlogPulse.com. In addition, we maintain a searchable index of weblog entries. On top of the search index, we have implemented trend search, which graphs the normalized trend line over time for a search query and provides a way to estimate the relative buzz of word of mouth for given topics over time.",
"title": ""
}
] |
[
{
"docid": "55fcc765be689166b0a44eef1a8f26b6",
"text": "A key goal of computer vision researchers is to create automated face recognition systems that can equal, and eventually surpass, human performance. To this end, it is imperative that computational researchers know of the key findings from experimental studies of face recognition by humans. These findings provide insights into the nature of cues that the human visual system relies upon for achieving its impressive performance and serve as the building blocks for efforts to artificially emulate these abilities. In this paper, we present what we believe are 19 basic results, with implications for the design of computational systems. Each result is described briefly and appropriate pointers are provided to permit an in-depth study of any particular result",
"title": ""
},
{
"docid": "2c92d42311f9708b7cb40f34551315e0",
"text": "This work characterizes electromagnetic excitation forces in interior permanent-magnet (IPM) brushless direct current (BLDC) motors and investigates their effects on noise and vibration. First, the electromagnetic excitations are classified into three sources: 1) so-called cogging torque, for which we propose an efficient technique of computation that takes into account saturation effects as a function of rotor position; 2) ripples of mutual and reluctance torque, for which we develop an equation to characterize the combination of space harmonics of inductances and flux linkages related to permanent magnets and time harmonics of current; and 3) fluctuation of attractive forces in the radial direction between the stator and rotor, for which we analyze contributions of electric currents as well as permanent magnets by the finite-element method. Then, the paper reports on an experimental investigation of influences of structural dynamic characteristics such as natural frequencies and mode shapes, as well as electromagnetic excitation forces, on noise and vibration in an IPM motor used in washing machines.",
"title": ""
},
{
"docid": "cefabe1b4193483d258739674b53f773",
"text": "This paper describes design and development of omnidirectional magnetic climbing robots with high maneuverability for inspection of ferromagnetic 3D human made structures. The main focus of this article is design, analysis and implementation of magnetic omnidirectional wheels for climbing robots. We discuss the effect of the associated problems of such wheels, e.g. vibration, on climbing robots. This paper also describes the evolution of magnetic omnidirectional wheels throughout the design and development of several solutions, resulting in lighter and smaller wheels which have less vibration and adapt better to smaller radius structures. These wheels are installed on a chassis which adapts passively to flat and curved structures, enabling the robot to climb and navigate on such structures.",
"title": ""
},
{
"docid": "b3d915b4ff4d86b8c987b760fcf7d525",
"text": "We examine how exercising control over a technology platform can increase profits and innovation. Benefits depend on using a platform as a governance mechanism to influence ecosystem parters. Results can inform innovation strategy, antitrust and intellectual property law, and management of competition.",
"title": ""
},
{
"docid": "26a599c22c173f061b5d9579f90fd888",
"text": "markov logic an interface layer for artificial markov logic an interface layer for artificial shinichi tsukada in size 22 syyjdjbook.buncivy yumina ooba in size 24 ajfy7sbook.ztoroy okimi in size 15 edemembookkey.16mb markov logic an interface layer for artificial intelligent systems (ai-2) ubc computer science interface layer for artificial intelligence daniel lowd essential principles for autonomous robotics markovlogic: aninterfacelayerfor arti?cialintelligence official encyclopaedia of sheffield united football club hot car hot car firext answers || 2007 acura tsx hitch manual course syllabus university of texas at dallas jump frog jump cafebr 1994 chevy silverado 1500 engine ekpbs readings in earth science alongs johnson owners manual pdf firext thomas rescues the diesels cafebr dead sea scrolls and the jewish origins of christianity install gimp help manual by iitsuka asao vox diccionario abreviado english spanis mdmtv nobutaka in size 26 bc13xqbookog.xxuz mechanisms in b cell neoplasia 1992 workshop at the spocks world diane duane nabbit treasury of saints fiores reasoning with probabilistic university of texas at austin gp1300r yamaha waverunner service manua by takisawa tomohide repair manual haier hpr10xc6 air conditioner birdz mexico icons mexico icons oobags asus z53 manual by hatsutori yoshino industrial level measurement by haruyuki morimoto",
"title": ""
},
{
"docid": "1de568efbb57cc4e5d5ffbbfaf8d39ae",
"text": "The Insider Threat Study, conducted by the U.S. Secret Service and Carnegie Mellon University’s Software Engineering Institute CERT Program, analyzed insider cyber crimes across U.S. critical infrastructure sectors. The study indicates that management decisions related to organizational and employee performance sometimes yield unintended consequences magnifying risk of insider attack. Lack of tools for understanding insider threat, analyzing risk mitigation alternatives, and communicating results exacerbates the problem. The goal of Carnegie Mellon University’s MERIT (Management and Education of the Risk of Insider Threat) project is to develop such tools. MERIT uses system dynamics to model and analyze insider threats and produce interactive learning environments. These tools can be used by policy makers, security officers, information technology, human resources, and management to understand the problem and assess risk from insiders based on simulations of policies, cultural, technical, and procedural factors. This paper describes the MERIT insider threat model and simulation results.",
"title": ""
},
{
"docid": "013e96c212f7f58698acdae0adfcf374",
"text": "Since our ability to engineer biological systems is directly related to our ability to control gene expression, a central focus of synthetic biology has been to develop programmable genetic regulatory systems. Researchers are increasingly turning to RNA regulators for this task because of their versatility, and the emergence of new powerful RNA design principles. Here we review advances that are transforming the way we use RNAs to engineer biological systems. First, we examine new designable RNA mechanisms that are enabling large libraries of regulators with protein-like dynamic ranges. Next, we review emerging applications, from RNA genetic circuits to molecular diagnostics. Finally, we describe new experimental and computational tools that promise to accelerate our understanding of RNA folding, function and design.",
"title": ""
},
{
"docid": "a41bb1fe5670cc865bf540b34848f45f",
"text": "The general idea of discovering knowledge in large amounts of data is both appealing and intuitive. Typically we focus our attention on learning algorithms, which provide the core capability of generalizing from large numbers of small, very specific facts to useful high-level rules; these learning techniques seem to hold the most excitement and perhaps the most substantive scientific content in the knowledge discovery in databases (KDD) enterprise. However, when we engage in real-world discovery tasks, we find that they can be extremely complex, and that induction of rules is only one small part of the overall process. While others have written overviews of \"the concept of KDD, and even provided block diagrams for \"knowledge discovery systems,\" no one has begun to identify all of the building blocks in a realistic KDD process. This is what we attempt to do here. Besides bringing into the discussion several parts of the process that have received inadequate attention in the KDD community, a careful elucidation of the steps in a realistic knowledge discovery process can provide a framework for comparison of different technologies and tools that are almost impossible to compare without a clean model.",
"title": ""
},
{
"docid": "906ef2b4130ff5c264835ff3c15918e5",
"text": "Exploratory big data applications often run on raw unstructured or semi-structured data formats, such as JSON files or text logs. These applications can spend 80–90% of their execution time parsing the data. In this paper, we propose a new approach for reducing this overhead: apply filters on the data’s raw bytestream before parsing. This technique, which we call raw filtering, leverages the features of modern hardware and the high selectivity of queries found in many exploratory applications. With raw filtering, a user-specified query predicate is compiled into a set of filtering primitives called raw filters (RFs). RFs are fast, SIMD-based operators that occasionally yield false positives, but never false negatives. We combine multiple RFs into an RF cascade to decrease the false positive rate and maximize parsing throughput. Because the best RF cascade is datadependent, we propose an optimizer that dynamically selects the combination of RFs with the best expected throughput, achieving within 10% of the global optimum cascade while adding less than 1.2% overhead. We implement these techniques in a system called Sparser, which automatically manages a parsing cascade given a data stream in a supported format (e.g., JSON, Avro, Parquet) and a user query. We show that many real-world applications are highly selective and benefit from Sparser. Across diverse workloads, Sparser accelerates state-of-the-art parsers such as Mison by up to 22× and improves end-to-end application performance by up to 9×. PVLDB Reference Format: S. Palkar, F. Abuzaid, P. Bailis, M. Zaharia. Filter Before You Parse: Faster Analytics on Raw Data with Sparser. PVLDB, 11(11): xxxx-yyyy, 2018. DOI: https://doi.org/10.14778/3236187.3236207",
"title": ""
},
{
"docid": "6cf4994b5ed0e17885f229856b7cd58d",
"text": "Recently Neural Architecture Search (NAS) has aroused great interest in both academia and industry, however it remains challenging because of its huge and non-continuous search space. Instead of applying evolutionary algorithm or reinforcement learning as previous works, this paper proposes a Direct Sparse Optimization NAS (DSO-NAS) method. In DSO-NAS, we provide a novel model pruning view to NAS problem. In specific, we start from a completely connected block, and then introduce scaling factors to scale the information flow between operations. Next, we impose sparse regularizations to prune useless connections in the architecture. Lastly, we derive an efficient and theoretically sound optimization method to solve it. Our method enjoys both advantages of differentiability and efficiency, therefore can be directly applied to large datasets like ImageNet. Particularly, On CIFAR-10 dataset, DSO-NAS achieves an average test error 2.84%, while on the ImageNet dataset DSO-NAS achieves 25.4% test error under 600M FLOPs with 8 GPUs in 18 hours.",
"title": ""
},
{
"docid": "a74081f7108e62fadb48446255dd246b",
"text": "Existing fuzzy neural networks (FNNs) are mostly developed under a shallow network configuration having lower generalization power than those of deep structures. This paper proposes a novel self-organizing deep fuzzy neural network, namely deep evolving fuzzy neural networks (DEVFNN). Fuzzy rules can be automatically extracted from data streams or removed if they play little role during their lifespan. The structure of the network can be deepened on demand by stacking additional layers using a drift detection method which not only detects the covariate drift, variations of input space, but also accurately identifies the real drift, dynamic changes of both feature space and target space. DEVFNN is developed under the stacked generalization principle via the feature augmentation concept where a recently developed algorithm, namely Generic Classifier (gClass), drives the hidden layer. It is equipped by an automatic feature selection method which controls activation and deactivation of input attributes to induce varying subsets of input features. A deep network simplification procedure is put forward using the concept of hidden layer merging to prevent uncontrollable growth of input space dimension due to the nature of feature augmentation approach in building a deep network structure. DEVFNN works in the sample-wise fashion and is compatible for data stream applications. The efficacy of DEVFNN has been thoroughly evaluated using six datasets with non-stationary properties under the prequential test-then-train protocol. It has been compared with four state-ofthe art data stream methods and its shallow counterpart where DEVFNN demonstrates improvement of classification accuracy. Moreover, it is also shown that the concept drift detection method is an effective tool to control the depth of network structure while the hidden layer merging scenario is capable of simplifying the network complexity of a deep network with negligible compromise of generalization performance.",
"title": ""
},
{
"docid": "dfa62c69b1ab26e7e160100b69794674",
"text": "Canonical correlation analysis (CCA) is a well established technique for identifying linear relationships among two variable sets. Kernel CCA (KCCA) is the most notable nonlinear extension but it lacks interpretability and robustness against irrelevant features. The aim of this article is to introduce two nonlinear CCA extensions that rely on the recently proposed Hilbert-Schmidt independence criterion and the centered kernel target alignment. These extensions determine linear projections that provide maximally dependent projected data pairs. The paper demonstrates that the use of linear projections allows removing irrelevant features, whilst extracting combinations of strongly associated features. This is exemplified through a simulation and the analysis of recorded data that are available in the literature.",
"title": ""
},
{
"docid": "498a4b526633c06d6eac9aa52ff5e1d2",
"text": "This talk surveys three challenge areas for mechanism design and describes the role approximation plays in resolving them. Challenge 1: optimal mechanisms are parameterized by knowledge of the distribution of agent's private types. Challenge 2: optimal mechanisms require precise distributional information. Challenge 3: in multi-dimensional settings economic analysis has failed to characterize optimal mechanisms. The theory of approximation is well suited to address these challenges. While the optimal mechanism may be parameterized by the distribution of agent's private types, there may be a single mechanism that approximates the optimal mechanism for any distribution. While the optimal mechanism may require precise distributional assumptions, there may be approximately optimal mechanism that depends only on natural characteristics of the distribution. While the multi-dimensional optimal mechanism may resist precise economic characterization, there may be simple description of approximately optimal mechanisms. Finally, these approximately optimal mechanisms, because of their simplicity and tractability, may be much more likely to arise in practice, thus making the theory of approximately optimal mechanism more descriptive than that of (precisely) optimal mechanisms. The talk will cover positive resolutions to these challenges with emphasis on basic techniques, relevance to practice, and future research directions.",
"title": ""
},
{
"docid": "f175e9c17aa38a17253de2663c4999f1",
"text": "As we increasingly rely on computers to process and manage our personal data, safeguarding sensitive information from malicious hackers is a fast growing concern. Among many forms of information leakage, covert timing channels operate by establishing an illegitimate communication channel between two processes and through transmitting information via timing modulation, thereby violating the underlying system's security policy. Recent studies have shown the vulnerability of popular computing environments, such as cloud computing, to these covert timing channels. In this work, we propose a new micro architecture-level framework, CC-Hunter, that detects the possible presence of covert timing channels on shared hardware. Our experiments demonstrate that Chanter is able to successfully detect different types of covert timing channels at varying bandwidths and message patterns.",
"title": ""
},
{
"docid": "1f4ff9d732b3512ee9b105f084edd3d2",
"text": "Today, as Network environments become more complex and cyber and Network threats increase, Organizations use wide variety of security solutions against today's threats. For proper and centralized control and management, range of security features need to be integrated into unified security package. Unified threat management (UTM) as a comprehensive network security solution, integrates all of security services such as firewall, URL filtering, virtual private networking, etc. in a single appliance. PfSense is a variant of UTM, and a customized FreeBSD (Unix-like operating system). Specially is used as a router and statefull firewall. It has many packages extend it's capabilities such as Squid3 package as a as a proxy server that cache data and SquidGuard, redirector and access controller plugin for squid3 proxy server. In this paper, with implementing UTM based on PfSense platform we use Squid3 proxy server and SquidGuard proxy filter to avoid extreme amount of unwanted uploading/ downloading over the internet by users in order to optimize our organization's bandwidth consumption. We begin by defining UTM and types of it, PfSense platform with it's key services and introduce a simple and operational solution for security stability and reducing the cost. Finally, results and statistics derived from this approach compared with the prior condition without PfSense platform.",
"title": ""
},
{
"docid": "074d4a552c82511d942a58b93d51c38a",
"text": "This is a survey of neural network applications in the real-world scenario. It provides a taxonomy of artificial neural networks (ANNs) and furnish the reader with knowledge of current and emerging trends in ANN applications research and area of focus for researchers. Additionally, the study presents ANN application challenges, contributions, compare performances and critiques methods. The study covers many applications of ANN techniques in various disciplines which include computing, science, engineering, medicine, environmental, agriculture, mining, technology, climate, business, arts, and nanotechnology, etc. The study assesses ANN contributions, compare performances and critiques methods. The study found that neural-network models such as feedforward and feedback propagation artificial neural networks are performing better in its application to human problems. Therefore, we proposed feedforward and feedback propagation ANN models for research focus based on data analysis factors like accuracy, processing speed, latency, fault tolerance, volume, scalability, convergence, and performance. Moreover, we recommend that instead of applying a single method, future research can focus on combining ANN models into one network-wide application.",
"title": ""
},
{
"docid": "ec5d4c571f8cd85bf94784199ab10884",
"text": "Researchers have shown that a wordnet for a new language, possibly resource-poor, can be constructed automatically by translating wordnets of resource-rich languages. The quality of these constructed wordnets is affected by the quality of the resources used such as dictionaries and translation methods in the construction process. Recent work shows that vector representation of words (word embeddings) can be used to discover related words in text. In this paper, we propose a method that performs such similarity computation using word embeddings to improve the quality of automatically constructed wordnets.",
"title": ""
},
{
"docid": "6773b060fd16b6630f581eb65c5c6488",
"text": "Proximity detection is one of the most common location-based applications in daily life when users intent to find their friends who get into their proximity. Studies on protecting user privacy information during the detection process have been widely concerned. In this paper, we first analyze a theoretical and experimental analysis of existing solutions for proximity detection, and then demonstrate that these solutions either provide a weak privacy preserving or result in a high communication and computational complexity. Accordingly, a location difference-based proximity detection protocol is proposed based on the Paillier cryptosystem for the purpose of dealing with the above shortcomings. The analysis results through an extensive simulation illustrate that our protocol outperforms traditional protocols in terms of communication and computation cost.",
"title": ""
},
{
"docid": "3e28cbfc53f6c42bb0de2baf5c1544aa",
"text": "Cloud computing is an emerging paradigm which allows the on-demand delivering of software, hardware, and data as services. As cloud-based services are more numerous and dynamic, the development of efficient service provisioning policies become increasingly challenging. Game theoretic approaches have shown to gain a thorough analytical understanding of the service provisioning problem.\n In this paper we take the perspective of Software as a Service (SaaS) providers which host their applications at an Infrastructure as a Service (IaaS) provider. Each SaaS needs to comply with quality of service requirements, specified in Service Level Agreement (SLA) contracts with the end-users, which determine the revenues and penalties on the basis of the achieved performance level. SaaS providers want to maximize their revenues from SLAs, while minimizing the cost of use of resources supplied by the IaaS provider. Moreover, SaaS providers compete and bid for the use of infrastructural resources. On the other hand, the IaaS wants to maximize the revenues obtained providing virtualized resources. In this paper we model the service provisioning problem as a Generalized Nash game, and we propose an efficient algorithm for the run time management and allocation of IaaS resources to competing SaaSs.",
"title": ""
},
{
"docid": "d67e0fa20185e248a18277e381c9d42d",
"text": "Smartphone security research has produced many useful tools to analyze the privacy-related behaviors of mobile apps. However, these automated tools cannot assess people's perceptions of whether a given action is legitimate, or how that action makes them feel with respect to privacy. For example, automated tools might detect that a blackjack game and a map app both use one's location information, but people would likely view the map's use of that data as more legitimate than the game. Our work introduces a new model for privacy, namely privacy as expectations. We report on the results of using crowdsourcing to capture users' expectations of what sensitive resources mobile apps use. We also report on a new privacy summary interface that prioritizes and highlights places where mobile apps break people's expectations. We conclude with a discussion of implications for employing crowdsourcing as a privacy evaluation technique.",
"title": ""
}
] |
scidocsrr
|
fcefc579d2dc466c358a72842a49889a
|
Three-Dimensional Face Recognition in the Presence of Facial Expressions: An Annotated Deformable Model Approach
|
[
{
"docid": "ee97c467539a3e08cd3cfe7a8f7ee3e2",
"text": "The problem of geometric alignment of two roughly pre-registered, partially overlapping, rigid, noisy 3D point sets is considered. A new natural and simple, robustified extension of the popular Iterative Closest Point (ICP) algorithm [1] is presented, called Trimmed ICP. The new algorithm is based on the consistent use of the Least Trimmed Squares approach in all phases of the operation. Convergence is proved and an efficient implementation is discussed. TrICP is fast, applicable to overlaps under 50%, robust to erroneous and incomplete measurements, and has easy-to-set parameters. ICP is a special case of TrICP when the overlap parameter is 100%. Results of a performance evaluation study on the SQUID database of 1100 shapes are presented. The tests compare TrICP and the Iterative Closest Reciprocal Point algorithm [2].",
"title": ""
}
] |
[
{
"docid": "97444c5b944beb30697dfad626a5b5a4",
"text": "While eye tracking is becoming more and more relevant as a promising input channel, diverse applications using gaze control in a more natural way are still rather limited. Though several researchers have indicated the particular high potential of gaze-based interaction for pointing tasks, often gaze-only approaches are investigated. However, time-consuming dwell-time activations limit this potential. To overcome this, we present a gaze-supported fisheye lens in combination with (1) a keyboard and (2) and a tilt-sensitive mobile multi-touch device. In a user-centered design approach, we elicited how users would use the aforementioned input combinations. Based on the received feedback we designed a prototype system for the interaction with a remote display using gaze and a touch-and-tilt device. This eliminates gaze dwell-time activations and the well-known Midas Touch problem (unintentionally issuing an action via gaze). A formative user study testing our prototype provided further insights into how well the elaborated gaze-supported interaction techniques were experienced by users.",
"title": ""
},
{
"docid": "4e37f91af78d1c275bcf69685ebde914",
"text": "OBJECTIVES\nThis narrative literature review aims to consider the impact of removable partial dentures (RPDs) on oral and systemic health.\n\n\nDATA AND SOURCES\nA review of the literature was performed using Medline/PubMed database resources up to July 2011 to identify appropriate articles that addressed the objectives of this review. This was followed by extensive hand searching using reference lists from relevant articles.\n\n\nCONCLUSIONS\nThe proportion of partially dentate adults who wear RPDs is increasing in many populations. A major public health challenge is to plan oral healthcare for this group of patients in whom avoidance of further tooth loss is of particular importance. RPDs have the potential to negatively impact on different aspects of oral health. There is clear evidence that RPDs increase plaque and gingivitis. However, RPDs have not clearly been shown to increase the risk for periodontitis. The risk for caries, particularly root caries, appears to be higher in wearers of RPDs. Regular recall is therefore essential to minimise the risk for dental caries, as well as periodontitis. There is no evidence to support a negative impact on nutritional status, though research in this area is particularly deficient. Furthermore, there are very few studies that have investigated whether RPDs have any impact on general health. From the limited literature available, it appears that RPDs can possibly improve quality of life, and this is relevant in the era of patient-centred care. Overall, further research is required to investigate the impact of RPDs on all aspects of oral and general health, nutritional status and quality of life.",
"title": ""
},
{
"docid": "65d938eee5da61f27510b334312afe41",
"text": "This paper reviews the actual and potential use of social media in emergency, disaster and crisis situations. This is a field that has generated intense interest. It is characterised by a burgeoning but small and very recent literature. In the emergencies field, social media (blogs, messaging, sites such as Facebook, wikis and so on) are used in seven different ways: listening to public debate, monitoring situations, extending emergency response and management, crowd-sourcing and collaborative development, creating social cohesion, furthering causes (including charitable donation) and enhancing research. Appreciation of the positive side of social media is balanced by their potential for negative developments, such as disseminating rumours, undermining authority and promoting terrorist acts. This leads to an examination of the ethics of social media usage in crisis situations. Despite some clearly identifiable risks, for example regarding the violation of privacy, it appears that public consensus on ethics will tend to override unscrupulous attempts to subvert the media. Moreover, social media are a robust means of exposing corruption and malpractice. In synthesis, the widespread adoption and use of social media by members of the public throughout the world heralds a new age in which it is imperative that emergency managers adapt their working practices to the challenge and potential of this development. At the same time, they must heed the ethical warnings and ensure that social media are not abused or misused when crises and emergencies occur.",
"title": ""
},
{
"docid": "0867eb365ca19f664bd265a9adaa44e5",
"text": "We present VI-DSO, a novel approach for visual-inertial odometry, which jointly estimates camera poses and sparse scene geometry by minimizing photometric and IMU measurement errors in a combined energy functional. The visual part of the system performs a bundle-adjustment like optimization on a sparse set of points, but unlike key-point based systems it directly minimizes a photometric error. This makes it possible for the system to track not only corners, but any pixels with large enough intensity gradients. IMU information is accumulated between several frames using measurement preintegration, and is inserted into the optimization as an additional constraint between keyframes. We explicitly include scale and gravity direction into our model and jointly optimize them together with other variables such as poses. As the scale is often not immediately observable using IMU data this allows us to initialize our visual-inertial system with an arbitrary scale instead of having to delay the initialization until everything is observable. We perform partial marginalization of old variables so that updates can be computed in a reasonable time. In order to keep the system consistent we propose a novel strategy which we call “dynamic marginalization”. This technique allows us to use partial marginalization even in cases where the initial scale estimate is far from the optimum. We evaluate our method on the challenging EuRoC dataset, showing that VI-DSO outperforms the state of the art.",
"title": ""
},
{
"docid": "2db8aee20badadc39f0fa089e8deb2d0",
"text": "Detecting people remains a popular and challenging problem in computer vision. In this paper, we analyze parts-based models for person detection to determine which components of their pipeline could benefit the most if improved. We accomplish this task by studying numerous detectors formed from combinations of components performed by human subjects and machines. The parts-based model we study can be roughly broken into four components: feature detection, part detection, spatial part scoring and contextual reasoning including non-maximal suppression. Our experiments conclude that part detection is the weakest link for challenging person detection datasets. Non-maximal suppression and context can also significantly boost performance. However, the use of human or machine spatial models does not significantly or consistently affect detection accuracy.",
"title": ""
},
{
"docid": "768240033185f6464d2274181370843a",
"text": "Most of today's commercial companies heavily rely on social media and community management tools to interact with their clients and analyze their online behaviour. Nonetheless, these tools still lack evolved data mining and visualization features to tailor the analysis in order to support useful marketing decisions. We present an original methodology that aims at formalizing the marketing need of the company and develop a tool that can support it. The methodology is derived from the Cross-Industry Standard Process for Data Mining (CRISP-DM) and includes additional steps dedicated to the design and development of visualizations of mined data. We followed the methodology in two use cases with Swiss companies. First, we developed a prototype that aims at understanding the needs of tourists based on Flickr and Instagram data. In that use case, we extend the existing literature by enriching hashtags analysis methods with a semantic network based on Linked Data. Second, we analyzed internal customer data of an online discount retailer to help them define guerilla marketing measures. We report on the challenges of integrating Facebook data in the process. Informal feedback from domain experts confirms the strong potential of such advanced analytic features based on social data to inform marketing decisions.",
"title": ""
},
{
"docid": "3e1b4fb4ac5222c70b871ebb7ea43408",
"text": "Modern graph embedding procedures can efficiently extract features of nodes from graphs with millions of nodes. The features are later used as inputs for downstream predictive tasks. In this paper we propose GEMSEC a graph embedding algorithm which learns a clustering of the nodes simultaneously with computing their features. The procedure places nodes in an abstract feature space where the vertex features minimize the negative log likelihood of preserving sampled vertex neighborhoods, while the nodes are clustered into a fixed number of groups in this space. GEMSEC is a general extension of earlier work in the domain as it is an augmentation of the core optimization problem of sequence based graph embedding procedures and is agnostic of the neighborhood sampling strategy. We show that GEMSEC extracts high quality clusters on real world social networks and is competitive with other community detection algorithms. We demonstrate that the clustering constraint has a positive effect on representation quality and also that our procedure learns to embed and cluster graphs jointly in a robust and scalable manner.",
"title": ""
},
{
"docid": "d32887dfac583ed851f607807c2f624e",
"text": "For a through-wall ultrawideband (UWB) random noise radar using array antennas, subtraction of successive frames of the cross-correlation signals between each received element signal and the transmitted signal is able to isolate moving targets in heavy clutter. Images of moving targets are subsequently obtained using the back projection (BP) algorithm. This technique is not constrained to noise radar, but can also be applied to other kinds of radar systems. Different models based on the finite-difference time-domain (FDTD) algorithm are set up to simulate different through-wall scenarios of moving targets. Simulation results show that the heavy clutter is suppressed, and the signal-to-clutter ratio (SCR) is greatly enhanced using this approach. Multiple moving targets can be detected, localized, and tracked for any random movement.",
"title": ""
},
{
"docid": "419116a3660f1c1f7127de31f311bd1e",
"text": "Unlike dimensionality reduction (DR) tools for single-view data, e.g., principal component analysis (PCA), canonical correlation analysis (CCA) and generalized CCA (GCCA) are able to integrate information from multiple feature spaces of data. This is critical in multi-modal data fusion and analytics, where samples from a single view may not be enough for meaningful DR. In this work, we focus on a popular formulation of GCCA, namely, MAX-VAR GCCA. The classic MAX-VAR problem is optimally solvable via eigen-decomposition, but this solution has serious scalability issues. In addition, how to impose regularizers on the sought canonical components was unclear - while structure-promoting regularizers are often desired in practice. We propose an algorithm that can easily handle datasets whose sample and feature dimensions are both large by exploiting data sparsity. The algorithm is also flexible in incorporating regularizers on the canonical components. Convergence properties of the proposed algorithm are carefully analyzed. Numerical experiments are presented to showcase its effectiveness.",
"title": ""
},
{
"docid": "5174b54a546002863a50362c70921176",
"text": "The intrinsic complexity of the brain can lead one to set aside issues related to its relationships with the body, but the field of embodied cognition emphasizes that understanding brain function at the system level requires one to address the role of the brain-body interface. It has only recently been appreciated that this interface performs huge amounts of computation that does not have to be repeated by the brain, and thus affords the brain great simplifications in its representations. In effect the brain's abstract states can refer to coded representations of the world created by the body. But even if the brain can communicate with the world through abstractions, the severe speed limitations in its neural circuitry mean that vast amounts of indexing must be performed during development so that appropriate behavioral responses can be rapidly accessed. One way this could happen would be if the brain used a decomposition whereby behavioral primitives could be quickly accessed and combined. This realization motivates our study of independent sensorimotor task solvers, which we call modules, in directing behavior. The issue we focus on herein is how an embodied agent can learn to calibrate such individual visuomotor modules while pursuing multiple goals. The biologically plausible standard for module programming is that of reinforcement given during exploration of the environment. However this formulation contains a substantial issue when sensorimotor modules are used in combination: The credit for their overall performance must be divided amongst them. We show that this problem can be solved and that diverse task combinations are beneficial in learning and not a complication, as usually assumed. Our simulations show that fast algorithms are available that allot credit correctly and are insensitive to measurement noise.",
"title": ""
},
{
"docid": "96f42b3a653964cffa15d9b3bebf0086",
"text": "The brain processes information through many layers of neurons. This deep architecture is representationally powerful1,2,3,4, but it complicates learning by making it hard to identify the responsible neurons when a mistake is made1,5. In machine learning, the backpropagation algorithm1 assigns blame to a neuron by computing exactly how it contributed to an error. To do this, it multiplies error signals by matrices consisting of all the synaptic weights on the neuron’s axon and farther downstream. This operation requires a precisely choreographed transport of synaptic weight information, which is thought to be impossible in the brain1,6,7,8,9,10,11,12,13,14. Here we present a surprisingly simple algorithm for deep learning, which assigns blame by multiplying error signals by random synaptic weights. We show that a network can learn to extract useful information from signals sent through these random feedback connections. In essence, the network learns to learn. We demonstrate that this new mechanism performs as quickly and accurately as backpropagation on a variety of problems and describe the principles which underlie its function. Our demonstration provides a plausible basis for how a neuron can be adapted using error signals generated at distal locations in the brain, and thus dispels long-held assumptions about the algorithmic constraints on learning in neural circuits. 1 ar X iv :1 41 1. 02 47 v1 [ qbi o. N C ] 2 N ov 2 01 4 Networks in the brain compute via many layers of interconnected neurons15,16. To work properly neurons must adjust their synapses so that the network’s outputs are appropriate for its tasks. A longstanding mystery is how upstream synapses (e.g. the synapse between α and β in Fig. 1a) are adjusted on the basis of downstream errors (e.g. e in Fig. 1a). In artificial intelligence this problem is solved by an algorithm called backpropagation of error1. Backprop works well in real-world applications17,18,19, and networks trained with it can account for cell response properties in some areas of cortex20,21. But it is biologically implausible because it requires that neurons send each other precise information about large numbers of synaptic weights — i.e. it needs weight transport1,6,7,8,12,14,22 (Fig. 1a, b). Specifically, backprop multiplies error signals e by the matrix W T , the transpose of the forward synaptic connections, W (Fig. 1b). This implies that feedback is computed using knowledge of all the synaptic weights W in the forward path. For this reason, current theories of biological learning have turned to simpler schemes such as reinforcement learning23, and “shallow” mechanisms which use errors to adjust only the final layer of a network4,11. But reinforcement learning, which delivers the same reward signal to each neuron, is slow and scales poorly with network size5,13,24. And shallow mechanisms waste the representational power of deep networks3,4,25. Here we describe a new deep-learning algorithm that is as fast and accurate as backprop, but much simpler, avoiding all transport of synaptic weight information. This makes it a mechanism the brain could easily exploit. It is based on three insights: (i) The feedback weights need not be exactly W T . In fact, any matrix B will suffice, so long as on average,",
"title": ""
},
{
"docid": "25ccaa5a71d0a3f46296c59328e0b9b5",
"text": "Real-world social networks from a variety of domains can naturally be modelled as dynamic graphs. However, approaches to detecting communities have largely focused on identifying communities in static graphs. Recently, researchers have begun to consider the problem of tracking the evolution of groups of users in dynamic scenarios. Here we describe a model for tracking the progress of communities over time in a dynamic network, where each community is characterised by a series of significant evolutionary events. This model is used to motivate a community-matching strategy for efficiently identifying and tracking dynamic communities. Evaluations on synthetic graphs containing embedded events demonstrate that this strategy can successfully track communities over time in volatile networks. In addition, we describe experiments exploring the dynamic communities detected in a real mobile operator network containing millions of users.",
"title": ""
},
{
"docid": "4edb705f4e60421327a77e9d7624f708",
"text": "We introduce a new neural architecture and an unsupervised a lgorithm for learning invariant representations from temporal sequence of images. The system uses two groups of complex cells whose outputs are combined multiplicative ly: one that represents the content of the image, constrained to be constant over severa l consecutive frames, and one that represents the precise location of features, which is allowed to vary over time but constrained to be sparse. The architecture uses an encod er to extract features, and a decoder to reconstruct the input from the features. The meth od was applied to patches extracted from consecutive movie frames and produces orien tat o and frequency selective units analogous to the complex cells in V1. An extension of the method is proposed to train a network composed of units with local receptive fiel d spread over a large image of arbitrary size. A layer of complex cells, subject to spars ity constraints, pool feature units over overlapping local neighborhoods, which causes t h feature units to organize themselves into pinwheel patterns of orientation-selecti v receptive fields, similar to those observed in the mammalian visual cortex. A feed-forwa rd encoder efficiently computes the feature representation of full images.",
"title": ""
},
{
"docid": "a21f04b6c8af0b38b3b41f79f2661fa6",
"text": "While Enterprise Architecture Management is an established and widely discussed field of interest in the context of information systems research, we identify a lack of work regarding quality assessment of enterprise architecture models in general and frameworks or methods on that account in particular. By analyzing related work by dint of a literature review in a design science research setting, we provide twofold contributions. We (i) suggest an Enterprise Architecture Model Quality Framework (EAQF) and (ii) apply it to a real world scenario. Keywords—Enterprise Architecture, model quality, quality framework, EA modeling.",
"title": ""
},
{
"docid": "0e866292de7de9b478e1facc2b042eda",
"text": "The fitness landscape of the graph bipartitioning problem is investigated by performing a search space analysis for several types of graphs. The analysis shows that the structure of the search space is significantly different for the types of instances studied. Moreover, with increasing epistasis, the amount of gene interactions in the representation of a solution in an evolutionary algorithm, the number of local minima for one type of instance decreases and, thus, the search becomes easier. We suggest that other characteristics besides high epistasis might have greater influence on the hardness of a problem. To understand these characteristics, the notion of a dependency graph describing gene interactions is introduced. In particular, the local structure and the regularity of the dependency graph seems to be important for the performance of an algorithm, and in fact, algorithms that exploit these properties perform significantly better than others which do not. It will be shown that a simple hybrid multi-start local search exploiting locality in the structure of the graphs is able to find optimum or near optimum solutions very quickly. However, if the problem size increases or the graphs become unstructured, a memetic algorithm (a genetic algorithm incorporating local search) is shown to be much more effective.",
"title": ""
},
{
"docid": "5048a090adfdd3ebe9d9253ca4f72644",
"text": "Movement disorders or extrapyramidal symptoms (EPS) associated with selective serotonin reuptake inhibitors (SSRIs) have been reported. Although akathisia was found to be the most common EPS, and fluoxetine was implicated in the majority of the adverse reactions, there were also cases with EPS due to sertraline treatment. We present a child and an adolescent who developed torticollis (cervical dystonia) after using sertraline. To our knowledge, the child case is the first such report of sertraline-induced torticollis, and the adolescent case is the third in the literature.",
"title": ""
},
{
"docid": "b112b59ff092255faf98314562eff7b0",
"text": "The state of the art in computer vision has rapidly advanced over the past decade largely aided by shared image datasets. However, most of these datasets tend to consist of assorted collections of images from the web that do not include 3D information or pose information. Furthermore, they target the problem of object category recognition - whereas solving the problem of object instance recognition might be sufficient for many robotic tasks. To address these issues, we present a high-quality, large-scale dataset of 3D object instances, with accurate calibration information for every image. We anticipate that “solving” this dataset will effectively remove many perception-related problems for mobile, sensing-based robots. The contributions of this work consist of: (1) BigBIRD, a dataset of 100 objects (and growing), composed of, for each object, 600 3D point clouds and 600 high-resolution (12 MP) images spanning all views, (2) a method for jointly calibrating a multi-camera system, (3) details of our data collection system, which collects all required data for a single object in under 6 minutes with minimal human effort, and (4) multiple software components (made available in open source), used to automate multi-sensor calibration and the data collection process. All code and data are available at http://rll.eecs.berkeley.edu/bigbird.",
"title": ""
},
{
"docid": "bda980d41e0b64ec7ec41502cada6e7f",
"text": "In this paper, we address semantic parsing in a multilingual context. We train one multilingual model that is capable of parsing natural language sentences from multiple different languages into their corresponding formal semantic representations. We extend an existing sequence-to-tree model to a multi-task learning framework which shares the decoder for generating semantic representations. We report evaluation results on the multilingual GeoQuery corpus and introduce a new multilingual version of the ATIS corpus.",
"title": ""
},
{
"docid": "5daeccb1a01df4f68f23c775828be41d",
"text": "This article surveys the research and development of Engineered Cementitious Composites (ECC) over the last decade since its invention in the early 1990’s. The importance of micromechanics in the materials design strategy is emphasized. Observations of unique characteristics of ECC based on a broad range of theoretical and experimental research are examined. The advantageous use of ECC in certain categories of structural, and repair and retrofit applications is reviewed. While reflecting on past advances, future challenges for continued development and deployment of ECC are noted. This article is based on a keynote address given at the International Workshop on Ductile Fiber Reinforced Cementitious Composites (DFRCC) – Applications and Evaluations, sponsored by the Japan Concrete Institute, and held in October 2002 at Takayama, Japan.",
"title": ""
},
{
"docid": "cf4070e227334632eb4386e6f48a9adb",
"text": "Increased usage of mobile devices, such as smartphones and tablets, has led to widespread popularity and usage of mobile apps. If not carefully developed, such apps may demonstrate energy-inefficient behaviour, where one or more energy-intensive hardware components (such as Wifi, GPS, etc) are left in a high-power state, even when no apps are using these components. We refer to such kind of energy-inefficiencies as energy bugs. Executing an app with an energy bug causes the mobile device to exhibit poor energy consumption behaviour and a drastically shortened battery life. Since mobiles apps can have huge input domains, therefore exhaustive exploration is often impractical. We believe that there is a need for a framework that can systematically detect and fix energy bugs in mobile apps in a scalable fashion. To address this need, we have developed EnergyPatch, a framework that uses a combination of static and dynamic analysis techniques to detect, validate and repair energy bugs in Android apps. The use of a light-weight, static analysis technique enables EnergyPatch to quickly narrow down to the potential program paths along which energy bugs may occur. Subsequent exploration of these potentially buggy program paths using a dynamic analysis technique helps in validations of the reported bugs and to generate test cases. Finally, EnergyPatch generates repair expressions to fix the validated energy bugs. Evaluation with real-life apps from repositories such as F-droid and Github, shows that EnergyPatch is scalable and can produce results in reasonable amount of time. Additionally, we observed that the repair expressions generated by EnergyPatch could bring down the energy consumption on tested apps up to 60 percent.",
"title": ""
}
] |
scidocsrr
|
24fc0959f0cf5649e13c6338f8a89b91
|
Measuring Latency in Virtual Environments
|
[
{
"docid": "a81c87374e7ea9a3066f643ac89bfd2b",
"text": "Image edge detection is a process of locating the e dg of an image which is important in finding the approximate absolute gradient magnitude at each point I of an input grayscale image. The problem of getting an appropriate absolute gradient magnitude for edges lies in the method used. The Sobel operator performs a 2-D spatial gradient measurement on images. Transferri ng a 2-D pixel array into statistically uncorrelated data se t enhances the removal of redundant data, as a result, reduction of the amount of data is required to represent a digital image. The Sobel edge detector uses a pair of 3 x 3 convolution masks, one estimating gradient in the x-direction and the other estimating gradient in y–direction. The Sobel detector is incredibly sensit ive o noise in pictures, it effectively highlight them as edges. Henc e, Sobel operator is recommended in massive data communication found in data transfer.",
"title": ""
}
] |
[
{
"docid": "e646f3cd80aecac679558148bff3b1e5",
"text": "Analog front-end (AFE) circuits, which mainly consist of a transimpedance amplifier (TIA) with wide dynamic range and a timing discriminator with double threshold voltage, were designed and implemented for a pulsed time-of-fight 4-D imaging LADAR receiver. The preamplifier of the proposed TIA adopts a shunt-feedback topology to amplify weak echo signal, and a current-mirror topology to amplify strong one, respectively. The proposed AFE can capture directly the pulsed echo amplitude with wide dynamic range through programmable gain control switches. The proposed AFE circuits, which achieve a high gain of 106 dB<inline-formula> <tex-math notation=\"LaTeX\">$\\Omega $ </tex-math></inline-formula>, a linear dynamic range of 80 dB, an averaged input-referred noise density of 0.89 pA/Hz<sup>0.5</sup> and a minimum detectable signal of <inline-formula> <tex-math notation=\"LaTeX\">$0.36~\\mu \\text{A}$ </tex-math></inline-formula> at SNR = 5, and a sensitivity of 8 nW with APD of 45 A/W, were designed with 3.3 V devices and fabricated in a 0.18-<inline-formula> <tex-math notation=\"LaTeX\">$\\mu \\text{m}$ </tex-math></inline-formula> standard CMOS process. The total area of AFE, which includes the circuit core, bandgap and bias circuits, and I/O PAD, is approximately equal to <inline-formula> <tex-math notation=\"LaTeX\">$1.20\\times 1.13$ </tex-math></inline-formula> mm<sup>2</sup>.",
"title": ""
},
{
"docid": "4bca13cc04fc128844ecc48c0357b974",
"text": "From its roots in physics, mathematics, and biology, the study of complexity science, or complex adaptive systems, has expanded into the domain of organizations and systems of organizations. Complexity science is useful for studying the evolution of complex organizations -entities with multiple, diverse, interconnected elements. Evolution of complex organizations often is accompanied by feedback effects, nonlinearity, and other conditions that add to the complexity of existing organizations and the unpredictability of the emergence of new entities. Health care organizations are an ideal setting for the application of complexity science due to the diversity of organizational forms and interactions among organizations that are evolving. Too, complexity science can benefit from attention to the world’s most complex human organizations. Organizations within and across the health care sector are increasingly interdependent. Not only are new, highly powerful and diverse organizational forms being created, but also the restructuring has occurred within very short periods of time. In this chapter, we review the basic tenets of complexity science. We identify a series of key differences between the complexity science and established theoretical approaches to studying health organizations, based on the ways in which time, space, and constructs are framed. The contrasting perspectives are demonstrated using two case examples drawn from healthcare innovation and healthcare integrated systems research. Complexity science broadens and deepens the scope of inquiry into health care organizations, expands corresponding methods of research, and increases the ability of theory to generate valid research on complex organizational forms. Formatted",
"title": ""
},
{
"docid": "59db435e906db2c198afdc5cc7c7de2c",
"text": "Although the recent advances in the sparse representations of images have achieved outstanding denosing results, removing real, structured noise in digital videos remains a challenging problem. We show the utility of reliable motion estimation to establish temporal correspondence across frames in order to achieve high-quality video denoising. In this paper, we propose an adaptive video denosing framework that integrates robust optical flow into a non-local means (NLM) framework with noise level estimation. The spatial regularization in optical flow is the key to ensure temporal coherence in removing structured noise. Furthermore, we introduce approximate K-nearest neighbor matching to significantly reduce the complexity of classical NLM methods. Experimental results show that our system is comparable with the state of the art in removing AWGN, and significantly outperforms the state of the art in removing real, structured noise.",
"title": ""
},
{
"docid": "95a58a9fa31373296af2c41e47fa0884",
"text": "Force.com is the preeminent on-demand application development platform in use today, supporting some 55,000+ organizations. Individual enterprises and commercial software-as-a-service (SaaS) vendors trust the platform to deliver robust, reliable, Internet-scale applications. To meet the extreme demands of its large user population, Force.com's foundation is a metadatadriven software architecture that enables multitenant applications.\n The focus of this paper is multitenancy, a fundamental design approach that can dramatically improve SaaS application management. This paper defines multitenancy, explains its benefits, and demonstrates why metadata-driven architectures are the premier choice for implementing multitenancy.",
"title": ""
},
{
"docid": "5b2b0a3a857d06246cebb69e6e575b5f",
"text": "This paper develops a novel framework for feature extraction based on a combination of Linear Discriminant Analysis and cross-correlation. Multiple Electrocardiogram (ECG) signals, acquired from the human heart in different states such as in fear, during exercise, etc. are used for simulations. The ECG signals are composed of P, Q, R, S and T waves. They are characterized by several parameters and the important information relies on its HRV (Heart Rate Variability). Human interpretation of such signals requires experience and incorrect readings could result in potentially life threatening and even fatal consequences. Thus a proper interpretation of ECG signals is of paramount importance. This work focuses on designing a machine based classification algorithm for ECG signals. The proposed algorithm filters the ECG signals to reduce the effects of noise. It then uses the Fourier transform to transform the signals into the frequency domain for analysis. The frequency domain signal is then cross correlated with predefined classes of ECG signals, in a manner similar to pattern recognition. The correlated co-efficients generated are then thresholded. Moreover Linear Discriminant Analysis is also applied. Linear Discriminant Analysis makes classes of different multiple ECG signals. LDA makes classes on the basis of mean, global mean, mean subtraction, transpose, covariance, probability and frequencies. And also setting thresholds for the classes. The distributed space area is divided into regions corresponding to each of the classes. Each region associated with a class is defined by its thresholds. So it is useful in distinguishing ECG signals from each other. And pedantic details from LDA (Linear Discriminant Analysis) output graph can be easily taken in account rapidly. The output generated after applying cross-correlation and LDA displays either normal, fear, smoking or exercise ECG signal. As a result, the system can help clinically on large scale by providing reliable and accurate classification in a fast and computationally efficient manner. The doctors can use this system by gaining more efficiency. As very few errors are involved in it, showing accuracy between 90% 95%.",
"title": ""
},
{
"docid": "361bc333d47d2e1d4b6a6e8654d2659d",
"text": "Both the industrial organization theory (IO) and the resource-based view of the firm (RBV) have advanced our understanding of the antecedents of competitive advantage but few have attempted to verify the outcome variables of competitive advantage and the persistence of such outcome variables. Here by integrating both IO and RBV perspectives in the analysis of competitive advantage at the firm level, our study clarifies a conceptual distinction between two types of competitive advantage: temporary competitive advantage and sustainable competitive advantage, and explores how firms transform temporary competitive advantage into sustainable competitive advantage. Testing of the developed hypotheses, based on a survey of 165 firms from Taiwan’s information and communication technology industry, suggests that firms with a stronger market position can only attain a better outcome of temporary competitive advantage whereas firms possessing a superior position in technological resources or capabilities can attain a better outcome of sustainable competitive advantage. More importantly, firms can leverage a temporary competitive advantage as an outcome of market position, to improving their technological resource and capability position, which in turn can enhance their sustainable competitive advantage.",
"title": ""
},
{
"docid": "856e7eeca46eb2c1a27ac0d1b5f0dc0b",
"text": "The World Health Organization recommends four antenatal visits for pregnant women in developing countries. Cash transfers have been used to incentivize participation in health services. We examined whether modest cash transfers for participation in antenatal care would increase antenatal care attendance and delivery in a health facility in Kisoro, Uganda. Twenty-three villages were randomized into four groups: 1) no cash; 2) 0.20 United States Dollars (USD) for each of four visits; 3) 0.40 USD for a single first trimester visit only; 4) 0.40 USD for each of four visits. Outcomes were three or more antenatal visits and delivery in a health facility. Chi-square, analysis of variance, and generalized estimating equation analyses were performed to detect differences in outcomes. Women in the 0.40 USD/visit group had higher odds of three or more antenatal visits than the control group (OR 1.70, 95% CI: 1.13-2.57). The odds of delivering in a health facility did not differ between groups. However, women with more antenatal visits had higher odds of delivering in a health facility (OR 1.21, 95% CI: 1.03-1.42). These findings are important in an area where maternal mortality is high, utilization of health services is low, and resources are scarce.",
"title": ""
},
{
"docid": "eaf0693dd5447d58d04e10aef02ef331",
"text": "A key step in the semantic analysis of network traffic is to parse the traffic stream according to the high-level protocols it contains. This process transforms raw bytes into structured, typed, and semantically meaningful data fields that provide a high-level representation of the traffic. However, constructing protocol parsers by hand is a tedious and error-prone affair due to the complexity and sheer number of application protocols.This paper presents binpac, a declarative language and compiler designed to simplify the task of constructing robust and efficient semantic analyzers for complex network protocols. We discuss the design of the binpac language and a range of issues in generating efficient parsers from high-level specifications. We have used binpac to build several protocol parsers for the \"Bro\" network intrusion detection system, replacing some of its existing analyzers (handcrafted in C++), and supplementing its operation with analyzers for new protocols. We can then use Bro's powerful scripting language to express application-level analysis of network traffic in high-level terms that are both concise and expressive. binpac is now part of the open-source Bro distribution.",
"title": ""
},
{
"docid": "c61a6e26941409db9cb4a95c05a82785",
"text": "An important aspect in visualization design is the connection between what a designer does and the decisions the designer makes. Existing design process models, however, do not explicitly link back to models for visualization design decisions. We bridge this gap by introducing the design activity framework, a process model that explicitly connects to the nested model, a well-known visualization design decision model. The framework includes four overlapping activities that characterize the design process, with each activity explicating outcomes related to the nested model. Additionally, we describe and characterize a list of exemplar methods and how they overlap among these activities. The design activity framework is the result of reflective discussions from a collaboration on a visualization redesign project, the details of which we describe to ground the framework in a real-world design process. Lastly, from this redesign project we provide several research outcomes in the domain of cybersecurity, including an extended data abstraction and rich opportunities for future visualization research.",
"title": ""
},
{
"docid": "baafff8270bf3d33d70544130968f6d3",
"text": "The authors present a new algorithm for identifying the distribution of different material types in volumetric datasets such as those produced with magnetic resonance imaging (MRI) or computed tomography (CT). Because the authors allow for mixtures of materials and treat voxels as regions, their technique reduces errors that other classification techniques can create along boundaries between materials and is particularly useful for creating accurate geometric models and renderings from volume data. It also has the potential to make volume measurements more accurately and classifies noisy, low-resolution data well. There are two unusual aspects to the authors' approach. First, they assume that, due to partial-volume effects, or blurring, voxels can contain more than one material, e.g., both muscle and fat; the authors compute the relative proportion of each material in the voxels. Second, they incorporate information from neighboring voxels into the classification process by reconstructing a continuous function, /spl rho/(x), from the samples and then looking at the distribution of values that /spl rho/(x) takes on within the region of a voxel. This distribution of values is represented by a histogram taken over the region of the voxel; the mixture of materials that those values measure is identified within the voxel using a probabilistic Bayesian approach that matches the histogram by finding the mixture of materials within each voxel most likely to have created the histogram. The size of regions that the authors classify is chosen to match the sparing of the samples because the spacing is intrinsically related to the minimum feature size that the reconstructed continuous function can represent.",
"title": ""
},
{
"docid": "0102748c7f9969fb53a3b5ee76b6eefe",
"text": "Face veri cation is the task of deciding by analyzing face images, whether a person is who he/she claims to be. This is very challenging due to image variations in lighting, pose, facial expression, and age. The task boils down to computing the distance between two face vectors. As such, appropriate distance metrics are essential for face veri cation accuracy. In this paper we propose a new method, named the Cosine Similarity Metric Learning (CSML) for learning a distance metric for facial veri cation. The use of cosine similarity in our method leads to an e ective learning algorithm which can improve the generalization ability of any given metric. Our method is tested on the state-of-the-art dataset, the Labeled Faces in the Wild (LFW), and has achieved the highest accuracy in the literature. Face veri cation has been extensively researched for decades. The reason for its popularity is the non-intrusiveness and wide range of practical applications, such as access control, video surveillance, and telecommunication. The biggest challenge in face veri cation comes from the numerous variations of a face image, due to changes in lighting, pose, facial expression, and age. It is a very di cult problem, especially using images captured in totally uncontrolled environment, for instance, images from surveillance cameras, or from the Web. Over the years, many public face datasets have been created for researchers to advance state of the art and make their methods comparable. This practice has proved to be extremely useful. FERET [1] is the rst popular face dataset freely available to researchers. It was created in 1993 and since then research in face recognition has advanced considerably. Researchers have come very close to fully recognizing all the frontal images in FERET [2,3,4,5,6]. However, these methods are not robust to deal with non-frontal face images. Recently a new face dataset named the Labeled Faces in the Wild (LFW) [7] was created. LFW is a full protocol for evaluating face veri cation algorithms. Unlike FERET, LFW is designed for unconstrained face veri cation. Faces in LFW can vary in all possible ways due to pose, lighting, expression, age, scale, and misalignment (Figure 1). Methods for frontal images cannot cope with these variations and as such many researchers have turned to machine learning to 2 Hieu V. Nguyen and Li Bai Fig. 1. From FERET to LFW develop learning based face veri cation methods [8,9]. One of these approaches is to learn a transformation matrix from the data so that the Euclidean distance can perform better in the new subspace. Learning such a transformation matrix is equivalent to learning a Mahalanobis metric in the original space [10]. Xing et al. [11] used semide nite programming to learn a Mahalanobis distance metric for clustering. Their algorithm aims to minimize the sum of squared distances between similarly labeled inputs, while maintaining a lower bound on the sum of distances between di erently labeled inputs. Goldberger et al. [10] proposed Neighbourhood Component Analysis (NCA), a distance metric learning algorithm especially designed to improve kNN classi cation. The algorithm is to learn a Mahalanobis distance by minimizing the leave-one-out cross validation error of the kNN classi er on a training set. Because it uses softmax activation function to convert distance to probability, the gradient computation step is expensive. Weinberger et al. 
[12] proposed a method that learns a matrix designed to improve the performance of kNN classi cation. The objective function is composed of two terms. The rst term minimizes the distance between target neighbours. The second term is a hinge-loss that encourages target neighbours to be at least one distance unit closer than points from other classes. It requires information about the class of each sample. As a result, their method is not applicable for the restricted setting in LFW (see section 2.1). Recently, Davis et al. [13] have taken an information theoretic approach to learn a Mahalanobis metric under a wide range of possible constraints and prior knowledge on the Mahalanobis distance. Their method regularizes the learned matrix to make it as close as possible to a known prior matrix. The closeness is measured as a Kullback-Leibler divergence between two Gaussian distributions corresponding to the two matrices. In this paper, we propose a new method named Cosine Similarity Metric Learning (CSML). There are two main contributions. The rst contribution is Cosine Similarity Metric Learning for Face Veri cation 3 that we have shown cosine similarity to be an e ective alternative to Euclidean distance in metric learning problem. The second contribution is that CSML can improve the generalization ability of an existing metric signi cantly in most cases. Our method is di erent from all the above methods in terms of distance measures. All of the other methods use Euclidean distance to measure the dissimilarities between samples in the transformed space whilst our method uses cosine similarity which leads to a simple and e ective metric learning method. The rest of this paper is structured as follows. Section 2 presents CSML method in detail. Section 3 present how CSML can be applied to face veri cation. Experimental results are presented in section 4. Finally, conclusion is given in section 5. 1 Cosine Similarity Metric Learning The general idea is to learn a transformation matrix from training data so that cosine similarity performs well in the transformed subspace. The performance is measured by cross validation error (cve). 1.1 Cosine similarity Cosine similarity (CS) between two vectors x and y is de ned as: CS(x, y) = x y ‖x‖ ‖y‖ Cosine similarity has a special property that makes it suitable for metric learning: the resulting similarity measure is always within the range of −1 and +1. As shown in section 1.3, this property allows the objective function to be simple and e ective. 1.2 Metric learning formulation Let {xi, yi, li}i=1 denote a training set of s labeled samples with pairs of input vectors xi, yi ∈ R and binary class labels li ∈ {1, 0} which indicates whether xi and yi match or not. The goal is to learn a linear transformation A : R → R(d ≤ m), which we will use to compute cosine similarities in the transformed subspace as: CS(x, y,A) = (Ax) (Ay) ‖Ax‖ ‖Ay‖ = xAAy √ xTATAx √ yTATAy Speci cally, we want to learn the linear transformation that minimizes the cross validation error when similarities are measured in this way. We begin by de ning the objective function. 4 Hieu V. Nguyen and Li Bai 1.3 Objective function First, we de ne positive and negative sample index sets Pos and Neg as:",
"title": ""
},
{
"docid": "222c51f079c785bb2aa64d2937e50ff0",
"text": "Security and privacy in cloud computing are critical components for various organizations that depend on the cloud in their daily operations. Customers' data and the organizations' proprietary information have been subject to various attacks in the past. In this paper, we develop a set of Moving Target Defense (MTD) strategies that randomize the location of the Virtual Machines (VMs) to harden the cloud against a class of Multi-Armed Bandit (MAB) policy-based attacks. These attack policies capture the behavior of adversaries that seek to explore the allocation of VMs in the cloud and exploit the ones that provide the highest rewards (e.g., access to critical datasets, ability to observe credit card transactions, etc). We assess through simulation experiments the performance of our MTD strategies, showing that they can make MAB policy-based attacks no more effective than random attack policies. Additionally, we show the effects of critical parameters – such as discount factors, the time between randomizing the locations of the VMs and variance in the rewards obtained – on the performance of our defenses. We validate our results through simulations and a real OpenStack system implementation in our lab to assess migration times and down times under different system loads.",
"title": ""
},
{
"docid": "3230ef371e7475cfa82c7ab240fdd610",
"text": "After a decade of fundamental interdisciplinary research in machine learning, the spadework in this field has been done; the 1990s should see the widespread exploitation of knowledge discovery as an aid to assembling knowledge bases. The contributors to the AAAI Press book Knowledge Discovery in Databases were excited at the potential benefits of this research. The editors hope that some of this excitement will communicate itself to \"AI Magazine readers of this article.",
"title": ""
},
{
"docid": "424b80d94ec00c6795d8c8a689c1d119",
"text": "With more than 250 million active users, Facebook (FB) is currently one of the most important online social networks. Our goal in this paper is to obtain a representative (unbiased) sample of Facebook users by crawling its social graph. In this quest, we consider and implement several candidate techniques. Two approaches that are found to perform well are the Metropolis-Hasting random walk (MHRW) and a re-weighted random walk (RWRW). Both have pros and cons, which we demonstrate through a comparison to each other as well as to the \"ground-truth\" (UNI - obtained through true uniform sampling of FB userIDs). In contrast, the traditional Breadth-First-Search (BFS) and Random Walk (RW) perform quite poorly, producing substantially biased results. In addition to offline performance assessment, we introduce online formal convergence diagnostics to assess sample quality during the data collection process. We show how these can be used to effectively determine when a random walk sample is of adequate size and quality for subsequent use (i.e., when it is safe to cease sampling). Using these methods, we collect the first, to the best of our knowledge, unbiased sample of Facebook. Finally, we use one of our representative datasets, collected through MHRW, to characterize several key properties of Facebook.",
"title": ""
},
{
"docid": "8fd28fb7c30c3dc30d4a92f95d38c966",
"text": "In recent years, iris recognition is becoming a very active topic in both research and practical applications. However, fake iris is a potential threat there are potential threats for iris-based systems. This paper presents a novel fake iris detection method based on the analysis of 2-D Fourier spectra together with iris image quality assessment. First, image quality assessment method is used to exclude the defocused, motion blurred fake iris. Then statistical properties of Fourier spectra for fake iris are used for clear fake iris detection. Experimental results show that the proposed method can detect photo iris and printed iris effectively.",
"title": ""
},
{
"docid": "5f01cb5c34ac9182f6485f70d19101db",
"text": "Gastroeophageal reflux is a condition in which the acidified liquid content of the stomach backs up into the esophagus. The antiacid magaldrate and prokinetic domperidone are two drugs clinically used for the treatment of gastroesophageal reflux symptoms. However, the evidence of a superior effectiveness of this combination in comparison with individual drugs is lacking. A double-blind, randomized and comparative clinical trial study was designed to characterize the efficacy and safety of a fixed dose combination of magaldrate (800 mg)/domperidone (10 mg) against domperidone alone (10 mg), in patients with gastroesophageal reflux symptoms. One hundred patients with gastroesophageal reflux diagnosed by Carlsson scale were randomized to receive a chewable tablet of a fixed dose of magaldrate/domperidone combination or domperidone alone four times each day during a month. Magaldrate/domperidone combination showed a superior efficacy to decrease global esophageal (pyrosis, regurgitation, dysphagia, hiccup, gastroparesis, sialorrhea, globus pharyngeus and nausea) and extraesophageal (chronic cough, hoarseness, asthmatiform syndrome, laryngitis, pharyngitis, halitosis and chest pain) reflux symptoms than domperidone alone. In addition, magaldrate/domperidone combination improved in a statistically manner the quality of life of patients with gastroesophageal reflux respect to monotherapy, and more patients perceived the combination as a better treatment. Both treatments were well tolerated. Data suggest that oral magaldrate/domperidone mixture could be a better option in the treatment of gastroesophageal reflux symptoms than only domperidone.",
"title": ""
},
{
"docid": "7228073bef61131c2efcdc736d90ca1b",
"text": "With the advent of word representations, word similarity tasks are becoming increasing popular as an evaluation metric for the quality of the representations. In this paper, we present manually annotated monolingual word similarity datasets of six Indian languages – Urdu, Telugu, Marathi, Punjabi, Tamil and Gujarati. These languages are most spoken Indian languages worldwide after Hindi and Bengali. For the construction of these datasets, our approach relies on translation and re-annotation of word similarity datasets of English. We also present baseline scores for word representation models using state-of-the-art techniques for Urdu, Telugu and Marathi by evaluating them on newly created word similarity datasets.",
"title": ""
},
{
"docid": "511c90eadbbd4129fdf3ee9e9b2187d3",
"text": "BACKGROUND\nPressure ulcers are associated with substantial health burdens but may be preventable.\n\n\nPURPOSE\nTo review the clinical utility of pressure ulcer risk assessment instruments and the comparative effectiveness of preventive interventions in persons at higher risk.\n\n\nDATA SOURCES\nMEDLINE (1946 through November 2012), CINAHL, the Cochrane Library, grant databases, clinical trial registries, and reference lists.\n\n\nSTUDY SELECTION\nRandomized trials and observational studies on effects of using risk assessment on clinical outcomes and randomized trials of preventive interventions on clinical outcomes.\n\n\nDATA EXTRACTION\nMultiple investigators abstracted and checked study details and quality using predefined criteria.\n\n\nDATA SYNTHESIS\nOne good-quality trial found no evidence that use of a pressure ulcer risk assessment instrument, with or without a protocolized intervention strategy based on assessed risk, reduces risk for incident pressure ulcers compared with less standardized risk assessment based on nurses' clinical judgment. In higher-risk populations, 1 good-quality and 4 fair-quality randomized trials found that more advanced static support surfaces were associated with lower risk for pressure ulcers compared with standard mattresses (relative risk range, 0.20 to 0.60). Evidence on the effectiveness of low-air-loss and alternating-air mattresses was limited, with some trials showing no clear differences from advanced static support surfaces. Evidence on the effectiveness of nutritional supplementation, repositioning, and skin care interventions versus usual care was limited and had methodological shortcomings, precluding strong conclusions.\n\n\nLIMITATION\nOnly English-language articles were included, publication bias could not be formally assessed, and most studies had methodological shortcomings.\n\n\nCONCLUSION\nMore advanced static support surfaces are more effective than standard mattresses for preventing ulcers in higher-risk populations. The effectiveness of formal risk assessment instruments and associated intervention protocols compared with less standardized assessment methods and the effectiveness of other preventive interventions compared with usual care have not been clearly established.",
"title": ""
},
{
"docid": "5a1f4efc96538c1355a2742f323b7a0e",
"text": "A great challenge in the proteomics and structural genomics era is to predict protein structure and function, including identification of those proteins that are partially or wholly unstructured. Disordered regions in proteins often contain short linear peptide motifs (e.g., SH3 ligands and targeting signals) that are important for protein function. We present here DisEMBL, a computational tool for prediction of disordered/unstructured regions within a protein sequence. As no clear definition of disorder exists, we have developed parameters based on several alternative definitions and introduced a new one based on the concept of \"hot loops,\" i.e., coils with high temperature factors. Avoiding potentially disordered segments in protein expression constructs can increase expression, foldability, and stability of the expressed protein. DisEMBL is thus useful for target selection and the design of constructs as needed for many biochemical studies, particularly structural biology and structural genomics projects. The tool is freely available via a web interface (http://dis.embl.de) and can be downloaded for use in large-scale studies.",
"title": ""
}
] |
scidocsrr
|
d13b9b82be0cc86e59f4579988430fc0
|
Pairs trading strategy optimization using the reinforcement learning method: a cointegration approach
|
[
{
"docid": "f72f55da6ec2fdf9d0902648571fd9fc",
"text": "Recently, numerous investigations for stock price prediction and portfolio management using machine learning have been trying to develop efficient mechanical trading systems. But these systems have a limitation in that they are mainly based on the supervised leaming which is not so adequate for leaming problems with long-term goals and delayed rewards. This paper proposes a method of applying reinforcement leaming, suitable for modeling and leaming various kinds of interactions in real situations, to the problem of stock price prediction. The stock price prediction problem is considered as Markov process which can be optimized by reinforcement learning based algorithm. TD(O), a reinforcement learning algorithm which leams only from experiences, is adopted and function approximation by artificial neural network is performed to leam the values of states each of which corresponds to a stock price trend at a given time. An experimental result based on the Korean stock market is presented to evaluate the performance of the proposed method.",
"title": ""
},
{
"docid": "51f2ba8b460be1c9902fb265b2632232",
"text": "Pairs trading is an important and challenging research area in computational finance, in which pairs of stocks are bought and sold in pair combinations for arbitrage opportunities. Traditional methods that solve this set of problems mostly rely on statistical methods such as regression. In contrast to the statistical approaches, recent advances in computational intelligence (CI) are leading to promising opportunities for solving problems in the financial applications more effectively. In this paper, we present a novel methodology for pairs trading using genetic algorithms (GA). Our results showed that the GA-based models are able to significantly outperform the benchmark and our proposed method is capable of generating robust models to tackle the dynamic characteristics in the financial application studied. Based upon the promising results obtained, we expect this GA-based method to advance the research in computational intelligence for finance and provide an effective solution to pairs trading for investment in practice.",
"title": ""
},
{
"docid": "427796f5c37e41363c1664b47596eacf",
"text": "A trading and portfolio management system called QSR is proposed. It uses Q-learning and Sharpe ratio maximization algorithm. We use absolute proot and relative risk-adjusted proot as performance function to train the system respectively, and employ a committee of two networks to do the testing. The new proposed algorithm makes use of the advantages of both parts and can be used in a more general case. We demonstrate with experimental results that the proposed approach generates appreciable proots from trading in the foreign exchange markets.",
"title": ""
}
] |
[
{
"docid": "30fda7dabb70dffbf297096671802c93",
"text": "Much attention has recently been given to a printing method because they are easily designable, have a low cost, and can be mass produced. Numerous electronic devices are fabricated using printing methods because of these advantages. In paper mechatronics, attempts have been made to fabricate robots by printing on paper substrates. The robots are given structures through self-folding and functions using printed actuators. We developed a new system and device to fabricate more sophisticated printed robots. First, we successfully fabricated complex self-folding structures by applying an automatic cutting. Second, a rapidly created and low-voltage electrothermal actuator was developed using an inkjet printed circuit. Finally, a printed robot was fabricated by combining two techniques and two types of paper; a structure design paper and a circuit design paper. Gripper and conveyor robots were fabricated, and their functions were verified. These works demonstrate the possibility of paper mechatronics for rapid and low-cost prototyping as well as of printed robots.",
"title": ""
},
{
"docid": "58c488555240ded980033111a9657be4",
"text": "BACKGROUND\nThe management of opioid-induced constipation (OIC) is often complicated by the fact that clinical measures of constipation do not always correlate with patient perception. As the discomfort associated with OIC can lead to poor compliance with the opioid treatment, a shift in focus towards patient assessment is often advocated.\n\n\nSCOPE\nThe Bowel Function Index * (BFI) is a new patient-assessment scale that has been developed and validated specifically for OIC. It is a physician-administered, easy-to-use scale made up of three items (ease of defecation, feeling of incomplete bowel evacuation, and personal judgement of constipation). An extensive analysis has been performed in order to validate the BFI as reliable, stable, clinically valid, and responsive to change in patients with OIC, with a 12-point change in score constituting a clinically relevant change in constipation.\n\n\nFINDINGS\nThe results of the validation analysis were based on major clinical trials and have been further supported by data from a large open-label study and a pharmaco-epidemiological study, in which the BFI was used effectively to assess OIC in a large population of patients treated with opioids. Although other patient self-report scales exist, the BFI offers several unique advantages. First, by being physician-administered, the BFI minimizes reading and comprehension difficulties; second, by offering general and open-ended questions which capture patient perspective, the BFI is likely to detect most patients suffering from OIC; third, by being short and easy-to-use, it places little burden on the patient, thereby increasing the likelihood of gathering accurate information.\n\n\nCONCLUSION\nAltogether, the available data suggest that the BFI will be useful in clinical trials and in daily practice.",
"title": ""
},
{
"docid": "31a2e6948a816a053d62e3748134cdc2",
"text": "In model-based reinforcement learning, generative and temporal models of environments can be leveraged to boost agent performance, either by tuning the agent’s representations during training or via use as part of an explicit planning mechanism. However, their application in practice has been limited to simplistic environments, due to the difficulty of training such models in larger, potentially partially-observed and 3D environments. In this work we introduce a novel action-conditioned generative model of such challenging environments. The model features a non-parametric spatial memory system in which we store learned, disentangled representations of the environment. Low-dimensional spatial updates are computed using a state-space model that makes use of knowledge on the prior dynamics of the moving agent, and high-dimensional visual observations are modelled with a Variational Auto-Encoder. The result is a scalable architecture capable of performing coherent predictions over hundreds of time steps across a range of partially observed 2D and 3D environments.",
"title": ""
},
{
"docid": "ba7701a94880b59bbbd49fbfaca4b8c3",
"text": "Many rural roads lack sharp, smoothly curving edges and a homogeneous surface appearance, hampering traditional vision-based road-following methods. However, they often have strong texture cues parallel to the road direction in the form of ruts and tracks left by other vehicles. This paper describes an unsupervised algorithm for following ill-structured roads in which dominant texture orientations computed with Gabor wavelet filters vote for a consensus road vanishing point location. The technique is first described for estimating the direction of straight-road segments, then extended to curved and undulating roads by tracking the vanishing point indicated by a differential “strip” of voters moving up toward the nominal vanishing line. Finally, the vanishing point is used to constrain a search for the road boundaries by maximizing textureand color-based region discriminant functions. Results are shown for a variety of road scenes including gravel roads, dirt trails, and highways.",
"title": ""
},
{
"docid": "10d380b25a03c608c11fe5dde545f4b4",
"text": "The increasing complexity and diversity of technical products plus the massive amount of product-related data overwhelms humans dealing with them at all stages of the life-cycle. We present a novel architecture for building smart products that are able to interact with humans in a natural and proactive way, and assist and guide them in performing their tasks. Further, we show how communication capabilities of smart products are used to account for the limited resources of individual products by leveraging resources provided by the environment or other smart products for storage and natural interaction.",
"title": ""
},
{
"docid": "dffb89c39f11934567f98a31a0ef157c",
"text": "We present a new method for semantic role labeling in which arguments and semantic roles are jointly embedded in a shared vector space for a given predicate. These embeddings belong to a neural network, whose output represents the potential functions of a graphical model designed for the SRL task. We consider both local and structured learning methods and obtain strong results on standard PropBank and FrameNet corpora with a straightforward product-of-experts model. We further show how the model can learn jointly from PropBank and FrameNet annotations to obtain additional improvements on the smaller FrameNet dataset.",
"title": ""
},
{
"docid": "97ba22fa685384e9dfd0402798fe7019",
"text": "We consider the problems of i) using public-key encryption to enforce dynamic access control on clouds; and ii) key rotation of data stored on clouds. Historically, proxy re-encryption, ciphertext delegation, and related technologies have been advocated as tools that allow for revocation and the ability to cryptographically enforce dynamic access control on the cloud, and more recently they have suggested for key rotation of data stored on clouds. Current literature frequently assumes that data is encrypted directly with public-key encryption primitives. However, for efficiency reasons systems would need to deploy with hybrid encryption. Unfortunately, we show that if hybrid encryption is used, then schemes are susceptible to a key-scraping attack. Given a proxy re-encryption or delegation primitive, we show how to construct a new hybrid scheme that is resistant to this attack and highly efficient. The scheme only requires the modification of a small fraction of the bits of the original ciphertext. The number of modifications scales linearly with the security parameter and logarithmically with the file length: it does not require the entire symmetric-key ciphertext to be re-encrypted! Beyond the construction, we introduce new security definitions for the problem at hand, prove our construction secure, discuss use cases, and provide quantitative data showing its practical benefits and efficiency. We show the construction extends to identity-based proxy re-encryption and revocable-storage attribute-based encryption, and thus that the construction is robust, supporting most primitives of interest.",
"title": ""
},
{
"docid": "22ab8eb2b8eaafb2ee72ea0ed7148ca4",
"text": "As travel is taking more significant part in our life, route recommendation service becomes a big business and attracts many major players in IT industry. Given a pair of user-specified origin and destination, a route recommendation service aims to provide users with the routes of best travelling experience according to criteria, such as travelling distance, travelling time, traffic condition, etc. However, previous research shows that even the routes recommended by the big-thumb service providers can deviate significantly from the routes travelled by experienced drivers. It means travellers' preferences on route selection are influenced by many latent and dynamic factors that are hard to model exactly with pre-defined formulas. In this work we approach this challenging problem with a very different perspective- leveraging crowds' knowledge to improve the recommendation quality. In this light, CrowdPlanner - a novel crowd-based route recommendation system has been developed, which requests human workers to evaluate candidate routes recommended by different sources and methods, and determine the best route based on their feedbacks. In this paper, we particularly focus on two important issues that affect system performance significantly: (1) how to efficiently generate tasks which are simple to answer but possess sufficient information to derive user-preferred routes; and (2) how to quickly identify a set of appropriate domain experts to answer the questions timely and accurately. Specifically, the task generation component in our system generates a series of informative and concise questions with optimized ordering for a given candidate route set so that workers feel comfortable and easy to answer. In addition, the worker selection component utilizes a set of selection criteria and an efficient algorithm to find the most eligible workers to answer the questions with high accuracy. A prototype system has been deployed to many voluntary mobile clients and extensive tests on real-scenario queries have shown the superiority of CrowdPlanner in comparison with the results given by map services and popular route mining algorithms.",
"title": ""
},
{
"docid": "8fa135e5d01ba2480dea4621ceb1e9f4",
"text": "With the advent of Grid and application technologies, scientists and engineers are building more and more complex applications to manage and process large data sets, and execute scientific experiments on distributed resources. Such application scenarios require means for composing and executing complex workflows. Therefore, many efforts have been made towards the development of workflow management systems for Grid computing. In this paper, we propose a taxonomy that characterizes and classifies various approaches for building and executing workflows on Grids. The taxonomy not only highlights the design and engineering similarities and differences of state-of-the-art in Grid workflow systems, but also identifies the areas that need further research.",
"title": ""
},
{
"docid": "493893b0eb606477b3d0a5b10ddf9ade",
"text": "While new therapies for chronic hepatitis C virus infection have delivered remarkable cure rates, curative therapies for chronic hepatitis B virus (HBV) infection remain a distant goal. Although current direct antiviral therapies are very efficient in controlling viral replication and limiting the progression to cirrhosis, these treatments require lifelong administration due to the frequent viral rebound upon treatment cessation, and immune modulation with interferon is only effective in a subgroup of patients. Specific immunotherapies can offer the possibility of eliminating or at least stably maintaining low levels of HBV replication under the control of a functional host antiviral response. Here, we review the development of immune cell therapy for HBV, highlighting the potential antiviral efficiency and potential toxicities in different groups of chronically infected HBV patients. We also discuss the chronic hepatitis B patient populations that best benefit from therapeutic immune interventions.",
"title": ""
},
{
"docid": "11ecb3df219152d33020ba1c4f8848bb",
"text": "Despite its great importance, modern network infrastructure is remarkable for the lack of rigor in its engineering. The Internet, which began as a research experiment, was never designed to handle the users and applications it hosts today. The lack of formalization of the Internet architecture meant limited abstractions and modularity, particularly for the control and management planes, thus requiring for every new need a new protocol built from scratch. This led to an unwieldy ossified Internet architecture resistant to any attempts at formal verification and to an Internet culture where expediency and pragmatism are favored over formal correctness. Fortunately, recent work in the space of clean slate Internet design-in particular, the software defined networking (SDN) paradigm-offers the Internet community another chance to develop the right kind of architecture and abstractions. This has also led to a great resurgence in interest of applying formal methods to specification, verification, and synthesis of networking protocols and applications. In this paper, we present a self-contained tutorial of the formidable amount of work that has been done in formal methods and present a survey of its applications to networking.",
"title": ""
},
{
"docid": "15ad5044900511277e0cd602b0c07c5e",
"text": "Intentional facial expression of emotion is critical to healthy social interactions. Patients with neurodegenerative disease, particularly those with right temporal or prefrontal atrophy, show dramatic socioemotional impairment. This was an exploratory study examining the neural and behavioral correlates of intentional facial expression of emotion in neurodegenerative disease patients and healthy controls. One hundred and thirty three participants (45 Alzheimer's disease, 16 behavioral variant frontotemporal dementia, 8 non-fluent primary progressive aphasia, 10 progressive supranuclear palsy, 11 right-temporal frontotemporal dementia, 9 semantic variant primary progressive aphasia patients and 34 healthy controls) were video recorded while imitating static images of emotional faces and producing emotional expressions based on verbal command; the accuracy of their expression was rated by blinded raters. Participants also underwent face-to-face socioemotional testing and informants described participants' typical socioemotional behavior. Patients' performance on emotion expression tasks was correlated with gray matter volume using voxel-based morphometry (VBM) across the entire sample. We found that intentional emotional imitation scores were related to fundamental socioemotional deficits; patients with known socioemotional deficits performed worse than controls on intentional emotion imitation; and intentional emotional expression predicted caregiver ratings of empathy and interpersonal warmth. Whole brain VBMs revealed a rightward cortical atrophy pattern homologous to the left lateralized speech production network was associated with intentional emotional imitation deficits. Results point to a possible neural mechanisms underlying complex socioemotional communication deficits in neurodegenerative disease patients.",
"title": ""
},
{
"docid": "eedcff8c2a499e644d1343b353b2a1b9",
"text": "We consider the problem of finding related tables in a large corpus of heterogenous tables. Detecting related tables provides users a powerful tool for enhancing their tables with additional data and enables effective reuse of available public data. Our first contribution is a framework that captures several types of relatedness, including tables that are candidates for joins and tables that are candidates for union. Our second contribution is a set of algorithms for detecting related tables that can be either unioned or joined. We describe a set of experiments that demonstrate that our algorithms produce highly related tables. We also show that we can often improve the results of table search by pulling up tables that are ranked much lower based on their relatedness to top-ranked tables. Finally, we describe how to scale up our algorithms and show the results of running it on a corpus of over a million tables extracted from Wikipedia.",
"title": ""
},
{
"docid": "382ac4d3ba3024d0c760cff1eef505c3",
"text": "We seek to close the gap between software engineering (SE) and human-computer interaction (HCI) by indicating interdisciplinary interfaces throughout the different phases of SE and HCI lifecycles. As agile representatives of SE, Extreme Programming (XP) and Agile Modeling (AM) contribute helpful principles and practices for a common engineering approach. We present a cross-discipline user interface design lifecycle that integrates SE and HCI under the umbrella of agile development. Melting IT budgets, pressure of time and the demand to build better software in less time must be supported by traveling as light as possible. We did, therefore, choose not just to mediate both disciplines. Following our surveys, a rather radical approach best fits the demands of engineering organizations.",
"title": ""
},
{
"docid": "4592c8f5758ccf20430dbec02644c931",
"text": "Taylor & Francis makes every effort to ensure the accuracy of all the information (the “Content”) contained in the publications on our platform. However, Taylor & Francis, our agents, and our licensors make no representations or warranties whatsoever as to the accuracy, completeness, or suitability for any purpose of the Content. Any opinions and views expressed in this publication are the opinions and views of the authors, and are not the views of or endorsed by Taylor & Francis. The accuracy of the Content should not be relied upon and should be independently verified with primary sources of information. Taylor and Francis shall not be liable for any losses, actions, claims, proceedings, demands, costs, expenses, damages, and other liabilities whatsoever or howsoever caused arising directly or indirectly in connection with, in relation to or arising out of the use of the Content.",
"title": ""
},
{
"docid": "5eb9e759ec8fc9ad63024130f753d136",
"text": "A 3-10 GHz broadband CMOS T/R switch for ultra-wideband (UWB) transceiver is presented. The broadband CMOS T/R switch is fabricated based on the 0.18 mu 1P6M standard CMOS process. On-chip measurement of the CMOS T/R switch is performed. The insertion loss of the proposed CMOS T/R Switch is about 3.1plusmn1.3dB. The return losses at both input and output terminals are higher than 14 dB. It is also characterized with 25-34dB isolation and 18-20 dBm input P1dB. The broadband CMOS T/R switch shows highly linear phase and group delay of 20plusmn10 ps from 10MHz to 15GHz. It can be easily integrated with other CMOS RFICs to form on-chip transceivers for various UWB applications",
"title": ""
},
{
"docid": "71a4399f8ccbeee4dced4d2eba3cf9ff",
"text": "Generating text from structured data is important for various tasks such as question answering and dialog systems. We show that in at least one domain, without any supervision and only based on unlabeled text, we are able to build a Natural Language Generation (NLG) system with higher performance than supervised approaches. In our approach, we interpret the structured data as a corrupt representation of the desired output and use a denoising auto-encoder to reconstruct the sentence. We show how to introduce noise into training examples that do not contain structured data, and that the resulting denoising auto-encoder generalizes to generate correct sentences when given structured data.",
"title": ""
},
{
"docid": "081b15c3dda7da72487f5a6e96e98862",
"text": "The CEDAR real-time address block location system, which determines candidates for the location of the destination address from a scanned mail piece image, is described. For each candidate destination address block (DAB), the address block location (ABL) system determines the line segmentation, global orientation, block skew, an indication of whether the address appears to be handwritten or machine printed, and a value indicating the degree of confidence that the block actually contains the destination address. With 20-MHz Sparc processors, the average time per mail piece for the combined hardware and software system components is 0.210 seconds. The system located 89.0% of the addresses as the top choice. Recent developments in the system include the use of a top-down segmentation tool, address syntax analysis using only connected component data, and improvements to the segmentation refinement routines. This has increased top choice performance to 91.4%.<<ETX>>",
"title": ""
},
{
"docid": "da36aa77b26e5966bdb271da19bcace3",
"text": "We present Brian, a new clock driven simulator for spiking neural networks which is available on almost all platforms. Brian is easy to learn and use, highly flexible and easily extensible. The Brian package itself and simulations using it are all written in the Python programming language, which is very well adapted to these goals. Python is an easy, concise and highly developed language with many advanced features and development tools, excellent documentation and a large community of users providing support and extension packages. Brian allows you to write very concise, natural and readable code for simulations, and makes it quick and efficient to play with these models (for example, changing the differential equations doesn't require a recompile of the code). Figure 1 shows an example of a complete network implemented in Brian, a randomly connected network of integrate and fire neurons with exponential inhibitory and excitatory currents (the CUBA network from [1]). Defining the model, running from Seventeenth Annual Computational Neuroscience Meeting: CNS*2008 Portland, OR, USA. 19–24 July 2008",
"title": ""
},
{
"docid": "10a6bccb77b6b94149c54c9e343ceb6c",
"text": "Clone detectors find similar code fragments (i.e., instances of code clones) and report large numbers of them for industrial systems. To maintain or manage code clones, developers often have to investigate differences of multiple cloned code fragments. However,existing program differencing techniques compare only two code fragments at a time. Developers then have to manually combine several pairwise differencing results. In this paper, we present an approach to automatically detecting differences across multiple clone instances. We have implemented our approach as an Eclipse plugin and evaluated its accuracy with three Java software systems. Our evaluation shows that our algorithm has precision over 97.66% and recall over 95.63% in three open source Java projects. We also conducted a user study of 18 developers to evaluate the usefulness of our approach for eight clone-related refactoring tasks. Our study shows that our approach can significantly improve developers’performance in refactoring decisions, refactoring details, and task completion time on clone-related refactoring tasks. Automatically detecting differences across multiple clone instances also opens opportunities for building practical applications of code clones in software maintenance, such as auto-generation of application skeleton, intelligent simultaneous code editing.",
"title": ""
}
] |
scidocsrr
|
e1fa522b8efde8d421969f7fee55a2f4
|
Online and Incremental Appearance-based SLAM in Highly Dynamic Environments
|
[
{
"docid": "beb22339057840dc9a7876a871d242cf",
"text": "We look at the problem of location recognition in a large image dataset using a vocabulary tree. This entails finding the location of a query image in a large dataset containing 3times104 streetside images of a city. We investigate how the traditional invariant feature matching approach falls down as the size of the database grows. In particular we show that by carefully selecting the vocabulary using the most informative features, retrieval performance is significantly improved, allowing us to increase the number of database images by a factor of 10. We also introduce a generalization of the traditional vocabulary tree search algorithm which improves performance by effectively increasing the branching factor of a fixed vocabulary tree.",
"title": ""
},
{
"docid": "3982c66e695fdefe36d8d143247add88",
"text": "A recognition scheme that scales efficiently to a large number of objects is presented. The efficiency and quality is exhibited in a live demonstration that recognizes CD-covers from a database of 40000 images of popular music CD’s. The scheme builds upon popular techniques of indexing descriptors extracted from local regions, and is robust to background clutter and occlusion. The local region descriptors are hierarchically quantized in a vocabulary tree. The vocabulary tree allows a larger and more discriminatory vocabulary to be used efficiently, which we show experimentally leads to a dramatic improvement in retrieval quality. The most significant property of the scheme is that the tree directly defines the quantization. The quantization and the indexing are therefore fully integrated, essentially being one and the same. The recognition quality is evaluated through retrieval on a database with ground truth, showing the power of the vocabulary tree approach, going as high as 1 million images.",
"title": ""
}
] |
[
{
"docid": "645f4db902246c01476ae941004bcd94",
"text": "The Internet of Things is part of our everyday life, which applies to all aspects of human life; from smart phones and environmental sensors to smart devices used in the industry. Although the Internet of Things has many advantages, there are risks and dangers as well that need to be addressed. The information used and transmitted on Internet of Things contain important info about the daily lives of people, banking information, location and geographical information, environmental and medical information, together with many other sensitive data. Therefore, it is critical to identify and address the security issues and challenges of Internet of Things. In this article, considering the broad scope of this field and its literature, we are going to express some comprehensive information on security challenges of the Internet of Things.",
"title": ""
},
{
"docid": "da86c72fff98d51d4d78ece7516664fe",
"text": "OBJECTIVE\nThe purpose of this study was to establish an Indian reference for normal fetal nasal bone length at 16-26 weeks of gestation.\n\n\nMETHODS\nThe fetal nasal bone was measured by ultrasound in 2,962 pregnant women at 16-26 weeks of gestation from 2004 to 2009 by a single operator, who performed three measurements for each woman when the fetus was in the midsagittal plane and the nasal bone was between a 45 and 135° angle to the ultrasound beam. All neonates were examined after delivery to confirm the absence of congenital abnormalities.\n\n\nRESULTS\nThe median nasal bone length increased with gestational age from 3.3 mm at 16 weeks to 6.65 mm at 26 weeks in a linear relationship. The fifth percentile nasal bone lengths were 2.37, 2.4, 2.8, 3.5, 3.6, 3.9, 4.3, 4.6, 4.68, 4.54, and 4.91 mm at 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, and 26 weeks, respectively.\n\n\nCONCLUSIONS\nWe have established the nasal bone length in South Indian fetuses at 16-26 weeks of gestation and there is progressive increase in the fifth percentile of nasal bone length with advancing gestational age. Hence, gestational age should be considered while defining hypoplasia of the nasal bone.",
"title": ""
},
{
"docid": "06abf2a7c6d0c25cfe54422268300e58",
"text": "The purpose of the present study is to provide useful data that could be applied to various types of periodontal plastic surgery by detailing the topography of the greater palatine artery (GPA), looking in particular at its depth from the palatal masticatory mucosa (PMM) and conducting a morphometric analysis of the palatal vault. Forty-three hemisectioned hard palates from embalmed Korean adult cadavers were used in this study. The morphometry of the palatal vault was analyzed, and then the specimens were decalcified and sectioned. Six parameters were measured using an image-analysis system after performing a standard calibration. In one specimen, the PMM was separated from the hard palate and subjected to a partial Sihler's staining technique, allowing the branching pattern of the GPA to be observed in a new method. The distances between the GPA and the gingival margin, and between the GPA and the cementoenamel junction were greatest at the maxillary second premolar. The shortest vertical distance between the GPA and the PMM decreased gradually as it proceeded anteriorly. The GPA was located deeper in the high-vault group than in the low-vault group. The premolar region should be recommended as the optimal donor site for tissue grafting, and in particular the second premolar region. The maximum size and thickness of tissue that can be harvested from the region were 9.3 mm and 4.0 mm, respectively.",
"title": ""
},
{
"docid": "c6ec311353b0872bcc1dfd09abb7632e",
"text": "Deep neural network algorithms are difficult to analyze because they lack structure allowing to understand the properties of underlying transforms and invariants. Multiscale hierarchical convolutional networks are structured deep convolutional networks where layers are indexed by progressively higher dimensional attributes, which are learned from training data. Each new layer is computed with multidimensional convolutions along spatial and attribute variables. We introduce an efficient implementation of such networks where the dimensionality is progressively reduced by averaging intermediate layers along attribute indices. Hierarchical networks are tested on CIFAR image data bases where they obtain comparable precisions to state of the art networks, with much fewer parameters. We study some properties of the attributes learned from these databases.",
"title": ""
},
{
"docid": "eb34879a227b5e3e2374bbb5a85a2c08",
"text": "According to the Taiwan Ministry of Education statistics, about one million graduates each year, some of them will go to countries, high schools or tertiary institutions to continue to attend, and some will be ready to enter the workplace employment. During the course of study, the students' all kinds of excellent performance certificates, score transcripts, diplomas, etc., will become an important reference for admitting new schools or new works. As schools make various awards or diplomas, only the names of the schools and the students are input. Due to the lack of effective anti-forge mechanism, events that cause the graduation certificate to be forged often get noticed. In order to solve the problem of counterfeiting certificates, the digital certificate system based on blockchain technology would be proposed. By the unmodifiable property of blockchain, the digital certificate with anti-counterfeit and verifiability could be made. The procedure of issuing the digital certificate in this system is as follows. First, generate the electronic file of a paper certificate accompanying other related data into the database, meanwhile calculate the electronic file for its hash value. Finally, store the hash value into the block in the chain system. The system will create a related QR-code and inquiry string code to affix to the paper certificate. It will provide the demand unit to verify the authenticity of the paper certificate through mobile phone scanning or website inquiries. Through the unmodifiable properties of the blockchain, the system not only enhances the credibility of various paper-based certificates, but also electronically reduces the loss risks of various types of certificates.",
"title": ""
},
{
"docid": "7e208f65cf33a910cc958ec57bdff262",
"text": "This study proposed to address a new method that could select subsets more efficiently. In addition, the reasons why employers voluntarily turnover were also investigated in order to increase the classification accuracy and to help managers to prevent employers’ turnover. The mixed subset selection used in this study combined Taguchi method and Nearest Neighbor Classification Rules to select subset and analyze the factors to find the best predictor of employer turnover. All the samples used in this study were from industry A, in which the employers left their job during 1st of February, 2001 to 31st of December, 2007, compared with those incumbents. The results showed that through the mixed subset selection method, total 18 factors were found that are important to the employers. In addition, the accuracy of correct selection was 87.85% which was higher than before using this subset selection (80.93%). The new subset selection method addressed in this study does not only provide industries to understand the reasons of employers’ turnover, but also could be a long-term classification prediction for industries. Key-Words: Voluntary Turnover; Subset Selection; Taguchi Methods; Nearest Neighbor Classification Rules; Training pattern",
"title": ""
},
{
"docid": "20c5dfcc5dec2efd1345de1d863bb346",
"text": "An important task of public health officials is to keep track of spreading epidemics, and the locations and speed with which they appear. Furthermore, there is interest in understanding how concerned the population is about a disease outbreak. Twitter can serve as an important data source to provide this information in real time. In this paper, we focus on sentiment classification of Twitter messages to measure the Degree of Concern (DOC) of the Twitter users. In order to achieve this goal, we develop a novel two-step sentiment classification workflow to automatically identify personal tweets and negative tweets. Based on this workflow, we present an Epidemic Sentiment Monitoring System (ESMOS) that provides tools for visualizing Twitter users' concern towards different diseases. The visual concern map and chart in ESMOS can help public health officials to identify the progression and peaks of concern for a disease in space and time, so that appropriate preventive actions can be taken. The DOC measure is based on the sentiment-based classifications. We compare clue-based and different Machine Learning methods to classify sentiments of Twitter users regarding diseases, first into personal and neutral tweets and then into negative from neutral personal tweets. In our experiments, Multinomial Naïve Bayes achieved overall the best results and took significantly less time to build the classifier than other methods.",
"title": ""
},
{
"docid": "2fa5646f8a29de75b476add775ac679f",
"text": "(ABSTRACT) As traditional control schemes, open-loop Hysteresis and closed-loop pulse-width-modulation (PWM) have been used for the switched reluctance motor (SRM) current controller. The Hysteresis controller induces large unpleasant audible noises because it needs to vary the switching frequency to maintain constant Hysteresis current band. In contract, the PWM controller is very quiet but difficult to design proper gains and control bandwidth due to the nonlinear nature of the SRM. In this thesis, the ac small signal modeling technique is proposed for linearization of the SRM model such that a conventional PI controller can be designed accordingly for the PWM current controller. With the linearized SRM model, the duty-cycle to output transfer function can be derived, and the controller can be designed with sufficient stability margins. The proposed PWM controller has been simulated to compare the performance against the conventional Hysteresis controller based system. It was found that through the frequency spectrum analysis, the noise spectra in audible range disappeared with the fixed switching frequency PWM controller, but was pronounced with the conventional Hysteresis controller. A hardware prototype is then implemented with digital signal processor to verify the quiet nature of the PWM controller when running at 20 kHz switching frequency. The experimental results also indicate a stable current loop operation.",
"title": ""
},
{
"docid": "cca9b3cb4a0d6fb8a690f2243cf7abce",
"text": "In this paper, we propose to predict immediacy for interacting persons from still images. A complete immediacy set includes interactions, relative distance, body leaning direction and standing orientation. These measures are found to be related to the attitude, social relationship, social interaction, action, nationality, and religion of the communicators. A large-scale dataset with 10,000 images is constructed, in which all the immediacy measures and the human poses are annotated. We propose a rich set of immediacy representations that help to predict immediacy from imperfect 1-person and 2-person pose estimation results. A multi-task deep recurrent neural network is constructed to take the proposed rich immediacy representation as input and learn the complex relationship among immediacy predictions multiple steps of refinement. The effectiveness of the proposed approach is proved through extensive experiments on the large scale dataset.",
"title": ""
},
{
"docid": "bd37aa47cf495c7ea327caf2247d28e4",
"text": "The purpose of this study is to identify the negative effects of social network sites such as Facebook among Asia Pacific University scholars. The researcher, distributed 152 surveys to students of the chosen university to examine and study the negative effects. Electronic communication is emotionally gratifying but how do such technological distraction impact on academic performance? Because of social media platform’s widespread adoption by university students, there is an interest in how Facebook is related to academic performance. This paper measure frequency of use, participation in activities and time spent preparing for class, in order to know if Facebook affects the performance of students. Moreover, the impact of social network site on academic performance also raised another major concern which is health. Today social network sites are running the future and carrier of students. Social network sites were only an electronic connection between users, but unfortunately it has become an addiction for students. This paper examines the relationship between social network sites and health threat. Lastly, the paper provides a comprehensive analysis of the law and privacy of Facebook. It shows how Facebook users socialize on the site, while they are not aware or misunderstand the risk involved and how their privacy suffers as a result.",
"title": ""
},
{
"docid": "55e9346ae7bcdac1de999534de34eca5",
"text": "Semantic computing and enterprise Linked Data have recently gained traction in enterprises. Although the concept of Enterprise Knowledge Graphs (EKGs) has meanwhile received some attention, a formal conceptual framework for designing such graphs has not yet been developed. By EKG we refer to a semantic network of concepts, properties, individuals and links representing and referencing foundational and domain knowledge relevant for an enterprise. Through the efforts reported in this paper, we aim to bridge the gap between the increasing need for EKGs and the lack of formal methods for realising them. We present a thorough study of the key concepts of knowledge graphs design along with an analysis of the advantages and disadvantages of various design decisions. In particular, we distinguish between two polar approaches towards data fusion, i.e., the unified and the federated approach, describe their benefits and point out shortages.",
"title": ""
},
{
"docid": "457e2f2583a94bf8b6f7cecbd08d7b34",
"text": "We present a fast structure-based ASCII art generation method that accepts arbitrary images (real photograph or hand-drawing) as input. Our method supports not only fixed width fonts, but also the visually more pleasant and computationally more challenging proportional fonts, which allows us to represent challenging images with a variety of structures by characters. We take human perception into account and develop a novel feature extraction scheme based on a multi-orientation phase congruency model. Different from most existing contour detection methods, our scheme does not attempt to remove textures as much as possible. Instead, it aims at faithfully capturing visually sensitive features, including both main contours and textural structures, while suppressing visually insensitive features, such as minor texture elements and noise. Together with a deformation-tolerant image similarity metric, we can generate lively and meaningful ASCII art, even when the choices of character shapes and placement are very limited. A dynamic programming based optimization is proposed to simultaneously determine the optimal proportional-font characters for matching and their optimal placement. Experimental results show that our results outperform state-of-the-art methods in term of visual quality.",
"title": ""
},
{
"docid": "56d6528588a70de9a0dd19bbe5c3e896",
"text": "We are concerned with learning models that generalize well to different unseen domains. We consider a worst-case formulation over data distributions that are near the source domain in the feature space. Only using training data from a single source distribution, we propose an iterative procedure that augments the dataset with examples from a fictitious target domain that is \"hard\" under the current model. We show that our iterative scheme is an adaptive data augmentation method where we append adversarial examples at each iteration. For softmax losses, we show that our method is a data-dependent regularization scheme that behaves differently from classical regularizers that regularize towards zero (e.g., ridge or lasso). On digit recognition and semantic segmentation tasks, our method learns models improve performance across a range of a priori unknown target domains.",
"title": ""
},
{
"docid": "28a6111c13e9554bf32533f13e56e92b",
"text": "OBJECTIVES\nTo better categorize the epidemiologic profile, clinical features, and disease associations of loose anagen hair syndrome (LAHS) compared with other forms of childhood alopecia.\n\n\nDESIGN\nRetrospective survey.\n\n\nSETTING\nAcademic pediatric dermatology practice. Patients Three hundred seventy-four patients with alopecia referred from July 1, 1997, to June 31, 2007.\n\n\nMAIN OUTCOME MEASURES\nEpidemiologic data for all forms of alopecia were ascertained, such as sex, age at onset, age at the time of evaluation, and clinical diagnosis. Patients with LAHS were further studied by the recording of family history, disease associations, hair-pull test or biopsy results, hair color, laboratory test result abnormalities, initial treatment, and involvement of eyelashes, eyebrows, and nails.\n\n\nRESULTS\nApproximately 10% of all children with alopecia had LAHS. The mean age (95% confidence interval) at onset differed between patients with LAHS (2.8 [1.2-4.3] years) vs patients without LAHS (7.1 [6.6-7.7] years) (P < .001), with 3 years being the most common age at onset for patients with LAHS. All but 1 of 37 patients with LAHS were female. The most common symptom reported was thin, sparse hair. Family histories were significant for LAHS (n = 1) and for alopecia areata (n = 3). In 32 of 33 patients, trichograms showed typical loose anagen hairs. Two children had underlying genetic syndromes. No associated laboratory test result abnormalities were noted among patients who underwent testing.\n\n\nCONCLUSIONS\nLoose anagen hair syndrome is a common nonscarring alopecia in young girls with a history of sparse or fine hair. Before ordering extensive blood testing in young girls with diffusely thin hair, it is important to perform a hair-pull test, as a trichogram can be instrumental in the confirmation of a diagnosis of LAHS.",
"title": ""
},
{
"docid": "f2521fbfd566fcf31b5810695e748ba0",
"text": "A facile approach for coating red fluoride phosphors with a moisture-resistant alkyl phosphate layer with a thickness of 50-100 nm is reported. K2 SiF6 :Mn(4+) particles were prepared by co-precipitation and then coated by esterification of P2 O5 with alcohols (methanol, ethanol, and isopropanol). This route was adopted to encapsulate the prepared phosphors using transition-metal ions as cross-linkers between the alkyl phosphate moieties. The coated phosphor particles exhibited a high water tolerance and retained approximately 87 % of their initial external quantum efficiency after aging under high-humidity (85 %) and high-temperature (85 °C) conditions for one month. Warm white-light-emitting diodes that consisted of blue InGaN chips, the prepared K2 SiF6 :Mn(4+) phosphors, and either yellow Y3 Al5 O12 :Ce(3+) phosphors or green β-SiAlON: Eu(2+) phosphors showed excellent color rendition.",
"title": ""
},
{
"docid": "bd7841688d039371f85d34f982130105",
"text": "Behavioral skills or policies for autonomous agents are conventionally learned from reward functions, via reinforcement learning, or from demonstrations, via imitation learning. However, both modes of task specification have their disadvantages: reward functions require manual engineering, while demonstrations require a human expert to be able to actually perform the task in order to generate the demonstration. Instruction following from natural language instructions provides an appealing alternative: in the same way that we can specify goals to other humans simply by speaking or writing, we would like to be able to specify tasks for our machines. However, a single instruction may be insufficient to fully communicate our intent or, even if it is, may be insufficient for an autonomous agent to actually understand how to perform the desired task. In this work, we propose an interactive formulation of the task specification problem, where iterative language corrections are provided to an autonomous agent, guiding it in acquiring the desired skill. Our proposed language-guided policy learning algorithm can integrate an instruction and a sequence of corrections to acquire new skills very quickly. In our experiments, we show that this method can enable a policy to follow instructions and corrections for simulated navigation and manipulation tasks, substantially outperforming direct, non-interactive instruction following.",
"title": ""
},
{
"docid": "1b85bf53970400f6005623382f29ce60",
"text": "An approach of rapidly computing the projective width of lanes is presented to predict the projective positions and widths of lanes. The Lane Marking Extraction Finite State Machine is designed to extract points with features of lane markings in the image, and a cubic B-spline is adopted to conduct curve fitting to reconstruct road geometry. A statistical search algorithm is also proposed to correctly and adaptively determine thresholds under various kinds of illumination conditions. Furthermore, the parameters of the camera in a moving car may change with the vibration, so a dynamic calibration algorithm is applied to calibrate camera parameters and lane widths with the information of lane projection. Moreover, a fuzzy logic is applied to determine the situation of occlusion. Finally, a region-of-interest determination strategy is developed to reduce the search region and to make the detection more robust with respect to the occlusion on the lane markings or complicated changes of curves and road boundaries.",
"title": ""
},
{
"docid": "3bca1dd8dc1326693f5ebbe0eaf10183",
"text": "This paper presents a novel multi-way multi-stage power divider design method based on the theory of small reflections. Firstly, the application of the theory of small reflections is extended from transmission line to microwave network. Secondly, an explicit closed-form analytical formula of the input reflection coefficient, which consists of the scattering parameters of power divider elements and the lengths of interconnection lines between each element, is derived. Thirdly, the proposed formula is applied to determine the lengths of interconnection lines. A prototype of a 16-way 4-stage power divider working at 4 GHz is designed and fabricated. Both the simulation and measurement results demonstrate the validity of the proposed method.",
"title": ""
},
{
"docid": "ab927f80c37446fd649cd75f9bc15c1c",
"text": "In this work, we ask the following question: Can visual analogies, learned in an unsupervised way, be used in order to transfer knowledge between pairs of games and even play one game using an agent trained for another game? We attempt to answer this research question by creating visual analogies between a pair of games: a source game and a target game. For example, given a video frame in the target game, we map it to an analogous state in the source game and then attempt to play using a trained policy learned for the source game. We demonstrate convincing visual mapping between four pairs of games (eight mappings), which are used to evaluate three transfer learning approaches.",
"title": ""
}
] |
scidocsrr
|
3607293589205489da619f7cc6a8cc23
|
Deep Convolutional Neural Networks for Anomaly Event Classification on Distributed Systems
|
[
{
"docid": "dbb9db490ae3c1bb91d22ecd8d679270",
"text": "The growing computational and storage needs of several scientific applications mandate the deployment of extreme-scale parallel machines, such as IBM's BlueGene/L, which can accommodate as many as 128K processors. In this paper, we present our experiences in collecting and filtering error event logs from a 8192 processor BlueGene/L prototype at IBM Rochester, which is currently ranked #8 in the Top-500 list. We analyze the logs collected from this machine over a period of 84 days starting from August 26, 2004. We perform a three-step filtering algorithm on these logs: extracting and categorizing failure events; temporal filtering to remove duplicate reports from the same location; and finally coalescing failure reports of the same error across different locations. Using this approach, we can substantially compress these logs, removing over 99.96% of the 828,387 original entries, and more accurately portray the failure occurrences on this system.",
"title": ""
},
{
"docid": "4dc9360837b5793a7c322f5b549fdeb1",
"text": "Today, event logs contain vast amounts of data that can easily overwhelm a human. Therefore, mining patterns from event logs is an important system management task. This paper presents a novel clustering algorithm for log file data sets which helps one to detect frequent patterns from log files, to build log file profiles, and to identify anomalous log file lines. Keywords—system monitoring, data mining, data clustering",
"title": ""
}
] |
[
{
"docid": "86c0b7d49d0cecc3a2554b85ec08f3ed",
"text": "Advanced driver assistance systems and the environment perception for autonomous vehicles will benefit from systems robustly tracking objects while simultaneously estimating their shape. Unlike many recent approaches that represent object shapes by approximated models such as boxes or ellipses, this paper proposes an algorithm that estimates a free-formed shape derived from raw laser measurements. For that purpose local occupancy grid maps are used to model arbitrary object shapes. Beside shape estimation the algorithm keeps a stable reference point on the object. This will be important to avoid apparent motion if the observable part of an object contour changes. The algorithm is part of a perception system and is tested with two 4-layer laser scanners.",
"title": ""
},
{
"docid": "807dedfe0c5d71ac87bb7fed194c47be",
"text": "DRAM memory is a major contributor for the total power consumption in modern computing systems. Consequently, power reduction for DRAM memory is critical to improve system-level power efficiency. Fine-grained DRAM architecture [1, 2] has been proposed to reduce the activation/ precharge power. However, those prior work either incurs significant performance degradation or introduces large area overhead. In this paper, we propose a novel memory architecture Half-DRAM, in which the DRAM array is reorganized to enable only half of a row being activated. The half-row activation can effectively reduce activation power and meanwhile sustain the full bandwidth one bank can provide. In addition, the half-row activation in Half-DRAM relaxes the power constraint in DRAM, and opens up opportunities for further performance gain. Furthermore, two half-row accesses can be issued in parallel by integrating the sub-array level parallelism to improve the memory level parallelism. The experimental results show that Half-DRAM can achieve both significant performance improvement and power reduction, with negligible design overhead",
"title": ""
},
{
"docid": "ec87def0b881822e6a3df6c523c0eec5",
"text": "OH-PBDEs have been reported to be more potent than the postulated precursor PBDEs or corresponding MeO-PBDEs. However, there are contradictory reports for transformation of these compounds in organisms, particularly, for biotransformation of OH-PBDEs and MeO-PBDEs, only one study reported transformation of 6-OH-BDE-47 and 6-MeO-BDE-47 in Japanese medaka. In present study zebrafish (Danio rerio) were exposed to BDE-47, 6-OH-BDE-47, 6-MeO-BDE-47, 2'-OH-BDE-28 and 2'-MeO-BDE-28 in the diet for 20 d. Concentrations of each exposed compound were measured in eggs collected on days 0, 5, 10, 15 or 20. After 20 d exposure, concentrations of precursor and biotransformation products in liver and liver-free residual carcass were measured by use of GC-MS/MS. Total mass of the five compounds in bodies of adults were: 2'-MeO-BDE-28 ∼ 6-MeO-BDE-47>BDE-47>2'-OH-BDE-28>6-OH-BDE-47. MeO-PBDEs were also accumulated more into parental fish body than in liver, while OH-PBDEs accumulated in liver more than in liver-free residual carcass. Concentrations in liver of males were greater than those of females. This result suggests sex-related differences in accumulation. Ratios between concentration in eggs and liver (E/L) were: 2.9, 1.7, 0.8, 0.4 and 0.1 for 6-MeO-BDE-47, BDE-47, 6-OH-BDE-47, 2'-MeO-BDE-28 and 2'-OH-BDE-28, respectively. This result suggests transfer from adult females to eggs. BDE-47 was not transformed into OH-PBDEs or MeO-PBDEs. Inter-conversions of 6-OH-BDE-47 and 6-MeO-BDE-47, 2'-OH-BDE-28 and 2'-MeO-BDE-28 were observed, with metabolite/precursor concentration ratios for 6-OH-BDE-47, 6-MeO-BDE-47, 2'-OH-BDE-28 and 2'-MeO-BDE-28 being 3.8%, 14.6%, 2.9% and 76.0%, respectively. Congener-specific differences were observed in distributions between liver and carcass, maternal transfer and transformation. The two MeO-PBDEs were accumulated into adults, transferred to eggs, and were transformed to the structural similar OH-PBDEs, which might be more toxic. BDE-47 was accumulated into adults and transferred from females to eggs, but not transformed to MeO-PBDEs and/or OH-PBDEs. Accumulation of OH-PBDEs into adults as well as rates of transformation of OH-PBDEs to MeO-PBDEs were all several orders of magnitude less. Thus, MeO-PBDEs are likely to present more of a risk in the environment.",
"title": ""
},
{
"docid": "2c39f8c440a89f72db8814e633cb5c04",
"text": "There is increasing evidence that gardening provides substantial human health benefits. However, no formal statistical assessment has been conducted to test this assertion. Here, we present the results of a meta-analysis of research examining the effects of gardening, including horticultural therapy, on health. We performed a literature search to collect studies that compared health outcomes in control (before participating in gardening or non-gardeners) and treatment groups (after participating in gardening or gardeners) in January 2016. The mean difference in health outcomes between the two groups was calculated for each study, and then the weighted effect size determined both across all and sets of subgroup studies. Twenty-two case studies (published after 2001) were included in the meta-analysis, which comprised 76 comparisons between control and treatment groups. Most studies came from the United States, followed by Europe, Asia, and the Middle East. Studies reported a wide range of health outcomes, such as reductions in depression, anxiety, and body mass index, as well as increases in life satisfaction, quality of life, and sense of community. Meta-analytic estimates showed a significant positive effect of gardening on the health outcomes both for all and sets of subgroup studies, whilst effect sizes differed among eight subgroups. Although Egger's test indicated the presence of publication bias, significant positive effects of gardening remained after adjusting for this using trim and fill analysis. This study has provided robust evidence for the positive effects of gardening on health. A regular dose of gardening can improve public health.",
"title": ""
},
{
"docid": "094f1e41fde1392cbdc3e1956cf2fc53",
"text": "This paper investigates the characteristics of the active and reactive power sharing in a parallel inverters system under different system impedance conditions. The analyses conclude that the conventional droop method cannot achieve efficient power sharing for the case of a system with complex impedance condition. To achieve the proper power balance and minimize the circulating current in the different impedance situations, a novel droop controller that considers the impact of complex impedance is proposed in this paper. This controller can simplify the coupled active and reactive power relationships, which are caused by the complex impedance in the parallel system. In addition, a virtual complex impedance loop is included in the proposed controller to minimize the fundamental and harmonic circulating current that flows in the parallel system. Compared to the other methods, the proposed controller can achieve accurate power sharing, offers efficient dynamic performance, and is more adaptive to different line impedance situations. Simulation and experimental results are presented to prove the validity and the improvements achieved by the proposed controller.",
"title": ""
},
{
"docid": "05eaf278ed39cd6a8522f812589388c6",
"text": "Several recent software systems have been designed to obtain novel annotation of cross-referencing text fragments and Wikipedia pages. Tagme is state of the art in this setting and can accurately manage short textual fragments (such as snippets of search engine results, tweets, news, or blogs) on the fly.",
"title": ""
},
{
"docid": "2f8361f2943ff90bf98c6b8a207086c4",
"text": "Real-life bugs are successful because of their unfailing ability to adapt. In particular this applies to their ability to adapt to strategies that are meant to eradicate them as a species. Software bugs have some of these same traits. We will discuss these traits, and consider what we can do about them.",
"title": ""
},
{
"docid": "c05b6720cdfdf6170ccce6486d485dc0",
"text": "The naturalness of warps is gaining extensive attention in image stitching. Recent warps, such as SPHP and AANAP, use global similarity warps to mitigate projective distortion (which enlarges regions); however, they necessarily bring in perspective distortion (which generates inconsistencies). In this paper, we propose a novel quasi-homography warp, which effectively balances the perspective distortion against the projective distortion in the non-overlapping region to create a more natural-looking panorama. Our approach formulates the warp as the solution of a bivariate system, where perspective distortion and projective distortion are characterized as slope preservation and scale linearization, respectively. Because our proposed warp only relies on a global homography, it is thus totally parameter free. A comprehensive experiment shows that a quasi-homography warp outperforms some state-of-the-art warps in urban scenes, including homography, AutoStitch and SPHP. A user study demonstrates that it wins most users’ favor, compared to homography and SPHP.",
"title": ""
},
{
"docid": "77326d21f3bfdbf0d6c38c2cde871bf5",
"text": "There have been a number of linear, feature-based models proposed by the information retrieval community recently. Although each model is presented differently, they all share a common underlying framework. In this paper, we explore and discuss the theoretical issues of this framework, including a novel look at the parameter space. We then detail supervised training algorithms that directly maximize the evaluation metric under consideration, such as mean average precision. We present results that show training models in this way can lead to significantly better test set performance compared to other training methods that do not directly maximize the metric. Finally, we show that linear feature-based models can consistently and significantly outperform current state of the art retrieval models with the correct choice of features.",
"title": ""
},
{
"docid": "aee250663a05106c4c0fad9d0f72828c",
"text": "Robust and accurate visual tracking is one of the most challenging computer vision problems. Due to the inherent lack of training data, a robust approach for constructing a target appearance model is crucial. Recently, discriminatively learned correlation filters (DCF) have been successfully applied to address this problem for tracking. These methods utilize a periodic assumption of the training samples to efficiently learn a classifier on all patches in the target neighborhood. However, the periodic assumption also introduces unwanted boundary effects, which severely degrade the quality of the tracking model. We propose Spatially Regularized Discriminative Correlation Filters (SRDCF) for tracking. A spatial regularization component is introduced in the learning to penalize correlation filter coefficients depending on their spatial location. Our SRDCF formulation allows the correlation filters to be learned on a significantly larger set of negative training samples, without corrupting the positive samples. We further propose an optimization strategy, based on the iterative Gauss-Seidel method, for efficient online learning of our SRDCF. Experiments are performed on four benchmark datasets: OTB-2013, ALOV++, OTB-2015, and VOT2014. Our approach achieves state-of-the-art results on all four datasets. On OTB-2013 and OTB-2015, we obtain an absolute gain of 8.0% and 8.2% respectively, in mean overlap precision, compared to the best existing trackers.",
"title": ""
},
{
"docid": "8fe823702191b4a56defaceee7d19db6",
"text": "We propose a method of stacking multiple long short-term memory (LSTM) layers for modeling sentences. In contrast to the conventional stacked LSTMs where only hidden states are fed as input to the next layer, our architecture accepts both hidden and memory cell states of the preceding layer and fuses information from the left and the lower context using the soft gating mechanism of LSTMs. Thus the proposed stacked LSTM architecture modulates the amount of information to be delivered not only in horizontal recurrence but also in vertical connections, from which useful features extracted from lower layers are effectively conveyed to upper layers. We dub this architecture Cell-aware Stacked LSTM (CAS-LSTM) and show from experiments that our models achieve state-of-the-art results on benchmark datasets for natural language inference, paraphrase detection, and sentiment classification.",
"title": ""
},
{
"docid": "e11c486975fb5c277f39f131de87f399",
"text": "OBJECTIVES\nThere is a clinical impression of dissatisfaction with treatment for hypothyroidism among some patients. Psychometric properties of the new ThyTSQ questionnaire are evaluated. The questionnaire, measuring patients' satisfaction with their treatment for hypothyroidism, has two parts: the seven-item ThyTSQ-Present and four-item ThyTSQ-Past, measuring satisfaction with present and past treatment, respectively, on scales from 6 (very satisfied) to 0 (very dissatisfied).\n\n\nMETHODS\nThe questionnaire was completed once by 103 adults with hypothyroidism, age (mean [SD]) 55.2 [14.4], range 23-84 years (all treated with thyroxine).\n\n\nRESULTS\nCompletion rates were very high. Internal consistency reliability was excellent for both ThyTSQ-Present and ThyTSQ-Past (Cronbach's alpha = 0.91 and 0.90, respectively [N = 102 and 103]). Principal components analyses indicated that the seven items of the ThyTSQ-Present and the four items of the ThyTSQ-Past could be summed into separate Present Satisfaction and Past Satisfaction total scores. Mean Present Satisfaction was 32.5 (7.8), maximum range 0-42, and mean Past Satisfaction was 17.5 (6.1), maximum range 0-24, indicating considerable room for improvement. Patients were least satisfied with their present understanding of their condition, mean 4.2 (1.7) (maximum range 0-6), and with information provided about hypothyroidism around the time of diagnosis, mean 3.9 (1.8) (maximum range 0-6).\n\n\nCONCLUSIONS\nThe ThyTSQ is highly acceptable to patients with hypothyroidism (excellent completion rates), and has established internal consistency reliability. It will assist health professionals in considering psychological outcomes when treating people with hypothyroidism, and is suitable for clinical trials and routine clinical monitoring.",
"title": ""
},
{
"docid": "0f325e4fe9faf6c43a68ea2721b85f58",
"text": "Prosopis juliflora is characterized by distinct and profuse growth even in nutritionally poor soil and environmentally stressed conditions and is believed to harbor some novel heavy metal-resistant bacteria in the rhizosphere and endosphere. This study was performed to isolate and characterize Cr-resistant bacteria from the rhizosphere and endosphere of P. juliflora growing on the tannery effluent contaminated soil. A total of 5 and 21 bacterial strains were isolated from the rhizosphere and endosphere, respectively, and were shown to tolerate Cr up to 3000 mg l(-1). These isolates also exhibited tolerance to other toxic heavy metals such as, Cd, Cu, Pb, and Zn, and high concentration (174 g l(-1)) of NaCl. Moreover, most of the isolated bacterial strains showed one or more plant growth-promoting activities. The phylogenetic analysis of the 16S rRNA gene showed that the predominant species included Bacillus, Staphylococcus and Aerococcus. As far as we know, this is the first report analyzing rhizo- and endophytic bacterial communities associated with P. juliflora growing on the tannery effluent contaminated soil. The inoculation of three isolates to ryegrass (Lolium multiflorum L.) improved plant growth and heavy metal removal from the tannery effluent contaminated soil suggesting that these bacteria could enhance the establishment of the plant in contaminated soil and also improve the efficiency of phytoremediation of heavy metal-degraded soils.",
"title": ""
},
{
"docid": "107960c3c2e714804133f5918ac03b74",
"text": "This paper reports on a data-driven motion planning approach for interaction-aware, socially-compliant robot navigation among human agents. Autonomous mobile robots navigating in workspaces shared with human agents require motion planning techniques providing seamless integration and smooth navigation in such. Smooth integration in mixed scenarios calls for two abilities of the robot: predicting actions of others and acting predictably for them. The former requirement requests trainable models of agent behaviors in order to accurately forecast their actions in the future, taking into account their reaction on the robot's decisions. A human-like navigation style of the robot facilitates other agents-most likely not aware of the underlying planning technique applied-to predict the robot motion vice versa, resulting in smoother joint navigation. The approach presented in this paper is based on a feature-based maximum entropy model and is able to guide a robot in an unstructured, real-world environment. The model is trained to predict joint behavior of heterogeneous groups of agents from onboard data of a mobile platform. We evaluate the benefit of interaction-aware motion planning in a realistic public setting with a total distance traveled of over 4 km. Interestingly the motion models learned from human-human interaction did not hold for robot-human interaction, due to the high attention and interest of pedestrians in testing basic braking functionality of the robot.",
"title": ""
},
{
"docid": "956771bbfb0610a28090de1678c23774",
"text": "Finding data governance practices that maintain a balance between value creation and risk exposure is the new organizational imperative for unlocking competitive advantage and maximizing value from the application of big data. The first Web extra at http://youtu.be/B2RlkoNjrzA is a video in which author Paul Tallon expands on his article \"Corporate Governance of Big Data: Perspectives on Value, Risk, and Cost\" and discusses how finding data governance practices that maintain a balance between value creation and risk exposure is the new organizational imperative for unlocking competitive advantage and maximizing value from the application of big data. The second Web extra at http://youtu.be/g0RFa4swaf4 is a video in which author Paul Tallon discusses the supplementary material to his article \"Corporate Governance of Big Data: Perspectives on Value, Risk, and Cost\" and how projection models can help individuals responsible for data handling plan for and understand big data storage issues.",
"title": ""
},
{
"docid": "3c44f2bf1c8a835fb7b86284c0b597cd",
"text": "This paper explores some of the key electromagnetic design aspects of a synchronous reluctance motor that is equipped with single-tooth windings (i.e., fractional slot concentrated windings). The analyzed machine, a 6-slot 4-pole motor, utilizes a segmented stator core structure for ease of coil winding, pre-assembly, and facilitation of high slot fill factors (~60%). The impact on the motors torque producing capability and its power factor of these inter-segment air gaps between the stator segments is investigated through 2-D finite element analysis (FEA) studies where it is shown that they have a low impact. From previous studies, torque ripple is a known issue with this particular slot–pole combination of synchronous reluctance motor, and the use of two different commercially available semi-magnetic slot wedges is investigated as a method to improve torque quality. An analytical analysis of continuous rotor skewing is also investigated as an attempt to reduce the torque ripple. Finally, it is shown that through a combination of 2-D and 3-D FEA studies in conjunction with experimentally derived results on a prototype machine that axial fringing effects cannot be ignored when predicting the q-axis reactance in such machines. A comparison of measured orthogonal axis flux linkages/reactances with 3-D FEA studies is presented for the first time.",
"title": ""
},
{
"docid": "fba109e4627d4bb580d07368e3c00cc1",
"text": "-Wheeled-tracked vehicles are undoubtedly the most popular means of transportation. However, these vehicles are mainly suitable for relatively flat terrain. Legged vehicles, on the other hand, have the potential to handle wide variety of terrain. Robug IIs is a legged climbing robot designed to work in relatively unstructured and rough terrain. It has the capability of walking, climbing vertical surfaces and performing autonomous floor to wall transfer. The sensing technique used in Robug IIs is mainly tactile and ultrasonic sensing. A set of reflexive rules have been developed for the robot to react to the uncertainty of the working environment. The robot also has the intelligence to seek and verify its own foot-holds. It is envisaged that the main application of robot is for remote inspection and maintenance in hazardous environments. Keywords—Legged robot, climbing service robot, insect inspired robot, pneumatic control, fuzzy logic.",
"title": ""
},
{
"docid": "1b625a1136bec100f459a39b9b980575",
"text": "This paper considers the sparse eigenvalue problem, which is to extract dominant (largest) sparse eigenvectors with at most k non-zero components. We propose a simple yet effective solution called truncated power method that can approximately solve the underlying nonconvex optimization problem. A strong sparse recovery result is proved for the truncated power method, and this theory is our key motivation for developing the new algorithm. The proposed method is tested on applications such as sparse principal component analysis and the densest k-subgraph problem. Extensive experiments on several synthetic and real-world data sets demonstrate the competitive empirical performance of our method.",
"title": ""
},
{
"docid": "213149a116dabdd43c51707b07bc06b4",
"text": "This work introduces the Green Vehicle Routing Problem (GVRP). The GVRP is an extension of the well-known vehicle routing problem (VRP). Moreover, the GVRP includes an objective function that minimizes weighted distance. Minimizing weighted distance reduces fuel consumption and consequently CO2 emissions. Therefore, the GVRP is more environmentally friendly than traditional versions of the VRP. This work presents a Mixed Integer Linear Program formulation for the problem and a Local Search algorithm to find local optima. Also, the problem is illustrated using a small problem instance.",
"title": ""
},
{
"docid": "52e1c2f6df368e9bed3f5532e14e75b6",
"text": "Fast visual recognition in the mammalian cortex seems to be a hierarchical process by which the representation of the visual world is transformed in multiple stages from low-level retinotopic features to high-level, global and invariant features, and to object categories. Every single step in this hierarchy seems to be subject to learning. How does the visual cortex learn such hierarchical representations by just looking at the world? How could computers learn such representations from data? Computer vision models that are weakly inspired by the visual cortex will be described. A number of unsupervised learning algorithms to train these models will be presented, which are based on the sparse auto-encoder concept. The effectiveness of these algorithms for learning invariant feature hierarchies will be demonstrated with a number of practical tasks such as scene parsing, pedestrian detection, and object classification.",
"title": ""
}
] |
scidocsrr
|
d98c577fad1ae62fd3895ed2f6ac8d1f
|
Standardization for evaluating software-defined networking controllers
|
[
{
"docid": "3e066a6f96e74963046c9c24239196b4",
"text": "This paper presents an independent comprehensive analysis of the efficiency indexes of popular open source SDN/OpenFlow controllers (NOX, POX, Beacon, Floodlight, MuL, Maestro, Ryu). The analysed indexes include performance, scalability, reliability, and security. For testing purposes we developed the new framework called hcprobe. The test bed and the methodology we used are discussed in detail so that everyone could reproduce our experiments. The result of the evaluation show that modern SDN/OpenFlow controllers are not ready to be used in production and have to be improved in order to increase all above mentioned characteristics.",
"title": ""
}
] |
[
{
"docid": "3604f1ef7df6e0c224bd19034d7c0929",
"text": "BACKGROUND\nMost individuals at risk for developing cardiovascular disease (CVD) can reduce risk factors through diet and exercise before resorting to drug treatment. The effect of a combination of resistance training with vegetable-based (soy) versus animal-based (whey) protein supplementation on CVD risk reduction has received little study. The study's purpose was to examine the effects of 12 weeks of resistance exercise training with soy versus whey protein supplementation on strength gains, body composition and serum lipid changes in overweight, hyperlipidemic men.\n\n\nMETHODS\nTwenty-eight overweight, male subjects (BMI 25-30) with serum cholesterol >200 mg/dl were randomly divided into 3 groups (placebo (n = 9), and soy (n = 9) or whey (n = 10) supplementation) and participated in supervised resistance training for 12 weeks. Supplements were provided in a double blind fashion.\n\n\nRESULTS\nAll 3 groups had significant gains in strength, averaging 47% in all major muscle groups and significant increases in fat free mass (2.6%), with no difference among groups. Percent body fat and waist-to-hip ratio decreased significantly in all 3 groups an average of 8% and 2%, respectively, with no difference among groups. Total serum cholesterol decreased significantly, again with no difference among groups.\n\n\nCONCLUSION\nParticipation in a 12 week resistance exercise training program significantly increased strength and improved both body composition and serum cholesterol in overweight, hypercholesterolemic men with no added benefit from protein supplementation.",
"title": ""
},
{
"docid": "7f24dc012f65770b391d182c525fdaff",
"text": "This paper focuses on the task of knowledge-based question answering (KBQA). KBQA aims to match the questions with the structured semantics in knowledge base. In this paper, we propose a two-stage method. Firstly, we propose a topic entity extraction model (TEEM) to extract topic entities in questions, which does not rely on hand-crafted features or linguistic tools. We extract topic entities in questions with the TEEM and then search the knowledge triples which are related to the topic entities from the knowledge base as the candidate knowledge triples. Then, we apply Deep Structured Semantic Models based on convolutional neural network and bidirectional long short-term memory to match questions and predicates in the candidate knowledge triples. To obtain better training dataset, we use an iterative approach to retrieve the knowledge triples from the knowledge base. The evaluation result shows that our system achieves an AverageF1 measure of 79.57% on test dataset.",
"title": ""
},
{
"docid": "028070222acb092767aadfdd6824d0df",
"text": "The autism spectrum disorders (ASDs) are a group of conditions characterized by impairments in reciprocal social interaction and communication, and the presence of restricted and repetitive behaviours. Individuals with an ASD vary greatly in cognitive development, which can range from above average to intellectual disability. Although ASDs are known to be highly heritable (∼90%), the underlying genetic determinants are still largely unknown. Here we analysed the genome-wide characteristics of rare (<1% frequency) copy number variation in ASD using dense genotyping arrays. When comparing 996 ASD individuals of European ancestry to 1,287 matched controls, cases were found to carry a higher global burden of rare, genic copy number variants (CNVs) (1.19 fold, P = 0.012), especially so for loci previously implicated in either ASD and/or intellectual disability (1.69 fold, P = 3.4 × 10-4). Among the CNVs there were numerous de novo and inherited events, sometimes in combination in a given family, implicating many novel ASD genes such as SHANK2, SYNGAP1, DLGAP2 and the X-linked DDX53–PTCHD1 locus. We also discovered an enrichment of CNVs disrupting functional gene sets involved in cellular proliferation, projection and motility, and GTPase/Ras signalling. Our results reveal many new genetic and functional targets in ASD that may lead to final connected pathways.",
"title": ""
},
{
"docid": "5cc3d79d7bd762e8cfd9df658acae3fc",
"text": "With almost daily improvements in capabilities of artificial intelligence it is more important than ever to develop safety software for use by the AI research community. Building on our previous work on AI Containment Problem we propose a number of guidelines which should help AI safety researchers to develop reliable sandboxing software for intelligent programs of all levels. Such safety container software will make it possible to study and analyze intelligent artificial agent while maintaining certain level of safety against information leakage, social engineering attacks and cyberattacks from within the container.",
"title": ""
},
{
"docid": "21324c71d70ca79d2f2c7117c759c915",
"text": "The wide-spread of social media provides unprecedented sources of written language that can be used to model and infer online demographics. In this paper, we introduce a novel visual text analytics system, DemographicVis, to aid interactive analysis of such demographic information based on user-generated content. Our approach connects categorical data (demographic information) with textual data, allowing users to understand the characteristics of different demographic groups in a transparent and exploratory manner. The modeling and visualization are based on ground truth demographic information collected via a survey conducted on Reddit.com. Detailed user information is taken into our modeling process that connects the demographic groups with features that best describe the distinguishing characteristics of each group. Features including topical and linguistic are generated from the user-generated contents. Such features are then analyzed and ranked based on their ability to predict the users' demographic information. To enable interactive demographic analysis, we introduce a web-based visual interface that presents the relationship of the demographic groups, their topic interests, as well as the predictive power of various features. We present multiple case studies to showcase the utility of our visual analytics approach in exploring and understanding the interests of different demographic groups. We also report results from a comparative evaluation, showing that the DemographicVis is quantitatively superior or competitive and subjectively preferred when compared to a commercial text analysis tool.",
"title": ""
},
{
"docid": "d156813b45cb419d86280ee2947b6cde",
"text": "Within the realm of service robotics, researchers have placed a great amount of effort into learning motions and manipulations for task execution by robots. The task of robot learning is very broad, as it involves many tasks such as object detection, action recognition, motion planning, localization, knowledge representation and retrieval, and the intertwining of computer vision and machine learning techniques. In this paper, we focus on how knowledge can be gathered, represented, and reproduced to solve problems as done by researchers in the past decades. We discuss the problems which have existed in robot learning and the solutions, technologies or developments (if any) which have contributed to solving them. Specifically, we look at three broad categories involved in task representation and retrieval for robotics: 1) activity recognition from demonstrations, 2) scene understanding and interpretation, and 3) task representation in robotics datasets and networks. Within each section, we discuss major breakthroughs and how their methods address present issues in robot learning and manipulation.",
"title": ""
},
{
"docid": "a74880697c58a2c4cb84ef1626344316",
"text": "This article provides an overview of contemporary and forward looking inter-cell interference coordination techniques for 4G OFDM systems with a specific emphasis on implementations for LTE. Viable approaches include the use of power control, opportunistic spectrum access, intra and inter-base station interference cancellation, adaptive fractional frequency reuse, spatial antenna techniques such as MIMO and SDMA, and adaptive beamforming, as well as recent innovations in decoding algorithms. The applicability, complexity, and performance gains possible with each of these techniques based on simulations and empirical measurements will be highlighted for specific cellular topologies relevant to LTE macro, pico, and femto deployments for both standalone and overlay networks.",
"title": ""
},
{
"docid": "8165a77b36b7c7dd26e5f8223e2564a7",
"text": "A novel design method of a wideband dual-polarized antenna is presented by using shorted dipoles, integrated baluns, and crossed feed lines. Simulation and equivalent circuit analysis of the antenna are given. To validate the design method, an antenna prototype is designed, optimized, fabricated, and measured. Measured results verify that the proposed antenna has an impedance bandwidth of 74.5% (from 1.69 to 3.7 GHz) for VSWR < 1.5 at both ports, and the isolation between the two ports is over 30 dB. Stable gain of 8–8.7 dBi and half-power beamwidth (HPBW) of 65°–70° are obtained for 2G/3G/4G base station frequency bands (1.7–2.7 GHz). Compared to the other reported dual-polarized dipole antennas, the presented antenna achieves wide impedance bandwidth, high port isolation, stable antenna gain, and HPBW with a simple structure and compact size.",
"title": ""
},
{
"docid": "0a625d5f0164f7ed987a96510c1b6092",
"text": "We present a method that learns to answer visual questions by selecting image regions relevant to the text-based query. Our method maps textual queries and visual features from various regions into a shared space where they are compared for relevance with an inner product. Our method exhibits significant improvements in answering questions such as \"what color,\" where it is necessary to evaluate a specific location, and \"what room,\" where it selectively identifies informative image regions. Our model is tested on the recently released VQA [1] dataset, which features free-form human-annotated questions and answers.",
"title": ""
},
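The method above maps the textual query and per-region visual features into a shared space and measures relevance with an inner product. A minimal numpy sketch of that scoring step follows; the projection matrices, feature dimensions and softmax weighting over regions are illustrative assumptions rather than the authors' exact architecture.

```python
import numpy as np

def region_relevance(question_vec, region_feats, W_q, W_v):
    """Score image regions against a question in a shared embedding space.

    question_vec: (d_q,) text feature for the question
    region_feats: (n_regions, d_v) visual features, one row per region
    W_q, W_v: projection matrices into a shared d-dimensional space
    Returns softmax-normalised relevance weights over the regions.
    """
    q = W_q @ question_vec                    # (d,)
    V = region_feats @ W_v.T                  # (n_regions, d)
    scores = V @ q                            # inner products, (n_regions,)
    scores -= scores.max()                    # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()
    return weights

# Toy usage with random features (dimensions are arbitrary placeholders).
rng = np.random.default_rng(0)
q_vec = rng.normal(size=300)                  # e.g. averaged word embeddings
regions = rng.normal(size=(14, 2048))         # e.g. CNN features for 14 regions
W_q = rng.normal(size=(512, 300)) * 0.01
W_v = rng.normal(size=(512, 2048)) * 0.01
print(region_relevance(q_vec, regions, W_q, W_v).round(3))
```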
{
"docid": "f6362a62b69999bdc3d9f681b68842fc",
"text": "Women with breast cancer, whether screen detected or symptomatic, have both mammography and ultrasound for initial imaging assessment. Unlike X-ray or magnetic resonance, which produce an image of the whole breast, ultrasound provides comparatively limited 2D or 3D views located around the lesions. Combining different modalities is an essential task for accurate diagnosis and simulating ultrasound images based on whole breast data could be a way toward correlating different information about the same lesion. Very few studies have dealt with such a simulation framework since the breast undergoes large scale deformation between the prone position of magnetic resonance imaging and the largely supine or lateral position of ultrasound. We present a framework for the realistic simulation of 3D ultrasound images based on prone magnetic resonance images from which a supine position is generated using a biomechanical model. The simulation parameters are derived from a real clinical infrastructure and from transducers that are used for routine scans, leading to highly realistic ultrasound images of any region of the breast.",
"title": ""
},
{
"docid": "70a07b906b31054646cf43eb543ba50c",
"text": "1. Cellular and Molecular Research Center, and Neuroscience Department, Tehran University of Medical Sciences, Tehran, Iran 2. Anatomy Department, Tehran University of Medical Science, Tehran, Iran. 3. Physiology Research Center (PRC), Tehran university of Medical Sciences, Tehran, Iran. 4. Institute for Cognitive Science studies (ICSS), Tehran, Iran. 5. Department of Material Science and Engineering, Sharif University of Technology, Tehran, Iran.",
"title": ""
},
{
"docid": "6fb72f68aa41a71ea51b81806d325561",
"text": "An important aspect related to the development of face-aging algorithms is the evaluation of the ability of such algorithms to produce accurate age-progressed faces. In most studies reported in the literature, the performance of face-aging systems is established based either on the judgment of human observers or by using machine-based evaluation methods. In this paper we perform an experimental evaluation that aims to assess the applicability of human-based against typical machine based performance evaluation methods. The results of our experiments indicate that machines can be more accurate in determining the performance of face-aging algorithms. Our work aims towards the development of a complete evaluation framework for age progression methodologies.",
"title": ""
},
{
"docid": "aaf6ed732f2cb5ceff714f1d84dac9ed",
"text": "Video caption refers to generating a descriptive sentence for a specific short video clip automatically, which has achieved remarkable success recently. However, most of the existing methods focus more on visual information while ignoring the synchronized audio cues. We propose three multimodal deep fusion strategies to maximize the benefits of visual-audio resonance information. The first one explores the impact on cross-modalities feature fusion from low to high order. The second establishes the visual-audio short-term dependency by sharing weights of corresponding front-end networks. The third extends the temporal dependency to long-term through sharing multimodal memory across visual and audio modalities. Extensive experiments have validated the effectiveness of our three cross-modalities fusion strategies on two benchmark datasets, including Microsoft Research Video to Text (MSRVTT) and Microsoft Video Description (MSVD). It is worth mentioning that sharing weight can coordinate visualaudio feature fusion effectively and achieve the state-of-art performance on both BELU and METEOR metrics. Furthermore, we first propose a dynamic multimodal feature fusion framework to deal with the part modalities missing case. Experimental results demonstrate that even in the audio absence mode, we can still obtain comparable results with the aid of the additional audio modality inference module.",
"title": ""
},
{
"docid": "a62c03417176b5751471bad386bbfa08",
"text": "Platforms are defined as multisided marketplaces with business models that enable producers and users to create value together by interacting with each other. In recent years, platforms have benefited from the advances of digitalization. Hence, digital platforms continue to triumph, and continue to be attractive for companies, also for startups. In this paper, we first explore the research of platforms compared to digital platforms. We then proceed to analyze digital platforms as business models, in the context of startups looking for business model innovation. Based on interviews conducted at a technology startup event in Finland, we analyzed how 34 startups viewed their business model innovations. Using the 10 sub-constructs from the business model innovation scale by Clauss in 2016, we found out that the idea of business model innovation resonated with startups, as all of them were able to identify the source of their business model innovation. Furthermore, the results indicated the complexity of business model innovation as 79 percent of the respondents explained it with more than one sub-construct. New technology/equipment, new processes and new customers and markets got the most mentions as sources of business model innovation. Overall, the emphasis at startups is on the value creation innovation, with new proposition innovation getting less, and value capture innovation even less emphasis as the source of business model innovation.",
"title": ""
},
{
"docid": "41b3b48c10753600e36a584003eebdd6",
"text": "This paper deals with reliability problems of common types of generators in hard conditions. It shows possibilities of construction changes that should increase the machine reliability. This contribution is dedicated to the study of brushless alternator for automotive industry. There are described problems with usage of common types of alternators and main benefits and disadvantages of several types of brushless alternators.",
"title": ""
},
{
"docid": "64cc022ac7052a9c82108c88e06b0bf7",
"text": "Influential people have an important role in the process of information diffusion. However, there are several ways to be influential, for example, to be the most popular or the first that adopts a new idea. In this paper we present a methodology to find trendsetters in information networks according to a specific topic of interest. Trendsetters are people that adopt and spread new ideas influencing other people before these ideas become popular. At the same time, not all early adopters are trendsetters because only few of them have the ability of propagating their ideas by their social contacts through word-of-mouth. Differently from other influence measures, a trendsetter is not necessarily popular or famous, but the one whose ideas spread over the graph successfully. Other metrics such as node in-degree or even standard Pagerank focus only in the static topology of the network. We propose a ranking strategy that focuses on the ability of some users to push new ideas that will be successful in the future. To that end, we combine temporal attributes of nodes and edges of the network with a Pagerank based algorithm to find the trendsetters for a given topic. To test our algorithm we conduct innovative experiments over a large Twitter dataset. We show that nodes with high in-degree tend to arrive late for new trends, while users in the top of our ranking tend to be early adopters that also influence their social contacts to adopt the new trend.",
"title": ""
},
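The ranking above combines temporal adoption attributes with a PageRank-style propagation so that early adopters who trigger later adoptions rise to the top. As a rough illustration only (the paper's exact weighting is not given here), the sketch below runs power iteration on a graph whose edges pass credit from later adopters back to earlier ones, decaying with the adoption lag; the decay form and all values are assumptions.

```python
import numpy as np

def temporal_pagerank(edges, adopt_time, n, damping=0.85, decay=0.1, iters=100):
    """Toy time-aware PageRank: a later adopter passes credit back to the
    earlier adopters who exposed it to the topic, weighted by the adoption lag."""
    W = np.zeros((n, n))
    for u, v in edges:                       # edge u -> v: v was exposed to the topic by u
        lag = adopt_time[v] - adopt_time[u]
        if lag > 0:                          # only count it if u adopted strictly earlier
            W[u, v] = np.exp(-decay * lag)   # v sends credit back to u
    col = W.sum(axis=0)
    col[col == 0] = 1.0                      # avoid division by zero for nodes crediting no one
    P = W / col                              # column-normalised credit distribution
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - damping) / n + damping * (P @ r)
    return r / r.sum()

# Tiny example: node 0 adopts first and exposes 1 and 2, who later expose 3.
edges = [(0, 1), (0, 2), (1, 3), (2, 3)]
adopt_time = {0: 0.0, 1: 2.0, 2: 3.0, 3: 6.0}
print(temporal_pagerank(edges, adopt_time, n=4).round(3))
```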
{
"docid": "404a662b55baea9402d449fae6192424",
"text": "Emotion is expressed in multiple modalities, yet most research has considered at most one or two. This stems in part from the lack of large, diverse, well-annotated, multimodal databases with which to develop and test algorithms. We present a well-annotated, multimodal, multidimensional spontaneous emotion corpus of 140 participants. Emotion inductions were highly varied. Data were acquired from a variety of sensors of the face that included high-resolution 3D dynamic imaging, high-resolution 2D video, and thermal (infrared) sensing, and contact physiological sensors that included electrical conductivity of the skin, respiration, blood pressure, and heart rate. Facial expression was annotated for both the occurrence and intensity of facial action units from 2D video by experts in the Facial Action Coding System (FACS). The corpus further includes derived features from 3D, 2D, and IR (infrared) sensors and baseline results for facial expression and action unit detection. The entire corpus will be made available to the research community.",
"title": ""
},
{
"docid": "1bdb24fb4c85b3aaf8a8e5d71328a920",
"text": "BACKGROUND\nHigh-grade intraepithelial neoplasia is known to progress to invasive squamous-cell carcinoma of the anus. There are limited reports on the rate of progression from high-grade intraepithelial neoplasia to anal cancer in HIV-positive men who have sex with men.\n\n\nOBJECTIVES\nThe purpose of this study was to describe in HIV-positive men who have sex with men with perianal high-grade intraepithelial neoplasia the rate of progression to anal cancer and the factors associated with that progression.\n\n\nDESIGN\nThis was a prospective cohort study.\n\n\nSETTINGS\nThe study was conducted at an outpatient clinic at a tertiary care center in Toronto.\n\n\nPATIENTS\nThirty-eight patients with perianal high-grade anal intraepithelial neoplasia were identified among 550 HIV-positive men who have sex with men.\n\n\nINTERVENTION\nAll of the patients had high-resolution anoscopy for symptoms, screening, or surveillance with follow-up monitoring/treatment.\n\n\nMAIN OUTCOME MEASURES\nWe measured the incidence of anal cancer per 100 person-years of follow-up.\n\n\nRESULTS\nSeven (of 38) patients (18.4%) with perianal high-grade intraepithelial neoplasia developed anal cancer. The rate of progression was 6.9 (95% CI, 2.8-14.2) cases of anal cancer per 100 person-years of follow-up. A diagnosis of AIDS, previously treated anal cancer, and loss of integrity of the lesion were associated with progression. Anal bleeding was more than twice as common in patients who progressed to anal cancer.\n\n\nLIMITATIONS\nThere was the potential for selection bias and patients were offered treatment, which may have affected incidence estimates.\n\n\nCONCLUSIONS\nHIV-positive men who have sex with men should be monitored for perianal high-grade intraepithelial neoplasia. Those with high-risk features for the development of anal cancer may need more aggressive therapy.",
"title": ""
},
{
"docid": "62688aa48180943a6fcf73fef154fe75",
"text": "Oxidative stress is a phenomenon associated with the pathology of several diseases including atherosclerosis, neurodegenerative diseases such as Alzheimer’s and Parkinson’s diseases, cancer, diabetes mellitus, inflammatory diseases, as well as psychiatric disorders or aging process. Oxidative stress is defined as an imbalance between the production of free radicals and reactive metabolites, so called oxidants, and their elimination by protective mechanisms named antioxidative systems. Free radicals and their metabolites prevail over antioxidants. This imbalance leads to damage of important biomolecules and organs with plausible impact on the whole organism. Oxidative and antioxidative processes are associated with electron transfer influencing the redox state of cells and organisms; therefore, oxidative stress is also known as redox stress. At present, the opinion that oxidative stress is not always harmful has been accepted. Depending on its intensity, it can play a role in regulation of other important processes through modulation of signal pathways, influencing synthesis of antioxidant enzymes, repair processes, inflammation, apoptosis and cell proliferation, and thus process of a malignity. Therefore, improper administration of antioxidants can potentially negatively impact biological systems.",
"title": ""
},
{
"docid": "91c792fac981d027ac1f2a2773674b10",
"text": "Cancer is a molecular disease associated with alterations in the genome, which, thanks to the highly improved sensitivity of mutation detection techniques, can be identified in cell-free DNA (cfDNA) circulating in blood, a method also called liquid biopsy. This is a non-invasive alternative to surgical biopsy and has the potential of revealing the molecular signature of tumors to aid in the individualization of treatments. In this review, we focus on cfDNA analysis, its advantages, and clinical applications employing genomic tools (NGS and dPCR) particularly in the field of oncology, and highlight its valuable contributions to early detection, prognosis, and prediction of treatment response.",
"title": ""
}
] |
scidocsrr
|
912e28e6ac67ccba52c59c59f68d9f48
|
Straight to the Facts: Learning Knowledge Base Retrieval for Factual Visual Question Answering
|
[
{
"docid": "0323cfb6e74e160c44e0922a49ecc28b",
"text": "Generating diverse questions for given images is an important task for computational education, entertainment and AI assistants. Different from many conventional prediction techniques is the need for algorithms to generate a diverse set of plausible questions, which we refer to as creativity. In this paper we propose a creative algorithm for visual question generation which combines the advantages of variational autoencoders with long short-term memory networks. We demonstrate that our framework is able to generate a large set of varying questions given a single input image.",
"title": ""
},
{
"docid": "8b998b9f8ea6cfe5f80a5b3a1b87f807",
"text": "We describe a very simple bag-of-words baseline for visual question answering. This baseline concatenates the word features from the question and CNN features from the image to predict the answer. When evaluated on the challenging VQA dataset [2], it shows comparable performance to many recent approaches using recurrent neural networks. To explore the strength and weakness of the trained model, we also provide an interactive web demo1, and open-source code2.",
"title": ""
},
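The baseline described above simply concatenates bag-of-words question features with CNN image features and applies a classifier over candidate answers. A minimal numpy sketch of that forward pass is shown below; the vocabulary, feature dimensions and randomly initialised softmax weights are placeholders, not the released model.

```python
import numpy as np

def bow_features(question, vocab):
    """Bag-of-words question encoding: one count per vocabulary word."""
    counts = np.zeros(len(vocab))
    for tok in question.lower().split():
        if tok in vocab:
            counts[vocab[tok]] += 1
    return counts

def predict_answer(question, image_feat, W, b, vocab, answers):
    """Concatenate text and image features, apply a linear softmax classifier."""
    x = np.concatenate([bow_features(question, vocab), image_feat])
    logits = W @ x + b
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return answers[int(np.argmax(probs))], probs

# Toy setup: 6-word vocabulary, 8-d "CNN" feature, 3 candidate answers.
vocab = {w: i for i, w in enumerate(["what", "color", "is", "the", "cat", "dog"])}
answers = ["black", "white", "brown"]
rng = np.random.default_rng(1)
W = rng.normal(size=(len(answers), len(vocab) + 8)) * 0.1
b = np.zeros(len(answers))
image_feat = rng.normal(size=8)
print(predict_answer("what color is the cat", image_feat, W, b, vocab, answers)[0])
```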
{
"docid": "c0e2d1740bbe2c40e7acf262cb658ea2",
"text": "The quest for algorithms that enable cognitive abilities is an important part of machine learning. A common trait in many recently investigated cognitive-like tasks is that they take into account different data modalities, such as visual and textual input. In this paper we propose a novel and generally applicable form of attention mechanism that learns high-order correlations between various data modalities. We show that high-order correlations effectively direct the appropriate attention to the relevant elements in the different data modalities that are required to solve the joint task. We demonstrate the effectiveness of our high-order attention mechanism on the task of visual question answering (VQA), where we achieve state-of-the-art performance on the standard VQA dataset.",
"title": ""
},
{
"docid": "8328b1dd52bcc081548a534dc40167a3",
"text": "This work aims to address the problem of imagebased question-answering (QA) with new models and datasets. In our work, we propose to use neural networks and visual semantic embeddings, without intermediate stages such as object detection and image segmentation, to predict answers to simple questions about images. Our model performs 1.8 times better than the only published results on an existing image QA dataset. We also present a question generation algorithm that converts image descriptions, which are widely available, into QA form. We used this algorithm to produce an order-of-magnitude larger dataset, with more evenly distributed answers. A suite of baseline results on this new dataset are also presented.",
"title": ""
},
{
"docid": "db806183810547435075eb6edd28d630",
"text": "Bilinear models provide an appealing framework for mixing and merging information in Visual Question Answering (VQA) tasks. They help to learn high level associations between question meaning and visual concepts in the image, but they suffer from huge dimensionality issues.,,We introduce MUTAN, a multimodal tensor-based Tucker decomposition to efficiently parametrize bilinear interactions between visual and textual representations. Additionally to the Tucker framework, we design a low-rank matrix-based decomposition to explicitly constrain the interaction rank. With MUTAN, we control the complexity of the merging scheme while keeping nice interpretable fusion relations. We show how the Tucker decomposition framework generalizes some of the latest VQA architectures, providing state-of-the-art results.",
"title": ""
},
{
"docid": "4337f8c11a71533d38897095e5e6847a",
"text": "A number of recent works have proposed attention models for Visual Question Answering (VQA) that generate spatial maps highlighting image regions relevant to answering the question. In this paper, we argue that in addition to modeling “where to look” or visual attention, it is equally important to model “what words to listen to” or question attention. We present a novel co-attention model for VQA that jointly reasons about image and question attention. In addition, our model reasons about the question (and consequently the image via the co-attention mechanism) in a hierarchical fashion via a novel 1-dimensional convolution neural networks (CNN). Our model improves the state-of-the-art on the VQA dataset from 60.3% to 60.5%, and from 61.6% to 63.3% on the COCO-QA dataset. By using ResNet, the performance is further improved to 62.1% for VQA and 65.4% for COCO-QA.1. 1 Introduction Visual Question Answering (VQA) [2, 7, 16, 17, 29] has emerged as a prominent multi-discipline research problem in both academia and industry. To correctly answer visual questions about an image, the machine needs to understand both the image and question. Recently, visual attention based models [20, 23–25] have been explored for VQA, where the attention mechanism typically produces a spatial map highlighting image regions relevant to answering the question. So far, all attention models for VQA in literature have focused on the problem of identifying “where to look” or visual attention. In this paper, we argue that the problem of identifying “which words to listen to” or question attention is equally important. Consider the questions “how many horses are in this image?” and “how many horses can you see in this image?\". They have the same meaning, essentially captured by the first three words. A machine that attends to the first three words would arguably be more robust to linguistic variations irrelevant to the meaning and answer of the question. Motivated by this observation, in addition to reasoning about visual attention, we also address the problem of question attention. Specifically, we present a novel multi-modal attention model for VQA with the following two unique features: Co-Attention: We propose a novel mechanism that jointly reasons about visual attention and question attention, which we refer to as co-attention. Unlike previous works, which only focus on visual attention, our model has a natural symmetry between the image and question, in the sense that the image representation is used to guide the question attention and the question representation(s) are used to guide image attention. Question Hierarchy: We build a hierarchical architecture that co-attends to the image and question at three levels: (a) word level, (b) phrase level and (c) question level. At the word level, we embed the words to a vector space through an embedding matrix. At the phrase level, 1-dimensional convolution neural networks are used to capture the information contained in unigrams, bigrams and trigrams. The source code can be downloaded from https://github.com/jiasenlu/HieCoAttenVQA 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. ar X iv :1 60 6. 00 06 1v 3 [ cs .C V ] 2 6 O ct 2 01 6 Ques%on:\t\r What\t\r color\t\r on\t\r the stop\t\r light\t\r is\t\r lit\t\r up\t\r \t\r ? ...\t\r ... color\t\r stop\t\r light\t\r lit co-‐a7en%on color\t\r ...\t\r stop\t\r \t\r light\t\r \t\r ... What color\t\r ... the stop light light\t\r \t\r ... 
What color What\t\r color\t\r on\t\r the\t\r stop\t\r light\t\r is\t\r lit\t\r up ...\t\r ... the\t\r stop\t\r light ...\t\r ... stop Image Answer:\t\r green Figure 1: Flowchart of our proposed hierarchical co-attention model. Given a question, we extract its word level, phrase level and question level embeddings. At each level, we apply co-attention on both the image and question. The final answer prediction is based on all the co-attended image and question features. Specifically, we convolve word representations with temporal filters of varying support, and then combine the various n-gram responses by pooling them into a single phrase level representation. At the question level, we use recurrent neural networks to encode the entire question. For each level of the question representation in this hierarchy, we construct joint question and image co-attention maps, which are then combined recursively to ultimately predict a distribution over the answers. Overall, the main contributions of our work are: • We propose a novel co-attention mechanism for VQA that jointly performs question-guided visual attention and image-guided question attention. We explore this mechanism with two strategies, parallel and alternating co-attention, which are described in Sec. 3.3; • We propose a hierarchical architecture to represent the question, and consequently construct image-question co-attention maps at 3 different levels: word level, phrase level and question level. These co-attended features are then recursively combined from word level to question level for the final answer prediction; • At the phrase level, we propose a novel convolution-pooling strategy to adaptively select the phrase sizes whose representations are passed to the question level representation; • Finally, we evaluate our proposed model on two large datasets, VQA [2] and COCO-QA [17]. We also perform ablation studies to quantify the roles of different components in our model.",
"title": ""
}
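The parallel co-attention strategy mentioned in the passage above relates every question location to every image location through an affinity matrix and then attends in both directions. The sketch below is a simplified, untrained numpy version of that idea; the max-pooling of the affinity matrix, the dimensions and the random weights are assumptions, and the full model additionally uses the hierarchical phrase and question levels described in the text.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def parallel_co_attention(Q, V, W_b):
    """Simplified parallel co-attention.

    Q: (d, T) question features (one column per word or phrase)
    V: (d, N) image features (one column per spatial location)
    W_b: (d, d) bilinear weight matrix (random here, learned in practice)
    Returns attended question and image summary vectors.
    """
    C = np.tanh(Q.T @ W_b @ V)            # (T, N) affinity between words and regions
    a_v = softmax(C.max(axis=0))          # image attention from best-matching word per region
    a_q = softmax(C.max(axis=1))          # question attention from best-matching region per word
    v_hat = V @ a_v                       # (d,) attended image feature
    q_hat = Q @ a_q                       # (d,) attended question feature
    return q_hat, v_hat

rng = np.random.default_rng(2)
d, T, N = 64, 7, 49                       # feature dim, question length, 7x7 image grid
Q = rng.normal(size=(d, T))
V = rng.normal(size=(d, N))
W_b = rng.normal(size=(d, d)) * 0.05
q_hat, v_hat = parallel_co_attention(Q, V, W_b)
print(q_hat.shape, v_hat.shape)
```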
] |
[
{
"docid": "8508162ac44f56aaaa9c521e6628b7b2",
"text": "Pervasive or ubiquitous computing was developed thanks to the technological evolution of embedded systems and computer communication means. Ubiquitous computing has given birth to the concept of smart spaces that facilitate our daily life and increase our comfort where devices provide proactively adpated services. In spite of the significant previous works done in this domain, there still a lot of work and enhancement to do in particular the taking into account of current user's context when providing adaptable services. In this paper we propose an approach for context-aware services adaptation for a smart living room using two machine learning methods.",
"title": ""
},
{
"docid": "e7230519f0bd45b70c1cbd42f09cb9e8",
"text": "Environmental isolates belonging to the genus Acidovorax play a crucial role in degrading a wide range of pollutants. Studies on Acidovorax are currently limited for many species due to the lack of genetic tools. Here, we described the use of the replicon from a small, cryptic plasmid indigenous to Acidovorx temperans strain CB2, to generate stably maintained shuttle vectors. In addition, we have developed a scarless gene knockout technique, as well as establishing green fluorescent protein (GFP) reporter and complementation systems. Taken collectively, these tools will improve genetic manipulations in the genus Acidovorax.",
"title": ""
},
{
"docid": "df2ccac20cdb63038af362ea8950c62d",
"text": "Data-intensive applications that operate on large volumes of data have motivated a fresh look at the design of data center networks. The first wave of proposals focused on designing pure packet-switched networks that provide full bisection bandwidth. However, these proposals significantly increase network complexity in terms of the number of links and switches required and the restricted rules to wire them up. On the other hand, optical circuit switching technology holds a very large bandwidth advantage over packet switching technology. This fact motivates us to explore how optical circuit switching technology could benefit a data center network. In particular, we propose a hybrid packet and circuit switched data center network architecture (or HyPaC for short) which augments the traditional hierarchy of packet switches with a high speed, low complexity, rack-to-rack optical circuit-switched network to supply high bandwidth to applications. We discuss the fundamental requirements of this hybrid architecture and their design options. To demonstrate the potential benefits of the hybrid architecture, we have built a prototype system called c-Through. c-Through represents a design point where the responsibility for traffic demand estimation and traffic demultiplexing resides in end hosts, making it compatible with existing packet switches. Our emulation experiments show that the hybrid architecture can provide large benefits to unmodified popular data center applications at a modest scale. Furthermore, our experimental experience provides useful insights on the applicability of the hybrid architecture across a range of deployment scenarios.",
"title": ""
},
{
"docid": "8a607387d2803985d28d386258ba7fae",
"text": "based on cross-cultural research. This approach expands earlier theoretical interpretations offered for the significance of cave art that fail to account for central aspects of cave art material. Clottes & Lewis-Williams (1998), Smith (1992) and Ryan (1999) concur in the interpretation that neurologically-based shamanic practices were central to cave art (cf. Lewis-Williams 1997a,b). Clottes & Lewis-Williams suggest that, in spite of the temporal distance, we have better access to Upper Palaeolithic peoples’ religious experiences than other aspects of their lives because of the neuropsychological basis of those experiences. The commonality in the experiences of shamanism across space and time provides a basis for forming ‘some idea of the social and mental context out of which Upper Palaeolithic religion and art came’ (Clottes & LewisMichael Winkelman",
"title": ""
},
{
"docid": "946e5205a93f71e0cfadf58df186ef7e",
"text": "Face recognition has made extraordinary progress owing to the advancement of deep convolutional neural networks (CNNs). The central task of face recognition, including face verification and identification, involves face feature discrimination. However, the traditional softmax loss of deep CNNs usually lacks the power of discrimination. To address this problem, recently several loss functions such as center loss, large margin softmax loss, and angular softmax loss have been proposed. All these improved losses share the same idea: maximizing inter-class variance and minimizing intra-class variance. In this paper, we propose a novel loss function, namely large margin cosine loss (LMCL), to realize this idea from a different perspective. More specifically, we reformulate the softmax loss as a cosine loss by L2 normalizing both features and weight vectors to remove radial variations, based on which a cosine margin term is introduced to further maximize the decision margin in the angular space. As a result, minimum intra-class variance and maximum inter-class variance are achieved by virtue of normalization and cosine decision margin maximization. We refer to our model trained with LMCL as CosFace. Extensive experimental evaluations are conducted on the most popular public-domain face recognition datasets such as MegaFace Challenge, Youtube Faces (YTF) and Labeled Face in the Wild (LFW). We achieve the state-of-the-art performance on these benchmarks, which confirms the effectiveness of our proposed approach.",
"title": ""
},
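The large margin cosine loss (LMCL) described above L2-normalises both the features and the class weight vectors so that the logits become cosine similarities, subtracts a margin m from the target-class cosine, scales by s, and applies softmax cross-entropy. A small numpy sketch of that computation follows; the batch data are random, and the s and m values are commonly used settings rather than ones taken from the passage.

```python
import numpy as np

def large_margin_cosine_loss(features, weights, labels, s=30.0, m=0.35):
    """Compute the LMCL / CosFace loss for a batch.

    features: (B, d) raw feature vectors
    weights:  (C, d) classifier weight vectors, one per class
    labels:   (B,) integer class labels
    """
    # L2-normalise features and class weights so logits are cosines.
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=1, keepdims=True)
    cos = f @ w.T                                   # (B, C) cosine similarities
    # Subtract the margin m only from the target-class cosine.
    margin_cos = cos.copy()
    margin_cos[np.arange(len(labels)), labels] -= m
    logits = s * margin_cos
    # Softmax cross-entropy on the scaled, margin-adjusted cosines.
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

rng = np.random.default_rng(3)
feats = rng.normal(size=(4, 128))                   # 4 face embeddings
W = rng.normal(size=(10, 128))                      # 10 identities
y = np.array([0, 3, 3, 7])
print(round(large_margin_cosine_loss(feats, W, y), 4))
```

In a training setting the same computation would be done inside the network's computational graph so that gradients flow back through the normalisation.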
{
"docid": "58c995ef5e6b2b46c1c1d716733f051f",
"text": "A printed dipole with an adjustable integrated balun is presented, featuring a broadband performance and flexibility for the matching to different impedance values. As a benchmarking topology, an eight-element linear antenna array is designed and built for base stations used in broadband wireless communications.",
"title": ""
},
{
"docid": "1ae41c6041c8c09277dab4ccbd4c2773",
"text": "While the Internet of Things (IoT) technology has been widely recognized as the essential part of Smart Cities, it also brings new challenges in terms of privacy and security. Access control (AC) is among the top security concerns, which is critical in resource and information protection over IoT devices. Traditional access control approaches, like Access Control Lists (ACL), Role-based Access Control (RBAC) and Attribute-based Access Control (ABAC), are not able to provide a scalable, manageable and efficient mechanism to meet the requirements of IoT systems. Another weakness in today’s AC is the centralized authorization server, which can be the performance bottleneck or the single point of failure. Inspired by the smart contract on top of a blockchain protocol, this paper proposes BlendCAC, which is a decentralized, federated capability-based AC mechanism to enable an effective protection for devices, services and information in large scale IoT systems. A federated capability-based delegation model (FCDM) is introduced to support hierarchical and multi-hop delegation. The mechanism for delegate authorization and revocation is explored. A robust identity-based capability token management strategy is proposed, which takes advantage of the smart contract for registering, propagating and revocating of the access authorization. A proof-of-concept prototype has been implemented on both resources-constrained devices (i.e., Raspberry PI node) and more powerful computing devices (i.e., laptops), and tested on a local private blockchain network. The experimental results demonstrate the feasibility of the BlendCAC to offer a decentralized, scalable, lightweight and fine-grained AC solution for IoT systems.",
"title": ""
},
{
"docid": "c1f9456f9479378cd887b3f1c4d15016",
"text": "Emerging Internet of Things system utilizes heterogeneous proximity-based ubiquitous resources to provide various real-time multimedia services to mobile application users. In general, such a system relies on distant Cloud services to perform all the data processing tasks, which results in explicit latency. Consequently, Fog computing, which utilizes the proximal computational and networking resources, has arisen. However, utilizing Fog for real-time mobile applications faces the new challenge of ensuring the seamless accessibility of Fog services on the move. This paper proposes a framework for proactive Fog service discovery and process migration using Mobile Ad hoc Social Network in proximity. The proposed framework enables Fog-assisted ubiquitous multimedia service provisioning in proximity without distant Cloud services. A proof-of-concept prototype has been implemented and tested on real devices. Additionally, the proposed Fog service discovery and process migration algorithm have been tested on the ONE simulator.",
"title": ""
},
{
"docid": "b90b7b44971cf93ba343b5dcdd060875",
"text": "This paper discusses a general approach to qualitative modeling based on fuzzy logic. The method of qualitative modeling is divided into two parts: fuzzy modeling and linguistic approximation. It proposes to use a fuzzy clustering method (fuzzy c-means method) to identify the structure of a fuzzy model. To clarify the advantages of the proposed method, it also shows some examples of modeling, among them a model of a dynamical process and a model of a human operator’s control action.",
"title": ""
},
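The qualitative modelling approach above identifies the structure of a fuzzy model with the fuzzy c-means clustering method. As a reference, a compact numpy implementation of the standard fuzzy c-means updates (alternating centre and membership estimation) is sketched below; the toy data, the fuzzifier m = 2 and the tolerance are arbitrary illustrative choices.

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Standard fuzzy c-means: returns cluster centres and membership matrix U (n, c)."""
    rng = np.random.default_rng(seed)
    n = len(X)
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)            # memberships sum to 1 per point
    for _ in range(max_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # Distances of every point to every centre (eps avoids division by zero).
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U_new = 1.0 / (d ** (2.0 / (m - 1.0)))
        U_new /= U_new.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return centers, U

# Toy input/output data with two operating regimes.
X = np.array([[0.1, 0.2], [0.2, 0.1], [0.15, 0.25],
              [0.9, 1.0], [1.0, 0.9], [0.95, 1.1]])
centers, U = fuzzy_c_means(X, c=2)
print(centers.round(2))
print(U.round(2))
```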
{
"docid": "104b72422962b2fe339eae3616dced0e",
"text": "We present an efficient algorithm to compute the intersection of algebraic and NURBS surfaces. Our approach is based on combining the marching methods with the algbraic formulation. In particular, we propose and matrix computations. We present algorithms to compute a start point on each component of the intersection curve (both open and closed components), detect the presence of singularities, and find all the curve branches near the singularity. We also suggest methods to compute the step size during tracing to prevent component jumping. The algorithm runs an order of magnitude faster than previously published robust algorithms. The complexity of the algorithm is output sensitive.",
"title": ""
},
{
"docid": "a2f46b51b65c56acf6768f8e0d3feb79",
"text": "In this paper we introduce Linear Relational Embedding as a means of learning a distributed representation of concepts from data consisting of binary relations between concepts. The key idea is to represent concepts as vectors, binary relations as matrices, and the operation of applying a relation to a concept as a matrix-vector multiplication that produces an approximation to the related concept. A representation for concepts and relations is learned by maximizing an appropriate discriminative goodness function using gradient ascent. On a task involving family relationships, learning is fast and leads to good generalization. Learning Distributed Representations of Concepts using Linear Relational Embedding Alberto Paccanaro Geoffrey Hinton Gatsby Unit",
"title": ""
},
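Linear Relational Embedding, described above, represents concepts as vectors and binary relations as matrices, and applies a relation to a concept with a matrix-vector product whose result should approximate the related concept. The sketch below shows just this representation and a nearest-concept lookup; the gradient-ascent learning of the embeddings is omitted, and the vectors here are random placeholders, so the printed answer is not meaningful.

```python
import numpy as np

rng = np.random.default_rng(4)
dim = 8
# Concepts are vectors; relations are matrices (random stand-ins for learned ones).
concepts = {name: rng.normal(size=dim) for name in
            ["Alice", "Bob", "Carol", "Dave"]}
relations = {name: rng.normal(size=(dim, dim)) for name in
             ["mother_of", "brother_of"]}

def apply_relation(relation, concept):
    """Applying a relation to a concept is a matrix-vector multiplication."""
    return relations[relation] @ concepts[concept]

def nearest_concept(vec):
    """Return the concept whose embedding is closest to the predicted vector."""
    names = list(concepts)
    dists = [np.linalg.norm(concepts[n] - vec) for n in names]
    return names[int(np.argmin(dists))]

# With learned (rather than random) embeddings, this lookup would answer
# questions such as "who is the mother of Bob?".
pred = apply_relation("mother_of", "Bob")
print(nearest_concept(pred))
```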
{
"docid": "14d480e4c9256d0ef5e5684860ae4d7f",
"text": "Changes in land use and land cover (LULC) as well as climate are likely to affect the geographic distribution of malaria vectors and parasites in the coming decades. At present, malaria transmission is concentrated mainly in the Amazon basin where extensive agriculture, mining, and logging activities have resulted in changes to local and regional hydrology, massive loss of forest cover, and increased contact between malaria vectors and hosts. Employing presence-only records, bioclimatic, topographic, hydrologic, LULC and human population data, we modeled the distribution of malaria and two of its dominant vectors, Anopheles darlingi, and Anopheles nuneztovari s.l. in northern South America using the species distribution modeling platform Maxent. Results from our land change modeling indicate that about 70,000 km2 of forest land would be lost by 2050 and 78,000 km2 by 2070 compared to 2010. The Maxent model predicted zones of relatively high habitat suitability for malaria and the vectors mainly within the Amazon and along coastlines. While areas with malaria are expected to decrease in line with current downward trends, both vectors are predicted to experience range expansions in the future. Elevation, annual precipitation and temperature were influential in all models both current and future. Human population mostly affected An. darlingi distribution while LULC changes influenced An. nuneztovari s.l. distribution. As the region tackles the challenge of malaria elimination, investigations such as this could be useful for planning and management purposes and aid in predicting and addressing potential impediments to elimination.",
"title": ""
},
{
"docid": "a2f5bb20d262b8bab9450ae16cd43abc",
"text": "The design and implementation of a high efficiency Class-J power amplifier (PA) for basestation applications is reported. A commercially available 10W GaN HEMT device was used, for which a large-signal model and an extrinsic parasitic model were available. Following Class-J theory, the needed harmonic terminations at the output of the transistor were defined and realised. Experimental results show good agreement with simulations verifying the class of operation. Efficiency above 70% is demonstrated with an output power of 39.7dBm at an input drive of 29dBm. High efficiency is sustained over a bandwidth of 140MHz.",
"title": ""
},
{
"docid": "62e386315d2f4b8ed5ca3bcce71c4e83",
"text": "Continuous space word embeddings learned from large, unstructured corpora have been shown to be effective at capturing semantic regularities in language. In this paper we replace LDA’s parameterization of “topics” as categorical distributions over opaque word types with multivariate Gaussian distributions on the embedding space. This encourages the model to group words that are a priori known to be semantically related into topics. To perform inference, we introduce a fast collapsed Gibbs sampling algorithm based on Cholesky decompositions of covariance matrices of the posterior predictive distributions. We further derive a scalable algorithm that draws samples from stale posterior predictive distributions and corrects them with a Metropolis–Hastings step. Using vectors learned from a domain-general corpus (English Wikipedia), we report results on two document collections (20-newsgroups and NIPS). Qualitatively, Gaussian LDA infers different (but still very sensible) topics relative to standard LDA. Quantitatively, our technique outperforms existing models at dealing with OOV words in held-out documents.",
"title": ""
},
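Gaussian LDA, as described above, replaces the categorical topic-word distributions with multivariate Gaussians on the word-embedding space, so each token's embedding is scored under every topic during sampling. The sketch below is a simplified illustration of that scoring step only: it uses fixed Gaussian parameters and plain normal densities rather than the collapsed posterior-predictive distributions and Cholesky rank-one updates of the paper, and all parameter values are placeholders.

```python
import numpy as np

def gaussian_logpdf(x, mean, cov):
    """Log density of a multivariate normal, via a Cholesky factorisation."""
    d = len(mean)
    L = np.linalg.cholesky(cov)
    diff = np.linalg.solve(L, x - mean)
    return (-0.5 * diff @ diff
            - np.log(np.diag(L)).sum()
            - 0.5 * d * np.log(2 * np.pi))

def topic_posteriors(word_vec, doc_topic_counts, topic_means, topic_covs, alpha=0.1):
    """Probability of each topic for one token: proportional to
    (document-topic count + alpha) times the Gaussian likelihood of the embedding."""
    K = len(topic_means)
    log_p = np.array([
        np.log(doc_topic_counts[k] + alpha)
        + gaussian_logpdf(word_vec, topic_means[k], topic_covs[k])
        for k in range(K)
    ])
    log_p -= log_p.max()
    p = np.exp(log_p)
    return p / p.sum()

rng = np.random.default_rng(5)
dim, K = 50, 3
topic_means = rng.normal(size=(K, dim))
topic_covs = np.stack([np.eye(dim) for _ in range(K)])   # spherical toy topics
doc_counts = np.array([4, 1, 0])                          # topic counts in this document
word_vec = topic_means[0] + 0.1 * rng.normal(size=dim)    # embedding near topic 0
print(topic_posteriors(word_vec, doc_counts, topic_means, topic_covs).round(3))
```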
{
"docid": "e5dbd27e1dca8f920c416a570d05e94f",
"text": "OBJECTIVE\nTo examine the effect of the adapted virtual reality cognitive training program in older adults with chronic schizophrenia.\n\n\nMETHODS\nOlder adults with chronic schizophrenia were recruited from a long-stay care setting and were randomly assigned into intervention (n = 12) and control group (n = 15). The intervention group received 10-session of VR program that consisted of 2 VR activities using IREX. The control group attended the usual programs in the setting.\n\n\nRESULTS\nAfter the 10-session intervention, older adults with chronic schizophrenia preformed significantly better than control in overall cognitive function (p .000), and in two cognitive subscales: repetition (p .001) and memory (p .040). These participants engaged in the VR activities volitionally. No problem of cybersickness was observed.\n\n\nCONCLUSIONS\nThe results of the current study indicate that engaging in the adapted virtual reality cognitive training program offers the potential for significant gains in cognitive function of the older adults with chronic schizophrenia.",
"title": ""
},
{
"docid": "96aa1f19a00226af7b5bbe0bb080582e",
"text": "CONTEXT\nComprehensive discharge planning by advanced practice nurses has demonstrated short-term reductions in readmissions of elderly patients, but the benefits of more intensive follow-up of hospitalized elders at risk for poor outcomes after discharge has not been studied.\n\n\nOBJECTIVE\nTo examine the effectiveness of an advanced practice nurse-centered discharge planning and home follow-up intervention for elders at risk for hospital readmissions.\n\n\nDESIGN\nRandomized clinical trial with follow-up at 2, 6, 12, and 24 weeks after index hospital discharge.\n\n\nSETTING\nTwo urban, academically affiliated hospitals in Philadelphia, Pa.\n\n\nPARTICIPANTS\nEligible patients were 65 years or older, hospitalized between August 1992 and March 1996, and had 1 of several medical and surgical reasons for admission.\n\n\nINTERVENTION\nIntervention group patients received a comprehensive discharge planning and home follow-up protocol designed specifically for elders at risk for poor outcomes after discharge and implemented by advanced practice nurses.\n\n\nMAIN OUTCOME MEASURES\nReadmissions, time to first readmission, acute care visits after discharge, costs, functional status, depression, and patient satisfaction.\n\n\nRESULTS\nA total of 363 patients (186 in the control group and 177 in the intervention group) were enrolled in the study; 70% of intervention and 74% of control subjects completed the trial. Mean age of sample was 75 years; 50% were men and 45% were black. By week 24 after the index hospital discharge, control group patients were more likely than intervention group patients to be readmitted at least once (37.1 % vs 20.3 %; P<.001). Fewer intervention group patients had multiple readmissions (6.2% vs 14.5%; P = .01) and the intervention group had fewer hospital days per patient (1.53 vs 4.09 days; P<.001). Time to first readmission was increased in the intervention group (P<.001). At 24 weeks after discharge, total Medicare reimbursements for health services were about $1.2 million in the control group vs about $0.6 million in the intervention group (P<.001). There were no significant group differences in post-discharge acute care visits, functional status, depression, or patient satisfaction.\n\n\nCONCLUSIONS\nAn advanced practice nurse-centered discharge planning and home care intervention for at-risk hospitalized elders reduced readmissions, lengthened the time between discharge and readmission, and decreased the costs of providing health care. Thus, the intervention demonstrated great potential in promoting positive outcomes for hospitalized elders at high risk for rehospitalization while reducing costs.",
"title": ""
},
{
"docid": "799bc245ecfabf59416432ab62fe9320",
"text": "This study examines resolution skills in phishing email detection, defined as the abilities of individuals to discern correct judgments from incorrect judgments in probabilistic decisionmaking. An illustration of the resolution skills is provided. A number of antecedents to resolution skills in phishing email detection, including familiarity with the sender, familiarity with the email, online transaction experience, prior victimization of phishing attack, perceived selfefficacy, time to judgment, and variability of time in judgments, are examined. Implications of the study are further discussed.",
"title": ""
},
{
"docid": "3f255fa3dcb8b027f1736b30e98254f9",
"text": "We introduce a novel training principle for probabilistic models that is an alternative to maximum likelihood. The proposed Generative Stochastic Networks (GSN) framework is based on learning the transition operator of a Markov chain whose stationary distribution estimates the data distribution. The transition distribution of the Markov chain is conditional on the previous state, generally involving a small move, so this conditional distribution has fewer dominant modes, being unimodal in the limit of small moves. Thus, it is easier to learn because it is easier to approximate its partition function, more like learning to perform supervised function approximation, with gradients that can be obtained by backprop. We provide theorems that generalize recent work on the probabilistic interpretation of denoising autoencoders and obtain along the way an interesting justification for dependency networks and generalized pseudolikelihood, along with a definition of an appropriate joint distribution and sampling mechanism even when the conditionals are not consistent. GSNs can be used with missing inputs and can be used to sample subsets of variables given the rest. We validate these theoretical results with experiments on two image datasets using an architecture that mimics the Deep Boltzmann Machine Gibbs sampler but allows training to proceed with simple backprop, without the need for layerwise pretraining.",
"title": ""
},
{
"docid": "490dc6ee9efd084ecf2496b72893a39a",
"text": "The rise of blockchain-based cryptocurrencies has led to an explosion of services using distributed ledgers as their underlying infrastructure. However, due to inherently single-service oriented blockchain protocols, such services can bloat the existing ledgers, fail to provide sufficient security, or completely forego the property of trustless auditability. Security concerns, trust restrictions, and scalability limits regarding the resource requirements of users hamper the sustainable development of loosely-coupled services on blockchains. This paper introduces Aspen, a sharded blockchain protocol designed to securely scale with increasing number of services. Aspen shares the same trust model as Bitcoin in a peer-to-peer network that is prone to extreme churn containing Byzantine participants. It enables introduction of new services without compromising the security, leveraging the trust assumptions, or flooding users with irrelevant messages.",
"title": ""
},
{
"docid": "d64d589068d68ef19d7ac77ab55c8318",
"text": "Cloud computing is a revolutionary paradigm to deliver computing resources, ranging from data storage/processing to software, as a service over the network, with the benefits of efficient resource utilization and improved manageability. The current popular cloud computing models encompass a cluster of expensive and dedicated machines to provide cloud computing services, incurring significant investment in capital outlay and ongoing costs. A more cost effective solution would be to exploit the capabilities of an ad hoc cloud which consists of a cloud of distributed and dynamically untapped local resources. The ad hoc cloud can be further classified into static and mobile clouds: an ad hoc static cloud harnesses the underutilized computing resources of general purpose machines, whereas an ad hoc mobile cloud harnesses the idle computing resources of mobile devices. However, the dynamic and distributed characteristics of ad hoc cloud introduce challenges in system management. In this article, we propose a generic em autonomic mobile cloud (AMCloud) management framework for automatic and efficient service/resource management of ad hoc cloud in both static and mobile modes. We then discuss in detail the possible security and privacy issues in ad hoc cloud computing. A general security architecture is developed to facilitate the study of prevention and defense approaches toward a secure autonomic cloud system. This article is expected to be useful for exploring future research activities to achieve an autonomic and secure ad hoc cloud computing system.",
"title": ""
}
] |
scidocsrr
|
ceecd0bdda7f5916200f2659a333cdf1
|
DemographicVis: Analyzing demographic information based on user generated content
|
[
{
"docid": "deda12e60ddba97be009ce1f24feba7e",
"text": "It is important for many different applications such as government and business intelligence to analyze and explore the diffusion of public opinions on social media. However, the rapid propagation and great diversity of public opinions on social media pose great challenges to effective analysis of opinion diffusion. In this paper, we introduce a visual analysis system called OpinionFlow to empower analysts to detect opinion propagation patterns and glean insights. Inspired by the information diffusion model and the theory of selective exposure, we develop an opinion diffusion model to approximate opinion propagation among Twitter users. Accordingly, we design an opinion flow visualization that combines a Sankey graph with a tailored density map in one view to visually convey diffusion of opinions among many users. A stacked tree is used to allow analysts to select topics of interest at different levels. The stacked tree is synchronized with the opinion flow visualization to help users examine and compare diffusion patterns across topics. Experiments and case studies on Twitter data demonstrate the effectiveness and usability of OpinionFlow.",
"title": ""
}
] |
[
{
"docid": "53569b0225db62d4c627e41469cb91b8",
"text": "A first proof-of-concept mm-sized implant based on ultrasonic power transfer and RF uplink data transmission is presented. The prototype consists of a 1 mm × 1 mm piezoelectric receiver, a 1 mm × 2 mm chip designed in 65 nm CMOS and a 2.5 mm × 2.5 mm off-chip antenna, and operates through 3 cm of chicken meat which emulates human tissue. The implant supports a DC load power of 100 μW allowing for high-power applications. It also transmits consecutive UWB pulse sequences activated by the ultrasonic downlink data path, demonstrating sufficient power for an Mary PPM transmitter in uplink.",
"title": ""
},
{
"docid": "bbd1e7e579d2543be236a5f69cf42981",
"text": "To date, there is almost no work on the use of adverbs in sentiment analysis, nor has there been any work on the use of adverb-adjective combinations (AACs). We propose an AAC-based sentiment analysis technique that uses a linguistic analysis of adverbs of degree. We define a set of general axioms (based on a classification of adverbs of degree into five categories) that all adverb scoring techniques must satisfy. Instead of aggregating scores of both adverbs and adjectives using simple scoring functions, we propose an axiomatic treatment of AACs based on the linguistic classification of adverbs. Three specific AAC scoring methods that satisfy the axioms are presented. We describe the results of experiments on an annotated set of 200 news articles (annotated by 10 students) and compare our algorithms with some existing sentiment analysis algorithms. We show that our results lead to higher accuracy based on Pearson correlation with human subjects.",
"title": ""
},
{
"docid": "12524304546ca59b7e8acb2a7f6d6699",
"text": "Multiple-choice items are a mainstay of achievement testing. The need to adequately cover the content domain to certify achievement proficiency by producing meaningful precise scores requires many high-quality items. More 3-option items can be administered than 4or 5-option items per testing time while improving content coverage, without detrimental effects on psychometric quality of test scores. Researchers have endorsed 3-option items for over 80 years with empirical evidence—the results of which have been synthesized in an effort to unify this endorsement and encourage its adoption.",
"title": ""
},
{
"docid": "c43b77b56a6e2cb16a6b85815449529d",
"text": "We propose a new method for clustering multivariate time series. A univariate time series can be represented by a fixed-length vector whose components are statistical features of the time series, capturing the global structure. These descriptive vectors, one for each component of the multivariate time series, are concatenated, before being clustered using a standard fast clustering algorithm such as k-means or hierarchical clustering. Such statistical feature extraction also serves as a dimension-reduction procedure for multivariate time series. We demonstrate the effectiveness and simplicity of our proposed method by clustering human motion sequences: dynamic and high-dimensional multivariate time series. The proposed method based on univariate time series structure and statistical metrics provides a novel, yet simple and flexible way to cluster multivariate time series data efficiently with promising accuracy. The success of our method on the case study suggests that clustering may be a valuable addition to the tools available for human motion pattern recognition research.",
"title": ""
},
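The clustering method above reduces each component of a multivariate time series to a fixed-length vector of global statistical features, concatenates these vectors, and clusters the results with a standard algorithm such as k-means. A small numpy sketch follows; the specific features chosen here (mean, standard deviation, skewness, lag-1 autocorrelation) and the tiny k-means loop are illustrative assumptions rather than the paper's exact feature set.

```python
import numpy as np

def series_features(x):
    """A few global statistics describing one univariate series."""
    x = np.asarray(x, dtype=float)
    mean, std = x.mean(), x.std()
    skew = ((x - mean) ** 3).mean() / (std ** 3 + 1e-12)
    ac1 = np.corrcoef(x[:-1], x[1:])[0, 1]      # lag-1 autocorrelation
    return np.array([mean, std, skew, ac1])

def multivariate_features(series):
    """Concatenate the per-component feature vectors of a multivariate series."""
    return np.concatenate([series_features(c) for c in series])

def kmeans(X, k, iters=50, seed=0):
    """Minimal Lloyd's k-means on the descriptive feature vectors."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels

# Toy data: 6 two-component series, half noisy sinusoids, half random walks.
rng = np.random.default_rng(6)
t = np.linspace(0, 4 * np.pi, 200)
sines = [[np.sin(t) + 0.1 * rng.normal(size=t.size),
          np.cos(t) + 0.1 * rng.normal(size=t.size)] for _ in range(3)]
walks = [[np.cumsum(rng.normal(size=t.size)),
          np.cumsum(rng.normal(size=t.size))] for _ in range(3)]
X = np.array([multivariate_features(s) for s in sines + walks])
print(kmeans(X, k=2))
```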
{
"docid": "e63dfda54251b861691d88b8d7f00298",
"text": "The negative effects of sleep deprivation on alertness and cognitive performance suggest decreases in brain activity and function, primarily in the thalamus, a subcortical structure involved in alertness and attention, and in the prefrontal cortex, a region subserving alertness, attention, and higher-order cognitive processes. To test this hypothesis, 17 normal subjects were scanned for quantifiable brain activity changes during 85 h of sleep deprivation using positron emission tomography (PET) and (18)Fluorine-2-deoxyglucose ((18)FDG), a marker for regional cerebral metabolic rate for glucose (CMRglu) and neuronal synaptic activity. Subjects were scanned prior to and at 24-h intervals during the sleep deprivation period, for a total of four scans per subject. During each 30 min (18)FDG uptake, subjects performed a sleep deprivation-sensitive Serial Addition/Subtraction task. Polysomnographic monitoring confirmed that subjects were awake. Twenty-four hours of sleep deprivation, reported here, resulted in a significant decrease in global CMRglu, and significant decreases in absolute regional CMRglu in several cortical and subcortical structures. No areas of the brain evidenced a significant increase in absolute regional CMRglu. Significant decreases in relative regional CMRglu, reflecting regional brain reductions greater than the global decrease, occurred predominantly in the thalamus and prefrontal and posterior parietal cortices. Alertness and cognitive performance declined in association with these brain deactivations. This study provides evidence that short-term sleep deprivation produces global decreases in brain activity, with larger reductions in activity in the distributed cortico-thalamic network mediating attention and higher-order cognitive processes, and is complementary to studies demonstrating deactivation of these cortical regions during NREM and REM sleep.",
"title": ""
},
{
"docid": "6a61dc5ea4f3c664f56f0449da181ef4",
"text": "In recent times, the study and use of induced pluripotent stem cells (iPSC) have become important in order to avoid the ethical issues surrounding the use of embryonic stem cells. Therapeutic, industrial and research based use of iPSC requires large quantities of cells generated in vitro. Mammalian cells, including pluripotent stem cells, have been expanded using 3D culture, however current limitations have not been overcome to allow a uniform, optimized platform for dynamic culture of pluripotent stem cells to be achieved. In the current work, we have expanded mouse iPSC in a spinner flask using Cytodex 3 microcarriers. We have looked at the effect of agitation on the microcarrier survival and optimized an agitation speed that supports bead suspension and iPS cell expansion without any bead breakage. Under the optimized conditions, the mouse iPSC were able to maintain their growth, pluripotency and differentiation capability. We demonstrate that microcarrier survival and iPS cell expansion in a spinner flask are reliant on a very narrow range of spin rates, highlighting the need for precise control of such set ups and the need for improved design of more robust systems.",
"title": ""
},
{
"docid": "754fb355da63d024e3464b4656ea5e8d",
"text": "Improvements in implant designs have helped advance successful immediate anterior implant placement into fresh extraction sockets. Clinical techniques described in this case enable practitioners to achieve predictable esthetic success using a method that limits the amount of buccal contour change of the extraction site ridge and potentially enhances the thickness of the peri-implant soft tissues coronal to the implant-abutment interface. This approach involves atraumatic tooth removal without flap elevation, and placing a bone graft into the residual gap around an immediate fresh-socket anterior implant with a screw-retained provisional restoration acting as a prosthetic socket seal device.",
"title": ""
},
{
"docid": "209203c297898a2251cfd62bdfc37296",
"text": "Evolutionary computation uses computational models of evolutionary processes as key elements in the design and implementation of computerbased problem solving systems. In this paper we provide an overview of evolutionary computation, and describe several evolutionary algorithms that are currently of interest. Important similarities and differences are noted, which lead to a discussion of important issues that need to be resolved, and items for future research.",
"title": ""
},
{
"docid": "5673fc81ba9a1d26531bcf7a1572e873",
"text": "Spatio-temporal channel information obtained via channel sounding is invaluable for implementing equalizers, multi-antenna systems, and dynamic modulation schemes in next-generation wireless systems. The most straightforward means of performing channel measurements is in the frequency domain using a vector network analyzer (VNA). However, the high cost of VNAs often leads engineers to seek more economical solutions by measuring the wireless channel in the time domain. The bandwidth compression of the sliding correlator channel sounder makes it the preferred means of performing time-domain channel measurements.",
"title": ""
},
{
"docid": "d4858f49894fcceb15a121a25da4d861",
"text": "Remote backup copies of databases are often maintained to ensure availability of data even in the presence of extensive failures, for which local replication mechanisms may be inadequate. We present two versions of an epoch algorithm for maintaining a consistent remote backup copy of a database. The algorithms ensure scalability, which makes them suitable for very large databases. The correctness and the performance of the algorithms are discussed, and an additional application for distributed group commit is given.",
"title": ""
},
{
"docid": "5d91cf986b61bf095c04b68da2bb83d3",
"text": "The adeno-associated virus (AAV) vector has been used in preclinical and clinical trials of gene therapy for central nervous system (CNS) diseases. One of the biggest challenges of effectively delivering AAV to the brain is to surmount the blood-brain barrier (BBB). Herein, we identified several potential BBB shuttle peptides that significantly enhanced AAV8 transduction in the brain after a systemic administration, the best of which was the THR peptide. The enhancement of AAV8 brain transduction by THR is dose-dependent, and neurons are the primary THR targets. Mechanism studies revealed that THR directly bound to the AAV8 virion, increasing its ability to cross the endothelial cell barrier. Further experiments showed that binding of THR to the AAV virion did not interfere with AAV8 infection biology, and that THR competitively blocked transferrin from binding to AAV8. Taken together, our results demonstrate, for the first time, that BBB shuttle peptides are able to directly interact with AAV and increase the ability of the AAV vectors to cross the BBB for transduction enhancement in the brain. These results will shed important light on the potential applications of BBB shuttle peptides for enhancing brain transduction with systemic administration of AAV vectors.",
"title": ""
},
{
"docid": "c71635ec5c0ef83c850cab138330f727",
"text": "Academic institutions are now drawing attention in finding methods for making effective learning process, for identifying learner’s achievements and weakness, for tracing academic progress and also for predicting future performance. People’s increased expectation for accountability and transparency makes it necessary to implement big data analytics in the educational institution. But not all the educationalist and administrators are ready to take the challenge. So, it is now obvious to know about the necessity and opportunity as well as challenges of implementing big data analytics. This paper will describe the needs, opportunities and challenges of implementing big data analytics in the education sector.",
"title": ""
},
{
"docid": "872d589cd879dee7d88185851b9546ab",
"text": "Considering few treatments are available to slow or stop neurodegenerative disorders, such as Alzheimer’s disease and related dementias (ADRD), modifying lifestyle factors to prevent disease onset are recommended. The Voice, Activity, and Location Monitoring system for Alzheimer’s disease (VALMA) is a novel ambulatory sensor system designed to capture natural behaviours across multiple domains to profile lifestyle risk factors related to ADRD. Objective measures of physical activity and sleep are provided by lower limb accelerometry. Audio and GPS location records provide verbal and mobility activity, respectively. Based on a familiar smartphone package, data collection with the system has proven to be feasible in community-dwelling older adults. Objective assessments of everyday activity will impact diagnosis of disease and design of exercise, sleep, and social interventions to prevent and/or slow disease progression.",
"title": ""
},
{
"docid": "ce098e1e022235a2c322a231bff8da6c",
"text": "In recent years, due to the development of three-dimensional scanning technology, the opportunities for real objects to be three-dimensionally measured, taken into the PC as point cloud data, and used for various contents are increasing. However, the point cloud data obtained by three-dimensional scanning has many problems such as data loss due to occlusion or the material of the object to be measured, and occurrence of noise. Therefore, it is necessary to edit the point cloud data obtained by scanning. Particularly, since the point cloud data obtained by scanning contains many data missing, it takes much time to fill holes. Therefore, we propose a method to automatically filling hole obtained by three-dimensional scanning. In our method, a surface is generated from a point in the vicinity of a hole, and a hole region is filled by generating a point sequence on the surface. This method is suitable for processing to fill a large number of holes because point sequence interpolation can be performed automatically for hole regions without requiring user input.",
"title": ""
},
{
"docid": "525ddfaae4403392e8817986f2680a68",
"text": "Documentation errors increase healthcare costs and cause unnecessary patient deaths. As the standard language for diagnoses and billing, ICD codes serve as the foundation for medical documentation worldwide. Despite the prevalence of electronic medical records, hospitals still witness high levels of ICD miscoding. In this paper, we propose to automatically document ICD codes with far-field speech recognition. Far-field speech occurs when the microphone is located several meters from the source, as is common with smart homes and security systems. Our method combines acoustic signal processing with recurrent neural networks to recognize and document ICD codes in real time. To evaluate our model, we collected a far-field speech dataset of ICD-10 codes and found our model to achieve 87% accuracy with a BLEU score of 85%. By sampling from an unsupervised medical language model, our method is able to outperform existing methods. Overall, this work shows the potential of automatic speech recognition to provide efficient, accurate, and cost-effective healthcare documentation.",
"title": ""
},
{
"docid": "7196b6f6b14827d60f968534d52b4852",
"text": "Therapeutic applications of the psychedelics or hallucinogens found cross-culturally involve treatment of a variety of physical, psychological, and social maladies. Modern medicine has similarly found that a range of conditions may be successfully treated with these agents. The ability to treat a wide variety of conditions derives from variation in active ingredients, doses and modes of application, and factors of set and setting manipulated in ritual. Similarities in effects reported cross-culturally reflect biological mechanisms, while success in the treatment of a variety of specific psychological conditions points to the importance of ritual in eliciting their effects. Similar bases involve action on the serotonin and dopamine neurotransmitter systems that can be characterized as psychointegration: an elevation of ancient brain processes. Therapeutic Application of Sacred Medicines in the Premodern and Modern World Societies worldwide have discovered therapeutic applications of psychoactive plants, often referred to as sacred medicines, particularly those called psychedelics or hallucinogens. Hundreds of species of such plants and fungi were used for medicinal and religious purposes (see Schultes et al. 1992; Rätsch 2005), as well as for a variety of psychological and social conditions, culture-bound syndromes, and Thanks to Ilsa Jerome for providing some updated references for this paper. M. J. Winkelman (&) Retired from the School of Human Evolution and Social Change, Arizona State University Tempe Arizona, Caixa Postal 62, Pirenópolis, GO 72980-000, Brazil e-mail: [email protected] B. C. Labate and C. Cavnar (eds.), The Therapeutic Use of Ayahuasca, DOI: 10.1007/978-3-642-40426-9_1, Springer-Verlag Berlin Heidelberg 2014 1 a range of physical diseases (see Schultes and Winkelman 1996). This review illustrates the range of uses and the diverse potential of these substances for addressing human maladies. The ethnographic data on indigenous uses of these substances, combined with a brief overview of some of the modern medical studies, illustrate that a wide range of effects are obtained with these plants. These cultural therapies involve both pharmacological and ritual manipulations. Highly developed healing traditions selectively utilized different species of the same genus, different preparation methods and doses, varying admixtures, and a variety of ritual and psychotherapeutic processes to obtain specific desired effects. The wide range of uses of these plants suggests that they can contribute new active ingredients for modern medicine, particularly in psychiatry. As was illustrated by our illustrious contributors to Psychedelic Medicine (Winkelman and Roberts 2007a, b), there are a number of areas in which psychedelics have been established in treating what have been considered intractable health problems. While double-blind clinical trials have been sparse (but see Griffiths et al. 2006), this is not due to the lack of evidence for efficacy, but rather the administrative prohibitions that have drastically restricted clinical research. Nonetheless, using the criteria of phases of clinical evaluation, Winkelman and Roberts (2007c) concluded that there is at least Phase II evidence for the effectiveness of most of these psychedelics, supporting the continuation of more advanced trials. 
Furthermore, their success with the often intractable maladies, ranging from depression and cluster headaches to posttraumatic stress disorder (PTSD), obsessive-compulsive disorders, wasting syndromes, and addictions justifies their immediate use with these desperate patient populations. In addition, the wide variety of therapeutic uses found for these substances in cultures around the world suggest the potential for far greater applications. Therapeutic Uses of Psilocybin-containing ‘‘Magic Mushrooms’’ The Aztecs called these fungi teonanacatl, meaning ‘‘food of the gods’’; there is evidence of the use of psilocybin-containing mushrooms from many different genera in ritual healing practices in cultures around the world and deep in prehistory (see Rätsch 2005). One of the best documented therapeutic uses of psilocybin involves Maria Sabina, the Mazatec ‘‘Wise One’’ (Estrada 1981). Several different Psilocybe species are used by the Mazatec, as well as mushrooms of the Conocybe genera. In addition, other psychoactive plants are also employed, including Salvia divinorum Epl. and tobacco (Nicotiana rustica L., Solanaceae). 1 Phase II studies or trials use small groups of selected patients to determine effectiveness and ideal doses for a specific illness after Phase I trials have established safety (lack of toxicity) and safe dose ranges. 2 M. J. Winkelman",
"title": ""
},
{
"docid": "3906227f9766e1434e33f1d817f99641",
"text": "With the advent of large labelled datasets and highcapacity models, the performance of machine vision systems has been improving rapidly. However, the technology has still major limitations, starting from the fact that different vision problems are still solved by different models, trained from scratch or fine-tuned on the target data. The human visual system, in stark contrast, learns a universal representation for vision in the early life of an individual. This representation works well for an enormous variety of vision problems, with little or no change, with the major advantage of requiring little training data to solve any of them. In this paper we investigate whether neural networks may work as universal representations by studying their capacity in relation to the “size” of a large combination of vision problems. We do so by showing that a single neural network can learn simultaneously several very different visual domains (from sketches to planktons and MNIST digits) as well as, or better than, a number of specialized networks. However, we also show that this requires to carefully normalize the information in the network, by using domainspecific scaling factors or, more generically, by using an instance normalization layer.",
"title": ""
},
{
"docid": "bc23df5db0a87c44c944ddf2898db407",
"text": "B-trees have been ubiquitous in database management systems for several decades, and they serve in many other storage systems as well. Their basic structure and their basic operations are well understood including search, insertion, and deletion. However, implementation of transactional guarantees such as all-or-nothing failure atomicity and durability in spite of media and system failures seems to be difficult. High-performance techniques such as pseudo-deleted records, allocation-only logging, and transaction processing during crash recovery are widely used in commercial B-tree implementations but not widely understood. This survey collects many of these techniques as a reference for students, researchers, system architects, and software developers. Central in this discussion are physical data independence, separation of logical database contents and physical representation, and the concepts of user transactions and system transactions. Many of the techniques discussed are applicable beyond B-trees.",
"title": ""
},
{
"docid": "be8eb6c72936af75c1e41f9e17ba2579",
"text": "The use of unmanned aerial vehicles (UAVs) is growing rapidly across many civil application domains including realtime monitoring, providing wireless coverage, remote sensing, search and rescue, delivery of goods, security and surveillance, precision agriculture, and civil infrastructure inspection. Smart UAVs are the next big revolution in UAV technology promising to provide new opportunities in different applications, especially in civil infrastructure in terms of reduced risks and lower cost. Civil infrastructure is expected to dominate the more that $45 Billion market value of UAV usage. In this survey, we present UAV civil applications and their challenges. We also discuss current research trends and provide future insights for potential UAV uses. Furthermore, we present the key challenges for UAV civil applications, including: charging challenges, collision avoidance and swarming challenges, and networking and security related challenges. Based on our review of the recent literature, we discuss open research challenges and draw high-level insights on how these challenges might be approached.",
"title": ""
}
] |
scidocsrr
|
379bc9f0d7e44547dd6a08eb885ccc15
|
Anomaly Detection in Wireless Sensor Networks in a Non-Stationary Environment
|
[
{
"docid": "60fe7f27cd6312c986b679abce3fdea7",
"text": "In matters of great importance that have financial, medical, social, or other implications, we often seek a second opinion before making a decision, sometimes a third, and sometimes many more. In doing so, we weigh the individual opinions, and combine them through some thought process to reach a final decision that is presumably the most informed one. The process of consulting \"several experts\" before making a final decision is perhaps second nature to us; yet, the extensive benefits of such a process in automated decision making applications have only recently been discovered by computational intelligence community. Also known under various other names, such as multiple classifier systems, committee of classifiers, or mixture of experts, ensemble based systems have shown to produce favorable results compared to those of single-expert systems for a broad range of applications and under a variety of scenarios. Design, implementation and application of such systems are the main topics of this article. Specifically, this paper reviews conditions under which ensemble based systems may be more beneficial than their single classifier counterparts, algorithms for generating individual components of the ensemble systems, and various procedures through which the individual classifiers can be combined. We discuss popular ensemble based algorithms, such as bagging, boosting, AdaBoost, stacked generalization, and hierarchical mixture of experts; as well as commonly used combination rules, including algebraic combination of outputs, voting based techniques, behavior knowledge space, and decision templates. Finally, we look at current and future research directions for novel applications of ensemble systems. Such applications include incremental learning, data fusion, feature selection, learning with missing features, confidence estimation, and error correcting output codes; all areas in which ensemble systems have shown great promise",
"title": ""
},
{
"docid": "3be38e070678e358e23cb81432033062",
"text": "W ireless integrated network sensors (WINS) provide distributed network and Internet access to sensors, controls, and processors deeply embedded in equipment, facilities, and the environment. The WINS network represents a new monitoring and control capability for applications in such industries as transportation, manufacturing, health care, environmental oversight, and safety and security. WINS combine microsensor technology and low-power signal processing, computation, and low-cost wireless networking in a compact system. Recent advances in integrated circuit technology have enabled construction of far more capable yet inexpensive sensors, radios, and processors, allowing mass production of sophisticated systems linking the physical world to digital data networks [2–5]. Scales range from local to global for applications in medicine, security, factory automation, environmental monitoring, and condition-based maintenance. Compact geometry and low cost allow WINS to be embedded and distributed at a fraction of the cost of conventional wireline sensor and actuator systems. WINS opportunities depend on development of a scalable, low-cost, sensor-network architecture. Such applications require delivery of sensor information to the user at a low bit rate through low-power transceivers. Continuous sensor signal processing enables the constant monitoring of events in an environment in which short message packets would suffice. Future applications of distributed embedded processors and sensors will require vast numbers of devices. Conventional methods of sensor networking represent an impractical demand on cable installation and network bandwidth. Processing at the source would drastically reduce the financial, computational, and management burden on communication system",
"title": ""
}
] |
[
{
"docid": "2fa6f761f22e0484a84f83e5772bef40",
"text": "We consider the problem of planning smooth paths for a vehicle in a region bounded by polygonal chains. The paths are represented as B-spline functions. A path is found by solving an optimization problem using a cost function designed to care for both the smoothness of the path and the safety of the vehicle. Smoothness is defined as small magnitude of the derivative of curvature and safety is defined as the degree of centering of the path between the polygonal chains. The polygonal chains are preprocessed in order to remove excess parts and introduce safety margins for the vehicle. The method has been implemented for use with a standard solver and tests have been made on application data provided by the Swedish mining company LKAB.",
"title": ""
},
{
"docid": "ba0dce539f33496dedac000b61efa971",
"text": "The webpage aesthetics is one of the factors that affect the way people are attracted to a site. But two questions emerge: how can we improve a webpage's aesthetics and how can we evaluate this item? In order to solve this problem, we identified some of the theory that is underlying graphic design, gestalt theory and multimedia design. Based in the literature review, we proposed principles for web site design. We also propose a tool to evaluate web design.",
"title": ""
},
{
"docid": "e726e11f855515017de77508b79d3308",
"text": "OBJECTIVES\nThis study was conducted to better understand the characteristics of chronic pain patients seeking treatment with medicinal cannabis (MC).\n\n\nDESIGN\nRetrospective chart reviews of 139 patients (87 males, median age 47 years; 52 females, median age 48 years); all were legally qualified for MC use in Washington State.\n\n\nSETTING\nRegional pain clinic staffed by university faculty.\n\n\nPARTICIPANTS\n\n\n\nINCLUSION CRITERIA\nage 18 years and older; having legally accessed MC treatment, with valid documentation in their medical records. All data were de-identified.\n\n\nMAIN OUTCOME MEASURES\nRecords were scored for multiple indicators, including time since initial MC authorization, qualifying condition(s), McGill Pain score, functional status, use of other analgesic modalities, including opioids, and patterns of use over time.\n\n\nRESULTS\nOf 139 patients, 15 (11 percent) had prior authorizations for MC before seeking care in this clinic. The sample contained 236.4 patient-years of authorized MC use. Time of authorized use ranged from 11 days to 8.31 years (median of 1.12 years). Most patients were male (63 percent) yet female patients averaged 0.18 years longer authorized use. There were no other gender-specific trends or factors. Most patients (n = 123, 88 percent) had more than one pain syndrome present. Myofascial pain syndrome was the most common diagnosis (n = 114, 82 percent), followed by neuropathic pain (n = 89, 64 percent), discogenic back pain (n = 72, 51.7 percent), and osteoarthritis (n = 37, 26.6 percent). Other diagnoses included diabetic neuropathy, central pain syndrome, phantom pain, spinal cord injury, fibromyalgia, rheumatoid arthritis, HIV neuropathy, visceral pain, and malignant pain. In 51 (37 percent) patients, there were documented instances of major hurdles related to accessing MC, including prior physicians unwilling to authorize use, legal problems related to MC use, and difficulties in finding an affordable and consistent supply of MC.\n\n\nCONCLUSIONS\nData indicate that males and females access MC at approximately the same rate, with similar median authorization times. Although the majority of patient records documented significant symptom alleviation with MC, major treatment access and delivery barriers remain.",
"title": ""
},
{
"docid": "b6dcf2064ad7f06fd1672b1348d92737",
"text": "In this paper, we propose a two-step method to recognize multiple-food images by detecting candidate regions with several methods and classifying them with various kinds of features. In the first step, we detect several candidate regions by fusing outputs of several region detectors including Felzenszwalb's deformable part model (DPM) [1], a circle detector and the JSEG region segmentation. In the second step, we apply a feature-fusion-based food recognition method for bounding boxes of the candidate regions with various kinds of visual features including bag-of-features of SIFT and CSIFT with spatial pyramid (SP-BoF), histogram of oriented gradient (HoG), and Gabor texture features. In the experiments, we estimated ten food candidates for multiple-food images in the descending order of the confidence scores. As results, we have achieved the 55.8% classification rate, which improved the baseline result in case of using only DPM by 14.3 points, for a multiple-food image data set. This demonstrates that the proposed two-step method is effective for recognition of multiple-food images.",
"title": ""
},
{
"docid": "d47143c38598cf88eeb8be654f8a7a00",
"text": "Long Short-Term Memory (LSTM) networks have yielded excellent results on handwriting recognition. This paper describes an application of bidirectional LSTM networks to the problem of machine-printed Latin and Fraktur recognition. Latin and Fraktur recognition differs significantly from handwriting recognition in both the statistical properties of the data, as well as in the required, much higher levels of accuracy. Applications of LSTM networks to handwriting recognition use two-dimensional recurrent networks, since the exact position and baseline of handwritten characters is variable. In contrast, for printed OCR, we used a one-dimensional recurrent network combined with a novel algorithm for baseline and x-height normalization. A number of databases were used for training and testing, including the UW3 database, artificially generated and degraded Fraktur text and scanned pages from a book digitization project. The LSTM architecture achieved 0.6% character-level test-set error on English text. When the artificially degraded Fraktur data set is divided into training and test sets, the system achieves an error rate of 1.64%. On specific books printed in Fraktur (not part of the training set), the system achieves error rates of 0.15% (Fontane) and 1.47% (Ersch-Gruber). These recognition accuracies were found without using any language modelling or any other post-processing techniques.",
"title": ""
},
{
"docid": "0b0273a1e2aeb98eb4115113c8957fd2",
"text": "This paper deals with the approach of integrating a bidirectional boost-converter into the drivetrain of a (hybrid) electric vehicle in order to exploit the full potential of the electric drives and the battery. Currently, the automotive norms and standards are defined based on the characteristics of the voltage source. The current technologies of batteries for automotive applications have voltage which depends on the load and the state-of charge. The aim of this paper is to provide better system performance by stabilizing the voltage without the need of redesigning any of the current components in the system. To show the added-value of the proposed electrical topology, loss estimation is developed and proved based on actual components measurements and design. The component and its modelling is then implemented in a global system simulation environment of the electric architecture to show how it contributes enhancing the performance of the system.",
"title": ""
},
{
"docid": "affa4a43b68f8c158090df3a368fe6b6",
"text": "The purpose of this study is to evaluate the impact of modulated light projections perceived through the eyes on the autonomic nervous system (ANS). Three types of light projections, each containing both specific colors and specific modulations in the brainwaves frequency range, were tested, in addition to a placebo projection consisting of non-modulated white light. Evaluation was done using a combination of physiological measures (HR, HRV, SC) and psychological tests (Amen, POMS). Significant differences were found in the ANS effects of each of the colored light projections, and also between the colored and white projections.",
"title": ""
},
{
"docid": "49f96e96623502ffe6053cab43054edf",
"text": "Background YouTube, the online video creation and sharing site, supports both video content viewing and content creation activities. For a minority of people, the time spent engaging with YouTube can be excessive and potentially problematic. Method This study analyzed the relationship between content viewing, content creation, and YouTube addiction in a survey of 410 Indian-student YouTube users. It also examined the influence of content, social, technology, and process gratifications on user inclination toward YouTube content viewing and content creation. Results The results demonstrated that content creation in YouTube had a closer relationship with YouTube addiction than content viewing. Furthermore, social gratification was found to have a significant influence on both types of YouTube activities, whereas technology gratification did not significantly influence them. Among all perceived gratifications, content gratification had the highest relationship coefficient value with YouTube content creation inclination. The model fit and variance extracted by the endogenous constructs were good, which further validated the results of the analysis. Conclusion The study facilitates new ways to explore user gratification in using YouTube and how the channel responds to it.",
"title": ""
},
{
"docid": "21ad29105c4b6772b05156afd33ac145",
"text": "High resolution Digital Surface Models (DSMs) produced from airborne laser-scanning or stereo satellite images provide a very useful source of information for automated 3D building reconstruction. In this paper an investigation is reported about extraction of 3D building models from high resolution DSMs and orthorectified images produced from Worldview-2 stereo satellite imagery. The focus is on the generation of 3D models of parametric building roofs, which is the basis for creating Level Of Detail 2 (LOD2) according to the CityGML standard. In particular the building blocks containing several connected buildings with tilted roofs are investigated and the potentials and limitations of the modeling approach are discussed. The edge information extracted from orthorectified image has been employed as additional source of information in 3D reconstruction algorithm. A model driven approach based on the analysis of the 3D points of DSMs in a 2D projection plane is proposed. Accordingly, a building block is divided into smaller parts according to the direction and number of existing ridge lines for parametric building reconstruction. The 3D model is derived for each building part, and finally, a complete parametric model is formed by merging the 3D models of the individual building parts and adjusting the nodes after the merging step. For the remaining building parts that do not contain ridge lines, a prismatic model using polygon approximation of the corresponding boundary pixels is derived and merged to the parametric models to shape the final model of the building. A qualitative and quantitative assessment of the proposed method for the automatic reconstruction of buildings with parametric roofs is then provided by comparing the final model with the existing surface model as well as some field measurements. Remote Sens. 2013, 5 1682",
"title": ""
},
{
"docid": "c89ce1ded524ff65c1ebd3d20be155bc",
"text": "Actuarial risk assessment tools are used extensively to predict future violence, but previous studies comparing their predictive accuracies have produced inconsistent findings as a result of various methodological issues. We conducted meta-analyses of the effect sizes of 9 commonly used risk assessment tools and their subscales to compare their predictive efficacies for violence. The effect sizes were extracted from 28 original reports published between 1999 and 2008, which assessed the predictive accuracy of more than one tool. We used a within-subject design to improve statistical power and multilevel regression models to disentangle random effects of variation between studies and tools and to adjust for study features. All 9 tools and their subscales predicted violence at about the same moderate level of predictive efficacy with the exception of Psychopathy Checklist--Revised (PCL-R) Factor 1, which predicted violence only at chance level among men. Approximately 25% of the total variance was due to differences between tools, whereas approximately 85% of heterogeneity between studies was explained by methodological features (age, length of follow-up, different types of violent outcome, sex, and sex-related interactions). Sex-differentiated efficacy was found for a small number of the tools. If the intention is only to predict future violence, then the 9 tools are essentially interchangeable; the selection of which tool to use in practice should depend on what other functions the tool can perform rather than on its efficacy in predicting violence. The moderate level of predictive accuracy of these tools suggests that they should not be used solely for some criminal justice decision making that requires a very high level of accuracy such as preventive detention.",
"title": ""
},
{
"docid": "16741aac03ea1a864ddab65c8c73eb7c",
"text": "This report describes a preliminary evaluation of performance of a cell-FPGA-like architecture for future hybrid \"CMOL\" circuits. Such circuits will combine a semiconduc-tor-transistor (CMOS) stack and a two-level nanowire crossbar with molecular-scale two-terminal nanodevices (program-mable diodes) formed at each crosspoint. Our cell-based architecture is based on a uniform CMOL fabric of \"tiles\". Each tile consists of 12 four-transistor basic cells and one (four times larger) latch cell. Due to high density of nanodevices, which may be used for both logic and routing functions, CMOL FPGA may be reconfigured around defective nanodevices to provide high defect tolerance. Using a semi-custom set of design automation tools we have evaluated CMOL FPGA performance for the Toronto 20 benchmark set, so far without optimization of several parameters including the power supply voltage and nanowire pitch. The results show that even without such optimization, CMOL FPGA circuits may provide a density advantage of more than two orders of magnitude over the traditional CMOS FPGA with the same CMOS design rules, at comparable time delay, acceptable power consumption and potentially high defect tolerance.",
"title": ""
},
{
"docid": "cffce89fbb97dc1d2eb31a060a335d3c",
"text": "This doctoral thesis deals with a number of challenges related to investigating and devising solutions to the Sentiment Analysis Problem, a subset of the discipline known as Natural Language Processing (NLP), following a path that differs from the most common approaches currently in-use. The majority of the research and applications building in Sentiment Analysis (SA) / Opinion Mining (OM) have been conducted and developed using Supervised Machine Learning techniques. It is our intention to prove that a hybrid approach merging fuzzy sets, a solid sentiment lexicon, traditional NLP techniques and aggregation methods will have the effect of compounding the power of all the positive aspects of these tools. In this thesis we will prove three main aspects, namely: 1. That a Hybrid Classification Model based on the techniques mentioned in the previous paragraphs will be capable of: (a) performing same or better than established Supervised Machine Learning techniques -namely, Naı̈ve Bayes and Maximum Entropy (ME)when the latter are utilised respectively as the only classification methods being applied, when calculating subjectivity polarity, and (b) computing the intensity of the polarity previously estimated. 2. That cross-ratio uninorms can be used to effectively fuse the classification outputs of several algorithms producing a compensatory effect. 3. That the Induced Ordered Weighted Averaging (IOWA) operator is a very good choice to model the opinion of the majority (consensus) when the outputs of a number of classification methods are combined together. For academic and experimental purposes we have built the proposed methods and associated prototypes in an iterative fashion: • Step 1: we start with the so-called Hybrid Standard Classification (HSC) method, responsible for subjectivity polarity determination. • Step 2: then, we have continued with the Hybrid Advanced Classification (HAC) method that computes the polarity intensity of opinions/sentiments. • Step 3: in closing, we present two methods that produce a semantic-specific aggregation of two or more classification methods, as a complement to the HSC/HAC methods when the latter cannot generate a classification value or when we are looking for an aggregation that implies consensus, respectively: ◦ the Hybrid Advanced Classification with Aggregation by Cross-ratio Uninorm (HACACU) method. ◦ the Hybrid Advanced Classification with Aggregation by Consensus (HACACO) method.",
"title": ""
},
{
"docid": "8c853251e0fb408c829e6f99a581d4cf",
"text": "We consider a simple and overarching representation for permutation-invariant functions of sequences (or set functions). Our approach, which we call Janossy pooling, expresses a permutation-invariant function as the average of a permutation-sensitive function applied to all reorderings of the input sequence. This allows us to leverage the rich and mature literature on permutation-sensitive functions to construct novel and flexible permutation-invariant functions. If carried out naively, Janossy pooling can be computationally prohibitive. To allow computational tractability, we consider three kinds of approximations: canonical orderings of sequences, functions with k-order interactions, and stochastic optimization algorithms with random permutations. Our framework unifies a variety of existing work in the literature, and suggests possible modeling and algorithmic extensions. We explore a few in our experiments, which demonstrate improved performance over current state-of-the-art methods.",
"title": ""
},
{
"docid": "fb89a5aa87f1458177d6a32ef25fdf3b",
"text": "The increase in population, the rapid economic growth and the rise in community living standards accelerate municipal solid waste (MSW) generation in developing cities. This problem is especially serious in Pudong New Area, Shanghai, China. The daily amount of MSW generated in Pudong was about 1.11 kg per person in 2006. According to the current population growth trend, the solid waste quantity generated will continue to increase with the city's development. In this paper, we describe a waste generation and composition analysis and provide a comprehensive review of municipal solid waste management (MSWM) in Pudong. Some of the important aspects of waste management, such as the current status of waste collection, transport and disposal in Pudong, will be illustrated. Also, the current situation will be evaluated, and its problems will be identified.",
"title": ""
},
{
"docid": "bcd16100ca6814503e876f9f15b8c7fb",
"text": "OBJECTIVE\nBrain-computer interfaces (BCIs) are devices that enable severely disabled people to communicate and interact with their environments using their brain waves. Most studies investigating BCI in humans have used scalp EEG as the source of electrical signals and focused on motor control of prostheses or computer cursors on a screen. The authors hypothesize that the use of brain signals obtained directly from the cortical surface will more effectively control a communication/spelling task compared to scalp EEG.\n\n\nMETHODS\nA total of 6 patients with medically intractable epilepsy were tested for the ability to control a visual keyboard using electrocorticographic (ECOG) signals. ECOG data collected during a P300 visual task paradigm were preprocessed and used to train a linear classifier to subsequently predict the intended target letters.\n\n\nRESULTS\nThe classifier was able to predict the intended target character at or near 100% accuracy using fewer than 15 stimulation sequences in 5 of the 6 people tested. ECOG data from electrodes outside the language cortex contributed to the classifier and enabled participants to write words on a visual keyboard.\n\n\nCONCLUSIONS\nThis is a novel finding because previous invasive BCI research in humans used signals exclusively from the motor cortex to control a computer cursor or prosthetic device. These results demonstrate that ECOG signals from electrodes both overlying and outside the language cortex can reliably control a visual keyboard to generate language output without voice or limb movements.",
"title": ""
},
{
"docid": "8e324cf4900431593d9ebc73e7809b23",
"text": "Even though there is a plethora of studies investigating the challenges of adopting ebanking services, a search through the literature indicates that prior studies have investigated either user adoption challenges or the bank implementation challenges. This study integrated both perspectives to provide a broader conceptual framework for investigating challenges banks face in marketing e-banking services in developing country such as Ghana. The results from the mixed method study indicated that institutional–based challenges as well as userbased challenges affect the marketing of e-banking products in Ghana. The strategic implications of the findings for marketing ebanking services are discussed to guide managers to implement e-banking services in Ghana.",
"title": ""
},
{
"docid": "62166980f94bba5e75c9c6ad4a4348f1",
"text": "In this paper the design and the implementation of a linear, non-uniform antenna array for a 77-GHz MIMO FMCW system that allows for the estimation of both the distance and the angular position of a target are presented. The goal is to achieve a good trade-off between the main beam width and the side lobe level. The non-uniform spacing in addition with the MIMO principle offers a superior performance compared to a classical uniform half-wavelength antenna array with an equal number of elements. However the design becomes more complicated and can not be tackled using analytical methods. Starting with elementary array factor considerations the design is approached using brute force, stepwise brute force, and particle swarm optimization. The particle swarm optimized array was also implemented. Simulation results and measurements are presented and discussed.",
"title": ""
},
{
"docid": "eba25ae59603328f3ef84c0994d46472",
"text": "We address the problem of how to personalize educational content to students in order to maximize their learning gains over time. We present a new computational approach to this problem called MAPLE (Multi-Armed Bandits based Personalization for Learning Environments) that combines difficulty ranking with multi-armed bandits. Given a set of target questions MAPLE estimates the expected learning gains for each question and uses an exploration-exploitation strategy to choose the next question to pose to the student. It maintains a personalized ranking over the difficulties of question in the target set and updates it in real-time according to students’ progress. We show in simulations that MAPLE was able to improve students’ learning gains compared to approaches that sequence questions in increasing level of difficulty, or rely on content experts. When implemented in a live e-learning system in the wild, MAPLE showed promising initial results.",
"title": ""
},
{
"docid": "13974867d98411b6a999374afcc5b2cb",
"text": "Current best local descriptors are learned on a large dataset of matching and non-matching keypoint pairs. However, data of this kind is not always available since detailed keypoint correspondences can be hard to establish. On the other hand, we can often obtain labels for pairs of keypoint bags. For example, keypoint bags extracted from two images of the same object under different views form a matching pair, and keypoint bags extracted from images of different objects form a non-matching pair. On average, matching pairs should contain more corresponding keypoints than non-matching pairs. We describe an end-to-end differentiable architecture that enables the learning of local keypoint descriptors from such weakly-labeled data.",
"title": ""
},
{
"docid": "bc7f80192416aa7787657aed1bda3997",
"text": "In this paper we propose a deep learning technique to improve the performance of semantic segmentation tasks. Previously proposed algorithms generally suffer from the over-dependence on a single modality as well as a lack of training data. We made three contributions to improve the performance. Firstly, we adopt two models which are complementary in our framework to enrich field-of-views and features to make segmentation more reliable. Secondly, we repurpose the datasets form other tasks to the segmentation task by training the two models in our framework on different datasets. This brings the benefits of data augmentation while saving the cost of image annotation. Thirdly, the number of parameters in our framework is minimized to reduce the complexity of the framework and to avoid over- fitting. Experimental results show that our framework significantly outperforms the current state-of-the-art methods with a smaller number of parameters and better generalization ability.",
"title": ""
}
] |
scidocsrr
|
8f37b402bb1ac9b58883707aee4a2b5c
|
RELIABILITY-BASED MANAGEMENT OF BURIED PIPELINES
|
[
{
"docid": "150e7a6f46e93fc917e43e32dedd9424",
"text": "This purpose of this introductory paper is threefold. First, it introduces the Monte Carlo method with emphasis on probabilistic machine learning. Second, it reviews the main building blocks of modern Markov chain Monte Carlo simulation, thereby providing and introduction to the remaining papers of this special issue. Lastly, it discusses new interesting research horizons.",
"title": ""
}
] |
[
{
"docid": "8abd03202f496de4bec6270946d53a9c",
"text": "In this paper, we use time-series modeling to forecast taxi travel demand, in the context of a mobile application-based taxi hailing service. In particular, we model the passenger demand density at various locations in the city of Bengaluru, India. Using the data, we first shortlist time-series models that suit our application. We then analyse the performance of these models by using Mean Absolute Percentage Error (MAPE) as the performance metric. In order to improve the model performance, we employ a multi-level clustering technique where we aggregate demand over neighboring cells/geohashes. We observe that the improved model based on clustering leads to a forecast accuracy of 80% per km2. In addition, our technique obtains an accuracy of 89% per km2 for the most frequently occurring use case.",
"title": ""
},
{
"docid": "80e9f9261397cb378920a6c897fd352a",
"text": "Purpose: This study develops a comprehensive research model that can explain potential customers’ behavioral intentions to adopt and use smart home services. Methodology: This study proposes and validates a new theoretical model that extends the theory of planned behavior (TPB). Partial least squares analysis (PLS) is employed to test the research model and corresponding hypotheses on data collected from 216 survey samples. Findings: Mobility, security/privacy risk, and trust in the service provider are important factors affecting the adoption of smart home services. Practical implications: To increase potential users’ adoption rate, service providers should focus on developing mobility-related services that enable people to access smart home services while on the move using mobile devices via control and monitoring functions. Originality/Value: This study is the first empirical attempt to examine user acceptance of smart home services, as most of the prior literature has concerned technical features.",
"title": ""
},
{
"docid": "7bd440a6c7aece364877dbb5170cfcfb",
"text": "Semantic representation lies at the core of several applications in Natural Language Processing. However, most existing semantic representation techniques cannot be used effectively for the representation of individual word senses. We put forward a novel multilingual concept representation, called MUFFIN, which not only enables accurate representation of word senses in different languages, but also provides multiple advantages over existing approaches. MUFFIN represents a given concept in a unified semantic space irrespective of the language of interest, enabling cross-lingual comparison of different concepts. We evaluate our approach in two different evaluation benchmarks, semantic similarity and Word Sense Disambiguation, reporting state-of-the-art performance on several standard datasets.",
"title": ""
},
{
"docid": "29e56287071ca1fc1bf3d83f67b3ce8d",
"text": "In this paper, we seek to identify factors that might increase the likelihood of adoption and continued use of cyberinfrastructure by scientists. To do so, we review the main research on Information and Communications Technology (ICT) adoption and use by addressing research problems, theories and models used, findings, and limitations. We focus particularly on the individual user perspective. We categorize previous studies into two groups: Adoption research and post-adoption (continued use) research. In addition, we review studies specifically regarding cyberinfrastructure adoption and use by scientists and other special user groups. We identify the limitations of previous theories, models and research findings appearing in the literature related to our current interest in scientists’ adoption and continued use of cyber-infrastructure. We synthesize the previous theories and models used for ICT adoption and use, and then we develop a theoretical framework for studying scientists’ adoption and use of cyber-infrastructure. We also proposed a research design based on the research model developed. Implications for researchers and practitioners are provided.",
"title": ""
},
{
"docid": "da9ffb00398f6aad726c247e3d1f2450",
"text": "We propose noWorkflow, a tool that transparently captures provenance of scripts and enables reproducibility. Unlike existing approaches, noWorkflow is non-intrusive and does not require users to change the way they work – users need not wrap their experiments in scientific workflow systems, install version control systems, or instrument their scripts. The tool leverages Software Engineering techniques, such as abstract syntax tree analysis, reflection, and profiling, to collect different types of provenance, including detailed information about the underlying libraries. We describe how noWorkflow captures multiple kinds of provenance and the different classes of analyses it supports: graph-based visualization; differencing over provenance trails; and inference queries.",
"title": ""
},
{
"docid": "59e02bc986876edc0ee0a97fd4d12a28",
"text": "CONTEXT\nSocial anxiety disorder is thought to involve emotional hyperreactivity, cognitive distortions, and ineffective emotion regulation. While the neural bases of emotional reactivity to social stimuli have been described, the neural bases of emotional reactivity and cognitive regulation during social and physical threat, and their relationship to social anxiety symptom severity, have yet to be investigated.\n\n\nOBJECTIVE\nTo investigate behavioral and neural correlates of emotional reactivity and cognitive regulation in patients and controls during processing of social and physical threat stimuli.\n\n\nDESIGN\nParticipants were trained to implement cognitive-linguistic regulation of emotional reactivity induced by social (harsh facial expressions) and physical (violent scenes) threat while undergoing functional magnetic resonance imaging and providing behavioral ratings of negative emotion experience.\n\n\nSETTING\nAcademic psychology department.\n\n\nPARTICIPANTS\nFifteen adults with social anxiety disorder and 17 demographically matched healthy controls.\n\n\nMAIN OUTCOME MEASURES\nBlood oxygen level-dependent signal and negative emotion ratings.\n\n\nRESULTS\nBehaviorally, patients reported greater negative emotion than controls during social and physical threat but showed equivalent reduction in negative emotion following cognitive regulation. Neurally, viewing social threat resulted in greater emotion-related neural responses in patients than controls, with social anxiety symptom severity related to activity in a network of emotion- and attention-processing regions in patients only. Viewing physical threat produced no between-group differences. Regulation during social threat resulted in greater cognitive and attention regulation-related brain activation in controls compared with patients. Regulation during physical threat produced greater cognitive control-related response (ie, right dorsolateral prefrontal cortex) in patients compared with controls.\n\n\nCONCLUSIONS\nCompared with controls, patients demonstrated exaggerated negative emotion reactivity and reduced cognitive regulation-related neural activation, specifically for social threat stimuli. These findings help to elucidate potential neural mechanisms of emotion regulation that might serve as biomarkers for interventions for social anxiety disorder.",
"title": ""
},
{
"docid": "b13c9597f8de229fb7fec3e23c0694d1",
"text": "Using capture-recapture analysis we estimate the effective size of the active Amazon Mechanical Turk (MTurk) population that a typical laboratory can access to be about 7,300 workers. We also estimate that the time taken for half of the workers to leave the MTurk pool and be replaced is about 7 months. Each laboratory has its own population pool which overlaps, often extensively, with the hundreds of other laboratories using MTurk. Our estimate is based on a sample of 114,460 completed sessions from 33,408 unique participants and 689 sessions across seven laboratories in the US, Europe, and Australia from January 2012 to March 2015.",
"title": ""
},
{
"docid": "dc33d2edcfb124af607bcb817589f6e9",
"text": "In this letter, a novel coaxial line to substrate integrated waveguide (SIW) broadband transition is presented. The transition is designed by connecting the inner conductor of a coaxial line to an open-circuited SIW. The configuration directly transforms the TEM mode of a coaxial line to the fundamental TE10 mode of the SIW. A prototype back-to-back transition is fabricated for X-band operation using a 0.508 mm thick RO 4003C substrate with dielectric constant 3.55. Comparison with other reported transitions shows that the present structure provides lower passband insertion loss, wider bandwidth and most compact. The area of each transition is 0.08λg2 where λg is the guided wavelength at passband center frequency of f0 = 10.5 GHz. Measured 15 dB and 20 dB matching bandwidths are over 48% and 20%, respectively, at f0.",
"title": ""
},
{
"docid": "a4e6b629ec4b0fdf8784ba5be1a62260",
"text": "Today's real-world databases typically contain millions of items with many thousands of fields. As a result, traditional distribution-based outlier detection techniques have more and more restricted capabilities and novel k-nearest neighbors based approaches have become more and more popular. However, the problems with these k-nearest neighbors rankings for top n outliers, are very computationally expensive for large datasets, and doubts exist in general whether they would work well for high dimensional datasets. To partially circumvent these problems, we propose in this paper a new global outlier factor and a new local outlier factor and an efficient outlier detection algorithm developed upon them that is easy to implement and can provide competing performances with existing solutions. Experiments performed on both synthetic and real data sets demonstrate the efficacy of our method. & 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "494d720d5a8c7c58b795c5c6131fa8d1",
"text": "The increasing emergence of pervasive information systems requires a clearer understanding of the underlying characteristics in relation to user acceptance. Based on the integration of UTAUT2 and three pervasiveness constructs, we derived a comprehensive research model to account for pervasive information systems. Data collected from 346 participants in an online survey was analyzed to test the developed model using structural equation modeling and taking into account multigroup analysis. The results confirm the applicability of the integrated UTAUT2 model to measure pervasiveness. Implications for research and practice are discussed together with future research opportunities.",
"title": ""
},
{
"docid": "d94d49cde6878e0841c1654090062559",
"text": "In previous work we described a method for compactly representing graphs with small separators, which makes use of small separators, and presented preliminary experimental results. In this paper we extend the experimental results in several ways, including extensions for dynamic insertion and deletion of edges, a comparison of a variety of coding schemes, and an implementation of two applications using the representation. The results show that the representation is quite effective for a wide variety of real-world graphs, including graphs from finite-element meshes, circuits, street maps, router connectivity, and web links. In addition to significantly reducing the memory requirements, our implementation of the representation is faster than standard representations for queries. The byte codes we introduce lead to DFT times that are a factor of 2.5 faster than our previous results with gamma codes and a factor of between 1 and 9 faster than adjacency lists, while using a factor of between 3 and 6 less space.",
"title": ""
},
{
"docid": "0e45e57b4e799ebf7e8b55feded7e9e1",
"text": "IMPORTANCE\nIt is increasingly evident that Parkinson disease (PD) is not a single entity but rather a heterogeneous neurodegenerative disorder.\n\n\nOBJECTIVE\nTo evaluate available evidence, based on findings from clinical, imaging, genetic and pathologic studies, supporting the differentiation of PD into subtypes.\n\n\nEVIDENCE REVIEW\nWe performed a systematic review of articles cited in PubMed between 1980 and 2013 using the following search terms: Parkinson disease, parkinsonism, tremor, postural instability and gait difficulty, and Parkinson disease subtypes. The final reference list was generated on the basis of originality and relevance to the broad scope of this review.\n\n\nFINDINGS\nSeveral subtypes, such as tremor-dominant PD and postural instability gait difficulty form of PD, have been found to cluster together. Other subtypes also have been identified, but validation by subtype-specific biomarkers is still lacking.\n\n\nCONCLUSIONS AND RELEVANCE\nSeveral PD subtypes have been identified, but the pathogenic mechanisms underlying the observed clinicopathologic heterogeneity in PD are still not well understood. Further research into subtype-specific diagnostic and prognostic biomarkers may provide insights into mechanisms of neurodegeneration and improve epidemiologic and therapeutic clinical trial designs.",
"title": ""
},
{
"docid": "0218c583a8658a960085ddf813f38dbf",
"text": "The null-hypothesis significance-test procedure (NHSTP) is defended in the context of the theory-corroboration experiment, as well as the following contrasts: (a) substantive hypotheses versus statistical hypotheses, (b) theory corroboration versus statistical hypothesis testing, (c) theoretical inference versus statistical decision, (d) experiments versus nonexperimental studies, and (e) theory corroboration versus treatment assessment. The null hypothesis can be true because it is the hypothesis that errors are randomly distributed in data. Moreover, the null hypothesis is never used as a categorical proposition. Statistical significance means only that chance influences can be excluded as an explanation of data; it does not identify the nonchance factor responsible. The experimental conclusion is drawn with the inductive principle underlying the experimental design. A chain of deductive arguments gives rise to the theoretical conclusion via the experimental conclusion. The anomalous relationship between statistical significance and the effect size often used to criticize NHSTP is more apparent than real. The absolute size of the effect is not an index of evidential support for the substantive hypothesis. Nor is the effect size, by itself, informative as to the practical importance of the research result. Being a conditional probability, statistical power cannot be the a priori probability of statistical significance. The validity of statistical power is debatable because statistical significance is determined with a single sampling distribution of the test statistic based on H0, whereas it takes two distributions to represent statistical power or effect size. Sample size should not be determined in the mechanical manner envisaged in power analysis. It is inappropriate to criticize NHSTP for nonstatistical reasons. At the same time, neither effect size, nor confidence interval estimate, nor posterior probability can be used to exclude chance as an explanation of data. Neither can any of them fulfill the nonstatistical functions expected of them by critics.",
"title": ""
},
{
"docid": "1b5fc0a7b39bedcac9bdc52584fb8a22",
"text": "Neem (Azadirachta indica) is a medicinal plant of containing diverse chemical active substances of several biological properties. So, the aim of the current investigation was to assess the effects of water leaf extract of neem plant on the survival and healthy status of Nile tilapia (Oreochromis niloticus), African cat fish (Clarias gariepinus) and zooplankton community. The laboratory determinations of lethal concentrations (LC 100 and LC50) through a static bioassay test were performed. The 24 h LC100 of neem leaf extract was estimated as 4 and 11 g/l, for juvenile's O. niloticus and C. gariepinus, respectively, while, the 96-h LC50 was 1.8 and 4 g/l, respectively. On the other hand, the 24 h LC100 for cladocera and copepoda were 0.25 and 0.45 g/l, respectively, while, the 96-h LC50 was 0.1 and 0.2 g/l, respectively. At the highest test concentrations, adverse effects were obvious with significant reductions in several cladoceran and copepod species. Some alterations in glucose levels, total protein, albumin, globulin as well as AST and ALT in plasma of treated O. niloticus and C. gariepinus with /2 and /10 LC50 of neem leaf water extract compared with non-treated one after 2 and 7 days of exposure were recorded and discussed. It could be concluded that the application of neem leaf extract can be used to control unwanted organisms in ponds as environment friendly material instead of deleterious pesticides. Also, extensive investigations should be established for the suitable methods of application in aquatic animal production facilities to be fully explored in future.",
"title": ""
},
{
"docid": "cd4e2e3af17cd84d4ede35807e71e783",
"text": "A proposal for saliency computation within the visual cortex is put forth based on the premise that localized saliency computation serves to maximize information sampled from one's environment. The model is built entirely on computational constraints but nevertheless results in an architecture with cells and connectivity reminiscent of that appearing in the visual cortex. It is demonstrated that a variety of visual search behaviors appear as emergent properties of the model and therefore basic principles of coding and information transmission. Experimental results demonstrate greater efficacy in predicting fixation patterns across two different data sets as compared with competing models.",
"title": ""
},
{
"docid": "f73cd33c8dfc9791558b239aede6235b",
"text": "Web clustering engines organize search results by topic, thus offering a complementary view to the flat-ranked list returned by conventional search engines. In this survey, we discuss the issues that must be addressed in the development of a Web clustering engine, including acquisition and preprocessing of search results, their clustering and visualization. Search results clustering, the core of the system, has specific requirements that cannot be addressed by classical clustering algorithms. We emphasize the role played by the quality of the cluster labels as opposed to optimizing only the clustering structure. We highlight the main characteristics of a number of existing Web clustering engines and also discuss how to evaluate their retrieval performance. Some directions for future research are finally presented.",
"title": ""
},
{
"docid": "4dba2a9a29f58b55a6b2c3101acf2437",
"text": "Clinical and neurobiological findings have reported the involvement of endocannabinoid signaling in the pathophysiology of schizophrenia. This system modulates dopaminergic and glutamatergic neurotransmission that is associated with positive, negative, and cognitive symptoms of schizophrenia. Despite neurotransmitter impairments, increasing evidence points to a role of glial cells in schizophrenia pathobiology. Glial cells encompass three main groups: oligodendrocytes, microglia, and astrocytes. These cells promote several neurobiological functions, such as myelination of axons, metabolic and structural support, and immune response in the central nervous system. Impairments in glial cells lead to disruptions in communication and in the homeostasis of neurons that play role in pathobiology of disorders such as schizophrenia. Therefore, data suggest that glial cells may be a potential pharmacological tool to treat schizophrenia and other brain disorders. In this regard, glial cells express cannabinoid receptors and synthesize endocannabinoids, and cannabinoid drugs affect some functions of these cells that can be implicated in schizophrenia pathobiology. Thus, the aim of this review is to provide data about the glial changes observed in schizophrenia, and how cannabinoids could modulate these alterations.",
"title": ""
},
{
"docid": "e2807120a8a04a9c5f5f221e413aec4d",
"text": "Background A military aircraft in a hostile environment may need to use radar jamming in order to avoid being detected or engaged by the enemy. Effective jamming can require knowledge of the number and type of enemy radars; however, the radar receiver on the aircraft will observe a single stream of pulses from all radar emitters combined. It is advantageous to separate this collection of pulses into individual streams each corresponding to a particular emitter in the environment; this process is known as pulse deinterleaving. Pulse deinterleaving is critical for effective electronic warfare (EW) signal processing such as electronic attack (EA) and electronic protection (EP) because it not only aids in the identification of enemy radars but also permits the intelligent allocation of processing resources.",
"title": ""
},
{
"docid": "6a470404c36867a18a98fafa9df6848f",
"text": "Memory links use variable-impedance drivers, feed-forward equalization (FFE) [1], on-die termination (ODT) and slew-rate control to optimize the signal integrity (SI). An asymmetric DRAM link configuration exploits the availability of a fast CMOS technology on the memory controller side to implement powerful equalization, while keeping the circuit complexity on the DRAM side relatively simple. This paper proposes the use of Tomlinson Harashima precoding (THP) [2-4] in a memory controller as replacement of the afore-mentioned SI optimization techniques. THP is a transmitter equalization technique in which post-cursor inter-symbol interference (ISI) is cancelled by means of an infinite impulse response (IIR) filter with modulo-based amplitude limitation; similar to a decision feedback equalizer (DFE) on the receive side. However, in contrast to a DFE, THP does not suffer from error propagation.",
"title": ""
},
{
"docid": "570e48e839bd2250473d4332adf2b53f",
"text": "Autologous stem cell transplant can be a curative therapy to restore normal hematopoiesis after myeloablative treatments in patients with malignancies. Aim: To evaluate the effect of rehabilitation program for caregivers about patients’ post autologous bone marrow transplantation Research Design: A quasi-experimental design was used. Setting: The study was conducted in Sheikh Zayed Specialized Hospital at Oncology Outpatient Clinic of Bone Marrow Transplantation Unit. Sample: A purposive sample comprised; a total number of 60 patients, their age ranged from 21 to 50 years, free from any other chronic disease and the caregivers are living with the patients in the same home. Tools: Two tools were used for data collection. First tool: An interviewing autologous bone marrow transplantation questionnaire for the patients and their caregivers was divided into five parts; Including: Socio-demographic data, knowledge of caregivers regarding autologous bone marrow transplant and side effect of chemotherapy, family caregivers’ practices according to their providing care related to post bone marrow transplantation, signs and symptoms, activities of daily living for patients and home environmental sanitation for the patients. Second tool: deals with physical examination assessment of the patients from head to toe. Results: 61.7% of patients aged 30˂40 years, and 68.3 % were female. Regarding the type of relationship with the patients, 48.3% were the mother, 58.3% of patients who underwent autologous bone marrow transplantation had a sanitary environment and there were highly statistically significant differences between caregivers’ knowledge and practices pre/post program. Conclusion: There were highly statistically significant differences between family caregivers' total knowledge, their practices, as well as their total caregivers’ knowledge, practices and patients’ independency level pre/post rehabilitation program. . Recommendations: Counseling for family caregivers of patients who underwent autologous bone marrow transplantation and carrying out rehabilitation program for the patients and their caregivers to be performed properly during the rehabilitation period at caner hospitals such as 57357 Hospital and The National Cancer Institute in Cairo.",
"title": ""
}
] |
scidocsrr
|
a0e335a8701573127983e2681022b9d4
|
Quantified Markov Logic Networks
|
[
{
"docid": "54537c242bc89fbf15d9191be80c5073",
"text": "In the propositional setting, the marginal problem is to find a (maximum-entropy) distribution that has some given marginals. We study this problem in a relational setting and make the following contributions. First, we compare two different notions of relational marginals. Second, we show a duality between the resulting relational marginal problems and the maximum likelihood estimation of the parameters of relational models, which generalizes a well-known duality from the propositional setting. Third, by exploiting the relational marginal formulation, we present a statistically sound method to learn the parameters of relational models that will be applied in settings where the number of constants differs between the training and test data. Furthermore, based on a relational generalization of marginal polytopes, we characterize cases where the standard estimators based on feature’s number of true groundings needs to be adjusted and we quantitatively characterize the consequences of these adjustments. Fourth, we prove bounds on expected errors of the estimated parameters, which allows us to lower-bound, among other things, the effective sample size of relational training data.",
"title": ""
}
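The propositional duality that the passage above generalizes is usually stated as follows: the maximum-entropy distribution matching given feature marginals is log-linear, and fitting its parameters coincides with maximum-likelihood estimation. A sketch of that standard propositional formulation (the notation is chosen here, not taken from the paper):

```latex
\begin{aligned}
\max_{P}\;& H(P) = -\sum_{x} P(x)\,\log P(x) \\
\text{s.t.}\;& \sum_{x} P(x)\,f_i(x) = \mu_i, \quad i = 1,\dots,m,
  \qquad \sum_{x} P(x) = 1,\;\; P(x) \ge 0 ,
\end{aligned}
```

whose solution has the log-linear form $P_\theta(x) \propto \exp\big(\sum_i \theta_i f_i(x)\big)$; recovering the multipliers $\theta$ is equivalent to maximum-likelihood estimation of that log-linear model when the targets $\mu_i$ are empirical feature averages.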
] |
[
{
"docid": "233c63982527a264b91dfb885361b657",
"text": "One unfortunate consequence of the success story of wireless sensor networks (WSNs) in separate research communities is an evergrowing gap between theory and practice. Even though there is a increasing number of algorithmic methods for WSNs, the vast majority has never been tried in practice; conversely, many practical challenges are still awaiting efficient algorithmic solutions. The main cause for this discrepancy is the fact that programming sensor nodes still happens at a very technical level. We remedy the situation by introducing Wiselib, our algorithm library that allows for simple implementations of algorithms onto a large variety of hardware and software. This is achieved by employing advanced C++ techniques such as templates and inline functions, allowing to write generic code that is resolved and bound at compile time, resulting in virtually no memory or computation overhead at run time. The Wiselib runs on different host operating systems, such as Contiki, iSense OS, and ScatterWeb. Furthermore, it runs on virtual nodes simulated by Shawn. For any algorithm, the Wiselib provides data structures that suit the specific properties of the target platform. Algorithm code does not contain any platform-specific specializations, allowing a single implementation to run natively on heterogeneous networks. In this paper, we describe the building blocks of the Wiselib, and analyze the overhead. We demonstrate the effectiveness of our approach by showing how routing algorithms can be implemented. We also report on results from experiments with real sensor-node hardware.",
"title": ""
},
{
"docid": "98cfa94144ddcc5caf2a06dab8872de4",
"text": "Protocols this text provides a very helpful especially for students teachers. I was like new provides, academic researchers shows. All topics related to be comfortable with excellent comprehensive reference section is on communications issues. Provides academic researchers he has, published numerous papers and applications free. This book of wireless sensor networks there is on ad hoc networks. Shows which circumstances they generally walk through of references.",
"title": ""
},
{
"docid": "99c29c6cacb623a857817c412d6d9515",
"text": "Considering the rapid growth of China’s elderly rural population, establishing both an adequate and a financially sustainable rural pension system is a major challenge. Focusing on financial sustainability, this article defines this concept of financial sustainability before constructing sound actuarial models for China’s rural pension system. Based on these models and statistical data, the analysis finds that the rural pension funding gap should rise from 97.80 billion Yuan in 2014 to 3062.31 billion Yuan in 2049, which represents an annual growth rate of 10.34%. This implies that, as it stands, the rural pension system in China is not financially sustainable. Finally, the article explains how this problem could be fixed through policy recommendations based on recent international experiences.",
"title": ""
},
{
"docid": "7bd091ed5539b90e5864308895b0d5d4",
"text": "We discuss the design of a high-performance field programmable gate array (FPGA) architecture that efficiently prototypes asynchronous (clockless) logic. In this FPGA architecture, low-level application logic is described using asynchronous dataflow functions that obey a token-based compute model. We implement these dataflow functions using finely pipelined asynchronous circuits that achieve high computation rates. This asynchronous dataflow FPGA architecture maintains most of the performance benefits of a custom asynchronous design, while also providing postfabrication logic reconfigurability. We report results for two asynchronous dataflow FPGA designs that operate at up to 400 MHz in a typical TSMC 0.25 /spl mu/m CMOS process.",
"title": ""
},
{
"docid": "413d0b457cc1b96bf65d8a3e1c98ed41",
"text": "Peer-to-peer (P2P) lending is a fast growing financial technology (FinTech) trend that is displacing traditional retail banking. Studies on P2P lending have focused on predicting individual interest rates or default probabilities. However, the relationship between aggregated P2P interest rates and the general economy will be of interest to investors and borrowers as the P2P credit market matures. We show that the variation in P2P interest rates across grade types are determined by three macroeconomic latent factors formed by Canonical Correlation Analysis (CCA) — macro default, investor uncertainty, and the fundamental value of the market. However, the variation in P2P interest rates across term types cannot be explained by the general economy.",
"title": ""
},
{
"docid": "c2df8cc7775bd4ec2bfdf4498d136c9f",
"text": "Particle Swarm Optimization is a popular heuristic search algorithm which is inspired by the social learning of birds or fishes. It is a swarm intelligence technique for optimization developed by Eberhart and Kennedy [1] in 1995. Inertia weight is an important parameter in PSO, which significantly affects the convergence and exploration-exploitation trade-off in PSO process. Since inception of Inertia Weight in PSO, a large number of variations of Inertia Weight strategy have been proposed. In order to propose one or more than one Inertia Weight strategies which are efficient than others, this paper studies 15 relatively recent and popular Inertia Weight strategies and compares their performance on 05 optimization test problems.",
"title": ""
},
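Since the record above does not spell out the update equations, here is a minimal sketch of the canonical PSO velocity update with one popular inertia weight strategy (linear decrease); the parameter values and the sphere-function example are illustrative assumptions, not settings from the paper.

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, w_start=0.9, w_end=0.4,
        c1=2.0, c2=2.0, bounds=(-5.0, 5.0), seed=0):
    """Basic PSO with a linearly decreasing inertia weight."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()            # global best position
    for t in range(iters):
        # inertia weight decreases linearly from w_start to w_end
        w = w_start - (w_start - w_end) * t / max(iters - 1, 1)
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

# Example: minimize the sphere function in 5 dimensions
best_x, best_val = pso(lambda p: float((p ** 2).sum()), dim=5)
```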
{
"docid": "c09256d7daaff6e2fc369df0857a3829",
"text": "Violence is a serious problems for cities like Chicago and has been exacerbated by the use of social media by gang-involved youths for taunting rival gangs. We present a corpus of tweets from a young and powerful female gang member and her communicators, which we have annotated with discourse intention, using a deep read to understand how and what triggered conversations to escalate into aggression. We use this corpus to develop a part-of-speech tagger and phrase table for the variant of English that is used, as well as a classifier for identifying tweets that express grieving and aggression.",
"title": ""
},
{
"docid": "b7b3690f547e479627cc1262ae080b8f",
"text": "This article investigates the vulnerabilities of Supervisory Control and Data Acquisition (SCADA) systems which monitor and control the modern day irrigation canal systems. This type of monitoring and control infrastructure is also common for many other water distribution systems. We present a linearized shallow water partial differential equation (PDE) system that can model water flow in a network of canal pools which are equipped with lateral offtakes for water withdrawal and are connected by automated gates. The knowledge of the system dynamics enables us to develop a deception attack scheme based on switching the PDE parameters and proportional (P) boundary control actions, to withdraw water from the pools through offtakes. We briefly discuss the limits on detectability of such attacks. We use a known formulation based on low frequency approximation of the PDE model and an associated proportional integral (PI) controller, to create a stealthy deception scheme capable of compromising the performance of the closed-loop system. We test the proposed attack scheme in simulation, using a shallow water solver; and show that the attack is indeed realizable in practice by implementing it on a physical canal in Southern France: the Gignac canal. A successful field experiment shows that the attack scheme enables us to steal water stealthily from the canal until the end of the attack.",
"title": ""
},
{
"docid": "933a43bb4564a683415da49009626ce7",
"text": "In recent years, deep learning methods applying unsupervised learning to train deep layers of neural networks have achieved remarkable results in numerous fields. In the past, many genetic algorithms based methods have been successfully applied to training neural networks. In this paper, we extend previous work and propose a GA-assisted method for deep learning. Our experimental results indicate that this GA-assisted approach improves the performance of a deep autoencoder, producing a sparser neural network.",
"title": ""
},
{
"docid": "6c75e0532f637448cdec57bf30e76a4e",
"text": "A wide range of machine learning problems, including astronomical inference about galaxy clusters, natural image scene classification, parametric statistical inference, and predictions of public opinion, can be well-modeled as learning a function on (samples from) distributions. This thesis explores problems in learning such functions via kernel methods. The first challenge is one of computational efficiency when learning from large numbers of distributions: the computation of typicalmethods scales between quadratically and cubically, and so they are not amenable to large datasets. We investigate the approach of approximate embeddings into Euclidean spaces such that inner products in the embedding space approximate kernel values between the source distributions. We present a new embedding for a class of information-theoretic distribution distances, and evaluate it and existing embeddings on several real-world applications. We also propose the integration of these techniques with deep learning models so as to allow the simultaneous extraction of rich representations for inputs with the use of expressive distributional classifiers. In a related problem setting, common to astrophysical observations, autonomous sensing, and electoral polling, we have the following challenge: when observing samples is expensive, but we can choose where we would like to do so, how do we pick where to observe? We propose the development of a method to do so in the distributional learning setting (which has a natural application to astrophysics), as well as giving a method for a closely related problem where we search for instances of patterns by making point observations. Our final challenge is that the choice of kernel is important for getting good practical performance, but how to choose a good kernel for a given problem is not obvious. We propose to adapt recent kernel learning techniques to the distributional setting, allowing the automatic selection of good kernels for the task at hand. Integration with deep networks, as previously mentioned, may also allow for learning the distributional distance itself. Throughout, we combine theoretical results with extensive empirical evaluations to increase our understanding of the methods.",
"title": ""
},
{
"docid": "129759aca269b13c80270d2ba7311648",
"text": "Although the Capsule Network (CapsNet) has a better proven performance for the recognition of overlapping digits than Convolutional Neural Networks (CNNs), a large number of matrix-vector multiplications between lower-level and higher-level capsules impede efficient implementation of the CapsNet on conventional hardware platforms. Since three-dimensional (3-D) memristor crossbars provide a compact and parallel hardware implementation of neural networks, this paper provides an architecture design to accelerate convolutional and matrix operations of the CapsNet. By using 3-D memristor crossbars, the PrimaryCaps, DigitCaps, and convolutional layers of a CapsNet perform the matrix-vector multiplications in a highly parallel way. Simulations are conducted to recognize digits from the USPS database and to analyse the work efficiency of the proposed circuits. The proposed design provides a new approach to implement the CapsNet on memristor-based circuits.",
"title": ""
},
{
"docid": "8093219e7e2b4a7067f8d96118a5ea93",
"text": "We model knowledge graphs for their completion by encoding each entity and relation into a numerical space. All previous work including Trans(E, H, R, and D) ignore the heterogeneity (some relations link many entity pairs and others do not) and the imbalance (the number of head entities and that of tail entities in a relation could be different) of knowledge graphs. In this paper, we propose a novel approach TranSparse to deal with the two issues. In TranSparse, transfer matrices are replaced by adaptive sparse matrices, whose sparse degrees are determined by the number of entities (or entity pairs) linked by relations. In experiments, we design structured and unstructured sparse patterns for transfer matrices and analyze their advantages and disadvantages. We evaluate our approach on triplet classification and link prediction tasks. Experimental results show that TranSparse outperforms Trans(E, H, R, and D) significantly, and achieves state-ofthe-art performance.",
"title": ""
},
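The adaptive sparse-degree construction of TranSparse is not reproduced in the abstract above, so the sketch below only illustrates the shared machinery of the Trans(E/R/...) family it mentions: a (possibly sparse) projection of head and tail entities followed by a translation-based score. The fixed sparsity mask, dimensions, and function names are assumptions for illustration, not the paper's formulation.

```python
import numpy as np

def translation_score(h, r, t, M_head=None, M_tail=None):
    """Translation-based score: project head and tail entities (identity
    projection recovers TransE) and measure ||h' + r - t'||_1.
    Lower scores mean the triplet (h, r, t) is more plausible."""
    h_p = h if M_head is None else M_head @ h
    t_p = t if M_tail is None else M_tail @ t
    return np.linalg.norm(h_p + r - t_p, ord=1)

def sparse_transfer(dim, sparse_degree, rng):
    """Illustrative sparse transfer matrix: a fraction `sparse_degree` of
    its entries is forced to zero (a stand-in for TranSparse's adaptive
    sparse matrices, whose degrees depend on how many entities a
    relation links)."""
    M = rng.normal(scale=0.1, size=(dim, dim))
    mask = rng.random((dim, dim)) >= sparse_degree
    return M * mask

rng = np.random.default_rng(0)
dim = 50
h, r, t = (rng.normal(size=dim) for _ in range(3))
M = sparse_transfer(dim, sparse_degree=0.7, rng=rng)
print(translation_score(h, r, t, M_head=M, M_tail=M))
```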
{
"docid": "9e6899c27ea5ada89373c59617c57287",
"text": "In order to provide location information for indoor applications and context-aware computing, a lot of research is being done since last decade for development of real-time Indoor location system. In this paper, we have investigated indoor location concepts and have focused two major technologies used in many indoor location systems i.e. RF and ultrasonic. An overview of various RF systems that use different RF properties for location estimation has been given. Ultrasonic systems have been reviewed in detail as they provide low cost fine grained location systems. A few well known ultrasonic location systems have been investigated with a comparison of the system based on performance, accuracy and limitations.",
"title": ""
},
{
"docid": "0808637a7768609502b63bff5ffda1cb",
"text": "Blur is a key determinant in the perception of image quality. Generally, blur causes spread of edges, which leads to shape changes in images. Discrete orthogonal moments have been widely studied as effective shape descriptors. Intuitively, blur can be represented using discrete moments since noticeable blur affects the magnitudes of moments of an image. With this consideration, this paper presents a blind image blur evaluation algorithm based on discrete Tchebichef moments. The gradient of a blurred image is first computed to account for the shape, which is more effective for blur representation. Then the gradient image is divided into equal-size blocks and the Tchebichef moments are calculated to characterize image shape. The energy of a block is computed as the sum of squared non-DC moment values. Finally, the proposed image blur score is defined as the variance-normalized moment energy, which is computed with the guidance of a visual saliency model to adapt to the characteristic of human visual system. The performance of the proposed method is evaluated on four public image quality databases. The experimental results demonstrate that our method can produce blur scores highly consistent with subjective evaluations. It also outperforms the state-of-the-art image blur metrics and several general-purpose no-reference quality metrics.",
"title": ""
},
{
"docid": "1fd5eefc05bbaec695449b8cfe3f8b3b",
"text": "-sided region with G ɛ continuous NURBS patches that interpolate boundary curves and approximate given cross-boundary derivatives. The NURBS surfaces joining along inner or boundary curves have normal vectors that do not deviate more than the user-specified angular tolerance ɛ. The method is general in that there are no restrictions on the number of boundary curves, and the cross-boundary derivatives can be specified independently. To satisfy all conditions, only one degree elevation is needed.",
"title": ""
},
{
"docid": "18defc8666f7fea7ae89ff3d5d833e0a",
"text": "[1] We present a new approach to extracting spatially and temporally continuous ground deformation fields from interferometric synthetic aperture radar (InSAR) data. We focus on unwrapped interferograms from a single viewing geometry, estimating ground deformation along the line-of-sight. Our approach is based on a wavelet decomposition in space and a general parametrization in time. We refer to this approach as MInTS (Multiscale InSAR Time Series). The wavelet decomposition efficiently deals with commonly seen spatial covariances in repeat-pass InSAR measurements, since the coefficients of the wavelets are essentially spatially uncorrelated. Our time-dependent parametrization is capable of capturing both recognized and unrecognized processes, and is not arbitrarily tied to the times of the SAR acquisitions. We estimate deformation in the wavelet-domain, using a cross-validated, regularized least squares inversion. We include a model-resolution-based regularization, in order to more heavily damp the model during periods of sparse SAR acquisitions, compared to during times of dense acquisitions. To illustrate the application of MInTS, we consider a catalog of 92 ERS and Envisat interferograms, spanning 16 years, in the Long Valley caldera, CA, region. MInTS analysis captures the ground deformation with high spatial density over the Long Valley region.",
"title": ""
},
{
"docid": "a579a45a917999f48846a29cd09a92f4",
"text": "Over the last fifty years, the “Big Five” model of personality traits has become a standard in psychology, and research has systematically documented correlations between a wide range of linguistic variables and the Big Five traits. A distinct line of research has explored methods for automatically generating language that varies along personality dimensions. We present PERSONAGE (PERSONAlity GEnerator), the first highly parametrizable language generator for extraversion, an important aspect of personality. We evaluate two personality generation methods: (1) direct generation with particular parameter settings suggested by the psychology literature; and (2) overgeneration and selection using statistical models trained from judge’s ratings. Results show that both methods reliably generate utterances that vary along the extraversion dimension, according to human judges.",
"title": ""
},
{
"docid": "b19e77ddb2c2ca5cc18bd8ba5425a698",
"text": "In pharmaceutical formulations, phospholipids obtained from plant or animal sources and synthetic phospholipids are used. Natural phospholipids are purified from, e.g., soybeans or egg yolk using non-toxic solvent extraction and chromatographic procedures with low consumption of energy and minimum possible waste. Because of the use of validated purification procedures and sourcing of raw materials with consistent quality, the resulting products differing in phosphatidylcholine content possess an excellent batch to batch reproducibility with respect to phospholipid and fatty acid composition. The natural phospholipids are described in pharmacopeias and relevant regulatory guidance documentation of the Food and Drug Administration (FDA) and European Medicines Agency (EMA). Synthetic phospholipids with specific polar head group, fatty acid composition can be manufactured using various synthesis routes. Synthetic phospholipids with the natural stereochemical configuration are preferably synthesized from glycerophosphocholine (GPC), which is obtained from natural phospholipids, using acylation and enzyme catalyzed reactions. Synthetic phospholipids play compared to natural phospholipid (including hydrogenated phospholipids), as derived from the number of drug products containing synthetic phospholipids, a minor role. Only in a few pharmaceutical products synthetic phospholipids are used. Natural phospholipids are used in oral, dermal, and parenteral products including liposomes. Natural phospholipids instead of synthetic phospholipids should be selected as phospholipid excipients for formulation development, whenever possible, because natural phospholipids are derived from renewable sources and produced with more ecologically friendly processes and are available in larger scale at relatively low costs compared to synthetic phospholipids. Practical applications: For selection of phospholipid excipients for pharmaceutical formulations, natural phospholipids are preferred compared to synthetic phospholipids because they are available at large scale with reproducible quality at lower costs of goods. They are well accepted by regulatory authorities and are produced using less chemicals and solvents at higher yields. In order to avoid scale up problems during pharmaceutical development and production, natural phospholipid excipients instead of synthetic phospholipids should be selected whenever possible.",
"title": ""
},
{
"docid": "704d068f791a8911068671cb3dca7d55",
"text": "Most models of visual search, whether involving overt eye movements or covert shifts of attention, are based on the concept of a saliency map, that is, an explicit two-dimensional map that encodes the saliency or conspicuity of objects in the visual environment. Competition among neurons in this map gives rise to a single winning location that corresponds to the next attended target. Inhibiting this location automatically allows the system to attend to the next most salient location. We describe a detailed computer implementation of such a scheme, focusing on the problem of combining information across modalities, here orientation, intensity and color information, in a purely stimulus-driven manner. The model is applied to common psychophysical stimuli as well as to a very demanding visual search task. Its successful performance is used to address the extent to which the primate visual system carries out visual search via one or more such saliency maps and how this can be tested.",
"title": ""
},
{
"docid": "ab662b1dd07a7ae868f70784408e1ce1",
"text": "We use autoencoders to create low-dimensional embeddings of underlying patient phenotypes that we hypothesize are a governing factor in determining how different patients will react to different interventions. We compare the performance of autoencoders that take fixed length sequences of concatenated timesteps as input with a recurrent sequence-to-sequence autoencoder. We evaluate our methods on around 35,500 patients from the latest MIMIC III dataset from Beth Israel Deaconess Hospital.",
"title": ""
}
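As a rough illustration of the first variant mentioned in the record above (an autoencoder over concatenated timesteps), here is a minimal PyTorch sketch; the layer sizes, embedding dimension, and the random stand-in for the MIMIC-III data are assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class PatientAutoencoder(nn.Module):
    """Fixed-length autoencoder: concatenated timesteps in, a
    low-dimensional phenotype embedding out (illustrative only)."""
    def __init__(self, n_features, n_timesteps, embed_dim=16):
        super().__init__()
        d_in = n_features * n_timesteps
        self.encoder = nn.Sequential(nn.Linear(d_in, 128), nn.ReLU(),
                                     nn.Linear(128, embed_dim))
        self.decoder = nn.Sequential(nn.Linear(embed_dim, 128), nn.ReLU(),
                                     nn.Linear(128, d_in))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

# Hypothetical usage on a (patients, timesteps, features) tensor
x = torch.randn(64, 24, 10).flatten(start_dim=1)    # concatenate timesteps
model = PatientAutoencoder(n_features=10, n_timesteps=24)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):
    recon, z = model(x)                              # z is the embedding
    loss = nn.functional.mse_loss(recon, x)
    opt.zero_grad(); loss.backward(); opt.step()
```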
] |
scidocsrr
|
e2f0d77afa8df2c9f8a7d81c6b376ccc
|
Quantifying Generalization in Reinforcement Learning
|
[
{
"docid": "17162eac4f1292e4c2ad7ef83af803f1",
"text": "Recent years have witnessed significant progresses in deep Reinforcement Learning (RL). Empowered with large scale neural networks, carefully designed architectures, novel training algorithms and massively parallel computing devices, researchers are able to attack many challenging RL problems. However, in machine learning, more training power comes with a potential risk of more overfitting. As deep RL techniques are being applied to critical problems such as healthcare and finance, it is important to understand the generalization behaviors of the trained agents. In this paper, we conduct a systematic study of standard RL agents and find that they could overfit in various ways. Moreover, overfitting could happen “robustly”: commonly used techniques in RL that add stochasticity do not necessarily prevent or detect overfitting. In particular, the same agents and learning algorithms could have drastically different test performance, even when all of them achieve optimal rewards during training. The observations call for more principled and careful evaluation protocols in RL. We conclude with a general discussion on overfitting in RL and a study of the generalization behaviors from the perspective of inductive bias.",
"title": ""
},
{
"docid": "6c35a65a231a66e6a5b49329450c98c7",
"text": "In this report, we present a new reinforcement learning (RL) benchmark based on the Sonic the HedgehogTM video game franchise. This benchmark is intended to measure the performance of transfer learning and few-shot learning algorithms in the RL domain. We also present and evaluate some baseline algorithms on the new benchmark.",
"title": ""
},
{
"docid": "af25bc1266003202d3448c098628aee8",
"text": "Convolutional neural networks are capable of learning powerful representational spaces, which are necessary for tackling complex learning tasks. However, due to the model capacity required to capture such representations, they are often susceptible to overfitting and therefore require proper regularization in order to generalize well. In this paper, we show that the simple regularization technique of randomly masking out square regions of input during training, which we call cutout, can be used to improve the robustness and overall performance of convolutional neural networks. Not only is this method extremely easy to implement, but we also demonstrate that it can be used in conjunction with existing forms of data augmentation and other regularizers to further improve model performance. We evaluate this method by applying it to current state-of-the-art architectures on the CIFAR10, CIFAR-100, and SVHN datasets, yielding new state-ofthe-art results of 2.56%, 15.20%, and 1.30% test error respectively. Code available at https://github.com/ uoguelph-mlrg/Cutout.",
"title": ""
}
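Cutout as described in the record above is simple to reproduce; the sketch below zeroes one randomly centred square per image, clipping the patch at the border. The function signature and mask size are illustrative choices rather than the paper's exact training pipeline.

```python
import numpy as np

def cutout(image, mask_size, rng=None):
    """Zero out one randomly positioned square patch of an (H, W, C) image.
    The patch centre may lie near the border, in which case the mask is
    clipped; illustrative sketch only."""
    if rng is None:
        rng = np.random.default_rng()
    h, w = image.shape[:2]
    cy, cx = rng.integers(h), rng.integers(w)        # random patch centre
    y1, y2 = max(cy - mask_size // 2, 0), min(cy + mask_size // 2, h)
    x1, x2 = max(cx - mask_size // 2, 0), min(cx + mask_size // 2, w)
    out = image.copy()
    out[y1:y2, x1:x2, ...] = 0                       # mask the square region
    return out

augmented = cutout(np.random.rand(32, 32, 3), mask_size=16)
```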
] |
[
{
"docid": "e681354e0423bcbeea534c5658d376a3",
"text": "Static Versus Dynamic Stretching Effect on Agility Performance",
"title": ""
},
{
"docid": "b582c9a07f473e980d1fd3d23bbb87a0",
"text": "I consider the problem of finding all the modes of a mixture of multivariate Gaussian distributions, which has applications in clustering and regression. I derive exact formulas for the gradient and Hessian and give a partial proof that the number of modes cannot be more than the number of components, and are contained in the convex hull of the component centroids. Then, I develop two exhaustive mode search algorithms: one based on combined quadratic maximisation and gradient ascent and the other one based on a fixed-point iterative scheme. Appropriate values for the search control parameters are derived by taking into account theoretical results regarding the bounds for the gradient and Hessian of the mixture. The significance of the modes is quantified locally (for each mode) by error bars, or confidence intervals (estimated using the values of the Hessian at each mode); and globally by the sparseness of the mixture, measured by its differential entropy (estimated through bounds). I conclude with some reflections about bump-finding.",
"title": ""
},
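The exact gradient formulas mentioned in the record above follow from differentiating the mixture density directly; for a mixture $p(x)=\sum_m \pi_m\,\mathcal{N}(x\mid\mu_m,\Sigma_m)$ the standard expressions are (the notation is chosen here):

```latex
\begin{aligned}
\nabla p(x) &= \sum_{m} \pi_m\,\mathcal{N}(x\mid\mu_m,\Sigma_m)\,\Sigma_m^{-1}(\mu_m - x),\\
\nabla p(x) = 0 \;\Longrightarrow\; x &= \Big(\sum_m w_m(x)\,\Sigma_m^{-1}\Big)^{-1} \sum_m w_m(x)\,\Sigma_m^{-1}\mu_m,
\qquad w_m(x) = \pi_m\,\mathcal{N}(x\mid\mu_m,\Sigma_m),
\end{aligned}
```

and iterating the second identity from different starting points gives a fixed-point mode-finding scheme of the kind the abstract describes.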
{
"docid": "99938bd08302e44839dbba06d54d2cc5",
"text": "Modeling the long-term facial aging process is extremely challenging due to the presence of large and non-linear variations during the face development stages. In order to efficiently address the problem, this work first decomposes the aging process into multiple short-term stages. Then, a novel generative probabilistic model, named Temporal Non-Volume Preserving (TNVP) transformation, is presented to model the facial aging process at each stage. Unlike Generative Adversarial Networks (GANs), which requires an empirical balance threshold, and Restricted Boltzmann Machines (RBM), an intractable model, our proposed TNVP approach guarantees a tractable density function, exact inference and evaluation for embedding the feature transformations between faces in consecutive stages. Our model shows its advantages not only in capturing the non-linear age related variance in each stage but also producing a smooth synthesis in age progression across faces. Our approach can model any face in the wild provided with only four basic landmark points. Moreover, the structure can be transformed into a deep convolutional network while keeping the advantages of probabilistic models with tractable log-likelihood density estimation. Our method is evaluated in both terms of synthesizing age-progressed faces and cross-age face verification and consistently shows the state-of-the-art results in various face aging databases, i.e. FG-NET, MORPH, AginG Faces in the Wild (AGFW), and Cross-Age Celebrity Dataset (CACD). A large-scale face verification on Megaface challenge 1 is also performed to further show the advantages of our proposed approach.",
"title": ""
},
{
"docid": "01ebfe5e28bfcd111a014d1a47743028",
"text": "In this paper, we propose a Cognitive Caching approach for the Future Fog (CCFF) that takes into consideration the value of the exchanged data in Information Centric Sensor Networks (ICSNs). Our approach depends on four functional parameters in ICSNs. These four main parameters are: age of the data, popularity of on-demand requests, delay to receive the requested information and data fidelity. These parameters are considered together to assign a value to the cached data while retaining the most valuable one in the cache for prolonged time periods. This CCFF approach provides significant availability for most valuable and difficult to retrieve data in the ICSNs. Extensive simulations and case studies have been examined in this research in order to compare to other dominant cache management frameworks in the literature under varying circumstances such as data popularity, cache size, data publisher load, and node connectivity degree. Formal fidelity and trust analysis has been applied as well to emphasize the effectiveness of CCFF in Fog paradigms, where edge devices can retrieve unsecured data from the authorized nodes in the cloud.",
"title": ""
},
{
"docid": "862a46f0888ab8c4b37d4e63be45eb08",
"text": "In this paper we examine the notion of adaptive user interfac s, interactive systems that invoke machine learning to improve their interact ion with humans. We review some previous work in this emerging area, ranging from softw are that filters information to systems that support more complex tasks like scheduling. After this, we describe three ongoing research efforts that extend this framework in new d irections. Finally, we review previous work that has addressed similar issues and conside r some challenges that are presented by the design of adaptive user interfaces. 1 The Need for Automated User Modeling As computers have become more widespread, the software that runs on them has also become more interactive and responsive. Only a few early users remember the days of programming on punch cards and submitting overnight jobs, and even the era of time-s haring systems and text editors has become a dim memory. Modern operating systems support a wid e range of interactive software, from WYSIWYG editors to spreadsheets to computer games, most em bedded in some form of graphical user interface. Such packages have become an essential part of bus iness and academic life, with millions of people depending on them to accomplish thei r daily goals. Naturally, the increased emphasis on interactive software has led to greater in ter st in the study of human-computer interaction. However, most research in this area h s focused on the manner in which computer interfaces present information and choices to the user , and thus tells only part of the story. An equally important issue, yet one that has receiv d much less attention, concerns thecontentthat the interface offers to the user. And a concern with content leads directl y to a focus onuser models , since it seems likely that people will differ in the content they prefer to encounter during their interactions with computers. Developers of software for the Internet are quite aware of the need for p ersonalized content, and many established portals on the World Wide Web provide simple too s f r filtering information. But these tools typically focus on a narrow class of applications an d require manual setting of parameters, a process that users are likely to find tedious. Moreover, som e facets of users’ preferences may be reflected in their behavior but not subject to introspection . Clearly, there is a need for increased personalization in many areas of interactive software, both i n supporting a greater variety of tasks and in ways to automate this process. This suggests turning to techniques from machine learning in order to personalize computer interfaces. ? Also affiliated with the Institute for the Study of Learning a nd Expertise and the Center for the Study of Language and Information at Stanford University. 358 USERMODELING AND ADAPTIVE INTERFACES In the rest of this paper, we examine the notion of adaptive user interfaces – systems that learn a user model from traces of interaction with that user. We start by defining ad aptive interfaces more precisely, drawing a close analogy with algorithms for machine lear ning. Next, we consider some examples of such software artifacts that have appeared in the literatur e, af e which we report on three research efforts that attempt to extend the basic framework in new directions. Finally, we discuss kinships between adaptive user interfaces and some sim ilar paradigms, then close with some challenges they pose for researchers and software developer s. 
2 Adaptive User Interfaces and Machine Learning For most readers, the basic idea of an adaptive user interface will already be cl ear, but for the sake of discussion, we should define this notion somewhat more preci sely: An adaptive user interface is a software artifact that improves its ability to interact with a user by constructing a user model based on partial experience with that user . This definition makes clear that an adaptive interface does not exist in isolat ion, but rather is designed to interact with a human user. Moreover, for the system to be adapt ive, it must improve its interaction with that user, and simple memorization of such interactio ns does not suffice. Rather, improvement should result from generalization over past experiences a d carry over to new user interactions. The above definition will seem familiar to some readers, and for good reason , since it takes the same form as common definitions of machine learning (e.g., Langley, 1 995). The main differences are that the user plays the role of the environment in which learning o ccurs, the user model takes the place of the learned knowledge base, and interaction with the user ser ves as the performance task on which learning should lead to improvement. In this view, adap tive user interfaces constitute a special class of learning systems that are designed to aid humans , in contrast with much of the early applied work on machine learning, which aimed to develop kn wledge-based systems that would replace domain experts. Despite this novel emphasis, many lessons acquired from these earlier appli cations of machine learning should prove relevant in the design of adaptive interfaces . The most important has been the realization that we are still far from entirely automating the lear ning process, and that some essential steps must still be done manually (Brodley and Smy th, 1997; Langley and Simon, 1995; Rudström, 1995). Briefly, to solve an applied probl em using established induction methods, the developer must typically: reformulate the problem in some form that these methods can directly addr ess; engineer a set of features that describe the training cases adequately; and devise some approach to collecting and preparing the training instances. Only after the developer has addressed these issues can he run some learning m ethod over the data to produce the desired domain knowledge or, in the case of an adaptive interface, the desired user model. Moreover, there is an emerging consensus within the applied learning commu nity that these steps of problem formulation, representation engineering, and data collect ion/preparation play a role at least as important as the induction stage itself. Indeed, there is a common belief that, once USERMODELING AND ADAPTIVE INTERFACES 359 they are handled well, the particular induction method one uses has littl e effect on the outcome (Langley and Simon, 1995). In contrast, most academic work on machine lear ning still focuses on refining induction techniques and downplays the steps that must occur b efore and after their invocation. Indeed, some research groups still emphasize differences between b road classes of learning methods, despite evidence that decision-tree induction, connect ionist algorithms, casebased methods, and probabilistic schemes often produce very similar resul ts. We will adopt the former viewpoint in our discussion of adaptive us r interfaces. 
As a result, we will have little to say about the particular learning methods used to construct and refine user models, but we will have comments about the formulation of the t ask, the features used to describe behavior, the source of data about user preferences, and similar is sues. This bias reflects our belief that strategies which have proved successful in other applicatio ns of machine learning will also serve us well in the design of adaptive interfaces. 3 Examples of Adaptive User Interfaces We can clarify the notion of an adaptive user interface by considering some e xamples that have appeared in the literature during recent years. Many of these systems focus on the generic task of information filtering, which involves directing a user’s attention toward items from a large s et that he is likely to find interesting or useful. Naturally, the most popul ar applications revolve around the World Wide Web, which provides both a wealth of information to fil ter and a convenient mechanism for interacting with users. However, the same basic techniques can be extended to broaderrecommendationtasks, such as suggesting products a consumer might want to buy. One example comes from Pazzani, Muramatsu, and Billsus (1996), who descri be SYSKILL & W EBERT, an adaptive interface which recommends web pages on a given topic that a user should find interesting. Much like typical search engines, this system pr sents the user with a list of web pages, but it also labels those candidates it predicts the user will esp ecially like or dislike. Moreover, it lets the user mark pages as desirable or undesirable, and the sys tem records the marked pages as training data for learning the user’s preferences. S YSKILL & W EBERT encodes each user model in terms of the probabilities that certain words will occur gi ven that the person likes (or dislikes) the document. The system invokes the naive Bayesian classifier to learn these probabilities and to predict whether the user will find a particular page des irable. This general approach to selection and learning is often referred to as c ntent-based filtering . Briefly, this scheme represents each item with a set of descriptors, usually t he words that occur in a document, and the filtering system uses these descriptors as predictive f ea ures when deciding whether to recommend a document to the user. This biases the selection process toward documents that are similar to ones the user has previously ranked highly. Ot er examples of adaptive user interfaces that embody the content-based approach include Lang’s (1995) NEWSWEEDER, which recommends news stories, and Boone’s (1998) Re:Agent, which sugg ests actions for handling electronic mail. Of course, content-based methods are also widely used in arch engines for the World Wide Web, and they predominate in the literature on inf ormation retrieval, but these typically do not employ learning algorithms to construct users mo dels. Another example of an adaptive interface is Shardanand and Maes’ (1995) R INGO, an interactive syste",
"title": ""
},
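The content-based filtering scheme described in the passage above (a naive Bayes model over word occurrences, trained on pages the user has marked) can be sketched as follows; the tokenization, Laplace smoothing, and Bernoulli event model are assumptions made here for a runnable example, not details taken from SYSKILL & WEBERT.

```python
import math
from collections import Counter

def train_content_filter(liked_docs, disliked_docs):
    """Bernoulli naive Bayes over word occurrences (illustrative only)."""
    def word_probs(docs, vocab):
        counts = Counter(w for d in docs for w in set(d.lower().split()))
        # Laplace smoothing over document counts keeps every probability in (0, 1)
        return {w: (counts[w] + 1) / (len(docs) + 2) for w in vocab}
    vocab = {w for d in liked_docs + disliked_docs for w in d.lower().split()}
    return word_probs(liked_docs, vocab), word_probs(disliked_docs, vocab)

def predict_like(doc, p_like, p_dislike, prior_like=0.5):
    """Return True if the document is predicted to be liked."""
    words = set(doc.lower().split())
    log_like, log_dislike = math.log(prior_like), math.log(1 - prior_like)
    for w in p_like:
        if w in words:
            log_like += math.log(p_like[w])
            log_dislike += math.log(p_dislike[w])
        else:
            log_like += math.log(1 - p_like[w])
            log_dislike += math.log(1 - p_dislike[w])
    return log_like > log_dislike
```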
{
"docid": "0f1fd9d1daeea4f175f57c5b32c471fc",
"text": "An overview of cluster analysis techniques from a data mining point of view is given. This is done by a strict separation of the questions of various similarity and distance measures and related optimization criteria for clusterings from the methods to create and modify clusterings themselves. In addition to this general setting and overview, the second focus is used on discussions of the essential ingredients of the demographic cluster algorithm of IBM's Intelligent Miner, based Condorcet's criterion.",
"title": ""
},
{
"docid": "38b6660a0f246590ad97b75be074899d",
"text": "Technology has been playing a major role in our lives. One definition for technology is “all the knowledge, products, processes, tools, methods and systems employed in the creation of goods or in providing services”. This makes technological innovations raise the competitiveness between organizations that depend on supply chain and logistics in the global market. With increasing competitiveness, new challenges arise due to lack of information and assets tractability. This paper introduces three scenarios for solving these challenges using the Blockchain technology. In this work, Blockchain technology targets two main issues within the supply chain, namely, data transparency and resource sharing. These issues are reflected into the organization's strategies and",
"title": ""
},
{
"docid": "f835e60133415e3ec53c2c9490048172",
"text": "Probabilistic databases have received considerable attention recently due to the need for storing uncertain data produced by many real world applications. The widespread use of probabilistic databases is hampered by two limitations: (1) current probabilistic databases make simplistic assumptions about the data (e.g., complete independence among tuples) that make it difficult to use them in applications that naturally produce correlated data, and (2) most probabilistic databases can only answer a restricted subset of the queries that can be expressed using traditional query languages. We address both these limitations by proposing a framework that can represent not only probabilistic tuples, but also correlations that may be present among them. Our proposed framework naturally lends itself to the possible world semantics thus preserving the precise query semantics extant in current probabilistic databases. We develop an efficient strategy for query evaluation over such probabilistic databases by casting the query processing problem as an inference problem in an appropriately constructed probabilistic graphical model. We present several optimizations specific to probabilistic databases that enable efficient query evaluation. We validate our approach by presenting an experimental evaluation that illustrates the effectiveness of our techniques at answering various queries using real and synthetic datasets.",
"title": ""
},
{
"docid": "72682ac5c2ec0a1ad1f211f3de562062",
"text": "Red blood cell (RBC) aggregation is greatly affected by cell deformability and reduced deformability and increased RBC aggregation are frequently observed in hypertension, diabetes mellitus, and sepsis, thus measurement of both these parameters is essential. In this study, we investigated the effects of cell deformability and fibrinogen concentration on disaggregating shear stress (DSS). The DSS was measured with varying cell deformability and geometry. The deformability of cells was gradually decreased with increasing concentration of glutaraldehyde (0.001~0.005%) or heat treatment at 49.0°C for increasing time intervals (0~7 min), which resulted in a progressive increase in the DSS. However, RBC rigidification by either glutaraldehyde or heat treatment did not cause the same effect on RBC aggregation as deformability did. The effect of cell deformability on DSS was significantly increased with an increase in fibrinogen concentration (2~6 g/L). These results imply that reduced cell deformability and increased fibrinogen levels play a synergistic role in increasing DSS, which could be used as a novel independent hemorheological index to characterize microcirculatory diseases, such as diabetic complications with high sensitivity.",
"title": ""
},
{
"docid": "31dfedb06716502fcf33871248fd7e9e",
"text": "Multi-sensor precipitation datasets including two products from the Tropical Rainfall Measuring Mission (TRMM) Multi-satellite Precipitation Analysis (TMPA) and estimates from Climate Prediction Center Morphing Technique (CMORPH) product were quantitatively evaluated to study the monsoon variability over Pakistan. Several statistical and graphical techniques are applied to illustrate the nonconformity of the three satellite products from the gauge observations. During the monsoon season (JAS), the three satellite precipitation products captures the intense precipitation well, all showing high correlation for high rain rates (>30 mm/day). The spatial and temporal satellite rainfall error variability shows a significant geo-topography dependent distribution, as all the three products overestimate over mountain ranges in the north and coastal region in the south parts of Indus basin. The TMPA-RT product tends to overestimate light rain rates (approximately 100%) and the bias is low for high rain rates (about ±20%). In general, daily comparisons from 2005 to 2010 show the best agreement between the TMPA-V7 research product and gauge observations with correlation coefficient values ranging from moderate (0.4) to high (0.8) over the spatial domain of Pakistan. The seasonal variation of rainfall frequency has large biases (100–140%) over high latitudes (36N) with complex terrain for daily, monsoon, and pre-monsoon comparisons. Relatively low uncertainties and errors (Bias ±25% and MAE 1–10 mm) were associated with the TMPA-RT product during the monsoon-dominated region (32–35N), thus demonstrating their potential use for developing an operational hydrological application of the satellite-based near real-time products in Pakistan for flood monitoring. 2014 COSPAR. Published by Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "a12b90cf2cf6927a8eb177776a5dc5ef",
"text": "Modern induction heating (IH) power converters operate marginally above resonant frequency to supply power to the targeted work-piece at near unity power factor (upf). The resonant frequency continuously increases as the work-piece gets heated up. A possible method of dynamically detecting the resonant frequency may be to calculate the phase-shift between current and voltage continuously during the process. The phase-shift between voltage and current is ideally zero at resonance. So resonant frequency may be identified by varying the frequency until the phase-shift is zero. For this some controllers use a phase-locked loop (PLL) strategy. In this paper, the dynamic tracking of resonant frequency, using a field-programmable gate array (FPGA) based digital-PLL, is presented. The scheme is first simulated off-line. Finally, the logic is implemented on controller hardware and practically tested in a laboratory made experimental set-up of 2kW at around 10kHz.",
"title": ""
},
{
"docid": "1ae735b903b6d2bfae8a304544342064",
"text": "Deep neural networks have achieved significant success for image recognition problems. Despite the wide success, recent experiments demonstrated that neural networks are sensitive to small input perturbations, or adversarial noise. The lack of robustness is intuitively undesirable and limits neural networks applications in adversarial settings, and for image search and retrieval problems. Current approaches consider augmenting training dataset using adversarial examples to improve robustness. However, when using data augmentation, the model fails to anticipate changes in an adversary. In this paper, we consider maximizing the geometric margin of the classifier. Intuitively, a large margin relates to classifier robustness. We introduce novel margin maximization objective for deep neural networks. We theoretically show that the proposed objective is equivalent to the robust optimization problem for a neural network. Our work seamlessly generalizes SVM margin objective to deep neural networks. In the experiments, we extensively verify the effectiveness of the proposed margin maximization objective to improve neural network robustness and to reduce overfitting on MNIST and CIFAR-10 dataset.",
"title": ""
},
{
"docid": "c6bdd8d88dd2f878ddc6f2e8be39aa78",
"text": "A wide variety of non-photorealistic rendering techniques make use of random variation in the placement or appearance of primitives. In order to avoid the \"shower-door\" effect, this random variation should move with the objects in the scene. Here we present coherent noise tailored to this purpose. We compute the coherent noise with a specialized filter that uses the depth and velocity fields of a source sequence. The computation is fast and suitable for interactive applications like games.",
"title": ""
},
{
"docid": "c795c3fbf976c5746c75eb33c622ad21",
"text": "We develop a high-quality multi-turn dialog dataset, DailyDialog, which is intriguing in several aspects. The language is human-written and less noisy. The dialogues in the dataset reflect our daily communication style and cover various topics about our daily life. We also manually label the developed dataset with communication intention and emotion information. Then, we evaluate existing approaches on the DailyDialog dataset and hope it benefits the research field of dialog systems.",
"title": ""
},
{
"docid": "51b8fe57500d1d74834d1f9faa315790",
"text": "Simulations of smoke are pervasive in the production of visual effects for commercials, movies and games: from cigarette smoke and subtle dust to large-scale clouds of soot and vapor emanating from fires and explosions. In this talk we present a new Eulerian method that targets the simulation of such phenomena on a structured spatially adaptive voxel grid --- thereby achieving an improvement in memory usage and computational performance over regular dense and sparse grids at uniform resolution. Contrary to e.g. Setaluri et al. [2014], we use velocities collocated at voxel corners which allows sharper interpolation for spatially adaptive simulations, is faster for sampling, and promotes ease-of-use in an open procedural environment where technical artists often construct small computational graphs that apply forces, dissipation etc. to the velocities. The collocated method requires special treatment when projecting out the divergent velocity modes to prevent non-physical high frequency oscillations (not addressed by Ferstl et al. [2014]). To this end we explored discretization and filtering methods from computational physics, combining them with a matrix-free adaptive multigrid scheme based on MLAT and FAS [Trottenberg and Schuller 2001]. Finally we contribute a new volumetric quadrature approach to temporally smooth emission which outperforms e.g. Gaussian quadrature at large time steps. We have implemented our method in the cross-platform Autodesk Bifrost procedural environment which facilitates customization by the individual technical artist, and our implementation is in production use at several major studios. We refer the reader to the accompanying video for examples that illustrate our novel workflows for spatially adaptive simulations and the benefits of our approach. We note that several methods for adaptive fluid simulation have been proposed in recent years, e.g. [Ferstl et al. 2014; Setaluri et al. 2014], and we have drawn a lot of inspiration from these. However, to the best of our knowledge we are the first in computer graphics to propose a collocated velocity, spatially adaptive and matrix-free smoke simulation method that explicitly mitigates non-physical divergent modes.",
"title": ""
},
{
"docid": "adad5599122e63cde59322b7ba46461b",
"text": "Understanding how images of objects and scenes behave in response to specific ego-motions is a crucial aspect of proper visual development, yet existing visual learning methods are conspicuously disconnected from the physical source of their images. We propose to exploit proprioceptive motor signals to provide unsupervised regularization in convolutional neural networks to learn visual representations from egocentric video. Specifically, we enforce that our learned features exhibit equivariance i.e. they respond systematically to transformations associated with distinct ego-motions. With three datasets, we show that our unsupervised feature learning system significantly outperforms previous approaches on visual recognition and next-best-view prediction tasks. In the most challenging test, we show that features learned from video captured on an autonomous driving platform improve large-scale scene recognition in a disjoint domain.",
"title": ""
},
{
"docid": "91757d954a8972df713339a970872251",
"text": "Computational histopathology involves CAD for microscopic analysis of stained histopathological slides to study presence, localization or grading of disease. An important stage in a CAD system, stain color normalization, has been broadly studied. The existing approaches are mainly defined in the context of stain deconvolution and template matching. In this paper, we propose a novel approach to this problem by introducing a parametric, fully unsupervised generative model. Our model is based on end-to-end machine learning in the framework of generative adversarial networks. It can learn a nonlinear transformation of a set of latent variables, which are forced to have a prior Dirichlet distribution and control the color of staining hematoxylin and eosin (H&E) images. By replacing the latent variables of a source image with those extracted from a template image in the trained model, it can generate a new color copy of the source image while preserving the important tissue structures resembling the chromatic information of the template image. Our proposed method can instantly be applied to new unseen images, which is different from previous methods that need to compute some statistical properties on input test data. This is potentially problematic when the test sample sizes are limited. Experiments on H&E images from different laboratories show that the proposed model outperforms most state-of-the-art methods.",
"title": ""
},
{
"docid": "471579f955f8b68a357c8780a7775cc9",
"text": "In addition to practitioners who care for male patients, with the increased use of high-resolution anoscopy, practitioners who care for women are seeing more men in their practices as well. Some diseases affecting the penis can impact on their sexual partners. Many of the lesions and neoplasms of the penis occur on the vulva as well. In addition, there are common and rare lesions unique to the penis. A review of the scope of penile lesions and neoplasms that may present in a primary care setting is presented to assist in developing a differential diagnosis if such a patient is encountered, as well as for practitioners who care for their sexual partners. A familiarity will assist with recognition, as well as when consultation is needed.",
"title": ""
},
{
"docid": "361dc8037ebc30cd2f37f4460cf43569",
"text": "OVERVIEW: Next-generation semiconductor factories need to support miniaturization below 100 nm and have higher production efficiency, mainly of 300-mm-diameter wafers. Particularly to reduce the price of semiconductor devices, shorten development time [thereby reducing the TAT (turn-around time)], and support frequent product changeovers, semiconductor manufacturers must enhance the productivity of their systems. To meet these requirements, Hitachi proposes solutions that will support e-manufacturing on the next-generation semiconductor production line (see Fig. 1). Yasutsugu Usami Isao Kawata Hideyuki Yamamoto Hiroyoshi Mori Motoya Taniguchi, Dr. Eng.",
"title": ""
},
{
"docid": "4528c64444ce7350537b34823f91744b",
"text": "The anterior prefrontal cortex (APC) confers on humans the ability to simultaneously pursue several goals. How does the brain's motivational system, including the medial frontal cortex (MFC), drive the pursuit of concurrent goals? Using brain imaging, we observed that the left and right MFC, which jointly drive single-task performance according to expected rewards, divide under dual-task conditions: While the left MFC encodes the rewards driving one task, the right MFC concurrently encodes those driving the other task. The same dichotomy was observed in the lateral frontal cortex, whereas the APC combined the rewards driving both tasks. The two frontal lobes thus divide for representing simultaneously two concurrent goals coordinated by the APC. The human frontal function seems limited to driving the pursuit of two concurrent goals simultaneously.",
"title": ""
}
] |
scidocsrr
|
163518899d8830204e46d82f67d2714e
|
The Influence of Feature Representation of Text on the Performance of Document Classification
|
[
{
"docid": "db1b3a472b9d002cf8b901f96d20196b",
"text": "Recent studies in NER use supervised machine learning. This study used CRF as the learning algorithm and applied word embeddings as features for NER training. Word embedding is helpful in many learning algorithms of NLP, since words in a sentence are mapped to real vectors in a low-dimensional space. Comparing the performance of multiple word embedding techniques for NER, it was found that CCA (85.96%) in Test A and Word2Vec (80.72%) in Test B exhibited the best performance.",
"title": ""
},
{
"docid": "e2a5f57497e57881092e33c6ab3ec817",
"text": "Doc2Sent2Vec is an unsupervised approach to learn low-dimensional feature vector (or embedding) for a document. This embedding captures the semantics of the document and can be fed as input to machine learning algorithms to solve a myriad number of applications in the field of data mining and information retrieval. Some of these applications include document classification, retrieval, and ranking.\n The proposed approach is two-phased. In the first phase, the model learns a vector for each sentence in the document using a standard word-level language model. In the next phase, it learns the document representation from the sentence sequence using a novel sentence-level language model. Intuitively, the first phase captures the word-level coherence to learn sentence embeddings, while the second phase captures the sentence-level coherence to learn document embeddings. Compared to the state-of-the-art models that learn document vectors directly from the word sequences, we hypothesize that the proposed decoupled strategy of learning sentence embeddings followed by document embeddings helps the model learn accurate and rich document representations.\n We evaluate the learned document embeddings by considering two classification tasks: scientific article classification and Wikipedia page classification. Our model outperforms the current state-of-the-art models in the scientific article classification task by ~12.07% and the Wikipedia page classification task by ~6.93%, both in terms of F1 score. These results highlight the superior quality of document embeddings learned by the Doc2Sent2Vec approach.",
"title": ""
},
{
"docid": "8cd5cebfff2fdf282de1cd95d266b7b3",
"text": "Feature selection is known as a good solution to the high dimensionality of the feature space and mostly preferred feature selection methods for text classification are filter-based ones. In a common filter-based feature selection scheme, unique scores are assigned to features depending on their discriminative power and these features are sorted in descending order according to the scores. Then, the last step is to add top-N features to the feature set where N is generally an empirically determined number. In this paper, an improved global feature selection scheme (IGFSS) where the last step in a common feature selection scheme is modified in order to obtain a more representative feature set is proposed. Although feature set constructed by a common feature selection scheme successfully represents some of the classes, a number of classes may not be even represented. Consequently, IGFSS aims to improve the classification performance of global feature selection methods by creating a feature set representing all classes almost equally. For this purpose, a local feature selection method is used in IGFSS to label features according to their discriminative power on classes and these labels are used while producing the feature sets. Experimental results on well-known benchmark datasets with various classifiers indicate that IGFSS improves the performance of classification in terms of two widely-known metrics namely Micro-F1 and Macro-F1. © 2015 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "1b61dc674649cca3b46982a7c0b3e1d9",
"text": "Recently, with the explosive increase of automobiles in China, people park their cars on streets without following the rules and police has hard time to conduct law enforcement without introducing effective street parking system. To solve the problem, we propose a SPS (street parking system) based on wireless sensor networks. For accurately detecting a parking car, a parking algorithm based on state machine is also proposed. As the vehicle detection in SPS is absolutely critical, we use a Honeywell 3-axis magnetic sensor to detect vehicle. However, the magnetic sensor may be affected by the fluctuation of the outside temperature. To solve the problem, we introduce a moving drift factor. On the parking lot in Shenzhen Institutes of Advanced Technology (SIAT), 62 sensor devices are deployed to evaluate the performance of SPS. By running the system for several months, we observe the vehicle detection accurate rate of the SPS is nearly 99%. The proposed SPS is energy efficient. The end device can run about 7 years with one 2400mAh AA battery.",
"title": ""
},
{
"docid": "0d5b33ce7e1a1af17751559c96fdcf0a",
"text": "Urban-related data and geographic information are becoming mainstream in the Linked Data community due also to the popularity of Location-based Services. In this paper, we introduce the UrbanMatch game, a mobile gaming application that joins data linkage and data quality/trustworthiness assessment in an urban environment. By putting together Linked Data and Human Computation, we create a new interaction paradigm to consume and produce location-specific linked data by involving and engaging the final user. The UrbanMatch game is also offered as an example of value proposition and business model of a new family of linked data applications based on gaming in Smart Cities.",
"title": ""
},
{
"docid": "07cd4026ece4dc92e4c031e022181689",
"text": "In this paper we look at the problem of cleansing noisy text using a statistical machine translation model. Noisy text is produced in informal communications such as Short Message Service (SMS), Twitter and chat. A typical Statistical Machine Translation system is trained on parallel text comprising noisy and clean sentences. In this paper we propose an unsupervised method for the translation of noisy text to clean text. Our method has two steps. For a given noisy sentence, a weighted list of possible clean tokens for each noisy token are obtained. The clean sentence is then obtained by maximizing the product of the weighted lists and the language model scores.",
"title": ""
},
{
"docid": "25063f744836d2b245bfe3c658ff5285",
"text": "Nowadays security has become an important aspect in information systems engineering. A mainstream method for information system security is Role-based Access Control (RBAC), which restricts system access to authorised users. While the benefits of RBAC are widely acknowledged, the implementation and administration of RBAC policies remains a human intensive activity, typically postponed until the implementation and maintenance phases of system development. This deferred security engineering approach makes it difficult for security requirements to be accurately captured and for the system’s implementation to be kept aligned with these requirements as the system evolves. In this paper we propose a model-driven approach to manage SQL database access under the RBAC paradigm. The starting point of the approach is an RBAC model captured in SecureUML. This model is automatically translated to Oracle Database views and instead-of triggers code, which implements the security constraints. The approach has been fully instrumented as a prototype and its effectiveness has been validated by means of a case study.",
"title": ""
},
{
"docid": "2864c9a396910aedbfa79ed54da6ab3e",
"text": "This paper describes the design for a content based approach to detecting insider misuse by an analyst producing reports in an environment supported by a document control system. The approach makes use of Hidden Markov Models to represent stages in the EvidenceBased Intelligence Analysis Process Model (EBIAPM). This approach is seen as a potential application for the Process Query System / Tracking and Fusion Engine (PQS/TRAFEN). Actions taken by the insider are viewed as processes that can be detected in PQS/TRAFEN. Text categorization of the content of analyst’s queries, documents accessed, and work product are used to disambiguate multiple EBIAPM processes.",
"title": ""
},
{
"docid": "e7b1d82b6716434da8bbeeeec895dac4",
"text": "Grapevine is the one of the most important fruit species in the world. Comparative genome sequencing of grape cultivars is very important for the interpretation of the grape genome and understanding its evolution. The genomes of four Georgian grape cultivars—Chkhaveri, Saperavi, Meskhetian green, and Rkatsiteli, belonging to different haplogroups, were resequenced. The shotgun genomic libraries of grape cultivars were sequenced on an Illumina HiSeq. Pinot Noir nuclear, mitochondrial, and chloroplast DNA were used as reference. Mitochondrial DNA of Chkhaveri closely matches that of the reference Pinot noir mitochondrial DNA, with the exception of 16 SNPs found in the Chkhaveri mitochondrial DNA. The number of SNPs in mitochondrial DNA from Saperavi, Meskhetian green, and Rkatsiteli was 764, 702, and 822, respectively. Nuclear DNA differs from the reference by 1,800,675 nt in Chkhaveri, 1,063,063 nt in Meskhetian green, 2,174,995 in Saperavi, and 5,011,513 in Rkatsiteli. Unlike mtDNA Pinot noir, chromosomal DNA is closer to the Meskhetian green than to other cultivars. Substantial differences in the number of SNPs in mitochondrial and nuclear DNA of Chkhaveri and Pinot noir cultivars are explained by backcrossing or introgression of their wild predecessors before or during the process of domestication. Annotation of chromosomal DNA of Georgian grape cultivars by MEGANTE, a web-based annotation system, shows 66,745 predicted genes (Chkhaveri—17,409; Saperavi—17,021; Meskhetian green—18,355; and Rkatsiteli—13,960). Among them, 106 predicted genes and 43 pseudogenes of terpene synthase genes were found in chromosomes 12, 18 random (18R), and 19. Four novel TPS genes not present in reference Pinot noir DNA were detected. Two of them—germacrene A synthase (Chromosome 18R) and (−) germacrene D synthase (Chromosome 19) can be identified as putatively full-length proteins. This work performs the first attempt of the comparative whole genome analysis of different haplogroups of Vitis vinifera cultivars. Based on complete nuclear and mitochondrial DNA sequence analysis, hypothetical phylogeny scheme of formation of grape cultivars is presented.",
"title": ""
},
{
"docid": "babdc78cd1b12f26afbeeb6696699503",
"text": "PURPOSE\nTo characterize fully all the major and minor carotenoids and their metabolites in human retina and probe for the presence of the oxidative metabolites of lutein and zeaxanthin.\n\n\nMETHODS\nCarotenoids of a composite of 58 pairs of human retinas and a monkey retina were elucidated by comparing their high-performance liquid chromatography (HPLC)-ultraviolet/visible absorption spectrophotometry (UV/Vis)-mass spectrometry (MS) profile with those of authentic standards prepared by organic synthesis.\n\n\nRESULTS\nIn addition to lutein and zeaxanthin, several oxidation products of these compounds were present in the extracts from human retina. A major carotenoid resulting from direct oxidation of lutein was identified as 3-hydroxy-beta, epsilon-caroten-3'-one. Minor carotenoids were identified as: 3'-epilutein, epsilon,epsilon-carotene-3,3'-diol, epsilon,epsilon-carotene-3,3'-dione, 3'-hydroxy-epsilon,epsilon-caroten-3-one, and 2,6-cyclolycopene-1,5-diol. Several of the geometric isomers of lutein and zeaxanthin were also detected at low concentrations. These were as follows: 9-cis-lutein, 9'-cislutein, 13-cis-lutein, 13'-cis-lutein, 9-cis-zeaxanthin, and 13-cis-zeaxanthin. Similar results were also obtained from HPLC analysis of a freshly dissected monkey retina.\n\n\nCONCLUSIONS\nLutein, zeaxanthin, 3'-epilutein, and 3-hydroxy-beta,epsilon-caroten-3'-one in human retina may be interconverted through a series of oxidation-reduction reactions similar to our earlier proposed metabolic transformation of these compounds in humans. The presence of the direct oxidation product of lutein and 3'-epilutein (metabolite of lutein and zeaxanthin) in human retina suggests that lutein and zeaxanthin may act as antioxidants to protect the macula against short-wavelength visible light. The proposed oxidative-reductive pathways for lutein and zeaxanthin in human retina, may therefore play an important role in prevention of age-related macular degeneration and cataracts.",
"title": ""
},
{
"docid": "a441f01dae68134b419aa33f1f9588a6",
"text": "In this work we present a technique for using natural language to help reinforcement learning generalize to unseen environments using neural machine translation techniques. These techniques are then integrated into policy shaping to make it more effective at learning in unseen environments. We evaluate this technique using the popular arcade game, Frogger, and show that our modified policy shaping algorithm improves over a Q-learning agent as well as a baseline version of policy shaping.",
"title": ""
},
{
"docid": "f2d27b79f1ac3809f7ea605203136760",
"text": "The Internet of Things (IoT) is a fast-growing movement turning devices into always-connected smart devices through the use of communication technologies. This facilitates the creation of smart strategies allowing monitoring and optimization as well as many other new use cases for various sectors. Low Power Wide Area Networks (LPWANs) have enormous potential as they are suited for various IoT applications and each LPWAN technology has certain features, capabilities and limitations. One of these technologies, namely LoRa/LoRaWAN has several promising features and private and public LoRaWANs are increasing worldwide. Similarly, researchers are also starting to study the potential of LoRa and LoRaWANs. This paper examines the work that has already been done and identifies flaws and strengths by performing a comparison of created testbeds. Limitations of LoRaWANs are also identified.",
"title": ""
},
{
"docid": "d56855e068a4524fda44d93ac9763cab",
"text": "greatest cause of mortality from cardiovascular disease, after myocardial infarction and cerebrovascular stroke. From hospital epidemiological data it has been calculated that the incidence of PE in the USA is 1 per 1,000 annually. The real number is likely to be larger, since the condition goes unrecognised in many patients. Mortality due to PE has been estimated to exceed 15% in the first three months after diagnosis. PE is a dramatic and life-threatening complication of deep venous thrombosis (DVT). For this reason, the prevention, diagnosis and treatment of DVT is of special importance, since symptomatic PE occurs in 30% of those affected. If asymptomatic episodes are also included, it is estimated that 50-60% of DVT patients develop PE. DVT and PE are manifestations of the same entity, namely thromboembolic disease. If we extrapolate the epidemiological data from the USA to Greece, which has a population of about ten million, 20,000 new cases of thromboembolic disease may be expected annually. Of these patients, PE will occur in 10,000, of which 6,000 will have symptoms and 900 will die during the first trimester.",
"title": ""
},
{
"docid": "19c230e85fb7556b6864ff332412bf71",
"text": "Given a graph G, a proper n − [p]-coloring is a mapping f : V (G) → 2^{1,...,n} such that |f(v)| = p for any vertex v ∈ V (G) and f(v) ∩ f(u) = ∅ for any pair of adjacent vertices u and v. n − [p]-coloring is closely related to multicoloring. Finding multicoloring of induced subgraphs of the triangular lattice (called hexagonal graphs) has important applications in cellular networks. In this article we provide an algorithm to find a 7-[3]-coloring of triangle-free hexagonal graphs in linear time, which solves the open problem stated in [10] and improves the result of Sudeep and Vishwanathan [11], who proved the existence of a 14-[6]-coloring. ∗This work was supported by grant N206 017 32/2452 for years 2007-2010",
"title": ""
},
{
"docid": "edc3562602fc9b275e18d44ea3a5d8ac",
"text": "The replicase of all cells is thought to utilize two DNA polymerases for coordinated synthesis of leading and lagging strands. The DNA polymerases are held to DNA by circular sliding clamps. We demonstrate here that the E. coli DNA polymerase III holoenzyme assembles into a particle that contains three DNA polymerases. The three polymerases appear capable of simultaneous activity. Furthermore, the trimeric replicase is fully functional at a replication fork with helicase, primase, and sliding clamps; it produces slightly shorter Okazaki fragments than replisomes containing two DNA polymerases. We propose that two polymerases can function on the lagging strand and that the third DNA polymerase can act as a reserve enzyme to overcome certain types of obstacles to the replication fork.",
"title": ""
},
{
"docid": "016eca10ff7616958ab8f55af71cf5d7",
"text": "This paper is concerned with the problem of adaptive fault-tolerant synchronization control of a class of complex dynamical networks (CDNs) with actuator faults and unknown coupling weights. The considered input distribution matrix is assumed to be an arbitrary matrix, instead of a unit one. Within this framework, an adaptive fault-tolerant controller is designed to achieve synchronization for the CDN. Moreover, a convex combination technique and an important graph theory result are developed, such that the rigorous convergence analysis of synchronization errors can be conducted. In particular, it is shown that the proposed fault-tolerant synchronization control approach is valid for the CDN with both time-invariant and time-varying coupling weights. Finally, two simulation examples are provided to validate the effectiveness of the theoretical results.",
"title": ""
},
{
"docid": "36d0776ad44592db640bd205acee8e39",
"text": "1. A review of the literature shows that in nearly all cases tropical rain forest fragmentation has led to a local loss of species. Isolated fragments suffer reductions in species richness with time after excision from continuous forest, and small fragments often have fewer species recorded for the same effort of observation than large fragments or areas of continuous forest. 2. Birds have been the most frequently studied taxonomic group with respect to the effects of tropical forest fragmentation. 3. The mechanisms of fragmentation-related extinction include the deleterious effects of human disturbance during and after deforestation, the reduction of population sizes, the reduction of immigration rates, forest edge effects, changes in community structure (second- and higher-order effects) and the immigration of exotic species. 4. The relative importance of these mechanisms remains obscure. 5. Animals that are large, sparsely or patchily distributed, or very specialized and intolerant of the vegetation surrounding fragments, are particularly prone to local extinction. 6. The large number of indigenous species that are very sparsely distributed and intolerant of conditions outside the forest make evergreen tropical rain forest particularly susceptible to species loss through fragmentation. 7. Much more research is needed to study what is probably the major threat to global biodiversity.",
"title": ""
},
{
"docid": "5452679e4381982e19c834fe448d2aef",
"text": "The paper addresses the analysis of forward kinematics of a 3-DOF medical parallel robot with R-P-S (Revolute-Prismatic-Spherical) joint structure using MATLAB/Simulink. Forward kinematics is solved numerically using Newton-Kantorovich (N-K) method and then is verified with a 1:1 CAD model through SimMechanics Toolbox from MATLAB/Simulink. Also the workspace of the robot is determined through forward kinematics equations.",
"title": ""
},
{
"docid": "0cd7c284f94c38f5d8d067048fc203b9",
"text": "We develop a Bayesian nonparametric Poisson factorization model for recommendation systems. Poisson factorization implicitly models each user’s limited budget of attention (or money) that allows consumption of only a small subset of the available items. In our Bayesian nonparametric variant, the number of latent components is theoretically unbounded and effectively estimated when computing a posterior with observed user behavior data. To approximate the posterior, we develop an efficient variational inference algorithm. It adapts the dimensionality of the latent components to the data, only requires iteration over the user/item pairs that have been rated, and has computational complexity on the same order as for a parametric model with fixed dimensionality. We studied our model and algorithm with large realworld data sets of user-movie preferences. Our model eases the computational burden of searching for the number of latent components and gives better predictive performance than its parametric counterpart.",
"title": ""
},
{
"docid": "5626f7c767ae20c3b58d2e8fb2b93ba7",
"text": "The presentation starts with a philosophical discussion about computer vision in general. The aim is to put the scope of the book into its wider context, and to emphasize why the notion of scale is crucial when dealing with measured signals, such as image data. An overview of different approaches to multi-scale representation is presented, and a number of special properties of scale-space are pointed out. Then, it is shown how a mathematical theory can be formulated for describing image structures at different scales. By starting from a set of axioms imposed on the first stages of processing, it is possible to derive a set of canonical operators, which turn out to be derivatives of Gaussian kernels at different scales. The problem of applying this theory computationally is extensively treated. A scale-space theory is formulated for discrete signals, and it demonstrated how this representation can be used as a basis for expressing a large number of visual operations. Examples are smoothed derivatives in general, as well as different types of detectors for image features, such as edges, blobs, and junctions. In fact, the resulting scheme for feature detection induced by the presented theory is very simple, both conceptually and in terms of practical implementations. Typically, an object contains structures at many different scales, but locally it is not unusual that some of these \"stand out\" and seem to be more significant than others. A problem that we give special attention to concerns how to find such locally stable scales, or rather how to generate hypotheses about interesting structures for further processing. It is shown how the scale-space theory, based on a representation called the scale-space primal sketch, allows us to extract regions of interest from an image without prior information about what the image can be expected to contain. Such regions, combined with knowledge about the scales at which they occur constitute qualitative information, which can be used for guiding and simplifying other low-level processes. Experiments on different types of real and synthetic images demonstrate how the suggested approach can be used for different visual tasks, such as image segmentation, edge detection, junction detection, and focusof-attention. This work is complemented by a mathematical treatment showing how the behaviour of different types of image structures in scalespace can be analysed theoretically.",
"title": ""
},
{
"docid": "64eee89ff60a739f3b496b663abb23fb",
"text": "Conservative care of the athlete with shoulder impingement includes activity modification, application of ice, nonsteroidal anti-inflammatory drugs, subacromial corticosteroid injections, and physiotherapy. This case report describes the clinical treatment and outcome of three patients with shoulder impingement syndrome who did not respond to traditional treatment. Two of the three were previously referred for arthroscopic surgery. All three were treated with subscapularis trigger point dry needling and therapeutic stretching. They responded to treatment and had returned to painless function at follow-up 2 years later.",
"title": ""
},
{
"docid": "0c16a0ad2e8fea48f332e783e20a1e2b",
"text": "Apolipoprotein E (Apo-E) is a major cholesterol carrier that supports lipid transport and injury repair in the brain. APOE polymorphic alleles are the main genetic determinants of Alzheimer disease (AD) risk: individuals carrying the ε4 allele are at increased risk of AD compared with those carrying the more common ε3 allele, whereas the ε2 allele decreases risk. Presence of the APOE ε4 allele is also associated with increased risk of cerebral amyloid angiopathy and age-related cognitive decline during normal ageing. Apo-E–lipoproteins bind to several cell-surface receptors to deliver lipids, and also to hydrophobic amyloid-β (Aβ) peptide, which is thought to initiate toxic events that lead to synaptic dysfunction and neurodegeneration in AD. Apo-E isoforms differentially regulate Aβ aggregation and clearance in the brain, and have distinct functions in regulating brain lipid transport, glucose metabolism, neuronal signalling, neuroinflammation, and mitochondrial function. In this Review, we describe current knowledge on Apo-E in the CNS, with a particular emphasis on the clinical and pathological features associated with carriers of different Apo-E isoforms. We also discuss Aβ-dependent and Aβ-independent mechanisms that link Apo-E4 status with AD risk, and consider how to design effective strategies for AD therapy by targeting Apo-E.",
"title": ""
},
{
"docid": "437d9a2146e05be85173b14176e4327c",
"text": "Can a system of distributed moderation quickly and consistently separate high and low quality comments in an online conversation? Analysis of the site Slashdot.org suggests that the answer is a qualified yes, but that important challenges remain for designers of such systems. Thousands of users act as moderators. Final scores for comments are reasonably dispersed and the community generally agrees that moderations are fair. On the other hand, much of a conversation can pass before the best and worst comments are identified. Of those moderations that were judged unfair, only about half were subsequently counterbalanced by a moderation in the other direction. And comments with low scores, not at top-level, or posted late in a conversation were more likely to be overlooked by moderators.",
"title": ""
}
] |
scidocsrr
|
32980997ad6f37a110ae57463c388881
|
Quantitative Analysis of the Full Bitcoin Transaction Graph
|
[
{
"docid": "cdefeefa1b94254083eba499f6f502fb",
"text": "problems To understand the class of polynomial-time solvable problems, we must first have a formal notion of what a \"problem\" is. We define an abstract problem Q to be a binary relation on a set I of problem instances and a set S of problem solutions. For example, an instance for SHORTEST-PATH is a triple consisting of a graph and two vertices. A solution is a sequence of vertices in the graph, with perhaps the empty sequence denoting that no path exists. The problem SHORTEST-PATH itself is the relation that associates each instance of a graph and two vertices with a shortest path in the graph that connects the two vertices. Since shortest paths are not necessarily unique, a given problem instance may have more than one solution. This formulation of an abstract problem is more general than is required for our purposes. As we saw above, the theory of NP-completeness restricts attention to decision problems: those having a yes/no solution. In this case, we can view an abstract decision problem as a function that maps the instance set I to the solution set {0, 1}. For example, a decision problem related to SHORTEST-PATH is the problem PATH that we saw earlier. If i = ⟨G, u, v, k⟩ is an instance of the decision problem PATH, then PATH(i) = 1 (yes) if a shortest path from u to v has at most k edges, and PATH(i) = 0 (no) otherwise. Many abstract problems are not decision problems, but rather optimization problems, in which some value must be minimized or maximized. As we saw above, however, it is usually a simple matter to recast an optimization problem as a decision problem that is no harder. Encodings If a computer program is to solve an abstract problem, problem instances must be represented in a way that the program understands. An encoding of a set S of abstract objects is a mapping e from S to the set of binary strings. For example, we are all familiar with encoding the natural numbers N = {0, 1, 2, 3, 4,...} as the strings {0, 1, 10, 11, 100,...}. Using this encoding, e(17) = 10001. Anyone who has looked at computer representations of keyboard characters is familiar with either the ASCII or EBCDIC codes. In the ASCII code, the encoding of A is 1000001. Even a compound object can be encoded as a binary string by combining the representations of its constituent parts. Polygons, graphs, functions, ordered pairs, programs-all can be encoded as binary strings. Thus, a computer algorithm that \"solves\" some abstract decision problem actually takes an encoding of a problem instance as input. We call a problem whose instance set is the set of binary strings a concrete problem. We say that an algorithm solves a concrete problem in time O(T(n)) if, when it is provided a problem instance i of length n = |i|, the algorithm can produce the solution in O(T(n)) time. A concrete problem is polynomial-time solvable, therefore, if there exists an algorithm to solve it in time O(n^k) for some constant k. We can now formally define the complexity class P as the set of concrete decision problems that are polynomial-time solvable. We can use encodings to map abstract problems to concrete problems. Given an abstract decision problem Q mapping an instance set I to {0, 1}, an encoding e : I → {0, 1}* can be used to induce a related concrete decision problem, which we denote by e(Q). If the solution to an abstract-problem instance i ∈ I is Q(i) ∈ {0, 1}, then the solution to the concrete-problem instance e(i) ∈ {0, 1}* is also Q(i).
As a technicality, there may be some binary strings that represent no meaningful abstract-problem instance. For convenience, we shall assume that any such string is mapped arbitrarily to 0. Thus, the concrete problem produces the same solutions as the abstract problem on binary-string instances that represent the encodings of abstract-problem instances. We would like to extend the definition of polynomial-time solvability from concrete problems to abstract problems by using encodings as the bridge, but we would like the definition to be independent of any particular encoding. That is, the efficiency of solving a problem should not depend on how the problem is encoded. Unfortunately, it depends quite heavily on the encoding. For example, suppose that an integer k is to be provided as the sole input to an algorithm, and suppose that the running time of the algorithm is Θ(k). If the integer k is provided in unary (a string of k 1's), then the running time of the algorithm is O(n) on length-n inputs, which is polynomial time. If we use the more natural binary representation of the integer k, however, then the input length is n = ⌊lg k⌋ + 1. In this case, the running time of the algorithm is Θ(k) = Θ(2^n), which is exponential in the size of the input. Thus, depending on the encoding, the algorithm runs in either polynomial or superpolynomial time. The encoding of an abstract problem is therefore quite important to our understanding of polynomial time. We cannot really talk about solving an abstract problem without first specifying an encoding. Nevertheless, in practice, if we rule out \"expensive\" encodings such as unary ones, the actual encoding of a problem makes little difference to whether the problem can be solved in polynomial time. For example, representing integers in base 3 instead of binary has no effect on whether a problem is solvable in polynomial time, since an integer represented in base 3 can be converted to an integer represented in base 2 in polynomial time. We say that a function f : {0, 1}* → {0,1}* is polynomial-time computable if there exists a polynomial-time algorithm A that, given any input x ∈ {0, 1}*, produces as output f(x). For some set I of problem instances, we say that two encodings e1 and e2 are polynomially related if there exist two polynomial-time computable functions f12 and f21 such that for any i ∈ I, we have f12(e1(i)) = e2(i) and f21(e2(i)) = e1(i). That is, the encoding e2(i) can be computed from the encoding e1(i) by a polynomial-time algorithm, and vice versa. If two encodings e1 and e2 of an abstract problem are polynomially related, whether the problem is polynomial-time solvable or not is independent of which encoding we use, as the following lemma shows. Lemma 34.1 Let Q be an abstract decision problem on an instance set I, and let e1 and e2 be polynomially related encodings on I. Then, e1(Q) ∈ P if and only if e2(Q) ∈ P. Proof We need only prove the forward direction, since the backward direction is symmetric. Suppose, therefore, that e1(Q) can be solved in time O(n^k) for some constant k. Further, suppose that for any problem instance i, the encoding e1(i) can be computed from the encoding e2(i) in time O(n^c) for some constant c, where n = |e2(i)|. To solve problem e2(Q), on input e2(i), we first compute e1(i) and then run the algorithm for e1(Q) on e1(i). How long does this take? The conversion of encodings takes time O(n^c), and therefore |e1(i)| = O(n^c), since the output of a serial computer cannot be longer than its running time.
Solving the problem on e1(i) takes time O(|e1(i)|^k) = O(n^(ck)), which is polynomial since both c and k are constants. Thus, whether an abstract problem has its instances encoded in binary or base 3 does not affect its \"complexity,\" that is, whether it is polynomial-time solvable or not, but if instances are encoded in unary, its complexity may change. In order to be able to converse in an encoding-independent fashion, we shall generally assume that problem instances are encoded in any reasonable, concise fashion, unless we specifically say otherwise. To be precise, we shall assume that the encoding of an integer is polynomially related to its binary representation, and that the encoding of a finite set is polynomially related to its encoding as a list of its elements, enclosed in braces and separated by commas. (ASCII is one such encoding scheme.) With such a \"standard\" encoding in hand, we can derive reasonable encodings of other mathematical objects, such as tuples, graphs, and formulas. To denote the standard encoding of an object, we shall enclose the object in angle braces. Thus, ⟨G⟩ denotes the standard encoding of a graph G. As long as we implicitly use an encoding that is polynomially related to this standard encoding, we can talk directly about abstract problems without reference to any particular encoding, knowing that the choice of encoding has no effect on whether the abstract problem is polynomial-time solvable. Henceforth, we shall generally assume that all problem instances are binary strings encoded using the standard encoding, unless we explicitly specify the contrary. We shall also typically neglect the distinction between abstract and concrete problems. The reader should watch out for problems that arise in practice, however, in which a standard encoding is not obvious and the encoding does make a difference. A formal-language framework One of the convenient aspects of focusing on decision problems is that they make it easy to use the machinery of formal-language theory. It is worthwhile at this point to review some definitions from that theory. An alphabet Σ is a finite set of symbols. A language L over Σ is any set of strings made up of symbols from Σ. For example, if Σ = {0, 1}, the set L = {10, 11, 101, 111, 1011, 1101, 10001,...} is the language of binary representations of prime numbers. We denote the empty string by ε, and the empty language by Ø. The language of all strings over Σ is denoted Σ*. For example, if Σ = {0, 1}, then Σ* = {ε, 0, 1, 00, 01, 10, 11, 000,...} is the set of all binary strings. Every language L over Σ is a subset of Σ*. There are a variety of operations on languages. Set-theoretic operations, such as union and intersection, follow directly from the set-theoretic definitions. We define the complement of L by Σ* − L. The concatenation of two languages L1 and L2 is the language L = {x1x2 : x1 ∈ L1 and x2 ∈ L2}. The closure or Kleene star of a language L is the language L* = {ε} ∪ L ∪ L^2 ∪ L^3 ∪ ···, where L^k is the language obtained by",
"title": ""
}
] |
[
{
"docid": "13774d2655f2f0ac575e11991eae0972",
"text": "This paper considers regularized block multiconvex optimization, where the feasible set and objective function are generally nonconvex but convex in each block of variables. It also accepts nonconvex blocks and requires these blocks to be updated by proximal minimization. We review some interesting applications and propose a generalized block coordinate descent method. Under certain conditions, we show that any limit point satisfies the Nash equilibrium conditions. Furthermore, we establish global convergence and estimate the asymptotic convergence rate of the method by assuming a property based on the Kurdyka– Lojasiewicz inequality. The proposed algorithms are tested on nonnegative matrix and tensor factorization, as well as matrix and tensor recovery from incomplete observations. The tests include synthetic data and hyperspectral data, as well as image sets from the CBCL and ORL databases. Compared to the existing state-of-the-art algorithms, the proposed algorithms demonstrate superior performance in both speed and solution quality. The MATLAB code of nonnegative matrix/tensor decomposition and completion, along with a few demos, are accessible from the authors’ homepages.",
"title": ""
},
{
"docid": "64389907530dd26392e037f1ab2d1da5",
"text": "Most current license plate (LP) detection and recognition approaches are evaluated on a small and usually unrepresentative dataset since there are no publicly available large diverse datasets. In this paper, we introduce CCPD, a large and comprehensive LP dataset. All images are taken manually by workers of a roadside parking management company and are annotated carefully. To our best knowledge, CCPD is the largest publicly available LP dataset to date with over 250k unique car images, and the only one provides vertices location annotations. With CCPD, we present a novel network model which can predict the bounding box and recognize the corresponding LP number simultaneously with high speed and accuracy. Through comparative experiments, we demonstrate our model outperforms current object detection and recognition approaches in both accuracy and speed. In real-world applications, our model recognizes LP numbers directly from relatively high-resolution images at over 61 fps and 98.5% accuracy.",
"title": ""
},
{
"docid": "7645c6a0089ab537cb3f0f82743ce452",
"text": "Behavioral studies of facial emotion recognition (FER) in autism spectrum disorders (ASD) have yielded mixed results. Here we address demographic and experiment-related factors that may account for these inconsistent findings. We also discuss the possibility that compensatory mechanisms might enable some individuals with ASD to perform well on certain types of FER tasks in spite of atypical processing of the stimuli, and difficulties with real-life emotion recognition. Evidence for such mechanisms comes in part from eye-tracking, electrophysiological, and brain imaging studies, which often show abnormal eye gaze patterns, delayed event-related-potential components in response to face stimuli, and anomalous activity in emotion-processing circuitry in ASD, in spite of intact behavioral performance during FER tasks. We suggest that future studies of FER in ASD: 1) incorporate longitudinal (or cross-sectional) designs to examine the developmental trajectory of (or age-related changes in) FER in ASD and 2) employ behavioral and brain imaging paradigms that can identify and characterize compensatory mechanisms or atypical processing styles in these individuals.",
"title": ""
},
{
"docid": "9175794d83b5f110fb9f08dc25a264b8",
"text": "We describe an investigation into e-mail content mining for author identification, or authorship attribution, for the purpose of forensic investigation. We focus our discussion on the ability to discriminate between authors for the case of both aggregated e-mail topics as well as across different e-mail topics. An extended set of e-mail document features including structural characteristics and linguistic patterns were derived and, together with a Support Vector Machine learning algorithm, were used for mining the e-mail content. Experiments using a number of e-mail documents generated by different authors on a set of topics gave promising results for both aggregated and multi-topic author categorisation.",
"title": ""
},
{
"docid": "0c24b767705b3a88acf9fe128c0e3477",
"text": "The studied camera is basically just a line of pixel sensors, which can be rotated on a full circle, describing a cylindrical surface this way. During a rotation we take individual shots, line by line. All these line images define a panoramic image on a cylindrical surface. This camera architecture (in contrast to the plane segment of the pinhole camera) comes with new challenges, and this report is about a classification of different models of such cameras and their calibration. Acknowledgment. The authors acknowledge comments, collaboration or support by various students and colleagues at CITR Auckland and DLR Berlin-Adlershof. report1_HWK.tex; 22/03/2006; 9:47; p.1",
"title": ""
},
{
"docid": "c42aaf64a6da2792575793a034820dcb",
"text": "Psychologists and psychiatrists commonly rely on self-reports or interviews to diagnose or treat behavioral addictions. The present study introduces a novel source of data: recordings of the actual problem behavior under investigation. A total of N = 58 participants were asked to fill in a questionnaire measuring problematic mobile phone behavior featuring several questions on weekly phone usage. After filling in the questionnaire, all participants received an application to be installed on their smartphones, which recorded their phone usage for five weeks. The analyses revealed that weekly phone usage in hours was overestimated; in contrast, numbers of call and text message related variables were underestimated. Importantly, several associations between actual usage and being addicted to mobile phones could be derived exclusively from the recorded behavior, but not from self-report variables. The study demonstrates the potential benefit to include methods of psychoinformatics in the diagnosis and treatment of problematic mobile phone use.",
"title": ""
},
{
"docid": "f6383e814999744b24e6a1ce6507e47b",
"text": "We propose a new approach, CCRBoost, to identify the hierarchical structure of spatio-temporal patterns at different resolution levels and subsequently construct a predictive model based on the identified structure. To accomplish this, we first obtain indicators within different spatio-temporal spaces from the raw data. A distributed spatio-temporal pattern (DSTP) is extracted from a distribution, which consists of the locations with similar indicators from the same time period, generated by multi-clustering. Next, we use a greedy searching and pruning algorithm to combine the DSTPs in order to form an ensemble spatio-temporal pattern (ESTP). An ESTP can represent the spatio-temporal pattern of various regularities or a non-stationary pattern. To consider all the possible scenarios of a real-world ST pattern, we then build a model with layers of weighted ESTPs. By evaluating all the indicators of one location, this model can predict whether a target event will occur at this location. In the case study of predicting crime events, our results indicate that the predictive model can achieve 80 percent accuracy in predicting residential burglary, which is better than other methods.",
"title": ""
},
{
"docid": "cc6cf6557a8be12d8d3a4550163ac0a9",
"text": "In this study, different S/D contacting options for lateral NWFET devices are benchmarked at 7nm node dimensions and beyond. Comparison is done at both DC and ring oscillator levels. It is demonstrated that implementing a direct contact to a fin made of Si/SiGe super-lattice results in 13% performance improvement. Also, we conclude that the integration of internal spacers between the NWs is a must for lateral NWFETs in order to reduce device parasitic capacitance.",
"title": ""
},
{
"docid": "8bbf5cc2424e0365d6968c4c465fe5f7",
"text": "We describe a method for assigning English tense and aspect in a system that realizes surface text for symbolically encoded narratives. Our testbed is an encoding interface in which propositions that are attached to a timeline must be realized from several temporal viewpoints. This involves a mapping from a semantic encoding of time to a set of tense/aspect permutations. The encoding tool realizes each permutation to give a readable, precise description of the narrative so that users can check whether they have correctly encoded actions and statives in the formal representation. Our method selects tenses and aspects for individual event intervals as well as subintervals (with multiple reference points), quoted and unquoted speech (which reassign the temporal focus), and modal events such as conditionals.",
"title": ""
},
{
"docid": "e0cf83bcc9830f2a94af4822576e4167",
"text": "Multiple kernel learning (MKL) optimally combines the multiple channels of each sample to improve classification performance. However, existing MKL algorithms cannot effectively handle the situation where some channels are missing, which is common in practical applications. This paper proposes an absent MKL (AMKL) algorithm to address this issue. Different from existing approaches where missing channels are firstly imputed and then a standard MKL algorithm is deployed on the imputed data, our algorithm directly classifies each sample with its observed channels. In specific, we define a margin for each sample in its own relevant space, which corresponds to the observed channels of that sample. The proposed AMKL algorithm then maximizes the minimum of all sample-based margins, and this leads to a difficult optimization problem. We show that this problem can be reformulated as a convex one by applying the representer theorem. This makes it readily be solved via existing convex optimization packages. Extensive experiments are conducted on five MKL benchmark data sets to compare the proposed algorithm with existing imputation-based methods. As observed, our algorithm achieves superior performance and the improvement is more significant with the increasing missing ratio. Disciplines Engineering | Science and Technology Studies Publication Details Liu, X., Wang, L., Yin, J., Dou, Y. & Zhang, J. (2015). Absent multiple kernel learning. Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence (pp. 2807-2813). United States: IEEE. This conference paper is available at Research Online: http://ro.uow.edu.au/eispapers/5373 Absent Multiple Kernel Learning Xinwang Liu School of Computer National University of Defense Technology Changsha, China, 410073 Lei Wang School of Computer Science and Software Engineering University of Wollongong NSW, Australia, 2522 Jianping Yin, Yong Dou School of Computer National University of Defense Technology Changsha, China, 410073 Jian Zhang Faculty of Engineering and Information Technology University of Technology Sydney NSW, Australia, 2007",
"title": ""
},
{
"docid": "4b049e3fee1adfba2956cb9111a38bd2",
"text": "This paper presents an optimization based algorithm for underwater image de-hazing problem. Underwater image de-hazing is the most prominent area in research. Underwater images are corrupted due to absorption and scattering. With the effect of that, underwater images have the limitation of low visibility, low color and poor natural appearance. To avoid the mentioned problems, Enhanced fuzzy intensification method is proposed. For each color channel, enhanced fuzzy membership function is derived. Second, the correction of fuzzy based pixel intensification is carried out for each channel to remove haze and to enhance visibility and color. The post processing of fuzzy histogram equalization is implemented for red channel alone when the captured image is having highest value of red channel pixel values. The proposed method provides better results in terms maximum entropy and PSNR with minimum MSE with very minimum computational time compared to existing methodologies.",
"title": ""
},
{
"docid": "616354e134820867698abd3257606e62",
"text": "Supplementary to the description of diseases at symptom level, the International Classification of Functioning, Disability and Health (ICF), edited by the WHO, for the first time enables a systematic description also at the level of disabilities and impairments. The Mini-ICF-Rating for Mental Disorders (Mini-ICF-P) is a short observer rating instrument for the assessment of disabilities, especially with regard to occupational functioning. The Mini-ICF-P was first evaluated empirically in 125 patients of a Department of Behavioural Medicine and Psychosomatics. Parallel-test reliability was r = 0.59. Correlates were found with cognitive and motivational variables and duration of sick leave from work. In summary, the Mini-ICF-P is a quick and practicable instrument.",
"title": ""
},
{
"docid": "03e1ede18dcc78409337faf265940a4d",
"text": "Epidermal thickness and its relationship to age, gender, skin type, pigmentation, blood content, smoking habits and body site is important in dermatologic research and was investigated in this study. Biopsies from three different body sites of 71 human volunteers were obtained, and thickness of the stratum corneum and cellular epidermis was measured microscopically using a preparation technique preventing tissue damage. Multiple regressions analysis was used to evaluate the effect of the various factors independently of each other. Mean (SD) thickness of the stratum corneum was 18.3 (4.9) microm at the dorsal aspect of the forearm, 11.0 (2.2) microm at the shoulder and 14.9 (3.4) microm at the buttock. Corresponding values for the cellular epidermis were 56.6 (11.5) microm, 70.3 (13.6) microm and 81.5 (15.7) microm, respectively. Body site largely explains the variation in epidermal thickness, but also a significant individual variation was observed. Thickness of the stratum corneum correlated positively to pigmentation (p = 0.0008) and negatively to the number of years of smoking (p < 0.0001). Thickness of the cellular epidermis correlated positively to blood content (P = 0.028) and was greater in males than in females (P < 0.0001). Epidermal thickness was not correlated to age or skin type.",
"title": ""
},
{
"docid": "c399b42e2c7307a5b3c081e34535033d",
"text": "The Internet of Things (IoT) plays an ever-increasing role in enabling smart city applications. An ontology-based semantic approach can help improve interoperability between a variety of IoT-generated as well as complementary data needed to drive these applications. While multiple ontology catalogs exist, using them for IoT and smart city applications require significant amount of work. In this paper, we demonstrate how can ontology catalogs be more effectively used to design and develop smart city applications? We consider four ontology catalogs that are relevant for IoT and smart cities: 1) READY4SmartCities; 2) linked open vocabulary (LOV); 3) OpenSensingCity (OSC); and 4) LOVs for IoT (LOV4IoT). To support semantic interoperability with the reuse of ontology-based smart city applications, we present a methodology to enrich ontology catalogs with those ontologies. Our methodology is generic enough to be applied to any other domains as is demonstrated by its adoption by OSC and LOV4IoT ontology catalogs. Researchers and developers have completed a survey-based evaluation of the LOV4IoT catalog. The usefulness of ontology catalogs ascertained through this evaluation has encouraged their ongoing growth and maintenance. The quality of IoT and smart city ontologies have been evaluated to improve the ontology catalog quality. We also share the lessons learned regarding ontology best practices and provide suggestions for ontology improvements with a set of software tools.",
"title": ""
},
{
"docid": "19e2eaf78ec2723289e162503453b368",
"text": "Printing sensors and electronics over flexible substrates are an area of significant interest due to low-cost fabrication and possibility of obtaining multifunctional electronics over large areas. Over the years, a number of printing technologies have been developed to pattern a wide range of electronic materials on diverse substrates. As further expansion of printed technologies is expected in future for sensors and electronics, it is opportune to review the common features, the complementarities, and the challenges associated with various printing technologies. This paper presents a comprehensive review of various printing technologies, commonly used substrates and electronic materials. Various solution/dry printing and contact/noncontact printing technologies have been assessed on the basis of technological, materials, and process-related developments in the field. Critical challenges in various printing techniques and potential research directions have been highlighted. Possibilities of merging various printing methodologies have been explored to extend the lab developed standalone systems to high-speed roll-to-roll production lines for system level integration.",
"title": ""
},
{
"docid": "9a9d4d1d482333734d9b0efe87d1e53e",
"text": "Following acute therapeutic interventions, the majority of stroke survivors are left with a poorly functioning hemiparetic hand. Rehabilitation robotics has shown promise in providing patients with intensive therapy leading to functional gains. Because of the hand's crucial role in performing activities of daily living, attention to hand therapy has recently increased. This paper introduces a newly developed Hand Exoskeleton Rehabilitation Robot (HEXORR). This device has been designed to provide full range of motion (ROM) for all of the hand's digits. The thumb actuator allows for variable thumb plane of motion to incorporate different degrees of extension/flexion and abduction/adduction. Compensation algorithms have been developed to improve the exoskeleton's backdrivability by counteracting gravity, stiction and kinetic friction. We have also designed a force assistance mode that provides extension assistance based on each individual's needs. A pilot study was conducted on 9 unimpaired and 5 chronic stroke subjects to investigate the device's ability to allow physiologically accurate hand movements throughout the full ROM. The study also tested the efficacy of the force assistance mode with the goal of increasing stroke subjects' active ROM while still requiring active extension torque on the part of the subject. For 12 of the hand digits'15 joints in neurologically normal subjects, there were no significant ROM differences (P > 0.05) between active movements performed inside and outside of HEXORR. Interjoint coordination was examined in the 1st and 3rd digits, and no differences were found between inside and outside of the device (P > 0.05). Stroke subjects were capable of performing free hand movements inside of the exoskeleton and the force assistance mode was successful in increasing active ROM by 43 ± 5% (P < 0.001) and 24 ± 6% (P = 0.041) for the fingers and thumb, respectively. Our pilot study shows that this device is capable of moving the hand's digits through nearly the entire ROM with physiologically accurate trajectories. Stroke subjects received the device intervention well and device impedance was minimized so that subjects could freely extend and flex their digits inside of HEXORR. Our active force-assisted condition was successful in increasing the subjects' ROM while promoting active participation.",
"title": ""
},
{
"docid": "64d9f6973697749b6e2fa330101cbc77",
"text": "Evidence is presented that recognition judgments are based on an assessment of familiarity, as is described by signal detection theory, but that a separate recollection process also contributes to performance. In 3 receiver-operating characteristics (ROC) experiments, the process dissociation procedure was used to examine the contribution of these processes to recognition memory. In Experiments 1 and 2, reducing the length of the study list increased the intercept (d') but decreased the slope of the ROC and increased the probability of recollection but left familiarity relatively unaffected. In Experiment 3, increasing study time increased the intercept but left the slope of the ROC unaffected and increased both recollection and familiarity. In all 3 experiments, judgments based on familiarity produced a symmetrical ROC (slope = 1), but recollection introduced a skew such that the slope of the ROC decreased.",
"title": ""
},
{
"docid": "2950e3c1347c4adeeb2582046cbea4b8",
"text": "We present Mime, a compact, low-power 3D sensor for unencumbered free-form, single-handed gestural interaction with head-mounted displays (HMDs). Mime introduces a real-time signal processing framework that combines a novel three-pixel time-of-flight (TOF) module with a standard RGB camera. The TOF module achieves accurate 3D hand localization and tracking, and it thus enables motion-controlled gestures. The joint processing of 3D information with RGB image data enables finer, shape-based gestural interaction.\n Our Mime hardware prototype achieves fast and precise 3D gestural control. Compared with state-of-the-art 3D sensors like TOF cameras, the Microsoft Kinect and the Leap Motion Controller, Mime offers several key advantages for mobile applications and HMD use cases: very small size, daylight insensitivity, and low power consumption. Mime is built using standard, low-cost optoelectronic components and promises to be an inexpensive technology that can either be a peripheral component or be embedded within the HMD unit. We demonstrate the utility of the Mime sensor for HMD interaction with a variety of application scenarios, including 3D spatial input using close-range gestures, gaming, on-the-move interaction, and operation in cluttered environments and in broad daylight conditions.",
"title": ""
},
{
"docid": "3566e18518d80b2431c4fba34f790a82",
"text": "The aim of this paper is to present a nonlinear dynamic model for Voltage Source Converter-based HVDC (VSC-HVDC) links that can be used for dynamic studies. It includes the main physical elements and is controlled by PI controllers with antiwindup. A linear control model is derived for efficient tuning of the controllers of the nonlinear dynamic model. The nonlinear dynamic model is then tuned according to the performance of an ABB HVDC Light model.",
"title": ""
},
{
"docid": "f6227013273d148321cab1eef83c40e5",
"text": "The advanced features of 5G mobile wireless network systems yield new security requirements and challenges. This paper presents a comprehensive study on the security of 5G wireless network systems compared with the traditional cellular networks. The paper starts with a review on 5G wireless networks particularities as well as on the new requirements and motivations of 5G wireless security. The potential attacks and security services are summarized with the consideration of new service requirements and new use cases in 5G wireless networks. The recent development and the existing schemes for the 5G wireless security are presented based on the corresponding security services, including authentication, availability, data confidentiality, key management, and privacy. This paper further discusses the new security features involving different technologies applied to 5G, such as heterogeneous networks, device-to-device communications, massive multiple-input multiple-output, software-defined networks, and Internet of Things. Motivated by these security research and development activities, we propose a new 5G wireless security architecture, based on which the analysis of identity management and flexible authentication is provided. As a case study, we explore a handover procedure as well as a signaling load scheme to show the advantages of the proposed security architecture. The challenges and future directions of 5G wireless security are finally summarized.",
"title": ""
}
] |
scidocsrr
|
c9314b3b21426a54196617cbdbed769a
|
A modular neural model of motor synergies
|
[
{
"docid": "ae287f0cce2d1652c7579c02b4692acf",
"text": "Recent studies have shown that multiple brain areas contribute to different stages and aspects of procedural learning. On the basis of a series of studies using a sequence-learning task with trial-and-error, we propose a hypothetical scheme in which a sequential procedure is acquired independently by two cortical systems, one using spatial coordinates and the other using motor coordinates. They are active preferentially in the early and late stages of learning, respectively. Both of the two systems are supported by loop circuits formed with the basal ganglia and the cerebellum, the former for reward-based evaluation and the latter for processing of timing. The proposed neural architecture would operate in a flexible manner to acquire and execute multiple sequential procedures.",
"title": ""
}
] |
[
{
"docid": "70a0d815aaee61633e42ec33ec55eb72",
"text": "Massive amounts of data are available in today’s Digital Libraries (DLs). The challenge is to find relevant information quickly and easily, and to use it effectively. A standard way to access DLs is via a text -based query issued by a single user. Typically, the query results in a potentially very long ordered list of matching documents, that makes it hard for users to find what they are looking for. This paper presents iScape, a shared virtual desktop world dedicated to the collaborative exploration and management of information. Data mining and information visualization techniques are applied to extract and visualize semantic relationships in search results. A three-dimensional (3-D) online browser system is exploited to facilitate complex and sophisticated human-computer and human-human interaction. Informal user studies have been conducted to compare the iScape world with a text -based, a 2-D visual Web interface, and a 3-D non-collaborative CAVE interface. We conclude with a discussion.",
"title": ""
},
{
"docid": "85678fca24cfa94efcc36570b3f1ef62",
"text": "Content-based recommender systems use preference ratings and features that characterize media to model users' interests or information needs for making future recommendations. While previously developed in the music and text domains, we present an initial exploration of content-based recommendation for spoken documents using a corpus of public domain internet audio. Unlike familiar speech technologies of topic identification and spoken document retrieval, our recommendation task requires a more comprehensive notion of document relevance than bags-of-words would supply. Inspired by music recommender systems, we automatically extract a wide variety of content-based features to characterize non-linguistic aspects of the audio such as speaker, language, gender, and environment. To combine these heterogeneous information sources into a single relevance judgement, we evaluate feature, score, and hybrid fusion techniques. Our study provides an essential first exploration of the task and clearly demonstrates the value of a multisource approach over a bag-of-words baseline.",
"title": ""
},
{
"docid": "9a13a2baf55676f82457f47d3929a4e7",
"text": "Humans are a cultural species, and the study of human psychology benefits from attention to cultural influences. Cultural psychology's contributions to psychological science can largely be divided according to the two different stages of scientific inquiry. Stage 1 research seeks cultural differences and establishes the boundaries of psychological phenomena. Stage 2 research seeks underlying mechanisms of those cultural differences. The literatures regarding these two distinct stages are reviewed, and various methods for conducting Stage 2 research are discussed. The implications of culture-blind and multicultural psychologies for society and intergroup relations are also discussed.",
"title": ""
},
{
"docid": "1cae7b0548fc84f00cd36cae1b6f1ceb",
"text": "Combining abstract, symbolic reasoning with continuous neural reasoning is a grand challenge of representation learning. As a step in this direction, we propose a new architecture, called neural equivalence networks, for the problem of learning continuous semantic representations of algebraic and logical expressions. These networks are trained to represent semantic equivalence, even of expressions that are syntactically very different. The challenge is that semantic representations must be computed in a syntax-directed manner, because semantics is compositional, but at the same time, small changes in syntax can lead to very large changes in semantics, which can be difficult for continuous neural architectures. We perform an exhaustive evaluation on the task of checking equivalence on a highly diverse class of symbolic algebraic and boolean expression types, showing that our model significantly outperforms existing architectures.",
"title": ""
},
{
"docid": "0203b3995c21e5e7026fe787eaef6e09",
"text": "Pose SLAM is the variant of simultaneous localization and map building (SLAM) is the variant of SLAM, in which only the robot trajectory is estimated and where landmarks are only used to produce relative constraints between robot poses. To reduce the computational cost of the information filter form of Pose SLAM and, at the same time, to delay inconsistency as much as possible, we introduce an approach that takes into account only highly informative loop-closure links and nonredundant poses. This approach includes constant time procedures to compute the distance between poses, the expected information gain for each potential link, and the exact marginal covariances while moving in open loop, as well as a procedure to recover the state after a loop closure that, in practical situations, scales linearly in terms of both time and memory. Using these procedures, the robot operates most of the time in open loop, and the cost of the loop closure is amortized over long trajectories. This way, the computational bottleneck shifts to data association, which is the search over the set of previously visited poses to determine good candidates for sensor registration. To speed up data association, we introduce a method to search for neighboring poses whose complexity ranges from logarithmic in the usual case to linear in degenerate situations. The method is based on organizing the pose information in a balanced tree whose internal levels are defined using interval arithmetic. The proposed Pose-SLAM approach is validated through simulations, real mapping sessions, and experiments using standard SLAM data sets.",
"title": ""
},
{
"docid": "d449155f5cbf60e942f0206fea151308",
"text": "This paper presents, for the first time, the 3D Glass Photonics (3DGP) technology being developed by Georgia Tech, based on ultra-thin 3D glass interposer [1]. The 3DGP system integrates both optical and electrical interconnects in the same glass substrate using photo-sensitive polymer core, and polymer cladding within an ultra-thin glass substrate. The 3DGP processes are demonstrated using 180 & 100 um thick glass substrates with 30 um diameter via and 8 um wide waveguide structures. The optical vias are used as mode transformer and high-tolerance coupler between fibers and chips. Finite-difference analysis is performed to determine the alignment tolerances of such vias.",
"title": ""
},
{
"docid": "6a6691d92503f98331ad7eed61a9c357",
"text": "This paper presents a new 3D point cloud classification benchmark data set with over four billion manually labelled points, meant as input for data-hungry (deep) learning methods. We also discuss first submissions to the benchmark that use deep convolutional neural networks (CNNs) as a work horse, which already show remarkable performance improvements over state-of-the-art. CNNs have become the de-facto standard for many tasks in computer vision and machine learning like semantic segmentation or object detection in images, but have no yet led to a true breakthrough for 3D point cloud labelling tasks due to lack of training data. With the massive data set presented in this paper, we aim at closing this data gap to help unleash the full potential of deep learning methods for 3D labelling tasks. Our semantic3D.net data set consists of dense point clouds acquired with static terrestrial laser scanners. It contains 8 semantic classes and covers a wide range of urban outdoor scenes: churches, streets, railroad tracks, squares, villages, soccer fields and castles. We describe our labelling interface and show that our data set provides more dense and complete point clouds with much higher overall number of labelled points compared to those already available to the research community. We further provide baseline method descriptions and comparison between methods submitted to our online system. We hope semantic3D.net will pave the way for deep learning methods in 3D point cloud labelling to learn richer, more general 3D representations, and first submissions after only a few months indicate that this might indeed be the case.",
"title": ""
},
{
"docid": "97d7281f14c9d9e745fe6f63044a7d91",
"text": "The Long Term Evolution (LTE) is the latest mobile standard being implemented globally to provide connectivity and access to advanced services for personal mobile devices. Moreover, LTE networks are considered to be one of the main pillars for the deployment of Machine to Machine (M2M) communication systems and the spread of the Internet of Things (IoT). As an enabler for advanced communications services with a subscription count in the billions, security is of capital importance in LTE. Although legacy GSM (Global System for Mobile Communications) networks are known for being insecure and vulnerable to rogue base stations, LTE is assumed to guarantee confidentiality and strong authentication. However, LTE networks are vulnerable to security threats that tamper availability, privacy and authentication. This manuscript, which summarizes and expands the results presented by the author at ShmooCon 2016 [1], investigates the insecurity rationale behind LTE protocol exploits and LTE rogue base stations based on the analysis of real LTE radio link captures from the production network. Implementation results are discussed from the actual deployment of LTE rogue base stations, IMSI catchers and exploits that can potentially block a mobile device. A previously unknown technique to potentially track the location of mobile devices as they move from cell to cell is also discussed, with mitigations being proposed.",
"title": ""
},
{
"docid": "d3ac081abe2895830d3fff7ad2ce0721",
"text": "Despite the ubiquity of textual data, so far few researchers have applied text mining to answer organizational research questions. Text mining, which essentially entails a quantitative approach to the analysis of (usually) voluminous textual data, helps accelerate knowledge discovery by radically increasing the amount data that can be analyzed. This article aims to acquaint organizational researchers with the fundamental logic underpinning text mining, the analytical stages involved, and contemporary techniques that may be used to achieve different types of objectives. The specific analytical techniques reviewed are (a) dimensionality reduction, (b) distance and similarity computing, (c) clustering, (d) topic modeling, and (e) classification. We describe how text mining may extend contemporary organizational research by allowing the testing of existing or new research questions with data that are likely to be rich, contextualized, and ecologically valid. After an exploration of how evidence for the validity of text mining output may be generated, we conclude the article by illustrating the text mining process in a job analysis setting using a dataset composed of job vacancies.",
"title": ""
},
{
"docid": "02cfb4cedd863cbf9364b5a80a46e9c4",
"text": "An engaged lifestyle is seen as an important component of successful ageing. Many older adults with high participation in social and leisure activities report positive wellbeing, a fact that fuelled the original activity theory and that continues to influence researchers, theorists and practitioners. This study’s purpose is to review the conceptualisation and measurement of activity among older adults and the associations reported in the gerontological literature between specific dimensions of activity and wellbeing. We searched published studies that focused on social and leisure activity and wellbeing, and found 42 studies in 44 articles published between 1995 and 2009. They reported from one to 13 activity domains, the majority reporting two or three, such as informal, formal and solitary, or productive versus leisure. Domains associated with subjective wellbeing, health or survival included social, leisure, productive, physical, intellectual, service and solitary activities. Informal social activity has accumulated the most evidence of an influence on wellbeing. Individual descriptors such as gender or physical functioning sometimes moderate these associations, while contextual variables such as choice, meaning or perceived quality play intervening roles. Differences in definitions and measurement make it difficult to draw inferences about this body of evidence on the associations between activity and wellbeing. Activity theory serves as shorthand for these associations, but gerontology must better integrate developmental and psychological constructs into a refined, comprehensive activity theory.",
"title": ""
},
{
"docid": "f72f55da6ec2fdf9d0902648571fd9fc",
"text": "Recently, numerous investigations for stock price prediction and portfolio management using machine learning have been trying to develop efficient mechanical trading systems. But these systems have a limitation in that they are mainly based on the supervised leaming which is not so adequate for leaming problems with long-term goals and delayed rewards. This paper proposes a method of applying reinforcement leaming, suitable for modeling and leaming various kinds of interactions in real situations, to the problem of stock price prediction. The stock price prediction problem is considered as Markov process which can be optimized by reinforcement learning based algorithm. TD(O), a reinforcement learning algorithm which leams only from experiences, is adopted and function approximation by artificial neural network is performed to leam the values of states each of which corresponds to a stock price trend at a given time. An experimental result based on the Korean stock market is presented to evaluate the performance of the proposed method.",
"title": ""
},
{
"docid": "747d9e0eddd0a0a6e9ff54509c3abe3c",
"text": "English. This paper describes the Unitor system that participated to the SENTIment POLarity Classification task proposed in Evalita 2016. The system implements a classification workflow made of several Convolutional Neural Network classifiers, that generalize the linguistic information observed in the training tweets by considering also their context. Moreover, sentiment specific information is injected in the training process by using Polarity Lexicons automatically acquired through the automatic analysis of unlabeled collection of tweets. Unitor achieved the best results in the Subjectivity Classification sub-task, and it scored 2nd in the Polarity Classification sub-task, among about 25 different submissions. Italiano. Questo lavoro descrive il sistema Unitor valutato nel task di SENTIment POLarity Classification proposto all’interno di Evalita 2016. Il sistema é basato su un workflow di classificazione implementato usando Convolutional Neural Network, che generalizzano le evidenze osservabili all’interno dei dati di addestramento analizzando i loro contesti e sfruttando lessici specifici per la analisi del sentimento, generati automaticamente. Il sistema ha ottenuto ottimi risultati, ottenendo la miglior performance nel task di Subjectivity Classification e la seconda nel task di Polarity Classification.",
"title": ""
},
{
"docid": "ac6e52c2681565af02af7ee44bd669c7",
"text": "A novel low-temperature polycrystalline-silicon thin-film-transistor pixel circuit for 3D active-matrix organic light-emitting diode (AMOLED) displays is presented in this work. The proposed pixel circuit employs high frame rate (240 Hz) emission driving scheme and only needs 3.5 μs for input data period. Thus, 3D AMOLED displays can be realized under high speed operations. The simulation results demonstrate excellent stability in the proposed pixel circuit. The relative current error rate is only 0.967% under the threshold voltage deviation ( ΔVTH_DTFT = ± 0.33 V) of driving TFT. With an OLED threshold voltage detecting architecture, the OLED current can be increased with the increased OLED threshold voltage to compensate for the OLED luminance degradation. The proposed pixel circuit can therefore effectively compensate for the DTFT threshold voltage shift and OLED electric degradation at the same time.",
"title": ""
},
{
"docid": "20c57c17bd2db03d017b0f3fa8e2eb23",
"text": "Recent research shows that the i-vector framework for speaker recognition can significantly benefit from phonetic information. A common approach is to use a deep neural network (DNN) trained for automatic speech recognition to generate a universal background model (UBM). Studies in this area have been done in relatively clean conditions. However, strong background noise is known to severely reduce speaker recognition performance. This study investigates a phonetically-aware i-vector system in noisy conditions. We propose a front-end to tackle the noise problem by performing speech separation and examine its performance for both verification and identification tasks. The proposed separation system trains a DNN to estimate the ideal ratio mask of the noisy speech. The separated speech is then used to extract enhanced features for the i-vector framework. We compare the proposed system against a multi-condition trained baseline and a traditional GMM-UBM i-vector system. Our proposed system provides an absolute average improvement of 8% in identification accuracy and 1.2% in equal error rate.",
"title": ""
},
{
"docid": "e2991def3d4b03340b0fc9b708aa1efc",
"text": "Author Samuli Laine Title Efficient Physically-Based Shadow Algorithms This research focuses on developing efficient algorithms for computing shadows in computer-generated images. A distinctive feature of the shadow algorithms presented in this thesis is that they produce correct, physicallybased results, instead of giving approximations whose quality is often hard to ensure or evaluate. Light sources that are modeled as points without any spatial extent produce hard shadows with sharp boundaries. Shadow mapping is a traditional method for rendering such shadows. A shadow map is a depth buffer computed from the scene, using a point light source as the viewpoint. The finite resolution of the shadow map requires that its contents are resampled when determining the shadows on visible surfaces. This causes various artifacts such as incorrect self-shadowing and jagged shadow boundaries. A novel method is presented that avoids the resampling step, and provides exact shadows for every point visible in the image. The shadow volume algorithm is another commonly used algorithm for real-time rendering of hard shadows. This algorithm gives exact results and does not suffer from any resampling problems, but it tends to consume a lot of fillrate, which leads to performance problems. This thesis presents a new technique for locally choosing between two previous shadow volume algorithms with different performance characteristics. A simple criterion for making the local choices is shown to yield better performance than using either of the algorithms alone. Light sources with nonzero spatial extent give rise to soft shadows with smooth boundaries. A novel method is presented that transposes the classical processing order for soft shadow computation in offline rendering. Instead of casting shadow rays, the algorithm first conceptually collects every ray that would need to be cast, and then processes the shadow-casting primitives one by one, hierarchically finding the rays that are blocked. Another new soft shadow algorithm takes a different point of view into computing the shadows. Only the silhouettes of the shadow casters are used for determining the shadows, and an unintrusive execution model makes the algorithm practical for production use in offline rendering. The proposed techniques accelerate the computing of physically-based shadows in real-time and offline rendering. These improvements make it possible to use correct, physically-based shadows in a broad range of scenes that previous methods cannot handle efficiently enough. UDC 004.925, 004.383.5",
"title": ""
},
{
"docid": "4102b836c1fefd0f8686d26f12f8e0ca",
"text": "Operative management of unstable burst vertebral fractures is challenging and debatable. This study of such cases was conducted at the Aga Khan Hospital, Karachi from January 1998 to April 2003. All surgically managed spine injuries were reviewed from case notes and operative records. Clinical outcome was assessed by Hanover spine score and correction of kyphosis was measured for radiological assessment. The results were analyzed by Wilcoxon sign rank test for two related samples and p-value < 0.05 was considered significant. Ten patients were identified by inclusion criteria. There was statistically significant difference between mean pre-and postoperative Hanover spine score (p=0.008). Likewise, there was significant difference between mean immediate postoperative and final follow-up kyphosis. (p=0.006). Critical assessment of neurologic and structural extent of injury, proper pre-operative planning and surgical expertise can optimize the outcome of patients.",
"title": ""
},
{
"docid": "5b786dee43f6b2b15a53bb4f633aefb6",
"text": "Deep learning based on artificial neural networks is a very popular approach to modeling, classifying, and recognizing complex data such as images, speech, and text. The unprecedented accuracy of deep learning methods has turned them into the foundation of new AI-based services on the Internet. Commercial companies that collect user data on a large scale have been the main beneficiaries of this trend since the success of deep learning techniques is directly proportional to the amount of data available for training. Massive data collection required for deep learning presents obvious privacy issues. Users' personal, highly sensitive data such as photos and voice recordings is kept indefinitely by the companies that collect it. Users can neither delete it, nor restrict the purposes for which it is used. Furthermore, centrally kept data is subject to legal subpoenas and extra-judicial surveillance. Many data owners--for example, medical institutions that may want to apply deep learning methods to clinical records--are prevented by privacy and confidentiality concerns from sharing the data and thus benefitting from large-scale deep learning.\n In this paper, we design, implement, and evaluate a practical system that enables multiple parties to jointly learn an accurate neural-network model for a given objective without sharing their input datasets. We exploit the fact that the optimization algorithms used in modern deep learning, namely, those based on stochastic gradient descent, can be parallelized and executed asynchronously. Our system lets participants train independently on their own datasets and selectively share small subsets of their models' key parameters during training. This offers an attractive point in the utility/privacy tradeoff space: participants preserve the privacy of their respective data while still benefitting from other participants' models and thus boosting their learning accuracy beyond what is achievable solely on their own inputs. We demonstrate the accuracy of our privacy-preserving deep learning on benchmark datasets.",
"title": ""
},
{
"docid": "3f220d8863302719d3cf69b7d99f8c4e",
"text": "The numerical representation precision required by the computations performed by Deep Neural Networks (DNNs) varies across networks and between layers of a same network. This observation motivates a precision-based approach to acceleration which takes into account both the computational structure and the required numerical precision representation. This work presents <italic>Stripes</italic> (<italic>STR</italic>), a hardware accelerator that uses bit-serial computations to improve energy efficiency and performance. Experimental measurements over a set of state-of-the-art DNNs for image classification show that <italic>STR</italic> improves performance over a state-of-the-art accelerator from 1.35<inline-formula><tex-math notation=\"LaTeX\">$\\times$</tex-math><alternatives> <inline-graphic xlink:href=\"judd-ieq1-2597140.gif\"/></alternatives></inline-formula> to 5.33<inline-formula> <tex-math notation=\"LaTeX\">$\\times$</tex-math><alternatives><inline-graphic xlink:href=\"judd-ieq2-2597140.gif\"/> </alternatives></inline-formula> and by 2.24<inline-formula><tex-math notation=\"LaTeX\">$\\times$</tex-math> <alternatives><inline-graphic xlink:href=\"judd-ieq3-2597140.gif\"/></alternatives></inline-formula> on average. <italic>STR</italic>’s area and power overhead are estimated at 5 percent and 12 percent respectively. <italic> STR</italic> is 2.00<inline-formula><tex-math notation=\"LaTeX\">$\\times$</tex-math><alternatives> <inline-graphic xlink:href=\"judd-ieq4-2597140.gif\"/></alternatives></inline-formula> more energy efficient than the baseline.",
"title": ""
},
{
"docid": "3e026bbbc8795ed11d06c12faef3286e",
"text": "Research on the organizational implementation of information technology (IT) and social power has favoured explanations based on issues of resource power and process power at the expense of matters of meaning power. As a result, although the existence and importance of meaning power is acknowledged, its distinctive practices and enacted outcomes remain relatively under-theorized and under-explored by IT researchers. This paper focused on unpacking the practices and outcomes associated with the exercise of meaning power within the IT implementation process. Our aim was to analyze the practices employed to construct meaning and enact a collective ‘definition of the situation’. We focused on framing and utilizing the signature matrix technique to represent and analyze the exercise of meaning power in practice. The paper developed and illustrated this conceptual framework using a case study of a conflictual IT implementation in a challenging public sector environment. We concluded by pointing out the situated nature of meaning power practices and the enacted outcomes. Our research extends the literature on IT and social power by offering an analytical framework distinctly suited to the analysis and deeper understanding of the meaning power properties.",
"title": ""
},
{
"docid": "ab152b8a696519abb4406dd8f7c15407",
"text": "While real scenes produce a wide range of brightness variations, vision systems use low dynamic range image detectors that typically provide 8 bits of brightness data at each pixel. The resulting low quality images greatly limit what vision can accomplish today. This paper proposes a very simple method for significantly enhancing the dynamic range of virtually any imaging system. The basic principle is to simultaneously sample the spatial and exposure dimensions of image irradiance. One of several ways to achieve this is by placing an optical mask adjacent to a conventional image detector array. The mask has a pattern with spatially varying transmittance, thereby giving adjacent pixels on the detector different exposures to the scene. The captured image is mapped to a high dynamic range image using an efficient image reconstruction algorithm. The end result is an imaging system that can measure a very wide range of scene radiances and produce a substantially larger number of brightness levels, with a slight reduction in spatial resolution. We conclude with several examples of high dynamic range images computed using spatially varying pixel exposures. 1 High Dynamic Range Imaging Any real-world scene has a significant amount of brightness variation within it. The human eye has a remarkable dynamic range that enables it to detect subtle contrast variations and interpret scenes under a large variety of illumination conditions [Blackwell, 1946]. In contrast, a typical video camera, or a digital still camera, provides only about 8 bits (256 levels) of brightness information at each pixel. As a result, virtually any image captured by a conventional imaging system ends up being too dark in some areas and possibly saturated in others. In computational vision, it is such low quality images that we are left with the task of interpreting. Clearly, the low dynamic range of existing image detectors poses a severe limitation on what computational vision can accomplish. This paper presents a very simple modification that can be made to any conventional imaging system to dramatically increases its dynamic range. The availability of extra bits of data at each image pixel is expected to enhance the robustness of vision algorithms. This work was supported in part by an ONR/DARPA MURI grant under ONR contract No. N00014-97-1-0553 and in part by a David and Lucile Packard Fellowship. Tomoo Mitsunaga is supported by the Sony Corporation. 2 Existing Approaches First, we begin with a brief summary of existing techniques for capturing a high dynamic range image with a low dynamic range image detector. 2.1 Sequential Exposure Change The most obvious approach is to sequentially capture multiple images of the same scene using different exposures. The exposure for each image is controlled by either varying the F-number of the imaging optics or the exposure time of the image detector. Clearly, a high exposure image will be saturated in the bright scene areas but capture the dark regions well. In contrast, a low exposure image will have less saturation in bright regions but end up being too dark and noisy in the dark areas. The complementary nature of these images allows one to combine them into a single high dynamic range image. Such an approach has been employed in [Azuma and Morimura, 1996], [Saito, 1995], [Konishi et al., 1995], [Morimura, 1993], [Ikeda, 1998], [Takahashi et al., 1997], [Burt and Kolczynski, 1993], [Madden, 1993] [Tsai, 1994]. 
In [Mann and Picard, 1995], [Debevec and Malik, 1997] and [Mitsunaga and Nayar, 1999] this approach has been taken one step further by using the acquired images to compute the radiometric response function of the imaging system. The above methods are of course suited only to static scenes; the imaging system, the scene objects and their radiances must remain constant during the sequential capture of images under different exposures. 2.2 Multiple Image Detectors The stationary scene restriction faced by sequential capture is remedied by using multiple imaging systems. This approach has been taken by several investigators [Doi et al., 1986], [Saito, 1995], [Saito, 1996], [Kimura, 1998], [Ikeda, 1998]. Beam splitters are used to generate multiple copies of the optical image of the scene. Each copy is detected by an image detector whose exposure is preset by using an optical attenuator or by changing the exposure time of the detector. This approach has the advantage of producing high dynamic range images in real time. Hence, the scene objects and the imaging system are free to move during the capture process. The disadvantage of course is that this approach is expensive as it requires multiple image detectors, precision optics for the alignment of all the acquired images and additional hardware for the capture and processing of multiple images.",
"title": ""
}
] |
scidocsrr
|
77ce4e914cd4cf346f7bdf5009c5d540
|
Elderly activities recognition and classification for applications in assisted living
|
[
{
"docid": "aeabcc9117801db562d83709fda22722",
"text": "The world’s population is aging at a phenomenal rate. Certain types of cognitive decline, in particular some forms of memory impairment, occur much more frequently in the elderly. This paper describes Autominder, a cognitive orthotic system intended to help older adults adapt to cognitive decline and continue the satisfactory performance of routine activities, thereby potentially enabling them to remain in their own homes longer. Autominder achieves this goal by providing adaptive, personalized reminders of (basic, instrumental, and extended) activities of daily living. Cognitive orthotic systems on the market today mainly provide alarms for prescribed activities at fixed times that are specified in advance. In contrast, Autominder uses a range of AI techniques to model an individual’s daily plans, observe and reason about the execution of those plans, and make decisions about whether and when it is most appropriate to issue reminders. Autominder is currently deployed on a mobile robot, and is being developed as part of the Initiative on Personal Robotic Assistants for the Elderly (the Nursebot project). © 2003 Elsevier B.V. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "e766cd377c223cb3d90272e8c40a54af",
"text": "This paper aims at describing the state of the art on quadratic assignment problems (QAPs). It discusses the most important developments in all aspects of the QAP such as linearizations, QAP polyhedra, algorithms to solve the problem to optimality, heuristics, polynomially solvable special cases, and asymptotic behavior. Moreover, it also considers problems related to the QAP, e.g. the biquadratic assignment problem, and discusses the relationship between the QAP and other well known combinatorial optimization problems, e.g. the traveling salesman problem, the graph partitioning problem, etc. The paper will appear in the Handbook of Combinatorial Optimization to be published by Kluwer Academic Publishers, P. Pardalos and D.-Z. Du, eds.",
"title": ""
},
{
"docid": "4d7d99532c59415cff1a12f2b935921e",
"text": "Many applications in computer graphics and virtual environments need to render datasets with large numbers of primitives and high depth complexity at interactive rates. However, standard techniques like view frustum culling and a hardware z-bu er are unable to display datasets composed of hundred of thousands of polygons at interactive frame rates on current high-end graphics systems. We add a \\conservative\"visibility culling stage to the rendering pipeline, attempting to identify and avoid processing of occluded polygons. Given a moving viewpoint, the algorithm dynamically chooses a set of occluders. Each occluder is used to compute a shadow frustum, and all primitives contained within this frustumare culled. The algorithmhierarchicallytraverses the model, culling out parts not visible from the current viewpoint using e cient, robust, and in some cases specialized interference detection algorithms. The algorithm's performance varies with the location of the viewpoint and the depth complexity of the model. In the worst case it is linear in the input size with a small constant. In this paper, we demonstrate its performance on a city model composed of 500;000 polygons and possessing varying depth complexity. We are able to cull an average of 55% of the polygons that would not be culled by view-frustum culling and obtain a commensurate improvement in frame rate. The overall approach is e ective and scalable, is applicable to all polygonal models, and can be easily implemented on top of view-frustum culling.",
"title": ""
},
{
"docid": "7232868b492b19f6ef5e4cf1de7b6ed7",
"text": "Cognitive linguistics is one of the fastest growing and influential perspectives on the nature of language, the mind, and their relationship with sociophysical (embodied) experience. It is a broad theoretical and methodological enterprise, rather than a single, closely articulated theory. Its primary commitments are outlined. These are the Cognitive Commitment-a commitment to providing a characterization of language that accords with what is known about the mind and brain from other disciplines-and the Generalization Commitment-which represents a dedication to characterizing general principles that apply to all aspects of human language. The article also outlines the assumptions and worldview which arises from these commitments, as represented in the work of leading cognitive linguists. WIREs Cogn Sci 2012, 3:129-141. doi: 10.1002/wcs.1163 For further resources related to this article, please visit the WIREs website.",
"title": ""
},
{
"docid": "87696c01f32e83a2237b83c833cc94b7",
"text": "Image tagging is an essential step for developing Automatic Image Annotation (AIA) methods that are based on the learning by example paradigm. However, manual image annotation, even for creating training sets for machine learning algorithms, requires hard effort and contains human judgment errors and subjectivity. Thus, alternative ways for automatically creating training examples, i.e., pairs of images and tags, are pursued. In this work, we investigate whether tags accompanying photos in the Instagram can be considered as image annotation metadata. If such a claim is proved then Instagram could be used as a very rich, easy to collect automatically, source of training data for the development of AIA techniques. Our hypothesis is that Instagram hashtags, and especially those provided by the photo owner/creator, express more accurately the content of a photo compared to the tags assigned to a photo during explicit image annotation processes like crowdsourcing. In this context, we explore the descriptive power of hashtags by examining whether other users would use the same, with the owner, hashtags to annotate an image. For this purpose 1000 Instagram images were collected and one to four hashtags, considered as the most descriptive ones for the image in question, were chosen among the hashtags used by the photo owner. An online database was constructed to generate online questionnaires containing 20 images each, which were distributed to experiment participants so they can choose the best suitable hashtag for every image according to their interpretation. Results show that an average of 66% of the participants hashtag choices coincide with those suggested by the photo owners; thus, an initial evidence towards our hypothesis confirmation can be claimed. c ⃝ 2016 Qassim University. Production and Hosting by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/). Peer review under responsibility of Qassim University. ∗ Corresponding author. E-mail addresses: [email protected] (S. Giannoulakis), nic http://dx.doi.org/10.1016/j.jides.2016.10.001 2352-6645/ c ⃝ 2016 Qassim University. Production and Hosting by E license (http://creativecommons.org/licenses/by-nc-nd/4.0/). [email protected] (N. Tsapatsoulis). lsevier B.V. This is an open access article under the CC BY-NC-ND J O U R N A L O F I N N O VA T I O N I N D I G I T A L E C O S Y S T E M S 3 ( 2 0 1 6 ) 1 1 4 – 1 2 9 115",
"title": ""
},
{
"docid": "cd9e90ba83156a2c092d68022c4227c9",
"text": "The difficulty of integer factorization is fundamental to modern cryptographic security using RSA encryption and signatures. Although a 512-bit RSA modulus was first factored in 1999, 512-bit RSA remains surprisingly common in practice across many cryptographic protocols. Popular understanding of the difficulty of 512-bit factorization does not seem to have kept pace with developments in computing power. In this paper, we optimize the CADO-NFS and Msieve implementations of the number field sieve for use on the Amazon Elastic Compute Cloud platform, allowing a non-expert to factor 512-bit RSA public keys in under four hours for $75. We go on to survey the RSA key sizes used in popular protocols, finding hundreds or thousands of deployed 512-bit RSA keys in DNSSEC, HTTPS, IMAP, POP3, SMTP, DKIM, SSH, and PGP.",
"title": ""
},
{
"docid": "a172cd697bfcb1f3d2a824bb6a5bb6d1",
"text": "Bitcoin provides two incentives for miners: block rewards and transaction fees. The former accounts for the vast majority of miner revenues at the beginning of the system, but it is expected to transition to the latter as the block rewards dwindle. There has been an implicit belief that whether miners are paid by block rewards or transaction fees does not affect the security of the block chain.\n We show that this is not the case. Our key insight is that with only transaction fees, the variance of the block reward is very high due to the exponentially distributed block arrival time, and it becomes attractive to fork a \"wealthy\" block to \"steal\" the rewards therein. We show that this results in an equilibrium with undesirable properties for Bitcoin's security and performance, and even non-equilibria in some circumstances. We also revisit selfish mining and show that it can be made profitable for a miner with an arbitrarily low hash power share, and who is arbitrarily poorly connected within the network. Our results are derived from theoretical analysis and confirmed by a new Bitcoin mining simulator that may be of independent interest.\n We discuss the troubling implications of our results for Bitcoin's future security and draw lessons for the design of new cryptocurrencies.",
"title": ""
},
{
"docid": "dba24c6bf3e04fc6d8b99a64b66cb464",
"text": "Recommender systems have to serve in online environments which can be highly non-stationary.1. Traditional recommender algorithmsmay periodically rebuild their models, but they cannot adjust to quick changes in trends caused by timely information. In our experiments, we observe that even a simple, but online trained recommender model can perform significantly better than its batch version. We investigate online learning based recommender algorithms that can efficiently handle non-stationary data sets. We evaluate our models over seven publicly available data sets. Our experiments are available as an open source project2.",
"title": ""
},
{
"docid": "25c92d054b39fe4951606c832edf99c0",
"text": "The increasing use of machine learning algorithms, such as Convolutional Neural Networks (CNNs), makes the hardware accelerator approach very compelling. However the question of how to best design an accelerator for a given CNN has not been answered yet, even on a very fundamental level. This paper addresses that challenge, by providing a novel framework that can universally and accurately evaluate and explore various architectural choices for CNN accelerators on FPGAs. Our exploration framework is more extensive than that of any previous work in terms of the design space, and takes into account various FPGA resources to maximize performance including DSP resources, on-chip memory, and off-chip memory bandwidth. Our experimental results using some of the largest CNN models including one that has 16 convolutional layers demonstrate the efficacy of our framework, as well as the need for such a high-level architecture exploration approach to find the best architecture for a CNN model.",
"title": ""
},
{
"docid": "cf248f6d767072a4569e31e49918dea1",
"text": "We describe resources aimed at increasing the usability of the semantic representations utilized within the DELPH-IN (Deep Linguistic Processing with HPSG) consortium. We concentrate in particular on the Dependency Minimal Recursion Semantics (DMRS) formalism, a graph-based representation designed for compositional semantic representation with deep grammars. Our main focus is on English, and specifically English Resource Semantics (ERS) as used in the English Resource Grammar. We first give an introduction to ERS and DMRS and a brief overview of some existing resources and then describe in detail a new repository which has been developed to simplify the use of ERS/DMRS. We explain a number of operations on DMRS graphs which our repository supports, with sketches of the algorithms, and illustrate how these operations can be exploited in application building. We believe that this work will aid researchers to exploit the rich and effective but complex DELPH-IN resources.",
"title": ""
},
{
"docid": "c281538d7aa7bd8727ce4718de82c7c8",
"text": "More than 15 years after model predictive control (MPC) appeared in industry as an effective means to deal with multivariable constrained control problems, a theoretical basis for this technique has started to emerge. The issues of feasibility of the on-line optimization, stability and performance are largely understood for systems described by linear models. Much progress has been made on these issues for non-linear systems but for practical applications many questions remain, including the reliability and efficiency of the on-line computation scheme. To deal with model uncertainty ‘rigorously’ an involved dynamic programming problem must be solved. The approximation techniques proposed for this purpose are largely at a conceptual stage. Among the broader research needs the following areas are identified: multivariable system identification, performance monitoring and diagnostics, non-linear state estimation, and batch system control. Many practical problems like control objective prioritization and symptom-aided diagnosis can be integrated systematically and effectively into the MPC framework by expanding the problem formulation to include integer variables yielding a mixed-integer quadratic or linear program. Efficient techniques for solving these problems are becoming available. © 1999 Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "fa44652ecd36d99d18535966727fb3d4",
"text": "Spatio-temporal cuboid pyramid (STCP) for action recognition using depth motion sequences [1] is influenced by depth camera error which leads the depth motion sequence (DMS) existing many kinds of noise, especially on the surface. It means that the dimension of DMS is awfully high and the feature for action recognition becomes less apparent. In this paper, we present an effective method to reduce noise, which is to segment foreground. We firstly segment and extract human contour in the color image using convolutional network model. Then, human contour is re-segmented utilizing depth information. Thirdly we project each frame of the segmented depth sequence onto three views. We finally extract features from cuboids and recognize human actions. The proposed approach is evaluated on three public benchmark datasets, i.e., UTKinect-Action Dataset, MSRActionPairs Dataset and 3D Online Action Dataset. Experimental results show that our method achieves state-of-the-art performance.",
"title": ""
},
{
"docid": "441e0a882bafc17a75fe9e2dbf3634f1",
"text": "Cloud computing focuses on delivery of reliable, secure, faulttolerant, sustainable, and scalable infrastructures for hosting internet-based application services. These applications have different composition, configuration, and deployment requirements. Cloud service providers are willing to provide large scaled computing infrastructure at a cheap prices. Quantifying the performance of scheduling and allocation policy on a Cloud infrastructure (hardware, software, services) for different application and service models under varying load, energy performance (power consumption, heat dissipation), and system size is an extremely challenging problem to tackle. This problem can be tackle with the help of mobile agents. Mobile agent being a process that can transport its state from one environment to another, with its data intact, and is capable of performing appropriately in the new environment. This work proposes an agent based framework for providing scalability in cloud computing environments supported with algorithms for searching another cloud when the approachable cloud becomes overloaded and for searching closest datacenters with least response time of virtual machine (VM).",
"title": ""
},
{
"docid": "d9a113b6b09874a4cbd9bf2f006504a6",
"text": "Attracting, motivating and retaining knowledge workers have become important in a knowledge-based and tight labour market, where changing knowledge management practices and global convergence of technology has redefined the nature of work. While individualisation of employment practices and team-based work may provide personal and organisational flexibilities, aligning HR and organisational strategies for competitive advantage has become more prominent. This exploratory study identifies the most and least effective HR strategies used by knowledge intensive firms (KIFs) in Singapore for attracting, motivating and retaining these workers. The most popular strategies were not always the most effective, and there appear to be distinctive ‘bundles’ of HR practices for managing knowledge workers. These vary according to whether ownership is foreign or local. A schema, based on statistically significant findings, for improving the effectiveness of these practices in managing knowledge workers is proposed. Cross-cultural research is necessary to establish the extent of diffusion of these practices. Contact: Frank M. Horwitz, Graduate School of Business, Breakwater Campus, University of Cape Town, Private Bag Rondebosch, Cape Town 7700 South Africa. Email: [email protected]",
"title": ""
},
{
"docid": "17d46377e67276ec3e416d6da4bb4965",
"text": "There is an increasing trend of people leaving digital traces through social media. This reality opens new horizons for urban studies. With this kind of data, researchers and urban planners can detect many aspects of how people live in cities and can also suggest how to transform cities into more efficient and smarter places to live in. In particular, their digital trails can be used to investigate tastes of individuals, and what attracts them to live in a particular city or to spend their vacation there. In this paper we propose an unconventional way to study how people experience the city, using information from geotagged photographs that people take at different locations. We compare the spatial behavior of residents and tourists in 10 most photographed cities all around the world. The study was conducted on both a global and local level. On the global scale we analyze the 10 most photographed cities and measure how attractive each city is for people visiting it from other cities within the same country or from abroad. For the purpose of our analysis we construct the users’ mobility network and measure the strength of the links between each pair of cities as a level of attraction of people living in one city (i.e., origin) to the other city (i.e., destination). On the local level we study the spatial distribution of user activity and identify the photographed hotspots inside each city. The proposed methodology and the results of our study are a low cost mean to characterize touristic activity within a certain location and can help cities strengthening their touristic potential.",
"title": ""
},
{
"docid": "48844037619734b041c03a4bc7c680ba",
"text": "Surfactants are compounds that reduce the surface tension of a liquid, the interfacial tension between two liquids, or that between a liquid and a solid. Surfactants are characteristically organic compounds containing both hydrophobic groups (their tails) and hydrophilic groups (their heads). Therefore, a surfactant molecule contains both a water insoluble (and oil soluble component) and a water soluble component. Biosurfactants encompass the properties of dropping surface tension, stabilizing emulsions, promoting foaming and are usually non-toxic and biodegradable. Interest in microbial surfactants has been progressively escalating in recent years due to their diversity, environmentally friendly nature, possibility of large-scale production, selectivity, performance under intense circumstances and their impending applications in environmental fortification. These molecules have a potential to be used in a variety of industries like cosmetics, pharmaceuticals, humectants, food preservatives and detergents. Presently the production of biosurfactants is highly expensive due to the use of synthetic culture media. Therefore, greater emphasis is being laid on procurement of various cheap agro-industrial substrates including vegetable oils, distillery and dairy wastes, soya molasses, animal fat, waste and starchy waste as raw materials. These wastes can be used as substrates for large-scale production of biosurfactants with advanced technology which is the matter of future research. This review article represents an exhaustive evaluation of the raw materials, with respect to their commercial production, fermentation mechanisms, current developments and future perspectives of a variety of approaches of biosurfactant production.",
"title": ""
},
{
"docid": "bf24ab9d5d78287ce9da9b455b779ed3",
"text": "Spatial selective attention and spatial working memory have largely been studied in isolation. Studies of spatial attention have provided clear evidence that observers can bias visual processing towards specific locations, enabling faster and better processing of information at those locations than at unattended locations. We present evidence supporting the view that this process of visual selection is a key component of rehearsal in spatial working memory. Thus, although working memory has sometimes been depicted as a storage system that emerges 'downstream' of early sensory processing, current evidence suggests that spatial rehearsal recruits top-down processes that modulate the earliest stages of visual analysis.",
"title": ""
},
{
"docid": "dd0319de90cd0e58a9298a62c2178b25",
"text": "The extraction of blood vessels from retinal images is an important and challenging task in medical analysis and diagnosis. This paper presents a novel hybrid automatic approach for the extraction of retinal image vessels. The method consists in the application of mathematical morphology and a fuzzy clustering algorithm followed by a purification procedure. In mathematical morphology, the retinal image is smoothed and strengthened so that the blood vessels are enhanced and the background information is suppressed. The fuzzy clustering algorithm is then employed to the previous enhanced image for segmentation. After the fuzzy segmentation, a purification procedure is used to reduce the weak edges and noise, and the final results of the blood vessels are consequently achieved. The performance of the proposed method is compared with some existing segmentation methods and hand-labeled segmentations. The approach has been tested on a series of retinal images, and experimental results show that our technique is promising and effective.",
"title": ""
},
{
"docid": "a34825f20b645a146857c1544c08e66e",
"text": "1. The midterm will have about 5-6 long questions, and about 8-10 short questions. Space will be provided on the actual midterm for you to write your answers. 2. The midterm is meant to be educational, and as such some questions could be quite challenging. Use your time wisely to answer as much as you can! 3. For additional practice, please see CS 229 extra problem sets available at 1. [13 points] Generalized Linear Models Recall that generalized linear models assume that the response variable y (conditioned on x) is distributed according to a member of the exponential family: p(y; η) = b(y) exp(ηT (y) − a(η)), where η = θ T x. For this problem, we will assume η ∈ R. (a) [10 points] Given a training set {(x (i) , y (i))} m i=1 , the loglikelihood is given by (θ) = m i=1 log p(y (i) | x (i) ; θ). Give a set of conditions on b(y), T (y), and a(η) which ensure that the loglikelihood is a concave function of θ (and thus has a unique maximum). Your conditions must be reasonable, and should be as weak as possible. (E.g., the answer \" any b(y), T (y), and a(η) so that (θ) is concave \" is not reasonable. Similarly, overly narrow conditions, including ones that apply only to specific GLMs, are also not reasonable.) (b) [3 points] When the response variable is distributed according to a Normal distribution (with unit variance), we have b(y) = 1 √ 2π e −y 2 2 , T (y) = y, and a(η) = η 2 2. Verify that the condition(s) you gave in part (a) hold for this setting.",
"title": ""
},
{
"docid": "7d14d06a67a87006ac271c16b1c91b16",
"text": "Anti-malware vendors receive daily thousands of potentially malicious binaries to analyse and categorise before deploying the appropriate defence measure. Considering the limitations of existing malware analysis and classification methods, we present MalClassifier, a novel privacy-preserving system for the automatic analysis and classification of malware using network flow sequence mining. MalClassifier allows identifying the malware family behind detected malicious network activity without requiring access to the infected host or malicious executable reducing overall response time. MalClassifier abstracts the malware families' network flow sequence order and semantics behaviour as an n-flow. By mining and extracting the distinctive n-flows for each malware family, it automatically generates network flow sequence behaviour profiles. These profiles are used as features to build supervised machine learning classifiers (K-Nearest Neighbour and Random Forest) for malware family classification. We compute the degree of similarity between a flow sequence and the extracted profiles using a novel fuzzy similarity measure that computes the similarity between flows attributes and the similarity between the order of the flow sequences. For classifier performance evaluation, we use network traffic datasets of ransomware and botnets obtaining 96% F-measure for family classification. MalClassifier is resilient to malware evasion through flow sequence manipulation, maintaining the classifier's high accuracy. Our results demonstrate that this type of network flow-level sequence analysis is highly effective in malware family classification, providing insights on reoccurring malware network flow patterns.",
"title": ""
}
] |
scidocsrr
|
bd95f017591ade84d174f8849e744261
|
Efficient implementation of sorting on multi-core SIMD CPU architecture
|
[
{
"docid": "a1a81d420ef5702483859b01633bb14c",
"text": "Many sorting algorithms have been studied in the past, but there are only a few algorithms that can effectively exploit both SIMD instructions and thread-level parallelism. In this paper, we propose a new parallel sorting algorithm, called aligned-access sort (AA-sort), for shared-memory multi processors. The AA-sort algorithm takes advantage of SIMD instructions. The key to high performance is eliminating unaligned memory accesses that would reduce the effectiveness of SIMD instructions. We implemented and evaluated the AA-sort on PowerPCreg 970MP and Cell Broadband Enginetrade. In summary, a sequential version of the AA-sort using SIMD instructions outperformed IBM's optimized sequential sorting library by 1.8 times and GPUTeraSort using SIMD instructions by 3.3 times on PowerPC 970MP when sorting 32 M of random 32-bit integers. Furthermore, a parallel version of AA-sort demonstrated better scalability with increasing numbers of cores than a parallel version of GPUTeraSort on both platforms.",
"title": ""
}
] |
[
{
"docid": "bf89c380e3ce667f4be2e12685f3d583",
"text": "Prosocial behaviors are an aspect of adolescents’ positive development that has gained greater attention in the developmental literature since the 1990s. In this article, the authors review the literature pertaining to prosocial behaviors during adolescence. The authors begin by defining prosocial behaviors as prior theory and empirical studies have done. They describe antecedents to adolescents’ prosocial behaviors with a focus on two primary factors: socialization and cultural orientations. Accordingly, the authors review prior literature on prosocial behaviors among different ethnic/cultural groups throughout this article. As limited studies have examined prosocial behaviors among some specific ethnic groups, the authors conclude with recommendations for future research. Adolescence is a period of human development marked by several biological, cognitive, and social transitions. Physical changes, such as the onset of puberty and rapid changes in body composition (e.g., height, weight, and sex characteristics) prompt adolescents to engage in greater self-exploration (McCabe and Ricciardelli, 2003). Enhanced cognitive abilities permit adolescents to engage in more symbolic thinking and to contemplate abstract concepts, such as the self and one’s relationship to others (Kuhn, 2009; Steinberg, 2005). Furthermore, adolescence is marked with increased responsibilities at home and in the school context, opportunities for caregiving within the family, and mutuality in peer relationships (American Psychological Association, 2008). Moreover, society demands a greater level of psychosocial maturity and expects greater adherence to social norms from adolescents compared to children (Eccles et al., 2008). Therefore, adolescence presents itself as a time of major life transitions. In light of these myriad transitions, adolescents are further developing prosocial behaviors. Although the emergence of prosocial behaviors (e.g., expressed behaviors that are intended to benefit others) begins in early childhood, the developmental transitions described above allow adolescents to become active agents in their own developmental process. Behavior that is motivated by adolescents’ concern for others is thought to reflect optimal social functioning or prosocial behaviors (American Psychological Association, 2008). While the early literature focused primarily on prosocial behaviors among young children (e.g., Garner, 2006; Garner et al., 2008; Iannotti, 1985) there are several reasons to track prosocial development into adolescence. First and foremost, individuals develop cognitive abilities that allow them to better phenomenologically process and psychologically mediate life experiences that may facilitate (e.g., completing household chores and caring for siblings) or hinder (e.g., interpersonal conflict and perceptions of institutional discrimination) prosocial development (e.g., Brown and Bigler, 2005). Adolescents express more intentionality in which activities they will engage in and become selective in where they choose to devote their energies (Mahoney et al., 2009). Finally, adolescents are afforded more opportunities to express helping behaviors in other social spheres beyond the family context, such as in schools, communities, and civic society (Yates and Youniss, 1996). 
Origins and Definitions of Prosocial Behaviors Since the turn of the twenty-first century, there has been growing interest in understanding the relationships that exist between the strengths of individuals and resources within communities (e.g., person 4 context) in order to identify pathways for healthy development, or to understand how adolescents’ thriving can be promoted. This line of thinking is commonly described as the positive youth development perspective (e.g., Lerner et al., 2009). Although the adolescent literature still predominantly focuses on problematic development (e.g., delinquency and risk-taking behaviors), studies on adolescents’ prosocial development have increased substantially since the 1990s (Eisenberg et al., 2009a), paralleling the paradigm shift from a deficit-based model of development to one focusing on positive attributes of youth (e.g., Benson et al., 2006; Lerner, 2005). Generally described as the expression of voluntary behaviors with the intention to benefit others (Carlo, 2006; Eisenberg, 2006; see full review by Eisenberg et al., 2009a), prosocial behavior is one aspect among others of positive adolescent development that is gaining greater attention in the literature. Theory on prosocial development is rooted in the literature on moral development, which includes cognitive aspects of moral reasoning (e.g., how individuals decide between moral dilemmas; Kohlberg, 1978), moral behaviors (e.g., expression of behaviors that benefit society; Eisenberg and Fabes, 1998), and emotions (e.g., empathy; Eisenberg and Fabes, 1990). Empirical studies on adolescents’ prosocial development have found that different types of prosocial behaviors may exist. For example, Carlo and colleagues (e.g., Carlo et al., 2010; Carlo and Randall, 2002) found six types of prosocial tendencies (intentions to help others): compliant, dire, emotional, altruistic, anonymous, and public. Compliant helping refers to an individual’s intent to assist when asked. Emotional helping refers to helping in emotionally evocative situations (e.g., witnessing another individual crying). Dire helping refers to",
"title": ""
},
{
"docid": "541fb071299f20a242d482bc4b1f94ab",
"text": "This paper describes some of the early developments in the synthetic aperture technique for radar application. The basic principle and later extensions to the theory are described. The results of the first experimental verification at the University of Illinois are given as well as the results of subsequent experiments. The paper also includes a section comparing some of the important features of real and synthetic aperture systems.",
"title": ""
},
{
"docid": "8780b620d228498447c4f1a939fa5486",
"text": "A new mechanism is proposed for securing a blockchain applied to contracts management such as digital rights management. This mechanism includes a new consensus method using a credibility score and creates a hybrid blockchain by alternately using this new method and proof-of-stake. This makes it possible to prevent an attacker from monopolizing resources and to keep securing blockchains.",
"title": ""
},
{
"docid": "77320edf2d8da853b873c71e26802c6e",
"text": "Content Delivery Network (CDN) services largely affect the delivery quality perceived by users. While those services were initially offered by independent entities, some large ISP now develop their own CDN activities to control costs and delivery quality. But this new activity is also a new source of revenues for those vertically integrated ISP-CDNs, which can sell those services to content providers. In this paper, we investigate the impact of having an ISP and a vertically-integrated CDN, on the main actors of the ecosystem (users, competing ISPs). Our approach is based on an economic model of revenues and costs, and a multilevel game-theoretic formulation of the interactions among actors. Our model incorporates the possibility for the vertically-integrated ISP to partially offer CDN services to competitors in order to optimize the trade-off between CDN revenue (if fully offered) and competitive advantage on subscriptions at the ISP level (if not offered to competitors). Our results highlight two counterintuitive phenomena: an ISP may prefer an independent CDN over controlling (integrating) a CDN, and from the user point of view vertical integration is preferable to an independent CDN or a no-CDN configuration. Hence, a regulator may want to elicit such CDN-ISP vertical integrations rather than prevent them.",
"title": ""
},
{
"docid": "59a4471695fff7d42f49d94fc9755772",
"text": "We introduce a computationally efficient algorithm for multi-object tracking by detection that addresses four main challenges: appearance similarity among targets, missing data due to targets being out of the field of view or occluded behind other objects, crossing trajectories, and camera motion. The proposed method uses motion dynamics as a cue to distinguish targets with similar appearance, minimize target mis-identification and recover missing data. Computational efficiency is achieved by using a Generalized Linear Assignment (GLA) coupled with efficient procedures to recover missing data and estimate the complexity of the underlying dynamics. The proposed approach works with track lets of arbitrary length and does not assume a dynamical model a priori, yet it captures the overall motion dynamics of the targets. Experiments using challenging videos show that this framework can handle complex target motions, non-stationary cameras and long occlusions, on scenarios where appearance cues are not available or poor.",
"title": ""
},
{
"docid": "719654900a770c6d2ce5e8f1067fc29b",
"text": "Facial expressions are the facial changes in response to a person’s internal emotional states, intentions, or social communications. Facial expression analysis has been an active research topic for behavioral scientists since the work of Darwin in 1872 [21, 26, 29, 83]. Suwa et al. [90] presented an early attempt to automatically analyze facial expressions by tracking the motion of 20 identified spots on an image sequence in 1978. After that, much progress has been made to build computer systems to help us understand and use this natural form of human communication [5, 7, 8, 17, 23, 32, 43, 45, 57, 64, 77, 92, 95, 106–108, 110]. In this chapter, facial expression analysis refers to computer systems that attempt to automatically analyze and recognize facial motions and facial feature changes from visual information. Sometimes the facial expression analysis has been confused with emotion analysis in the computer vision domain. For emotion analysis, higher level knowledge is required. For example, although facial expressions can convey emotion, they can also express intention, cognitive processes, physical effort, or other intraor interpersonal meanings. Interpretation is aided by context, body gesture, voice, individual differences, and cultural factors as well as by facial configuration and timing [11, 79, 80]. Computer facial expression analysis systems need to analyze the facial actions regardless of context, culture, gender, and so on.",
"title": ""
},
{
"docid": "d67ab983c681136864f4a66c5b590080",
"text": "scoring in DeepQA C. Wang A. Kalyanpur J. Fan B. K. Boguraev D. C. Gondek Detecting semantic relations in text is an active problem area in natural-language processing and information retrieval. For question answering, there are many advantages of detecting relations in the question text because it allows background relational knowledge to be used to generate potential answers or find additional evidence to score supporting passages. This paper presents two approaches to broad-domain relation extraction and scoring in the DeepQA question-answering framework, i.e., one based on manual pattern specification and the other relying on statistical methods for pattern elicitation, which uses a novel transfer learning technique, i.e., relation topics. These two approaches are complementary; the rule-based approach is more precise and is used by several DeepQA components, but it requires manual effort, which allows for coverage on only a small targeted set of relations (approximately 30). Statistical approaches, on the other hand, automatically learn how to extract semantic relations from the training data and can be applied to detect a large amount of relations (approximately 7,000). Although the precision of the statistical relation detectors is not as high as that of the rule-based approach, their overall impact on the system through passage scoring is statistically significant because of their broad coverage of knowledge.",
"title": ""
},
{
"docid": "62fd503d151b97920bcb493ed495f0be",
"text": "Powered by TCPDF (www.tcpdf.org) This material is protected by copyright and other intellectual property rights, and duplication or sale of all or part of any of the repository collections is not permitted, except that material may be duplicated by you for your research use or educational purposes in electronic or print form. You must obtain permission for any other use. Electronic or print copies may not be offered, whether for sale or otherwise to anyone who is not an authorised user. Athukorala, Kumaripaba; Gowacka, Dorota; Jacucci, Giulio; Oulasvirta, Antti; Vreeken, Jilles",
"title": ""
},
{
"docid": "0efe3ccc1c45121c5167d3792a7fcd25",
"text": "This paper addresses the motion planning problem while considering Human-Robot Interaction (HRI) constraints. The proposed planner generates collision-free paths that are acceptable and legible to the human. The method extends our previous work on human-aware path planning to cluttered environments. A randomized cost-based exploration method provides an initial path that is relevant with respect to HRI and workspace constraints. The quality of the path is further improved with a local path-optimization method. Simulation results on mobile manipulators in the presence of humans demonstrate the overall efficacy of the approach.",
"title": ""
},
{
"docid": "da7cc08e5fd7275d2f4194f83f1e7365",
"text": "Recursive neural networks (RNN) and their recently proposed extension recursive long short term memory networks (RLSTM) are models that compute representations for sentences, by recursively combining word embeddings according to an externally provided parse tree. Both models thus, unlike recurrent networks, explicitly make use of the hierarchical structure of a sentence. In this paper, we demonstrate that RNNs nevertheless suffer from the vanishing gradient and long distance dependency problem, and that RLSTMs greatly improve over RNN’s on these problems. We present an artificial learning task that allows us to quantify the severity of these problems for both models. We further show that a ratio of gradients (at the root node and a focal leaf node) is highly indicative of the success of backpropagation at optimizing the relevant weights low in the tree. This paper thus provides an explanation for existing, superior results of RLSTMs on tasks such as sentiment analysis, and suggests that the benefits of including hierarchical structure and of including LSTM-style gating are complementary.",
"title": ""
},
{
"docid": "ab2e9a230c9aeec350dff6e3d239c7d8",
"text": "Expression and pose variations are major challenges for reliable face recognition (FR) in 2D. In this paper, we aim to endow state of the art face recognition SDKs with robustness to facial expression variations and pose changes by using an extended 3D Morphable Model (3DMM) which isolates identity variations from those due to facial expressions. Specifically, given a probe with expression, a novel view of the face is generated where the pose is rectified and the expression neutralized. We present two methods of expression neutralization. The first one uses prior knowledge to infer the neutral expression image from an input image. The second method, specifically designed for verification, is based on the transfer of the gallery face expression to the probe. Experiments using rectified and neutralized view with a standard commercial FR SDK on two 2D face databases, namely Multi-PIE and AR, show significant performance improvement of the commercial SDK to deal with expression and pose variations and demonstrates the effectiveness of the proposed approach.",
"title": ""
},
{
"docid": "553de71fcc3e4e6660015632eee751b1",
"text": "Data governance is an emerging research area getting attention from information systems (IS) scholars and practitioners. In this paper I take a look at existing literature and current state-of-the-art in data governance. I found out that there is only a limited amount of existing scientific literature, but many practitioners are already treating data as a valuable corporate asset. The paper describes an action design research project that will be conducted in 2012-2016 and is expected to result in a generic data governance framework.",
"title": ""
},
{
"docid": "f672df401b24571f81648066b3181890",
"text": "We consider the general problem of modeling temporal data with long-range dependencies, wherein new observations are fully or partially predictable based on temporally-distant, past observations. A sufficiently powerful temporal model should separate predictable elements of the sequence from unpredictable elements, express uncertainty about those unpredictable elements, and rapidly identify novel elements that may help to predict the future. To create such models, we introduce Generative Temporal Models augmented with external memory systems. They are developed within the variational inference framework, which provides both a practical training methodology and methods to gain insight into the models’ operation. We show, on a range of problems with sparse, long-term temporal dependencies, that these models store information from early in a sequence, and reuse this stored information efficiently. This allows them to perform substantially better than existing models based on well-known recurrent neural networks, like LSTMs.",
"title": ""
},
{
"docid": "b02c718acfab40a33840eec013a09bda",
"text": "Smartphones today are ubiquitous source of sensitive information. Information leakage instances on the smartphones are on the rise because of exponential growth in smartphone market. Android is the most widely used operating system on smartphones. Many information flow tracking and information leakage detection techniques are developed on Android operating system. Taint analysis is commonly used data flow analysis technique which tracks the flow of sensitive information and its leakage. This paper provides an overview of existing Information flow tracking techniques based on the Taint analysis for android applications. It is observed that static analysis techniques look at the complete program code and all possible paths of execution before its run, whereas dynamic analysis looks at the instructions executed in the program-run in the real time. We provide in depth analysis of both static and dynamic taint analysis approaches.",
"title": ""
},
{
"docid": "ba57149e82718bad622df36852906531",
"text": "The classical psychedelic drugs, including psilocybin, lysergic acid diethylamide and mescaline, were used extensively in psychiatry before they were placed in Schedule I of the UN Convention on Drugs in 1967. Experimentation and clinical trials undertaken prior to legal sanction suggest that they are not helpful for those with established psychotic disorders and should be avoided in those liable to develop them. However, those with so-called 'psychoneurotic' disorders sometimes benefited considerably from their tendency to 'loosen' otherwise fixed, maladaptive patterns of cognition and behaviour, particularly when given in a supportive, therapeutic setting. Pre-prohibition studies in this area were sub-optimal, although a recent systematic review in unipolar mood disorder and a meta-analysis in alcoholism have both suggested efficacy. The incidence of serious adverse events appears to be low. Since 2006, there have been several pilot trials and randomised controlled trials using psychedelics (mostly psilocybin) in various non-psychotic psychiatric disorders. These have provided encouraging results that provide initial evidence of safety and efficacy, however the regulatory and legal hurdles to licensing psychedelics as medicines are formidable. This paper summarises clinical trials using psychedelics pre and post prohibition, discusses the methodological challenges of performing good quality trials in this area and considers a strategic approach to the legal and regulatory barriers to licensing psychedelics as a treatment in mainstream psychiatry. This article is part of the Special Issue entitled 'Psychedelics: New Doors, Altered Perceptions'.",
"title": ""
},
{
"docid": "d2edbca2ed1e4952794d97f6e34e02e4",
"text": "In today’s world, almost everybody is affluent with computers and network based technology is growing by leaps and bounds. So, network security has become very important, rather an inevitable part of computer system. An Intrusion Detection System (IDS) is designed to detect system attacks and classify system activities into normal and abnormal form. Machine learning techniques have been applied to intrusion detection systems which have an important role in detecting Intrusions. This paper reviews different machine approaches for Intrusion detection system. This paper also presents the system design of an Intrusion detection system to reduce false alarm rate and improve accuracy to detect intrusion.",
"title": ""
},
{
"docid": "7af1ddcefae86ffa989ddd106f032002",
"text": "In this paper, we study counterfactual fairness in text classification, which asks the question: How would the prediction change if the sensitive attribute referenced in the example were different? Toxicity classifiers demonstrate a counterfactual fairness issue by predicting that “Some people are gay” is toxic while “Some people are straight” is nontoxic. We offer a metric, counterfactual token fairness (CTF), for measuring this particular form of fairness in text classifiers, and describe its relationship with group fairness. Further, we offer three approaches, blindness, counterfactual augmentation, and counterfactual logit pairing (CLP), for optimizing counterfactual token fairness during training, bridging the robustness and fairness literature. Empirically, we find that blindness and CLP address counterfactual token fairness. The methods do not harm classifier performance, and have varying tradeoffs with group fairness. These approaches, both for measurement and optimization, provide a new path forward for addressing fairness concerns in text classification.",
"title": ""
},
{
"docid": "4adcbd3cdb868406a7e191063ac91573",
"text": "In recent years, the increasing diffusion of malicious software has encouraged the adoption of advanced machine learning algorithms to timely detect new threats. A cloud-based approach allows to exploit the big data produced by client agents to train such algorithms, but on the other hand, poses severe challenges on their scalability and performance. We propose a hybrid cloud-based malware detection system in which static and dynamic analyses are combined in order to find a good trade-off between response time and detection accuracy. Our system performs a continuous learning process of its models, based on deep networks, by exploiting the growing amount of data provided by clients. The preliminary experimental evaluation confirms the suitability of the approach proposed here.",
"title": ""
},
{
"docid": "1f53e890c8a1b9c9a8ae450ecde0de8a",
"text": "BACKGROUND AND OBJECTIVE\nThe identification and quantification of potential drug-drug interactions is important for avoiding or minimizing the interaction-induced adverse events associated with specific drug combinations. Clinical studies in healthy subjects were performed to evaluate potential pharmacokinetic interactions between vortioxetine (Lu AA21004) and co-administered agents, including fluconazole (cytochrome P450 [CYP] 2C9, CYP2C19 and CYP3A inhibitor), ketoconazole (CYP3A and P-glycoprotein inhibitor), rifampicin (CYP inducer), bupropion (CYP2D6 inhibitor and CYP2B6 substrate), ethinyl estradiol/levonorgestrel (CYP3A substrates) and omeprazole (CYP2C19 substrate and inhibitor).\n\n\nMETHODS\nThe ratio of central values of the test treatment to the reference treatment for relevant parameters (e.g., area under the plasma concentration-time curve [AUC] and maximum plasma concentration [C max]) was used to assess pharmacokinetic interactions.\n\n\nRESULTS\nCo-administration of vortioxetine had no effect on the AUC or C max of ethinyl estradiol/levonorgestrel or 5'-hydroxyomeprazole, or the AUC of bupropion; the 90 % confidence intervals for these ratios of central values were within 80-125 %. Steady-state AUC and C max of vortioxetine increased when co-administered with bupropion (128 and 114 %, respectively), fluconazole (46 and 15 %, respectively) and ketoconazole (30 and 26 %, respectively), and decreased by 72 and 51 %, respectively, when vortioxetine was co-administered with rifampicin. Concomitant therapy was generally well tolerated; most adverse events were mild or moderate in intensity.\n\n\nCONCLUSION\nDosage adjustment may be required when vortioxetine is co-administered with bupropion or rifampicin.",
"title": ""
},
{
"docid": "1a7a66f5d4f2ea918a9267ee24c57586",
"text": "Elements associated with total suspended particulate matter (TSP) in Jeddah city were determined. Using high-volume samplers, TSP samples were simultaneously collected over a one-year period from seven sampling sites. Samples were analyzed for Al, Ba, Ca, Cu, Mg, Fe, Mn, Zn, Ti, V, Cr, Co, Ni, As, and Sr. Results revealed great dependence of element contents on spatial and temporal variations. Two sites characterized by busy roads, workshops, heavy population, and heavy trucking have high levels of all measured elements. Concentrations of most elements at the two sites exhibit strong spatial gradients and concentrations of elements at these sites are higher than other locations. The highest concentrations of elements were observed during June-August because of dust storms, significant increase in energy consumption, and active surface winds. Enrichment factors of elements at the high-level sites have values in the range >10~60 while for Cu and Zn the enrichment factors are much higher (~0->700) indicating that greater percentage of TSP composition for these three elements in air comes from anthropogenic activities.",
"title": ""
}
] |
scidocsrr
|
52d29bf126cff8363b890e841a79d4d3
|
A Comparative Evaluation of SiC Power Devices for High-Performance Domestic Induction Heating
|
[
{
"docid": "2706d3b3774cf238d07c1796c1901b95",
"text": "Domestic induction appliances require power converters that feature high efficiency and accurate power control in a wide range of operating conditions. To achieve this modulation techniques play a key role to optimize the power converter operation. In this paper, a series resonant inverter featuring reverse-blocking insulated gate bipolar transistors and an optimized modulation technique are proposed. An analytical study of the converter operation is performed, and the main simulation results are shown. The proposed topology reduces both conduction and switching losses, increasing significantly the power converter efficiency. Moreover, the proposed modulation technique achieves linear output power control, improving the final appliance performance. The results derived from this analysis are tested by means of an experimental prototype, verifying the feasibility of the proposed converter and modulation technique.",
"title": ""
},
{
"docid": "9123ff1c2e6c52bf9a16a6ed4c67f151",
"text": "Domestic induction cookers operation is based on a resonant inverter which supplies medium-frequency currents (20-100 kHz) to an inductor, which heats up the pan. The variable load that is inherent to this application requires the use of a reliable and load-adaptive control algorithm. In addition, a wide output power range is required to get a satisfactory user performance. In this paper, a control algorithm to cover the variety of loads and the output power range is proposed. The main design criteria are efficiency, power balance, acoustic noise, flicker emissions, and user performance. As a result of the analysis, frequency limit and power level limit algorithms are proposed based on square wave and pulse density modulations. These have been implemented in a field-programmable gate array, including output power feedback and mains-voltage zero-cross-detection circuitry. An experimental verification has been performed using a commercial induction heating inverter. This provides a convenient experimental test bench to analyze the viability of the proposed algorithm.",
"title": ""
}
] |
[
{
"docid": "0545a5fef32e4bb2ef482937429397ec",
"text": "We present Chameleon, a novel hybrid (mixed-protocol) framework for secure function evaluation (SFE) which enables two parties to jointly compute a function without disclosing their private inputs. Chameleon combines the best aspects of generic SFE protocols with the ones that are based upon additive secret sharing. In particular, the framework performs linear operations in the ring $\\mathbbZ _2^l $ using additively secret shared values and nonlinear operations using Yao's Garbled Circuits or the Goldreich-Micali-Wigderson protocol. Chameleon departs from the common assumption of additive or linear secret sharing models where three or more parties need to communicate in the online phase: the framework allows two parties with private inputs to communicate in the online phase under the assumption of a third node generating correlated randomness in an offline phase. Almost all of the heavy cryptographic operations are precomputed in an offline phase which substantially reduces the communication overhead. Chameleon is both scalable and significantly more efficient than the ABY framework (NDSS'15) it is based on. Our framework supports signed fixed-point numbers. In particular, Chameleon's vector dot product of signed fixed-point numbers improves the efficiency of mining and classification of encrypted data for algorithms based upon heavy matrix multiplications. Our evaluation of Chameleon on a 5 layer convolutional deep neural network shows 133x and 4.2x faster executions than Microsoft CryptoNets (ICML'16) and MiniONN (CCS'17), respectively.",
"title": ""
},
{
"docid": "3f06fc0b50a1de5efd7682b4ae9f5a46",
"text": "We present ShadowDraw, a system for guiding the freeform drawing of objects. As the user draws, ShadowDraw dynamically updates a shadow image underlying the user's strokes. The shadows are suggestive of object contours that guide the user as they continue drawing. This paradigm is similar to tracing, with two major differences. First, we do not provide a single image from which the user can trace; rather ShadowDraw automatically blends relevant images from a large database to construct the shadows. Second, the system dynamically adapts to the user's drawings in real-time and produces suggestions accordingly. ShadowDraw works by efficiently matching local edge patches between the query, constructed from the current drawing, and a database of images. A hashing technique enforces both local and global similarity and provides sufficient speed for interactive feedback. Shadows are created by aggregating the edge maps from the best database matches, spatially weighted by their match scores. We test our approach with human subjects and show comparisons between the drawings that were produced with and without the system. The results show that our system produces more realistically proportioned line drawings.",
"title": ""
},
{
"docid": "ae1a1875fd56eb83d2f087fbd5559efd",
"text": "Hippocampus and striatum play distinctive roles in memory processes since declarative and non-declarative memory systems may act independently. However, hippocampus and striatum can also be engaged to function in parallel as part of a dynamic system to integrate previous experience and adjust behavioral responses. In these structures the formation, storage, and retrieval of memory require a synaptic mechanism that is able to integrate multiple signals and to translate them into persistent molecular traces at both the corticostriatal and hippocampal/limbic synapses. The best cellular candidate for this complex synthesis is represented by long-term potentiation (LTP). A common feature of LTP expressed in these two memory systems is the critical requirement of convergence and coincidence of glutamatergic and dopaminergic inputs to the dendritic spines of the neurons expressing this form of synaptic plasticity. In experimental models of Parkinson's disease abnormal accumulation of α-synuclein affects these two memory systems by altering two major synaptic mechanisms underlying cognitive functions in cholinergic striatal neurons, likely implicated in basal ganglia dependent operative memory, and in the CA1 hippocampal region, playing a central function in episodic/declarative memory processes.",
"title": ""
},
{
"docid": "fc1baaeb129ace3a6e76d447b3199bd2",
"text": "Many computer vision problems can be formulated in a Bayesian framework based on Markov random fields (MRF) or conditional random fields (CRF). Generally, the MRF/CRF model is learned independently of the inference algorithm that is used to obtain the final result. In this paper, we observe considerable gains in speed and accuracy by training the MRF/CRF model together with a fast and suboptimal inference algorithm. An active random field (ARF) is defined as a combination of a MRF/CRF based model and a fast inference algorithm for the MRF/CRF model. This combination is trained through an optimization of a loss function and a training set consisting of pairs of input images and desired outputs. We apply the ARF concept to image denoising, using the Fields of Experts MRF together with a 1-4 iteration gradient descent algorithm for inference. Experimental validation on unseen data shows that the ARF approach obtains an improved benchmark performance as well as a 1000-3000 times speedup compared to the Fields of Experts MRF. Using the ARF approach, image denoising can be performed in real-time, at 8 fps on a single CPU for a 256times256 image sequence, with close to state-of-the-art accuracy.",
"title": ""
},
{
"docid": "c4c5401a3aac140a5c0ef254b08943b4",
"text": "Tasks recognizing named entities such as products, people names, or locations from documents have recently received significant attention in the literature. Many solutions to these tasks assume the existence of reference entity tables. An important challenge that needs to be addressed in the entity extraction task is that of ascertaining whether or not a candidate string approximately matches with a named entity in a given reference table.\n Prior approaches have relied on string-based similarity which only compare a candidate string and an entity it matches with. In this paper, we exploit web search engines in order to define new similarity functions. We then develop efficient techniques to facilitate approximate matching in the context of our proposed similarity functions. In an extensive experimental evaluation, we demonstrate the accuracy and efficiency of our techniques.",
"title": ""
},
{
"docid": "abf47e7d497c83b015ad0ba818e17847",
"text": "The staggering amounts of content readily available to us via digital channels can often appear overwhelming. While much research has focused on aiding people at selecting relevant articles to read, only few approaches have been developed to assist readers in more efficiently reading an individual text. In this paper, we present HiText, a simple yet effective way of dynamically marking parts of a document in accordance with their salience. Rather than skimming a text by focusing on randomly chosen sentences, students and other readers can direct their attention to sentences determined to be important by our system. For this, we rely on a deep learning-based sentence ranking method. Our experiments show that this results in marked increases in user satisfaction and reading efficiency, as assessed using TOEFL-style reading comprehension tests.",
"title": ""
},
{
"docid": "99361418a043f546f5eaed54746d6abc",
"text": "Non-negative Matrix Factorization (NMF) and Probabilistic Latent Semantic Indexing (PLSI) have been successfully applied to document clustering recently. In this paper, we show that PLSI and NMF (with the I-divergence objective function) optimize the same objective function, although PLSI and NMF are different algorithms as verified by experiments. This provides a theoretical basis for a new hybrid method that runs PLSI and NMF alternatively, each jumping out of local minima of the other method successively, thus achieving a better final solution. Extensive experiments on five real-life datasets show relations between NMF and PLSI, and indicate the hybrid method leads to significant improvements over NMFonly or PLSI-only methods. We also show that at first order approximation, NMF is identical to χ-statistic.",
"title": ""
},
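For reference, the I-divergence objective mentioned above is the standard one (notation assumed here, not copied from the paper): NMF seeks nonnegative factors $W, H$ minimizing

$$ D_I(X \,\|\, WH) \;=\; \sum_{ij} \Big( X_{ij} \log \frac{X_{ij}}{(WH)_{ij}} \;-\; X_{ij} \;+\; (WH)_{ij} \Big), $$

while PLSI maximizes the multinomial log-likelihood $\sum_{ij} X_{ij} \log \sum_k P(z_k)\,P(w_i \mid z_k)\,P(d_j \mid z_k)$. Up to normalization constraints and additive constants these are the same criterion, which is the equivalence the passage exploits to interleave the two algorithms.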
{
"docid": "ba2632b7a323e785b57328d32a26bc99",
"text": "Modern malware is designed with mutation characteristics, namely polymorphism and metamorphism, which causes an enormous growth in the number of variants of malware samples. Categorization of malware samples on the basis of their behaviors is essential for the computer security community, because they receive huge number of malware everyday, and the signature extraction process is usually based on malicious parts characterizing malware families. Microsoft released a malware classification challenge in 2015 with a huge dataset of near 0.5 terabytes of data, containing more than 20K malware samples. The analysis of this dataset inspired the development of a novel paradigm that is effective in categorizing malware variants into their actual family groups. This paradigm is presented and discussed in the present paper, where emphasis has been given to the phases related to the extraction, and selection of a set of novel features for the effective representation of malware samples. Features can be grouped according to different characteristics of malware behavior, and their fusion is performed according to a per-class weighting paradigm. The proposed method achieved a very high accuracy ($\\approx$ 0.998) on the Microsoft Malware Challenge dataset.",
"title": ""
},
{
"docid": "46938d041228481cf3363f2c6dfcc524",
"text": "This paper investigates conditions under which modi cations to the reward function of a Markov decision process preserve the op timal policy It is shown that besides the positive linear transformation familiar from utility theory one can add a reward for tran sitions between states that is expressible as the di erence in value of an arbitrary poten tial function applied to those states Further more this is shown to be a necessary con dition for invariance in the sense that any other transformation may yield suboptimal policies unless further assumptions are made about the underlying MDP These results shed light on the practice of reward shap ing a method used in reinforcement learn ing whereby additional training rewards are used to guide the learning agent In par ticular some well known bugs in reward shaping procedures are shown to arise from non potential based rewards and methods are given for constructing shaping potentials corresponding to distance based and subgoal based heuristics We show that such po tentials can lead to substantial reductions in learning time",
"title": ""
},
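The invariance result summarized above is usually stated through a potential function; in the common notation (assumed here), the admissible shaping reward for a transition from $s$ to $s'$ is

$$ F(s, a, s') \;=\; \gamma\,\Phi(s') \;-\; \Phi(s), $$

for an arbitrary $\Phi : S \to \mathbb{R}$ and discount factor $\gamma$. Training with $R'(s,a,s') = R(s,a,s') + F(s,a,s')$ then leaves the set of optimal policies unchanged, and shaping rewards that cannot be written in this form are exactly the non-potential-based ones the passage links to well-known shaping bugs (e.g., an agent cycling through states to collect the bonus repeatedly).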
{
"docid": "ecabde376c5611240e35d3eb574b1979",
"text": "For high precision Synthetic Aperture Radar (SAR) processing, the determination of the Doppler centroid is indispensable. The Doppler frequency estimated from azimuth spectra, however, suffers from the fact that the data are sampled with the pulse repetition frequency (PRF) and an ambiguity about the correct PRF band remains. A new algorithm to resolve this ambiguity is proposed. It uses the fact that the Doppler centroid depends linearly on the transmitted radar frequency for a given antenna squint angle. This dependence is not subject to PRF ambiguities. It can be measured by Fourier transforming the SAR data in the range direction and estimating the Doppler centroid at each range frequency. The achievable accuracy is derived theoretically and verified with Seasat data of different scene content. The algorithm works best with low contrast scenes, where the conventional look correlation technique fails. It needs no iterative processing of the SAR data and causes only low computational load.",
"title": ""
},
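The linear dependence the passage relies on can be written explicitly for standard SAR geometry (symbols assumed here): for relative velocity $v$, antenna squint angle $\psi$, and transmitted frequency $f$, the Doppler centroid is

$$ f_{DC}(f) \;=\; \frac{2\, v \sin\psi}{c}\, f, $$

whereas an estimate from azimuth spectra is only known modulo the PRF, i.e. $\hat{f}_{DC} = f_{DC} - k\,\mathrm{PRF}$ for an unknown integer $k$. Because the ambiguity term does not change with $f$ while the true centroid scales linearly with it, fitting the slope of the per-range-frequency estimates recovers the absolute centroid and hence resolves $k$.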
{
"docid": "ffac82a212541d2d86cb50c973ecdacf",
"text": "This paper presents our English–German Automatic Post-Editing (APE) system submitted to the APE Task organized at WMT 2018 (Chatterjee et al., 2018). The proposed model is an extension of the transformer architecture: two separate self-attention-based encoders encode the machine translation output (mt) and the source (src), followed by a joint encoder that attends over a combination of these two encoded sequences (encsrc and encmt) for generating the post-edited sentence. We compare this multi-source architecture (i.e, {src, mt} → pe) to a monolingual transformer (i.e., mt → pe) model and an ensemble combining the multi-source {src, mt} → pe and singlesource mt → pe models. For both the PBSMT and the NMT task, the ensemble yields the best results, followed by the multi-source model and last the singlesource approach. Our best model, the ensemble, achieves a BLEU score of 66.16 and 74.22 for the PBSMT and NMT task, respectively.",
"title": ""
},
{
"docid": "08634303d285ec95873e003eeac701eb",
"text": "This paper describes the application of adaptive neuro-fuzzy inference system (ANFIS) model for classification of electroencephalogram (EEG) signals. Decision making was performed in two stages: feature extraction using the wavelet transform (WT) and the ANFIS trained with the backpropagation gradient descent method in combination with the least squares method. Five types of EEG signals were used as input patterns of the five ANFIS classifiers. To improve diagnostic accuracy, the sixth ANFIS classifier (combining ANFIS) was trained using the outputs of the five ANFIS classifiers as input data. The proposed ANFIS model combined the neural network adaptive capabilities and the fuzzy logic qualitative approach. Some conclusions concerning the saliency of features on classification of the EEG signals were obtained through analysis of the ANFIS. The performance of the ANFIS model was evaluated in terms of training performance and classification accuracies and the results confirmed that the proposed ANFIS model has potential in classifying the EEG signals.",
"title": ""
},
{
"docid": "0a4573a440eb40a667c18923b0f35636",
"text": "Article history: Received 31 October 2016 Received in revised form 4 May 2017 Accepted 5 July 2017 Available online xxxx",
"title": ""
},
{
"docid": "30941e0bc8575047d1adc8c20983823b",
"text": "The world has changed dramatically for wind farm operators and service providers in the last decade. Organizations whose turbine portfolios was counted in 10-100s ten years ago are now managing large scale operation and service programs for fleet sizes well above one thousand turbines. A big challenge such organizations now face is the question of how the massive amount of operational data that are generated by large fleets are effectively managed and how value is gained from the data. A particular hard challenge is the handling of data streams collected from advanced condition monitoring systems. These data are highly complex and typically require expert knowledge to interpret correctly resulting in poor scalability when moving to large Operation and Maintenance (O&M) platforms.",
"title": ""
},
{
"docid": "16118317af9ae39ee95765616c5506ed",
"text": "Generative Adversarial Networks (GANs) are shown to be successful at generating new and realistic samples including 3D object models. Conditional GAN, a variant of GANs, allows generating samples in given conditions. However, objects generated for each condition are different and it does not allow generation of the same object in different conditions. In this paper, we first adapt conditional GAN, which is originally designed for 2D image generation, to the problem of generating 3D models in different rotations. We then propose a new approach to guide the network to generate the same 3D sample in different and controllable rotation angles (sample pairs). Unlike previous studies, the proposed method does not require modification of the standard conditional GAN architecture and it can be integrated into the training step of any conditional GAN. Experimental results and visual comparison of 3D models show that the proposed method is successful at generating model pairs in different conditions.",
"title": ""
},
{
"docid": "6c1c3bc94314ce1efae62ac3ec605d4a",
"text": "Solar energy is an abundant renewable energy source (RES) which is available without any price from the Sun to the earth. It can be a good alternative of energy source in place of non-renewable sources (NRES) of energy like as fossil fuels and petroleum articles. Sun light can be utilized through solar cells which fulfills the need of energy of the utilizer instead of energy generation by NRES. The development of solar cells has crossed by a number of modifications from one age to another. The cost and efficiency of solar cells are the obstacles in the advancement. In order to select suitable solar photovoltaic (PV) cells for a particular area, operators are needed to sense the basic mechanisms and topologies of diverse solar PV with maximum power point tracking (MPPT) methodologies that are checked to a great degree. In this article, authors reviewed and analyzed a successive growth in the solar PV cell research from one decade to other, and explained about their coming fashions and behaviors. This article also attempts to emphasize on many experiments and technologies to contribute the perks of solar energy.",
"title": ""
},
{
"docid": "fb5f6eeff54e54034970d6bcaaacb6ec",
"text": "Despite superior training outcomes, adaptive optimization methods such as Adam, Adagrad or RMSprop have been found to generalize poorly compared to Stochastic gradient descent (SGD). These methods tend to perform well in the initial portion of training but are outperformed by SGD at later stages of training. We investigate a hybrid strategy that begins training with an adaptive method and switches to SGD when appropriate. Concretely, we propose SWATS, a simple strategy which Switches from Adam to SGD when a triggering condition is satisfied. The condition we propose relates to the projection of Adam steps on the gradient subspace. By design, the monitoring process for this condition adds very little overhead and does not increase the number of hyperparameters in the optimizer. We report experiments on several standard benchmarks such as: ResNet, SENet, DenseNet and PyramidNet for the CIFAR-10 and CIFAR-100 data sets, ResNet on the tiny-ImageNet data set and language modeling with recurrent networks on the PTB and WT2 data sets. The results show that our strategy is capable of closing the generalization gap between SGD and Adam on a majority of the tasks.",
"title": ""
},
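The switching strategy above can be sketched in a short training loop. The outline below is illustrative only: the optimizer hand-off uses real PyTorch optimizers, but the trigger is reduced to a placeholder epoch count rather than the paper's Adam-step projection criterion, and the SGD learning rate is an arbitrary constant instead of the estimated one.

```python
import torch

def train_with_switch(model, loader, loss_fn, switch_epoch=10, epochs=30):
    """Adam -> SGD hand-off in the spirit of SWATS (simplified sketch)."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    switched = False
    for epoch in range(epochs):
        # Placeholder trigger: SWATS itself monitors the projection of the
        # Adam step onto the gradient and switches once it stabilizes.
        if not switched and epoch == switch_epoch:
            opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
            switched = True
        for x, y in loader:
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()
    return model
```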
{
"docid": "5ed955ddaaf09fc61c214adba6b18449",
"text": "This study investigates how customers perceive and adopt Internet Banking (IB) in Hong Kong. We developed a theoretical model based on the Technology Acceptance Model (TAM) with an added construct Perceived Web Security, and empirically tested its ability in predicting customers’ behavioral intention of adopting IB. We designed a questionnaire and used it to survey a randomly selected sample of customers of IB from the Yellow Pages, and obtained 203 usable responses. We analyzed the data using Structured Equation Modeling (SEM) to evaluate the strength of the hypothesized relationships, if any, among the constructs, which include Perceived Ease of Use and Perceived Web Security as independent variables, Perceived Usefulness and Attitude as intervening variables, and Intention to Use as the dependent variable. The results provide support of the extended TAM model and confirm its robustness in predicting customers’ intention of adoption of IB. This study contributes to the literature by formulating and validating TAM to predict IB adoption, and its findings provide useful information for bank management in formulating IB marketing strategies.",
"title": ""
},
{
"docid": "83f1fc22d029b3a424afcda770a5af23",
"text": "Three species of Xerolycosa: Xerolycosa nemoralis (Westring, 1861), Xerolycosa miniata (C.L. Koch, 1834) and Xerolycosa mongolica (Schenkel, 1963), occurring in the Palaearctic Region are surveyed, illustrated and redescribed. Arctosa mongolica Schenkel, 1963 is removed from synonymy with Xerolycosa nemoralis and transferred to Xerolycosa, and the new combination Xerolycosa mongolica (Schenkel, 1963) comb. n. is established. One new synonymy, Xerolycosa undulata Chen, Song et Kim, 1998 syn.n. from Heilongjiang = Xerolycosa mongolica (Schenkel, 1963), is proposed. In addition, one more new combination is established, Trochosa pelengena (Roewer, 1960) comb. n., ex Xerolycosa.",
"title": ""
},
{
"docid": "2a09022e79f1d9b9eed405e0b92245f4",
"text": "This paper considers a category of rogue access points (APs) that pretend to be legitimate APs to lure users to connect to them. We propose a practical timing based technique that allows the user to avoid connecting to rogue APs. Our method employs the round trip time between the user and the DNS server to independently determine whether an AP is legitimate or not without assistance from the WLAN operator. We implemented our detection technique on commercially available wireless cards to evaluate their performance.",
"title": ""
}
] |
scidocsrr
|
ab9d3b2d479121643c7f690057cbb60a
|
Sentiment Analysis in Social Media Texts
|
[
{
"docid": "52a5f4c15c1992602b8fe21270582cc6",
"text": "This paper proposes a new algorithm for training support vector machines: Sequential Minimal Optimization, or SMO. Training a support vector machine requires the solution of a very large quadratic programming (QP) optimization problem. SMO breaks this large QP problem into a series of smallest possible QP problems. These small QP problems are solved analytically, which avoids using a time-consuming numerical QP optimization as an inner loop. The amount of memory required for SMO is linear in the training set size, which allows SMO to handle very large training sets. Because matrix computation is avoided, SMO scales somewhere between linear and quadratic in the training set size for various test problems, while the standard chunking SVM algorithm scales somewhere between linear and cubic in the training set size. SMO’s computation time is dominated by SVM evaluation, hence SMO is fastest for linear SVMs and sparse data sets. On realworld sparse data sets, SMO can be more than 1000 times faster than the chunking algorithm.",
"title": ""
},
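The passage above attributes SMO's efficiency to solving the smallest possible sub-problems — two Lagrange multipliers at a time — in closed form. The sketch below is a toy rendition of that inner loop (a simplified SMO with a linear kernel and a randomly chosen second multiplier, omitting Platt's working-set heuristics and caching), included only to make the analytic two-variable update concrete:

```python
import numpy as np

def simplified_smo(X, y, C=1.0, tol=1e-3, max_passes=5, seed=0):
    """Toy SMO for a linear-kernel soft-margin SVM.
    X: (n, d) array of features; y: (n,) array of labels in {-1, +1}.
    Returns the primal weight vector w and bias b."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    K = X @ X.T                                   # linear kernel matrix
    alpha, b, passes = np.zeros(n), 0.0, 0
    while passes < max_passes:
        changed = 0
        for i in range(n):
            Ei = (alpha * y) @ K[:, i] + b - y[i]
            if (y[i] * Ei < -tol and alpha[i] < C) or (y[i] * Ei > tol and alpha[i] > 0):
                j = (i + 1 + rng.integers(n - 1)) % n      # any index j != i
                Ej = (alpha * y) @ K[:, j] + b - y[j]
                ai_old, aj_old = alpha[i], alpha[j]
                if y[i] != y[j]:                  # box constraints for the pair
                    L, H = max(0.0, aj_old - ai_old), min(C, C + aj_old - ai_old)
                else:
                    L, H = max(0.0, ai_old + aj_old - C), min(C, ai_old + aj_old)
                eta = 2 * K[i, j] - K[i, i] - K[j, j]
                if L == H or eta >= 0:
                    continue
                # Analytic update of the second multiplier, clipped to [L, H].
                alpha[j] = np.clip(aj_old - y[j] * (Ei - Ej) / eta, L, H)
                if abs(alpha[j] - aj_old) < 1e-5:
                    continue
                alpha[i] = ai_old + y[i] * y[j] * (aj_old - alpha[j])
                # Refresh the bias so the KKT conditions hold for the changed pair.
                b1 = b - Ei - y[i] * (alpha[i] - ai_old) * K[i, i] - y[j] * (alpha[j] - aj_old) * K[i, j]
                b2 = b - Ej - y[i] * (alpha[i] - ai_old) * K[i, j] - y[j] * (alpha[j] - aj_old) * K[j, j]
                b = b1 if 0 < alpha[i] < C else b2 if 0 < alpha[j] < C else (b1 + b2) / 2
                changed += 1
        passes = passes + 1 if changed == 0 else 0
    return (alpha * y) @ X, b
```

Production solvers such as LIBSVM (the backend of scikit-learn's SVC) add the working-set selection heuristics and caching needed to realize SMO's speed on large, sparse data sets.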
{
"docid": "4ef6adf0021e85d9bf94079d776d686d",
"text": "Recent years have brought a significant growth in the volume of research in sentiment analysis, mostly on highly subjective text types (movie or product reviews). The main difference these texts have with news articles is that their target is clearly defined and unique across the text. Following different annotation efforts and the analysis of the issues encountered, we realised that news opinion mining is different from that of other text types. We identified three subtasks that need to be addressed: definition of the target; separation of the good and bad news content from the good and bad sentiment expressed on the target; and analysis of clearly marked opinion that is expressed explicitly, not needing interpretation or the use of world knowledge. Furthermore, we distinguish three different possible views on newspaper articles – author, reader and text, which have to be addressed differently at the time of analysing sentiment. Given these definitions, we present work on mining opinions about entities in English language news, in which (a) we test the relative suitability of various sentiment dictionaries and (b) we attempt to separate positive or negative opinion from good or bad news. In the experiments described here, we tested whether or not subject domain-defining vocabulary should be ignored. Results showed that this idea is more appropriate in the context of news opinion mining and that the approaches taking this into consideration produce a better performance.",
"title": ""
}
] |
[
{
"docid": "0b117f379a32b0ba4383c71a692405c8",
"text": "Today’s educational policies are largely devoted to fostering the development and implementation of computer applications in education. This paper analyses the skills and competences needed for the knowledgebased society and reveals the role and impact of using computer applications to the teaching and learning processes. Also, the aim of this paper is to reveal the outcomes of a study conducted in order to determine the impact of using computer applications in teaching and learning Management and to propose new opportunities for the process improvement. The findings of this study related to the teachers’ and students’ perceptions about using computer applications for teaching and learning could open further researches on computer applications in education and their educational and economic implications.",
"title": ""
},
{
"docid": "656baf66e6dd638d9f48ea621593bac3",
"text": "Recent evidence suggests that a particular gut microbial community may favour occurrence of the metabolic diseases. Recently, we reported that high-fat (HF) feeding was associated with higher endotoxaemia and lower Bifidobacterium species (spp.) caecal content in mice. We therefore tested whether restoration of the quantity of caecal Bifidobacterium spp. could modulate metabolic endotoxaemia, the inflammatory tone and the development of diabetes. Since bifidobacteria have been reported to reduce intestinal endotoxin levels and improve mucosal barrier function, we specifically increased the gut bifidobacterial content of HF-diet-fed mice through the use of a prebiotic (oligofructose [OFS]). Compared with normal chow-fed control mice, HF feeding significantly reduced intestinal Gram-negative and Gram-positive bacteria including levels of bifidobacteria, a dominant member of the intestinal microbiota, which is seen as physiologically positive. As expected, HF-OFS-fed mice had totally restored quantities of bifidobacteria. HF-feeding significantly increased endotoxaemia, which was normalised to control levels in HF-OFS-treated mice. Multiple-correlation analyses showed that endotoxaemia significantly and negatively correlated with Bifidobacterium spp., but no relationship was seen between endotoxaemia and any other bacterial group. Finally, in HF-OFS-treated-mice, Bifidobacterium spp. significantly and positively correlated with improved glucose tolerance, glucose-induced insulin secretion and normalised inflammatory tone (decreased endotoxaemia, plasma and adipose tissue proinflammatory cytokines). Together, these findings suggest that the gut microbiota contribute towards the pathophysiological regulation of endotoxaemia and set the tone of inflammation for occurrence of diabetes and/or obesity. Thus, it would be useful to develop specific strategies for modifying gut microbiota in favour of bifidobacteria to prevent the deleterious effect of HF-diet-induced metabolic diseases.",
"title": ""
},
{
"docid": "b5fea029d64084089de8e17ae9debffc",
"text": "While there has been increasing interest in the task of describing video with natural language, current computer vision algorithms are still severely limited in terms of the variability and complexity of the videos and their associated language that they can recognize. This is in part due to the simplicity of current benchmarks, which mostly focus on specific fine-grained domains with limited videos and simple descriptions. While researchers have provided several benchmark datasets for image captioning, we are not aware of any large-scale video description dataset with comprehensive categories yet diverse video content. In this paper we present MSR-VTT (standing for \"MSRVideo to Text\") which is a new large-scale video benchmark for video understanding, especially the emerging task of translating video to text. This is achieved by collecting 257 popular queries from a commercial video search engine, with 118 videos for each query. In its current version, MSR-VTT provides 10K web video clips with 41.2 hours and 200K clip-sentence pairs in total, covering the most comprehensive categories and diverse visual content, and representing the largest dataset in terms of sentence and vocabulary. Each clip is annotated with about 20 natural sentences by 1,327 AMT workers. We present a detailed analysis of MSR-VTT in comparison to a complete set of existing datasets, together with a summarization of different state-of-the-art video-to-text approaches. We also provide an extensive evaluation of these approaches on this dataset, showing that the hybrid Recurrent Neural Networkbased approach, which combines single-frame and motion representations with soft-attention pooling strategy, yields the best generalization capability on MSR-VTT.",
"title": ""
},
{
"docid": "c089e788b5cfda6c4a7f518af668bc3a",
"text": "The selection of hyper-parameters is critical in Deep Learning. Because of the long training time of complex models and the availability of compute resources in the cloud, “one-shot” optimization schemes – where the sets of hyper-parameters are selected in advance (e.g. on a grid or in a random manner) and the training is executed in parallel – are commonly used. [1] show that grid search is sub-optimal, especially when only a few critical parameters matter, and suggest to use random search instead. Yet, random search can be “unlucky” and produce sets of values that leave some part of the domain unexplored. Quasi-random methods, such as Low Discrepancy Sequences (LDS) avoid these issues. We show that such methods have theoretical properties that make them appealing for performing hyperparameter search, and demonstrate that, when applied to the selection of hyperparameters of complex Deep Learning models (such as state-of-the-art LSTM language models and image classification models), they yield suitable hyperparameters values with much fewer runs than random search. We propose a particularly simple LDS method which can be used as a drop-in replacement for grid/random search in any Deep Learning pipeline, both as a fully one-shot hyperparameter search or as an initializer in iterative batch optimization.",
"title": ""
},
{
"docid": "d1afaada6bf5927d9676cee61d3a1d49",
"text": "t-Closeness is a privacy model recently defined for data anonymization. A data set is said to satisfy t-closeness if, for each group of records sharing a combination of key attributes, the distance between the distribution of a confidential attribute in the group and the distribution of the attribute in the entire data set is no more than a threshold t. Here, we define a privacy measure in terms of information theory, similar to t-closeness. Then, we use the tools of that theory to show that our privacy measure can be achieved by the postrandomization method (PRAM) for masking in the discrete case, and by a form of noise addition in the general case.",
"title": ""
},
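In the usual notation (assumed here), the requirement paraphrased above reads: an anonymized table satisfies $t$-closeness if, for every group $E$ of records sharing a combination of key (quasi-identifier) attributes,

$$ d\big(P_E,\; P\big) \;\le\; t, $$

where $P_E$ and $P$ are the distributions of the confidential attribute within $E$ and over the entire data set, and $d$ is a chosen distance between distributions. The passage's contribution is to define a related, information-theoretic instance of this measure and to show it can be attained through PRAM in the discrete case and a form of noise addition in general.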
{
"docid": "af6f5ef41a3737975893f95796558900",
"text": "In this work, we propose a multi-task convolutional neural network learning approach that can simultaneously perform iris localization and presentation attack detection (PAD). The proposed multi-task PAD (MT-PAD) is inspired by an object detection method which directly regresses the parameters of the iris bounding box and computes the probability of presentation attack from the input ocular image. Experiments involving both intra-sensor and cross-sensor scenarios suggest that the proposed method can achieve state-of-the-art results on publicly available datasets. To the best of our knowledge, this is the first work that performs iris detection and iris presentation attack detection simultaneously.",
"title": ""
},
{
"docid": "e7586aea8381245cfa07239158d115af",
"text": "The interpolation, prediction, and feature analysis of fine-gained air quality are three important topics in the area of urban air computing. The solutions to these topics can provide extremely useful information to support air pollution control, and consequently generate great societal and technical impacts. Most of the existing work solves the three problems separately by different models. In this paper, we propose a general and effective approach to solve the three problems in one model called the Deep Air Learning (DAL). The main idea of DAL lies in embedding feature selection and semi-supervised learning in different layers of the deep learning network. The proposed approach utilizes the information pertaining to the unlabeled spatio-temporal data to improve the performance of the interpolation and the prediction, and performs feature selection and association analysis to reveal the main relevant features to the variation of the air quality. We evaluate our approach with extensive experiments based on real data sources obtained in Beijing, China. Experiments show that DAL is superior to the peer models from the recent literature when solving the topics of interpolation, prediction, and feature analysis of fine-gained air quality.",
"title": ""
},
{
"docid": "7f75e0b789e7b2bbaa47c7fa06efb852",
"text": "A significant increase in the capability for controlling motion dynamics in key frame animation is achieved through skeleton control. This technique allows an animator to develop a complex motion sequence by animating a stick figure representation of an image. This control sequence is then used to drive an image sequence through the same movement. The simplicity of the stick figure image encourages a high level of interaction during the design stage. Its compatibility with the basic key frame animation technique permits skeleton control to be applied selectively to only those components of a composite image sequence that require enhancement.",
"title": ""
},
{
"docid": "e8a2ef4ded8ba4fa2e36588015c2c61a",
"text": "The interdisciplinary character of Bio-Inspired Design (BID) has resulted in a plethora of approaches and methods that propose different types of design processes. Although sustainable, creative and complex system design processes are not mutually incompatible they do focus on different aspects of design. This research defines areas of focus for the development of computational tools to support biomimetics, technical problem solving through abstraction, transfer and application of knowledge from biological models. An overview of analysed literature is provided as well as a qualitative analysis of the main themes found in BID literature. The result is a set of recommendations for further research on Computer-Aided Biomimetics (CAB).",
"title": ""
},
{
"docid": "d4ac52a52e780184359289ecb41e321e",
"text": "Interleaving is an increasingly popular technique for evaluating information retrieval systems based on implicit user feedback. While a number of isolated studies have analyzed how this technique agrees with conventional offline evaluation approaches and other online techniques, a complete picture of its efficiency and effectiveness is still lacking. In this paper we extend and combine the body of empirical evidence regarding interleaving, and provide a comprehensive analysis of interleaving using data from two major commercial search engines and a retrieval system for scientific literature. In particular, we analyze the agreement of interleaving with manual relevance judgments and observational implicit feedback measures, estimate the statistical efficiency of interleaving, and explore the relative performance of different interleaving variants. We also show how to learn improved credit-assignment functions for clicks that further increase the sensitivity of interleaving.",
"title": ""
},
{
"docid": "2ec973e31082953bd743dc659f417645",
"text": "Object detection, including objectness detection (OD), salient object detection (SOD), and category-specific object detection (COD), is one of the most fundamental yet challenging problems in the computer vision community. Over the last several decades, great efforts have been made by researchers to tackle this problem, due to its broad range of applications for other computer vision tasks such as activity or event recognition, content-based image retrieval and scene understanding, etc. While numerous methods have been presented in recent years, a comprehensive review for the proposed high-quality object detection techniques, especially for those based on advanced deep-learning techniques, is still lacking. To this end, this article delves into the recent progress in this research field, including 1) definitions, motivations, and tasks of each subdirection; 2) modern techniques and essential research trends; 3) benchmark data sets and evaluation metrics; and 4) comparisons and analysis of the experimental results. More importantly, we will reveal the underlying relationship among OD, SOD, and COD and discuss in detail some open questions as well as point out several unsolved challenges and promising future works.",
"title": ""
},
{
"docid": "9c38fcfcbfeaf0072e723bd7e1e7d17d",
"text": "BACKGROUND\nAllicin (diallylthiosulfinate) is the major volatile- and antimicrobial substance produced by garlic cells upon wounding. We tested the hypothesis that allicin affects membrane function and investigated 1) betanine pigment leakage from beetroot (Beta vulgaris) tissue, 2) the semipermeability of the vacuolar membrane of Rhoeo discolor cells, 3) the electrophysiology of plasmalemma and tonoplast of Chara corallina and 4) electrical conductivity of artificial lipid bilayers.\n\n\nMETHODS\nGarlic juice and chemically synthesized allicin were used and betanine loss into the medium was monitored spectrophotometrically. Rhoeo cells were studied microscopically and Chara- and artificial membranes were patch clamped.\n\n\nRESULTS\nBeet cell membranes were approximately 200-fold more sensitive to allicin on a mol-for-mol basis than to dimethyl sulfoxide (DMSO) and approximately 400-fold more sensitive to allicin than to ethanol. Allicin-treated Rhoeo discolor cells lost the ability to plasmolyse in an osmoticum, confirming that their membranes had lost semipermeability after allicin treatment. Furthermore, allicin and garlic juice diluted in artificial pond water caused an immediate strong depolarization, and a decrease in membrane resistance at the plasmalemma of Chara, and caused pore formation in the tonoplast and artificial lipid bilayers.\n\n\nCONCLUSIONS\nAllicin increases the permeability of membranes.\n\n\nGENERAL SIGNIFICANCE\nSince garlic is a common foodstuff the physiological effects of its constituents are important. Allicin's ability to permeabilize cell membranes may contribute to its antimicrobial activity independently of its activity as a thiol reagent.",
"title": ""
},
{
"docid": "f2fa4fa43c21e8c65c752d6ad1d39d06",
"text": "Singing voice synthesis techniques have been proposed based on a hidden Markov model (HMM). In these approaches, the spectrum, excitation, and duration of singing voices are simultaneously modeled with context-dependent HMMs and waveforms are generated from the HMMs themselves. However, the quality of the synthesized singing voices still has not reached that of natural singing voices. Deep neural networks (DNNs) have largely improved on conventional approaches in various research areas including speech recognition, image recognition, speech synthesis, etc. The DNN-based text-to-speech (TTS) synthesis can synthesize high quality speech. In the DNN-based TTS system, a DNN is trained to represent the mapping function from contextual features to acoustic features, which are modeled by decision tree-clustered context dependent HMMs in the HMM-based TTS system. In this paper, we propose singing voice synthesis based on a DNN and evaluate its effectiveness. The relationship between the musical score and its acoustic features is modeled in frames by a DNN. For the sparseness of pitch context in a database, a musical-note-level pitch normalization and linear-interpolation techniques are used to prepare the excitation features. Subjective experimental results show that the DNN-based system outperformed the HMM-based system in terms of naturalness.",
"title": ""
},
{
"docid": "dbc463f080610e2ec1cf1841772d1d92",
"text": "Malware is one of the greatest and most rapidly growing threats to the digital world. Traditional signature-based detection is no longer adequate to detect new variants and highly targeted malware. Furthermore, dynamic detection is often circumvented with anti-VM and/or anti-debugger techniques. Recently heuristic approaches have been explored to enhance detection accuracy while maintaining the generality of a model to detect unknown malware samples. In this paper, we investigate three feature types extracted from memory images - registry activity, imported libraries, and API function calls. After evaluating the importance of the different features, different machine learning techniques are implemented to compare performances of malware detection using the three feature types, respectively. The highest accuracy achieved was 96%, and was reached using a support vector machine model, fitted on data extracted from registry activity.",
"title": ""
},
{
"docid": "23d42976a9651203e0d4dd1c332234ae",
"text": "BACKGROUND\nStatistics play a critical role in biological and clinical research. However, most reports of scientific results in the published literature make it difficult for the reader to reproduce the statistical analyses performed in achieving those results because they provide inadequate documentation of the statistical tests and algorithms applied. The Ontology of Biological and Clinical Statistics (OBCS) is put forward here as a step towards solving this problem.\n\n\nRESULTS\nThe terms in OBCS including 'data collection', 'data transformation in statistics', 'data visualization', 'statistical data analysis', and 'drawing a conclusion based on data', cover the major types of statistical processes used in basic biological research and clinical outcome studies. OBCS is aligned with the Basic Formal Ontology (BFO) and extends the Ontology of Biomedical Investigations (OBI), an OBO (Open Biological and Biomedical Ontologies) Foundry ontology supported by over 20 research communities. Currently, OBCS comprehends 878 terms, representing 20 BFO classes, 403 OBI classes, 229 OBCS specific classes, and 122 classes imported from ten other OBO ontologies. We discuss two examples illustrating how the ontology is being applied. In the first (biological) use case, we describe how OBCS was applied to represent the high throughput microarray data analysis of immunological transcriptional profiles in human subjects vaccinated with an influenza vaccine. In the second (clinical outcomes) use case, we applied OBCS to represent the processing of electronic health care data to determine the associations between hospital staffing levels and patient mortality. Our case studies were designed to show how OBCS can be used for the consistent representation of statistical analysis pipelines under two different research paradigms. Other ongoing projects using OBCS for statistical data processing are also discussed. The OBCS source code and documentation are available at: https://github.com/obcs/obcs .\n\n\nCONCLUSIONS\nThe Ontology of Biological and Clinical Statistics (OBCS) is a community-based open source ontology in the domain of biological and clinical statistics. OBCS is a timely ontology that represents statistics-related terms and their relations in a rigorous fashion, facilitates standard data analysis and integration, and supports reproducible biological and clinical research.",
"title": ""
},
{
"docid": "9c5a32c49d3e9eff842f155f99facd08",
"text": "Urdu is morphologically rich language with different nature of its characters. Urdu text tokenization and sentence boundary disambiguation is difficult as compared to the language like English. Major hurdle for tokenization is improper use of space between words, where as absence of case discrimination makes the sentence boundary detection a difficult task. In this paper some issues regarding both of these language processing tasks have been identified.",
"title": ""
},
{
"docid": "51fc49d6196702f87e7dae215fa93108",
"text": "Automatic classification of cancer lesions in tissues observed using gastroenterology imaging is a non-trivial pattern recognition task involving filtering, segmentation, feature extraction and classification. In this paper we measure the impact of a variety of segmentation algorithms (mean shift, normalized cuts, level-sets) on the automatic classification performance of gastric tissue into three classes: cancerous, pre-cancerous and normal. Classification uses a combination of color (hue-saturation histograms) and texture (local binary patterns) features, applied to two distinct imaging modalities: chromoendoscopy and narrow-band imaging. Results show that mean-shift obtains an interesting performance for both scenarios producing low classification degradations (6%), full image classification is highly inaccurate reinforcing the importance of segmentation research for Gastroenterology, and confirm that Patch Index is an interesting measure of the classification potential of small to medium segmented regions.",
"title": ""
},
{
"docid": "db6e3742a0413ad5f44647ab1826b796",
"text": "Endometrial stromal sarcoma is a rare tumor and has unique histopathologic features. Most tumors of this kind occur in the uterus; thus, the vagina is an extremely rare site. A 34-year-old woman presented with endometrial stromal sarcoma arising in the vagina. No correlative endometriosis was found. Because of the uncommon location, this tumor was differentiated from other more common neoplasms of the vagina, particularly embryonal rhabdomyosarcoma and other smooth muscle tumors. Although the pathogenesis of endometrial stromal tumors remains controversial, the most common theory of its origin is heterotopic Müllerian tissue such as endometriosis tissue. Primitive cells of the pelvis and retroperitoneum are an alternative possible origin for the tumor if endometriosis is not present. According to the literature, the tumor has a fairly good prognosis compared with other vaginal sarcomas. Surgery combined with adjuvant radiotherapy appears to be an adequate treatment.",
"title": ""
},
{
"docid": "51743d233ec269cfa7e010d2109e10a6",
"text": "Stress is a part of every life to varying degrees, but individuals differ in their stress vulnerability. Stress is usefully viewed from a biological perspective; accordingly, it involves activation of neurobiological systems that preserve viability through change or allostasis. Although they are necessary for survival, frequent neurobiological stress responses increase the risk of physical and mental health problems, perhaps particularly when experienced during periods of rapid brain development. Recently, advances in noninvasive measurement techniques have resulted in a burgeoning of human developmental stress research. Here we review the anatomy and physiology of stress responding, discuss the relevant animal literature, and briefly outline what is currently known about the psychobiology of stress in human development, the critical role of social regulation of stress neurobiology, and the importance of individual differences as a lens through which to approach questions about stress experiences during development and child outcomes.",
"title": ""
},
{
"docid": "6ef244a7eb6a5df025e282e1cc5f90aa",
"text": "Public infrastructure-as-a-service clouds, such as Amazon EC2 and Microsoft Azure allow arbitrary clients to run virtual machines (VMs) on shared physical infrastructure. This practice of multi-tenancy brings economies of scale, but also introduces the threat of malicious VMs abusing the scheduling of shared resources. Recent works have shown how to mount crossVM side-channel attacks to steal cryptographic secrets. The straightforward solution is hard isolation that dedicates hardware to each VM. However, this comes at the cost of reduced efficiency. We investigate the principle of soft isolation: reduce the risk of sharing through better scheduling. With experimental measurements, we show that a minimum run time (MRT) guarantee for VM virtual CPUs that limits the frequency of preemptions can effectively prevent existing Prime+Probe cache-based side-channel attacks. Through experimental measurements, we find that the performance impact of MRT guarantees can be very low, particularly in multi-core settings. Finally, we integrate a simple per-core CPU state cleansing mechanism, a form of hard isolation, into Xen. It provides further protection against side-channel attacks at little cost when used in conjunction with an MRT guarantee.",
"title": ""
}
] |
scidocsrr
|
414da789ccfd24d93314bce839acafaa
|
Predicting player churn in destiny: A Hidden Markov models approach to predicting player departure in a major online game
|
[
{
"docid": "7dfb6a3a619f7062452aa97aaa134c45",
"text": "Most companies favour the creation and nurturing of long-term relationships with customers because retaining customers is more profitable than acquiring new ones. Churn prediction is a predictive analytics technique to identify churning customers ahead of their departure and enable customer relationship managers to take action to keep them. This work evaluates the development of an expert system for churn prediction and prevention using a Hidden Markov model (HMM). A HMM is implemented on unique data from a mobile application and its predictive performance is compared to other algorithms that are commonly used for churn prediction: Logistic Regression, Neural Network and Support Vector Machine. Predictive performance of the HMM is not outperformed by the other algorithms. HMM has substantial advantages for use in expert systems though due to low storage and computational requirements and output of highly relevant customer motivational states. Generic session data of the mobile app is used to train and test the models which makes the system very easy to deploy and the findings applicable to the whole ecosystem of mobile apps distributed in Apple's App and Google's Play Store.",
"title": ""
},
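To make the modelling step above concrete, the sketch below fits a Gaussian HMM to per-player session sequences and reads its hidden states as latent engagement levels. Everything in it is an assumption for illustration: hmmlearn as the library, the two session-level features, the toy data, and the two-state interpretation are not taken from the papers.

```python
import numpy as np
from hmmlearn import hmm   # assumed library choice, not necessarily the papers' tooling

# Hypothetical per-session features, one sequence per player:
# [session_length_minutes, actions_per_minute].
player_sequences = [
    np.array([[35, 1.4], [30, 1.2], [22, 0.9], [12, 0.5], [6, 0.2]]),   # activity tapering off
    np.array([[40, 1.5], [42, 1.6], [45, 1.7], [44, 1.6], [50, 1.8]]),  # stable engagement
]
X = np.vstack(player_sequences)
lengths = [len(s) for s in player_sequences]

# Two hidden states, read here as latent "engaged" / "disengaging" levels.
model = hmm.GaussianHMM(n_components=2, covariance_type="diag", n_iter=50, random_state=0)
model.fit(X, lengths)

# Decoded state paths; drifting into the low-activity state is treated
# as an early churn signal for the corresponding player.
for seq, name in zip(player_sequences, ["player_A", "player_B"]):
    print(name, model.predict(seq))
```

A churn score can then be derived from the probability of occupying, or transitioning towards, the low-engagement state over a player's recent sessions, which is the kind of motivational-state output the passage highlights as an advantage of HMMs.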
{
"docid": "74959e138f7defce9bf7df2198b46a90",
"text": "In the game industry, especially for free to play games, player retention and purchases are important issues. There have been several approaches investigated towards predicting them by players' behaviours during game sessions. However, most current methods are only available for specific games because the data representations utilised are usually game specific. This work intends to use frequency of game events as data representations to predict both players' disengagement from game and the decisions of their first purchases. This method is able to provide better generality because events exist in every game and no knowledge of any event but their frequency is needed. In addition, this event frequency based method will also be compared with a recent work by Runge et al. [1] in terms of disengagement prediction.",
"title": ""
}
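Since the representation described above is nothing more than the frequency of each event type, the feature construction is game-agnostic and very short; the sketch below uses hypothetical event names, hypothetical labels, and scikit-learn as an assumed classifier choice:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical raw event log: one row per in-game event, from any game.
log = pd.DataFrame({
    "player_id": [1, 1, 1, 1, 2, 2, 2, 3, 3, 3, 3, 3],
    "event": ["login", "match_start", "match_start", "logout",
              "login", "store_view", "logout",
              "login", "match_start", "match_start", "store_view", "logout"],
})

# Game-agnostic representation: relative frequency of each event type per player.
features = pd.crosstab(log["player_id"], log["event"], normalize="index")

# Hypothetical labels: 1 = the player later disengaged (or made a first purchase,
# depending on which of the two prediction targets is being modelled).
labels = pd.Series([1, 0, 0], index=features.index)

clf = LogisticRegression(max_iter=1000).fit(features, labels)
print(clf.predict_proba(features)[:, 1])   # predicted probability per player
```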
] |
[
{
"docid": "625c5c89b9f0001a3eed1ec6fb498c23",
"text": "About a 100 years ago, the Drosophila white mutant marked the birth of Drosophila genetics. The white gene turned out to encode the first well studied ABC transporter in arthropods. The ABC gene family is now recognized as one of the largest transporter families in all kingdoms of life. The majority of ABC proteins function as primary-active transporters that bind and hydrolyze ATP while transporting a large diversity of substrates across lipid membranes. Although extremely well studied in vertebrates for their role in drug resistance, less is known about the role of this family in the transport of endogenous and exogenous substances in arthropods. The ABC families of five insect species, a crustacean and a chelicerate have been annotated in some detail. We conducted a thorough phylogenetic analysis of the seven arthropod and human ABC protein subfamilies, to infer orthologous relationships that might suggest conserved function. Most orthologous relationships were found in the ABCB half transporter, ABCD, ABCE and ABCF subfamilies, but specific expansions within species and lineages are frequently observed and discussed. We next surveyed the role of ABC transporters in the transport of xenobiotics/plant allelochemicals and their involvement in insecticide resistance. The involvement of ABC transporters in xenobiotic resistance in arthropods is historically not well documented, but an increasing number of studies using unbiased differential gene expression analysis now points to their importance. We give an overview of methods that can be used to link ABC transporters to resistance. ABC proteins have also recently been implicated in the mode of action and resistance to Bt toxins in Lepidoptera. Given the enormous interest in Bt toxicology in transgenic crops, such findings will provide an impetus to further reveal the role of ABC transporters in arthropods. 2014 The Authors. Published by Elsevier Ltd. Open access under CC BY-NC-ND license.",
"title": ""
},
{
"docid": "fb11b937a3c07fd4b76cda1ed1eadc07",
"text": "Depth information plays an important role in a variety of applications, including manufacturing, medical imaging, computer vision, graphics, and virtual/augmented reality (VR/AR). Depth sensing has thus attracted sustained attention from both academia and industry communities for decades. Mainstream depth cameras can be divided into three categories: stereo, time of flight (ToF), and structured light. Stereo cameras require no active illumination and can be used outdoors, but they are fragile for homogeneous surfaces. Recently, off-the-shelf light field cameras have demonstrated improved depth estimation capability with a multiview stereo configuration. ToF cameras operate at a high frame rate and fit time-critical scenarios well, but they are susceptible to noise and limited to low resolution [3]. Structured light cameras can produce high-resolution, high-accuracy depth, provided that a number of patterns are sequentially used. Due to its promising and reliable performance, the structured light approach has been widely adopted for three-dimensional (3-D) scanning purposes. However, achieving real-time depth with structured light either requires highspeed (and thus expensive) hardware or sacrifices depth resolution and accuracy by using a single pattern instead.",
"title": ""
},
{
"docid": "1819af3b3d96c182b7ea8a0e89ba5bbe",
"text": "The fingerprint is one of the oldest and most widely used biometric modality for person identification. Existing automatic fingerprint matching systems perform well when the same sensor is used for both enrollment and verification (regular matching). However, their performance significantly deteriorates when different sensors are used (cross-matching, fingerprint sensor interoperability problem). We propose an automatic fingerprint verification method to solve this problem. It was observed that the discriminative characteristics among fingerprints captured with sensors of different technology and interaction types are ridge orientations, minutiae, and local multi-scale ridge structures around minutiae. To encode this information, we propose two minutiae-based descriptors: histograms of gradients obtained using a bank of Gabor filters and binary gradient pattern descriptors, which encode multi-scale local ridge patterns around minutiae. In addition, an orientation descriptor is proposed, which compensates for the spurious and missing minutiae problem. The scores from the three descriptors are fused using a weighted sum rule, which scales each score according to its verification performance. Extensive experiments were conducted using two public domain benchmark databases (FingerPass and Multi-Sensor Optical and Latent Fingerprint) to show the effectiveness of the proposed system. The results showed that the proposed system significantly outperforms the state-of-the-art methods based on minutia cylinder-code (MCC), MCC with scale, VeriFinger—a commercial SDK, and a thin-plate spline model.",
"title": ""
},
{
"docid": "58164220c13b39eb5d2ca48139d45401",
"text": "There is general agreement that structural similarity — a match in relational structure — is crucial in analogical processing. However, theories differ in their definitions of structural similarity: in particular, in whether there must be conceptual similarity between the relations in the two domains or whether parallel graph structure is sufficient. In two studies, we demonstrate, first, that people draw analogical correspondences based on matches in conceptual relations, rather than on purely structural graph matches; and, second, that people draw analogical inferences between passages that have matching conceptual relations, but not between passages with purely structural graph matches.",
"title": ""
},
{
"docid": "a0eae0ebbec4dc6ee339b25286a8492a",
"text": "We present a visual recognition system for fine-grained visual categorization. The system is composed of a human and a machine working together and combines the complementary strengths of computer vision algorithms and (non-expert) human users. The human users provide two heterogeneous forms of information object part clicks and answers to multiple choice questions. The machine intelligently selects the most informative question to pose to the user in order to identify the object class as quickly as possible. By leveraging computer vision and analyzing the user responses, the overall amount of human effort required, measured in seconds, is minimized. Our formalism shows how to incorporate many different types of computer vision algorithms into a human-in-the-loop framework, including standard multiclass methods, part-based methods, and localized multiclass and attribute methods. We explore our ideas by building a field guide for bird identification. The experimental results demonstrate the strength of combining ignorant humans with poor-sighted machines the hybrid system achieves quick and accurate bird identification on a dataset containing 200 bird species.",
"title": ""
},
{
"docid": "7ea3d3002506e0ea6f91f4bdab09c2d5",
"text": "We propose a novel and robust computational framework for automatic detection of deformed 2D wallpaper patterns in real-world images. The theory of 2D crystallographic groups provides a sound and natural correspondence between the underlying lattice of a deformed wallpaper pattern and a degree-4 graphical model. We start the discovery process with unsupervised clustering of interest points and voting for consistent lattice unit proposals. The proposed lattice basis vectors and pattern element contribute to the pairwise compatibility and joint compatibility (observation model) functions in a Markov random field (MRF). Thus, we formulate the 2D lattice detection as a spatial, multitarget tracking problem, solved within an MRF framework using a novel and efficient mean-shift belief propagation (MSBP) method. Iterative detection and growth of the deformed lattice are interleaved with regularized thin-plate spline (TPS) warping, which rectifies the current deformed lattice into a regular one to ensure stability of the MRF model in the next round of lattice recovery. We provide quantitative comparisons of our proposed method with existing algorithms on a diverse set of 261 real-world photos to demonstrate significant advances in accuracy and speed over the state of the art in automatic discovery of regularity in real images.",
"title": ""
},
{
"docid": "ef2e7ca89c1b52b4a462a2d38b60fa02",
"text": "Candidate phylum OD1 bacteria (also referred to as Parcubacteria) have been identified in a broad range of anoxic environments through community survey analysis. Although none of these species have been isolated in the laboratory, several genome sequences have been reconstructed from metagenomic sequence data and single-cell sequencing. The organisms have small (generally <1 Mb) genomes with severely reduced metabolic capabilities. We have reconstructed 8 partial to near-complete OD1 genomes from oxic groundwater samples, and compared them against existing genomic data. The conserved core gene set comprises 202 genes, or ~28% of the genomic complement. \"Housekeeping\" genes and genes for biosynthesis of peptidoglycan and Type IV pilus production are conserved. Gene sets for biosynthesis of cofactors, amino acids, nucleotides, and fatty acids are absent entirely or greatly reduced. The only aspects of energy metabolism conserved are the non-oxidative branch of the pentose-phosphate shunt and central glycolysis. These organisms also lack some activities conserved in almost all other known bacterial genomes, including signal recognition particle, pseudouridine synthase A, and FAD synthase. Pan-genome analysis indicates a broad genotypic diversity and perhaps a highly fluid gene complement, indicating historical adaptation to a wide range of growth environments and a high degree of specialization. The genomes were examined for signatures suggesting either a free-living, streamlined lifestyle, or a symbiotic lifestyle. The lack of biosynthetic capabilities and DNA repair, along with the presence of potential attachment and adhesion proteins suggest that the Parcubacteria are ectosymbionts or parasites of other organisms. The wide diversity of genes that potentially mediate cell-cell contact suggests a broad range of partner/prey organisms across the phylum.",
"title": ""
},
{
"docid": "316dfc9683a98e39a08481622acccf1a",
"text": "A wearable probe-fed microstrip antenna manufactured from conductive textile fabric designed for multiple Industrial-Scientific-Medical (ISM) band communications is presented in this paper. The proposed antenna operating at 2.450 GHz, 4.725 Hz and 5.800 GHz consists of a patch and ground plane made of silver fabric mounted on a substrate of flexible low-permittivity foam. For verification, a reference prototype is manufactured from copper. The measurement of both antennas demonstrates the expected resonances, with some unexpected loss especially in the higher frequency range. Simulation results for the antenna in various bending condition indicate the robustness of the design with deviations of resonant frequencies in an acceptable range.",
"title": ""
},
{
"docid": "1871c42e7656c7cef2a7fb042e2f5582",
"text": "The emergence and ubiquity of online social networks have enriched web data with evolving interactions and communities both at mega-scale and in real-time. This data offers an unprecedented opportunity for studying the interaction between society and disease outbreaks. The challenge we describe in this data paper is how to extract and leverage epidemic outbreak insights from massive amounts of social media data and how this exercise can benefit medical professionals, patients, and policymakers alike. We attempt to prepare the research community for this challenge with four datasets. Publishing the four datasets will commoditize the data infrastructure to allow a higher and more efficient focal point for the research community.",
"title": ""
},
{
"docid": "1ab272c668743c0873081160571aa462",
"text": "Monodisperse hollow and core-shell calcium alginate microcapsules are successfully prepared via internal gelation in microfluidic-generated double emulsions. Microfluidic emulsification is introduced to generate monodisperse oil-in-water-in-oil (O/W/O) double emulsion templates, which contain Na-alginate, CaCO3 nanoparticles, and photoacid generator in the middle aqueous phase, for synthesizing Ca-alginate microcapsules. The internal gelation of the aqueous middle layer of O/W/O double emulsions is induced by crosslinking alginate polymers with Ca(2+) ions that are released from CaCO3 nanoparticles upon UV exposure of the photoacid generator. The as-prepared hollow and core-shell calcium alginate microcapsules are highly monodisperse and spherical in water. Model proteins Bovine serum albumin (BSA) molecules can be encapsulated into the Ca-alginate microcapsules after the capsule preparation, which demonstrates an alternative route for loading active drugs or chemicals into carriers to avoid the inactivation during the carrier preparation. The proposed technique in this study provides an efficient approach for synthesis of monodisperse hollow or core-shell calcium alginate microcapsules with large cavity or encapsulated lipophilic drugs, chemicals, and nutrients.",
"title": ""
},
{
"docid": "402bf66ab180944e8f3068bef64fbc77",
"text": "EvolView is a web application for visualizing, annotating and managing phylogenetic trees. First, EvolView is a phylogenetic tree viewer and customization tool; it visualizes trees in various formats, customizes them through built-in functions that can link information from external datasets, and exports the customized results to publication-ready figures. Second, EvolView is a tree and dataset management tool: users can easily organize related trees into distinct projects, add new datasets to trees and edit and manage existing trees and datasets. To make EvolView easy to use, it is equipped with an intuitive user interface. With a free account, users can save data and manipulations on the EvolView server. EvolView is freely available at: http://www.evolgenius.info/evolview.html.",
"title": ""
},
{
"docid": "a34e04069b232309b39994d21bb0f89a",
"text": "In the near future, i.e., beyond 4G, some of the prime objectives or demands that need to be addressed are increased capacity, improved data rate, decreased latency, and better quality of service. To meet these demands, drastic improvements need to be made in cellular network architecture. This paper presents the results of a detailed survey on the fifth generation (5G) cellular network architecture and some of the key emerging technologies that are helpful in improving the architecture and meeting the demands of users. In this detailed survey, the prime focus is on the 5G cellular network architecture, massive multiple input multiple output technology, and device-to-device communication (D2D). Along with this, some of the emerging technologies that are addressed in this paper include interference management, spectrum sharing with cognitive radio, ultra-dense networks, multi-radio access technology association, full duplex radios, millimeter wave solutions for 5G cellular networks, and cloud technologies for 5G radio access networks and software defined networks. In this paper, a general probable 5G cellular network architecture is proposed, which shows that D2D, small cell access points, network cloud, and the Internet of Things can be a part of 5G cellular network architecture. A detailed survey is included regarding current research projects being conducted in different countries by research groups and institutions that are working on 5G technologies.",
"title": ""
},
{
"docid": "43977abf063f974689065fe29945297a",
"text": "In this short paper we propose several objective and subjective metrics and present a comparison between two “commodity” VR systems: HTC Vive and Oculus Rift. Objective assessment focuses on frame rate, impact of ambiance light, and impact of sensors' line of sight obstruction. Subjective study aims at evaluating and comparing the pick-and-place task performance in a virtual world. We collected user ratings of overall quality, perceived ease of use, and perceived intuitiveness, with results indicating that HTC Vive slightly outperforms the Oculus Rift for the pick-and-place task under test.",
"title": ""
},
{
"docid": "93d80e2015de513a689a41f33d74c45d",
"text": "A horizontally polarized omnidirectional antenna with enhanced impedance bandwidth is presented in this letter. The proposed antenna consists of a feeding network, four printed dipole elements with etched slots, parasitic strips, and director elements. Four identically curved and printed dipole elements are placed in a square array and fed by a feeding network with uniform magnitude and phase; thus, the proposed antenna can achieve an omnidirectional radiation. To enhance the impedance bandwidth, parasitic strips and etched slots are introduced to produce additional lower and upper resonant frequencies, respectively. By utilizing four director elements, the gain variation in the horizontal plane can be improved, especially for the upper frequency band. With the structure, a reduced size of <inline-formula> <tex-math notation=\"LaTeX\">$0.63\\,\\lambda _{L} \\times 0.63\\,\\lambda _{L} \\times 0.01\\,\\lambda _{L}$</tex-math> </inline-formula> (<inline-formula><tex-math notation=\"LaTeX\">$\\lambda _{L}$</tex-math></inline-formula> is the free-space wavelength at the lowest frequency) is obtained. The proposed antenna is designed and fabricated. Measurement results reveal that the proposed antenna can provide an impedance bandwidth of 84.2% (1.58–3.88 GHz). Additionally, the gain variation in the horizontal plane is less than 1.5 dB over the frequency band 1.58–3.50 GHz, and increased to 2.2 dB at 3.80 GHz. Within the impedance bandwidth, the cross-polarization level is less than –23 dB in the horizontal plane.",
"title": ""
},
{
"docid": "31bb74eb5b217909d46782430375c5be",
"text": "Recent studies of upper limb movements have provided insights into the computations, mechanisms, and taxonomy of human sensorimotor learning. Motor tasks differ with respect to how they weight different learning processes. These include adaptation, an internal-model based process that reduces sensory-prediction errors in order to return performance to pre-perturbation levels, use-dependent plasticity, and operant reinforcement. Visuomotor rotation and force-field tasks impose systematic errors and thereby emphasize adaptation. In skill learning tasks, which for the most part do not involve a perturbation, improved performance is manifest as reduced motor variability and probably depends less on adaptation and more on success-based exploration. Explicit awareness and declarative memory contribute, to varying degrees, to motor learning. The modularity of motor learning processes maps, at least to some extent, onto distinct brain structures.",
"title": ""
},
{
"docid": "c22d64723df5233bfa5e41b8eb10e1d5",
"text": "State-of-the-art millimeter wave (MMW) multiple-input, multiple-output (MIMO) frequency-modulated continuous-wave (FMCW) radars allow high precision direction of arrival (DOA) estimation with an optimized antenna aperture size [1]. Typically, these systems operate using a single polarization. Fully polarimetric radars on the other hand are used to obtain the polarimetric scattering matrix (S-matrix) and extract polari-metric scattering information that otherwise remains concealed [2]. Combining both approaches by assembly of a dual-polarized waveguide antenna and a 77 GHz MIMO FMCW radar system results in the fully polarimetric MIMO radar system presented in this paper. By applying a MIMO-adapted version of the isolated antenna calibration technique (IACT) from [3], the radar system is calibrated and laboratory measurements of different canonical objects such as spheres, plates, dihedrals and trihedrals are performed. A statistical evaluation of these measurement results demonstrates the usability of the approach and shows that basic polarimetric scattering phenomena are reliably identified.",
"title": ""
},
{
"docid": "0a3598013927cb5728362f5f6e0c321d",
"text": "Some postfire annuals with dormant seeds use heat or chemical cues from charred wood to synchronize their germination with the postfire environment. We report that wood smoke and polar extracts of wood smoke, but not the ash of burned wood, contain potent cue(s) that stimulate germination in the postfire annual plant,Nicotiana attenuata. We examined the responses of seeds from six populations of plants from southwest Utah to extracts of smoke and found the proportion of viable seeds that germinated in the presence of smoke cues to vary between populations but to be consistent between generations. With the most dormant genotypes, we examine three mechanisms by which smoke-derived chemical cues may stimulate germination (chemical scarification of the seed coat and nutritive- and signal-mediated stimulation of germination) and report that the response is consistent with the signal-mediated mechanism. The germination cue(s) found in smoke are produced by the burning of hay, hardwood branches, leaves, and, to a lesser degree, cellulose. Moreover, the cues are found in the common food condiment, “liquid smoke,” and we find no significant differences between brands. With a bioassay-driven fractionation of liquid smoke, we identified 71 compounds in active fractions by GC-MS and AA spectrometry. However, when these compounds were tested in pure form or in combinations that mimicked the composition of active fractions over a range of concentrations, they failed to stimulate germination to the same degree that smoke fractions did. Moreover, enzymatic oxidation of some of these compounds also failed to stimulate germination. In addition, we tested 43 additional compounds also reported from smoke, 85 compounds that were structurally similar to those reported from smoke and 34 compounds reported to influence germination in other species. Of the 233 compounds tested, 16 proved to inhibit germination at the concentrations tested, and none reproduced the activity of wood smoke. By thermally desorbing smoke produced by cellulose combustions that was trapped on Chromosorb 101, we demonstrate that the cue is desorbed between 125 and 150°C. We estimate that the germination cues are active at concentrations of less than 1 pg/seed and, due to their chromatographic behavior, infer that a number of different chemical structures are active. In separate experiments, we demonstrate that cues remain active for at least 53 days in soil under greenhouse conditions and that the application of aqucous extracts of smoke to soil containing seeds results in dramatic increases in germination of artificial seed banks. Hence, although the chemical nature of the germination cue remains elusive, the stability of the germination cues, their water-solubility, and their activity in low concentrations suggest that these cues could serve as powerful tools for the examination of dormant seed banks and the selective factors thought to be important in the evolution of postfire plant communities.",
"title": ""
},
{
"docid": "ac5f518cbd783060af1cf6700b994469",
"text": "Scalable evolutionary computation has. become an intensively studied research topic in recent years. The issue of scalability is predominant in any field of algorithmic design, but it became particularly relevant for the design of competent genetic algorithms once the scalability problems of simple genetic algorithms were understood. Here we present some of the work that has aided in getting a clear insight in the scalability problems of simple genetic algorithms. Particularly, we discuss the important issue of building block mixing. We show how the need for mixing places a boundary in the GA parameter space that, together with the boundary from the schema theorem, delimits the region where the GA converges reliably to the optimum in problems of bounded difficulty. This region shrinks rapidly with increasing problem size unless the building blocks are tightly linked in the problem coding structure. In addition, we look at how straightforward extensions of the simple genetic algorithmnamely elitism, niching, and restricted mating are not significantly improving the scalability problems.",
"title": ""
},
{
"docid": "1778e5f82da9e90cbddfa498d68e461e",
"text": "Today’s business environment is characterized by fast and unexpected changes, many of which are driven by technological advancement. In such environment, the ability to respond effectively and adapt to the new requirements is not only desirable but essential to survive. Comprehensive and quick understanding of intricacies of market changes facilitates firm’s faster and better response. Two concepts contribute to the success of this scenario; organizational agility and business intelligence (BI). As of today, despite BI’s capabilities to foster organizational agility and consequently improve organizational performance, a clear link between BI and organizational agility has not been established. In this paper we argue that BI solutions have the potential to be facilitators for achieving agility. We aim at showing how BI capabilities can help achieve agility at operational, portfolio, and strategic levels.",
"title": ""
},
{
"docid": "89460f94140b9471b120674ddd904948",
"text": "Cross-disciplinary research on collective intelligence considers that groups, like individuals, have a certain level of intelligence. For example, the study by Woolley et al. (2010) indicates that groups which perform well on one type of task will perform well on others. In a pair of empirical studies of groups interacting face-to-face, they found evidence of a collective intelligence factor, a measure of consistent group performance across a series of tasks, which was highly predictive of performance on a subsequent, more complex task. This collective intelligence factor differed from the individual intelligence of group members, and was significantly predicted by members’ social sensitivity – the ability to understand the emotions of others based on visual facial cues (Baron-Cohen et al. 2001).",
"title": ""
}
] |
scidocsrr
|
ccba46f6feea5bbb3fb3fc700b51ebd0
|
Credit Scoring Models Using Soft Computing Methods: A Survey
|
[
{
"docid": "5b9baa6587bc70c17da2b0512545c268",
"text": "Credit scoring models have been widely studied in the areas of statistics, machine learning, and artificial intelligence (AI). Many novel approaches such as artificial neural networks (ANNs), rough sets, or decision trees have been proposed to increase the accuracy of credit scoring models. Since an improvement in accuracy of a fraction of a percent might translate into significant savings, a more sophisticated model should be proposed to significantly improving the accuracy of the credit scoring mode. In this paper, genetic programming (GP) is used to build credit scoring models. Two numerical examples will be employed here to compare the error rate to other credit scoring models including the ANN, decision trees, rough sets, and logistic regression. On the basis of the results, we can conclude that GP can provide better performance than other models. q 2005 Elsevier Ltd. All rights reserved.",
"title": ""
}
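To make the genetic programming (GP) approach described above concrete, the sketch below evolves a symbolic scoring expression over applicant features with a mutation-only (mu + lambda) loop on synthetic data. It is a minimal illustration, not the paper's setup: the dataset, operator set, tree depth, and selection scheme are all assumptions, and crossover is omitted for brevity.

```python
# Toy mutation-only GP for a credit-scoring-style binary decision (illustrative sketch).
import random
import numpy as np

rng = random.Random(0)
np_rng = np.random.default_rng(0)

# Toy applicant data: 4 numeric features, binary label (1 = default).
X = np_rng.normal(size=(200, 4))
y = (X[:, 0] - 0.5 * X[:, 1] + 0.2 * X[:, 2]
     + np_rng.normal(scale=0.3, size=200) > 0).astype(int)

OPS = {"+": np.add, "-": np.subtract, "*": np.multiply}

def random_tree(depth=3):
    """Grow a random expression tree over the feature indices."""
    if depth == 0 or rng.random() < 0.3:
        return ("x", rng.randrange(X.shape[1]))       # terminal: feature index
    op = rng.choice(list(OPS))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, data):
    """Evaluate an expression tree on all rows at once."""
    if tree[0] == "x":
        return data[:, tree[1]]
    return OPS[tree[0]](evaluate(tree[1], data), evaluate(tree[2], data))

def fitness(tree):
    """Error rate when thresholding the evolved score at zero."""
    pred = (evaluate(tree, X) > 0).astype(int)
    return np.mean(pred != y)

def mutate(tree, depth=2):
    """Replace a random subtree with a freshly grown one."""
    if tree[0] == "x" or rng.random() < 0.3:
        return random_tree(depth)
    i = rng.choice([1, 2])
    children = list(tree)
    children[i] = mutate(children[i], depth)
    return tuple(children)

# Simple evolutionary loop: keep the 10 best trees, refill with mutants.
population = [random_tree() for _ in range(50)]
for generation in range(30):
    parents = sorted(population, key=fitness)[:10]
    population = parents + [mutate(rng.choice(parents)) for _ in range(40)]

best = min(population, key=fitness)
print("best error rate:", fitness(best))
```

Thresholding the evolved expression at zero plays the role a GP-derived scorecard would play alongside the ANN, decision tree, rough set, and logistic regression baselines mentioned above.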
] |
[
{
"docid": "537076966f77631a3e915eccc8223d2b",
"text": "Finding domain invariant features is critical for successful domain adaptation and transfer learning. However, in the case of unsupervised adaptation, there is a significant risk of overfitting on source training data. Recently, a regularization for domain adaptation was proposed for deep models by (Ganin and Lempitsky, 2015). We build on their work by suggesting a more appropriate regularization for denoising autoencoders. Our model remains unsupervised and can be computed in a closed form. On standard text classification adaptation tasks, our approach yields the state of the art results, with an important reduction of the learning cost.",
"title": ""
},
{
"docid": "7f2857c1bd23c7114d58c290f21bf7bd",
"text": "Many contemporary organizations are placing a greater emphasis on their performance management systems as a means of generating higher levels of job performance. We suggest that producing performance increments may be best achieved by orienting the performance management system to promote employee engagement. To this end, we describe a new approach to the performance management process that includes employee engagement and the key drivers of employee engagement at each stage. We present a model of engagement management that incorporates the main ideas of the paper and suggests a new perspective for thinking about how to foster and manage employee engagement to achieve high levels of job",
"title": ""
},
{
"docid": "4c6efebdf08a3c1c4cefc9cdd8950bab",
"text": "Four patients are presented with the Goldenhar syndrome (GS) and cranial defects consisting of plagiocephaly, microcephaly, skull defects, or intracranial dermoid cysts. Twelve cases from the literature add hydrocephalus, encephalocele, and arhinencephaly to a growing list of brain anomalies in GS. As a group, these patients emphasize the variability of GS and the increased risk for developmental retardation with multiple, severe, or unusual manifestations. The temporal relation of proposed teratogenic events in GS provides an opportunity to reconstruct biological relationships within the 3-5-week human embryo.",
"title": ""
},
{
"docid": "8ccbf0f95df6d4d3c8eba33befc0f6b7",
"text": "Tactile graphics play an essential role in knowledge transfer for blind people. The tactile exploration of these graphics is often challenging because of the cognitive load caused by physiological constraints and their complexity. The coupling of physical tactile graphics with electronic devices offers to support the tactile exploration by auditory feedback. Often, these systems have strict constraints regarding their mobility or the process of coupling both components. Additionally, visually impaired people cannot appropriately benefit from their residual vision. This article presents a concept for 3D printed tactile graphics, which offers to use audio-tactile graphics with usual smartphones or tablet-computers. By using capacitive markers, the coupling of the tactile graphics with the mobile device is simplified. These tactile graphics integrating these markers can be printed in one turn by off-the-shelf 3D printers without any post-processing and allows us to use multiple elevation levels for graphical elements. Based on the developed generic concept on visually augmented audio-tactile graphics, we presented a case study for maps. A prototypical implementation was tested by a user study with visually impaired people. All the participants were able to interact with the 3D printed tactile maps using a standard tablet computer. To study the effect of visual augmentation of graphical elements, we conducted another comprehensive user study. We tested multiple types of graphics and obtained evidence that visual augmentation may offer clear advantages for the exploration of tactile graphics. Even participants with a minor residual vision could solve the tasks with visual augmentation more quickly and accurately.",
"title": ""
},
{
"docid": "296e9204869a3a453dd304fc3b4b8c4b",
"text": "Today, travelers are provided large amount information which includes Web sites and tourist magazines about introduction of tourist spot. However, it is not easy for users to process the information in a short time. Therefore travelers prefer to receive pertinent information easier and have that information presented in a clear and concise manner. This paper proposes a personalization method for tourist Point of Interest (POI) Recommendation.",
"title": ""
},
{
"docid": "13e84c1160fbffd1d8f91d5274c4d8cc",
"text": "This paper presents and demonstrates a class of 3-D integration platforms of substrate-integrated waveguide (SIW). The proposed right angle E-plane corner based on SIW technology enables the implementation of various 3-D architectures of planar circuits with the printed circuit board and other similar processes. This design scheme brings up attractive advantages in terms of cost, flexibility, and integration. Two circuit prototypes with both 0- and 45° vertical rotated arms are demonstrated. The straight version of the prototypes shows 0.5 dB of insertion loss from 30 to 40 GHz, while the rotated version gives 0.7 dB over the same frequency range. With this H-to-E-plane interconnect, a T-junction is studied and designed. Simulated results show 20-dB return loss over 19.25% of bandwidth. Measured results suggest an excellent performance within the experimental frequency range of 32-37.4 GHz, with 10-dB return loss and less than ±4° phase imbalance. An optimized wideband magic-T structure is demonstrated and fabricated. Both simulated and measured results show a very promising performance with very good isolation and power equality. With two 45° vertical rotated arm bends, two antennas are used to build up a dual polarization system. An isolation of 20 dB is shown over 32-40 GHz and the radiation patterns of the antenna are also given.",
"title": ""
},
{
"docid": "309e14c07a3a340f7da15abeb527231d",
"text": "The random forest algorithm, proposed by L. Breiman in 2001, has been extremely successful as a general-purpose classification and regression method. The approach, which combines several randomized decision trees and aggregates their predictions by averaging, has shown excellent performance in settings where the number of variables is much larger than the number of observations. Moreover, it is versatile enough to be applied to large-scale problems, is easily adapted to various ad-hoc learning tasks, and returns measures of variable importance. The present article reviews the most recent theoretical and methodological developments for random forests. Emphasis is placed on the mathematical forces driving the algorithm, with special attention given to the selection of parameters, the resampling mechanism, and variable importance measures. This review is intended to provide non-experts easy access to the main ideas.",
"title": ""
},
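As a quick illustration of the method reviewed above, the following sketch fits a random forest in a setting with many more variables than informative ones and reads off the variable importance measures the review discusses. It assumes scikit-learn is available; the synthetic dataset and hyperparameter choices are arbitrary values for the example, not recommendations from the review.

```python
# Minimal random forest usage sketch (illustrative parameters only).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# High-dimensional setting: many more variables than informative ones.
X, y = make_classification(n_samples=500, n_features=200,
                           n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                    random_state=0)

# max_features controls the per-split randomization discussed in the review.
forest = RandomForestClassifier(n_estimators=300, max_features="sqrt",
                                random_state=0)
forest.fit(X_train, y_train)

print("test accuracy:", forest.score(X_test, y_test))
print("top feature importances:",
      sorted(forest.feature_importances_, reverse=True)[:5])
```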
{
"docid": "7f4701d8c9f651c3a551a91d19fd28d9",
"text": "Road extraction from aerial images has been a hot research topic in the field of remote sensing image analysis. In this letter, a semantic segmentation neural network, which combines the strengths of residual learning and U-Net, is proposed for road area extraction. The network is built with residual units and has similar architecture to that of U-Net. The benefits of this model are twofold: first, residual units ease training of deep networks. Second, the rich skip connections within the network could facilitate information propagation, allowing us to design networks with fewer parameters, however, better performance. We test our network on a public road data set and compare it with U-Net and other two state-of-the-art deep-learning-based road extraction methods. The proposed approach outperforms all the comparing methods, which demonstrates its superiority over recently developed state of the arts.",
"title": ""
},
{
"docid": "66b680500240631b9a4b682b33a5bafa",
"text": "Multichannel customer management is “the design, deployment, and evaluation of channels to enhance customer value through effective customer acquisition, retention, and development” (Neslin, Scott A., D. Grewal, R. Leghorn, V. Shankar, M. L. Teerling, J. S. Thomas, P. C. Verhoef (2006), Challenges and Opportunities in Multichannel Management. Journal of Service Research 9(2) 95–113). Channels typically include the store, the Web, catalog, sales force, third party agency, call center and the like. In recent years, multichannel marketing has grown tremendously and is anticipated to grow even further. While we have developed a good understanding of certain issues such as the relative value of a multichannel customer over a single channel customer, several research and managerial questions still remain. We offer an overview of these emerging issues, present our future outlook, and suggest important avenues for future research. © 2009 Direct Marketing Educational Foundation, Inc. Published by Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "a4099a526548c6d00a91ea21b9f2291d",
"text": "The robust principal component analysis (robust PCA) problem has been considered in many machine learning applications, where the goal is to decompose the data matrix to a low rank part plus a sparse residual. While current approaches are developed by only considering the low rank plus sparse structure, in many applications, side information of row and/or column entities may also be given, and it is still unclear to what extent could such information help robust PCA. Thus, in this paper, we study the problem of robust PCA with side information, where both prior structure and features of entities are exploited for recovery. We propose a convex problem to incorporate side information in robust PCA and show that the low rank matrix can be exactly recovered via the proposed method under certain conditions. In particular, our guarantee suggests that a substantial amount of low rank matrices, which cannot be recovered by standard robust PCA, become recoverable by our proposed method. The result theoretically justifies the effectiveness of features in robust PCA. In addition, we conduct synthetic experiments as well as a real application on noisy image classification to show that our method also improves the performance in practice by exploiting side information.",
"title": ""
},
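For context on the low-rank plus sparse decomposition discussed above, here is a minimal sketch of plain robust PCA (principal component pursuit) solved with a standard ADMM-style iteration of singular value thresholding and entrywise soft thresholding; it deliberately omits the paper's side-information terms. The synthetic data, the fixed penalty mu, and the iteration count are illustrative assumptions.

```python
# Plain robust PCA (principal component pursuit) sketch, without side information.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic observation: rank-3 matrix plus sparse, large-magnitude corruptions.
n = 80
L_true = rng.normal(size=(n, 3)) @ rng.normal(size=(3, n))
S_true = np.zeros((n, n))
mask = rng.random((n, n)) < 0.05
S_true[mask] = rng.normal(scale=10.0, size=mask.sum())
M = L_true + S_true

def svt(A, tau):
    """Singular value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def soft(A, lam):
    """Entrywise soft thresholding: proximal operator of the l1 norm."""
    return np.sign(A) * np.maximum(np.abs(A) - lam, 0.0)

lam = 1.0 / np.sqrt(n)              # classic lambda choice for PCP
mu = n * n / (4.0 * np.abs(M).sum())  # common heuristic for the penalty
L = np.zeros_like(M)
S = np.zeros_like(M)
Y = np.zeros_like(M)                 # scaled dual variable
for _ in range(300):
    L = svt(M - S + Y / mu, 1.0 / mu)
    S = soft(M - L + Y / mu, lam / mu)
    Y = Y + mu * (M - L - S)

print("recovered rank:", np.linalg.matrix_rank(L))
print("relative low-rank error:",
      np.linalg.norm(L - L_true) / np.linalg.norm(L_true))
```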
{
"docid": "40c5f333d037f1e9a26e186d823b336e",
"text": "We present a simple, prepackaged solution to generating paraphrases of English sentences. We use the Paraphrase Database (PPDB) for monolingual sentence rewriting and provide machine translation language packs: prepackaged, tuned models that can be downloaded and used to generate paraphrases on a standard Unix environment. The language packs can be treated as a black box or customized to specific tasks. In this demonstration, we will explain how to use the included interactive webbased tool to generate sentential paraphrases.",
"title": ""
},
{
"docid": "c2b1bb55522213987573b22fa407c937",
"text": "We present a system for producing 3D animations using physical objects (i.e., puppets) as input. Puppeteers can load 3D models of familiar rigid objects, including toys, into our system and use them as puppets for an animation. During a performance, the puppeteer physically manipulates these puppets in front of a Kinect depth sensor. Our system uses a combination of image-feature matching and 3D shape matching to identify and track the physical puppets. It then renders the corresponding 3D models into a virtual set. Our system operates in real time so that the puppeteer can immediately see the resulting animation and make adjustments on the fly. It also provides 6D virtual camera \\\\rev{and lighting} controls, which the puppeteer can adjust before, during, or after a performance. Finally our system supports layered animations to help puppeteers produce animations in which several characters move at the same time. We demonstrate the accessibility of our system with a variety of animations created by puppeteers with no prior animation experience.",
"title": ""
},
{
"docid": "0a4392285df7ddb92458ffa390f36867",
"text": "A good model of object shape is essential in applications such as segmentation, detection, inpainting and graphics. For example, when performing segmentation, local constraints on the shapes can help where object boundaries are noisy or unclear, and global constraints can resolve ambiguities where background clutter looks similar to parts of the objects. In general, the stronger the model of shape, the more performance is improved. In this paper, we use a type of deep Boltzmann machine (Salakhutdinov and Hinton, International Conference on Artificial Intelligence and Statistics, 2009) that we call a Shape Boltzmann Machine (SBM) for the task of modeling foreground/background (binary) and parts-based (categorical) shape images. We show that the SBM characterizes a strong model of shape, in that samples from the model look realistic and it can generalize to generate samples that differ from training examples. We find that the SBM learns distributions that are qualitatively and quantitatively better than existing models for this task.",
"title": ""
},
{
"docid": "f465475eb7bb52d455e3ed77b4808d26",
"text": "Background Long-term dieting has been reported to reduce resting energy expenditure (REE) leading to weight regain once the diet has been curtailed. Diets are also difficult to follow for a significant length of time. The purpose of this preliminary proof of concept study was to examine the effects of short-term intermittent dieting during exercise training on REE and weight loss in overweight women.",
"title": ""
},
{
"docid": "3c13399d0c869e58830a7efb8f6832a8",
"text": "The use of supply frequencies above 50-60 Hz allows for an increase in the power density applied to the ozonizer electrode surface and an increase in ozone production for a given surface area, while decreasing the necessary peak voltage. Parallel-resonant converters are well suited for supplying the high capacitive load of ozonizers. Therefore, in this paper the current-fed parallel-resonant push-pull inverter is proposed as a good option to implement high-voltage high-frequency power supplies for ozone generators. The proposed converter is analyzed and some important characteristics are obtained. The design and implementation of the complete power supply are also shown. The UC3872 integrated circuit is proposed in order to operate the converter at resonance, allowing us to maintain a good response disregarding the changes in electric parameters of the transformer-ozonizer pair. Experimental results for a 50-W prototype are also provided.",
"title": ""
},
{
"docid": "b76d5cfc22d0c39649ca093111864926",
"text": "Runtime verification is the process of observing a sequence of events generated by a running system and comparing it to some formal specification for potential violations. We show how the use of a runtime monitor can greatly speed up the testing phase of a video game under development by automating the detection of bugs when the game is being played. We take advantage of the fact that a video game, contrarily to generic software, follows a special structure that contains a “game loop.” This game loop can be used to centralize the instrumentation and generate events based on the game's internal state. We report on experiments made on a sample of six real-world video games of various genres and sizes by successfully instrumenting and efficiently monitoring various temporal properties over their execution, including actual bugs reported in the games' bug tracking database in the course of their development.",
"title": ""
},
{
"docid": "d34d8dd7ba59741bb5e28bba3e870ac4",
"text": "Among those who have recently lost a job, social networks in general and online ones in particular may be useful to cope with stress and find new employment. This study focuses on the psychological and practical consequences of Facebook use following job loss. By pairing longitudinal surveys of Facebook users with logs of their online behavior, we examine how communication with different kinds of ties predicts improvements in stress, social support, bridging social capital, and whether they find new jobs. Losing a job is associated with increases in stress, while talking with strong ties is generally associated with improvements in stress and social support. Weak ties do not provide these benefits. Bridging social capital comes from both strong and weak ties. Surprisingly, individuals who have lost a job feel greater stress after talking with strong ties. Contrary to the \"strength of weak ties\" hypothesis, communication with strong ties is more predictive of finding employment within three months.",
"title": ""
},
{
"docid": "337a738d386fa66725fe9be620365d5f",
"text": "Change in a software is crucial to incorporate defect correction and continuous evolution of requirements and technology. Thus, development of quality models to predict the change proneness attribute of a software is important to effectively utilize and plan the finite resources during maintenance and testing phase of a software. In the current scenario, a variety of techniques like the statistical techniques, the Machine Learning (ML) techniques and the Search-based techniques (SBT) are available to develop models to predict software quality attributes. In this work, we assess the performance of ten machine learning and search-based techniques using data collected from three open source software. We first develop a change prediction model using one data set and then we perform inter-project validation using two other data sets in order to obtain unbiased and generalized results. The results of the study indicate comparable performance of SBT with other employed statistical and ML techniques. This study also supports inter project validation as we successfully applied the model created using the training data of one project on other similar projects and yield good results.",
"title": ""
},
{
"docid": "c6a649a1eed332be8fc39bfa238f4214",
"text": "The Internet of things (IoT), which integrates a variety of devices into networks to provide advanced and intelligent services, has to protect user privacy and address attacks such as spoofing attacks, denial of service (DoS) attacks, jamming, and eavesdropping. We investigate the attack model for IoT systems and review the IoT security solutions based on machine-learning (ML) techniques including supervised learning, unsupervised learning, and reinforcement learning (RL). ML-based IoT authentication, access control, secure offloading, and malware detection schemes to protect data privacy are the focus of this article. We also discuss the challenges that need to be addressed to implement these ML-based security schemes in practical IoT systems.",
"title": ""
},
{
"docid": "9975e61afd0bf521c3ffbf29d0f39533",
"text": "Computer security depends largely on passwords to authenticate human users. However, users have difficulty remembering passwords over time if they choose a secure password, i.e. a password that is long and random. Therefore, they tend to choose short and insecure passwords. Graphical passwords, which consist of clicking on images rather than typing alphanumeric strings, may help to overcome the problem of creating secure and memorable passwords. In this paper we describe PassPoints, a new and more secure graphical password system. We report an empirical study comparing the use of PassPoints to alphanumeric passwords. Participants created and practiced either an alphanumeric or graphical password. The participants subsequently carried out three longitudinal trials to input their password over the course of 6 weeks. The results show that the graphical password users created a valid password with fewer difficulties than the alphanumeric users. However, the graphical users took longer and made more invalid password inputs than the alphanumeric users while practicing their passwords. In the longitudinal trials the two groups performed similarly on memory of their password, but the graphical group took more time to input a password. r 2005 Elsevier Ltd. All rights reserved.",
"title": ""
}
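The core login check behind a PassPoints-style graphical password can be sketched as an ordered tolerance-region test, as below. The tolerance radius and click coordinates are made-up values, and a real deployment would store discretized or hashed representations rather than raw points; this is only meant to illustrate the click-matching idea, not the system evaluated in the study.

```python
# Ordered tolerance-region check for a click-point graphical password (sketch).
from math import hypot

TOLERANCE_PX = 10  # acceptance radius around each enrolled click point (assumed)

def verify_clicks(enrolled, attempt, tolerance=TOLERANCE_PX):
    """Return True if attempted clicks match the enrolled ones, in order."""
    if len(attempt) != len(enrolled):
        return False
    return all(
        hypot(ax - ex, ay - ey) <= tolerance
        for (ex, ey), (ax, ay) in zip(enrolled, attempt)
    )

# Enrollment: five points chosen on the password image during setup.
enrolled_points = [(132, 88), (401, 215), (256, 330), (77, 412), (498, 64)]

# A later login attempt with small, human-scale deviations.
attempt = [(135, 90), (398, 219), (252, 327), (80, 410), (495, 70)]
print(verify_clicks(enrolled_points, attempt))        # True: all within tolerance
print(verify_clicks(enrolled_points, attempt[::-1]))  # False: wrong click order
```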
] |
scidocsrr
|
b8157f13c56e9fe513b5ba5231606b61
|
Stereotype Threat Effects on Black and White Athletic Performance
|
[
{
"docid": "f5bc721d2b63912307c4ad04fb78dd2c",
"text": "When women perform math, unlike men, they risk being judged by the negative stereotype that women have weaker math ability. We call this predicament st reotype threat and hypothesize that the apprehension it causes may disrupt women’s math performance. In Study 1 we demonstrated that the pattern observed in the literature that women underperform on difficult (but not easy) math tests was observed among a highly selected sample of men and women. In Study 2 we demonstrated that this difference in performance could be eliminated when we lowered stereotype threat by describing the test as not producing gender differences. However, when the test was described as producing gender differences and stereotype threat was high, women performed substantially worse than equally qualified men did. A third experiment replicated this finding with a less highly selected population and explored the mediation of the effect. The implication that stereotype threat may underlie gender differences in advanced math performance, even",
"title": ""
}
] |
[
{
"docid": "b4a425c86bdd1814d7de6318ba305c58",
"text": "There is a natural correlation between the visual and auditive elements of a video. In this work we leverage this connection to learn general and effective models for both audio and video analysis from self-supervised temporal synchronization. We demonstrate that a calibrated curriculum learning scheme, a careful choice of negative examples, and the use of a contrastive loss are critical ingredients to obtain powerful multi-sensory representations from models optimized to discern temporal synchronization of audio-video pairs. Without further finetuning, the resulting audio features achieve performance superior or comparable to the state-of-the-art on established audio classification benchmarks (DCASE2014 and ESC-50). At the same time, our visual subnet provides a very effective initialization to improve the accuracy of video-based action recognition models: compared to learning from scratch, our self-supervised pretraining yields a remarkable gain of +19.9% in action recognition accuracy on UCF101 and a boost of +17.7% on HMDB51.",
"title": ""
},
{
"docid": "273153d0cf32162acb48ed989fa6d713",
"text": "This article may be used for research, teaching, and private study purposes. Any substantial or systematic reproduction, redistribution, reselling, loan, sub-licensing, systematic supply, or distribution in any form to anyone is expressly forbidden. The publisher does not give any warranty express or implied or make any representation that the contents will be complete or accurate or up to date. The accuracy of any instructions, formulae, and drug doses should be independently verified with primary sources. The publisher shall not be liable for any loss, actions, claims, proceedings, demand, or costs or damages whatsoever or howsoever caused arising directly or indirectly in connection with or arising out of the use of this material.",
"title": ""
},
{
"docid": "6075b9f909a5df033d1222685d30b1dc",
"text": "Recent advances in high-throughput cDNA sequencing (RNA-seq) can reveal new genes and splice variants and quantify expression genome-wide in a single assay. The volume and complexity of data from RNA-seq experiments necessitate scalable, fast and mathematically principled analysis software. TopHat and Cufflinks are free, open-source software tools for gene discovery and comprehensive expression analysis of high-throughput mRNA sequencing (RNA-seq) data. Together, they allow biologists to identify new genes and new splice variants of known ones, as well as compare gene and transcript expression under two or more conditions. This protocol describes in detail how to use TopHat and Cufflinks to perform such analyses. It also covers several accessory tools and utilities that aid in managing data, including CummeRbund, a tool for visualizing RNA-seq analysis results. Although the procedure assumes basic informatics skills, these tools assume little to no background with RNA-seq analysis and are meant for novices and experts alike. The protocol begins with raw sequencing reads and produces a transcriptome assembly, lists of differentially expressed and regulated genes and transcripts, and publication-quality visualizations of analysis results. The protocol's execution time depends on the volume of transcriptome sequencing data and available computing resources but takes less than 1 d of computer time for typical experiments and ∼1 h of hands-on time.",
"title": ""
},
{
"docid": "b56b90d98b4b1b136e283111e9acf732",
"text": "Mobile phones are widely used nowadays and during the last years developed from simple phones to small computers with an increasing number of features. These result in a wide variety of data stored on the devices which could be a high security risk in case of unauthorized access. A comprehensive user survey was conducted to get information about what data is really stored on the mobile devices, how it is currently protected and if biometric authentication methods could improve the current state. This paper states the results from about 550 users of mobile devices. The analysis revealed a very low securtiy level of the devices. This is partly due to a low security awareness of their owners and partly due to the low acceptance of the offered authentication method based on PIN. Further results like the experiences with mobile thefts and the willingness to use biometric authentication methods as alternative to PIN authentication are also stated.",
"title": ""
},
{
"docid": "a65166fb5584bf634d841353c442b665",
"text": "Although business process management ( ̳BPM‘) is a popular concept, it has not yet been properly theoretically grounded. This leads to problems in identifying both generic and case specific critical success factors of BPM programs. The paper proposes an underlying theoretical framework with the utilization of three theories: contingency, dynamic capabilities and task technology fit. The main premise is that primarily the fit between the business environment and business processes is needed. Then both continuous improvement and the proper fit between business process tasks and information systems must exist. The underlying theory is used to identify critical success factors on a case study from the banking sector.",
"title": ""
},
{
"docid": "a80c83fd7bdf2a8550c80c32b98352ec",
"text": "In this paper, we propose an online learning algorithm for optimal execution in the limit order book of a financial asset. Given a certain number of shares to sell and an allocated time window to complete the transaction, the proposed algorithm dynamically learns the optimal number of shares to sell via market orders at prespecified time slots within the allocated time interval. We model this problem as a Markov Decision Process (MDP), which is then solved by dynamic programming. First, we prove that the optimal policy has a specific form, which requires either selling no shares or the maximum allowed amount of shares at each time slot. Then, we consider the learning problem, in which the state transition probabilities are unknown and need to be learned on the fly. We propose a learning algorithm that exploits the form of the optimal policy when choosing the amount to trade. Interestingly, this algorithm achieves bounded regret with respect to the optimal policy computed based on the complete knowledge of the market dynamics. Our numerical results on several finance datasets show that the proposed algorithm performs significantly better than the traditional Q-learning algorithm by exploiting the structure of the problem.",
"title": ""
},
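The dynamic-programming backbone of the execution MDP described above can be illustrated with a toy backward induction: sell R shares over T slots with a per-slot cap and a simple quadratic temporary-impact cost. The price path, impact model, and parameters are assumptions for the example, and this toy cost model does not reproduce the paper's sell-nothing-or-sell-the-maximum policy structure or its online-learning setting with unknown dynamics; it only shows how the value and policy tables are filled in.

```python
# Backward induction for a toy optimal-execution MDP (illustrative only).
import numpy as np

T, R, A = 5, 100, 40            # slots, total shares, max shares per market order
impact = 0.01                   # per-share temporary price impact (assumed)
expected_price = [10.0, 10.2, 9.9, 10.1, 10.0]   # assumed expected prices per slot

# value[t][r] = best expected revenue from slot t onward with r shares left.
value = np.full((T + 1, R + 1), -np.inf)
value[T, 0] = 0.0               # everything must be sold by the deadline
policy = np.zeros((T, R + 1), dtype=int)

for t in range(T - 1, -1, -1):
    for r in range(R + 1):
        best, best_a = -np.inf, 0
        for a in range(0, min(r, A) + 1):
            remaining = r - a
            if remaining > A * (T - 1 - t):      # infeasible: cannot finish in time
                continue
            revenue = a * (expected_price[t] - impact * a) + value[t + 1, remaining]
            if revenue > best:
                best, best_a = revenue, a
        value[t, r], policy[t, r] = best, best_a

# Roll the resulting policy forward from the initial inventory.
r, plan = R, []
for t in range(T):
    a = policy[t, r]
    plan.append(a)
    r -= a
print("shares sold per slot:", plan, "expected revenue:", round(value[0, R], 2))
```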
{
"docid": "12a8d007ca4dce21675ddead705c7b62",
"text": "This paper presents an ethnographic account of the implementation of Lean service redesign methodologies in one UK NHS hospital operating department. It is suggested that this popular management 'technology', with its emphasis on creating value streams and reducing waste, has the potential to transform the social organisation of healthcare work. The paper locates Lean healthcare within wider debates related to the standardisation of clinical practice, the re-configuration of occupational boundaries and the stratification of clinical communities. Drawing on the 'technologies-in-practice' perspective the study is attentive to the interaction of both the intent to transform work and the response of clinicians to this intent as an ongoing and situated social practice. In developing this analysis this article explores three dimensions of social practice to consider the way Lean is interpreted and articulated (rhetoric), enacted in social practice (ritual), and experienced in the context of prevailing lines of power (resistance). Through these interlinked analytical lenses the paper suggests the interaction of Lean and clinical practice remains contingent and open to negotiation. In particular, Lean follows in a line of service improvements that bring to the fore tensions between clinicians and service leaders around the social organisation of healthcare work. The paper concludes that Lean might not be the easy remedy for making both efficiency and effectiveness improvements in healthcare.",
"title": ""
},
{
"docid": "ac1b28346ae9df1dd3b455d113551caf",
"text": "The new IEEE 802.11 standard, IEEE 802.11ax, has the challenging goal of serving more Uplink (UL) traffic and users as compared with his predecessor IEEE 802.11ac, enabling consistent and reliable streams of data (average throughput) per station. In this paper we explore several new IEEE 802.11ax UL scheduling mechanisms and compare between the maximum throughputs of unidirectional UDP Multi Users (MU) triadic. The evaluation is conducted based on Multiple-Input-Multiple-Output (MIMO) and Orthogonal Frequency Division Multiple Access (OFDMA) transmission multiplexing format in IEEE 802.11ax vs. the CSMA/CA MAC in IEEE 802.11ac in the Single User (SU) and MU modes for 1, 4, 8, 16, 32 and 64 stations scenario in reliable and unreliable channels. The comparison is conducted as a function of the Modulation and Coding Schemes (MCS) in use. In IEEE 802.11ax we consider two new flavors of acknowledgment operation settings, where the maximum acknowledgment windows are 64 or 256 respectively. In SU scenario the throughputs of IEEE 802.11ax are larger than those of IEEE 802.11ac by 64% and 85% in reliable and unreliable channels respectively. In MU-MIMO scenario the throughputs of IEEE 802.11ax are larger than those of IEEE 802.11ac by 263% and 270% in reliable and unreliable channels respectively. Also, as the number of stations increases, the advantage of IEEE 802.11ax in terms of the access delay also increases.",
"title": ""
},
{
"docid": "8c2c54207fa24358552bc30548bec5bc",
"text": "This paper proposes an edge bundling approach applied on parallel coordinates to improve the visualization of cluster information directly from the overview. Lines belonging to a cluster are bundled into a single curve between axes, where the horizontal and vertical positioning of the bundling intersection (known as bundling control points) to encode pertinent information about the cluster in a given dimension, such as variance, standard deviation, mean, median, and so on. The hypothesis is that adding this information to the overview improves the visualization overview at the same that it does not prejudice the understanding in other aspects. We have performed tests with participants to compare our approach with classic parallel coordinates and other consolidated bundling technique. The results showed most of the initially proposed hypotheses to be confirmed at the end of the study, as the tasks were performed successfully in the majority of tasks maintaining a low response time in average, as well as having more aesthetic pleasing according to participants' opinion.",
"title": ""
},
{
"docid": "ee0c8eafd5804b215b34a443d95259d4",
"text": "Fog computing has emerged as a promising technology that can bring the cloud applications closer to the physical IoT devices at the network edge. While it is widely known what cloud computing is, and how data centers can build the cloud infrastructure and how applications can make use of this infrastructure, there is no common picture on what fog computing and a fog node, as its main building block, really is. One of the first attempts to define a fog node was made by Cisco, qualifying a fog computing system as a “mini-cloud,” located at the edge of the network and implemented through a variety of edge devices, interconnected by a variety, mostly wireless, communication technologies. Thus, a fog node would be the infrastructure implementing the said mini-cloud. Other proposals have their own definition of what a fog node is, usually in relation to a specific edge device, a specific use case or an application. In this paper, we first survey the state of the art in technologies for fog computing nodes as building blocks of fog computing, paying special attention to the contributions that analyze the role edge devices play in the fog node definition. We summarize and compare the concepts, lessons learned from their implementation, and show how a conceptual framework is emerging towards a unifying fog node definition. We focus on core functionalities of a fog node as well as in the accompanying opportunities and challenges towards their practical realization in the near future.",
"title": ""
},
{
"docid": "3c2b68ac95f1a9300585b73ca4b83122",
"text": "The success of various applications including robotics, digital content creation, and visualization demand a structured and abstract representation of the 3D world from limited sensor data. Inspired by the nature of human perception of 3D shapes as a collection of simple parts, we explore such an abstract shape representation based on primitives. Given a single depth image of an object, we present 3DPRNN, a generative recurrent neural network that synthesizes multiple plausible shapes composed of a set of primitives. Our generative model encodes symmetry characteristics of common man-made objects, preserves long-range structural coherence, and describes objects of varying complexity with a compact representation. We also propose a method based on Gaussian Fields to generate a large scale dataset of primitive-based shape representations to train our network. We evaluate our approach on a wide range of examples and show that it outperforms nearest-neighbor based shape retrieval methods and is on-par with voxelbased generative models while using a significantly reduced parameter space.",
"title": ""
},
{
"docid": "fea6f052c032c09408f967950098947e",
"text": "The identification of signals of very recent positive selection provides information about the adaptation of modern humans to local conditions. We report here on a genome-wide scan for signals of very recent positive selection in favor of variants that have not yet reached fixation. We describe a new analytical method for scanning single nucleotide polymorphism (SNP) data for signals of recent selection, and apply this to data from the International HapMap Project. In all three continental groups we find widespread signals of recent positive selection. Most signals are region-specific, though a significant excess are shared across groups. Contrary to some earlier low resolution studies that suggested a paucity of recent selection in sub-Saharan Africans, we find that by some measures our strongest signals of selection are from the Yoruba population. Finally, since these signals indicate the existence of genetic variants that have substantially different fitnesses, they must indicate loci that are the source of significant phenotypic variation. Though the relevant phenotypes are generally not known, such loci should be of particular interest in mapping studies of complex traits. For this purpose we have developed a set of SNPs that can be used to tag the strongest approximately 250 signals of recent selection in each population.",
"title": ""
},
{
"docid": "00f2bb2dd3840379c2442c018407b1c8",
"text": "BACKGROUND\nFacebook is a social networking site (SNS) for communication, entertainment and information exchange. Recent research has shown that excessive use of Facebook can result in addictive behavior in some individuals.\n\n\nAIM\nTo assess the patterns of Facebook use in post-graduate students of Yenepoya University and evaluate its association with loneliness.\n\n\nMETHODS\nA cross-sectional study was done to evaluate 100 post-graduate students of Yenepoya University using Bergen Facebook Addiction Scale (BFAS) and University of California and Los Angeles (UCLA) loneliness scale version 3. Descriptive statistics were applied. Pearson's bivariate correlation was done to see the relationship between severity of Facebook addiction and the experience of loneliness.\n\n\nRESULTS\nMore than one-fourth (26%) of the study participants had Facebook addiction and 33% had a possibility of Facebook addiction. There was a significant positive correlation between severity of Facebook addiction and extent of experience of loneliness ( r = .239, p = .017).\n\n\nCONCLUSION\nWith the rapid growth of popularity and user-base of Facebook, a significant portion of the individuals are susceptible to develop addictive behaviors related to Facebook use. Loneliness is a factor which influences addiction to Facebook.",
"title": ""
},
{
"docid": "cdb87a9db48b78e193d9229282bd3b67",
"text": "While large-scale automatic grading of student programs for correctness is widespread, less effort has focused on automating feedback for good programming style:} the tasteful use of language features and idioms to produce code that is not only correct, but also concise, elegant, and revealing of design intent. We hypothesize that with a large enough (MOOC-sized) corpus of submissions to a given programming problem, we can observe a range of stylistic mastery from naïve to expert, and many points in between, and that we can exploit this continuum to automatically provide hints to learners for improving their code style based on the key stylistic differences between a given learner's submission and a submission that is stylistically slightly better. We are developing a methodology for analyzing and doing feature engineering on differences between submissions, and for learning from instructor-provided feedback as to which hints are most relevant. We describe the techniques used to do this in our prototype, which will be deployed in a residential software engineering course as an alpha test prior to deploying in a MOOC later this year.",
"title": ""
},
{
"docid": "7a8c7f369c060003ed99bb4ff784b687",
"text": "An important goal in visual recognition is to devise image representations that are invariant to particular transformations. In this paper, we address this goal with a new type of convolutional neural network (CNN) whose invariance is encoded by a reproducing kernel. Unlike traditional approaches where neural networks are learned either to represent data or for solving a classification task, our network learns to approximate the kernel feature map on training data. Such an approach enjoys several benefits over classical ones. First, by teaching CNNs to be invariant, we obtain simple network architectures that achieve a similar accuracy to more complex ones, while being easy to train and robust to overfitting. Second, we bridge a gap between the neural network literature and kernels, which are natural tools to model invariance. We evaluate our methodology on visual recognition tasks where CNNs have proven to perform well, e.g., digit recognition with the MNIST dataset, and the more challenging CIFAR-10 and STL-10 datasets, where our accuracy is competitive with the state of the art.",
"title": ""
},
{
"docid": "6520be1becd7e446b24ecb2fae6b1d50",
"text": "Neural networks in their modern deep learning incarnation have achieved state of the art performance on a wide variety of tasks and domains. A core intuition behind these methods is that they learn layers of features which interpolate between two domains in a series of related parts. The first part of this thesis introduces the building blocks of neural networks for computer vision. It starts with linear models then proceeds to deep multilayer perceptrons and convolutional neural networks, presenting the core details of each. However, the introduction also focuses on intuition by visualizing concrete examples of the parts of a modern network. The second part of this thesis investigates regularization of neural networks. Methods like dropout and others have been proposed to favor certain (empirically better) solutions over others. However, big deep neural networks still overfit very easily. This section proposes a new regularizer called DeCov, which leads to significantly reduced overfitting (difference between train and val performance) and greater generalization, sometimes better than dropout and other times not. The regularizer is based on the cross-covariance of hidden representations and takes advantage of the intuition that different features should try to represent different things, an intuition others have explored with similar losses. Experiments across a range of datasets and network architectures demonstrate reduced overfitting due to DeCov while almost always maintaining or increasing generalization performance and often improving performance over dropout.",
"title": ""
},
{
"docid": "879282128be8b423114401f6ec8baf8a",
"text": "Yelp is one of the largest online searching and reviewing systems for kinds of businesses, including restaurants, shopping, home services et al. Analyzing the real world data from Yelp is valuable in acquiring the interests of users, which helps to improve the design of the next generation system. This paper targets the evaluation of Yelp dataset, which is provided in the Yelp data challenge. A bunch of interesting results are found. For instance, to reach any one in the Yelp social network, one only needs 4.5 hops on average, which verifies the classical six degree separation theory; Elite user mechanism is especially effective in maintaining the healthy of the whole network; Users who write less than 100 business reviews dominate. Those insights are expected to be considered by Yelp to make intelligent business decisions in the future.",
"title": ""
},
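The "hops on average" figure quoted above is an average shortest-path length over the friendship graph; a minimal sketch of how such a number could be computed is given below, assuming networkx is installed and using a toy edge list in place of the real Yelp social network.

```python
# Average shortest-path length ("degrees of separation") on a toy friendship graph.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("ana", "bo"), ("bo", "cy"), ("cy", "dee"), ("dee", "eli"),
    ("ana", "cy"), ("bo", "eli"), ("eli", "fay"),
])

# Average number of hops between reachable user pairs; on a real network this
# would be computed per connected component (this toy graph is connected).
print("average hops:", nx.average_shortest_path_length(G))
```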
{
"docid": "61ad35eaee012d8c1bddcaeee082fa22",
"text": "For realistic simulation it is necessary to thoroughly define and describe light-source characteristics¿especially the light-source geometry and the luminous intensity distribution.",
"title": ""
},
{
"docid": "b6bf6c87040bc4996315fee62acb911b",
"text": "The influence of the sleep patterns of 2,259 students, aged 11 to 14 years, on trajectories of depressive symptoms, self-esteem, and grades was longitudinally examined using latent growth cross-domain models. Consistent with previous research, sleep decreased over time. Students who obtained less sleep in sixth grade exhibited lower initial self-esteem and grades and higher initial levels of depressive symptoms. Similarly, students who obtained less sleep over time reported heightened levels of depressive symptoms and decreased self-esteem. Sex of the student played a strong role as a predictor of hours of sleep, self-esteem, and grades. This study underscores the role of sleep in predicting adolescents' psychosocial outcomes and highlights the importance of using idiographic methodologies in the study of developmental processes.",
"title": ""
},
{
"docid": "80563d90bfdccd97d9da0f7276468a43",
"text": "An essential aspect of knowing language is knowing the words of that language. This knowledge is usually thought to reside in the mental lexicon, a kind of dictionary that contains information regarding a word's meaning, pronunciation, syntactic characteristics, and so on. In this article, a very different view is presented. In this view, words are understood as stimuli that operate directly on mental states. The phonological, syntactic and semantic properties of a word are revealed by the effects it has on those states.",
"title": ""
}
] |
scidocsrr
|
04a5199bba708f0ac027cc8d96902ffa
|
3D LIDAR-Camera Extrinsic Calibration Using an Arbitrary Trihedron
|
[
{
"docid": "cc4c58f1bd6e5eb49044353b2ecfb317",
"text": "Today, visual recognition systems are still rarely employed in robotics applications. Perhaps one of the main reasons for this is the lack of demanding benchmarks that mimic such scenarios. In this paper, we take advantage of our autonomous driving platform to develop novel challenging benchmarks for the tasks of stereo, optical flow, visual odometry/SLAM and 3D object detection. Our recording platform is equipped with four high resolution video cameras, a Velodyne laser scanner and a state-of-the-art localization system. Our benchmarks comprise 389 stereo and optical flow image pairs, stereo visual odometry sequences of 39.2 km length, and more than 200k 3D object annotations captured in cluttered scenarios (up to 15 cars and 30 pedestrians are visible per image). Results from state-of-the-art algorithms reveal that methods ranking high on established datasets such as Middlebury perform below average when being moved outside the laboratory to the real world. Our goal is to reduce this bias by providing challenging benchmarks with novel difficulties to the computer vision community. Our benchmarks are available online at: www.cvlibs.net/datasets/kitti.",
"title": ""
},
{
"docid": "836402d8099846a6668272aeec9b2c9f",
"text": "This paper addresses the problem of estimating the intrinsic parameters of the 3D Velodyne lidar while at the same time computing its extrinsic calibration with respect to a rigidly connected camera. Existing approaches to solve this nonlinear estimation problem are based on iterative minimization of nonlinear cost functions. In such cases, the accuracy of the resulting solution hinges on the availability of a precise initial estimate, which is often not available. In order to alleviate this issue, we divide the problem into two least-squares sub-problems, and analytically solve each one to determine a precise initial estimate for the unknown parameters. We further increase the accuracy of these initial estimates by iteratively minimizing a batch nonlinear least-squares cost function. In addition, we provide the minimal observability conditions, under which, it is possible to accurately estimate the unknown parameters. Experimental results consisting of photorealistic 3D reconstruction of indoor and outdoor scenes, as well as standard metrics of the calibration errors, are used to assess the validity of our approach.",
"title": ""
}
] |
[
{
"docid": "d1c33990b7642ea51a8a568fa348d286",
"text": "Connectionist temporal classification CTC has recently shown improved performance and efficiency in automatic speech recognition. One popular decoding implementation is to use a CTC model to predict the phone posteriors at each frame and then perform Viterbi beam search on a modified WFST network. This is still within the traditional frame synchronous decoding framework. In this paper, the peaky posterior property of CTC is carefully investigated and it is found that ignoring blank frames will not introduce additional search errors. Based on this phenomenon, a novel phone synchronous decoding framework is proposed by removing tremendous search redundancy due to blank frames, which results in significant search speed up. The framework naturally leads to an extremely compact phone-level acoustic space representation: CTC lattice. With CTC lattice, efficient and effective modular speech recognition approaches, second pass rescoring for large vocabulary continuous speech recognition LVCSR, and phone-based keyword spotting KWS, are also proposed in this paper. Experiments showed that phone synchronous decoding can achieve 3-4 times search speed up without performance degradation compared to frame synchronous decoding. Modular LVCSR with CTC lattice can achieve further WER improvement. KWS with CTC lattice not only achieved significant equal error rate improvement, but also greatly reduced the KWS model size and increased the search speed.",
"title": ""
},
{
"docid": "07c5f9d76909f47aae5970d82e06e4b5",
"text": "In this paper we present a novel approach to minimally supervised synonym extraction. The approach is based on the word embeddings and aims at presenting a method for synonym extraction that is extensible to various languages. We report experiments with word vectors trained by using both the continuous bag-of-words model (CBoW) and the skip-gram model (SG) investigating the effects of different settings with respect to the contextual window size, the number of dimensions and the type of word vectors. We analyze the word categories that are (cosine) similar in the vector space, showing that cosine similarity on its own is a bad indicator to determine if two words are synonymous. In this context, we propose a new measure, relative cosine similarity, for calculating similarity relative to other cosine-similar words in the corpus. We show that calculating similarity relative to other words boosts the precision of the extraction. We also experiment with combining similarity scores from differently-trained vectors and explore the advantages of using a part-of-speech tagger as a way of introducing some light supervision, thus aiding extraction. We perform both intrinsic and extrinsic evaluation on our final system: intrinsic evaluation is carried out manually by two human evaluators and we use the output of our system in a machine translation task for extrinsic evaluation, showing that the extracted synonyms improve the evaluation metric. ©2016 PBML. Distributed under CC BY-NC-ND. Corresp. author: [email protected] Cite as: Artuur Leeuwenberg, Mihaela Vela, Jon Dehdari, Josef van Genabith. A Minimally Supervised Approach for Synonym Extraction with Word Embeddings. The Prague Bulletin of Mathematical Linguistics No. 105, 2016, pp. 111–142. doi: 10.1515/pralin-2016-0006.",
"title": ""
},
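The passage above introduces relative cosine similarity, i.e. a cosine score normalized against a word's other near neighbours. The sketch below is one plausible reading of that idea rather than the authors' exact definition; the choice of top-n = 10 and the direction of normalization are assumptions.

```python
import numpy as np

def cosine(u, v):
    """Plain cosine similarity between two 1-D vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def relative_cosine_similarity(w1, w2, vectors, topn=10):
    """Cosine of (w1, w2) divided by the summed cosines of w1's topn
    nearest neighbours, so the score is high only when w2 stands out
    among everything that is close to w1.

    vectors: dict mapping word -> 1-D numpy array (e.g. word2vec rows).
    """
    sims = {w: cosine(vectors[w1], vec) for w, vec in vectors.items() if w != w1}
    top = sorted(sims.values(), reverse=True)[:topn]
    return sims[w2] / sum(top)
```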
{
"docid": "2efe5c0228e6325cdbb8e0922c19924f",
"text": "Patient interactions with health care providers result in entries to electronic health records (EHRs). EHRs were built for clinical and billing purposes but contain many data points about an individual. Mining these records provides opportunities to extract electronic phenotypes that can be paired with genetic data to identify genes underlying common human diseases. This task remains challenging: high quality phenotyping is costly and requires physician review; many fields in the records are sparsely filled; and our definitions of diseases are continuing to improve over time. Here we develop and evaluate a semi-supervised learning method for EHR phenotype extraction using denoising autoencoders for phenotype stratification. By combining denoising autoencoders with random forests we find classification improvements across simulation models, particularly in cases where only a small number of patients have high quality phenotype. This situation is commonly encountered in research with EHRs. Denoising autoencoders perform dimensionality reduction allowing visualization and clustering for the discovery of new subtypes of disease. This method represents a promising approach to clarify disease subtypes and improve genotype-phenotype association studies that leverage EHRs.",
"title": ""
},
{
"docid": "d7f743ddff9863b046ab91304b37a667",
"text": "In sensor networks, passive localization can be performed by exploiting the received signals of unknown emitters. In this paper, the Time of Arrival (TOA) measurements are investigated. Often, the unknown time of emission is eliminated by calculating the difference between two TOA measurements where Time Difference of Arrival (TDOA) measurements are obtained. In TOA processing, additionally, the unknown time of emission is to be estimated. Therefore, the target state is extended by the unknown time of emission. A comparison is performed investigating the attainable accuracies for localization based on TDOA and TOA measurements given by the Cramér-Rao Lower Bound (CRLB). Using the Maximum Likelihood estimator, some characteristic features of the cost functions are investigated indicating a better performance of the TOA approach. But counterintuitive, Monte Carlo simulations do not support this indication, but show the comparability of TDOA and TOA localization.",
"title": ""
},
{
"docid": "30e229f91456c3d7eb108032b3470b41",
"text": "Software as a service (SaaS) is a rapidly growing model of software licensing. In contrast to traditional software where users buy a perpetual-use license, SaaS users buy a subscription from the publisher. Whereas traditional software publishers typically release new product features as part of new versions of software once in a few years, publishers using SaaS have an incentive to release new features as soon as they are completed. We show that this property of the SaaS licensing model leads to greater investment in product development under most conditions. This increased investment leads to higher software quality in equilibrium under SaaS compared to perpetual licensing. The software publisher earns greater profits under SaaS while social welfare is also higher",
"title": ""
},
{
"docid": "e7f7bc87930407c02b082fee74a8e1a5",
"text": "We thoroughly and critically review studies reporting the real (refractive index) and imaginary (absorption index) parts of the complex refractive index of silica glass over the spectral range from 30 nm to 1000 microm. The general features of the optical constants over the electromagnetic spectrum are relatively consistent throughout the literature. In particular, silica glass is effectively opaque for wavelengths shorter than 200 nm and larger than 3.5-4.0 microm. Strong absorption bands are observed (i) below 160 nm due to the interaction with electrons, absorption by impurities, and the presence of OH groups and point defects; (ii) at aproximately 2.73-2.85, 3.5, and 4.3 microm also caused by OH groups; and (iii) at aproximately 9-9.5, 12.5, and 21-23 microm due to Si-O-Si resonance modes of vibration. However, the actual values of the refractive and absorption indices can vary significantly due to the glass manufacturing process, crystallinity, wavelength, and temperature and to the presence of impurities, point defects, inclusions, and bubbles, as well as to the experimental uncertainties and approximations in the retrieval methods. Moreover, new formulas providing comprehensive approximations of the optical properties of silica glass are proposed between 7 and 50 microm. These formulas are consistent with experimental data and substantially extend the spectral range of 0.21-7 microm covered by existing formulas and can be used in various engineering applications.",
"title": ""
},
{
"docid": "23186cb9f2869e5ba09700b2b9f07c0f",
"text": "Facility Layout Problem (FLP) is logic based combinatorial optimization problem. It is a meta-heuristic solution approach that gained significant attention to obtained optimal facility layout. This paper examines the convergence analysis by changing the crossover and mutation probability in an optimal facility layout. This algorithm is based on appropriate techniques that include multipoint swapped crossover and swap mutation operators. Two test cases were used for the implementations of the said technique and evaluate the robustness of the proposed method compared to other approaches in the literature. Keywords—facility layout problem, genetic algorithm, material handling cost, meta-heuristics",
"title": ""
},
{
"docid": "e381b56801a0cb8a2dc0e9bc3346f68f",
"text": "We have designed and presented a wireless sensor network monitoring and control system for aquaculture. The system can detect and control water quality parameters of temperature, dissolved oxygen content, pH value, and water level in real-time. The sensor nodes collect the water quality parameters and transmit them to the base station host computer through ZigBee wireless communication standard. The host computer is used for data analysis, processing and presentation using LabVIEW software platform. The water quality parameters will be sent to owners through short messages from the base station via the Global System for Mobile (GSM) module for notification. The experimental evaluation of the network performance metrics of quality of communication link, battery performance and data aggregation was presented. The experimental results show that the system has great prospect and can be used to operate in real world environment for optimum control of aquaculture",
"title": ""
},
{
"docid": "4afa66aeaf18fae2b29a0d4c855746dd",
"text": "In this work, we propose a technique that utilizes a fully convolutional network (FCN) to localize image splicing attacks. We first evaluated a single-task FCN (SFCN) trained only on the surface label. Although the SFCN is shown to provide superior performance over existing methods, it still provides a coarse localization output in certain cases. Therefore, we propose the use of a multi-task FCN (MFCN) that utilizes two output branches for multi-task learning. One branch is used to learn the surface label, while the other branch is used to learn the edge or boundary of the spliced region. We trained the networks using the CASIA v2.0 dataset, and tested the trained models on the CASIA v1.0, Columbia Uncompressed, Carvalho, and the DARPA/NIST Nimble Challenge 2016 SCI datasets. Experiments show that the SFCN and MFCN outperform existing splicing localization algorithms, and that the MFCN can achieve finer localization than the SFCN.",
"title": ""
},
{
"docid": "680523e1eaa7abb7556655313875d353",
"text": "Our aim in this paper is to clarify the range of motivations that have inspired the development of computer programs for the composition of music. We consider this to be important since different methodologies are appropriate for different motivations and goals. We argue that a widespread failure to specify the motivations and goals involved has lead to a methodological malaise in music related research. A brief consideration of some of the earliest attempts to produce computational systems for the composition of music leads us to identify four activities involving the development of computer programs which compose music each of which is inspired by different practical or theoretical motivations. These activities are algorithmic composition, the design of compositional tools, the computational modelling of musical styles and the computational modelling of music cognition. We consider these four motivations in turn, illustrating the problems that have arisen from failing to distinguish between them. We propose a terminology that clearly differentiates the activities defined by the four motivations and present methodological suggestions for research in each domain. While it is clearly important for researchers to embrace developments in related disciplines, we argue that research in the four domains will continue to stagnate unless the motivations and aims of research projects are clearly stated and appropriate methodologies are adopted for developing and evaluating systems that compose music.",
"title": ""
},
{
"docid": "304f4e48ac5d5698f559ae504fc825d9",
"text": "How the circadian clock regulates the timing of sleep is poorly understood. Here, we identify a Drosophila mutant, wide awake (wake), that exhibits a marked delay in sleep onset at dusk. Loss of WAKE in a set of arousal-promoting clock neurons, the large ventrolateral neurons (l-LNvs), impairs sleep onset. WAKE levels cycle, peaking near dusk, and the expression of WAKE in l-LNvs is Clock dependent. Strikingly, Clock and cycle mutants also exhibit a profound delay in sleep onset, which can be rescued by restoring WAKE expression in LNvs. WAKE interacts with the GABAA receptor Resistant to Dieldrin (RDL), upregulating its levels and promoting its localization to the plasma membrane. In wake mutant l-LNvs, GABA sensitivity is decreased and excitability is increased at dusk. We propose that WAKE acts as a clock output molecule specifically for sleep, inhibiting LNvs at dusk to promote the transition from wake to sleep.",
"title": ""
},
{
"docid": "05307b60bd185391919ea7c1bf1ce0ec",
"text": "Trace-level reuse is based on the observation that some traces (dynamic sequences of instructions) are frequently repeated during the execution of a program, and in many cases, the instructions that make up such traces have the same source operand values. The execution of such traces will obviously produce the same outcome and thus, their execution can be skipped if the processor records the outcome of previous executions. This paper presents an analysis of the performance potential of trace-level reuse and discusses a preliminary realistic implementation. Like instruction-level reuse, trace-level reuse can improve performance by decreasing resource contention and the latency of some instructions. However, we show that tracelevel reuse is more effective than instruction-level reuse because the former can avoid fetching the instructions of reused traces. This has two important benefits: it reduces the fetch bandwidth requirements, and it increases the effective instruction window size since these instructions do not occupy window entries. Moreover, trace-level reuse can compute all at once the result of a chain of dependent instructions, which may allow the processor to avoid the serialization caused by data dependences and thus, to potentially exceed the dataflow limit.",
"title": ""
},
{
"docid": "6561b240817d9e82d7da51bfd3a58546",
"text": "Vehicle safety is increasingly becoming a concern. Whether the driver is wearing a seatbelt and whether the vehicle is speeding out or not become important indicators of the vehicle safety. However, manually searching, detecting, recording and other work will spend a lot of manpower and time inefficiently. This paper proposes a cascade Adaboost classifier based seatbelt detection system to detect the vehicle windows, to complete Canny edge detection on gradient map of vehicle window images, and to perform the probabilistic Hough transform to extract the straight-lines of seatbelts. The system achieves the goal of seatbelt detection intelligently.",
"title": ""
},
{
"docid": "7c7beabf8bcaa2af706b6c1fd92ee8dd",
"text": "In this paper, two main contributions are presented to manage the power flow between a 11 wind turbine and a solar power system. The first one is to use the fuzzy logic controller as an 12 objective to find the maximum power point tracking, applied to a hybrid wind-solar system, at fixed 13 atmospheric conditions. The second one is to response to real-time control system constraints and 14 to improve the generating system performance. For this, a hardware implementation of the 15 proposed algorithm is performed using the Xilinx system generator. The experimental results show 16 that the suggested system presents high accuracy and acceptable execution time performances. The 17 proposed model and its control strategy offer a proper tool for optimizing the hybrid power system 18 performance which we can use in smart house applications. 19",
"title": ""
},
{
"docid": "ff5d1ace34029619d79342e5fe63e0b7",
"text": "In this paper, Proposes SIW slot antenna backed with a cavity for 57-64 GHz frequency. This frequency is used for wireless communication applications. The proposed antenna is designed by using Rogers substrate with dielectric constant of 2.2, substrate thickness is 0.381 mm and the microstrip feed is used with the input impedance of 50ohms. The structure provides 5.2GHz impedance bandwidth with a range of 57.8 to 64 GHz and matches with VSWR 2:1. The values of reflection coefficient, VSWR, gain, transmission efficiency and radiation efficiency of proposed antenna at 60GHz are −17.32dB, 1.3318, 7.19dBi, 79.5% and 89.5%.",
"title": ""
},
{
"docid": "c74c73965123e09bfbaef3e9793c38e0",
"text": "We propose a one-class neural network (OC-NN) model to detect anomalies in complex data sets. OC-NN combines the ability of deep networks to extract progressively rich representation of data with the one-class objective of creating a tight envelope around normal data. The OC-NN approach breaks new ground for the following crucial reason: data representation in the hidden layer is driven by the OC-NN objective and is thus customized for anomaly detection. This is a departure from other approaches which use a hybrid approach of learning deep features using an autoencoder and then feeding the features into a separate anomaly detection method like one-class SVM (OC-SVM). The hybrid OC-SVM approach is sub-optimal because it is unable to influence representational learning in the hidden layers. A comprehensive set of experiments demonstrate that on complex data sets (like CIFAR and GTSRB), OC-NN performs on par with state-of-the-art methods and outperformed conventional shallow methods in some scenarios.",
"title": ""
},
{
"docid": "8f5ca5819dd28c686da78332add76fb0",
"text": "The emerging Service-Oriented Computing (SOC) paradigm promises to enable businesses and organizations to collaborate in an unprecedented way by means of standard web services. To support rapid and dynamic composition of services in this paradigm, web services that meet requesters' functional requirements must be able to be located and bounded dynamically from a large and constantly changing number of service providers based on their Quality of Service (QoS). In order to enable quality-driven web service selection, we need an open, fair, dynamic and secure framework to evaluate the QoS of a vast number of web services. The fair computation and enforcing of QoS of web services should have minimal overhead but yet able to achieve sufficient trust by both service requesters and providers. In this paper, we presented our open, fair and dynamic QoS computation model for web services selection through implementation of and experimentation with a QoS registry in a hypothetical phone service provisioning market place application.",
"title": ""
},
{
"docid": "e40eb32613ed3077177d61ac14e82413",
"text": "Preamble. Billions of people are using cell phone devices on the planet, essentially in poor posture. The purpose of this study is to assess the forces incrementally seen by the cervical spine as the head is tilted forward, into worsening posture. This data is also necessary for cervical spine surgeons to understand in the reconstruction of the neck.",
"title": ""
},
{
"docid": "ac15d2b4d14873235fe6e4d2dfa84061",
"text": "Despite strong popular conceptions of gender differences in emotionality and striking gender differences in the prevalence of disorders thought to involve emotion dysregulation, the literature on the neural bases of emotion regulation is nearly silent regarding gender differences (Gross, 2007; Ochsner & Gross, in press). The purpose of the present study was to address this gap in the literature. Using functional magnetic resonance imaging, we asked male and female participants to use a cognitive emotion regulation strategy (reappraisal) to down-regulate their emotional responses to negatively valenced pictures. Behaviorally, men and women evidenced comparable decreases in negative emotion experience. Neurally, however, gender differences emerged. Compared with women, men showed (a) lesser increases in prefrontal regions that are associated with reappraisal, (b) greater decreases in the amygdala, which is associated with emotional responding, and (c) lesser engagement of ventral striatal regions, which are associated with reward processing. We consider two non-competing explanations for these differences. First, men may expend less effort when using cognitive regulation, perhaps due to greater use of automatic emotion regulation. Second, women may use positive emotions in the service of reappraising negative emotions to a greater degree. We then consider the implications of gender differences in emotion regulation for understanding gender differences in emotional processing in general, and gender differences in affective disorders.",
"title": ""
},
{
"docid": "4062ef369dce8a6b010282fb362040c4",
"text": "How people in the city perceive their surroundings depends on a variety of dynamic and static context factors such as road traffic, the feeling of safety, urban architecture, etc. Such subjective and context-dependent perceptions can trigger different emotions, which enable additional insights into the spatial and temporal configuration of urban structures. This paper presents the Urban Emotions concept that proposes a human-centred approach for extracting contextual emotional information from human and technical sensors. The methodology proposed in this paper consists of four steps: 1) detecting emotions using wristband sensors, 2) “ground-truthing” these measurements using a People as Sensors location-based service, 3) extracting emotion information from crowdsourced data like Twitter, and 4) correlating the measured and extracted emotions. Finally, the emotion information is mapped and fed back into urban planning for decision support and for evaluating ongoing planning processes.",
"title": ""
}
] |
scidocsrr
|
7379816680472df3d7c1a11f1a457df2
|
Artistic minimal rendering with lines and blocks
|
[
{
"docid": "cfe31ce3a6a23d9148709de6032bd90b",
"text": "I argue that Non-Photorealistic Rendering (NPR) research will play a key role in the scientific understanding of visual art and illustration. NPR can contribute to scientific understanding of two kinds of problems: how do artists create imagery, and how do observers respond to artistic imagery? I sketch out some of the open problems, how NPR can help, and what some possible theories might look like. Additionally, I discuss the thorny problem of how to evaluate NPR research and theories.",
"title": ""
}
] |
[
{
"docid": "d23649c81665bc76134c09b7d84382d0",
"text": "This paper demonstrates the advantages of using controlled mobility in wireless sensor networks (WSNs) for increasing their lifetime, i.e., the period of time the network is able to provide its intended functionalities. More specifically, for WSNs that comprise a large number of statically placed sensor nodes transmitting data to a collection point (the sink), we show that by controlling the sink movements we can obtain remarkable lifetime improvements. In order to determine sink movements, we first define a Mixed Integer Linear Programming (MILP) analytical model whose solution determines those sink routes that maximize network lifetime. Our contribution expands further by defining the first heuristics for controlled sink movements that are fully distributed and localized. Our Greedy Maximum Residual Energy (GMRE) heuristic moves the sink from its current location to a new site as if drawn toward the area where nodes have the highest residual energy. We also introduce a simple distributed mobility scheme (Random Movement or S. Basagni ( ) Department of Electrical and Computer Engineering, Northeastern University e-mail: [email protected] A. Carosi · C. Petrioli Dipartimento di Informatica, Università di Roma “La Sapienza” e-mail: [email protected] C. Petrioli e-mail: [email protected] E. Melachrinoudis · Z. M. Wang Department of Mechanical and Industrial Engineering, Northeastern University e-mail: [email protected] Z. M. Wang e-mail: [email protected] RM) according to which the sink moves uncontrolled and randomly throughout the network. The different mobility schemes are compared through extensive ns2-based simulations in networks with different nodes deployment, data routing protocols, and constraints on the sink movements. In all considered scenarios, we observe that moving the sink always increases network lifetime. In particular, our experiments show that controlling the mobility of the sink leads to remarkable improvements, which are as high as sixfold compared to having the sink statically (and optimally) placed, and as high as twofold compared to uncontrolled mobility.",
"title": ""
},
{
"docid": "f474fd0bce5fa65e79ceb77a17ace260",
"text": "One popular approach to controlling humanoid robots is through inverse kinematics (IK) with stiff joint position tracking. On the other hand, inverse dynamics (ID) based approaches have gained increasing acceptance by providing compliant motions and robustness to external perturbations. However, the performance of such methods is heavily dependent on high quality dynamic models, which are often very difficult to produce for a physical robot. IK approaches only require kinematic models, which are much easier to generate in practice. In this paper, we supplement our previous work with ID-based controllers by adding IK, which helps compensate for modeling errors. The proposed full body controller is applied to three tasks in the DARPA Robotics Challenge (DRC) Trials in Dec. 2013.",
"title": ""
},
{
"docid": "9093cff51237b4c601f604ad6df85aec",
"text": "Motivation\nReconstructing the full-length expressed transcripts ( a.k.a. the transcript assembly problem) from the short sequencing reads produced by RNA-seq protocol plays a central role in identifying novel genes and transcripts as well as in studying gene expressions and gene functions. A crucial step in transcript assembly is to accurately determine the splicing junctions and boundaries of the expressed transcripts from the reads alignment. In contrast to the splicing junctions that can be efficiently detected from spliced reads, the problem of identifying boundaries remains open and challenging, due to the fact that the signal related to boundaries is noisy and weak.\n\n\nResults\nWe present DeepBound, an effective approach to identify boundaries of expressed transcripts from RNA-seq reads alignment. In its core DeepBound employs deep convolutional neural fields to learn the hidden distributions and patterns of boundaries. To accurately model the transition probabilities and to solve the label-imbalance problem, we novelly incorporate the AUC (area under the curve) score into the optimizing objective function. To address the issue that deep probabilistic graphical models requires large number of labeled training samples, we propose to use simulated RNA-seq datasets to train our model. Through extensive experimental studies on both simulation datasets of two species and biological datasets, we show that DeepBound consistently and significantly outperforms the two existing methods.\n\n\nAvailability and implementation\nDeepBound is freely available at https://github.com/realbigws/DeepBound .\n\n\nContact\[email protected] or [email protected].",
"title": ""
},
{
"docid": "771611dc99e22b054b936fce49aea7fc",
"text": "Count-based exploration algorithms are known to perform near-optimally when used in conjunction with tabular reinforcement learning (RL) methods for solving small discrete Markov decision processes (MDPs). It is generally thought that count-based methods cannot be applied in high-dimensional state spaces, since most states will only occur once. Recent deep RL exploration strategies are able to deal with high-dimensional continuous state spaces through complex heuristics, often relying on optimism in the face of uncertainty or intrinsic motivation. In this work, we describe a surprising finding: a simple generalization of the classic count-based approach can reach near state-of-the-art performance on various highdimensional and/or continuous deep RL benchmarks. States are mapped to hash codes, which allows to count their occurrences with a hash table. These counts are then used to compute a reward bonus according to the classic count-based exploration theory. We find that simple hash functions can achieve surprisingly good results on many challenging tasks. Furthermore, we show that a domaindependent learned hash code may further improve these results. Detailed analysis reveals important aspects of a good hash function: 1) having appropriate granularity and 2) encoding information relevant to solving the MDP. This exploration strategy achieves near state-of-the-art performance on both continuous control tasks and Atari 2600 games, hence providing a simple yet powerful baseline for solving MDPs that require considerable exploration.",
"title": ""
},
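The count-based exploration passage above maps states to hash codes and turns visit counts into a reward bonus. A small sketch of that recipe, using SimHash-style random projections, is shown below; the code length, the bonus coefficient beta, and the exact bonus form beta / sqrt(n) are illustrative assumptions rather than the paper's tuned settings.

```python
import numpy as np
from collections import defaultdict

class HashCountBonus:
    """Count-based exploration bonus over hashed states (sketch)."""

    def __init__(self, state_dim, code_bits=32, beta=0.01, seed=0):
        rng = np.random.default_rng(seed)
        # Random projection used to produce a SimHash-style binary code.
        self.projection = rng.standard_normal((code_bits, state_dim))
        self.beta = beta
        self.counts = defaultdict(int)   # hash code -> visit count

    def bonus(self, state):
        """Update the count for this state's code and return the exploration bonus."""
        code = tuple((self.projection @ np.asarray(state, dtype=float) > 0).astype(int))
        self.counts[code] += 1
        return self.beta / np.sqrt(self.counts[code])
```

The returned bonus would simply be added to the environment reward at each step.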
{
"docid": "5cd3809ab7ed083de14bb622f12373fe",
"text": "The proliferation of online information sources has led to an increased use of wrappers for extracting data from Web sources. While most of the previous research has focused on quick and efficient generation of wrappers, the development of tools for wrapper maintenance has received less attention. This is an important research problem because Web sources often change in ways that prevent the wrappers from extracting data correctly. We present an efficient algorithm that learns structural information about data from positive examples alone. We describe how this information can be used for two wrapper maintenance applications: wrapper verification and reinduction. The wrapper verification system detects when a wrapper is not extracting correct data, usually because the Web source has changed its format. The reinduction algorithm automatically recovers from changes in the Web source by identifying data on Web pages so that a new wrapper may be generated for this source. To validate our approach, we monitored 27 wrappers over a period of a year. The verification algorithm correctly discovered 35 of the 37 wrapper changes, and made 16 mistakes, resulting in precision of 0.73 and recall of 0.95. We validated the reinduction algorithm on ten Web sources. We were able to successfully reinduce the wrappers, obtaining precision and recall values of 0.90 and 0.80 on the data extraction task.",
"title": ""
},
{
"docid": "0060fbebb60c7f67d8750826262d7135",
"text": "This paper introduces a web image search reranking approach that explores multiple modalities in a graph-based learning scheme. Different from the conventional methods that usually adopt a single modality or integrate multiple modalities into a long feature vector, our approach can effectively integrate the learning of relevance scores, weights of modalities, and the distance metric and its scaling for each modality into a unified scheme. In this way, the effects of different modalities can be adaptively modulated and better reranking performance can be achieved. We conduct experiments on a large dataset that contains more than 1000 queries and 1 million images to evaluate our approach. Experimental results demonstrate that the proposed reranking approach is more robust than using each individual modality, and it also performs better than many existing methods.",
"title": ""
},
{
"docid": "6a2b3389ad8de2a0e9a50d4324869c2a",
"text": "Many web applications provide a fully automatic machine translation service, and users can easily access and understand the information they are interested in. However, the services still have inaccurate results when translating technical terms. Therefore, we suggest a new method that collects reliable translations of technical terms between Korean and English. To collect the pairs, we utilize the metadata of Korean scientific papers and make a new statistical model to adapt the metadata characteristics appropriately. The collected Korean-English pairs are evaluated in terms of reliability and compared with the results of Google translator. Through evaluation and comparison, we confirm that this research can produce highly reliable data and improve the translation quality of technical terms.",
"title": ""
},
{
"docid": "06cc255e124702878e2106bf0e8eb47c",
"text": "Agent technology has been recognized as a promising paradigm for next generation manufacturing systems. Researchers have attempted to apply agent technology to manufacturing enterprise integration, enterprise collaboration (including supply chain management and virtual enterprises), manufacturing process planning and scheduling, shop floor control, and to holonic manufacturing as an implementation methodology. This paper provides an update review on the recent achievements in these areas, and discusses some key issues in implementing agent-based manufacturing systems such as agent encapsulation, agent organization, agent coordination and negotiation, system dynamics, learning, optimization, security and privacy, tools and standards. 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "6b7f2b7e528ee530822ff5bbb371645d",
"text": "Automatically generating video captions with natural language remains a challenge for both the field of nature language processing and computer vision. Recurrent Neural Networks (RNNs), which models sequence dynamics, has proved to be effective in visual interpretation. Based on a recent sequence to sequence model for video captioning, which is designed to learn the temporal structure of the sequence of frames and the sequence model of the generated sentences with RNNs, we investigate how pretrained language model and attentional mechanism can aid the generation of natural language descriptions of videos. We evaluate our improvements on the Microsoft Video Description Corpus (MSVD) dataset, which is a standard dataset for this task. The results demonstrate that our approach outperforms original sequence to sequence model and achieves state-of-art baselines. We further run our model one a much harder Montreal Video Annotation Dataset (M-VAD), where the model also shows promising results.",
"title": ""
},
{
"docid": "7b7c418cefcd571b03e5c0a002a5e923",
"text": "A loop antenna having a gap has been investigated in the presence of a ground plane. The antenna configuration is optimized for the CP radiation, using the method of moments. It is found that, as the loop height above the ground plane is reduced, the optimized gap width approaches zero. Further antenna height reduction is found to be possible for an antenna whose wire radius is increased. On the basis of these results, we design an open-loop array antenna using a microstrip comb line as the feed network. It is demonstrated that an array antenna composed of eight open loop elements can radiate a CP wave with an axial ratio of 0.1 dB. The bandwidth for a 3-dB axial-ratio criterion is 4%, where the gain is almost constant at 15 dBi.",
"title": ""
},
{
"docid": "9627fdd88378559f0e2704bd6fef36e7",
"text": "Traditionally, a full-mouth rehabilitation based on full-crown coverage has been the recommended treatment for patients affected by severe dental erosion. Nowadays, thanks to improved adhesive techniques, the indications for crowns have decreased and a more conservative approach may be proposed. Even though adhesive treatments simplify both the clinical and laboratory procedures, restoring such patients still remains a challenge due to the great amount of tooth destruction. To facilitate the clinician's task during the planning and execution of a full-mouth adhesive rehabilitation, an innovative concept has been developed: the three-step technique. Three laboratory steps are alternated with three clinical steps, allowing the clinician and the laboratory technician to constantly interact to achieve the most predictable esthetic and functional outcome. During the first step, an esthetic evaluation is performed to establish the position of the plane of occlusion. In the second step, the patient's posterior quadrants are restored at an increased vertical dimension. Finally, the third step reestablishes the anterior guidance. Using the three-step technique, the clinician can transform a full-mouth rehabilitation into a rehabilitation for individual quadrants. This article illustrates only the first step in detail, explaining all the clinical parameters that should be analyzed before initiating treatment.",
"title": ""
},
{
"docid": "4d2a87405ed84e8108cd20c855918102",
"text": "When testing software artifacts that have several dependencies, one has the possibility of either instantiating these dependencies or using mock objects to simulate the dependencies’ expected behavior. Even though recent quantitative studies showed that mock objects are widely used both in open source and proprietary projects, scientific knowledge is still lacking on how and why practitioners use mocks. An empirical understanding of the situations where developers have (and have not) been applying mocks, as well as the impact of such decisions in terms of coupling and software evolution can be used to help practitioners adapt and improve their future usage. To this aim, we study the usage of mock objects in three OSS projects and one industrial system. More specifically, we manually analyze more than 2,000 mock usages. We then discuss our findings with developers from these systems, and identify practices, rationales, and challenges. These results are supported by a structured survey with more than 100 professionals. Finally, we manually analyze how the usage of mock objects in test code evolve over time as well as the impact of their usage on the coupling between test and production code. Our study reveals that the usage of mocks is highly dependent on the responsibility and the architectural concern of the class. Developers report to frequently mock dependencies that make testing difficult (e.g., infrastructure-related dependencies) and to not mock classes that encapsulate domain concepts/rules of the system. Among the key challenges, developers report that maintaining the behavior of the mock compatible with the behavior of original class is hard and that mocking increases the coupling between the test and the production code. Their perceptions are confirmed by our data, as we observed that mocks mostly exist since the very first version of the test class, and that they tend to stay there for its whole lifetime, and that changes in production code often force the test code to also change.",
"title": ""
},
{
"docid": "e2060b183968f81342df4f636a141a3b",
"text": "This paper presents automatic parallel parking for a passenger vehicle, with highlights on a path-planning method and on experimental results. The path-planning method consists of two parts. First, the kinematic model of the vehicle, with corresponding geometry, is used to create a path to park the vehicle in one or more maneuvers if the spot is very narrow. This path is constituted of circle arcs. Second, this path is transformed into a continuous-curvature path using clothoid curves. To execute the generated path, control inputs for steering angle and longitudinal velocity depending on the traveled distance are generated. Therefore, the traveled distance and the vehicle pose during a parking maneuver are estimated. Finally, the parking performance is tested on a prototype vehicle.",
"title": ""
},
{
"docid": "3f2aa3cde019d56240efba61d52592a4",
"text": "Drivers like global competition, advances in technology, and new attractive market opportunities foster a process of servitization and thus the search for innovative service business models. To facilitate this process, different methods and tools for the development of new business models have emerged. Nevertheless, business model approaches are missing that enable the representation of cocreation as one of the most important service-characteristics. Rooted in a cumulative research design that seeks to advance extant business model representations, this goal is to be closed by the Service Business Model Canvas (SBMC). This contribution comprises the application of thinking-aloud protocols for the formative evaluation of the SBMC. With help of industry experts and academics with experience in the service sector and business models, the usability is tested and implications for its further development derived. Furthermore, this study provides empirically based insights for the design of service business model representation that can facilitate the development of future business models.",
"title": ""
},
{
"docid": "736bf637db43f67775c8e7b934f12602",
"text": "With the fast growing interest in deep learning, various applications and machine learning tasks are emerged in recent years. Video captioning is especially gaining a lot of attention from both computer vision and natural language processing fields. Generating captions is usually performed by jointly learning of different types of data modalities that share common themes in the video. Learning with the joining representations of different modalities is very challenging due to the inherent heterogeneity resided in the mixed information of visual scenes, speech dialogs, music and sounds, and etc. Consequently, it is hard to evaluate the quality of video captioning results. In this paper, we introduce well-known metrics and datasets for evaluation of video captioning. We compare the the existing metrics and datasets to derive a new research proposal for the evaluation of video descriptions.",
"title": ""
},
{
"docid": "1e5956b0d9d053cd20aad8b53730c969",
"text": "The cloud is migrating to the edge of the network, where routers themselves may become the virtualisation infrastructure, in an evolution labelled as \"the fog\". However, many other complementary technologies are reaching a high level of maturity. Their interplay may dramatically shift the information and communication technology landscape in the following years, bringing separate technologies into a common ground. This paper offers a comprehensive definition of the fog, comprehending technologies as diverse as cloud, sensor networks, peer-to-peer networks, network virtualisation functions or configuration management techniques. We highlight the main challenges faced by this potentially breakthrough technology amalgamation.",
"title": ""
},
{
"docid": "acb41ecca590ed8bc53b7af46a280daf",
"text": "We consider the problem of state estimation for a dynamic system driven by unobserved, correlated inputs. We model these inputs via an uncertain set of temporally correlated dynamic models, where this uncertainty includes the number of modes, their associated statistics, and the rate of mode transitions. The dynamic system is formulated via two interacting graphs: a hidden Markov model (HMM) and a linear-Gaussian state space model. The HMM's state space indexes system modes, while its outputs are the unobserved inputs to the linear dynamical system. This Markovian structure accounts for temporal persistence of input regimes, but avoids rigid assumptions about their detailed dynamics. Via a hierarchical Dirichlet process (HDP) prior, the complexity of our infinite state space robustly adapts to new observations. We present a learning algorithm and computational results that demonstrate the utility of the HDP for tracking, and show that it efficiently learns typical dynamics from noisy data.",
"title": ""
},
{
"docid": "6b04721c0fc7135ddd0fdf76a9cfdd79",
"text": "Functional magnetic resonance imaging (fMRI) was used to compare brain activity during the retrieval of coarse- and fine-grained spatial details and episodic details associated with a familiar environment. Long-time Toronto residents compared pairs of landmarks based on their absolute geographic locations (requiring either coarse or fine discriminations) or based on previous visits to those landmarks (requiring episodic details). An ROI analysis of the hippocampus showed that all three conditions activated the hippocampus bilaterally. Fine-grained spatial judgments recruited an additional region of the right posterior hippocampus, while episodic judgments recruited an additional region of the right anterior hippocampus, and a more extensive region along the length of the left hippocampus. To examine whole-brain patterns of activity, Partial Least Squares (PLS) analysis was used to identify sets of brain regions whose activity covaried with the three conditions. All three comparison judgments recruited the default mode network including the posterior cingulate/retrosplenial cortex, middle frontal gyrus, hippocampus, and precuneus. Fine-grained spatial judgments also recruited additional regions of the precuneus, parahippocampal cortex and the supramarginal gyrus. Episodic judgments recruited the posterior cingulate and medial frontal lobes as well as the angular gyrus. These results are discussed in terms of their implications for theories of hippocampal function and spatial and episodic memory.",
"title": ""
},
{
"docid": "5fe036906302ab4131c7f9afc662df3f",
"text": "Plant peptide hormones play an important role in regulating plant developmental programs via cell-to-cell communication in a non-cell autonomous manner. To characterize the biological relevance of C-TERMINALLY ENCODED PEPTIDE (CEP) genes in rice, we performed a genome-wide search against public databases using a bioinformatics approach and identified six additional CEP members. Expression analysis revealed a spatial-temporal pattern of OsCEP6.1 gene in different tissues and at different developmental stages of panicle. Interestingly, the expression level of the OsCEP6.1 was also significantly up-regulated by exogenous cytokinin. Application of a chemically synthesized 15-amino acid OsCEP6.1 peptide showed that OsCEP6.1 had a negative role in regulating root and seedling growth, which was further confirmed by transgenic lines. Furthermore, the constitutive expression of OsCEP6.1 was sufficient to lead to panicle architecture and grain size variations. Scanning electron microscopy analysis revealed that the phenotypic variation of OsCEP6.1 overexpression lines resulted from decreased cell size but not reduced cell number. Moreover, starch accumulation was not significantly affected. Taken together, these data suggest that the OsCEP6.1 peptide might be involved in regulating the development of panicles and grains in rice.",
"title": ""
},
{
"docid": "010926d088cf32ba3fafd8b4c4c0dedf",
"text": "The number and the size of spatial databases, e.g. for geomarketing, traffic control or environmental studies, are rapidly growing which results in an increasing need for spatial data mining. In this paper, we present new algorithms for spatial characterization and spatial trend analysis. For spatial characterization it is important that class membership of a database object is not only determined by its non-spatial attributes but also by the attributes of objects in its neighborhood. In spatial trend analysis, patterns of change of some non-spatial attributes in the neighborhood of a database object are determined. We present several algorithms for these tasks. These algorithms were implemented within a general framework for spatial data mining providing a small set of database primitives on top of a commercial spatial database management system. A performance evaluation using a real geographic database demonstrates the effectiveness of the proposed algorithms. Furthermore, we show how the algorithms can be combined to discover even more interesting spatial knowledge.",
"title": ""
}
] |
scidocsrr
|
0a8ffc3e525a9e15863c7e0d84c7a2d0
|
SPECTRAL BASIS NEURAL NETWORKS FOR REAL-TIME TRAVEL TIME FORECASTING
|
[
{
"docid": "727a97b993098aa1386e5bfb11a99d4b",
"text": "Inevitably, reading is one of the requirements to be undergone. To improve the performance and quality, someone needs to have something new every day. It will suggest you to have more inspirations, then. However, the needs of inspirations will make you searching for some sources. Even from the other people experience, internet, and many books. Books and internet are the recommended media to help you improving your quality and performance.",
"title": ""
},
{
"docid": "8b1b0ee79538a1f445636b0798a0c7ca",
"text": "Much of the current activity in the area of intelligent vehicle-highway systems (IVHS) focuses on one simple objective: to collect more data. Clearly, improvements in sensor technology and communication systems will allow transportation agencies to more closely monitor the condition of the surface transportation system. However, monitoring alone cannot improve the safety or efficiency of the system. It is imperative that surveillance data be used to manage the system in a proactive rather than a reactive manner. 'Proactive traffic management will require the ability to predict traffic conditions. Previous predictive modeling approaches can be grouped into three categories: (a) historical, data-based algorithms; (b) time-series models; and (c) simulations. A relatively new mathematical model, the neural network, offers an attractive alternative because neural networks can model undefined, complex nonlinear surfaces. In a comparison of a backpropagation neural network model with the more traditional approaches of an historical, data-based algorithm and a time-series model, the backpropagation model· was clearly superior, although all three models did an adequate job of predicting future traffic volumes. The backpropagation model was more responsive to dynamic conditions than the historical, data-based algorithm, and it did not experience the lag and overprediction characteristics of the time-series model. Given these advantages and the backpropagation model's ability to run in a parallel computing environment, it appears that such neural network prediction models hold considerable potential for use in real-time IVHS applications.",
"title": ""
}
] |
[
{
"docid": "b01b7d382f534812f07faaaa1442b3f9",
"text": "In this paper, we first establish new relationships in matrix forms among discrete Fourier transform (DFT), generalized DFT (GDFT), and various types of discrete cosine transform (DCT) and discrete sine transform (DST) matrices. Two new independent tridiagonal commuting matrices for each of DCT and DST matrices of types I, IV, V, and VIII are then derived from the existing commuting matrices of DFT and GDFT. With these new commuting matrices, the orthonormal sets of Hermite-like eigenvectors for DCT and DST matrices can be determined and the discrete fractional cosine transform (DFRCT) and the discrete fractional sine transform (DFRST) are defined. The relationships among the discrete fractional Fourier transform (DFRFT), fractional GDFT, and various types of DFRCT and DFRST are developed to reduce computations for DFRFT and fractional GDFT.",
"title": ""
},
{
"docid": "d60fb42ca7082289c907c0e2e2c343fc",
"text": "As mentioned in the paper, the direct optimization of group assignment variables with reduced gradients yields faster convergence than optimization via softmax reparametrization. Figure 1 shows the distribution plots, which are provided by TensorFlow, of class-to-group assignments using two methods. Despite starting with lower variance, when the distribution of group assignment variables diverged to",
"title": ""
},
{
"docid": "7380419cc9c5eac99e8d46e73df78285",
"text": "This paper discusses the classification of books purely based on cover image and title, without prior knowledge or context of author and origin. Several methods were implemented to assess the ability to distinguish books based on only these two characteristics. First we used a color-based distribution approach. Then we implemented transfer learning with convolutional neural networks on the cover image along with natural language processing on the title text. We found that image and text modalities yielded similar accuracy which indicate that we have reached a certain threshold in distinguishing between the genres that we have defined. This was confirmed by the accuracy being quite close to the human oracle accuracy.",
"title": ""
},
{
"docid": "793d41551a918a113f52481ff3df087e",
"text": "In this paper, we propose a novel deep captioning framework called Attention-based multimodal recurrent neural network with Visual Concept Transfer Mechanism (A-VCTM). There are three advantages of the proposed A-VCTM. (1) A multimodal layer is used to integrate the visual representation and context representation together, building a bridge that connects context information with visual information directly. (2) An attention mechanism is introduced to lead the model to focus on the regions corresponding to the next word to be generated (3) We propose a visual concept transfer mechanism to generate novel visual concepts and enrich the description sentences. Qualitative and quantitative results on two standard benchmarks, MSCOCO and Flickr30K show the effectiveness and practicability of the proposed A-VCTM framework.",
"title": ""
},
{
"docid": "8c0d117602ecadee24215f5529e527c6",
"text": "We present the first open-set language identification experiments using one-class classification models. We first highlight the shortcomings of traditional feature extraction methods and propose a hashing-based feature vectorization approach as a solution. Using a dataset of 10 languages from different writing systems, we train a One-Class Support Vector Machine using only a monolingual corpus for each language. Each model is evaluated against a test set of data from all 10 languages and we achieve an average F-score of 0.99, demonstrating the effectiveness of this approach for open-set language identification.",
"title": ""
},
{
"docid": "478aa46b9dafbc111c1ff2cdb03a5a77",
"text": "This paper presents results from recent work using structured light laser profile imaging to create high resolution bathymetric maps of underwater archaeological sites. Documenting the texture and structure of submerged sites is a difficult task and many applicable acoustic and photographic mapping techniques have recently emerged. This effort was completed to evaluate laser profile imaging in comparison to stereo imaging and high frequency multibeam mapping. A ROV mounted camera and inclined 532 nm sheet laser were used to create profiles of the bottom that were then merged into maps using platform navigation data. These initial results show very promising resolution in comparison to multibeam and stereo reconstructions, particularly in low contrast scenes. At the test sites shown here there were no significant complications related to scattering or attenuation of the laser sheet by the water. The resulting terrain was gridded at 0.25 cm and shows overall centimeter level definition. The largest source of error was related to the calibration of the laser and camera geometry. Results from three small areas show the highest resolution 3D models of a submerged archaeological site to date and demonstrate that laser imaging will be a viable method for accurate three dimensional site mapping and documentation.",
"title": ""
},
{
"docid": "2876086e4431e8607d5146f14f0c29dc",
"text": "Vascular ultrasonography has an important role in the diagnosis and management of venous disease. The venous system, however, is more complex and variable compared to the arterial system due to its frequent anatomical variations. This often becomes quite challenging for sonographers. This paper discusses the anatomy of the long saphenous vein and its anatomical variations accompanied by sonograms and illustrations.",
"title": ""
},
{
"docid": "d362b36e0c971c43856a07b7af9055f3",
"text": "s (New York: ACM), pp. 1617 – 20. MASLOW, A.H., 1954,Motivation and personality (New York: Harper). MCDONAGH, D., HEKKERT, P., VAN ERP, J. and GYI, D. (Eds), 2003, Design and Emotion: The Experience of Everyday Things (London: Taylor & Francis). MILLARD, N., HOLE, L. and CROWLE, S., 1999, Smiling through: motivation at the user interface. In Proceedings of the HCI International’99, Volume 2 (pp. 824 – 8) (Mahwah, NJ, London: Lawrence Erlbaum Associates). NORMAN, D., 2004a, Emotional design: Why we love (or hate) everyday things (New York: Basic Books). NORMAN, D., 2004b, Introduction to this special section on beauty, goodness, and usability. Human Computer Interaction, 19, pp. 311 – 18. OVERBEEKE, C.J., DJAJADININGRAT, J.P., HUMMELS, C.C.M. and WENSVEEN, S.A.G., 2002, Beauty in Usability: Forget about ease of use! In Pleasure with products: Beyond usability, W. Green and P. Jordan (Eds), pp. 9 – 18 (London: Taylor & Francis). 96 M. Hassenzahl and N. Tractinsky D ow nl oa de d by [ M as se y U ni ve rs ity L ib ra ry ] at 2 1: 34 2 3 Ju ly 2 01 1 PICARD, R., 1997, Affective computing (Cambridge, MA: MIT Press). PICARD, R. and KLEIN, J., 2002, Computers that recognise and respond to user emotion: theoretical and practical implications. Interacting with Computers, 14, pp. 141 – 69. POSTREL, V., 2002, The substance of style (New York: Harper Collins). SELIGMAN, M.E.P. and CSIKSZENTMIHALYI, M., 2000, Positive Psychology: An Introduction. American Psychologist, 55, pp. 5 – 14. SHELDON, K.M., ELLIOT, A.J., KIM, Y. and KASSER, T., 2001, What is satisfying about satisfying events? Testing 10 candidate psychological needs. Journal of Personality and Social Psychology, 80, pp. 325 – 39. SINGH, S.N. and DALAL, N.P., 1999, Web home pages as advertisements. Communications of the ACM, 42, pp. 91 – 8. SUH, E., DIENER, E. and FUJITA, F., 1996, Events and subjective well-being: Only recent events matter. Journal of Personality and Social Psychology,",
"title": ""
},
{
"docid": "47ac4b546fe75f2556a879d6188d4440",
"text": "There is great interest in exploiting the opportunity provided by cloud computing platforms for large-scale analytics. Among these platforms, Apache Spark is growing in popularity for machine learning and graph analytics. Developing efficient complex analytics in Spark requires deep understanding of both the algorithm at hand and the Spark API or subsystem APIs (e.g., Spark SQL, GraphX). Our BigDatalog system addresses the problem by providing concise declarative specification of complex queries amenable to efficient evaluation. Towards this goal, we propose compilation and optimization techniques that tackle the important problem of efficiently supporting recursion in Spark. We perform an experimental comparison with other state-of-the-art large-scale Datalog systems and verify the efficacy of our techniques and effectiveness of Spark in supporting Datalog-based analytics.",
"title": ""
},
{
"docid": "587f1510411636090bc192b1b9219b58",
"text": "Creativity can be considered one of the key competencies for the twenty-first century. It provides us with the capacity to deal with the opportunities and challenges that are part of our complex and fast-changing world. The question as to what facilitates creative cognition-the ability to come up with creative ideas, problem solutions and products-is as old as the human sciences, and various means to enhance creative cognition have been studied. Despite earlier scientific studies demonstrating a beneficial effect of music on cognition, the effect of music listening on creative cognition has remained largely unexplored. The current study experimentally tests whether listening to specific types of music (four classical music excerpts systematically varying on valance and arousal), as compared to a silence control condition, facilitates divergent and convergent creativity. Creativity was higher for participants who listened to 'happy music' (i.e., classical music high on arousal and positive mood) while performing the divergent creativity task, than for participants who performed the task in silence. No effect of music was found for convergent creativity. In addition to the scientific contribution, the current findings may have important practical implications. Music listening can be easily integrated into daily life and may provide an innovative means to facilitate creative cognition in an efficient way in various scientific, educational and organizational settings when creative thinking is needed.",
"title": ""
},
{
"docid": "cdf2235bea299131929700406792452c",
"text": "Real-time detection of traffic signs, the task of pinpointing a traffic sign's location in natural images, is a challenging computer vision task of high industrial relevance. Various algorithms have been proposed, and advanced driver assistance systems supporting detection and recognition of traffic signs have reached the market. Despite the many competing approaches, there is no clear consensus on what the state-of-the-art in this field is. This can be accounted to the lack of comprehensive, unbiased comparisons of those methods. We aim at closing this gap by the “German Traffic Sign Detection Benchmark” presented as a competition at IJCNN 2013 (International Joint Conference on Neural Networks). We introduce a real-world benchmark data set for traffic sign detection together with carefully chosen evaluation metrics, baseline results, and a web-interface for comparing approaches. In our evaluation, we separate sign detection from classification, but still measure the performance on relevant categories of signs to allow for benchmarking specialized solutions. The considered baseline algorithms represent some of the most popular detection approaches such as the Viola-Jones detector based on Haar features and a linear classifier relying on HOG descriptors. Further, a recently proposed problem-specific algorithm exploiting shape and color in a model-based Houghlike voting scheme is evaluated. Finally, we present the best-performing algorithms of the IJCNN competition.",
"title": ""
},
{
"docid": "e33d34d0fbc19dbee009134368e40758",
"text": "Quantum metrology exploits quantum phenomena to improve the measurement sensitivity. Theoretical analysis shows that quantum measurement can break through the standard quantum limits and reach super sensitivity level. Quantum radar systems based on quantum measurement can fufill not only conventional target detection and recognition tasks but also capable of detecting and identifying the RF stealth platform and weapons systems. The theoretical basis, classification, physical realization of quantum radar is discussed comprehensively in this paper. And the technology state and open questions of quantum radars is reviewed at the end.",
"title": ""
},
{
"docid": "06b4bfebe295e3dceadef1a842b2e898",
"text": "Constant changes in the economic environment, where globalization and the development of the knowledge economy act as drivers, are systematically pushing companies towards the challenge of accessing external markets. Web localization constitutes a new field of study and professional intervention. From the translation perspective, localization equates to the website being adjusted to the typological, discursive and genre conventions of the target culture, adapting that website to a different language and culture. This entails much more than simply translating the content of the pages. The content of a webpage is made up of text, images and other multimedia elements, all of which have to be translated and subjected to cultural adaptation. A case study has been carried out to analyze the current presence of localization within Spanish SMEs from the chemical sector. Two types of indicator have been established for evaluating the sample: indicators for evaluating company websites (with a Likert scale from 0–4) and indicators for evaluating web localization (0–2 scale). The results show overall website quality is acceptable (2.5 points out of 4). The higher rating has been obtained by the system quality (with 2.9), followed by information quality (2.7 points) and, lastly, service quality (1.9 points). In the web localization evaluation, the contact information aspects obtain 1.4 points, the visual aspect 1.04, and the navigation aspect was the worse considered (0.37). These types of analysis facilitate the establishment of practical recommendations aimed at SMEs in order to increase their international presence through the localization of their websites.",
"title": ""
},
{
"docid": "3cae5c0440536b95cf1d0273071ad046",
"text": "Android platform adopts permissions to protect sensitive resources from untrusted apps. However, after permissions are granted by users at install time, apps could use these permissions (sensitive resources) with no further restrictions. Thus, recent years have witnessed the explosion of undesirable behaviors in Android apps. An important part in the defense is the accurate analysis of Android apps. However, traditional syscall-based analysis techniques are not well-suited for Android, because they could not capture critical interactions between the application and the Android system.\n This paper presents VetDroid, a dynamic analysis platform for reconstructing sensitive behaviors in Android apps from a novel permission use perspective. VetDroid features a systematic framework to effectively construct permission use behaviors, i.e., how applications use permissions to access (sensitive) system resources, and how these acquired permission-sensitive resources are further utilized by the application. With permission use behaviors, security analysts can easily examine the internal sensitive behaviors of an app. Using real-world Android malware, we show that VetDroid can clearly reconstruct fine-grained malicious behaviors to ease malware analysis. We further apply VetDroid to 1,249 top free apps in Google Play. VetDroid can assist in finding more information leaks than TaintDroid, a state-of-the-art technique. In addition, we show how we can use VetDroid to analyze fine-grained causes of information leaks that TaintDroid cannot reveal. Finally, we show that VetDroid can help identify subtle vulnerabilities in some (top free) applications otherwise hard to detect.",
"title": ""
},
{
"docid": "48a8cfc2ac8c8c63bbd15aba5a830ef9",
"text": "We extend prior research on masquerade detection using UNIX commands issued by users as the audit source. Previous studies using multi-class training requires gathering data from multiple users to train specific profiles of self and non-self for each user. Oneclass training uses data representative of only one user. We apply one-class Naïve Bayes using both the multivariate Bernoulli model and the Multinomial model, and the one-class SVM algorithm. The result shows that oneclass training for this task works as well as multi-class training, with the great practical advantages of collecting much less data and more efficient training. One-class SVM using binary features performs best among the oneclass training algorithms.",
"title": ""
},
{
"docid": "cf506587f2699d88e4a2e0be36ccac41",
"text": "A complete list of the titles in this series appears at the end of this volume. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages. Wiley also publishes its books in a variety of electronic formats. Some content that appears in print, however, may not be available in electronic format.",
"title": ""
},
{
"docid": "89c85642fc2e0b1f10c9a13b19f1d833",
"text": "Many current successful Person Re-Identification(ReID) methods train a model with the softmax loss function to classify images of different persons and obtain the feature vectors at the same time. However, the underlying feature embedding space is ignored. In this paper, we use a modified softmax function, termed Sphere Softmax, to solve the classification problem and learn a hypersphere manifold embedding simultaneously. A balanced sampling strategy is also introduced. Finally, we propose a convolutional neural network called SphereReID adopting Sphere Softmax and training a single model end-to-end with a new warming-up learning rate schedule on four challenging datasets including Market-1501, DukeMTMC-reID, CHHK-03, and CUHK-SYSU. Experimental results demonstrate that this single model outperforms the state-of-the-art methods on all four datasets without fine-tuning or reranking. For example, it achieves 94.4% rank-1 accuracy on Market-1501 and 83.9% rank-1 accuracy on DukeMTMC-reID. The code and trained weights of our model will be released.",
"title": ""
},
{
"docid": "fee96195e50e7418b5d63f8e6bd07907",
"text": "Optimal power flow (OPF) is considered for microgrids, with the objective of minimizing either the power distribution losses, or, the cost of power drawn from the substation and supplied by distributed generation (DG) units, while effecting voltage regulation. The microgrid is unbalanced, due to unequal loads in each phase and non-equilateral conductor spacings on the distribution lines. Similar to OPF formulations for balanced systems, the considered OPF problem is nonconvex. Nevertheless, a semidefinite programming (SDP) relaxation technique is advocated to obtain a convex problem solvable in polynomial-time complexity. Enticingly, numerical tests demonstrate the ability of the proposed method to attain the globally optimal solution of the original nonconvex OPF. To ensure scalability with respect to the number of nodes, robustness to isolated communication outages, and data privacy and integrity, the proposed SDP is solved in a distributed fashion by resorting to the alternating direction method of multipliers. The resulting algorithm entails iterative message-passing among groups of consumers and guarantees faster convergence compared to competing alternatives.",
"title": ""
},
{
"docid": "704d729295cddd358eba5eefdf0bdee4",
"text": "Remarkable advances in instrument technology, automation and computer science have greatly simplified many aspects of previously tedious tasks in laboratory diagnostics, creating a greater volume of routine work, and significantly improving the quality of results of laboratory testing. Following the development and successful implementation of high-quality analytical standards, analytical errors are no longer the main factor influencing the reliability and clinical utilization of laboratory diagnostics. Therefore, additional sources of variation in the entire laboratory testing process should become the focus for further and necessary quality improvements. Errors occurring within the extra-analytical phases are still the prevailing source of concern. Accordingly, lack of standardized procedures for sample collection, including patient preparation, specimen acquisition, handling and storage, account for up to 93% of the errors currently encountered within the entire diagnostic process. The profound awareness that complete elimination of laboratory testing errors is unrealistic, especially those relating to extra-analytical phases that are harder to control, highlights the importance of good laboratory practice and compliance with the new accreditation standards, which encompass the adoption of suitable strategies for error prevention, tracking and reduction, including process redesign, the use of extra-analytical specifications and improved communication among caregivers.",
"title": ""
},
{
"docid": "e05b1b6e1ca160b06e36b784df30b312",
"text": "The vision of the MDSD is an era of software engineering where modelling completely replaces programming i.e. the systems are entirely generated from high-level models, each one specifying a different view of the same system. The MDSD can be seen as the new generation of visual programming languages which provides methods and tools to streamline the process of software engineering. Productivity of the development process is significantly improved by the MDSD approach and it also increases the quality of the resulting software system. The MDSD is particularly suited for those software applications which require highly specialized technical knowledge due to the involvement of complex technologies and the large number of complex and unmanageable standards. In this paper, an overview of the MDSD is presented; the working styles and the main concepts are illustrated in detail.",
"title": ""
}
] |
scidocsrr
|
abd5e0c3461694f5de54fcc58fc8f0b1
|
NaLIR: an interactive natural language interface for querying relational databases
|
[
{
"docid": "000961818e2e0e619f1fc0464f69a496",
"text": "Database query languages can be intimidating to the non-expert, leading to the immense recent popularity for keyword based search in spite of its significant limitations. The holy grail has been the development of a natural language query interface. We present NaLIX, a generic interactive natural language query interface to an XML database. Our system can accept an arbitrary English language sentence as query input, which can include aggregation, nesting, and value joins, among other things. This query is translated, potentially after reformulation, into an XQuery expression that can be evaluated against an XML database. The translation is done through mapping grammatical proximity of natural language parsed tokens to proximity of corresponding elements in the result XML. In this demonstration, we show that NaLIX, while far from being able to pass the Turing test, is perfectly usable in practice, and able to handle even quite complex queries in a variety of application domains. In addition, we also demonstrate how carefully designed features in NaLIX facilitate the interactive query process and improve the usability of the interface.",
"title": ""
},
{
"docid": "026a0651177ee631a80aaa7c63a1c32f",
"text": "This paper is an introduction to natural language interfaces to databases (Nlidbs). A brief overview of the history of Nlidbs is rst given. Some advantages and disadvantages of Nlidbs are then discussed, comparing Nlidbs to formal query languages, form-based interfaces, and graphical interfaces. An introduction to some of the linguistic problems Nlidbs have to confront follows, for the beneet of readers less familiar with computational linguistics. The discussion then moves on to Nlidb architectures, porta-bility issues, restricted natural language input systems (including menu-based Nlidbs), and Nlidbs with reasoning capabilities. Some less explored areas of Nlidb research are then presented, namely database updates, meta-knowledge questions, temporal questions, and multi-modal Nlidbs. The paper ends with reeections on the current state of the art.",
"title": ""
}
] |
[
{
"docid": "5666b1a6289f4eac05531b8ff78755cb",
"text": "Neural text generation models are often autoregressive language models or seq2seq models. These models generate text by sampling words sequentially, with each word conditioned on the previous word, and are state-of-the-art for several machine translation and summarization benchmarks. These benchmarks are often defined by validation perplexity even though this is not a direct measure of the quality of the generated text. Additionally, these models are typically trained via maximum likelihood and teacher forcing. These methods are well-suited to optimizing perplexity but can result in poor sample quality since generating text requires conditioning on sequences of words that may have never been observed at training time. We propose to improve sample quality using Generative Adversarial Networks (GANs), which explicitly train the generator to produce high quality samples and have shown a lot of success in image generation. GANs were originally designed to output differentiable values, so discrete language generation is challenging for them. We claim that validation perplexity alone is not indicative of the quality of text generated by a model. We introduce an actor-critic conditional GAN that fills in missing text conditioned on the surrounding context. We show qualitatively and quantitatively, evidence that this produces more realistic conditional and unconditional text samples compared to a maximum likelihood trained model.",
"title": ""
},
{
"docid": "47f64720b0526a9141393131921c6e00",
"text": "The purpose of this study was to assess relative total body fat and skinfold patterning in Filipino national karate and pencak silat athletes. Participants were members of the Philippine men's and women's national teams in karate (12 males, 5 females) and pencak silat (17 males and 5 females). In addition to age, the following anthropometric measurements were taken: height, body mass, triceps, subscapular, supraspinale, umbilical, anterior thigh and medial calf skinfolds. Relative total body fat was expressed as sum of six skinfolds. Sum of skinfolds and each individual skinfold were also expressed relative to Phantom height. A two-way (Sport*Gender) ANOVA was used to determine the differences between men and women in total body fat and skinfold patterning. A Bonferroni-adjusted alpha was employed for all analyses. The women had a higher proportional sum of skinfols (80.19 ± 25.31 mm vs. 51.77 ± 21.13 mm, p = 0. 001, eta(2) = 0.275). The men had a lower proportional triceps skinfolds (-1.72 ± 0.71 versus - 0.35 ± 0.75, p < 0.001). Collapsed over gender, the karate athletes (-2.18 ± 0.66) had a lower proportional anterior thigh skinfold than their pencak silat colleagues (-1.71 ± 0.74, p = 0.001). Differences in competition requirements between sports may account for some of the disparity in anthropometric measurements. Key PointsThe purpose of the present investigation was to assess relative total body fat and skinfold patterning in Filipino national karate and pencak silat athletes.The results seem to suggest that there was no difference between combat sports in fatness.Skinfold patterning was more in line with what was reported in the literature with the males recording lower extremity fat.",
"title": ""
},
{
"docid": "6b37baf34546ac4a630aa435af4a2284",
"text": "The adoption of smartphones, devices transforming from simple communication devices to ‘smart’ and multipurpose devices, is constantly increasing. Amongst the main reasons are their small size, their enhanced functionality and their ability to host many useful and attractive applications. However, this vast use of mobile platforms makes them an attractive target for conducting privacy and security attacks. This scenario increases the risk introduced by these attacks for personal mobile devices, given that the use of smartphones as business tools may extend the perimeter of an organization's IT infrastructure. Furthermore, smartphone platforms provide application developers with rich capabilities, which can be used to compromise the security and privacy of the device holder and her environment (private and/or organizational). This paper examines the feasibility of malware development in smartphone platforms by average programmers that have access to the official tools and programming libraries provided by smartphone platforms. Towards this direction in this paper we initially propose specific evaluation criteria assessing the security level of the well-known smartphone platforms (i.e. Android, BlackBerry, Apple iOS, Symbian, Windows Mobile), in terms of the development of malware. In the sequel, we provide a comparative analysis, based on a proof of concept study, in which the implementation and distribution of a location tracking malware is attempted. Our study has proven that, under circumstances, all smartphone platforms could be used by average developers as privacy attack vectors, harvesting data from the device without the users knowledge and consent.",
"title": ""
},
{
"docid": "e8d0b295658e582e534b9f41b1f14b25",
"text": "The rapid development of artificial intelligence has brought the artificial intelligence threat theory as well as the problem about how to evaluate the intelligence level of intelligent products. Both need to find a quantitative method to evaluate the intelligence level of intelligence systems, including human intelligence. Based on the standard intelligence system and the extended Von Neumann architecture, this paper proposes General IQ, Service IQ and Value IQ evaluation methods for intelligence systems, depending on different evaluation purposes. Among them, the General IQ of intelligence systems is to answer the question of whether \"the artificial intelligence can surpass the human intelligence\", which is reflected in putting the intelligence systems on an equal status and conducting the unified evaluation. The Service IQ and Value IQ of intelligence systems are used to answer the question of “how the intelligent products can better serve the human”, reflecting the intelligence and required cost of each intelligence system as a product in the process of serving human. 0. Background With AlphaGo defeating the human Go champion Li Shishi in 2016[1], the worldwide artificial intelligence is developing rapidly. As a result, the artificial intelligence threat theory is widely disseminated as well. At the same time, the intelligent products are flourishing and emerging. Can the artificial intelligence surpass the human intelligence? What level exactly does the intelligence of these intelligent products reach? To answer these questions requires a quantitative method to evaluate the development level of intelligence systems. Since the introduction of the Turing test in 1950, scientists have done a great deal of work on the evaluation system for the development of artificial intelligence[2]. In 1950, Turing proposed the famous Turing experiment, which can determine whether a computer has the intelligence equivalent to that of human with questioning and human judgment method. As the most widely used artificial intelligence test method, the Turing test does not test the intelligence development level of artificial intelligence, but only judges whether the intelligence system can be the same with human intelligence, and depends heavily on the judges’ and testees’ subjective judgments due to too much interference from human factors, so some people often claim their ideas have passed the Turing test, even without any strict verification. On March 24, 2015, the Proceedings of the National Academy of Sciences (PNAS) published a paper proposing a new Turing test method called “Visual Turing test”, which was designed to perform a more in-depth evaluation on the image cognitive ability of computer[3]. In 2014, Mark O. Riedl of the Georgia Institute of Technology believed that the essence of intelligence lied in creativity. He designed a test called Lovelace version 2.0. The test range of Lovelace 2.0 includes the creation of a virtual story novel, poetry, painting and music[4]. There are two problems in various solutions including the Turing test in solving the artificial intelligence quantitative test. 
Firstly, these test methods do not form a unified intelligent model, nor do they use the model as a basis for analysis to distinguish multiple categories of intelligence, which leads to that it is impossible to test different intelligence systems uniformly, including human; secondly, these test methods can not quantitatively analyze artificial intelligence, or only quantitatively analyze some aspects of intelligence. But what percentage does this system reach to human intelligence? How’s its ratio of speed to the rate of development of human intelligence? All these problems are not covered in the above study. In response to these problems, the author of this paper proposes that: There are three types of IQs in the evaluation of intelligence level for intelligence systems based on different purposes, namely: General IQ, Service IQ and Value IQ. The theoretical basis of the three methods and IQs for the evaluation of intelligence systems, detailed definitions and evaluation methods will be elaborated in the following. 1. Theoretical Basis: Standard Intelligence System and Extended Von Neumann Architecture People are facing two major challenges in evaluating the intelligence level of an intelligence system, including human beings and artificial intelligence systems. Firstly, artificial intelligence systems do not currently form a unified model; secondly, there is no unified model for the comparison between the artificial intelligence systems and the human at present. In response to this problem, the author's research team referred to the Von Neumann Architecture[5], David Wexler's human intelligence model[6], and DIKW model system in the field of knowledge management[7], and put forward a \"standard intelligent model\", which describes the characteristics and attributes of the artificial intelligence systems and the human uniformly, and takes an agent as a system with the abilities of knowledge acquisition, mastery, creation and feedback[8] (see Figure 1). Figure 1 Standard Intelligence Model Based on this model in combination with Von Neumann architecture, an extended Von Neumann architecture can be formed (see Figure 2). Compared to the Von Neumann architecture, this model is added with innovation and creation function that can discover new elements of knowledge and new laws based on the existing knowledge, and make them stored in the storage for use by computers and controllers, and achieve knowledge interaction with the outside through the input / output system. The second addition is an external knowledge database or cloud storage that enables knowledge sharing, whereas the Von Neumann architecture's external storage only serves the single system. A. Arithmetic logic unit D. innovation generator B. Control unitE. input device C. Internal memory unit F. output device Figure 2 Expanded Von Neumann Architecture 2. Definitions of Three IQs of Intelligence System 2.1 Proposal of AI General IQ (AI G IQ) Based on the standard intelligent model, the research team established the AI IQ Test Scale and used it to conduct AI IQ tests on more than 50 artificial intelligence systems including Google, Siri, Baidu, Bing and human groups at the age of 6, 12, and 18 respectively in 2014 and 2016. From the test results, the performance of artificial intelligence systems such as Google and Baidu has been greatly increased from two years ago, but still lags behind the human group at the age of 6[9] (see Table1 and Table 2). Table 1. Ranking of top 13 artificial intelligence IQs for 2014.",
"title": ""
},
{
"docid": "db31e73ce01652b66a2b6a4becffafd7",
"text": "A thorough and complete colonoscopy is critically important in preventing colorectal cancer. Factors associated with difficult and incomplete colonoscopy include a poor bowel preparation, severe diverticulosis, redundant colon, looping, adhesions, young and female patients, patient discomfort, and the expertise of the endoscopist. For difficult colonoscopy, focusing on bowel preparation techniques, appropriate sedation and adjunct techniques such as water immersion, abdominal pressure techniques, and patient positioning can overcome many of these challenges. Occasionally, these fail and other alternatives to incomplete colonoscopy have to be considered. If patients have low risk of polyps, then noninvasive imaging options such as computed tomography (CT) or magnetic resonance (MR) colonography can be considered. Novel applications such as Colon Capsule™ and Check-Cap are also emerging. In patients in whom a clinically significant lesion is noted on a noninvasive imaging test or if they are at a higher risk of having polyps, balloon-assisted colonoscopy can be performed with either a single- or double-balloon enteroscope or colonoscope. The application of these techniques enables complete colonoscopic examination in the vast majority of patients.",
"title": ""
},
{
"docid": "329420b8b13e8c315d341e382419315a",
"text": "The aim of this research is to design an intelligent system that addresses the problem of real-time localization and navigation of visually impaired (VI) in an indoor environment using a monocular camera. Systems that have been developed so far for the VI use either many cameras (stereo and monocular) integrated with other sensors or use very complex algorithms that are computationally expensive. In this research work, a computationally less expensive integrated system has been proposed to combine imaging geometry, Visual Odometry (VO), Object Detection (OD) along with Distance-Depth (D-D) estimation algorithms for precise navigation and localization by utilizing a single monocular camera as the only sensor. The developed algorithm is tested for both standard Karlsruhe and indoor environment recorded datasets. Tests have been carried out in real-time using a smartphone camera that captures image data of the environment as the person moves and is sent over Wi-Fi for further processing to the MATLAB software model running on an Intel i7 processor. The algorithm provides accurate results on real-time navigation in the environment with an audio feedback about the person's location. The trajectory of the navigation is expressed in an arbitrary scale. Object detection based localization is accurate. The D-D estimation provides distance and depth measurements up to an accuracy of 94–98%.",
"title": ""
},
{
"docid": "4e37fee25234a84a32b2ffc721ade2f8",
"text": "Over the last decade, the deep neural networks are a hot topic in machine learning. It is breakthrough technology in processing images, video, speech, text and audio. Deep neural network permits us to overcome some limitations of a shallow neural network due to its deep architecture. In this paper we investigate the nature of unsupervised learning in restricted Boltzmann machine. We have proved that maximization of the log-likelihood input data distribution of restricted Boltzmann machine is equivalent to minimizing the cross-entropy and to special case of minimizing the mean squared error. Thus the nature of unsupervised learning is invariant to different training criteria. As a result we propose a new technique called “REBA” for the unsupervised training of deep neural networks. In contrast to Hinton’s conventional approach to the learning of restricted Boltzmann machine, which is based on linear nature of training rule, the proposed technique is founded on nonlinear training rule. We have shown that the classical equations for RBM learning are a special case of the proposed technique. As a result the proposed approach is more universal in contrast to the traditional energy-based model. We demonstrate the performance of the REBA technique using wellknown benchmark problem. The main contribution of this paper is a novel view and new understanding of an unsupervised learning in deep neural networks.",
"title": ""
},
{
"docid": "d3a6be631dcf65791b4443589acb6880",
"text": "We present a deep generative model for Zero-Shot Learning (ZSL). Unlike most existing methods for this problem, that represent each class as a point (via a semantic embedding), we represent each seen/unseen class using a classspecific latent-space distribution, conditioned on class attributes. We use these latent-space distributions as a prior for a supervised variational autoencoder (VAE), which also facilitates learning highly discriminative feature representations for the inputs. The entire framework is learned end-to-end using only the seen-class training data. At test time, the label for an unseen-class test input is the class that maximizes the VAE lower bound. We further extend the model to a (i) semi-supervised/transductive setting by leveraging unlabeled unseen-class data via an unsupervised learning module, and (ii) few-shot learning where we also have a small number of labeled inputs from the unseen classes. We compare our model with several state-of-the-art methods through a comprehensive set of experiments on a variety of benchmark data sets.",
"title": ""
},
{
"docid": "6a1fa32d9a716b57a321561dfce83879",
"text": "Most successful computational approaches for protein function prediction integrate multiple genomics and proteomics data sources to make inferences about the function of unknown proteins. The most accurate of these algorithms have long running times, making them unsuitable for real-time protein function prediction in large genomes. As a result, the predictions of these algorithms are stored in static databases that can easily become outdated. We propose a new algorithm, GeneMANIA, that is as accurate as the leading methods, while capable of predicting protein function in real-time. We use a fast heuristic algorithm, derived from ridge regression, to integrate multiple functional association networks and predict gene function from a single process-specific network using label propagation. Our algorithm is efficient enough to be deployed on a modern webserver and is as accurate as, or more so than, the leading methods on the MouseFunc I benchmark and a new yeast function prediction benchmark; it is robust to redundant and irrelevant data and requires, on average, less than ten seconds of computation time on tasks from these benchmarks. GeneMANIA is fast enough to predict gene function on-the-fly while achieving state-of-the-art accuracy. A prototype version of a GeneMANIA-based webserver is available at http://morrislab.med.utoronto.ca/prototype .",
"title": ""
},
{
"docid": "23208f44270f69c4de1640bb1c865a73",
"text": "In order to provide a wide variety of mobile services and applications, the fifth-generation (5G) mobile communication system has attracted much attention to improve system capacity much more than the 4G system. The drastic improvement is mainly realized by small/semi-macro cell deployment with much wider bandwidth in higher frequency bands. To cope with larger pathloss in the higher frequency bands, Massive MIMO is one of key technologies to acquire beamforming (BF) in addition to spatial multiplexing. This paper introduces 5G Massive MIMO technologies including high-performance hybrid BF and novel digital BF schemes in addition to distributed Massive MIMO concept with flexible antenna deployment. The latest 5G experimental trials using the Massive MIMO technologies are also shown briefly.",
"title": ""
},
{
"docid": "cb2917b8e6ea5413ef25bb241ff17d1f",
"text": "can be found at: Journal of Language and Social Psychology Additional services and information for http://jls.sagepub.com/cgi/alerts Email Alerts: http://jls.sagepub.com/subscriptions Subscriptions: http://www.sagepub.com/journalsReprints.nav Reprints: http://www.sagepub.com/journalsPermissions.nav Permissions: http://jls.sagepub.com/cgi/content/refs/23/4/447 SAGE Journals Online and HighWire Press platforms): (this article cites 16 articles hosted on the Citations",
"title": ""
},
{
"docid": "03ff1bdb156c630add72357005a142f5",
"text": "Recent advances in media generation techniques have made it easier for attackers to create forged images and videos. Stateof-the-art methods enable the real-time creation of a forged version of a single video obtained from a social network. Although numerous methods have been developed for detecting forged images and videos, they are generally targeted at certain domains and quickly become obsolete as new kinds of attacks appear. The method introduced in this paper uses a capsule network to detect various kinds of spoofs, from replay attacks using printed images or recorded videos to computergenerated videos using deep convolutional neural networks. It extends the application of capsule networks beyond their original intention to the solving of inverse graphics problems.",
"title": ""
},
{
"docid": "00309e5119bb0de1d7b2a583b8487733",
"text": "In this paper, we propose a novel Deep Reinforcement Learning framework for news recommendation. Online personalized news recommendation is a highly challenging problem due to the dynamic nature of news features and user preferences. Although some online recommendation models have been proposed to address the dynamic nature of news recommendation, these methods have three major issues. First, they only try to model current reward (e.g., Click Through Rate). Second, very few studies consider to use user feedback other than click / no click labels (e.g., how frequent user returns) to help improve recommendation. Third, these methods tend to keep recommending similar news to users, which may cause users to get bored. Therefore, to address the aforementioned challenges, we propose a Deep Q-Learning based recommendation framework, which can model future reward explicitly. We further consider user return pattern as a supplement to click / no click label in order to capture more user feedback information. In addition, an effective exploration strategy is incorporated to find new attractive news for users. Extensive experiments are conducted on the offline dataset and online production environment of a commercial news recommendation application and have shown the superior performance of our methods.",
"title": ""
},
{
"docid": "bc9469a9912df59e554c1be99f12d319",
"text": "This paper studies the joint learning of action recognition and temporal localization in long, untrimmed videos. We employ a multi-task learning framework that performs the three highly related steps of action proposal, action recognition, and action localization refinement in parallel instead of the standard sequential pipeline that performs the steps in order. We develop a novel temporal actionness regression module that estimates what proportion of a clip contains action. We use it for temporal localization but it could have other applications like video retrieval, surveillance, summarization, etc. We also introduce random shear augmentation during training to simulate viewpoint change. We evaluate our framework on three popular video benchmarks. Results demonstrate that our joint model is efficient in terms of storage and computation in that we do not need to compute and cache dense trajectory features, and that it is several times faster than its sequential ConvNets counterpart. Yet, despite being more efficient, it outperforms stateof-the-art methods with respect to accuracy.",
"title": ""
},
{
"docid": "a3a12def5690cac73226484fe172e9f8",
"text": "Solar, wind and hydro are renewable energy sources that are seen as reliable alternatives to conventional energy sources such as oil or natural gas. However, the efficiency and the performance of renewable energy systems are still under development. Consequently, the control structures of the grid-connected inverter as an important section for energy conversion and transmission should be improved to meet the requirements for grid interconnection. In this paper, a comprehensive simulation and implementation of a three-phase grid-connected inverter is presented. The control structure of the grid-side inverter is firstly discussed. Secondly, the space vector modulation SVM is presented. Thirdly, the synchronization for grid-connected inverters is discussed. Finally, the simulation of the grid-connected inverter system using PSIM simulation package and the system implementation are presented to illustrate concepts and compare their results.",
"title": ""
},
{
"docid": "cb7397dedaa92be09dec1f78532b9fc5",
"text": "This paper investigates a new strategy for radio resource allocation applying a non-orthogonal multiple access (NOMA) scheme. It calls for the cohabitation of users in the power domain at the transmitter side and for successive interference canceller (SIC) at the receiver side. Taking into account multi-user scheduling, subband assignment and transmit power allocation, a hybrid NOMA scheme is introduced. Adaptive switching to orthogonal signaling (OS) is performed whenever the non-orthogonal cohabitation in the power domain does not improve the achieved data rate per subband. In addition, a new power allocation technique based on waterfilling is introduced to improve the total achieved system throughput. We show that the proposed strategy for resource allocation improves both the spectral efficiency and the cell-edge user throughput. It also proves to be robust in the case of communications in crowded areas.",
"title": ""
},
{
"docid": "a5c054899abf8aa553da4a576577678e",
"text": "Developmental programming resulting from maternal malnutrition can lead to an increased risk of metabolic disorders such as obesity, insulin resistance, type 2 diabetes and cardiovascular disorders in the offspring in later life. Furthermore, many conditions linked with developmental programming are also known to be associated with the aging process. This review summarizes the available evidence about the molecular mechanisms underlying these effects, with the potential to identify novel areas of therapeutic intervention. This could also lead to the discovery of new treatment options for improved patient outcomes.",
"title": ""
},
{
"docid": "ac41c57bcb533ab5dabcc733dd69a705",
"text": "In this paper we propose two ways to deal with the imbalanced data classification problem using random forest. One is based on cost sensitive learning, and the other is based on a sampling technique. Performance metrics such as precision and recall, false positive rate and false negative rate, F-measure and weighted accuracy are computed. Both methods are shown to improve the prediction accuracy of the minority class, and have favorable performance compared to the existing algorithms.",
"title": ""
},
{
"docid": "6992e0712e99e11b9ebe862c01c0882b",
"text": "This paper is in many respects a continuation of the earlier paper by the author published in Proc. R. Soc. A in 1998 entitled ‘A comprehensive methodology for the design of ships (and other complex systems)’. The earlier paper described the approach to the initial design of ships developedby the author during some 35years of design practice, including two previous secondments to teach ship design atUCL.Thepresent paper not only takes thatdevelopment forward, it also explains how the research tool demonstrating the author’s approach to initial ship design has now been incorporated in an industry based design system to provide a working graphically and numerically integrated design system. This achievement is exemplified by a series of practical design investigations, undertaken by the UCL Design Research Centre led by the author, which were mainly undertaken for industry clients in order to investigate real problems towhich the approachhasbrought significant insights.The other new strand in the present paper is the emphasis on the human factors or large scale ergonomics dimension, vital to complex and large scale design products but rarely hitherto beengiven sufficientprominence in the crucial formative stagesof large scale designbecauseof the inherent difficulties in doing so. The UCL Design Building Block approach has now been incorporated in the established PARAMARINE ship design system through a module entitled SURFCON. Work is now underway on an Engineering and Physical Sciences Research Council joint project with the University of Greenwich to interface the latter’s escape simulation toolmaritimeEXODUSwithSURFCONtoprovide initial design guidance to ship designers on personnelmovement. The paper’s concluding section considers the wider applicability of the integration of simulation during initial design with the graphically driven synthesis to other complex and large scale design tasks. The paper concludes by suggesting how such an approach to complex design can contribute to the teaching of designers and, moreover, how this designapproach can enable a creative qualitative approach to engineering design to be sustained despite the risk that advances in computer based methods might encourage emphasis being accorded to solely to quantitative analysis.",
"title": ""
},
{
"docid": "fa7a4970cf70032acfd6bdc383107574",
"text": "Alumina-titanium materials (cermets) of enhanced mechanical properties have been lately developed. In this work, physical properties such as electrical conductivity and the crystalline phases in the bulk material are evaluated. As these new cermets manufactured by spark plasma sintering may have potential application for hard tissue replacements, their biocompatibility needs to be evaluated. Thus, this research aims to study the cytocompatibility of a novel alumina-titanium (25 vol. % Ti) cermet compared to its pure counterpart, the spark plasma sintered alumina. The influence of the particular surface properties (chemical composition, roughness and wettability) on the pre-osteoblastic cell response is also analyzed. The material electrical resistance revealed that this cermet may be machined to any shape by electroerosion. The investigated specimens had a slightly undulated topography, with a roughness pattern that had similar morphology in all orientations (isotropic roughness) and a sub-micrometric average roughness. Differences in skewness that implied valley-like structures in the cermet and predominance of peaks in alumina were found. The cermet presented a higher surface hydrophilicity than alumina. Any cytotoxicity risk associated with the new materials or with the innovative manufacturing methodology was rejected. Proliferation and early-differentiation stages of osteoblasts were statistically improved on the composite. Thus, our results suggest that this new multifunctional cermet could improve current alumina-based biomedical devices for applications such as hip joint replacements.",
"title": ""
}
] |
scidocsrr
|
22bd5eb662e28e0c50a7dfa9a92cec89
|
Towards SMS Spam Filtering : Results under a New Dataset
|
[
{
"docid": "5a6fc8dd2b73f5481cbba649e5e76c1b",
"text": "Mobile phones are becoming the latest target of electronic junk mail. Recent reports clearly indicate that the volume of SMS spam messages are dramatically increasing year by year. Probably, one of the major concerns in academic settings was the scarcity of public SMS spam datasets, that are sorely needed for validation and comparison of different classifiers. To address this issue, we have recently proposed a new SMS Spam Collection that, to the best of our knowledge, is the largest, public and real SMS dataset available for academic studies. However, as it has been created by augmenting a previously existing database built using roughly the same sources, it is sensible to certify that there are no duplicates coming from them. So, in this paper we offer a comprehensive analysis of the new SMS Spam Collection in order to ensure that this does not happen, since it may ease the task of learning SMS spam classifiers and, hence, it could compromise the evaluation of methods. The analysis of results indicate that the procedure followed does not lead to near-duplicates and, consequently, the proposed dataset is reliable to use for evaluating and comparing the performance achieved by different classifiers.",
"title": ""
},
{
"docid": "52a5f4c15c1992602b8fe21270582cc6",
"text": "This paper proposes a new algorithm for training support vector machines: Sequential Minimal Optimization, or SMO. Training a support vector machine requires the solution of a very large quadratic programming (QP) optimization problem. SMO breaks this large QP problem into a series of smallest possible QP problems. These small QP problems are solved analytically, which avoids using a time-consuming numerical QP optimization as an inner loop. The amount of memory required for SMO is linear in the training set size, which allows SMO to handle very large training sets. Because matrix computation is avoided, SMO scales somewhere between linear and quadratic in the training set size for various test problems, while the standard chunking SVM algorithm scales somewhere between linear and cubic in the training set size. SMO’s computation time is dominated by SVM evaluation, hence SMO is fastest for linear SVMs and sparse data sets. On realworld sparse data sets, SMO can be more than 1000 times faster than the chunking algorithm.",
"title": ""
},
{
"docid": "73973ae6c858953f934396ab62276e0d",
"text": "The unsolicited bulk messages are widespread in the applications of short messages. Although the existing spam filters have satisfying performance, they are facing the challenge of an adversary who misleads the spam filters by manipulating samples. Until now, the vulnerability of spam filtering technique for short messages has not been investigated. Different from the other spam applications, a short message only has a few words and its length usually has an upper limit. The current adversarial learning algorithms may not work efficiently in short message spam filtering. In this paper, we investigate the existing good word attack and its counterattack method, i.e. the feature reweighting, in short message spam filtering in an effort to understand whether, and to what extent, they can work efficiently when the length of a message is limited. This paper proposes a good word attack strategy which maximizes the influence to a classifier with the least number of inserted characters based on the weight values and also the length of words. On the other hand, we also proposes the feature reweighting method with a new rescaling function which minimizes the importance of the feature representing a short word in order to require more inserted characters for a successful evasion. The methods are evaluated experimentally by using the SMS and the comment spam dataset. The results confirm that the length of words is a critical factor of the robustness of short message spam filtering to good word attack. & 2014 Elsevier B.V. All rights reserved.",
"title": ""
}
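As a concrete reading of the attack described above: against a linear filter, a good word attack inserts legitimate-looking words whose weights pull the score toward ham, and the length-limited setting makes the number of inserted characters the relevant cost. The toy greedy selector below illustrates only that trade-off; it is not the authors' exact strategy, and `weights`, `lengths` and the zero decision threshold are illustrative assumptions.

```python
def pick_good_words(weights, lengths, budget_chars, spam_score):
    """Greedy good-word attack sketch against a linear filter.
    weights: dict word -> classifier weight (positive = spammy, negative = hammy)
    lengths: dict word -> character cost of inserting that word
    Returns the inserted words and the resulting score (ham if score <= 0)."""
    # Rank hammy candidates by score reduction per inserted character.
    candidates = sorted(
        (w for w in weights if weights[w] < 0),
        key=lambda w: weights[w] / lengths[w]        # most negative ratio first
    )
    inserted, used = [], 0
    for w in candidates:
        if spam_score <= 0 or used + lengths[w] > budget_chars:
            break                                    # already evading, or out of budget
        inserted.append(w)
        used += lengths[w]
        spam_score += weights[w]
    return inserted, spam_score
```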
] |
[
{
"docid": "dfca5783e6ec34d228278f14c5719288",
"text": "Generative Adversarial networks (GANs) have obtained remarkable success in many unsupervised learning tasks and unarguably, clustering is an important unsupervised learning problem. While one can potentially exploit the latentspace back-projection in GANs to cluster, we demonstrate that the cluster structure is not retained in the GAN latent space. In this paper, we propose ClusterGAN as a new mechanism for clustering using GANs. By sampling latent variables from a mixture of one-hot encoded variables and continuous latent variables, coupled with an inverse network (which projects the data to the latent space) trained jointly with a clustering specific loss, we are able to achieve clustering in the latent space. Our results show a remarkable phenomenon that GANs can preserve latent space interpolation across categories, even though the discriminator is never exposed to such vectors. We compare our results with various clustering baselines and demonstrate superior performance on both synthetic and real datasets.",
"title": ""
},
{
"docid": "f1d0f218b789ac104448777c82a4093f",
"text": "This paper critically reviews the literature on managing diversity through human resource management (HRM). We discuss the major issues and objectives of managing diversity and examine the state of human resource diversity management practices in organizations. Our review shows that inequality and discrimination still widely exist and HRM has focused mainly on compliance with equal employment opportunity (EEO) and affirmative action (AA) legislation. Less attention has been paid to valuing, developing and making use of diversity. Our review reveals limited literature examining how diversity is managed in organizations through effective human resource management. We develop a framework that presents strategies for HR diversity management at the strategic, tactical and operational levels. Our review also discusses the implications for practice and further research.",
"title": ""
},
{
"docid": "6b7ab5e130cba03fd9ec41837f82880a",
"text": "High-utility itemset (HUI) mining is a popular data mining task, consisting of enumerating all groups of items that yield a high profit in a customer transaction database. However, an important issue with traditional HUI mining algorithms is that they tend to find itemsets having many items. But those itemsets are often rare, and thus may be less interesting than smaller itemsets for users. In this paper, we address this issue by presenting a novel algorithm named FHM+ for mining HUIs, while considering length constraints. To discover HUIs efficiently with length constraints, FHM+ introduces the concept of Length UpperBound Reduction (LUR), and two novel upper-bounds on the utility of itemsets. An extensive experimental evaluation shows that length constraints are effective at reducing the number of patterns, and the novel upper-bounds can greatly decrease the execution time, and memory usage for HUI mining.",
"title": ""
},
{
"docid": "61c6d49c3cdafe4366d231ebad676077",
"text": "Video affective content analysis has been an active research area in recent decades, since emotion is an important component in the classification and retrieval of videos. Video affective content analysis can be divided into two approaches: direct and implicit. Direct approaches infer the affective content of videos directly from related audiovisual features. Implicit approaches, on the other hand, detect affective content from videos based on an automatic analysis of a user's spontaneous response while consuming the videos. This paper first proposes a general framework for video affective content analysis, which includes video content, emotional descriptors, and users' spontaneous nonverbal responses, as well as the relationships between the three. Then, we survey current research in both direct and implicit video affective content analysis, with a focus on direct video affective content analysis. Lastly, we identify several challenges in this field and put forward recommendations for future research.",
"title": ""
},
{
"docid": "d3783bcc47ed84da2c54f5f536450a0c",
"text": "In this paper, we present a new framework for large scale online kernel learning, making kernel methods efficient and scalable for large-scale online learning applications. Unlike the regular budget online kernel learning scheme that usually uses some budget maintenance strategies to bound the number of support vectors, our framework explores a completely different approach of kernel functional approximation techniques to make the subsequent online learning task efficient and scalable. Specifically, we present two different online kernel machine learning algorithms: (i) Fourier Online Gradient Descent (FOGD) algorithm that applies the random Fourier features for approximating kernel functions; and (ii) Nyström Online Gradient Descent (NOGD) algorithm that applies the Nyström method to approximate large kernel matrices. We explore these two approaches to tackle three online learning tasks: binary classification, multi-class classification, and regression. The encouraging results of our experiments on large-scale datasets validate the effectiveness and efficiency of the proposed algorithms, making them potentially more practical than the family of existing budget online kernel learning approaches.",
"title": ""
},
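Of the two algorithms summarized above, FOGD is the easier to sketch: draw random Fourier features that approximate an RBF kernel once, then run ordinary online gradient descent on the resulting linear model. The class below is a minimal illustration for binary classification with a hinge-loss update; the NOGD (Nyström) variant and all tuning details are omitted, and the hyperparameter names are our own.

```python
import numpy as np

class FOGD:
    """Fourier Online Gradient Descent sketch: approximate an RBF kernel
    k(x, y) = exp(-gamma * ||x - y||^2) with D random Fourier features,
    then learn a linear model online in that feature space."""
    def __init__(self, dim, D=200, gamma=1.0, lr=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0.0, np.sqrt(2 * gamma), size=(dim, D))  # spectral samples
        self.b = rng.uniform(0.0, 2 * np.pi, size=D)                 # random phases
        self.w = np.zeros(D)                                         # linear weights
        self.lr = lr
        self.D = D

    def features(self, x):
        return np.sqrt(2.0 / self.D) * np.cos(x @ self.W + self.b)

    def predict(self, x):
        return np.sign(self.features(x) @ self.w)

    def update(self, x, y):
        """One online step with the hinge loss; y must be in {-1, +1}."""
        z = self.features(x)
        if y * (z @ self.w) < 1:
            self.w += self.lr * y * z
```

Each example costs O(dim * D) regardless of how many examples have been seen, which is the point of replacing support vectors with a fixed functional approximation.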
{
"docid": "c2a7fa32a3037ff30bd633ed0934ee5f",
"text": "databases have been attracting a significant amount of research, industry, and media attention of late. What is all the excitement about? This article provides an overview of this emerging field, clarifying how data mining and knowledge discovery in databases are related both to each other and to related fields, such as machine learning, statistics, and databases. The article mentions particular real-world applications, specific data-mining techniques, challenges involved in real-world applications of knowledge discovery, and current and future research directions in the field.",
"title": ""
},
{
"docid": "30957a6b88724db8f59dd35a79523a4b",
"text": "It is believed that repeated exposure to real-life and to entertainment violence may alter cognitive, affective, and behavioral processes, possibly leading to desensitization. The goal of the present study was to determine if there are relationships between real-life and media violence exposure and desensitization as reflected in related characteristics. One hundred fifty fourth and fifth graders completed measures of real-life violence exposure, media violence exposure, empathy, and attitudes towards violence. Regression analyses indicated that only exposure to video game violence was associated with (lower) empathy. Both video game and movie violence exposure were associated with stronger proviolence attitudes. The active nature of playing video games, intense engagement, and the tendency to be translated into fantasy play may explain negative impact, though causality was not investigated in the present design. The samples' relatively low exposure to real-life violence may have limited the identification of relationships. Although difficult to quantify, desensitization to violence should be further studied using related characteristics as in the present study. Individual differences and causal relationships should also be examined.",
"title": ""
},
{
"docid": "30cd626772ad8c8ced85e8312d579252",
"text": "An off-state leakage current unique for short-channel SOI MOSFETs is reported. This off-state leakage is the amplification of gate-induced-drain-leakage current by the lateral bipolar transistor in an SOI device due to the floating body. The leakage current can be enhanced by as much as 100 times for 1/4 mu m SOI devices. This can pose severe constraints in future 0.1 mu m SOI device design. A novel technique was developed based on this mechanism to measure the lateral bipolar transistor current gain beta of SOI devices without using a body contact.<<ETX>>",
"title": ""
},
{
"docid": "75c1fa342d6f30d68b0aba906a54dd69",
"text": "The Constrained Application Protocol (CoAP) is a promising candidate for future smart city applications that run on resource-constrained devices. However, additional security means are mandatory to cope with the high security requirements of smart city applications. We present a framework to evaluate lightweight intrusion detection techniques for CoAP applications. This framework combines an OMNeT++ simulation with C/C++ application code that also runs on real hardware. As the result of our work, we used our framework to evaluate intrusion detection techniques for a smart public transport application that uses CoAP. Our first evaluations indicate that a hybrid IDS approach is a favorable choice for smart city applications.",
"title": ""
},
{
"docid": "f7562e0540e65fdfdd5738d559b4aad1",
"text": "An important aspect of marketing practice is the targeting of consumer segments for differential promotional activity. The premise of this activity is that there exist distinct segments of homogeneous consumers who can be identified by readily available demographic information. The increased availability of individual consumer panel data open the possibility of direct targeting of individual households. The goal of this paper is to assess the information content of various information sets available for direct marketing purposes. Information on the consumer is obtained from the current and past purchase history as well as demographic characteristics. We consider the situation in which the marketer may have access to a reasonably long purchase history which includes both the products purchased and information on the causal environment. Short of this complete purchase history, we also consider more limited information sets which consist of only the current purchase occasion or only information on past product choice without causal variables. Proper evaluation of this information requires a flexible model of heterogeneity which can accommodate observable and unobservable heterogeneity as well as produce household level inferences for targeting purposes. We develop new econometric methods to imple0732-2399/96/1504/0321$01.25 Copyright C 1996, Institute for Operations Research and the Management Sciences ment a random coefficient choice model in which the heterogeneity distribution is related to observable demographics. We couple this approach to modeling heterogeneity with a target couponing problem in which coupons are customized to specific households on the basis of various information sets. The couponing problem allows us to place a monetary value on the information sets. Our results indicate there exists a tremendous potential for improving the profitability of direct marketing efforts by more fully utilizing household purchase histories. Even rather short purchase histories can produce a net gain in revenue from target couponing which is 2.5 times the gain from blanket couponing. The most popular current electronic couponing trigger strategy uses only one observation to customize the delivery of coupons. Surprisingly, even the information contained in observing one purchase occasion boasts net couponing revenue by 50% more than that which would be gained by the blanket strategy. This result, coupled with increased competitive pressures, will force targeted marketing strategies to become much more prevalent in the future than they are today. (Target Marketing; Coupons; Heterogeneity; Bayesian Hierarchical Models) MARKETING SCIENCE/Vol. 15, No. 4, 1996 pp. 321-340 THE VALUE OF PURCHASE HISTORY DATA IN TARGET MARKETING",
"title": ""
},
{
"docid": "c943d44e452c5cd5e027df814f8aac32",
"text": "Three experiments tested the hypothesis that the social roles implied by specific contexts can attenuate or reverse the typical pattern of racial bias obtained on both controlled and automatic evaluation measures. Study 1 assessed evaluations of Black and Asian faces in contexts related to athlete or student roles. Study 2 compared evaluations of Black and White faces in 3 role-related contexts (prisoner, churchgoer, and factory worker). Study 3 manipulated role cues (lawyer or prisoner) within the same prison context. All 3 studies produced significant reversals of racial bias as a function of implied role on measures of both controlled and automatic evaluation. These results support the interpretation that differential evaluations based on Race x Role interactions provide one way that context can moderate both controlled and automatic racial bias.",
"title": ""
},
{
"docid": "48aa68862748ab502f3942300b4d8e1e",
"text": "While data volumes continue to rise, the capacity of human attention remains limited. As a result, users need analytics engines that can assist in prioritizing attention in this fast data that is too large for manual inspection. We present a set of design principles for the design of fast data analytics engines that leverage the relative scarcity of human attention and overabundance of data: return fewer results, prioritize iterative analysis, and filter fast to compute less. We report on our early experiences employing these principles in the design and deployment of MacroBase, an open source analysis engine for prioritizing attention in fast data. By combining streaming operators for feature transformation, classification, and data summarization, MacroBase provides users with interpretable explanations of key behaviors, acting as a search engine for fast data.",
"title": ""
},
{
"docid": "d5bc5837349333a6f1b0b47f16844c13",
"text": "Personalized news recommender systems have gained increasing attention in recent years. Within a news reading community, the implicit correlations among news readers, news articles, topics and named entities, e.g., what types of named entities in articles are preferred by users, and why users like the articles, could be valuable for building an effective news recommender. In this paper, we propose a novel news personalization framework by mining such correlations. We use hypergraph to model various high-order relations among different objects in news data, and formulate news recommendation as a ranking problem on fine-grained hypergraphs. In addition, by transductive inference, our proposed algorithm is capable of effectively handling the so-called cold-start problem. Extensive experiments on a data set collected from various news websites have demonstrated the effectiveness of our proposed algorithm.",
"title": ""
},
{
"docid": "633ae4599a8d5ce5fd3b8dc8c465dd90",
"text": "Softmax is an output activation function for modeling categorical probability distributions in many applications of deep learning. However, a recent study revealed that softmax can be a bottleneck of representational capacity of neural networks in language modeling (the softmax bottleneck). In this paper, we propose an output activation function for breaking the softmax bottleneck without additional parameters. We re-analyze the softmax bottleneck from the perspective of the output set of log-softmax and identify the cause of the softmax bottleneck. On the basis of this analysis, we propose sigsoftmax, which is composed of a multiplication of an exponential function and sigmoid function. Sigsoftmax can break the softmax bottleneck. The experiments on language modeling demonstrate that sigsoftmax and mixture of sigsoftmax outperform softmax and mixture of softmax, respectively.",
"title": ""
},
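The definition in the abstract above, an exponential multiplied by a sigmoid and then normalized over the output classes, is short enough to state directly. A small NumPy sketch follows; the max-shift stabilizes only the exponential factor (it cancels in the ratio), while the sigmoid is applied to the raw logits.

```python
import numpy as np

def sigsoftmax(z):
    """Sigsoftmax output activation:
    g(z_i) = exp(z_i) * sigmoid(z_i) / sum_j exp(z_j) * sigmoid(z_j)."""
    z = np.asarray(z, dtype=float)
    e = np.exp(z - z.max())            # stabilized exp factor; the shift cancels in the ratio
    s = 1.0 / (1.0 + np.exp(-z))       # sigmoid on raw logits (overflow for very negative z
                                       # harmlessly saturates to 0, the correct limit)
    g = e * s
    return g / g.sum()

# Example: unlike softmax, sigsoftmax is not invariant to shifting all logits.
print(sigsoftmax([2.0, 1.0, -1.0]))
```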
{
"docid": "34546e42bd78161259d2bc190e36c9f7",
"text": "Peer to Peer networks are the leading cause for music piracy but also used for music sampling prior to purchase. In this paper we investigate the relations between music file sharing and sales (both physical and digital)using large Peer-to-Peer query database information. We compare file sharing information on songs to their popularity on the Billboard Hot 100 and the Billboard Digital Songs charts, and show that popularity trends of songs on the Billboard have very strong correlation (0.88-0.89) to their popularity on a Peer-to-Peer network. We then show how this correlation can be utilized by common data mining algorithms to predict a song's success in the Billboard in advance, using Peer-to-Peer information.",
"title": ""
},
{
"docid": "6ddad64507fa5ebf3b2930c261584967",
"text": "In this article we propose a methodology to determine snow cover by means of Landsat-7 ETM+ and Landsat-5 TM images, as well as an improvement in daily Snow Cover TERRA- MODIS product (MOD10A1), between 2002 and 2005. Both methodologies are based on a NDSI threshold > 0.4. In the Landsat case, and although this threshold also selects water bodies, we have obtained optimal results using a mask of water bodies and generating a pre-boundary snow mask around the snow cover. Moreover, an important improvement in snow cover mapping in shadow cast areas by means of a hybrid classification has been obtained. Using these results as ground truth we have verified MODIS Snow Cover product using coincident dates. In the MODIS product, we have noted important commission errors in water bodies, forest covers and orographic shades because of the NDVI-NDSI filter applied to this product. In order to improve MODIS snow cover determination using MODIS images, we propose a hybrid methodology based on experience with Landsat images, which provide greater spatial resolution.",
"title": ""
},
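The core operation in the snow-mapping abstract above is thresholding the Normalized Difference Snow Index, NDSI = (green - SWIR) / (green + SWIR), at 0.4 and then masking out water bodies, which also exceed the threshold. The sketch below shows just that step for Landsat-style bands (typically band 2 for green and band 5 for SWIR on TM/ETM+); the pre-boundary mask and the hybrid shadow classification described in the abstract are not reproduced.

```python
import numpy as np

def snow_mask(green, swir, water_mask, ndsi_threshold=0.4):
    """NDSI-based snow detection sketch.
    green, swir: 2D reflectance arrays of the same shape.
    water_mask:  boolean array, True where a known water body is located
                 (water also shows high NDSI, so it must be excluded)."""
    ndsi = (green - swir) / (green + swir + 1e-6)   # small epsilon avoids division by zero
    return (ndsi > ndsi_threshold) & ~water_mask
```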
{
"docid": "640af69086854b79257cbdeb4668830b",
"text": "Traditionally traffic safety was addressed by traffic awareness and passive safety measures like solid chassis, seat belts, air bags etc. With the recent breakthroughs in the domain of mobile ad hoc networks, the concept of vehicular ad hoc networks (VANET) was realised. Safety messaging is the most important aspect of VANETs, where the passive safety (accident readiness) in vehicles was reinforced with the idea of active safety (accident prevention). In safety messaging vehicles will message each other over wireless media, updating each other on traffic conditions and hazards. Security is an important aspect of safety messaging, that aims to prevent participants spreading wrong information in the network that are likely to cause mishaps. Equally important is the fact that secure communication protocols should satisfy the communication constraints of VANETs. VANETs are delay intolerant. Features like high speeds, large network size, constant mobility etc. induce certain limitations in the way messaging can be carried out in VANETs. This thesis studies the impact of total message size on VANET messaging system performance, and conducts an analysis of secure communication protocols to measure how they perform in a VANET messaging system.",
"title": ""
},
{
"docid": "e584549afba4c444c32dfe67ee178a84",
"text": "Bayesian networks (BNs) provide a means for representing, displaying, and making available in a usable form the knowledge of experts in a given Weld. In this paper, we look at the performance of an expert constructed BN compared with other machine learning (ML) techniques for predicting the outcome (win, lose, or draw) of matches played by Tottenham Hotspur Football Club. The period under study was 1995–1997 – the expert BN was constructed at the start of that period, based almost exclusively on subjective judgement. Our objective was to determine retrospectively the comparative accuracy of the expert BN compared to some alternative ML models that were built using data from the two-year period. The additional ML techniques considered were: MC4, a decision tree learner; Naive Bayesian learner; Data Driven Bayesian (a BN whose structure and node probability tables are learnt entirely from data); and a K-nearest neighbour learner. The results show that the expert BN is generally superior to the other techniques for this domain in predictive accuracy. The results are even more impressive for BNs given that, in a number of key respects, the study assumptions place them at a disadvantage. For example, we have assumed that the BN prediction is ‘incorrect’ if a BN predicts more than one outcome as equally most likely (whereas, in fact, such a prediction would prove valuable to somebody who could place an ‘each way’ bet on the outcome). Although the expert BN has now long been irrelevant (since it contains variables relating to key players who have retired or left the club) the results here tend to conWrm the excellent potential of BNs when they are built by a reliable domain expert. The ability to provide accurate predictions without requiring much learning data are an obvious bonus in any domain where data are scarce. Moreover, the BN was relatively simple for the expert to build and its structure could be used again in this and similar types of problems. © 2006 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "edd6fb76f672e00b14935094cb0242d0",
"text": "Despite widespread interests in reinforcement-learning for task-oriented dialogue systems, several obstacles can frustrate research and development progress. First, reinforcement learners typically require interaction with the environment, so conventional dialogue corpora cannot be used directly. Second, each task presents specific challenges, requiring separate corpus of task-specific annotated data. Third, collecting and annotating human-machine or human-human conversations for taskoriented dialogues requires extensive domain knowledge. Because building an appropriate dataset can be both financially costly and time-consuming, one popular approach is to build a user simulator based upon a corpus of example dialogues. Then, one can train reinforcement learning agents in an online fashion as they interact with the simulator. Dialogue agents trained on these simulators can serve as an effective starting point. Once agents master the simulator, they may be deployed in a real environment to interact with humans, and continue to be trained online. To ease empirical algorithmic comparisons in dialogues, this paper introduces a new, publicly available simulation framework, where our simulator, designed for the movie-booking domain, leverages both rules and collected data. The simulator supports two tasks: movie ticket booking and movie seeking. Finally, we demonstrate several agents and detail the procedure to add and test your own agent in the proposed framework.",
"title": ""
},
{
"docid": "a28c5732d2df003e76464e4fc65334e3",
"text": "Fingerprint identification is based on two basic premises: (i) persistence: the basic characteristics of fingerprints do not change with time; and (ii) individuality: the fingerprint is unique to an individual. The validity of the first premise has been established by the anatomy and morphogenesis of friction ridge skin. While the second premise has been generally accepted to be true based on empirical results, the underlying scientific basis of fingerprint individuality has not been formally established. As a result, the validity of fingerprint evidence is now being challenged in several court cases. A scientific basis for establishing fingerprint individuality will not only result in the admissibility of fingerprint identification in the courts of law but will also establish an upper bound on the performance of an automatic fingerprint verification system. We address the problem of fingerprint individuality by quantifying the amount of information available in minutiae features to establish a correspondence between two fingerprint images. We derive an expression which estimates the probability of a false correspondence between minutiae-based representations from two arbitrary fingerprints belonging to different fingers. For example, the probability that a fingerprint with 36 minutiae points will share 12 minutiae points with another arbitrarily chosen fingerprint with 36 minutiae ∗An earlier version of this paper appeared in the Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), pp. 805-812, Hawaii, December 11-13, 2001. †Corresponding Author",
"title": ""
}
] |
scidocsrr
|
f9b7547746046886ca65804f7ffe1405
|
ASPIER: An Automated Framework for Verifying Security Protocol Implementations
|
[
{
"docid": "2a60bb7773d2e5458de88d2dc0e78e54",
"text": "Many system errors do not emerge unless some intricate sequence of events occurs. In practice, this means that most systems have errors that only trigger after days or weeks of execution. Model checking [4] is an effective way to find such subtle errors. It takes a simplified description of the code and exhaustively tests it on all inputs, using techniques to explore vast state spaces efficiently. Unfortunately, while model checking systems code would be wonderful, it is almost never done in practice: building models is just too hard. It can take significantly more time to write a model than it did to write the code. Furthermore, by checking an abstraction of the code rather than the code itself, it is easy to miss errors.The paper's first contribution is a new model checker, CMC, which checks C and C++ implementations directly, eliminating the need for a separate abstract description of the system behavior. This has two major advantages: it reduces the effort to use model checking, and it reduces missed errors as well as time-wasting false error reports resulting from inconsistencies between the abstract description and the actual implementation. In addition, changes in the implementation can be checked immediately without updating a high-level description.The paper's second contribution is demonstrating that CMC works well on real code by applying it to three implementations of the Ad-hoc On-demand Distance Vector (AODV) networking protocol [7]. We found 34 distinct errors (roughly one bug per 328 lines of code), including a bug in the AODV specification itself. Given our experience building systems, it appears that the approach will work well in other contexts, and especially well for other networking protocols.",
"title": ""
},
{
"docid": "d1c46994c5cfd59bdd8d52e7d4a6aa83",
"text": "Current software attacks often build on exploits that subvert machine-code execution. The enforcement of a basic safety property, Control-Flow Integrity (CFI), can prevent such attacks from arbitrarily controlling program behavior. CFI enforcement is simple, and its guarantees can be established formally even with respect to powerful adversaries. Moreover, CFI enforcement is practical: it is compatible with existing software and can be done efficiently using software rewriting in commodity systems. Finally, CFI provides a useful foundation for enforcing further security policies, as we demonstrate with efficient software implementations of a protected shadow call stack and of access control for memory regions.",
"title": ""
},
{
"docid": "7d634a9abe92990de8cb41a78c25d2cc",
"text": "We present a new automatic cryptographic protocol verifier based on a simple representation of the protocol by Prolog rules, and on a new efficient algorithm that determines whether a fact can be proved from these rules or not. This verifier proves secrecy properties of the protocols. Thanks to its use of unification, it avoids the problem of the state space explosion. Another advantage is that we do not need to limit the number of runs of the protocol to analyze it. We have proved the correctness of our algorithm, and have implemented it. The experimental results show that many examples of protocols of the literature, including Skeme [24], can be analyzed by our tool with very small resources: the analysis takes from less than 0.1 s for simple protocols to 23 s for the main mode of Skeme. It uses less than 2 Mb of memory in our tests.",
"title": ""
}
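The verifier described above reduces the protocol to Horn clauses and asks whether a fact such as attacker(secret) is derivable from them. The real tool resolves clauses containing variables using unification; the toy below is a purely propositional forward-chaining loop, included only to make the "can this fact be proved from these rules" question concrete, and all fact names in the example are hypothetical.

```python
def derivable(rules, facts, goal):
    """Naive forward chaining over ground Horn clauses.
    rules: list of (premises, conclusion) pairs, where premises is an iterable of facts.
    facts: initial set of known facts.
    Returns True if `goal` becomes derivable from the rules."""
    known = set(facts)
    changed = True
    while changed:                       # iterate to a fixpoint
        changed = False
        for premises, conclusion in rules:
            if conclusion not in known and all(p in known for p in premises):
                known.add(conclusion)
                changed = True
    return goal in known

# Example: the attacker learns s once it holds both enc(s, k) and k.
rules = [({"attacker_enc_s_k", "attacker_k"}, "attacker_s")]
facts = {"attacker_enc_s_k", "attacker_k"}
print(derivable(rules, facts, "attacker_s"))   # True -> secrecy of s fails
```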
] |
[
{
"docid": "a61f2e71e0b68d8f4f79bfa33c989359",
"text": "Model-based testing relies on behavior models for the generation of model traces: input and expected output---test cases---for an implementation. We use the case study of an automotive network controller to assess different test suites in terms of error detection, model coverage, and implementation coverage. Some of these suites were generated automatically with and without models, purely at random, and with dedicated functional test selection criteria. Other suites were derived manually, with and without the model at hand. Both automatically and manually derived model-based test suites detected significantly more requirements errors than hand-crafted test suites that were directly derived from the requirements. The number of detected programming errors did not depend on the use of models. Automatically generated model-based test suites detected as many errors as hand-crafted model-based suites with the same number of tests. A sixfold increase in the number of model-based tests led to an 11% increase in detected errors.",
"title": ""
},
{
"docid": "b062222917050f13c3a17e8de53a6abe",
"text": "Exposed to traditional language learning strategies, students will gradually lose interest in and motivation to not only learn English, but also any language or culture. Hence, researchers are seeking technology-based learning strategies, such as digital game-mediated language learning, to motivate students and improve learning performance. This paper synthesizes the findings of empirical studies focused on the effectiveness of digital games in language education published within the last five years. Nine qualitative, quantitative, and mixed-method studies are collected and analyzed in this paper. The review found that recent empirical research was conducted primarily to examine the effectiveness by measuring language learning outcomes, motivation, and interactions. Weak proficiency was found in vocabulary retention, but strong proficiency was present in communicative skills such as speaking. Furthermore, in general, students reported that they are motivated to engage in language learning when digital games are involved; however, the motivation is also observed to be weak due to the design of the game and/or individual differences. The most effective method used to stimulate interaction language learning process seems to be digital games, as empirical studies demonstrate that it effectively promotes language education. However, significant work is still required to provide clear answers with respect to innovative and effective learning practice.",
"title": ""
},
{
"docid": "3f0f97dfa920d8abf795ba7f48904a3a",
"text": "An important aspect of developing conversational agents is to give a bot the ability to improve through communicating with humans and to learn from the mistakes that it makes. Most research has focused on learning from fixed training sets of labeled data rather than interacting with a dialogue partner in an online fashion. In this paper we explore this direction in a reinforcement learning setting where the bot improves its question-answering ability from feedback a teacher gives following its generated responses. We build a simulator that tests various aspects of such learning in a synthetic environment, and introduce models that work in this regime. Finally, real experiments with Mechanical Turk validate the approach.",
"title": ""
},
{
"docid": "d0c4997c611d8759805d33cf1ad9eef1",
"text": "The automatic evaluation of text-based assessment items, such as short answers or essays, is an open and important research challenge. In this paper, we compare several features for the classification of short open-ended responses to questions related to a large first-year health sciences course. These features include a) traditional n-gram models; b) entity URIs (Uniform Resource Identifier) and c) entity mentions extracted using a semantic annotation API; d) entity mention embeddings based on GloVe, and e) entity URI embeddings extracted from Wikipedia. These features are used in combination with classification algorithms to discriminate correct answers from incorrect ones. Our results show that, on average, n-gram features performed the best in terms of precision and entity mentions in terms of f1-score. Similarly, in terms of accuracy, entity mentions and n-gram features performed the best. Finally, features based on dense vector representations such as entity embeddings and mention embeddings obtained the best f1-score for predicting correct answers.",
"title": ""
},
{
"docid": "14636b427ecdab0b0bc73c1948eb8a08",
"text": "We review research related to the learning of complex motor skills with respect to principles developed on the basis of simple skill learning. Although some factors seem to have opposite effects on the learning of simple and of complex skills, other factors appear to be relevant mainly for the learning of more complex skills. We interpret these apparently contradictory findings as suggesting that situations with low processing demands benefit from practice conditions that increase the load and challenge the performer, whereas practice conditions that result in extremely high load should benefit from conditions that reduce the load to more manageable levels. The findings reviewed here call into question the generalizability of results from studies using simple laboratory tasks to the learning of complex motor skills. They also demonstrate the need to use more complex skills in motor-learning research in order to gain further insights into the learning process.",
"title": ""
},
{
"docid": "7f9640bc22241bb40154bedcfda33655",
"text": "This project aims to detect possible anomalies in the resource consumption of radio base stations within the 4G LTE Radio architecture. This has been done by analyzing the statistical data that each node generates every 15 minutes, in the form of \"performance maintenance counters\". In this thesis, we introduce methods that allow resources to be automatically monitored after software updates, in order to detect any anomalies in the consumption patterns of the different resources compared to the reference period before the update. Additionally, we also attempt to narrow down the origin of anomalies by pointing out parameters potentially linked to the issue.",
"title": ""
},
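The thesis summarized above compares the behaviour of performance maintenance counters after a software update against a reference period recorded before it. A per-counter z-score test against that reference window is one simple baseline in this spirit; it is offered only as an illustrative sketch, not as the method actually used in the thesis.

```python
import numpy as np

def flag_anomalies(reference, current, z_threshold=3.0):
    """Flag counters whose post-upgrade values deviate from the pre-upgrade
    reference period by more than z_threshold standard deviations.
    reference: array of shape (n_intervals, n_counters) from before the update.
    current:   array of shape (n_counters,) or (m_intervals, n_counters) after it."""
    mu = reference.mean(axis=0)
    sigma = reference.std(axis=0) + 1e-9     # avoid division by zero for flat counters
    z = np.abs((current - mu) / sigma)
    return z > z_threshold                   # boolean mask of anomalous counters
```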
{
"docid": "e43a39af20f2e905d0bdb306235c622a",
"text": "This paper presents a fully integrated remotely powered and addressable radio frequency identification (RFID) transponder working at 2.45 GHz. The achieved operating range at 4 W effective isotropically radiated power (EIRP) base-station transmit power is 12 m. The integrated circuit (IC) is implemented in a 0.5 /spl mu/m silicon-on-sapphire technology. A state-of-the-art rectifier design achieving 37% of global efficiency is embedded to supply energy to the transponder. The necessary input power to operate the transponder is about 2.7 /spl mu/W. Reader to transponder communication is obtained using on-off keying (OOK) modulation while transponder to reader communication is ensured using the amplitude shift keying (ASK) backscattering modulation technique. Inductive matching between the antenna and the transponder IC is used to further optimize the operating range.",
"title": ""
},
{
"docid": "5109aa9328094af5e552ed1cab62f09a",
"text": "In this paper, we present a novel approach for human action recognition with histograms of 3D joint locations (HOJ3D) as a compact representation of postures. We extract the 3D skeletal joint locations from Kinect depth maps using Shotton et al.'s method [6]. The HOJ3D computed from the action depth sequences are reprojected using LDA and then clustered into k posture visual words, which represent the prototypical poses of actions. The temporal evolutions of those visual words are modeled by discrete hidden Markov models (HMMs). In addition, due to the design of our spherical coordinate system and the robust 3D skeleton estimation from Kinect, our method demonstrates significant view invariance on our 3D action dataset. Our dataset is composed of 200 3D sequences of 10 indoor activities performed by 10 individuals in varied views. Our method is real-time and achieves superior results on the challenging 3D action dataset. We also tested our algorithm on the MSR Action3D dataset and our algorithm outperforms Li et al. [25] on most of the cases.",
"title": ""
},
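The HOJ3D descriptor above histograms skeletal joints by their angular position, in spherical coordinates, around a reference joint such as the hip center. The sketch below captures that core step in simplified form; the published method additionally aligns the coordinate frame with the hip orientation and uses Gaussian vote weighting over a specific bin layout, neither of which is reproduced here, and the bin counts are illustrative.

```python
import numpy as np

def hoj3d(joints, center, n_azimuth=12, n_inclination=7):
    """Toy HOJ3D-style descriptor from one frame of 3D joint positions.
    joints: array of shape (J, 3); center: reference joint, shape (3,)."""
    v = joints - center                                       # vectors from the reference joint
    r = np.linalg.norm(v, axis=1) + 1e-9
    azimuth = np.arctan2(v[:, 1], v[:, 0])                    # angle in the horizontal plane
    inclination = np.arccos(np.clip(v[:, 2] / r, -1.0, 1.0))  # angle from the vertical axis
    hist, _, _ = np.histogram2d(
        azimuth, inclination,
        bins=[n_azimuth, n_inclination],
        range=[[-np.pi, np.pi], [0.0, np.pi]],
    )
    return hist.ravel() / hist.sum()                          # normalized posture descriptor
```

A sequence of such descriptors, vector-quantized into posture words, is then what the abstract feeds into discrete HMMs for action recognition.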
{
"docid": "cebb70761a891fd1bce7402c10e7266c",
"text": "Abstract: A new approach for mobility, providing an alternative to the private passenger car, by offering the same flexibility but with much less nuisances, is emerging, based on fully automated electric vehicles. A fleet of such vehicles might be an important element in a novel individual, door-to-door, transportation system to the city of tomorrow. For fully automated operation, trajectory planning methods that produce smooth trajectories, with low associated accelerations and jerk, for providing passenger ́s comfort, are required. This paper addresses this problem proposing an approach that consists of introducing a velocity planning stage to generate adequate time sequences for usage in the interpolating curve planners. Moreover, the generated speed profile can be merged into the trajectory for usage in trajectory-tracking tasks like it is described in this paper, or it can be used separately (from the generated 2D curve) for usage in pathfollowing tasks. Three trajectory planning methods, aided by the speed profile planning, are analysed from the point of view of passengers' comfort, implementation easiness, and trajectory tracking.",
"title": ""
},
{
"docid": "5d8fc02f96206da7ccb112866951d4c7",
"text": "Immersive technologies such as augmented reality devices are opening up a new design space for the visual analysis of data. This paper studies the potential of an augmented reality environment for the purpose of collaborative analysis of multidimensional, abstract data. We present ART, a collaborative analysis tool to visualize multidimensional data in augmented reality using an interactive, 3D parallel coordinates visualization. The visualization is anchored to a touch-sensitive tabletop, benefiting from well-established interaction techniques. The results of group-based, expert walkthroughs show that ART can facilitate immersion in the data, a fluid analysis process, and collaboration. Based on the results, we provide a set of guidelines and discuss future research areas to foster the development of immersive technologies as tools for the collaborative analysis of multidimensional data.",
"title": ""
},
{
"docid": "36acc76d232f2f58fcb6b65a1d4027aa",
"text": "Surface measurements of the ear are needed to assess damage in patients with disfigurement or defects of the ears and face. Population norms are useful in calculating the amount of tissue needed to rebuild the ear to adequate size and natural position. Anthropometry proved useful in defining grades of severe, moderate, and mild microtia in 73 patients with various facial syndromes. The division into grades was based on the amount of tissue lost and the degree of asymmetry in the position of the ears. Within each grade the size and position of the ears varied greatly. In almost one-third, the nonoperated microtic ears were symmetrically located, promising the best aesthetic results with the least demanding surgical procedures. In slightly over one-third, the microtic ears were associated with marked horizontal and vertical asymmetries. In cases of horizontal and vertical dislocation exceeding 20 mm, surgical correction of the defective facial framework should precede the building up of a new ear. Data on growth and age of maturation of the ears in the normal population can be useful in choosing the optimal time for ear reconstruction.",
"title": ""
},
{
"docid": "2ae58def943d1ae34e1c62663900d64a",
"text": "This document outlines a method for implementing an eye tracking device as a method of electrical wheelchair control. Through the use of measured gaze points, it is possible to translate a desired movement into a physical one. This form of interface does not only provide a form of transportation for those with severe disability but also allow the user to get a sense of control back into their lives.",
"title": ""
},
{
"docid": "518e0713115bcaac6efc087d4107d95c",
"text": "This paper introduces a device and needed signal processing for high-resolution acoustic imaging in air. The device employs off the shelf audio hardware and linear frequency modulated (LFM) pulse waveform. The image formation is based on the principle of synthetic aperture. The proposed implementation uses inverse filtering method with a unique kernel function for each pixel and focuses a synthetic aperture with no approximations. The method is solid for both far-field and near-field and easily adaptable for different synthetic aperture formation geometries. The proposed imaging is demonstrated via an inverse synthetic aperture formation where the object rotation by a stepper motor provides the required change in aspect angle. Simulated and empirical results are presented. Measurements have been done using a conventional speaker and microphones in an ordinary room with near-field distance and strong static echoes present. The resulting high-resolution 2-D spatial distribution of the acoustic reflectivity provides valuable information for many applications such as object recognition.",
"title": ""
},
{
"docid": "01288eefbf2bc0e8c9dc4b6e0c6d70e9",
"text": "The latest discoveries on diseases and their diagnosis/treatment are mostly disseminated in the form of scientific publications. However, with the rapid growth of the biomedical literature and a high level of variation and ambiguity in disease names, the task of retrieving disease-related articles becomes increasingly challenging using the traditional keywordbased approach. An important first step for any disease-related information extraction task in the biomedical literature is the disease mention recognition task. However, despite the strong interest, there has not been enough work done on disease name identification, perhaps because of the difficulty in obtaining adequate corpora. Towards this aim, we created a large-scale disease corpus consisting of 6900 disease mentions in 793 PubMed citations, derived from an earlier corpus. Our corpus contains rich annotations, was developed by a team of 12 annotators (two people per annotation) and covers all sentences in a PubMed abstract. Disease mentions are categorized into Specific Disease, Disease Class, Composite Mention and Modifier categories. When used as the gold standard data for a state-of-the-art machine-learning approach, significantly higher performance can be found on our corpus than the previous one. Such characteristics make this disease name corpus a valuable resource for mining disease-related information from biomedical text. The NCBI corpus is available for download at http://www.ncbi.nlm.nih.gov/CBBresearch/Fe llows/Dogan/disease.html.",
"title": ""
},
{
"docid": "99f66f4ff6a8548a4cbdac39d5f54cc4",
"text": "Dissolution tests that can predict the in vivo performance of drug products are usually called biorelevant dissolution tests. Biorelevant dissolution testing can be used to guide formulation development, to identify food effects on the dissolution and bioavailability of orally administered drugs, and to identify solubility limitations and stability issues. To develop a biorelevant dissolution test for oral dosage forms, the physiological conditions in the gastrointestinal (GI) tract that can affect drug dissolution are taken into consideration according to the properties of the drug and dosage form. A variety of biorelevant methods in terms of media and hydrodynamics to simulate the contents and the conditions of the GI tract are presented. The ability of biorelevant dissolution methods to predict in vivo performance and generate successful in vitro–in vivo correlations (IVIVC) for oral formulations are also discussed through several studies.",
"title": ""
},
{
"docid": "cda5c6908b4f52728659f89bb082d030",
"text": "Until a few years ago the diagnosis of hair shaft disorders was based on light microscopy or scanning electron microscopy on plucked or cut samples of hair. Dermatoscopy is a new fast, noninvasive, and cost-efficient technique for easy in-office diagnosis of all hair shaft abnormalities including conditions such as pili trianguli and canaliculi that are not recognizable by examining hair shafts under the light microscope. It can also be used to identify disease limited to the eyebrows or eyelashes. Dermatoscopy allows for fast examination of the entire scalp and is very helpful to identify the affected hair shafts when the disease is focal.",
"title": ""
},
{
"docid": "561320dd717f1a444735dfa322dfbd31",
"text": "IEEE 802.11 based WLAN systems have gained interest to be used in the military and public authority environments, where the radio conditions can be harsh due to intentional jamming. The radio environment can be difficult also in commercial and civilian deployments since the unlicensed frequency bands are crowded. To study these problems, we built a test bed with a controlled signal path to measure the effects of different interfering signals to WLAN communications. We use continuous wideband noise jamming as the point of comparison, and focus on studying the effect of pulsed jamming and frequency sweep jamming. In addition, we consider also medium access control (MAC) interference. Based on the results, WLAN systems do not seem to be sensitive to the tested short noise jamming pulses. Under longer pulses, the effects are seen, and long data frames are more vulnerable to jamming than short ones. In fact, even a small amount of long frames in a data stream can ruin the performance of the whole link. Under frequency sweep jamming, slow sweeps with narrowband jamming signals can be quite harmful to WLAN communications. The results of MAC jamming show significant variation in performance between the different devices: The clear channel assessment (CCA) mechanism of some devices can be jammed very easily by using WLAN-like jamming signals. As a side product, the study also revealed some countermeasures against jamming.",
"title": ""
},
{
"docid": "727a97b993098aa1386e5bfb11a99d4b",
"text": "Inevitably, reading is one of the requirements to be undergone. To improve the performance and quality, someone needs to have something new every day. It will suggest you to have more inspirations, then. However, the needs of inspirations will make you searching for some sources. Even from the other people experience, internet, and many books. Books and internet are the recommended media to help you improving your quality and performance.",
"title": ""
},
{
"docid": "a920ed7775a73791946eb5610387bc23",
"text": "A limiting factor for photosynthetic organisms is their light-harvesting efficiency, that is the efficiency of their conversion of light energy to chemical energy. Small modifications or variations of chlorophylls allow photosynthetic organisms to harvest sunlight at different wavelengths. Oxygenic photosynthetic organisms usually utilize only the visible portion of the solar spectrum. The cyanobacterium Acaryochloris marina carries out oxygenic photosynthesis but contains mostly chlorophyll d and only traces of chlorophyll a. Chlorophyll d provides a potential selective advantage because it enables Acaryochloris to use infrared light (700-750 nm) that is not absorbed by chlorophyll a. Recently, an even more red-shifted chlorophyll termed chlorophyll f has been reported. Here, we discuss using modified chlorophylls to extend the spectral region of light that drives photosynthetic organisms.",
"title": ""
},
{
"docid": "fb8638c46ca5bb4a46b1556a2504416d",
"text": "In this paper we investigate how a VANET-based traffic information system can overcome the two key problems of strictly limited bandwidth and minimal initial deployment. First, we present a domain specific aggregation scheme in order to minimize the required overall bandwidth. Then we propose a genetic algorithm which is able to identify good positions for static roadside units in order to cope with the highly partitioned nature of a VANET in an early deployment stage. A tailored toolchain allows to optimize the placement with respect to an application-centric objective function, based on travel time savings. By means of simulation we assess the performance of the resulting traffic information system and the optimization strategy.",
"title": ""
}
] |
scidocsrr
|
311172e6662a2d88ccafb0f07613bf35
|
Multiple Arousal Theory and Daily-Life Electrodermal Activity Asymmetry
|
[
{
"docid": "d76e649c6daeb71baf377c2b36623e29",
"text": "The somatic marker hypothesis proposes that decision-making is a process that depends on emotion. Studies have shown that damage of the ventromedial prefrontal (VMF) cortex precludes the ability to use somatic (emotional) signals that are necessary for guiding decisions in the advantageous direction. However, given the role of the amygdala in emotional processing, we asked whether amygdala damage also would interfere with decision-making. Furthermore, we asked whether there might be a difference between the roles that the amygdala and VMF cortex play in decision-making. To address these two questions, we studied a group of patients with bilateral amygdala, but not VMF, damage and a group of patients with bilateral VMF, but not amygdala, damage. We used the \"gambling task\" to measure decision-making performance and electrodermal activity (skin conductance responses, SCR) as an index of somatic state activation. All patients, those with amygdala damage as well as those with VMF damage, were (1) impaired on the gambling task and (2) unable to develop anticipatory SCRs while they pondered risky choices. However, VMF patients were able to generate SCRs when they received a reward or a punishment (play money), whereas amygdala patients failed to do so. In a Pavlovian conditioning experiment the VMF patients acquired a conditioned SCR to visual stimuli paired with an aversive loud sound, whereas amygdala patients failed to do so. The results suggest that amygdala damage is associated with impairment in decision-making and that the roles played by the amygdala and VMF in decision-making are different.",
"title": ""
}
] |
[
{
"docid": "1ace2a8a8c6b4274ac0891e711d13190",
"text": "Recent music information retrieval (MIR) research pays increasing attention to music classification based on moods expressed by music pieces. The first Audio Mood Classification (AMC) evaluation task was held in the 2007 running of the Music Information Retrieval Evaluation eXchange (MIREX). This paper describes important issues in setting up the task, including dataset construction and ground-truth labeling, and analyzes human assessments on the audio dataset, as well as system performances from various angles. Interesting findings include system performance differences with regard to mood clusters and the levels of agreement amongst human judgments regarding mood labeling. Based on these analyses, we summarize experiences learned from the first community scale evaluation of the AMC task and propose recommendations for future AMC and similar evaluation tasks.",
"title": ""
},
{
"docid": "305ae3e7a263bb12f7456edca94c06ca",
"text": "We study the effects of changes in uncertainty about future fiscal policy on aggregate economic activity. In light of large fiscal deficits and high public debt levels in the U.S., a fiscal consolidation seems inevitable. However, there is notable uncertainty about the policy mix and timing of such a budgetary adjustment. To evaluate the consequences of the increased uncertainty, we first estimate tax and spending processes for the U.S. that allow for timevarying volatility. We then feed these processes into an otherwise standard New Keynesian business cycle model calibrated to the U.S. economy. We find that fiscal volatility shocks can have a sizable adverse effect on economic activity.",
"title": ""
},
{
"docid": "7437f0c8549cb8f73f352f8043a80d19",
"text": "Graphene is considered as one of leading candidates for gas sensor applications in the Internet of Things owing to its unique properties such as high sensitivity to gas adsorption, transparency, and flexibility. We present self-activated operation of all graphene gas sensors with high transparency and flexibility. The all-graphene gas sensors which consist of graphene for both sensor electrodes and active sensing area exhibit highly sensitive, selective, and reversible responses to NO2 without external heating. The sensors show reliable operation under high humidity conditions and bending strain. In addition to these remarkable device performances, the significantly facile fabrication process enlarges the potential of the all-graphene gas sensors for use in the Internet of Things and wearable electronics.",
"title": ""
},
{
"docid": "efc7adc3963e7ccb0e2f1297a81005b2",
"text": "data types Reasoning Englis guitarists Academic degrees Companies establishe... Cubes Internet radio Loc l authorities ad... Figure 5: Topic coverage of LAK data graph for the individual resources. 5. RELATED WORK Cobo et al.[3] presents an analysis of student participation in online discussion forums using an agglomerative hierarchical clustering algorithm, and explore the profiles to find relevant activity patterns and detect different student profiles. Barber et al. [1] uses a predictive analytic model to prevent students from failing in courses. They analyze several variables, such as grades, age, attendance and others, that can impede the student learning.Kahn et al. [7] present a long-term study using hierarchical cluster analysis, t-tests and Pearson correlation that identified seven behavior patterns of learners in online discussion forums based on their access. García-Solórzano et al. [6] introduce a new educational monitoring tool that helps tutors to monitor the development of the students. Unlike traditional monitoring systems, they propose a faceted browser visualization tool to facilitate the analysis of the student progress. Glass [8] provides a versatile visualization tool to enable the creation of additional visualizations of data collections. Essa et al. [4] utilize predictive models to identify learners academically at-risk. They present the problem with an interesting analogy to the patient-doctor workflow, where first they identify the problem, analyze the situation and then prescribe courses that are indicated to help the student to succeed. Siadaty et al.[13] present the Learn-B environment, a hub system that captures information about the users usage in different softwares and learning activities in their workplace and present to the user feedback to support future decisions, planning and accompanies them in the learning process. In the same way, McAuley et al. [9] propose a visual analytics to support organizational learning in online communities. They present their analysis through an adjacency matrix and an adjustable timeline that show the communication-actions of the users and is able to organize it into temporal patterns. Bramucci et al. [2] presents Sherpa an academic recommendation system to support students on making decisions. For instance, using the learner profiles they recommend courses or make interventions in case that students are at-risk. In the related work, we showed how different perspectives and the necessity of new tools and methods to make data available and help decision-makers. 6. CONCLUSION In this paper we presented the main features of the Cite4Me Web application. Cite4Me makes use of several data sources to provide information for users interested on scientific publications and its applications. Additionally, we provided a general framework on data discovery and correlated resources based on a constructed feature set, consisting of items extracted from reference datasets. It made possible for users, to search and relate resources from a dataset with other resources offered as Linked Data. For more information about the Cite4Me Web application refer to http://www.cite4me.com. 7. REFERENCES [1] R. Barber and M. Sharkey. Course correction: using analytics to predict course success. In Proc. of the 2nd International Conference on Learning Analytics and Knowledge, LAK ’12, pages 259–262, New York, NY, USA, 2012. ACM. [2] R. Bramucci and J. Gaston. Sherpa: increasing student success with a recommendation engine. In Proc. 
of the 2nd International Conference on Learning Analytics and Knowledge, LAK ’12, pages 82–83, New York, NY, USA, 2012. ACM. [3] G. Cobo, D. García-Solórzano, J. A. Morán, E. Santamaría, C. Monzo, and J. Melenchón. Using agglomerative hierarchical clustering to model learner participation profiles in online discussion forums. In Proc. of the 2nd International Conference on Learning Analytics and Knowledge, LAK ’12, pages 248–251, New York, NY, USA, 2012. ACM. [4] A. Essa and H. Ayad. Student success system: risk analytics and data visualization using ensembles of predictive models. In Proc. of the 2nd International Conference on Learning Analytics and Knowledge, LAK ’12, pages 158–161, New York, NY, USA, 2012. ACM. [5] E. Gabrilovich and S. Markovitch. Computing semantic relatedness using wikipedia-based explicit semantic analysis. In Proc. of the 20th international joint conference on Artifical intelligence, IJCAI’07, pages 1606–1611, San Francisco, CA, USA, 2007. Morgan Kaufmann Pub. Inc. [6] D. García-Solórzano, G. Cobo, E. Santamaría, J. A. Morán, C. Monzo, and J. Melenchón. Educational monitoring tool based on faceted browsing and data portraits. In Proc. of the 2nd International Conference on Learning Analytics and Knowledge, LAK ’12, pages 170–178, New York, NY, USA, 2012. ACM. [7] T. M. Khan, F. Clear, and S. S. Sajadi. The relationship between educational performance and online access routines: analysis of students’ access to an online discussion forum. In Proc. of the 2nd International Conference on Learning Analytics and Knowledge, LAK ’12, pages 226–229, New York, NY, USA, 2012. ACM. [8] D. Leony, A. Pardo, L. de la Fuente Valentín, D. S. de Castro, and C. D. Kloos. Glass: a learning analytics visualization tool. In Proc. of the 2nd International Conference on Learning Analytics and Knowledge, LAK ’12, pages 162–163, New York, NY, USA, 2012. ACM. [9] J. McAuley, A. O’Connor, and D. Lewis. Exploring reflection in online communities. In Proc. of the 2nd International Conference on Learning Analytics and Knowledge, LAK ’12, pages 102–110, New York, NY, USA, 2012. ACM. [10] P. N. Mendes, M. Jakob, A. García-Silva, and C. Bizer. Dbpedia spotlight: shedding light on the web of documents. In Proc. of the 7th International Conference on Semantic Systems, I-Semantics ’11, pages 1–8, New York, NY, USA, 2011. ACM. [11] B. Pereira Nunes, S. Dietze, M. A. Casanova, R. Kawase, B. Fetahu, and W. Nejdl. Combining a co-occurrence-based and a semantic measure for entity linking. In ESWC, 2013 (to appear). [12] B. Pereira Nunes, R. Kawase, S. Dietze, D. Taibi, M. A. Casanova, and W. Nejdl. Can entities be friends? In G. Rizzo, P. Mendes, E. Charton, S. Hellmann, and A. Kalyanpur, editors, Proc. of the Web of Linked Entities Workshop in conjuction with the 11th International Semantic Web Conference, volume 906 of CEUR-WS.org, pages 45–57, Nov. 2012. [13] M. Siadaty, D. Gašević, J. Jovanović, N. Milikić, Z. Jeremić, L. Ali, A. Giljanović, and M. Hatala. Learn-b: a social analytics-enabled tool for self-regulated workplace learning. In Proc. of the 2nd International Conference on Learning Analytics and Knowledge, LAK ’12, pages 115–119, New York, NY, USA, 2012. ACM. [14] C. van Rijsbergen, S. Robertson, and M. Porter. New models in probabilistic information retrieval. 1980.",
"title": ""
},
{
"docid": "cf26c4f612a23ec26b284a6b243de7f4",
"text": "Grit-perseverance and passion for long-term goals-has been shown to be a significant predictor of academic success, even after controlling for other personality factors. Here, for the first time, we use a U.K.-representative sample and a genetically sensitive design to unpack the etiology of Grit and its prediction of academic achievement in comparison to well-established personality traits. For 4,642 16-year-olds (2,321 twin pairs), we used the Grit-S scale (perseverance of effort and consistency of interest), along with the Big Five personality traits, to predict grades on the General Certificate of Secondary Education (GCSE) exams, which are administered U.K.-wide at the end of compulsory education. Twin analyses of Grit perseverance yielded a heritability estimate of 37% (20% for consistency of interest) and no evidence for shared environmental influence. Personality, primarily conscientiousness, predicts about 6% of the variance in GCSE grades, but Grit adds little to this prediction. Moreover, multivariate twin analyses showed that roughly two-thirds of the GCSE prediction is mediated genetically. Grit perseverance of effort and Big Five conscientiousness are to a large extent the same trait both phenotypically (r = 0.53) and genetically (genetic correlation = 0.86). We conclude that the etiology of Grit is highly similar to other personality traits, not only in showing substantial genetic influence but also in showing no influence of shared environmental factors. Personality significantly predicts academic achievement, but Grit adds little phenotypically or genetically to the prediction of academic achievement beyond traditional personality factors, especially conscientiousness. (PsycINFO Database Record",
"title": ""
},
{
"docid": "997993e389cdb1e40714e20b96927890",
"text": "Developer support forums are becoming more popular than ever. Crowdsourced knowledge is an essential resource for many developers yet it can raise concerns about the quality of the shared content. Most existing research efforts address the quality of answers posted by Q&A community members. In this paper, we explore the quality of questions and propose a method of predicting the score of questions on Stack Overflow based on sixteen factors related to questions' format, content and interactions that occur in the post. We performed an extensive investigation to understand the relationship between the factors and the scores of questions. The multiple regression analysis shows that the question's length of the code, accepted answer score, number of tags and the count of views, comments and answers are statistically significantly associated with the scores of questions. Our findings can offer insights to community-based Q&A sites for improving the content of the shared knowledge.",
"title": ""
},
{
"docid": "80947cea68851bc522d5ebf8a74e28ab",
"text": "Advertising is key to the business model of many online services. Personalization aims to make ads more relevant for users and more effective for advertisers. However, relatively few studies into user attitudes towards personalized ads are available. We present a San Francisco Bay Area survey (N=296) and in-depth interviews (N=24) with teens and adults. People are divided and often either (strongly) agreed or disagreed about utility or invasiveness of personalized ads and associated data collection. Mobile ads were reported to be less relevant than those on desktop. Participants explained ad personalization based on their personal previous behaviors and guesses about demographic targeting. We describe both metrics improvements as well as opportunities for improving online advertising by focusing on positive ad interactions reported by our participants, such as personalization focused not just on product categories but specific brands and styles, awareness of life events, and situations in which ads were useful or even inspirational.",
"title": ""
},
{
"docid": "1aaacf3d7d6311a118581d836f78d142",
"text": "One of the most powerful features of SQL is the use of nested queries. Most research work on the optimization of nested queries focuses on aggregate subqueries. However, the solutions proposed for non-aggregate subqueries are still limited, especially for queries having multiple subqueries and null values. In this paper, we show that existing approaches to queries containing non-aggregate subqueries proposed in the literature (including rewrites) are not adequate. We then propose a new efficient approach, the nested relational approach, based on the nested relational algebra. Our approach directly unnests non-aggregate subqueries using hash joins, and treats all subqueries in a uniform manner, being able to deal with nested queries of any type and any level. We report on experimental work that confirms that existing approaches have difficulties dealing with non-aggregate subqueries, and that our approach offers better performance. We also discuss some possibilities for algebraic optimization and the issue of integrating our approach in a relational database system.",
"title": ""
},
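The passage above argues that non-aggregate subqueries can be unnested directly using hash joins rather than nested-loop evaluation. As a rough illustration of that idea (not the paper's actual nested relational algebra, and ignoring the NULL-value subtleties the authors discuss), the sketch below evaluates a `WHERE x IN (SELECT ...)` predicate as a hash semi-join over two in-memory tables; the table contents and column names are invented for the example.

```python
# Illustrative hash semi-join for unnesting a non-aggregate IN-subquery:
#   SELECT o.id FROM orders o
#   WHERE o.customer_id IN (SELECT c.id FROM customers c WHERE c.country = 'BR')
# Table contents and column names are made up for the sketch.

orders = [
    {"id": 1, "customer_id": 10},
    {"id": 2, "customer_id": 11},
    {"id": 3, "customer_id": 12},
]
customers = [
    {"id": 10, "country": "BR"},
    {"id": 11, "country": "US"},
    {"id": 12, "country": "BR"},
]

def hash_semi_join(outer, inner, outer_key, inner_key, inner_pred):
    """Build a hash table on the filtered inner side, then probe it with the outer side."""
    matches = {row[inner_key] for row in inner if inner_pred(row)}  # build phase
    return [row for row in outer if row[outer_key] in matches]      # probe phase

result = hash_semi_join(orders, customers, "customer_id", "id",
                        lambda c: c["country"] == "BR")
print([r["id"] for r in result])  # -> [1, 3]
```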
{
"docid": "c863d82ae2b56202d333ffa5bef5dd59",
"text": "We present an algorithm for finding landmarks along a manifold. These landmarks provide a small set of locations spaced out along the manifold such that they capture the low-dimensional nonlinear structure of the data embedded in the high-dimensional space. The approach does not select points directly from the dataset, but instead we optimize each landmark by moving along the continuous manifold space (as approximated by the data) according to the gradient of an objective function. We borrow ideas from active learning with Gaussian processes to define the objective, which has the property that a new landmark is “repelled” by those currently selected, allowing for exploration of the manifold. We derive a stochastic algorithm for learning with large datasets and show results on several datasets, including the Million Song Dataset and articles from the New York Times.",
"title": ""
},
{
"docid": "288377464cc80eef5c669e5821e3b2b3",
"text": "For a long time, the human genome was considered an intrinsically stable entity; however, it is currently known that our human genome contains many unstable elements consisting of tandem repeat elements, mainly Short tandem repeats (STR), also known as microsatellites or Simple sequence repeats (SSR) (Ellegren, 2000). These sequences involve a repetitive unit of 1-6 bp, forming series with lengths from two to several thousand nucleotides. STR are widely found in proand eukaryotes, including humans. They appear scattered more or less evenly throughout the human genome, accounting for ca. 3% of the entire genome (Sharma et al., 2007). STR are polymorphic but stable in general population; however, repeats can become unstable during DNA replication, resulting in mitotic or meiotic contractions or expansions. STR instability is an important and unique form of mutation that is linked to >40 neurological, neurodegenerative, and neuromuscular disorders (Pearson et al., 2005). In particular, abnormal expansion of trinucleotide repeats (CTG)n, (CGG)n, (CCG)n, (GAA)n, and (CAG)n have been associated with different diseases such as fragile X syndrome, Huntington disease (HD), Dentatorubral-pallidoluysian atrophy (DRPLA), Friedreich ataxia (FA), diverse Spinocerebellar ataxias (SCA), and Myotonic dystrophy type 1 (DM1).",
"title": ""
},
{
"docid": "90b913e3857625f3237ff7a47f675fbb",
"text": "A new approach for the design of UWB hairpin-comb filters is presented. The filters can be designed to possess broad upper stopband characteristics by controlling the overall size of their resonators. The measured frequency characteristics of implemented UWB filters show potential first spurious passbands centered at about six times the fundamental passband center frequencies.",
"title": ""
},
{
"docid": "f9c37f460fc0a4e7af577ab2cbe7045b",
"text": "Declines in various cognitive abilities, particularly executive control functions, are observed in older adults. An important goal of cognitive training is to slow or reverse these age-related declines. However, opinion is divided in the literature regarding whether cognitive training can engender transfer to a variety of cognitive skills in older adults. In the current study, the authors trained older adults in a real-time strategy video game for 23.5 hr in an effort to improve their executive functions. A battery of cognitive tasks, including tasks of executive control and visuospatial skills, were assessed before, during, and after video-game training. The trainees improved significantly in the measures of game performance. They also improved significantly more than the control participants in executive control functions, such as task switching, working memory, visual short-term memory, and reasoning. Individual differences in changes in game performance were correlated with improvements in task switching. The study has implications for the enhancement of executive control processes of older adults.",
"title": ""
},
{
"docid": "bac5b36d7da7199c1bb4815fa0d5f7de",
"text": "During quadrupedal trotting, diagonal pairs of limbs are set down in unison and exert forces on the ground simultaneously. Ground-reaction forces on individual limbs of trotting dogs were measured separately using a series of four force platforms. Vertical and fore-aft impulses were determined for each limb from the force/time recordings. When mean fore-aft acceleration of the body was zero in a given trotting step (steady state), the fraction of vertical impulse on the forelimb was equal to the fraction of body weight supported by the forelimbs during standing (approximately 60 %). When dogs accelerated or decelerated during a trotting step, the vertical impulse was redistributed to the hindlimb or forelimb, respectively. This redistribution of the vertical impulse is due to a moment exerted about the pitch axis of the body by fore-aft accelerating and decelerating forces. Vertical forces exerted by the forelimb and hindlimb resist this pitching moment, providing stability during fore-aft acceleration and deceleration.",
"title": ""
},
{
"docid": "5eb1aa594c3c6210f029b5bbf6acc599",
"text": "Intestinal nematodes affecting dogs, i.e. roundworms, hookworms and whipworms, have a relevant health-risk impact for animals and, for most of them, for human beings. Both dogs and humans are typically infected by ingesting infective stages, (i.e. larvated eggs or larvae) present in the environment. The existence of a high rate of soil and grass contamination with infective parasitic elements has been demonstrated worldwide in leisure, recreational, public and urban areas, i.e. parks, green areas, bicycle paths, city squares, playgrounds, sandpits, beaches. This review discusses the epidemiological and sanitary importance of faecal pollution with canine intestinal parasites in urban environments and the integrated approaches useful to minimize the risk of infection in different settings.",
"title": ""
},
{
"docid": "b52f9f47b972e797f11029111f5200b3",
"text": "Sentiment lexicons have been leveraged as a useful source of features for sentiment analysis models, leading to the state-of-the-art accuracies. On the other hand, most existing methods use sentiment lexicons without considering context, typically taking the count, sum of strength, or maximum sentiment scores over the whole input. We propose a context-sensitive lexicon-based method based on a simple weighted-sum model, using a recurrent neural network to learn the sentiments strength, intensification and negation of lexicon sentiments in composing the sentiment value of sentences. Results show that our model can not only learn such operation details, but also give significant improvements over state-of-the-art recurrent neural network baselines without lexical features, achieving the best results on a Twitter benchmark.",
"title": ""
},
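The passage above describes a weighted-sum model in which a recurrent network learns how much each lexicon word's prior sentiment should count in context. The sketch below is only a schematic stand-in: instead of a trained RNN it uses two hand-written context rules (negation and intensification) to produce the per-word weights before taking the weighted sum; the lexicon entries and rules are invented for illustration.

```python
# Schematic weighted-sum sentiment scoring with hand-written context weights.
# In the paper the weights come from a trained recurrent network; here they are
# produced by two toy rules (negation flips, intensifiers amplify) purely to
# illustrate the composition step.

LEXICON = {"good": 1.0, "great": 2.0, "bad": -1.0, "terrible": -2.0}  # toy prior scores
NEGATORS = {"not", "never"}
INTENSIFIERS = {"very", "really"}

def sentence_score(tokens):
    score = 0.0
    for i, tok in enumerate(tokens):
        if tok not in LEXICON:
            continue
        weight = 1.0
        # look at a small left context window for negation / intensification
        for ctx in tokens[max(0, i - 2):i]:
            if ctx in NEGATORS:
                weight *= -1.0
            elif ctx in INTENSIFIERS:
                weight *= 1.5
        score += weight * LEXICON[tok]
    return score

print(sentence_score("the movie was very good".split()))   # 1.5
print(sentence_score("the movie was not good".split()))    # -1.0
```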
{
"docid": "472f59fd9017e3c03650619c4f0201f3",
"text": "Software Defined Networking (SDN) introduces a new communication network management paradigm and has gained much attention from academia and industry. However, the centralized nature of SDN is a potential vulnerability to the system since attackers may launch denial of services (DoS) attacks against the controller. Existing solutions limit requests rate to the controller by dropping overflowed requests, but they also drop legitimate requests to the controller. To address this problem, we propose FlowRanger, a buffer prioritizing solution for controllers to handle routing requests based on their likelihood to be attacking requests, which derives the trust values of the requesting sources. Based on their trust values, FlowRanger classifies routing requests into multiple buffer queues with different priorities. Thus, attacking requests are served with a lower priority than regular requests. Our simulation results demonstrates that FlowRanger can significantly enhance the request serving rate of regular users under DoS attacks against the controller. To the best of our knowledge, our work is the first solution to battle against controller DoS attacks on the controller side.",
"title": ""
},
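FlowRanger's core idea, as summarized above, is to attach a trust value to each request source and serve higher-trust buffer queues first. The sketch below shows one plausible way to structure that buffering with three priority queues; the trust thresholds and the trust-update rule are assumptions for illustration, not the paper's exact formulas.

```python
from collections import deque, defaultdict

# Three priority buffers: requests from trusted sources are served first.
# Thresholds and the trust bookkeeping below are illustrative assumptions.
queues = {"high": deque(), "medium": deque(), "low": deque()}
trust = defaultdict(lambda: 0.5)   # unseen sources start at a neutral trust value

def enqueue(src, request):
    t = trust[src]
    bucket = "high" if t >= 0.7 else "medium" if t >= 0.4 else "low"
    queues[bucket].append((src, request))

def serve_one():
    """Controller pops from the most trusted non-empty buffer."""
    for bucket in ("high", "medium", "low"):
        if queues[bucket]:
            src, request = queues[bucket].popleft()
            # toy trust update: sources whose requests get served gain a little trust
            trust[src] = min(1.0, trust[src] + 0.05)
            return src, request
    return None

trust["10.0.0.1"] = 0.8            # known well-behaved source (illustrative)
trust["10.0.0.9"] = 0.1            # source already flagged as suspicious
enqueue("10.0.0.9", "packet_in#1")
enqueue("10.0.0.1", "packet_in#2")
print(serve_one())                 # ('10.0.0.1', 'packet_in#2') -- trusted source served first
```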
{
"docid": "1967de1be0b095b4a59a5bb0fdc403c0",
"text": "As the popularity of content sharing websites has increased, they have become targets for spam, phishing and the distribution of malware. On YouTube, the facility for users to post comments can be used by spam campaigns to direct unsuspecting users to malicious third-party websites. In this paper, we demonstrate how such campaigns can be tracked over time using network motif profiling, i.e. by tracking counts of indicative network motifs. By considering all motifs of up to five nodes, we identify discriminating motifs that reveal two distinctly different spam campaign strategies, and present an evaluation that tracks two corresponding active campaigns.",
"title": ""
},
{
"docid": "8877d6753d6b7cd39ba36c074ca56b00",
"text": "Perhaps the most fundamental application of affective computing will be Human-Computer Interaction (HCI) in which the computer should have the ability to detect and track the user's affective states, and make corresponding feedback. The human multi-sensor affect system defines the expectation of multimodal affect analyzer. In this paper, we present our efforts toward audio-visual HCI-related affect recognition. With HCI applications in mind, we take into account some special affective states which indicate users' cognitive/motivational states. Facing the fact that a facial expression is influenced by both an affective state and speech content, we apply a smoothing method to extract the information of the affective state from facial features. In our fusion stage, a voting method is applied to combine audio and visual modalities so that the final affect recognition accuracy is greatly improved. We test our bimodal affect recognition approach on 38 subjects with 11 HCI-related affect states. The extensive experimental results show that the average person-dependent affect recognition accuracy is almost 90% for our bimodal fusion.",
"title": ""
},
{
"docid": "6dd1df4e520f5858d48db9860efb63a7",
"text": "This paper proposes single-phase direct pulsewidth modulation (PWM) buck-, boost-, and buck-boost-type ac-ac converters. The proposed converters are implemented with a series-connected freewheeling diode and MOSFET pair, which allows to minimize the switching and conduction losses of the semiconductor devices and resolves the reverse-recovery problem of body diode of MOSFET. The proposed converters are highly reliable because they can solve the shoot-through and dead-time problems of traditional ac-ac converters without voltage/current sensing module, lossy resistor-capacitor (RC) snubbers, or bulky coupled inductors. In addition, they can achieve high obtainable voltage gain and also produce output voltage waveforms of good quality because they do not use lossy snubbers. Unlike the recently developed switching cell (SC) ac-ac converters, the proposed ac-ac converters have no circulating current and do not require bulky coupled inductors; therefore, the total losses, current stresses, and magnetic volume are reduced and efficiency is improved. Detailed analysis and experimental results are provided to validate the novelty and merit of the proposed converters.",
"title": ""
},
{
"docid": "becbcb6ca7ac87a3e43dbc65748b258a",
"text": "We present Mean Box Pooling, a novel visual representation that pools over CNN representations of a large number, highly overlapping object proposals. We show that such representation together with nCCA, a successful multimodal embedding technique, achieves state-of-the-art performance on the Visual Madlibs task. Moreover, inspired by the nCCA’s objective function, we extend classical CNN+LSTM approach to train the network by directly maximizing the similarity between the internal representation of the deep learning architecture and candidate answers. Again, such approach achieves a significant improvement over the prior work that also uses CNN+LSTM approach on Visual Madlibs.",
"title": ""
}
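Mean Box Pooling, as described in the passage above, averages CNN descriptors extracted from a large number of highly overlapping object proposals into a single image representation. The sketch below shows only the pooling step itself, with random stand-in features; extracting the per-proposal CNN activations and the nCCA embedding are outside its scope, and the feature dimension is an assumption.

```python
import numpy as np

def mean_box_pooling(proposal_features):
    """Average per-proposal CNN descriptors (n_proposals x feat_dim) into one vector."""
    feats = np.asarray(proposal_features, dtype=np.float64)
    return feats.mean(axis=0)

# Stand-in for CNN activations of ~100 highly overlapping proposals of one image.
rng = np.random.default_rng(0)
proposal_feats = rng.normal(size=(100, 2048))   # feature dimension assumed for the sketch
image_repr = mean_box_pooling(proposal_feats)
print(image_repr.shape)                         # (2048,)
```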
] |
scidocsrr
|
76a79f93307b188952b2fe5e0210b0fe
|
I want to answer; who has a question?: Yahoo! answers recommender system
|
[
{
"docid": "e870f2fe9a26b241bdeca882b6186169",
"text": "Some people may be laughing when looking at you reading in your spare time. Some may be admired of you. And some may want be like you who have reading hobby. What about your own feel? Have you felt right? Reading is a need and a hobby at once. This condition is the on that will make you feel that you must read. If you know are looking for the book enPDFd recommender systems handbook as the choice of reading, you can find here.",
"title": ""
}
] |
[
{
"docid": "e6ff5af0a9d6105a60771a2c447fab5e",
"text": "Object detection and classification in 3D is a key task in Automated Driving (AD). LiDAR sensors are employed to provide the 3D point cloud reconstruction of the surrounding environment, while the task of 3D object bounding box detection in real time remains a strong algorithmic challenge. In this paper, we build on the success of the oneshot regression meta-architecture in the 2D perspective image space and extend it to generate oriented 3D object bounding boxes from LiDAR point cloud. Our main contribution is in extending the loss function of YOLO v2 to include the yaw angle, the 3D box center in Cartesian coordinates and the height of the box as a direct regression problem. This formulation enables real-time performance, which is essential for automated driving. Our results are showing promising figures on KITTI benchmark, achieving real-time performance (40 fps) on Titan X GPU.",
"title": ""
},
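The passage above extends the YOLO v2 regression loss with the yaw angle, the 3D box centre and the box height. A minimal sketch of such a combined squared-error term is given below; the exact weighting, grid encoding and anchor handling of the actual model are omitted, and the sine/cosine encoding of yaw is one common choice rather than necessarily the authors' own formulation.

```python
import numpy as np

def box3d_regression_loss(pred, target, w_loc=1.0, w_size=1.0, w_yaw=1.0):
    """Squared-error loss over a 3D box parameterised as (x, y, z, w, l, h, yaw).

    `pred` and `target` are arrays of shape (N, 7). Yaw is compared through its
    sine/cosine to avoid the 2*pi wrap-around; this encoding is an assumption.
    """
    pred, target = np.asarray(pred, float), np.asarray(target, float)
    loc_err = np.sum((pred[:, 0:3] - target[:, 0:3]) ** 2, axis=1)      # 3D centre
    size_err = np.sum((pred[:, 3:6] - target[:, 3:6]) ** 2, axis=1)     # w, l, h
    yaw_err = ((np.sin(pred[:, 6]) - np.sin(target[:, 6])) ** 2 +
               (np.cos(pred[:, 6]) - np.cos(target[:, 6])) ** 2)
    return np.mean(w_loc * loc_err + w_size * size_err + w_yaw * yaw_err)

pred   = np.array([[1.0, 2.0, -0.5, 1.8, 4.2, 1.5, 0.30]])
target = np.array([[1.1, 2.1, -0.4, 1.7, 4.0, 1.5, 0.25]])
print(box3d_regression_loss(pred, target))
```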
{
"docid": "22b259233ffe842e91347792bd7b48e0",
"text": "The increase of the complexity and advancement in ecological and environmental sciences encourages scientists across the world to collect data from multiple places, times, and thematic scales to verify their hypotheses. Accumulated over time, such data not only increases in amount, but also in the diversity of the data sources spread around the world. This poses a huge challenge for scientists who have to manually search for information. To alleviate such problems, ONEMercury has recently been implemented as part of the DataONE project to serve as a portal for accessing environmental and observational data across the globe. ONEMercury harvests metadata from the data hosted by multiple repositories and makes it searchable. However, harvested metadata records sometimes are poorly annotated or lacking meaningful keywords, which could affect effective retrieval. Here, we develop algorithms for automatic annotation of metadata. We transform the problem into a tag recommendation problem with a controlled tag library, and propose two variants of an algorithm for recommending tags. Our experiments on four datasets of environmental science metadata records not only show great promises on the performance of our method, but also shed light on the different natures of the datasets.",
"title": ""
},
{
"docid": "c366303728d2a8ee47fe4cbfe67dec24",
"text": "Terrestrial Gamma-ray Flashes (TGFs), discovered in 1994 by the Compton Gamma-Ray Observatory, are high-energy photon bursts originating in the Earth’s atmosphere in association with thunderstorms. In this paper, we demonstrate theoretically that, while TGFs pass through the atmosphere, the large quantities of energetic electrons knocked out by collisions between photons and air molecules generate excited species of neutral and ionized molecules, leading to a significant amount of optical emissions. These emissions represent a novel type of transient luminous events in the vicinity of the cloud tops. We show that this predicted phenomenon illuminates a region with a size notably larger than the TGF source and has detectable levels of brightness. Since the spectroscopic, morphological, and temporal features of this luminous event are closely related with TGFs, corresponding measurements would provide a novel perspective for investigation of TGFs, as well as lightning discharges that produce them.",
"title": ""
},
{
"docid": "5afe9c613da51904d498b282fb1b62df",
"text": "Two types of suspended stripline ultra-wideband bandpass filters are described, one based on a standard lumped element (L-C) filter concept including transmission zeroes to improve the upper passband slope, and a second one consisting of the combination of a low-pass and a high-pass filter.",
"title": ""
},
{
"docid": "e447a0129f01a096f03b16c2ee16c888",
"text": "Many authors use feedforward neural networks for modeling and forecasting time series. Most of these applications are mainly experimental, and it is often difficult to extract a general methodology from the published studies. In particular, the choice of architecture is a tricky problem. We try to combine the statistical techniques of linear and nonlinear time series with the connectionist approach. The asymptotical properties of the estimators lead us to propose a systematic methodology to determine which weights are nonsignificant and to eliminate them to simplify the architecture. This method (SSM or statistical stepwise method) is compared to other pruning techniques and is applied to some artificial series, to the famous Sunspots benchmark, and to daily electrical consumption data.",
"title": ""
},
{
"docid": "884ee23f40ad31f7010f9486b74d9433",
"text": "A streamlined parallel traffic management system (PtMS) is outlined that works alongside a redesigned intelligent transportation system in Qingdao, China. The PtMS's structure provides enhanced control and management support, with increased versatility for use in real-world scenarios.",
"title": ""
},
{
"docid": "4dc8b11b9123c6a25dcf4765d77cb6ca",
"text": "Accurate and reliable information about land use and land cover is essential for change detection and monitoring of the specified area. It is also useful in the updating the geographical information about the area. Over the past decade, a significant amount of research has been conducted concerning the application of different classifier and image fusion technique in this area. In this paper, introductions to the land use and land cover classification techniques are given and the results from a number of different techniques are compared. It has been found that, in general fusion technique perform better than either conventional classifier or supervised/unsupervised classification.",
"title": ""
},
{
"docid": "83856fb0a5e53c958473fdf878b89b20",
"text": "Due to the expensive nature of an industrial robot, not all universities are equipped with areal robots for students to operate. Learning robotics without accessing to an actual robotic system has proven to be difficult for undergraduate students. For instructors, it is also an obstacle to effectively teach fundamental robotic concepts. Virtual robot simulator has been explored by many researchers to create a virtual environment for teaching and learning. This paper presents structure of a course project which requires students to develop a virtual robot simulator. The simulator integrates concept of kinematics, inverse kinematics and controls. Results show that this approach assists and promotes better students‟ understanding of robotics.",
"title": ""
},
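Since the course project in the passage above asks students to integrate kinematics and inverse kinematics into a simulator, a small worked example may help: forward and inverse kinematics of a planar two-link arm. This is a generic textbook formulation chosen for brevity, not the specific robot or simulator used in the course; the link lengths are arbitrary defaults.

```python
import math

def forward_kinematics(theta1, theta2, l1=1.0, l2=0.8):
    """End-effector position of a planar 2-link arm (joint angles in radians)."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

def inverse_kinematics(x, y, l1=1.0, l2=0.8):
    """One (elbow-down) solution; raises if the target is out of reach."""
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    theta2 = math.acos(c2)
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2

t1, t2 = inverse_kinematics(1.2, 0.6)
print(forward_kinematics(t1, t2))   # ~ (1.2, 0.6): FK recovers the requested target
```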
{
"docid": "865cfae2da5ad3d1d10d21b1defdc448",
"text": "During the last decade, novel immunotherapeutic strategies, in particular antibodies directed against immune checkpoint inhibitors, have revolutionized the treatment of different malignancies leading to an improved survival of patients. Identification of immune-related biomarkers for diagnosis, prognosis, monitoring of immune responses and selection of patients for specific cancer immunotherapies is urgently required and therefore areas of intensive research. Easily accessible samples in particular liquid biopsies (body fluids), such as blood, saliva or urine, are preferred for serial tumor biopsies.Although monitoring of immune and tumor responses prior, during and post immunotherapy has led to significant advances of patients' outcome, valid and stable prognostic biomarkers are still missing. This might be due to the limited capacity of the technologies employed, reproducibility of results as well as assay stability and validation of results. Therefore solid approaches to assess immune regulation and modulation as well as to follow up the nature of the tumor in liquid biopsies are urgently required to discover valuable and relevant biomarkers including sample preparation, timing of the collection and the type of liquid samples. This article summarizes our knowledge of the well-known liquid material in a new context as liquid biopsy and focuses on collection and assay requirements for the analysis and the technical developments that allow the implementation of different high-throughput assays to detect alterations at the genetic and immunologic level, which could be used for monitoring treatment efficiency, acquired therapy resistance mechanisms and the prognostic value of the liquid biopsies.",
"title": ""
},
{
"docid": "75f895ff76e7a55d589ff30637524756",
"text": "This paper details the coreference resolution system submitted by Stanford at the CoNLL2011 shared task. Our system is a collection of deterministic coreference resolution models that incorporate lexical, syntactic, semantic, and discourse information. All these models use global document-level information by sharing mention attributes, such as gender and number, across mentions in the same cluster. We participated in both the open and closed tracks and submitted results using both predicted and gold mentions. Our system was ranked first in both tracks, with a score of 57.8 in the closed track and 58.3 in the open track.",
"title": ""
},
{
"docid": "866c1e87076da5a94b9adeacb9091ea3",
"text": "Training a support vector machine (SVM) is usually done by ma pping the underlying optimization problem into a quadratic progr amming (QP) problem. Unfortunately, high quality QP solvers are not rea dily available, which makes research into the area of SVMs difficult for he those without a QP solver. Recently, the Sequential Minimal Optim ization algorithm (SMO) was introduced [1, 2]. SMO reduces SVM trainin g down to a series of smaller QP subproblems that have an analytical solution and, therefore, does not require a general QP solver. SMO has been shown to be very efficient for classification problems using l ear SVMs and/or sparse data sets. This work shows how SMO can be genera lized to handle regression problems.",
"title": ""
},
{
"docid": "0c0b099a2a4a404632a1f065cfa328c4",
"text": "Quantum computers are available to use over the cloud, but the recent explosion of quantum software platforms can be overwhelming for those deciding on which to use. In this paper, we provide a current picture of the rapidly evolving quantum computing landscape by comparing four software platforms—Forest (pyQuil), QISKit, ProjectQ, and the Quantum Developer Kit—that enable researchers to use real and simulated quantum devices. Our analysis covers requirements and installation, language syntax through example programs, library support, and quantum simulator capabilities for each platform. For platforms that have quantum computer support, we compare hardware, quantum assembly languages, and quantum compilers. We conclude by covering features of each and briefly mentioning other quantum computing software packages.",
"title": ""
},
{
"docid": "d7ea5e0bdf811f427b7c283d4aae7371",
"text": "This work investigates the development of students’ computational thinking (CT) skills in the context of educational robotics (ER) learning activity. The study employs an appropriate CT model for operationalising and exploring students’ CT skills development in two different age groups (15 and 18 years old) and across gender. 164 students of different education levels (Junior high: 89; High vocational: 75) engaged in ER learning activities (2 hours per week, 11 weeks totally) and their CT skills were evaluated at different phases during the activity, using different modality (written and oral) assessment tools. The results suggest that: (a) students reach eventually the same level of CT skills development independent of their age and gender, (b) CT skills inmost cases need time to fully develop (students’ scores improve significantly towards the end of the activity), (c) age and gender relevant differences appear when analysing students’ score in the various specific dimensions of the CT skills model, (d) the modality of the skill assessment instrumentmay have an impact on students’ performance, (e) girls appear inmany situations to need more training time to reach the same skill level compared to boys. © 2015 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "f53a2ca0fda368d0e90cbb38076658af",
"text": "RNAi therapeutics is a powerful tool for treating diseases by sequence-specific targeting of genes using siRNA. Since its discovery, the need for a safe and efficient delivery system for siRNA has increased. Here, we have developed and characterized a delivery platform for siRNA based on the natural polysaccharide starch in an attempt to address unresolved delivery challenges of RNAi. Modified potato starch (Q-starch) was successfully obtained by substitution with quaternary reagent, providing Q-starch with cationic properties. The results indicate that Q-starch was able to bind siRNA by self-assembly formation of complexes. For efficient and potent gene silencing we monitored the physical characteristics of the formed nanoparticles at increasing N/P molar ratios. The minimum ratio for complete entrapment of siRNA was 2. The resulting complexes, which were characterized by a small diameter (~30 nm) and positive surface charge, were able to protect siRNA from enzymatic degradation. Q-starch/siRNA complexes efficiently induced P-glycoprotein (P-gp) gene silencing in the human ovarian adenocarcinoma cell line, NCI-ADR/Res (NAR), over expressing the targeted gene and presenting low toxicity. Additionally, Q-starch-based complexes showed high cellular uptake during a 24-hour study, which also suggested that intracellular siRNA delivery barriers governed the kinetics of siRNA transfection. In this study, we have devised a promising siRNA delivery vector based on a starch derivative for efficient and safe RNAi application.",
"title": ""
},
{
"docid": "7b5df73b6fb0574bd7c039da53047724",
"text": "Many ad hoc network protocols and applications assume the knowledge of geographic location of nodes. The absolute position of each networked node is an assumed fact by most sensor networks which can then present the sensed information on a geographical map. Finding position without the aid of GPS in each node of an ad hoc network is important in cases where GPS is either not accessible, or not practical to use due to power, form factor or line of sight conditions. Position would also enable routing in sufficiently isotropic large networks, without the use of large routing tables. We are proposing APS – a localized, distributed, hop by hop positioning algorithm, that works as an extension of both distance vector routing and GPS positioning in order to provide approximate position for all nodes in a network where only a limited fraction of nodes have self positioning capability.",
"title": ""
},
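APS, as described in the passage above, propagates hop counts from the few landmark nodes that know their position, converts hops into distance estimates using an average hop length, and then trilaterates. The sketch below mimics that DV-hop style pipeline on a toy connectivity graph: BFS hop counts per landmark, a single global hop-size estimate, and a linearized least-squares trilateration. The graph, coordinates and hop-size heuristic are all illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np
from collections import deque

# Toy connectivity graph (undirected). A, B, C are GPS landmarks; u1, u2 are not.
graph = {"A": ["u1"], "u1": ["A", "u2"], "u2": ["u1", "B", "C"],
         "B": ["u2"], "C": ["u2"]}
landmarks = {"A": (0.0, 0.0), "B": (6.0, 0.0), "C": (0.0, 6.0)}

def hop_counts(src):
    """BFS hop distance from one landmark to every reachable node."""
    hops, frontier = {src: 0}, deque([src])
    while frontier:
        node = frontier.popleft()
        for nbr in graph[node]:
            if nbr not in hops:
                hops[nbr] = hops[node] + 1
                frontier.append(nbr)
    return hops

hops = {lm: hop_counts(lm) for lm in landmarks}

# One global average hop length, estimated from landmark-to-landmark geometry.
total_dist = sum(np.hypot(x1 - x2, y1 - y2)
                 for a, (x1, y1) in landmarks.items()
                 for b, (x2, y2) in landmarks.items() if a < b)
total_hops = sum(hops[a][b] for a in landmarks for b in landmarks if a < b)
hop_size = total_dist / total_hops

def localize(node):
    """Linearized least-squares trilateration from hop-estimated distances."""
    names = list(landmarks)
    d = {lm: hop_size * hops[lm][node] for lm in names}
    xr, yr = landmarks[names[-1]]                 # last landmark as reference
    M, rhs = [], []
    for lm in names[:-1]:
        xi, yi = landmarks[lm]
        M.append([2 * (xr - xi), 2 * (yr - yi)])
        rhs.append(d[lm] ** 2 - d[names[-1]] ** 2 - xi**2 + xr**2 - yi**2 + yr**2)
    return np.linalg.lstsq(np.array(M), np.array(rhs), rcond=None)[0]

print(localize("u1"))   # rough (x, y) estimate for the GPS-less node u1
```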
{
"docid": "e90c165a3e16035b56a4bb4ceb9282ed",
"text": "Point of care testing (POCT) refers to laboratory testing that occurs near to the patient, often at the patient bedside. POCT can be advantageous in situations requiring rapid turnaround time of test results for clinical decision making. There are many challenges associated with POCT, mainly related to quality assurance. POCT is performed by clinical staff rather than laboratory trained individuals which can lead to errors resulting from a lack of understanding of the importance of quality control and quality assurance practices. POCT is usually more expensive than testing performed in the central laboratory and requires a significant amount of support from the laboratory to ensure the quality testing and meet accreditation requirements. Here, specific challenges related to POCT compliance with accreditation standards are discussed along with strategies that can be used to overcome these challenges. These areas include: documentation of POCT orders, charting of POCT results as well as training and certification of individuals performing POCT. Factors to consider when implementing connectivity between POCT instruments and the electronic medical record are also discussed in detail and include: uni-directional versus bidirectional communication, linking patient demographic information with POCT software, the importance of positive patient identification and considering where to chart POCT results in the electronic medical record.",
"title": ""
},
{
"docid": "6cac6ab24b5e833e73c98db476e1437d",
"text": "The observation that a particular drug state may acquire the properties of a discriminative stimulus is explicable on the basis of drug-induced interoceptive cues. The present investigation sought to determine (a) whether the hallucinogens mescaline and LSD could serve as discriminative stimuli when either drug is paired with saline and (b) whether discriminative responding would occur when the paired stimuli are produced by equivalent doses of LSD and mescaline. In a standard two-lever operant test chamber, rats received a reinforcer (sweetened milk) for correct responses according to a variable interval schedule. All sessions were preceded by one of two treatments; following treatment A, only responses on lever A were reinforced and, in a similar fashion, lever B was correct following treatment B. No responses were reinforced during the first five minutes of a daily thirty-minute session. It was found that mescaline and LSD can serve as discriminative stimuli when either drug is paired with saline and that the degree of discrimination varies with drug dose. When equivalent doses of the two drugs were given to the same animal, no discriminated responding was observed. The latter finding suggests that mescaline and LSD produce qualitatively similar interoceptive cues in the rat.",
"title": ""
},
{
"docid": "4bce6150e9bc23716a19a0d7c02640c0",
"text": "A Data Mining Framework for Constructing Features and Models for Intrusion Detection Systems",
"title": ""
},
{
"docid": "9b3db8c2632ad79dc8e20435a81ef2a1",
"text": "Social networks have changed the way information is delivered to the customers, shifting from traditional one-to-many to one-to-one communication. Opinion mining and sentiment analysis offer the possibility to understand the user-generated comments and explain how a certain product or a brand is perceived. Classification of different types of content is the first step towards understanding the conversation on the social media platforms. Our study analyses the content shared on Facebook in terms of topics, categories and shared sentiment for the domain of a sponsored Facebook brand page. Our results indicate that Product, Sales and Brand are the three most discussed topics, while Requests and Suggestions, Expressing Affect and Sharing are the most common intentions for participation. We discuss the implications of our findings for social media marketing and opinion mining.",
"title": ""
},
{
"docid": "af1b98a3b40e8adc053ddafa49e44fd0",
"text": "Kernel PCA as a nonlinear feature extractor has proven powerful as a preprocessing step for classification algorithms. But it can also be considered as a natural generalization of linear principal component analysis. This gives rise to the question how to use nonlinear features for data compression, reconstruction, and de-noising, applications common in linear PCA. This is a nontrivial task, as the results provided by kernel PCA live in some high dimensional feature space and need not have pre-images in input space. This work presents ideas for finding approximate pre-images, focusing on Gaussian kernels, and shows experimental results using these pre-images in data reconstruction and de-noising on toy examples as well as on real world data. 1 peA and Feature Spaces Principal Component Analysis (PC A) (e.g. [3]) is an orthogonal basis transformation. The new basis is found by diagonalizing the centered covariance matrix of a data set {Xk E RNlk = 1, ... ,f}, defined by C = ((Xi (Xk))(Xi (Xk))T). The coordinates in the Eigenvector basis are called principal components. The size of an Eigenvalue >. corresponding to an Eigenvector v of C equals the amount of variance in the direction of v. Furthermore, the directions of the first n Eigenvectors corresponding to the biggest n Eigenvalues cover as much variance as possible by n orthogonal directions. In many applications they contain the most interesting information: for instance, in data compression, where we project onto the directions with biggest variance to retain as much information as possible, or in de-noising, where we deliberately drop directions with small variance. Clearly, one cannot assert that linear PCA will always detect all structure in a given data set. By the use of suitable nonlinear features, one can extract more information. Kernel PCA is very well suited to extract interesting nonlinear structures in the data [9]. The purpose of this work is therefore (i) to consider nonlinear de-noising based on Kernel PCA and (ii) to clarify the connection between feature space expansions and meaningful patterns in input space. Kernel PCA first maps the data into some feature space F via a (usually nonlinear) function <II and then performs linear PCA on the mapped data. As the feature space F might be very high dimensional (e.g. when mapping into the space of all possible d-th order monomials of input space), kernel PCA employs Mercer kernels instead of carrying Kernel peA and De-Noising in Feature Spaces 537 out the mapping <I> explicitly. A Mercer kernel is a function k(x, y) which for all data sets {Xi} gives rise to a positive matrix Kij = k(Xi' Xj) [6]. One can show that using k instead of a dot product in input space corresponds to mapping the data with some <I> to a feature space F [1], i.e. k(x,y) = (<I>(x) . <I>(y)). Kernels that have proven useful include Gaussian kernels k(x, y) = exp( -llx Yll2 Ie) and polynomial kernels k(x, y) = (x·y)d. Clearly, all algorithms that can be formulated in terms of dot products, e.g. Support Vector Machines [1], can be carried out in some feature space F without mapping the data explicitly. All these algorithms construct their solutions as expansions in the potentially infinite-dimensional feature space. The paper is organized as follows: in the next section, we briefly describe the kernel PCA algorithm. In section 3, we present an algorithm for finding approximate pre-images of expansions in feature space. 
Experimental results on toy and real world data are given in section 4, followed by a discussion of our findings (section 5). 2 Kernel peA and Reconstruction To perform PCA in feature space, we need to find Eigenvalues A > 0 and Eigenvectors V E F\\{O} satisfying AV = GV with G = (<I>(Xk)<I>(Xk)T).1 Substituting G into the Eigenvector equation, we note that all solutions V must lie in the span of <I>-images of the training data. This implies that we can consider the equivalent system A( <I>(Xk) . V) = (<I>(Xk) . GV) for all k = 1, ... ,f (1) and that there exist coefficients Q1 , ... ,Ql such that l V = L i=l Qi<l>(Xi) (2) Substituting C and (2) into (1), and defining an f x f matrix K by Kij := (<I>(Xi)· <I>(Xj)) = k( Xi, X j), we arrive at a problem which is cast in terms of dot products: solve",
"title": ""
}
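To make the equations reconstructed in the passage above concrete, here is a small numerical sketch of kernel PCA with a Gaussian kernel: it centres the kernel matrix, extracts the leading components, projects a point onto them, and approximates a pre-image of that projection with the fixed-point iteration commonly used for Gaussian kernels. The centring correction terms are deliberately omitted from the projection and pre-image steps to keep the sketch short, so treat it as schematic rather than a faithful reimplementation of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 2))                    # toy data, 60 points in R^2
c = 1.0                                         # Gaussian kernel width (assumed)

def gauss_kernel(A, B, c=c):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / c)

# Kernel PCA: centre K, eigendecompose, keep the top n components.
K = gauss_kernel(X, X)
N = len(X)
one = np.full((N, N), 1.0 / N)
Kc = K - one @ K - K @ one + one @ K @ one
lam, alpha = np.linalg.eigh(Kc)
lam, alpha = lam[::-1], alpha[:, ::-1]          # descending eigenvalue order
n = 4
alpha = alpha[:, :n] / np.sqrt(lam[:n])         # normalise so that (V_k . V_k) = 1

def denoise(x, n_iter=50):
    """Project x onto the leading components and return an approximate pre-image."""
    kx = gauss_kernel(x[None, :], X)[0]
    beta = alpha.T @ kx                          # projections (centring ignored here)
    gamma = alpha @ beta                         # expansion coefficients of the projection
    z = x.copy()
    for _ in range(n_iter):                      # fixed-point iteration for Gaussian kernels
        w = gamma * gauss_kernel(z[None, :], X)[0]
        if abs(w.sum()) < 1e-12:
            break
        z = (w[:, None] * X).sum(0) / w.sum()
    return z

noisy = X[0] + rng.normal(scale=0.3, size=2)
print(noisy, "->", denoise(noisy))
```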
] |
scidocsrr
|