query_id (stringlengths 32-32) | query (stringlengths 5-4.91k) | positive_passages (listlengths 1-22) | negative_passages (listlengths 9-100) | subset (stringclasses, 7 values) |
---|---|---|---|---|
0687cc1454d931b15022c0ad9fc1d8c1
|
Effort during visual search and counting: insights from pupillometry.
|
[
{
"docid": "c0dbb410ebd6c84bd97b5f5e767186b3",
"text": "A new hypothesis about the role of focused attention is proposed. The feature-integration theory of attention suggests that attention must be directed serially to each stimulus in a display whenever conjunctions of more than one separable feature are needed to characterize or distinguish the possible objects presented. A number of predictions were tested in a variety of paradigms including visual search, texture segregation, identification and localization, and using both separable dimensions (shape and color) and local elements or parts of figures (lines, curves, etc. in letters) as the features to be integrated into complex wholes. The results were in general consistent with the hypothesis. They offer a new set of criteria for distinguishing separable from integral features and a new rationale for predicting which tasks will show attention limits and which will not.",
"title": ""
}
] |
[
{
"docid": "ca26daaa9961f7ba2343ae84245c1181",
"text": "In a recently held WHO workshop it has been recommended to abandon the distinction between potentially malignant lesions and potentially malignant conditions and to use the term potentially malignant disorders instead. Of these disorders, leukoplakia and erythroplakia are the most common ones. These diagnoses are still defined by exclusion of other known white or red lesions. In spite of tremendous progress in the field of molecular biology there is yet no single marker that reliably enables to predict malignant transformation in an individual patient. The general advice is to excise or laser any oral of oropharyngeal leukoplakia/erythroplakia, if feasible, irrespective of the presence or absence of dysplasia. Nevertheless, it is actually unknown whether such removal truly prevents the possible development of a squamous cell carcinoma. At present, oral lichen planus seems to be accepted in the literature as being a potentially malignant disorder, although the risk of malignant transformation is lower than in leukoplakia. There are no means to prevent such event. The efficacy of follow-up of oral lichen planus is questionable. Finally, brief attention has been paid to oral submucous fibrosis, actinic cheilitis, some inherited cancer syndromes and immunodeficiency in relation to cancer predisposition.",
"title": ""
},
{
"docid": "3a71dd4c8d9e1cf89134141cfd97023e",
"text": "We introduce a novel solid modeling framework taking advantage of the architecture of parallel computing onmodern graphics hardware. Solidmodels in this framework are represented by an extension of the ray representation — Layered Depth-Normal Images (LDNI), which inherits the good properties of Boolean simplicity, localization and domain decoupling. The defect of ray representation in computational intensity has been overcome by the newly developed parallel algorithms running on the graphics hardware equipped with Graphics Processing Unit (GPU). The LDNI for a solid model whose boundary is representedby a closedpolygonalmesh canbe generated efficientlywith thehelp of hardware accelerated sampling. The parallel algorithm for computing Boolean operations on two LDNI solids runs well on modern graphics hardware. A parallel algorithm is also introduced in this paper to convert LDNI solids to sharp-feature preserved polygonal mesh surfaces, which can be used in downstream applications (e.g., finite element analysis). Different from those GPU-based techniques for rendering CSG-tree of solid models Hable and Rossignac (2007, 2005) [1,2], we compute and store the shape of objects in solid modeling completely on graphics hardware. This greatly eliminates the communication bottleneck between the graphics memory and the main memory. © 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "fab72d1223fa94e918952b8715e90d30",
"text": "A novel wideband crossed dipole loaded with four parasitic elements is investigated in this letter. The printed crossed dipole is incorporated with a pair of vacant quarter rings to feed the antenna. The antenna is backed by a metallic plate to provide an unidirectional radiation pattern with a wide axial-ratio (AR) bandwidth. To verify the proposed design, a prototype is fabricated and measured. The final design with an overall size of $0.46\\ \\lambda_{0}\\times 0.46\\ \\lambda_{0}\\times 0.23\\ \\lambda_{0} (> \\lambda_{0}$ is the free-space wavelength of circularly polarized center frequency) yields a 10-dB impedance bandwidth of approximately 62.7% and a 3-dB AR bandwidth of approximately 47.2%. In addition, the proposed antenna has a stable broadside gain of 7.9 ± 0.5 dBi within passband.",
"title": ""
},
{
"docid": "4f15ef7dc7405f22e1ca7ae24154f5ef",
"text": "This position paper addresses current debates about data in general, and big data specifically, by examining the ethical issues arising from advances in knowledge production. Typically ethical issues such as privacy and data protection are discussed in the context of regulatory and policy debates. Here we argue that this overlooks a larger picture whereby human autonomy is undermined by the growth of scientific knowledge. To make this argument, we first offer definitions of data and big data, and then examine why the uses of data-driven analyses of human behaviour in particular have recently experienced rapid growth. Next, we distinguish between the contexts in which big data research is used, and argue that this research has quite different implications in the context of scientific as opposed to applied research. We conclude by pointing to the fact that big data analyses are both enabled and constrained by the nature of data sources available. Big data research will nevertheless inevitably become more pervasive, and this will require more awareness on the part of data scientists, policymakers and a wider public about its contexts and often unintended consequences.",
"title": ""
},
{
"docid": "46b5082df5dfd63271ec942ce28285fa",
"text": "The area under the ROC curve (AUC) is a very widely used measure of performance for classification and diagnostic rules. It has the appealing property of being objective, requiring no subjective input from the user. On the other hand, the AUC has disadvantages, some of which are well known. For example, the AUC can give potentially misleading results if ROC curves cross. However, the AUC also has a much more serious deficiency, and one which appears not to have been previously recognised. This is that it is fundamentally incoherent in terms of misclassification costs: the AUC uses different misclassification cost distributions for different classifiers. This means that using the AUC is equivalent to using different metrics to evaluate different classification rules. It is equivalent to saying that, using one classifier, misclassifying a class 1 point is p times as serious as misclassifying a class 0 point, but, using another classifier, misclassifying a class 1 point is P times as serious, where p≠P. This is nonsensical because the relative severities of different kinds of misclassifications of individual points is a property of the problem, not the classifiers which happen to have been chosen. This property is explored in detail, and a simple valid alternative to the AUC is proposed.",
"title": ""
},
{
"docid": "2ee5e5ecd9304066b12771f3349155f8",
"text": "An intelligent wiper speed adjustment system can be found in most middle and upper class cars. A core piece of this gadget is the rain sensor on the windshield. With the upcoming number of cars being equipped with an in-vehicle camera for vision-based applications the call for integrating all sensors in the area of the rearview mirror into one device rises to reduce the number of parts and variants. In this paper, functionality of standard rain sensors and different vision-based approaches are explained and a novel rain sensing concept based on an automotive in-vehicle camera for Driver Assistance Systems (DAS) is developed to enhance applicability. Hereby, the region at the bottom of the field of view (FOV) of the imager is used to detect raindrops, while the upper part of the image is still usable for other vision-based applications. A simple algorithm is set up to keep the additional processing time low and to quantitatively gather the rain intensity. Mechanisms to avoid false activations of the wipers are introduced. First experimental experiences based on real scenarios show promising results.",
"title": ""
},
{
"docid": "10b4d77741d40a410b30b0ba01fae67f",
"text": "While glucosamine supplementation is very common and a multitude of commercial products are available, there is currently limited information available to assist the equine practitioner in deciding when and how to use these products. Low bioavailability of orally administered glucosamine, poor product quality, low recommended doses, and a lack of scientific evidence showing efficacy of popular oral joint supplements are major concerns. Authors’ addresses: Rolling Thunder Veterinary Services, 225 Roxbury Road, Garden City, NY 11530 (Oke); Ontario Veterinary College, Department of Clinical Studies, University of Guelph, Guelph, Ontario, Canada N1G 2W1 (Weese); e-mail: [email protected] (Oke). © 2006 AAEP.",
"title": ""
},
{
"docid": "cab386acd4cf89803325e5d33a095a62",
"text": "Dipyridamole is a widely prescribed drug in ischemic disorders, and it is here investigated for potential clinical use as a new treatment for breast cancer. Xenograft mice bearing triple-negative breast cancer 4T1-Luc or MDA-MB-231T cells were generated. In these in vivo models, dipyridamole effects were investigated for primary tumor growth, metastasis formation, cell cycle, apoptosis, signaling pathways, immune cell infiltration, and serum inflammatory cytokines levels. Dipyridamole significantly reduced primary tumor growth and metastasis formation by intraperitoneal administration. Treatment with 15 mg/kg/day dipyridamole reduced mean primary tumor size by 67.5 % (p = 0.0433), while treatment with 30 mg/kg/day dipyridamole resulted in an almost a total reduction in primary tumors (p = 0.0182). Experimental metastasis assays show dipyridamole reduces metastasis formation by 47.5 % in the MDA-MB-231T xenograft model (p = 0.0122), and by 50.26 % in the 4T1-Luc xenograft model (p = 0.0292). In vivo dipyridamole decreased activated β-catenin by 38.64 % (p < 0.0001), phospho-ERK1/2 by 25.05 % (p = 0.0129), phospho-p65 by 67.82 % (p < 0.0001) and doubled the expression of IkBα (p = 0.0019), thus revealing significant effects on Wnt, ERK1/2-MAPK and NF-kB pathways in both animal models. Moreover dipyridamole significantly decreased the infiltration of tumor-associated macrophages and myeloid-derived suppressor cells in primary tumors (p < 0.005), and the inflammatory cytokines levels in the sera of the treated mice. We suggest that when used at appropriate doses and with the correct mode of administration, dipyridamole is a promising agent for breast-cancer treatment, thus also implying its potential use in other cancers that show those highly activated pathways.",
"title": ""
},
{
"docid": "2949a903b7ab1949b6aaad305c532f4b",
"text": "This paper presents a semantics-based approach to Recommender Systems (RS), to exploit available contextual information about both the items to be recommended and the recommendation process, in an attempt to overcome some of the shortcomings of traditional RS implementations. An ontology is used as a backbone to the system, while multiple web services are orchestrated to compose a suitable recommendation model, matching the current recommendation context at run-time. To achieve such dynamic behaviour the proposed system tackles the recommendation problem by applying existing RS techniques on three different levels: the selection of appropriate sets of features, recommendation model and recommendable items.",
"title": ""
},
{
"docid": "41076f408c1c00212106433b47582a43",
"text": "Polyols such as mannitol, erythritol, sorbitol, and xylitol are naturally found in fruits and vegetables and are produced by certain bacteria, fungi, yeasts, and algae. These sugar alcohols are widely used in food and pharmaceutical industries and in medicine because of their interesting physicochemical properties. In the food industry, polyols are employed as natural sweeteners applicable in light and diabetic food products. In the last decade, biotechnological production of polyols by lactic acid bacteria (LAB) has been investigated as an alternative to their current industrial production. While heterofermentative LAB may naturally produce mannitol and erythritol under certain culture conditions, sorbitol and xylitol have been only synthesized through metabolic engineering processes. This review deals with the spontaneous formation of mannitol and erythritol in fermented foods and their biotechnological production by heterofermentative LAB and briefly presented the metabolic engineering processes applied for polyol formation.",
"title": ""
},
{
"docid": "cc2822b15ccf29978252b688111d58cd",
"text": "Today, even a moderately sized corporate intranet contains multiple firewalls and routers, which are all used to enforce various aspects of the global corporate security policy. Configuring these devices to work in unison is difficult, especially if they are made by different vendors. Even testing or reverse-engineering an existing configuration (say, when a new security administrator takes over) is hard. Firewall configuration files are written in low-level formalisms, whose readability is comparable to assembly code, and the global policy is spread over all the firewalls that are involved. To alleviate some of these difficulties, we designed and implemented a novel firewall analysis tool. Our software allows the administrator to easily discover and test the global firewall policy (either a deployed policy or a planned one). Our tool uses a minimal description of the network topology, and directly parses the various vendor-specific lowlevel configuration files. It interacts with the user through a query-and-answer session, which is conducted at a much higher level of abstraction. A typical question our tool can answer is “from which machines can our DMZ be reached, and with which services?”. Thus, our tool complements existing vulnerability analysis tools, as it can be used before a policy is actually deployed, it operates on a more understandable level of abstraction, and it deals with all the firewalls at once.",
"title": ""
},
{
"docid": "b08f67bc9b84088f8298b35e50d0b9c5",
"text": "This review examines different nutritional guidelines, some case studies, and provides insights and discrepancies, in the regulatory framework of Food Safety Management of some of the world's economies. There are thousands of fermented foods and beverages, although the intention was not to review them but check their traditional and cultural value, and if they are still lacking to be classed as a category on different national food guides. For understanding the inconsistencies in claims of concerning fermented foods among various regulatory systems, each legal system should be considered unique. Fermented foods and beverages have long been a part of the human diet, and with further supplementation of probiotic microbes, in some cases, they offer nutritional and health attributes worthy of recommendation of regular consumption. Despite the impact of fermented foods and beverages on gastro-intestinal wellbeing and diseases, their many health benefits or recommended consumption has not been widely translated to global inclusion in world food guidelines. In general, the approach of the legal systems is broadly consistent and their structures may be presented under different formats. African traditional fermented products are briefly mentioned enhancing some recorded adverse effects. Knowing the general benefits of traditional and supplemented fermented foods, they should be a daily item on most national food guides.",
"title": ""
},
{
"docid": "cceec94ed2462cd657be89033244bbf9",
"text": "This paper examines how student effort, consistency, motivation, and marginal learning, influence student grades in an online course. We use data from eleven Microeconomics courses taught online for a total of 212 students. Our findings show that consistency, or less time variation, is a statistically significant explanatory variable, whereas effort, or total minutes spent online, is not. Other independent variables include GPA and the difference between a pre-test and a post-test. The GPA is used as a measure of motivation, and the difference between a posttest and pre-test as marginal learning. As expected, the level of motivation is found statistically significant at a 99% confidence level, and marginal learning is also significant at a 95% level.",
"title": ""
},
{
"docid": "5efd5fb9caaeadb90a684d32491f0fec",
"text": "The ModelNiew/Controller design pattern is very useful for architecting interactive software systems. This design pattern is partition-independent, because it is expressed in terms of an interactive application running in a single address space. Applying the ModelNiew/Controller design pattern to web-applications is therefore complicated by the fact that current technologies encourage developers to partition the application as early as in the design phase. Subsequent changes to that partitioning require considerable changes to the application's implementation despite the fact that the application logic has not changed. This paper introduces the concept of Flexible Web-Application Partitioning, a programming model and implementation infrastructure, that allows developers to apply the ModeWViewKontroller design pattern in a partition-independent manner: Applications are developed and tested in a single address-space; they can then be deployed to various clientherver architectures without changing the application's source code. In addition, partitioning decisions can be changed without modifying the application.",
"title": ""
},
{
"docid": "90b3e6aee6351b196445843ca8367a3b",
"text": "Modeling how visual saliency guides the deployment of atten tion over visual scenes has attracted much interest recently — among both computer v ision and experimental/computational researchers — since visual attention is a key function of both machine and biological vision systems. Research efforts in compute r vision have mostly been focused on modeling bottom-up saliency. Strong influences o n attention and eye movements, however, come from instantaneous task demands. Here , w propose models of top-down visual guidance considering task influences. The n ew models estimate the state of a human subject performing a task (here, playing video gam es), and map that state to an eye position. Factors influencing state come from scene gi st, physical actions, events, and bottom-up saliency. Proposed models fall into two categ ori s. In the first category, we use classical discriminative classifiers, including Reg ression, kNN and SVM. In the second category, we use Bayesian Networks to combine all the multi-modal factors in a unified framework. Our approaches significantly outperfor m 15 competing bottom-up and top-down attention models in predicting future eye fixat ions on 18,000 and 75,00 video frames and eye movement samples from a driving and a flig ht combat video game, respectively. We further test and validate our approaches o n 1.4M video frames and 11M fixations samples and in all cases obtain higher prediction s c re that reference models.",
"title": ""
},
{
"docid": "2af5e18cfb6dadd4d5145a1fa63f0536",
"text": "Hyperspectral remote sensing technology has advanced significantly in the past two decades. Current sensors onboard airborne and spaceborne platforms cover large areas of the Earth surface with unprecedented spectral, spatial, and temporal resolutions. These characteristics enable a myriad of applications requiring fine identification of materials or estimation of physical parameters. Very often, these applications rely on sophisticated and complex data analysis methods. The sources of difficulties are, namely, the high dimensionality and size of the hyperspectral data, the spectral mixing (linear and nonlinear), and the degradation mechanisms associated to the measurement process such as noise and atmospheric effects. This paper presents a tutorial/overview cross section of some relevant hyperspectral data analysis methods and algorithms, organized in six main topics: data fusion, unmixing, classification, target detection, physical parameter retrieval, and fast computing. In all topics, we describe the state-of-the-art, provide illustrative examples, and point to future challenges and research directions.",
"title": ""
},
{
"docid": "356684bac2e5fecd903eb428dc5455f4",
"text": "Social media expose millions of users every day to information campaigns - some emerging organically from grassroots activity, others sustained by advertising or other coordinated efforts. These campaigns contribute to the shaping of collective opinions. While most information campaigns are benign, some may be deployed for nefarious purposes, including terrorist propaganda, political astroturf, and financial market manipulation. It is therefore important to be able to detect whether a meme is being artificially promoted at the very moment it becomes wildly popular. This problem has important social implications and poses numerous technical challenges. As a first step, here we focus on discriminating between trending memes that are either organic or promoted by means of advertisement. The classification is not trivial: ads cause bursts of attention that can be easily mistaken for those of organic trends. We designed a machine learning framework to classify memes that have been labeled as trending on Twitter. After trending, we can rely on a large volume of activity data. Early detection, occurring immediately at trending time, is a more challenging problem due to the minimal volume of activity data that is available prior to trending. Our supervised learning framework exploits hundreds of time-varying features to capture changing network and diffusion patterns, content and sentiment information, timing signals, and user meta-data. We explore different methods for encoding feature time series. Using millions of tweets containing trending hashtags, we achieve 75% AUC score for early detection, increasing to above 95% after trending. We evaluate the robustness of the algorithms by introducing random temporal shifts on the trend time series. Feature selection analysis reveals that content cues provide consistently useful signals; user features are more informative for early detection, while network and timing features are more helpful once more data is available.",
"title": ""
},
{
"docid": "a6fec60aeb6e5824ed07eaa3257969aa",
"text": "What aspects of information assurance can be identified in Business-to-Consumer (B-toC) online transactions? The purpose of this research is to build a theoretical framework for studying information assurance based on a detailed analysis of academic literature for online exchanges in B-to-C electronic commerce. Further, a semantic network content analysis is conducted to analyze the representations of information assurance in B-to-C electronic commerce in the real online market place (transaction Web sites of selected Fortune 500 firms). The results show that the transaction websites focus on some perspectives and not on others. For example, we see an emphasis on the importance of technological and consumer behavioral elements of information assurance such as issues of online security and privacy. Further corporate practitioners place most emphasis on transaction-related information assurance issues. Interestingly, the product and institutional dimension of information assurance in online transaction websites are only",
"title": ""
},
{
"docid": "fee4b80923ff9b6611e95836a90beb06",
"text": "We present an annotation management system for relational databases. In this system, every piece of data in a relation is assumed to have zero or more annotations associated with it and annotations are propagated along, from the source to the output, as data is being transformed through a query. Such an annotation management system could be used for understanding the provenance (aka lineage) of data, who has seen or edited a piece of data or the quality of data, which are useful functionalities for applications that deal with integration of scientific and biological data. We present an extension, pSQL, of a fragment of SQL that has three different types of annotation propagation schemes, each useful for different purposes. The default scheme propagates annotations according to where data is copied from. The default-all scheme propagates annotations according to where data is copied from among all equivalent formulations of a given query. The custom scheme allows a user to specify how annotations should propagate. We present a storage scheme for the annotations and describe algorithms for translating a pSQL query under each propagation scheme into one or more SQL queries that would correctly retrieve the relevant annotations according to the specified propagation scheme. For the default-all scheme, we also show how we generate finitely many queries that can simulate the annotation propagation behavior of the set of all equivalent queries, which is possibly infinite. The algorithms are implemented and the feasibility of the system is demonstrated by a set of experiments that we have conducted.",
"title": ""
}
] |
scidocsrr
|
d08dcc782dee5f9474939925134c4e18
|
Evaluation of hierarchical clustering algorithms for document datasets
|
[
{
"docid": "2e2960942966d92ac636fa0be2e9410e",
"text": "Clustering is a powerful technique for large-scale topic discovery from text. It involves two phases: first, feature extraction maps each document or record to a point in high-dimensional space, then clustering algorithms automatically group the points into a hierarchy of clusters. We describe an unsupervised, near-linear time text clustering system that offers a number of algorithm choices for each phase. We introduce a methodology for measuring the quality of a cluster hierarchy in terms of FMeasure, and present the results of experiments comparing different algorithms. The evaluation considers some feature selection parameters (tfidfand feature vector length) but focuses on the clustering algorithms, namely techniques from Scatter/Gather (buckshot, fractionation, and split/join) and kmeans. Our experiments suggest that continuous center adjustment contributes more to cluster quality than seed selection does. It follows that using a simpler seed selection algorithm gives a better time/quality tradeoff. We describe a refinement to center adjustment, “vector average damping,” that further improves cluster quality. We also compare the near-linear time algorithms to a group average greedy agglomerative clustering algorithm to demonstrate the time/quality tradeoff quantitatively.",
"title": ""
}
] |
[
{
"docid": "a11030c2031f96608eb3c2836c91a599",
"text": "Existing deep learning methods of video recognition usually require a large number of labeled videos for training. But for a new task, videos are often unlabeled and it is also time-consuming and labor-intensive to annotate them. Instead of human annotation, we try to make use of existing fully labeled images to help recognize those videos. However, due to the problem of domain shifts and heterogeneous feature representations, the performance of classifiers trained on images may be dramatically degraded for video recognition tasks. In this paper, we propose a novel method, called Hierarchical Generative Adversarial Networks (HiGAN), to enhance recognition in videos (i.e., target domain) by transferring knowledge from images (i.e., source domain). The HiGAN model consists of a low-level conditional GAN and a high-level conditional GAN. By taking advantage of these two-level adversarial learning, our method is capable of learning a domaininvariant feature representation of source images and target videos. Comprehensive experiments on two challenging video recognition datasets (i.e. UCF101 and HMDB51) demonstrate the effectiveness of the proposed method when compared with the existing state-of-the-art domain adaptation methods.",
"title": ""
},
{
"docid": "fa88546c3bbdc8de012ed7cadc552533",
"text": "The aim of this paper is to discuss new solutions in the design of insulated gate bipolar transistor (IGBT) gate drivers with advanced protections such as two-level turn-on to reduce peak current when turning on the device, two-level turn-off to limit over-voltage when the device is turned off, and an active Miller clamp function that acts against cross conduction phenomena. Afterwards, we describe a new circuit which includes a two-level turn-off driver and an active Miller clamp function. Tests and results for these advanced functions are discussed, with particular emphasis on the influence of an intermediate level in a two-level turn-off driver on overshoot across the IGBT.",
"title": ""
},
{
"docid": "f0a7f1f36c10cdd84f88f5e1c266f78d",
"text": "We connect a broad class of generative models through their shared reliance on sequential decision making. Motivated by this view, we develop extensions to an existing model, and then explore the idea further in the context of data imputation – perhaps the simplest setting in which to investigate the relation between unconditional and conditional generative modelling. We formulate data imputation as an MDP and develop models capable of representing effective policies for it. We construct the models using neural networks and train them using a form of guided policy search [11]. Our models generate predictions through an iterative process of feedback and refinement. We show that this approach can learn effective policies for imputation problems of varying difficulty and across multiple datasets.",
"title": ""
},
{
"docid": "483c3e0bd9406baef7040cdc3399442d",
"text": "Composite resins have been shown to be susceptible to discolouration on exposure to oral environment over a period of time. Discolouration of composite resins can be broadly classified as intrinsic or extrinsic. Intrinsic discolouration involves physico-chemical alteration within the material, while extrinsic stains are a result of surface discolouration by extrinsic compounds. Although the effects of various substances on the colour stability of composite resins have been extensively investigated, little has been published on the methods of removing the composite resins staining. The purpose of this paper is to provide a brief literature review on the colour stability of composite resins and clinical approaches in the stain removal.",
"title": ""
},
{
"docid": "d1072bc9960fc3697416c9d982ed5a9c",
"text": "We compared face identification by humans and machines using images taken under a variety of uncontrolled illumination conditions in both indoor and outdoor settings. Natural variations in a person's day-to-day appearance (e.g., hair style, facial expression, hats, glasses, etc.) contributed to the difficulty of the task. Both humans and machines matched the identity of people (same or different) in pairs of frontal view face images. The degree of difficulty introduced by photometric and appearance-based variability was estimated using a face recognition algorithm created by fusing three top-performing algorithms from a recent international competition. The algorithm computed similarity scores for a constant set of same-identity and different-identity pairings from multiple images. Image pairs were assigned to good, moderate, and poor accuracy groups by ranking the similarity scores for each identity pairing, and dividing these rankings into three strata. This procedure isolated the role of photometric variables from the effects of the distinctiveness of particular identities. Algorithm performance for these constant identity pairings varied dramatically across the groups. In a series of experiments, humans matched image pairs from the good, moderate, and poor conditions, rating the likelihood that the images were of the same person (1: sure same - 5: sure different). Algorithms were more accurate than humans in the good and moderate conditions, but were comparable to humans in the poor accuracy condition. To date, these are the most variable illumination- and appearance-based recognition conditions on which humans and machines have been compared. The finding that machines were never less accurate than humans on these challenging frontal images suggests that face recognition systems may be ready for applications with comparable difficulty. We speculate that the superiority of algorithms over humans in the less challenging conditions may be due to the algorithms' use of detailed, view-specific identity information. Humans may consider this information less important due to its limited potential for robust generalization in suboptimal viewing conditions.",
"title": ""
},
{
"docid": "7e68ac0eee3ab3610b7c68b69c27f3b6",
"text": "When digitizing a document into an image, it is common to include a surrounding border region to visually indicate that the entire document is present in the image. However, this border should be removed prior to automated processing. In this work, we present a deep learning system, PageNet, which identifies the main page region in an image in order to segment content from both textual and non-textual border noise. In PageNet, a Fully Convolutional Network obtains a pixel-wise segmentation which is post-processed into a quadrilateral region. We evaluate PageNet on 4 collections of historical handwritten documents and obtain over 94% mean intersection over union on all datasets and approach human performance on 2 collections. Additionally, we show that PageNet can segment documents that are overlayed on top of other documents.",
"title": ""
},
{
"docid": "72fa771855a178d8901d29c72acf5300",
"text": "Aspect extraction identifies relevant features of an entity from a textual description and is typically targeted to product reviews, and other types of short text, as an enabling task for, e.g., opinion mining and information retrieval. Current aspect extraction methods mostly focus on aspect terms, often neglecting associated modifiers or embedding them in the aspect terms without proper distinction. Moreover, flat syntactic structures are often assumed, resulting in inaccurate extractions of complex aspects. This paper studies the problem of structured aspect extraction, a variant of traditional aspect extraction aiming at a fine-grained extraction of complex (i.e., hierarchical) aspects. We propose an unsupervised and scalable method for structured aspect extraction consisting of statistical noun phrase clustering, cPMI-based noun phrase segmentation, and hierarchical pattern induction. Our evaluation shows a substantial improvement over existing methods in terms of both quality and computational efficiency.",
"title": ""
},
{
"docid": "b8f50ba62325ffddcefda7030515fd22",
"text": "The following statement is intended to provide an understanding of the governance and legal structure of the University of Sheffield. The University is an independent corporation whose legal status derives from a Royal Charter granted in 1905. It is an educational charity, with exempt status, regulated by the Office for Students in its capacity as Principal Regulator. The University has charitable purposes and applies them for the public benefit. It must comply with the general law of charity. The University’s objectives, powers and governance framework are set out in its Charter and supporting Statutes and Regulations.",
"title": ""
},
{
"docid": "3394eb51b71e5def4e4637963da347ab",
"text": "In this paper we present a model of e-learning suitable for teacher training sessions. The main purpose of our work is to define the components of the educational system which influences the successful adoption of e-learning in the field of education. We also present the factors of the readiness of e-learning mentioned in the literature available and classifies them into the 3 major categories that constitute the components of every organization and consequently that of education. Finally, we present an implementation model of e-learning through the use of virtual private networks, which lends an added value to the realization of e-learning.",
"title": ""
},
{
"docid": "2cd2a85598c0c10176a34c0bd768e533",
"text": "BACKGROUND\nApart from skills, and knowledge, self-efficacy is an important factor in the students' preparation for clinical work. The Physiotherapist Self-Efficacy (PSE) questionnaire was developed to measure physical therapy (TP) students' self-efficacy in the cardiorespiratory, musculoskeletal, and neurological clinical areas. The aim of this study was to establish the measurement properties of the Dutch PSE questionnaire, and to explore whether self-efficacy beliefs in students are clinical area specific.\n\n\nMETHODS\nMethodological quality of the PSE was studied using COSMIN guidelines. Item analysis, structural validity, and internal consistency of the PSE were determined in 207 students. Test-retest reliability was established in another sample of 60 students completing the PSE twice. Responsiveness of the scales was determined in 80 students completing the PSE at the start and the end of the second year. Hypothesis testing was used to determine construct validity of the PSE.\n\n\nRESULTS\nExploratory factor analysis resulted in three meaningful components explaining similar proportions of variance (25%, 21%, and 20%), reflecting the three clinical areas. Internal consistency of each of the three subscales was excellent (Cronbach's alpha > .90). Intra Class Correlation Coefficient was good (.80). Hypothesis testing confirmed construct validity of the PSE.\n\n\nCONCLUSION\nThe PSE shows excellent measurement properties. The component structure of the PSE suggests that self-efficacy about physiotherapy in PT students is not generic, but specific for a clinical area. As self-efficacy is considered a predictor of performance in clinical settings, enhancing self-efficacy is an explicit goal of educational interventions. Further research is needed to determine if the scale is specific enough to assess the effect of educational interventions on student self-efficacy.",
"title": ""
},
{
"docid": "45ea01d82897401058492bc2f88369b3",
"text": "Reduction in greenhouse gas emissions from transportation is essential in combating global warming and climate change. Eco-routing enables drivers to use the most eco-friendly routes and is effective in reducing vehicle emissions. The EcoTour system assigns eco-weights to a road network based on GPS and fuel consumption data collected from vehicles to enable ecorouting. Given an arbitrary source-destination pair in Denmark, EcoTour returns the shortest route, the fastest route, and the eco-route, along with statistics for the three routes. EcoTour also serves as a testbed for exploring advanced solutions to a range of challenges related to eco-routing.",
"title": ""
},
{
"docid": "41353a12a579f72816f1adf3cba154dd",
"text": "The crux of our initialization technique is n-gram selection, which assists neural networks to extract important n-gram features at the beginning of the training process. In the following tables, we illustrate those selected n-grams of different classes and datasets to understand our technique intuitively. Since all of MR, SST-1, SST-2, CR, and MPQA are sentiment classification datasets, we only report the selected n-grams of SST-1 (Table 1). N-grams selected by our method in SUBJ and TREC are shown in Table 2 and Table 3.",
"title": ""
},
{
"docid": "79a8281500227799d18d4f841af08795",
"text": "Fluctuating power is of serious concern in grid connected wind systems and energy storage systems are being developed to help alleviate this. This paper describes how additional energy storage can be provided within the existing wind turbine system by allowing the turbine speed to vary over a wider range. It also addresses the stability issue due to the modified control requirements. A control algorithm is proposed for a typical doubly fed induction generator (DFIG) arrangement and a simulation model is used to assess the ability of the method to smooth the output power. The disadvantage of the method is that there is a reduction in energy capture relative to a maximum power tracking algorithm. This aspect is evaluated using a typical turbine characteristic and wind profile and is shown to decrease by less than 1%. In contrast power fluctuations at intermediate frequency are reduced by typically 90%.",
"title": ""
},
{
"docid": "c0a1b48688cd0269b787a17fa5d15eda",
"text": "Animating human character has become an active research area in computer graphics. It is really important for development of virtual environment applications such as computer games and virtual reality. One of the popular methods to animate the character is by using motion graph. Since motion graph is the main focus of this research, we investigate the preliminary work of motion graph and discuss about the main components of motion graph like distance metrics and motion transition. These two components will be taken into consideration during the process of development of motion graph. In this paper, we will also present a general framework and future plan of this study.",
"title": ""
},
{
"docid": "8418c151e724d5e23662a9d70c050df1",
"text": "The issuing of pseudonyms is an established approach for protecting the privacy of users while limiting access and preventing sybil attacks. To prevent pseudonym deanonymization through continuous observation and correlation, frequent and unlinkable pseudonym changes must be enabled. Existing approaches for realizing sybil-resistant pseudonymization and pseudonym change (PPC) are either inherently dependent on trusted third parties (TTPs) or involve significant computation overhead at end-user devices. In this paper, we investigate a novel, TTP-independent approach towards sybil-resistant PPC. Our proposal is based on the use of cryptocurrency block chains as general-purpose, append-only bulletin boards. We present a general approach as well as BitNym, a specific design based on the unmodified Bitcoin network. We discuss and propose TTP-independent mechanisms for realizing sybil-free initial access control, pseudonym validation and pseudonym mixing. Evaluation results demonstrate the practical feasibility of our approach and show that anonymity sets encompassing nearly the complete user population are easily achievable.",
"title": ""
},
{
"docid": "4d5e8e1c8942256088f1c5ef0e122c9f",
"text": "Cybercrime and cybercriminal activities continue to impact communities as the steady growth of electronic information systems enables more online business. The collective views of sixty-six computer users and organizations, that have an exposure to cybercrime, were analyzed using concept analysis and mapping techniques in order to identify the major issues and areas of concern, and provide useful advice. The findings of the study show that a range of computing stakeholders have genuine concerns about the frequency of information security breaches and malware incursions (including the emergence of dangerous security and detection avoiding malware), the need for e-security awareness and education, the roles played by law and law enforcement, and the installation of current security software and systems. While not necessarily criminal in nature, some stakeholders also expressed deep concerns over the use of computers for cyberbullying, particularly where younger and school aged users are involved. The government’s future directions and recommendations for the technical and administrative management of cybercriminal activity were generally observed to be consistent with stakeholder concerns, with some users also taking practical steps to reduce cybercrime risks. a 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "a8aa8c24c794bc6187257d264e2586a0",
"text": "Bayesian optimization is a powerful framework for minimizing expensive objective functions while using very few function evaluations. It has been successfully applied to a variety of problems, including hyperparameter tuning and experimental design. However, this framework has not been extended to the inequality-constrained optimization setting, particularly the setting in which evaluating feasibility is just as expensive as evaluating the objective. Here we present constrained Bayesian optimization, which places a prior distribution on both the objective and the constraint functions. We evaluate our method on simulated and real data, demonstrating that constrained Bayesian optimization can quickly find optimal and feasible points, even when small feasible regions cause standard methods to fail.",
"title": ""
},
{
"docid": "b84fc12cfc3de65109f789d2a871a38a",
"text": "OBJECTIVE\nTo describe studies evaluating 3 generations of three-dimensional (3D) displays over the course of 20 years.\n\n\nSUMMARY BACKGROUND DATA\nMost previous studies have analyzed performance differences during 3D and two-dimensional (2D) laparoscopy without using appropriate controls that equated conditions in all respects except for 3D or 2D viewing.\n\n\nMETHODS\nDatabases search consisted of MEDLINE and PubMed. The reference lists for all relevant articles were also reviewed for additional articles. The search strategy employed the use of keywords \"3D,\" \"Laparoscopic,\" \"Laparoscopy,\" \"Performance,\" \"Education,\" \"Learning,\" and \"Surgery\" in appropriate combinations.\n\n\nRESULTS\nOur current understanding of the performance metrics between 3D and 2D laparoscopy is mostly from the research with flawed study designs. This review has been written in a qualitative style to explain in detail how prior research has underestimated the potential benefit of 3D displays and the improvements that must be made in future experiments comparing 3D and 2D displays to better determine any advantage of using one display or the other.\n\n\nCONCLUSIONS\nIndividual laparoscopic performance in 3D may be affected by a multitude of factors. It is crucial for studies to measure participant stereoscopic ability, control for system crosstalk, and use validated measures of performance.",
"title": ""
},
{
"docid": "fad4ff82e9b11f28a70749d04dfbf8ca",
"text": "This material is brought to you by the Journals at AIS Electronic Library (AISeL). It has been accepted for inclusion in Communications of the Association for Information Systems by an authorized administrator of AIS Electronic Library (AISeL). For more information, please contact [email protected]. Enterprise architecture (EA) is the definition and representation of a high-level view of an enterprise's business processes and IT systems, their interrelationships, and the extent to which these processes and systems are shared by different parts of the enterprise. EA aims to define a suitable operating platform to support an organisation's future goals and the roadmap for moving towards this vision. Despite significant practitioner interest in the domain, understanding the value of EA remains a challenge. Although many studies make EA benefit claims, the explanations of why and how EA leads to these benefits are fragmented, incomplete, and not grounded in theory. This article aims to address this knowledge gap by focusing on the question: How does EA lead to organisational benefits? Through a careful review of EA literature, the paper consolidates the fragmented knowledge on EA benefits and presents the EA Benefits Model (EABM). The EABM proposes that EA leads to organisational benefits through its impact on four benefit enablers: Organisational Alignment, Information Availability, Resource Portfolio Optimisation, and Resource Complementarity. The article concludes with a discussion of a number of potential avenues for future research, which could build on the findings of this study.",
"title": ""
},
{
"docid": "e7eb15df383c92fcd5a4edc7e27b5265",
"text": "This article presents a new model for word sense disambiguation formulated in terms of evolutionary game theory, where each word to be disambiguated is represented as a node on a graph whose edges represent word relations and senses are represented as classes. The words simultaneously update their class membership preferences according to the senses that neighboring words are likely to choose. We use distributional information to weigh the influence that each word has on the decisions of the others and semantic similarity information to measure the strength of compatibility among the choices. With this information we can formulate the word sense disambiguation problem as a constraint satisfaction problem and solve it using tools derived from game theory, maintaining the textual coherence. The model is based on two ideas: Similar words should be assigned to similar classes and the meaning of a word does not depend on all the words in a text but just on some of them. The article provides an in-depth motivation of the idea of modeling the word sense disambiguation problem in terms of game theory, which is illustrated by an example. The conclusion presents an extensive analysis on the combination of similarity measures to use in the framework and a comparison with state-of-the-art systems. The results show that our model outperforms state-of-the-art algorithms and can be applied to different tasks and in different scenarios.",
"title": ""
}
] |
scidocsrr
|
dd6ca2a600026085dea0f7887cefb5ca
|
Urban mobility study using taxi traces
|
[
{
"docid": "3bc48489d80e824efb7e3512eafc6f30",
"text": "GPS-equipped taxis can be regarded as mobile sensors probing traffic flows on road surfaces, and taxi drivers are usually experienced in finding the fastest (quickest) route to a destination based on their knowledge. In this paper, we mine smart driving directions from the historical GPS trajectories of a large number of taxis, and provide a user with the practically fastest route to a given destination at a given departure time. In our approach, we propose a time-dependent landmark graph, where a node (landmark) is a road segment frequently traversed by taxis, to model the intelligence of taxi drivers and the properties of dynamic road networks. Then, a Variance-Entropy-Based Clustering approach is devised to estimate the distribution of travel time between two landmarks in different time slots. Based on this graph, we design a two-stage routing algorithm to compute the practically fastest route. We build our system based on a real-world trajectory dataset generated by over 33,000 taxis in a period of 3 months, and evaluate the system by conducting both synthetic experiments and in-the-field evaluations. As a result, 60-70% of the routes suggested by our method are faster than the competing methods, and 20% of the routes share the same results. On average, 50% of our routes are at least 20% faster than the competing approaches.",
"title": ""
}
] |
[
{
"docid": "c708834dc328b9ab60471535bdd37cf0",
"text": "Trajectory optimizers are a powerful class of methods for generating goal-directed robot motion. Differential Dynamic Programming (DDP) is an indirect method which optimizes only over the unconstrained control-space and is therefore fast enough to allow real-time control of a full humanoid robot on modern computers. Although indirect methods automatically take into account state constraints, control limits pose a difficulty. This is particularly problematic when an expensive robot is strong enough to break itself. In this paper, we demonstrate that simple heuristics used to enforce limits (clamping and penalizing) are not efficient in general. We then propose a generalization of DDP which accommodates box inequality constraints on the controls, without significantly sacrificing convergence quality or computational effort. We apply our algorithm to three simulated problems, including the 36-DoF HRP-2 robot. A movie of our results can be found here goo.gl/eeiMnn.",
"title": ""
},
{
"docid": "d4c55e8e70392b7f7a9bcfe325b7a0da",
"text": "BACKGROUND\nFollicular mucinosis coexisting with lymphoproliferative disorders has been thoroughly debated. However, it has been rarely reported in association with inflammatory disorders.\n\n\nMETHODS\nThirteen cases have been retrieved, and those with cutaneous lymphoma or alopecia mucinosa were excluded.\n\n\nRESULTS\nFollicular mucinosis was found in the setting of squamous cell carcinoma, seborrheic keratosis, simple prurigo, acne vulgaris, dextrometorphan-induced phototoxicity, polymorphous light eruption (2 cases), insect bite (2 cases), tick bite, discoid lupus erythematosus, drug-related vasculitis, and demodecidosis. Unexpectedly, our observations revealed a preponderating accumulation of mucin related to photo-exposed areas, sun-associated dermatoses, and histopathologic solar elastosis. The amount of mucin filling the follicles apparently correlated with the intensity of perifollicular inflammatory infiltrate, which was present in all cases. The concurrence of dermal interstitial mucin was found in 7 cases (54%).\n\n\nCONCLUSIONS\nThe concurrence of interstitial dermal mucinosis or the potential role of both ultraviolet radiation and the perifollicular inflammatory infiltrates in its pathogenesis deserves further investigations. Precise recognition and understanding of this distinctive, reactive histological pattern may prevent our patients from unnecessary diagnostic and therapeutic strategies.",
"title": ""
},
{
"docid": "589078a80d4034d4929676d359c16398",
"text": "This paper describes the University of Sheffield’s submission for the WMT16 Multimodal Machine Translation shared task, where we participated in Task 1 to develop German-to-English and Englishto-German statistical machine translation (SMT) systems in the domain of image descriptions. Our proposed systems are standard phrase-based SMT systems based on the Moses decoder, trained only on the provided data. We investigate how image features can be used to re-rank the n-best list produced by the SMT model, with the aim of improving performance by grounding the translations on images. Our submissions are able to outperform the strong, text-only baseline system for both directions.",
"title": ""
},
{
"docid": "854d06ba08492ad68ea96c73908f81ca",
"text": "We describe Swapout, a new stochastic training method, that outperforms ResNets of identical network structure yielding impressive results on CIFAR-10 and CIFAR100. Swapout samples from a rich set of architectures including dropout [20], stochastic depth [7] and residual architectures [5, 6] as special cases. When viewed as a regularization method swapout not only inhibits co-adaptation of units in a layer, similar to dropout, but also across network layers. We conjecture that swapout achieves strong regularization by implicitly tying the parameters across layers. When viewed as an ensemble training method, it samples a much richer set of architectures than existing methods such as dropout or stochastic depth. We propose a parameterization that reveals connections to exiting architectures and suggests a much richer set of architectures to be explored. We show that our formulation suggests an efficient training method and validate our conclusions on CIFAR-10 and CIFAR-100 matching state of the art accuracy. Remarkably, our 32 layer wider model performs similar to a 1001 layer ResNet model.",
"title": ""
},
{
"docid": "fe944f1845eca3b0c252ada2c0306d61",
"text": "Now a days sharing the information over internet is becoming a critical issue due to security problems. Hence more techniques are needed to protect the shared data in an unsecured channel. The present work focus on combination of cryptography and steganography to secure the data while transmitting in the network. Firstly the data which is to be transmitted from sender to receiver in the network must be encrypted using the encrypted algorithm in cryptography .Secondly the encrypted data must be hidden in an image or video or an audio file with help of steganographic algorithm. Thirdly by using decryption technique the receiver can view the original data from the hidden image or video or audio file. Transmitting data or document can be done through these ways will be secured. In this paper we implemented three encrypt techniques like DES, AES and RSA algorithm along with steganographic algorithm like LSB substitution technique and compared their performance of encrypt techniques based on the analysis of its stimulated time at the time of encryption and decryption process and also its buffer size experimentally. The entire process has done in C#.",
"title": ""
},
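The preceding abstract pairs a standard cipher (DES, AES or RSA) with LSB-substitution steganography; the authors' own implementation is in C#. Purely as an illustration of the LSB-substitution step, and assuming the encryption stage has already produced a ciphertext byte string (the `pixels` array and helper names below are hypothetical, not from the paper), a minimal Python sketch might look like:

```python
import os

def lsb_embed(cover: bytearray, payload: bytes) -> bytearray:
    """Hide the payload bits in the least-significant bit of each cover byte."""
    bits = []
    for byte in payload:
        bits.extend((byte >> i) & 1 for i in range(7, -1, -1))   # MSB first
    if len(bits) > len(cover):
        raise ValueError("cover too small for payload")
    stego = bytearray(cover)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | bit                        # overwrite the LSB
    return stego

def lsb_extract(stego: bytes, n_bytes: int) -> bytes:
    """Recover n_bytes of payload from the least-significant bits."""
    out = bytearray()
    for j in range(n_bytes):
        byte = 0
        for i in range(8):
            byte = (byte << 1) | (stego[j * 8 + i] & 1)
        out.append(byte)
    return bytes(out)

# Example: pretend these bytes are raw pixel values and the payload is ciphertext
# already produced by the DES/AES/RSA stage described in the abstract.
pixels = bytearray(os.urandom(256))
ciphertext = b"encrypted message"
stego_pixels = lsb_embed(pixels, ciphertext)
assert lsb_extract(stego_pixels, len(ciphertext)) == ciphertext
```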
{
"docid": "bf23473b7fe711e9dce9487c7df5b624",
"text": "A focus on population health management is a necessary ingredient for success under value-based payment models. As part of that effort, nine ways to embrace technology can help healthcare organizations improve population health, enhance the patient experience, and reduce costs: Use predictive analytics for risk stratification. Combine predictive modeling with algorithms for financial risk management. Use population registries to identify care gaps. Use automated messaging for patient outreach. Engage patients with automated alerts and educational campaigns. Automate care management tasks. Build programs and organize clinicians into care teams. Apply new technologies effectively. Use analytics to measure performance of organizations and providers.",
"title": ""
},
{
"docid": "5d195ea9335a1a218db7e31340fe8ea3",
"text": "Over the past decade management of information systems security has emerged to be a challenging task. Given the increased dependence of businesses on computer-based systems and networks, vulnerabilities of systems abound. Clearly, exclusive reliance on either the technical or the managerial controls is inadequate. Rather, a multifaceted approach is needed. In this paper, based on a panel presented at the 2007 Americas Conference on Information Systems held in Keystone, Colorado, we provide examples of failures in information security, identify challenges for the management of information systems security, and make a case that these challenges require new theory development via examining reference disciplines. We identify these disciplines, recognize applicable research methodologies, and discuss desirable properties of applicable theories.",
"title": ""
},
{
"docid": "7e33af6ec0924681d7d51373ca70b957",
"text": "Total order broadcast is a fundamental communication primitive that plays a central role in bringing cheap software-based high availability to a wide range of services. This article studies the practical performance of such a primitive on a cluster of homogeneous machines.\n We present LCR, the first throughput optimal uniform total order broadcast protocol. LCR is based on a ring topology. It only relies on point-to-point inter-process communication and has a linear latency with respect to the number of processes. LCR is also fair in the sense that each process has an equal opportunity of having its messages delivered by all processes.\n We benchmark a C implementation of LCR against Spread and JGroups, two of the most widely used group communication packages. LCR provides higher throughput than the alternatives, over a large number of scenarios.",
"title": ""
},
{
"docid": "e6c0aa517c857ed217fc96aad58d7158",
"text": "Conjoined twins, popularly known as Siamese twins, result from aberrant embryogenesis [1]. It is a rare presentation with an incidence of 1 in 50,000 births. Since 60% of these cases are still births, so the true incidence is estimated to be approximately 1 in 200,000 births [2-4]. This disorder is more common in females with female to male ratio of 3:1 [5]. Conjoined twins are classified based on their site of attachment with a suffix ‘pagus’ which is a Greek term meaning “fixed”. The main types of conjoined twins are omphalopagus (abdomen), thoracopagus (thorax), cephalopagus (ventrally head to umbilicus), ischipagus (pelvis), parapagus (laterally body side), craniopagus (head), pygopagus (sacrum) and rachipagus (vertebral column) [6]. Cephalophagus is an extremely rare variant of conjoined twins with an incidence of 11% among all cases. These types of twins are fused at head, thorax and upper abdominal cavity. They are pre-dominantly of two types: Janiceps (two faces are on the either side of the head) or non Janiceps type (normal single head and face). We hereby report a case of non janiceps cephalopagus conjoined twin, which was diagnosed after delivery.",
"title": ""
},
{
"docid": "68295a432f68900911ba29e5a6ca5e42",
"text": "In many forecasting applications, it is valuable to predict not only the value of a signal at a certain time point in the future, but also the values leading up to that point. This is especially true in clinical applications, where the future state of the patient can be less important than the patient's overall trajectory. This requires multi-step forecasting, a forecasting variant where one aims to predict multiple values in the future simultaneously. Standard methods to accomplish this can propagate error from prediction to prediction, reducing quality over the long term. In light of these challenges, we propose multi-output deep architectures for multi-step forecasting in which we explicitly model the distribution of future values of the signal over a prediction horizon. We apply these techniques to the challenging and clinically relevant task of blood glucose forecasting. Through a series of experiments on a real-world dataset consisting of 550K blood glucose measurements, we demonstrate the effectiveness of our proposed approaches in capturing the underlying signal dynamics. Compared to existing shallow and deep methods, we find that our proposed approaches improve performance individually and capture complementary information, leading to a large improvement over the baseline when combined (4.87 vs. 5.31 absolute percentage error (APE)). Overall, the results suggest the efficacy of our proposed approach in predicting blood glucose level and multi-step forecasting more generally.",
"title": ""
},
{
"docid": "e2d65924f76331ca8425bd5b2f4a3a83",
"text": "This review is intended to highlight some recent and particularly interesting examples of the synthesis of thiophene derivatives by heterocyclization of readily available S-containing alkyne substrates.",
"title": ""
},
{
"docid": "8a6e7ac784b63253497207c63caa1036",
"text": "Synchronized control (SYNC) is widely adopted for doubly fed induction generator (DFIG)-based wind turbine generators (WTGs) in microgrids and weak grids, which applies P-f droop control to achieve grid synchronization instead of phase-locked loop. The DFIG-based WTG with SYNC will reach a new equilibrium of rotor speed under frequency deviation, resulting in the WTG's acceleration or deceleration. The acceleration/deceleration process can utilize the kinetic energy stored in the rotating mass of WTG to provide active power support for the power grid, but the WTG may lose synchronous stability simultaneously. This stability problem occurs when the equilibrium of rotor speed is lost and the rotor speed exceeds the admissible range during the frequency deviations, which will be particularly analyzed in this paper. It is demonstrated that the synchronous stability can be improved by increasing the P-f droop coefficient. However, increasing the P-f droop coefficient will deteriorate the system's small signal stability. To address this contradiction, a modified synchronized control strategy is proposed. Simulation results verify the effectiveness of the analysis and the proposed control strategy.",
"title": ""
},
{
"docid": "7fd48dcff3d5d0e4bfccc3be67db8c00",
"text": "Criollo cacao (Theobroma cacao ssp. cacao) was cultivated by the Mayas over 1500 years ago. It has been suggested that Criollo cacao originated in Central America and that it evolved independently from the cacao populations in the Amazon basin. Cacao populations from the Amazon basin are included in the second morphogeographic group: Forastero, and assigned to T. cacao ssp. sphaerocarpum. To gain further insight into the origin and genetic basis of Criollo cacao from Central America, RFLP and microsatellite analyses were performed on a sample that avoided mixing pure Criollo individuals with individuals classified as Criollo but which might have been introgressed with Forastero genes. We distinguished these two types of individuals as Ancient and Modern Criollo. In contrast to previous studies, Ancient Criollo individuals formerly classified as ‘wild’, were found to form a closely related group together with Ancient Criollo individuals from South America. The Ancient Criollo trees were also closer to Colombian-Ecuadorian Forastero individuals than these Colombian-Ecuadorian trees were to other South American Forastero individuals. RFLP and microsatellite analyses revealed a high level of homozygosity and significantly low genetic diversity within the Ancient Criollo group. The results suggest that the Ancient Criollo individuals represent the original Criollo group. The results also implies that this group does not represent a separate subspecies and that it probably originated from a few individuals in South America that may have been spread by man within Central America.",
"title": ""
},
{
"docid": "4a87f5e0dfbb1007847ad57b287ae473",
"text": "The aim of this study was to clarify the effects of yohimbine hydrochloride and three intracavernosa l vasoactive agents in patients with erectile dysfunction and the effect of dihydrotestos terone gel in men with andropausal symptoms. The effect of transurethral resecti on of prostate (TURP) on sexual functions was also examined. Altogether 406 patients were included in five studies, and all patients were examined and controlled in the Oulu University Hospital during the years 1991-1998. Twenty-nine patients with mixed-type erectile dysfunction (ED) were recruited into a randomized, controlled, double-blind crossover comparison of placebo and high-dose yohimbine hydrochloride (36 mg per day orally). Positive clinical response s were obtained in 44% of the patients during yohimbine treatment and in 48% during placebo treatment. Thirty patient s with ED underwent an intracavernosa l injection test (ICI) using three different active agents (prostaglandin E1(PGE1) , papaverine hydrochloride (PV), moxisylyte (MS)) and physiological saline. PGE1 produced significantly better rigidity than either PV or MS. Sixty-nine patients with ED who had started ICI therapy with PGE1 at least three years previously were invited to acontrol exami nation to find out the long-term outcomeof this treatment and to evaluate thepatients’ overal satisfacti on with their sexual life.46.4% of thepatient shad discontinued PGE1 therapy, the mean time of using PGE1 having been 23.3 months (range 0-48 months) . 34.8% of the patientsreported that their own spontaneous erectionshad improved during thePGE1 therapy. The sexual functions of 155 patient s with benign prostati c hyperplasia (BPH) were evaluated before TURP and 6 and 12 months afterwards with questionnaires . Only 26% of the patient s had completel y satisfactory erections before TURP, while 22% had satisfactory erections 6 months later and 24% 12 months later. The majority of patients (about 70%) were satisfied with their sexual life both before and after theprocedure. 123 men with symptoms of andropause participated in a randomized, placebo-controlled study to assess the effects of dihydrotestosterone(DHT) gel in men with andropausal symptoms. Thedrug was administered transdermally once aday during six months. Early morning erections improved significantly (p<0.003) in the DHT group by the three-mont h control, the ability to maintain erections was better, and there was also a positive effect on libido. In the patient s with a elevated (>12) international index of the prostati c symptoms score (I-PSS) before DHT treatment , I-PSS decreased from 17.7 to 12.3 points. As a conclusion yohimbine hydrochloride is no better than placebo in the treatment of patients with mixed-type ED. PGE1, PV and MS are well tolerated, and PGE1 was shown to be the most effective drug of the three. ICI therapy with PGE1 in long-term use is safe and effective. Sexual functions in men did not change after TURP, and this group of aging men were fairly satisfied with their sexual lif e despite of the fact that they had some ED and one third of the patients had not had intercourse during the previous year. Transdermal administration of DHT in aging men improves sexual function.",
"title": ""
},
{
"docid": "0122057f9fd813efd9f9e0db308fe8d9",
"text": "Noun phrases in queries are identified and classified into four types: proper names, dictionary phrases, simple phrases and complex phrases. A document has a phrase if all content words in the phrase are within a window of a certain size. The window sizes for different types of phrases are different and are determined using a decision tree. Phrases are more important than individual terms. Consequently, documents in response to a query are ranked with matching phrases given a higher priority. We utilize WordNet to disambiguate word senses of query terms. Whenever the sense of a query term is determined, its synonyms, hyponyms, words from its definition and its compound words are considered for possible additions to the query. Experimental results show that our approach yields between 23% and 31% improvements over the best-known results on the TREC 9, 10 and 12 collections for short (title only) queries, without using Web data.",
"title": ""
},
{
"docid": "1538bcc562f0360ab005f757c9e4562f",
"text": "This paper presents the novel task of best topic word selection, that is the selection of the topic word that is the best label for a given topic, as a means of enhancing the interpretation and visualisation of topic models. We propose a number of features intended to capture the best topic word, and show that, in combination as inputs to a reranking model, we are able to consistently achieve results above the baseline of simply selecting the highest-ranked topic word. This is the case both when training in-domain over other labelled topics for that topic model, and cross-domain, using only labellings from independent topic models learned over document collections from different domains and genres.",
"title": ""
},
{
"docid": "a8fe62e387610682f90018ca1a56ba04",
"text": "Aarskog-Scott syndrome (AAS), also known as faciogenital dysplasia (FGD, OMIM # 305400), is an X-linked disorder of recessive inheritance, characterized by short stature and facial, skeletal, and urogenital abnormalities. AAS is caused by mutations in the FGD1 gene (Xp11.22), with over 56 different mutations identified to date. We present the clinical and molecular analysis of four unrelated families of Mexican origin with an AAS phenotype, in whom FGD1 sequencing was performed. This analysis identified two stop mutations not previously reported in the literature: p.Gln664* and p.Glu380*. Phenotypically, every male patient met the clinical criteria of the syndrome, whereas discrepancies were found between phenotypes in female patients. Our results identify two novel mutations in FGD1, broadening the spectrum of reported mutations; and provide further delineation of the phenotypic variability previously described in AAS.",
"title": ""
},
{
"docid": "ba4cf5c09f167f74e573fdc196ac41a4",
"text": "In this paper we first give a presentation of the history and organisation of the electricity market in Scandinavia, which has been gradually restructured over the last decade. A futures market has been in operation there since September 1995. We analyse the historical prices in the spot and futures markets, using general theory for pricing of commodities futures contracts. We find that the futures prices on average exceeded the actual spot price at delivery. Hence, we conclude that there is a negative risk premium in the electricity futures market. This result contradicts the findings in most other commodities markets, where the risk premium from holding a futures contract tend to be zero or positive. Physical factors like unexpected precipitation can contribute to explain parts of the observations. However, we also identify the difference in flexibility between the supply and demand sides of the electricity market, leaving the demand side with higher incentive to hedge their positions in the futures market, as a possible explanation for the negative risk premium. The limited data available might not be sufficient to draw fully conclusive results. However, the analysis described in the paper can be repeated with higher significance in a few years from now.",
"title": ""
},
{
"docid": "2ef2e4f2d001ab9221b3d513627bcd0b",
"text": "Semantic segmentation is in-demand in satellite imagery processing. Because of the complex environment, automatic categorization and segmentation of land cover is a challenging problem. Solving it can help to overcome many obstacles in urban planning, environmental engineering or natural landscape monitoring. In this paper, we propose an approach for automatic multi-class land segmentation based on a fully convolutional neural network of feature pyramid network (FPN) family. This network is consisted of pre-trained on ImageNet Resnet50 encoder and neatly developed decoder. Based on validation results, leaderboard score and our own experience this network shows reliable results for the DEEPGLOBE - CVPR 2018 land cover classification sub-challenge. Moreover, this network moderately uses memory that allows using GTX 1080 or 1080 TI video cards to perform whole training and makes pretty fast predictions.",
"title": ""
},
{
"docid": "31fca4faa53520b240267562c9e394fe",
"text": "Purpose – The aim of this study was two-fold: first, to examine the noxious effects of presenteeism on employees’ work well-being in a cross-cultural context involving Chinese and British employees; second, to explore the role of supervisory support as a pan-cultural stress buffer in the presenteeism process. Design/methodology/approach – Using structured questionnaires, the authors compared data collected from samples of 245 Chinese and 128 British employees working in various organizations and industries. Findings – Cross-cultural comparison revealed that the act of presenteeism was more prevalent among Chinese and they reported higher levels of strains than their British counterparts. Hierarchical regression analyses showed that presenteeism had noxious effects on exhaustion for both Chinese and British employees. Moreover, supervisory support buffered the negative impact of presenteeism on exhaustion for both Chinese and British employees. Specifically, the negative relation between presenteeism and exhaustion was stronger for those with more supervisory support. Practical implications – Presenteeism may be used as a career-protecting or career-promoting tactic. However, the negative effects of this behavior on employees’ work well-being across the culture divide should alert us to re-think its pros and cons as a career behavior. Employees in certain cultures (e.g. the hardworking Chinese) may exhibit more presenteeism behaviour, thus are in greater risk of ill-health. Originality/value – This is the first cross-cultural study demonstrating the universality of the act of presenteeism and its damaging effects on employees’ well-being. The authors’ findings of the buffering role of supervisory support across cultural contexts highlight the necessity to incorporate resources in mitigating the harmful impact of presenteeism.",
"title": ""
}
] |
scidocsrr
|
c4fecb931da091a5614c02f88718a6a7
|
Major Traits / Qualities of Leadership
|
[
{
"docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "ecbdb56c52a59f26cf8e33fc533d608f",
"text": "The ethical nature of transformational leadership has been hotly debated. This debate is demonstrated in the range of descriptors that have been used to label transformational leaders including narcissistic, manipulative, and self-centred, but also ethical, just and effective. Therefore, the purpose of the present research was to address this issue directly by assessing the statistical relationship between perceived leader integrity and transformational leadership using the Perceived Leader Integrity Scale (PLIS) and the Multi-Factor Leadership Questionnaire (MLQ). In a national sample of 1354 managers a moderate to strong positive relationship was found between perceived integrity and the demonstration of transformational leadership behaviours. A similar relationship was found between perceived integrity and developmental exchange leadership. A systematic leniency bias was identified when respondents rated subordinates vis-à-vis peer ratings. In support of previous findings, perceived integrity was also found to correlate positively with leader and organisational effectiveness measures.",
"title": ""
}
] |
[
{
"docid": "5fe1fa98c953d778ee27a104802e5f2b",
"text": "We describe two general approaches to creating document-level maps of science. To create a local map one defines and directly maps a sample of data, such as all literature published in a set of information science journals. To create a global map of a research field one maps ‘all of science’ and then locates a literature sample within that full context. We provide a deductive argument that global mapping should create more accurate partitions of a research field than local mapping, followed by practical reasons why this may not be so. The field of information science is then mapped at the document level using both local and global methods to provide a case illustration of the differences between the methods. Textual coherence is used to assess the accuracies of both maps. We find that document clusters in the global map have significantly higher coherence than those in the local map, and that the global map provides unique insights into the field of information science that cannot be discerned from the local map. Specifically, we show that information science and computer science have a large interface and that computer science is the more progressive discipline at that interface. We also show that research communities in temporally linked threads have a much higher coherence than isolated communities, and that this feature can be used to predict which threads will persist into a subsequent year. Methods that could increase the accuracy of both local and global maps in the future are also discussed.",
"title": ""
},
{
"docid": "b252aea38a537a22ab34fdf44e9443d2",
"text": "The objective of this study is to describe the case of a patient presenting advanced epidermoid carcinoma of the penis associated to myiasis. A 41-year-old patient presenting with a necrotic lesion of the distal third of the penis infested with myiasis was attended in the emergency room of our hospital and was submitted to an urgent penectomy. This is the first case of penile cancer associated to myiasis described in the literature. This case reinforces the need for educative campaigns to reduce the incidence of this disease in developing countries.",
"title": ""
},
{
"docid": "e6db8cbbb3f7bac211f672ffdef44fb6",
"text": "This paper aims to develop a benchmarking framework that evaluates the cold chain performance of a company, reveals its strengths and weaknesses and finally identifies and prioritizes potential alternatives for continuous improvement. A Delphi-AHP-TOPSIS based methodology has divided the whole benchmarking into three stages. The first stage is Delphi method, where identification, synthesis and prioritization of key performance factors and sub-factors are done and a novel consistent measurement scale is developed. The second stage is Analytic Hierarchy Process (AHP) based cold chain performance evaluation of a selected company against its competitors, so as to observe cold chain performance of individual factors and sub-factors, as well as overall performance index. And, the third stage is Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) based assessment of possible alternatives for the continuous improvement of the company’s cold chain performance. Finally a demonstration of proposed methodology in a retail industry is presented for better understanding. The proposed framework can assist managers to comprehend the present strengths and weaknesses of their cold. They can identify good practices from the market leader and can benchmark them for improving weaknesses keeping in view the current operational conditions and strategies of the company. This framework also facilitates the decision makers to better understand the complex relationships of the relevant cold chain performance factors in decision-making. 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
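The abstract above applies TOPSIS in its third stage to rank improvement alternatives. The sketch below is the standard TOPSIS calculation, not the authors' code, and the weights and scores in the usage example are invented purely for illustration (in practice the criteria weights would come from the AHP stage):

```python
import numpy as np

def topsis(X, weights, benefit):
    """Rank alternatives with TOPSIS.

    X        : (n_alternatives, n_criteria) decision matrix
    weights  : criteria weights summing to 1 (e.g. taken from an AHP stage)
    benefit  : boolean array, True where larger criterion values are better
    Returns the closeness coefficient of each alternative (higher is better).
    """
    X = np.asarray(X, dtype=float)
    w = np.asarray(weights, dtype=float)
    benefit = np.asarray(benefit, dtype=bool)

    R = X / np.sqrt((X ** 2).sum(axis=0))        # vector normalisation
    V = R * w                                    # weighted normalised matrix

    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))

    d_plus  = np.sqrt(((V - ideal) ** 2).sum(axis=1))
    d_minus = np.sqrt(((V - anti) ** 2).sum(axis=1))
    return d_minus / (d_plus + d_minus)

# Hypothetical example: 3 improvement alternatives scored on 4 criteria,
# the last criterion being a cost (smaller is better).
scores = topsis([[7, 9, 9, 8],
                 [8, 7, 8, 7],
                 [9, 6, 8, 9]],
                weights=[0.4, 0.3, 0.2, 0.1],
                benefit=[True, True, True, False])
print(scores.argsort()[::-1])   # alternatives ranked best-first
```

Higher closeness coefficients indicate alternatives nearer the ideal solution and farther from the anti-ideal one.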
{
"docid": "72420289372499b50e658ef0957a3ad9",
"text": "A ripple current cancellation technique injects AC current into the output voltage bus of a converter that is equal and opposite to the normal converter ripple current. The output current ripple is ideally zero, leading to ultra-low noise converter output voltages. The circuit requires few additional components, no active circuits are required. Only an additional filter inductor winding, an auxiliary inductor, and small capacitor are required. The circuit utilizes leakage inductance of the modified filter inductor as all or part of the required auxiliary inductance. Ripple cancellation is independent of switching frequency, duty cycle, and other converter parameters. The circuit eliminates ripple current in both continuous conduction mode and discontinuous conduction mode. Experimental results provide better than an 80/spl times/ ripple current reduction.",
"title": ""
},
{
"docid": "19f1a6c9c5faf73b8868164e8bb310c6",
"text": "Holoprosencephaly refers to a spectrum of craniofacial malformations including cyclopia, ethmocephaly, cebocephaly, and premaxillary agenesis. Etiologic heterogeneity is well documented. Chromosomal, genetic, and teratogenic factors have been implicated. Recognition of holoprosencephaly as a developmental field defect stresses the importance of close scrutiny of relatives for mild forms such as single median incisor, hypotelorism, bifid uvula, or pituitary deficiency.",
"title": ""
},
{
"docid": "c0b40058d003cdaa80d54aa190e48bc2",
"text": "Visual tracking plays an important role in many computer vision tasks. A common assumption in previous methods is that the video frames are blur free. In reality, motion blurs are pervasive in the real videos. In this paper we present a novel BLUr-driven Tracker (BLUT) framework for tracking motion-blurred targets. BLUT actively uses the information from blurs without performing debluring. Specifically, we integrate the tracking problem with the motion-from-blur problem under a unified sparse approximation framework. We further use the motion information inferred by blurs to guide the sampling process in the particle filter based tracking. To evaluate our method, we have collected a large number of video sequences with significatcant motion blurs and compared BLUT with state-of-the-art trackers. Experimental results show that, while many previous methods are sensitive to motion blurs, BLUT can robustly and reliably track severely blurred targets.",
"title": ""
},
{
"docid": "ea42c551841cc53c84c63f72ee9be0ae",
"text": "Phishing is a prevalent issue of today’s Internet. Previous approaches to counter phishing do not draw on a crucial factor to combat the threat the users themselves. We believe user education about the dangers of the Internet is a further key strategy to combat phishing. For this reason, we developed an Android app, a game called –NoPhish–, which educates the user in the detection of phishing URLs. It is crucial to evaluate NoPhish with respect to its effectiveness and the users’ knowledge retention. Therefore, we conducted a lab study as well as a retention study (five months later). The outcomes of the studies show that NoPhish helps users make better decisions with regard to the legitimacy of URLs immediately after playing NoPhish as well as after some time has passed. The focus of this paper is on the description and the evaluation of both studies. This includes findings regarding those types of URLs that are most difficult to decide on as well as ideas to further improve NoPhish.",
"title": ""
},
{
"docid": "b468726c2901146f1ca02df13936e968",
"text": "Chinchillas have been successfully maintained in captivity for almost a century. They have only recently been recognized as excellent, long-lived, and robust pets. Most of the literature on diseases of chinchillas comes from farmed chinchillas, whereas reports of pet chinchilla diseases continue to be sparse. This review aims to provide information on current, poorly reported disorders of pet chinchillas, such as penile problems, urolithiasis, periodontal disease, otitis media, cardiac disease, pseudomonadal infections, and giardiasis. This review is intended to serve as a complement to current veterinary literature while providing valuable and clinically relevant information for veterinarians treating chinchillas.",
"title": ""
},
{
"docid": "872370f375d779435eb098571f3ab763",
"text": "The aim of this study was to explore the potential of fused-deposition 3-dimensional printing (FDM 3DP) to produce modified-release drug loaded tablets. Two aminosalicylate isomers used in the treatment of inflammatory bowel disease (IBD), 5-aminosalicylic acid (5-ASA, mesalazine) and 4-aminosalicylic acid (4-ASA), were selected as model drugs. Commercially produced polyvinyl alcohol (PVA) filaments were loaded with the drugs in an ethanolic drug solution. A final drug-loading of 0.06% w/w and 0.25% w/w was achieved for the 5-ASA and 4-ASA strands, respectively. 10.5mm diameter tablets of both PVA/4-ASA and PVA/5-ASA were subsequently printed using an FDM 3D printer, and varying the weight and densities of the printed tablets was achieved by selecting the infill percentage in the printer software. The tablets were mechanically strong, and the FDM 3D printing was shown to be an effective process for the manufacture of the drug, 5-ASA. Significant thermal degradation of the active 4-ASA (50%) occurred during printing, however, indicating that the method may not be appropriate for drugs when printing at high temperatures exceeding those of the degradation point. Differential scanning calorimetry (DSC) and thermogravimetric analysis (TGA) of the formulated blends confirmed these findings while highlighting the potential of thermal analytical techniques to anticipate drug degradation issues in the 3D printing process. The results of the dissolution tests conducted in modified Hank's bicarbonate buffer showed that release profiles for both drugs were dependent on both the drug itself and on the infill percentage of the tablet. Our work here demonstrates the potential role of FDM 3DP as an efficient and low-cost alternative method of manufacturing individually tailored oral drug dosage, and also for production of modified-release formulations.",
"title": ""
},
{
"docid": "1b30c14536db1161b77258b1ce213fbb",
"text": "Click-through rate (CTR) prediction and relevance ranking are two fundamental problems in web advertising. In this study, we address the problem of modeling the relationship between CTR and relevance for sponsored search. We used normalized relevance scores comparable across all queries to represent relevance when modeling with CTR, instead of directly using human judgment labels or relevance scores valid only within same query. We classified clicks by identifying their relevance quality using dwell time and session information, and compared all clicks versus selective clicks effects when modeling relevance.\n Our results showed that the cleaned click signal outperforms raw click signal and others we explored, in terms of relevance score fitting. The cleaned clicks include clicks with dwell time greater than 5 seconds and last clicks in session. Besides traditional thoughts that there is no linear relation between click and relevance, we showed that the cleaned click based CTR can be fitted well with the normalized relevance scores using a quadratic regression model. This relevance-click model could help to train ranking models using processed click feedback to complement expensive human editorial relevance labels, or better leverage relevance signals in CTR prediction.",
"title": ""
},
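The abstract above reports that cleaned click-based CTR fits the normalized relevance score well with a quadratic regression model. As a toy illustration only (the numbers below are invented and are not the paper's data), such a fit is a few lines with NumPy:

```python
import numpy as np

# Hypothetical arrays: cleaned CTR per query-ad pair and its normalised relevance score.
cleaned_ctr = np.array([0.02, 0.05, 0.11, 0.18, 0.26, 0.35])
relevance   = np.array([0.10, 0.22, 0.41, 0.55, 0.68, 0.80])

# Fit relevance as a quadratic function of cleaned CTR.
coeffs = np.polyfit(cleaned_ctr, relevance, deg=2)
model = np.poly1d(coeffs)

print(model(0.15))   # predicted normalised relevance for a 15% cleaned CTR
```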
{
"docid": "ae800ced5663d320fcaca2df6f6bf793",
"text": "Stowage planning for container vessels concerns the core competence of the shipping lines. As such, automated stowage planning has attracted much research in the past two decades, but with few documented successes. In an ongoing project, we are developing a prototype stowage planning system aiming for large containerships. The system consists of three modules: the stowage plan generator, the stability adjustment module, and the optimization engine. This paper mainly focuses on the stability adjustment module. The objective of the stability adjustment module is to check the global ship stability of the stowage plan produced by the stowage plan generator and resolve the stability issues by applying a heuristic algorithm to search for alternative feasible locations for containers that violate some of the stability criteria. We demonstrate that the procedure proposed is capable of solving the stability problems for a large containership with more than 5000 TEUs. Keywords— Automation, Stowage Planning, Local Search, Heuristic algorithm, Stability Optimization",
"title": ""
},
{
"docid": "f289b58d16bf0b3a017a9b1c173cbeb6",
"text": "All hospitalisations for pulmonary arterial hypertension (PAH) in the Scottish population were examined to determine the epidemiological features of PAH. These data were compared with expert data from the Scottish Pulmonary Vascular Unit (SPVU). Using the linked Scottish Morbidity Record scheme, data from all adults aged 16-65 yrs admitted with PAH (idiopathic PAH, pulmonary hypertension associated with congenital heart abnormalities and pulmonary hypertension associated with connective tissue disorders) during the period 1986-2001 were identified. These data were compared with the most recent data in the SPVU database (2005). Overall, 374 Scottish males and females aged 16-65 yrs were hospitalised with incident PAH during 1986-2001. The annual incidence of PAH was 7.1 cases per million population. On December 31, 2002, there were 165 surviving cases, giving a prevalence of PAH of 52 cases per million population. Data from the SPVU were available for 1997-2006. In 2005, the last year with a complete data set, the incidence of PAH was 7.6 cases per million population and the corresponding prevalence was 26 cases per million population. Hospitalisation data from the Scottish Morbidity Record scheme gave higher prevalences of pulmonary arterial hypertension than data from the expert centres (Scotland and France). The hospitalisation data may overestimate the true frequency of pulmonary arterial hypertension in the population, but it is also possible that the expert centres underestimate the true frequency.",
"title": ""
},
{
"docid": "99dcde334931eeb8e20ce7aa3c7982d5",
"text": "We describe a framework for multiscale image analysis in which line segments play a role analogous to the role played by points in wavelet analysis. The framework has five key components. The beamlet dictionary is a dyadicallyorganized collection of line segments, occupying a range of dyadic locations and scales, and occurring at a range of orientations. The beamlet transform of an image f(x, y) is the collection of integrals of f over each segment in the beamlet dictionary; the resulting information is stored in a beamlet pyramid. The beamlet graph is the graph structure with pixel corners as vertices and beamlets as edges; a path through this graph corresponds to a polygon in the original image. By exploiting the first four components of the beamlet framework, we can formulate beamlet-based algorithms which are able to identify and extract beamlets and chains of beamlets with special properties. In this paper we describe a four-level hierarchy of beamlet algorithms. The first level consists of simple procedures which ignore the structure of the beamlet pyramid and beamlet graph; the second level exploits only the parent-child dependence between scales; the third level incorporates collinearity and co-curvity relationships; and the fourth level allows global optimization over the full space of polygons in an image. These algorithms can be shown in practice to have suprisingly powerful and apparently unprecedented capabilities, for example in detection of very faint curves in very noisy data. We compare this framework with important antecedents in image processing (Brandt and Dym; Horn and collaborators; Götze and Druckenmiller) and in geometric measure theory (Jones; David and Semmes; and Lerman).",
"title": ""
},
{
"docid": "faa1a49f949d5ba997f4285ef2e708b2",
"text": "Appendiceal mucinous neoplasms sometimes present with peritoneal dissemination, which was previously a lethal condition with a median survival of about 3 years. Traditionally, surgical treatment consisted of debulking that was repeated until no further benefit could be achieved; systemic chemotherapy was sometimes used as a palliative option. Now, visible disease tends to be removed through visceral resections and peritonectomy. To avoid entrapment of tumour cells at operative sites and to destroy small residual mucinous tumour nodules, cytoreductive surgery is combined with intraperitoneal chemotherapy with mitomycin at 42 degrees C. Fluorouracil is then given postoperatively for 5 days. If the mucinous neoplasm is minimally invasive and cytoreduction complete, these treatments result in a 20-year survival of 70%. In the absence of a phase III study, this new combined treatment should be regarded as the standard of care for epithelial appendiceal neoplasms and pseudomyxoma peritonei syndrome.",
"title": ""
},
{
"docid": "981e88bd1f4187972f8a3d04960dd2dd",
"text": "The purpose of this study is to examine the appropriateness and effectiveness of the assistive use of robot projector based augmented reality (AR) to children’s dramatic activity. A system that employ a mobile robot mounted with a projector-camera is used to help manage children’s dramatic activity by projecting backdrops and creating a synthetic video imagery, where e.g. children’s faces is replaced with graphic characters. In this Delphi based study, a panel consist of 33 professionals include 11children education experts (college professors majoring in early childhood education), children field educators (kindergarten teachers and principals), and 11 AR and robot technology experts. The experts view the excerpts from the video taken from the actual usage situation. In the first stage of survey, we collect the panel's perspectives on applying the latest new technologies for instructing dramatic activity to children using an open ended questionnaire. Based on the results of the preliminary survey, the subsequent questionnaires (with 5 point Likert scales) are developed for the second and third in-depth surveys. In the second survey, 36 questions is categorized into 5 areas: (1) developmental and educational values, (2) impact on the teacher's role, (3) applicability and special considerations in the kindergarten, (4) external environment and required support, and (5) criteria for the selection of the story in the drama activity. The third survey mainly investigate how AR or robots can be of use in children’s dramatic activity in other ways (than as originally given) and to other educational domains. The surveys show that experts most appreciated the use of AR and robot for positive educational and developmental effects due to the children’s keen interests and in turn enhanced immersion into the dramatic activity. Consequently, the experts recommended that proper stories, scenes and technological realizations need to be selected carefully, in the light of children’s development, while lever aging on strengths of the technologies used.",
"title": ""
},
{
"docid": "26dc59c30371f1d0b2ff2e62a96f9b0f",
"text": "Hindi is very complex language with large number of phonemes and being used with various ascents in different regions in India. In this manuscript, speaker dependent and independent isolated Hindi word recognizers using the Hidden Markov Model (HMM) is implemented, under noisy environment. For this study, a set of 10 Hindi names has been chosen as a test set for which the training and testing is performed. The scheme instigated here implements the Mel Frequency Cepstral Coefficients (MFCC) in order to compute the acoustic features of the speech signal. Then, K-means algorithm is used for the codebook generation by performing clustering over the obtained feature space. Baum Welch algorithm is used for re-estimating the parameters, and finally for deciding the recognized Hindi word whose model likelihood is highest, Viterbi algorithm has been implemented; for the given HMM. This work resulted in successful recognition with 98. 6% recognition rate for speaker dependent recognition, for total of 10 speakers (6 male, 4 female) and 97. 5% for speaker independent isolated word recognizer for 10 speakers (male).",
"title": ""
},
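The recognizer described above scores an observation sequence against one trained HMM per word and picks the word with the highest likelihood. A minimal sketch of that decision step is shown below, assuming discrete codebook observations (from K-means over MFCC frames) and log-domain parameters already estimated with Baum-Welch; the `word_models` dictionary structure is an assumption of the sketch, not the paper's code:

```python
import numpy as np

def viterbi_log_likelihood(obs, log_pi, log_A, log_B):
    """Best-path log-likelihood of a discrete observation sequence under one HMM.

    obs    : sequence of codebook indices (e.g. from K-means over MFCC frames)
    log_pi : (N,)   log initial state probabilities
    log_A  : (N, N) log state transition probabilities
    log_B  : (N, M) log emission probabilities over the M codebook symbols
    """
    delta = log_pi + log_B[:, obs[0]]
    for o in obs[1:]:
        # For every next state, keep the best previous state, then emit symbol o.
        delta = np.max(delta[:, None] + log_A, axis=0) + log_B[:, o]
    return delta.max()

def recognise(obs, word_models):
    """Return the word whose HMM gives the highest Viterbi score."""
    return max(word_models,
               key=lambda w: viterbi_log_likelihood(obs, *word_models[w]))

# Usage (hypothetical): word_models maps each of the 10 Hindi words to its
# trained (log_pi, log_A, log_B) triple; obs is the quantized test utterance.
# best_word = recognise(obs, word_models)
```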
{
"docid": "58702f835df43337692f855f35a9f903",
"text": "A dual-mode wide-band transformer based VCO is proposed. The two port impedance of the transformer based resonator is analyzed to derive the optimum primary to secondary capacitor load ratio, for robust mode selectivity and minimum power consumption. Fabricated in a 16nm FinFET technology, the design achieves 2.6× continuous tuning range spanning 7-to-18.3 GHz using a coil area of 120×150 μm2. The absence of lossy switches helps in maintaining phase noise of -112 to -100 dBc/Hz at 1 MHz offset, across the entire tuning range. The VCO consumes 3-4.4 mW and realizes power frequency tuning normalized figure of merit of 12.8 and 2.4 dB at 7 and 18.3 GHz respectively.",
"title": ""
},
{
"docid": "4d8c869c9d6e1d7ba38f56a124b84412",
"text": "We propose a novel reversible jump Markov chain Monte Carlo (MCMC) simulated an nealing algorithm to optimize radial basis function (RBF) networks. This algorithm enables us to maximize the joint posterior distribution of the network parameters and the number of basis functions. It performs a global search in the joint space of the pa rameters and number of parameters, thereby surmounting the problem of local minima. We also show that by calibrating a Bayesian model, we can obtain the classical AIC, BIC and MDL model selection criteria within a penalized likelihood framework. Finally, we show theoretically and empirically that the algorithm converges to the modes of the full posterior distribution in an efficient way.",
"title": ""
},
{
"docid": "ceb59133deb7828edaf602308cb3450a",
"text": "Abstract While there has been a great deal of interest in the modelling of non-linearities and regime shifts in economic time series, there is no clear consensus regarding the forecasting abilities of these models. In this paper we develop a general approach to predict multiple time series subject to Markovian shifts in the regime. The feasibility of the proposed forecasting techniques in empirical research is demonstrated and their forecast accuracy is evaluated.",
"title": ""
},
{
"docid": "55ffe87f74194ab3de60fea9d888d9ad",
"text": "A new priority queue implementation for the future event set problem is described in this article. The new implementation is shown experimentally to be O(1) in queue size for the priority increment distributions recently considered by Jones in his review article. It displays hold times three times shorter than splay trees for a queue size of 10,000 events. The new implementation, called a calendar queue, is a very simple structure of the multiple list variety using a novel solution to the overflow problem.",
"title": ""
}
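The calendar queue described above hashes each event's priority into a "day" bucket of a circular "year" of buckets. A much-simplified Python sketch of that idea follows; it fixes the bucket count and width, omits the adaptive resizing the paper uses to keep operations O(1) in queue size, and assumes, as in event-driven simulation, that new events are never scheduled earlier than the most recently dequeued one:

```python
import bisect

class CalendarQueue:
    """Simplified calendar queue: fixed bucket count and width, no resizing."""

    def __init__(self, num_buckets=16, bucket_width=1.0):
        self.nb = num_buckets
        self.width = bucket_width
        self.buckets = [[] for _ in range(num_buckets)]
        self.size = 0
        self._seq = 0                    # tie-breaker so payloads are never compared
        self.current = 0                 # bucket currently being serviced
        self.bucket_top = bucket_width   # priority bound of that bucket's current "day"

    def enqueue(self, priority, payload):
        i = int(priority / self.width) % self.nb       # which "day of the year"
        bisect.insort(self.buckets[i], (priority, self._seq, payload))
        self._seq += 1
        self.size += 1

    def dequeue(self):
        if self.size == 0:
            raise IndexError("dequeue from empty calendar queue")
        for _ in range(self.nb):                        # scan at most one full "year"
            bucket = self.buckets[self.current]
            if bucket and bucket[0][0] < self.bucket_top:
                priority, _, payload = bucket.pop(0)
                self.size -= 1
                return priority, payload
            self.current = (self.current + 1) % self.nb
            self.bucket_top += self.width
        # Rare fallback: nothing is due this "year", so jump to the global minimum.
        i = min((b for b in range(self.nb) if self.buckets[b]),
                key=lambda b: self.buckets[b][0][0])
        priority, _, payload = self.buckets[i].pop(0)
        self.current = i
        self.bucket_top = (int(priority / self.width) + 1) * self.width
        self.size -= 1
        return priority, payload
```

A binary heap would serve the same purpose functionally; the bucket indexing is what gives the calendar queue its O(1) average hold time in the paper's experiments.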
] |
scidocsrr
|
c24da78b14df6173474ad114a6163879
|
First-Person Action-Object Detection with EgoNet
|
[
{
"docid": "c2402cea6e52ee98bc0c3de084580194",
"text": "We present a video summarization approach that discovers the story of an egocentric video. Given a long input video, our method selects a short chain of video sub shots depicting the essential events. Inspired by work in text analysis that links news articles over time, we define a random-walk based metric of influence between sub shots that reflects how visual objects contribute to the progression of events. Using this influence metric, we define an objective for the optimal k-subs hot summary. Whereas traditional methods optimize a summary's diversity or representative ness, ours explicitly accounts for how one sub-event \"leads to\" another-which, critically, captures event connectivity beyond simple object co-occurrence. As a result, our summaries provide a better sense of story. We apply our approach to over 12 hours of daily activity video taken from 23 unique camera wearers, and systematically evaluate its quality compared to multiple baselines with 34 human subjects.",
"title": ""
},
{
"docid": "4b8bc69ff0edde314efbe626e334ea12",
"text": "We present a novel dataset and novel algorithms for the problem of detecting activities of daily living (ADL) in firstperson camera views. We have collected a dataset of 1 million frames of dozens of people performing unscripted, everyday activities. The dataset is annotated with activities, object tracks, hand positions, and interaction events. ADLs differ from typical actions in that they can involve long-scale temporal structure (making tea can take a few minutes) and complex object interactions (a fridge looks different when its door is open). We develop novel representations including (1) temporal pyramids, which generalize the well-known spatial pyramid to approximate temporal correspondence when scoring a model and (2) composite object models that exploit the fact that objects look different when being interacted with. We perform an extensive empirical evaluation and demonstrate that our novel representations produce a two-fold improvement over traditional approaches. Our analysis suggests that real-world ADL recognition is “all about the objects,” and in particular, “all about the objects being interacted with.”",
"title": ""
}
] |
[
{
"docid": "b5e19f1609aaaf1ad1c91eb3a846609c",
"text": "In this paper, Radial Basis Function (RBF) neural Network has been implemented on eight directional values of gradient features for handwritten Hindi character recognition. The character recognition system was trained by using different samples in different handwritings collected of various people of different age groups. The Radial Basis Function network with one input and one output layer has been used for the training of RBF Network. Experiment has been performed to study the recognition accuracy, training time and classification time of RBF neural network. The recognition accuracy, training time and classification time achieved by implementing the RBF network have been compared with the result achieved in previous related work i.e. Back propagation Neural Network. Comparative result shows that the RBF with directional feature provides slightly less recognition accuracy, reduced training and classification time.",
"title": ""
},
{
"docid": "83c9945f61900f4f15c09ff20eee09bc",
"text": "Rendering the user's body in virtual reality increases immersion and presence the illusion of \"being there\". Recent technology enables determining the pose and position of the hands to render them accordingly while interacting within the virtual environment. Virtual reality applications often use realistic male or female hands, mimic robotic hands, or cartoon hands. However, it is unclear how users perceive different hand styles. We conducted a study with 14 male and 14 female participants in virtual reality to investigate the effect of gender on the perception of six different hands. Quantitative and qualitative results show that women perceive lower levels of presence while using male avatar hands and male perceive lower levels of presence using non-human avatar hands. While women dislike male hands, men accept and feel presence with avatar hands of both genders. Our results highlight the importance of considering the users' diversity when designing virtual reality experiences.",
"title": ""
},
{
"docid": "6fb416991c80cb94ad09bc1bb09f81c7",
"text": "Children with Autism Spectrum Disorder often require therapeutic interventions to support engagement in effective social interactions. In this paper, we present the results of a study conducted in three public schools that use an educational and behavioral intervention for the instruction of social skills in changing situational contexts. The results of this study led to the concept of interaction immediacy to help children maintain appropriate spatial boundaries, reply to conversation initiators, disengage appropriately at the end of an interaction, and identify potential communication partners. We describe design principles for Ubicomp technologies to support interaction immediacy and present an example design. The contribution of this work is twofold. First, we present an understanding of social skills in mobile and dynamic contexts. Second, we introduce the concept of interaction immediacy and show its effectiveness as a guiding principle for the design of Ubicomp applications.",
"title": ""
},
{
"docid": "b269bb721ca2a75fd6291295493b7af8",
"text": "This publication contains reprint articles for which IEEE does not hold copyright. Full text is not available on IEEE Xplore for these articles.",
"title": ""
},
{
"docid": "dc8b19649f217d7fde46bb458d186923",
"text": "Sophisticated technology is increasingly replacing human minds to perform complicated tasks in domains ranging from medicine to education to transportation. We investigated an important theoretical determinant of people's willingness to trust such technology to perform competently—the extent to which a nonhuman agent is anthropomorphized with a humanlike mind—in a domain of practical importance, autonomous driving. Participants using a driving simulator drove either a normal car, an autonomous vehicle able to control steering and speed, or a comparable autonomous vehicle augmented with additional anthropomorphic features—name, gender, and voice. Behavioral, physiological, and self-report measures revealed that participants trusted that the vehicle would perform more competently as it acquired more anthropomorphic features. Technology appears better able to perform its intended design when it seems to have a humanlike mind. These results suggest meaningful consequences of humanizing technology, and also offer insights into the inverse process of objectifying humans. Word Count: 148 Anthropomorphism Increases Trust, 3 Technology is an increasingly common substitute for humanity. Sophisticated machines now perform tasks that once required a thoughtful human mind, from grading essays to diagnosing cancer to driving a car. As engineers overcome design barriers to creating such technology, important psychological barriers that users will face when using this technology emerge. Perhaps most important, will people be willing to trust competent technology to replace a human mind, such as a teacher’s mind when grading essays, or a doctor’s mind when diagnosing cancer, or their own mind when driving a car? Our research tests one important theoretical determinant of trust in any nonhuman agent: anthropomorphism (Waytz, Cacioppo, & Epley, 2010). Anthropomorphism is a process of inductive inference whereby people attribute to nonhumans distinctively human characteristics, particularly the capacity for rational thought (agency) and conscious feeling (experience; Gray, Gray, & Wegner, 2007). Philosophical definitions of personhood focus on these mental capacities as essential to being human (Dennett, 1978; Locke, 1841/1997). Furthermore, studies examining people’s lay theories of humanness show that people define humanness in terms of emotions that implicate higher order mental process such as self-awareness and memory (e.g., humiliation, nostalgia; Leyens et al., 2000) and traits that involve cognition and emotion (e.g., analytic, insecure; Haslam, 2006). Anthropomorphizing a nonhuman does not simply involve attributing superficial human characteristics (e.g., a humanlike face or body) to it, but rather attributing essential human characteristics to the agent (namely a humanlike mind, capable of thinking and feeling). Trust is a multifaceted concept that can refer to belief that another will behave with benevolence, integrity, predictability, or competence (McKnight & Chervany, 2001). Our prediction that anthropomorphism will increase trust centers on this last component of trust in another's competence (akin to confidence) (Siegrist, Earle, & Gutscher, 2003; Twyman, Harvey, & Harries, Anthropomorphism Increases Trust, 4 2008). 
Just as a patient would trust a thoughtful doctor to diagnose cancer more than a thoughtless one, or would rely on mindful cab driver to navigate through rush hour traffic more than a mindless cab driver, this conceptualization of anthropomorphism predicts that people would trust easily anthropomorphized technology to perform its intended function more than seemingly mindless technology. An autonomous vehicle (one that that drives itself) for instance, should seem better able to navigate through traffic when it seems able to think and sense its surroundings than when it seems to be simply mindless machinery. Or a “warbot” intended to kill should seem more lethal and sinister when it appears capable of thinking and planning than when it seems to be simply a computer mindlessly following an operator’s instructions. The more technology seems to have humanlike mental capacities, the more people should trust it to perform its intended function competently, regardless of the valence of its intended function (Epley, Caruso, & Bazerman, 2006; Pierce, Kilduff, Galinsky, & Sivanathan, 2013). This prediction builds on the common association between people’s perceptions of others’ mental states and of competent action. Because mindful agents appear capable of controlling their own actions, people judge others to be more responsible for successful actions they perform with conscious awareness, foresight, and planning (Cushman, 2008; Malle & Knobe, 1997) than for actions they perform mindlessly (see Alicke, 2000; Shaver, 1985; Weiner, 1995). Attributing a humanlike mind to a nonhuman agent should therefore more make the agent seem better able to control its own actions, and therefore better able to perform its intended functions competently. Our prediction also advances existing research on the consequences of anthropomorphism by articulating the psychological processes by which anthropomorphism could affect trust in technology (Nass & Moon, 2000), and by both experimentally manipulating anthropomorphism as well as measuring it as a critical mediator. Some experiments have manipulated the humanlike appearance of robots and Anthropomorphism Increases Trust, 5 assessed measures indirectly related to trust. However, such studies have not measured whether such superficial manipulations actually increases the attribution of essential humanlike qualities to that agent (the attribution we predict is critical for trust in technology; Hancock, Billings, Schaeffer, Chen, De Visser, 2011), and therefore cannot explain factors found ad-hoc to moderate the apparent effect of anthropomorphism on trust (Pak, Fink, Price, Bass, & Sturre, 2012). Another study found that individual differences in anthropomorphism predicted differences in willingness to trust technology in hypothetical scenarios (Waytz et al., 2010), but did not manipulate anthropomorphism experimentally. Our experiment is therefore the first to test our theoretical model of how anthropomorphism affects trust in technology. We conducted our experiment in a domain of practical relevance: people’s willingness to trust an autonomous vehicle. Autonomous vehicles—cars that control their own steering and speed—are expected to account for 75% of vehicles on the road by 2040 (Newcomb, 2012). Employing these autonomous features means surrendering personal control of the vehicle and trusting technology to drive safely. 
We manipulated the ease with which a vehicle, approximated by a driving simulator, could be anthropomorphized by merely giving it independent agency, or by also giving it a name, gender, and a human voice. We predicted that independent agency alone would make the car seem more mindful than a normal car, and that adding further anthropomorphic qualities would make the vehicle seem even more mindful. More important, we predicted that these relative increases in anthropomorphism would increase physiological, behavioral, and psychological measures of trust in the vehicle’s ability to drive effectively. Because anthropomorphism increases trust in the agent’s ability to perform its job, we also predicted that increased anthropomorphism of an autonomous agent would mitigate blame for an agent’s involvement in an undesirable outcome. To test this, we implemented a virtually unavoidable Anthropomorphism Increases Trust, 6 accident during the driving simulation in which participants were struck by an oncoming car, an accident clearly caused by the other driver. We implemented this to maintain experimental control over participants’ experience because everyone in the autonomous vehicle conditions would get into the same accident, one clearly caused by the other driver. Indeed, when two people are potentially responsible for an outcome, the agent seen to be more competent tends to be credited for a success whereas the agent seen to be less competent tends to be blamed for a failure (Beckman, 1970; Wetzel, 1972). Because we predicted that anthropomorphism would increase trust in the vehicle’s competence, we also predicted that it would reduce blame for an accident clear caused by another vehicle. Experiment Method One hundred participants (52 female, Mage=26.39) completed this experiment using a National Advanced Driving Simulator. Once in the simulator, the experimenter attached physiological equipment to participants and randomly assigned them to condition: Normal, Agentic, or Anthropomorphic. Participants in the Normal condition drove the vehicle themselves, without autonomous features. Participants in the Agentic condition drove a vehicle capable of controlling its steering and speed (an “autonomous vehicle”). The experimenter followed a script describing the vehicle's features, suggesting when to use the autonomous features, and describing what was about to happen. Participants in the Anthropomorphic condition drove the same autonomous vehicle, but with additional anthropomorphic features beyond mere agency—the vehicle was referred to by name (Iris), was given a gender (female), and was given a voice through human audio files played at predetermined times throughout the course. The voice files followed the same script used by the experimenter in the Agentic condition, modified where necessary (See Supplemental Online Material [SOM]). Anthropomorphism Increases Trust, 7 All participants first completed a driving history questionnaire and a measure of dispositional anthropomorphism (Waytz et al., 2010). Scores on this measure did not vary significantly by condition, so we do not discuss them further. Participants in the Agentic and Anthropomorphic conditions then drove a short practice course to familiarize themselves with the car’s autonomous features. Participants coul",
"title": ""
},
{
"docid": "b4a5ebf335cc97db3790c9e2208e319d",
"text": "We examine whether conservative white males are more likely than are other adults in the U.S. general public to endorse climate change denial. We draw theoretical and analytical guidance from the identityprotective cognition thesis explaining the white male effect and from recent political psychology scholarship documenting the heightened system-justification tendencies of political conservatives. We utilize public opinion data from ten Gallup surveys from 2001 to 2010, focusing specifically on five indicators of climate change denial. We find that conservative white males are significantly more likely than are other Americans to endorse denialist views on all five items, and that these differences are even greater for those conservative white males who self-report understanding global warming very well. Furthermore, the results of our multivariate logistic regression models reveal that the conservative white male effect remains significant when controlling for the direct effects of political ideology, race, and gender as well as the effects of nine control variables. We thus conclude that the unique views of conservative white males contribute significantly to the high level of climate change denial in the United States. 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "93c2ed30659e6b9c2020866cd3670705",
"text": "Longitudinal melanonychia (LM) is a pigmented longitudinal band of the nail unit, which results from pigment deposition, generally melanin, in the nail plate. Such lesion is frequently observed in specific ethnic groups, such as Asians and African Americans, typically affecting multiple nails. When LM involves a single nail plate, it may be the sign of a benign lesion within the matrix, such as a melanocytic nevus, simple lentigo, or nail matrix melanocyte activation. However, the possibility of melanoma must be considered. Nail melanoma in children is exceptionally rare and only 2 cases have been reported in fairskinned Caucasian individuals.",
"title": ""
},
{
"docid": "a4037343fa0df586946d8034b0bf8a5b",
"text": "Security researchers are applying software reliability models to vulnerability data, in an attempt to model the vulnerability discovery process. I show that most current work on these vulnerability discovery models (VDMs) is theoretically unsound. I propose a standard set of definitions relevant to measuring characteristics of vulnerabilities and their discovery process. I then describe the theoretical requirements of VDMs and highlight the shortcomings of existing work, particularly the assumption that vulnerability discovery is an independent process.",
"title": ""
},
{
"docid": "821b6ce6e6d51e9713bb44c4c9bf8cf0",
"text": "Rapidly destructive arthritis (RDA) of the shoulder is a rare disease. Here, we report two cases, with different destruction patterns, which were most probably due to subchondral insufficiency fractures (SIFs). Case 1 involved a 77-year-old woman with right shoulder pain. Rapid destruction of both the humeral head and glenoid was seen within 1 month of the onset of shoulder pain. We diagnosed shoulder RDA and performed a hemiarthroplasty. Case 2 involved a 74-year-old woman with left shoulder pain. Humeral head collapse was seen within 5 months of pain onset, without glenoid destruction. Magnetic resonance imaging showed a bone marrow edema pattern with an associated subchondral low-intensity band, typical of SIF. Total shoulder arthroplasty was performed in this case. Shoulder RDA occurs as a result of SIF in elderly women; the progression of the joint destruction is more rapid in cases with SIFs of both the humeral head and the glenoid. Although shoulder RDA is rare, this disease should be included in the differential diagnosis of acute onset shoulder pain in elderly female patients with osteoporosis and persistent joint effusion.",
"title": ""
},
{
"docid": "91b49384769b178b300f2e3a4bd0b265",
"text": "The recently proposed self-ensembling methods have achieved promising results in deep semi-supervised learning, which penalize inconsistent predictions of unlabeled data under different perturbations. However, they only consider adding perturbations to each single data point, while ignoring the connections between data samples. In this paper, we propose a novel method, called Smooth Neighbors on Teacher Graphs (SNTG). In SNTG, a graph is constructed based on the predictions of the teacher model, i.e., the implicit self-ensemble of models. Then the graph serves as a similarity measure with respect to which the representations of \"similar\" neighboring points are learned to be smooth on the low-dimensional manifold. We achieve state-of-the-art results on semi-supervised learning benchmarks. The error rates are 9.89%, 3.99% for CIFAR-10 with 4000 labels, SVHN with 500 labels, respectively. In particular, the improvements are significant when the labels are fewer. For the non-augmented MNIST with only 20 labels, the error rate is reduced from previous 4.81% to 1.36%. Our method also shows robustness to noisy labels.",
"title": ""
},
{
"docid": "e28ab50c2d03402686cc9a465e1231e7",
"text": "Few-shot learning is challenging for learning algorithms that learn each task in isolation and from scratch. In contrast, meta-learning learns from many related tasks a meta-learner that can learn a new task more accurately and faster with fewer examples, where the choice of meta-learners is crucial. In this paper, we develop Meta-SGD, an SGD-like, easily trainable meta-learner that can initialize and adapt any differentiable learner in just one step, on both supervised learning and reinforcement learning. Compared to the popular meta-learner LSTM, Meta-SGD is conceptually simpler, easier to implement, and can be learned more efficiently. Compared to the latest meta-learner MAML, Meta-SGD has a much higher capacity by learning to learn not just the learner initialization, but also the learner update direction and learning rate, all in a single meta-learning process. Meta-SGD shows highly competitive performance for few-shot learning on regression, classification, and reinforcement learning.",
"title": ""
},
{
"docid": "392fc4decf7a474277ec0fe596e19145",
"text": "This paper proposes an approach to establish cooperative behavior within traffic scenarios involving only autonomously driving vehicles. The main idea is to employ principles of auction-based control to determine driving strategies by which the vehicles reach their driving goals, while adjusting their paths to each other and adhering to imposed constraints like traffic rules. Driving plans (bids) are repetitively negotiated among the control units of the vehicles (the auction) to obtain a compromise between separate (local) vehicle goals and the global objective to resolve the considered traffic scenario. The agreed driving plans serve as reference trajectories for local model-predictive controllers of the vehicles to realize the driving behavior. The approach is illustrated for a cooperative overtaking scenario comprising three vehicles.",
"title": ""
},
{
"docid": "695264db0ca1251ab0f63b04d41c68cd",
"text": "Reading comprehension tasks test the ability of models to process long-term context and remember salient information. Recent work has shown that relatively simple neural methods such as the Attention Sum-Reader can perform well on these tasks; however, these systems still significantly trail human performance. Analysis suggests that many of the remaining hard instances are related to the inability to track entity-references throughout documents. This work focuses on these hard entity tracking cases with two extensions: (1) additional entity features, and (2) training with a multi-task tracking objective. We show that these simple modifications improve performance both independently and in combination, and we outperform the previous state of the art on the LAMBADA dataset, particularly on difficult entity examples.",
"title": ""
},
{
"docid": "e9858b151a3f042f198184cda0917639",
"text": "Semantic parsing aims at mapping natural language to machine interpretable meaning representations. Traditional approaches rely on high-quality lexicons, manually-built templates, and linguistic features which are either domainor representation-specific. In this paper we present a general method based on an attention-enhanced encoder-decoder model. We encode input utterances into vector representations, and generate their logical forms by conditioning the output sequences or trees on the encoding vectors. Experimental results on four datasets show that our approach performs competitively without using hand-engineered features and is easy to adapt across domains and meaning representations.",
"title": ""
},
{
"docid": "d9605c1cde4c40d69c2faaea15eb466c",
"text": "A magnetically tunable ferrite-loaded substrate integrated waveguide (SIW) cavity resonator is presented and demonstrated. X-band cavity resonator is operated in the dominant mode and the ferrite slabs are loaded onto the side walls of the cavity where the value of magnetic field is highest. Measured results for single and double ferrite-loaded SIW cavity resonators are presented. Frequency tuning range of more than 6% and 10% for single and double ferrite slabs are obtained. Unloaded Q -factor of more than 200 is achieved.",
"title": ""
},
{
"docid": "28533f1b8aa1e6191efb818d4e93fb66",
"text": "Pelvic tilt is often quantified using the angle between the horizontal and a line connecting the anterior superior iliac spine (ASIS) and the posterior superior iliac spine (PSIS). Although this angle is determined by the balance of muscular and ligamentous forces acting between the pelvis and adjacent segments, it could also be influenced by variations in pelvic morphology. The primary objective of this anatomical study was to establish how such variation may affect the ASIS-PSIS measure of pelvic tilt. In addition, we also investigated how variability in pelvic landmarks may influence measures of innominate rotational asymmetry and measures of pelvic height. Thirty cadaver pelves were used for the study. Each specimen was positioned in a fixed anatomical reference position and the angle between the ASIS and PSIS measured bilaterally. In addition, side-to-side differences in the height of the innominate bone were recorded. The study found a range of values for the ASIS-PSIS of 0-23 degrees, with a mean of 13 and standard deviation of 5 degrees. Asymmetry of pelvic landmarks resulted in side-to-side differences of up to 11 degrees in ASIS-PSIS tilt and 16 millimeters in innominate height. These results suggest that variations in pelvic morphology may significantly influence measures of pelvic tilt and innominate rotational asymmetry.",
"title": ""
},
{
"docid": "f98b1b9808b3eb41f3d60f207854ec79",
"text": "The newly emerging event-based social networks (EBSNs) connect online and offline social interactions, offering a great opportunity to understand behaviors in the cyber-physical space. While existing efforts have mainly focused on investigating user behaviors in traditional social network services (SNS), this paper aims to exploit individual behaviors in EBSNs, which remains an unsolved problem. In particular, our method predicts activity attendance by discovering a set of factors that connect the physical and cyber spaces and influence individual's attendance of activities in EBSNs. These factors, including content preference, context (spatial and temporal) and social influence, are extracted using different models and techniques. We further propose a novel Singular Value Decomposition with Multi-Factor Neighborhood (SVD-MFN) algorithm to predict activity attendance by integrating the discovered heterogeneous factors into a single framework, in which these factors are fused through a neighborhood set. Experiments based on real-world data from Douban Events demonstrate that the proposed SVD-MFN algorithm outperforms the state-of-the-art prediction methods.",
"title": ""
},
{
"docid": "7a1f244aae5f28cd9fb2d5ba54113c28",
"text": "Next generation sequencing (NGS) technology has revolutionized genomic and genetic research. The pace of change in this area is rapid with three major new sequencing platforms having been released in 2011: Ion Torrent’s PGM, Pacific Biosciences’ RS and the Illumina MiSeq. Here we compare the results obtained with those platforms to the performance of the Illumina HiSeq, the current market leader. In order to compare these platforms, and get sufficient coverage depth to allow meaningful analysis, we have sequenced a set of 4 microbial genomes with mean GC content ranging from 19.3 to 67.7%. Together, these represent a comprehensive range of genome content. Here we report our analysis of that sequence data in terms of coverage distribution, bias, GC distribution, variant detection and accuracy. Sequence generated by Ion Torrent, MiSeq and Pacific Biosciences technologies displays near perfect coverage behaviour on GC-rich, neutral and moderately AT-rich genomes, but a profound bias was observed upon sequencing the extremely AT-rich genome of Plasmodium falciparum on the PGM, resulting in no coverage for approximately 30% of the genome. We analysed the ability to call variants from each platform and found that we could call slightly more variants from Ion Torrent data compared to MiSeq data, but at the expense of a higher false positive rate. Variant calling from Pacific Biosciences data was possible but higher coverage depth was required. Context specific errors were observed in both PGM and MiSeq data, but not in that from the Pacific Biosciences platform. All three fast turnaround sequencers evaluated here were able to generate usable sequence. However there are key differences between the quality of that data and the applications it will support.",
"title": ""
}
] |
scidocsrr
|
94057608623a7644e71b477a75cdfeda
|
Exponentiated Gradient Exploration for Active Learning
|
[
{
"docid": "cce513c48e630ab3f072f334d00b67dc",
"text": "We consider two algorithms for on-line prediction based on a linear model. The algorithms are the well-known gradient descent (GD) algorithm and a new algorithm, which we call EG. They both maintain a weight vector using simple updates. For the GD algorithm, the update is based on subtracting the gradient of the squared error made on a prediction. The EG algorithm uses the components of the gradient in the exponents of factors that are used in updating the weight vector multiplicatively. We present worst-case loss bounds for EG and compare them to previously known bounds for the GD algorithm. The bounds suggest that the losses of the algorithms are in general incomparable, but EG has a much smaller loss if only few components of the input are relevant for the predictions. We have performed experiments which show that our worst-case upper bounds are quite tight already on simple artificial data. ] 1997 Academic Press",
"title": ""
}
] |
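Illustrative sketch (not the paper's own code): the GD and EG updates that the passage above describes for on-line linear prediction with squared loss can be written as below. The learning rate value and the simplex normalization used in the EG step are assumptions of this sketch.

```python
import numpy as np

def gd_update(w, x, y, eta=0.05):
    """Gradient descent: subtract the gradient of the squared prediction error."""
    grad = 2.0 * (np.dot(w, x) - y) * x
    return w - eta * grad

def eg_update(w, x, y, eta=0.05):
    """Exponentiated gradient: multiply each weight by the exponentiated
    negative gradient component, then renormalize (weights stay positive)."""
    grad = 2.0 * (np.dot(w, x) - y) * x
    w_new = w * np.exp(-eta * grad)
    return w_new / w_new.sum()

# Toy run: only one of 20 input components is relevant to the target.
rng = np.random.default_rng(0)
d = 20
target = np.zeros(d)
target[3] = 1.0
w_gd = np.full(d, 1.0 / d)
w_eg = np.full(d, 1.0 / d)
for _ in range(500):
    x = rng.normal(size=d)
    y = float(np.dot(target, x))
    w_gd = gd_update(w_gd, x, y)
    w_eg = eg_update(w_eg, x, y)
```

On sparse targets like this, the multiplicative EG update tends to concentrate weight on the relevant component, which is the regime in which the passage reports much smaller loss bounds for EG than for GD.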
[
{
"docid": "ac96b284847f58c7683df92e13157f40",
"text": "Falls are dangerous for the aged population as they can adversely affect health. Therefore, many fall detection systems have been developed. However, prevalent methods only use accelerometers to isolate falls from activities of daily living (ADL). This makes it difficult to distinguish real falls from certain fall-like activities such as sitting down quickly and jumping, resulting in many false positives. Body orientation is also used as a means of detecting falls, but it is not very useful when the ending position is not horizontal, e.g. falls happen on stairs. In this paper we present a novel fall detection system using both accelerometers and gyroscopes. We divide human activities into two categories: static postures and dynamic transitions. By using two tri-axial accelerometers at separate body locations, our system can recognize four kinds of static postures: standing, bending, sitting, and lying. Motions between these static postures are considered as dynamic transitions. Linear acceleration and angular velocity are measured to determine whether motion transitions are intentional. If the transition before a lying posture is not intentional, a fall event is detected. Our algorithm, coupled with accelerometers and gyroscopes, reduces both false positives and false negatives, while improving fall detection accuracy. In addition, our solution features low computational cost and real-time response.",
"title": ""
},
{
"docid": "6cbd51bbef3b56df6d97ec7b4348cd94",
"text": "This study reviews human clinical experience to date with several synthetic cannabinoids, including nabilone, levonantradol, ajulemic acid (CT3), dexanabinol (HU-211), HU-308, and SR141716 (Rimonabant®). Additionally, the concept of “clinical endogenous cannabinoid deficiency” is explored as a possible factor in migraine, idiopathic bowel disease, fibromyalgia and other clinical pain states. The concept of analgesic synergy of cannabinoids and opioids is addressed. A cannabinoid-mediated improvement in night vision at the retinal level is discussed, as well as its potential application to treatment of retinitis pigmentosa and other conditions. Additionally noted is the role of cannabinoid treatment in neuroprotection and its application to closed head injury, cerebrovascular accidents, and CNS degenerative diseases including Alzheimer, Huntington, Parkinson diseases and ALS. Excellent clinical results employing cannabis based medicine extracts (CBME) in spasticity and spasms of MS suggests extension of such treatment to other spasmodic and dystonic conditions. Finally, controversial areas of cannabinoid treatment in obstetrics, gynecology and pediatrics are addressed along with a rationale for such interventions. [Article copies available for a fee from The Haworth Document Delivery Service: 1-800-HAWORTH. E-mail address: <docdelivery@haworthpress. com> Website: <http://www.HaworthPress.com> 2003 by The Haworth Press, Inc. All rights reserved.]",
"title": ""
},
{
"docid": "56a7243414824a2e4ab3993dc3a90fbe",
"text": "The primary objectives of periodontal therapy are to maintain and to obtain health and integrity of the insertion apparatus and to re-establish esthetics by means of the quantitative and qualitative restoration of the gingival margin. Esthetics can be considered essential to the success of any dental procedure. However, in cleft lip and palate patients gingival esthetics do not play a relevant role, since most patients present little gingiva exposure (Mikami, 1990). The treatment protocol for cleft palate patients is complex and often requires a myriad of surgical and rehabilitative procedures that last until adulthood. In order to rehabilitate these patients and provide them with adequate physical and psychological conditions for a good quality of life, plastic surgery has been taking place since the 19th century, with the development of new techniques. By the age of six months the patients have undergone lip repair procedures (Bill, 1956; Jolleys, 1954), followed by palatoplasty at the age of 1218 months. As a consequence of these surgical interventions, the formation of innumerous scars and fibrous tissue in the anterior region may cause some sequels, such as orofacial growth alterations (Quarta and Koch, 1989; Ozawa, 2001), a shallow vestibule with lack of attached gingiva and gingival margin mobility (Falcone, 1966). A shallow vestibule in the cleft lip and palate patient is associated with the contraction of the upper lip during healing (Iino et al, 2001), which causes deleterious effects on growth, facial expression, speech, orthodontic and prosthetic treatment problems, diminished keratinized gingiva, bone graft resorption and changes in the upper lip muscle pattern. The surgical protocol at the Hospital for Rehabilitation of Craniofacial Anomalies (HRCA) in Bauru consists of carrying out primary surgeries (cheiloplasty and palatoplasty) during the first months of Periodontal Health Re-Establishment in Cleft Lip and Palate Patients through Vestibuloplasty Associated with Free Gingival Graft",
"title": ""
},
{
"docid": "ec58915a7fd321bcebc748a369153509",
"text": "For wireless charging of electric vehicle (EV) batteries, high-frequency magnetic fields are generated from magnetically coupled coils. The large air-gap between two coils may cause high leakage of magnetic fields and it may also lower the power transfer efficiency (PTE). For the first time, in this paper, we propose a new set of coil design formulas for high-efficiency and low harmonic currents and a new design procedure for low leakage of magnetic fields for high-power wireless power transfer (WPT) system. Based on the proposed design procedure, a pair of magnetically coupled coils with magnetic field shielding for a 1-kW-class golf-cart WPT system is optimized via finite-element simulation and the proposed design formulas. We built a 1-kW-class wireless EV charging system for practical measurements of the PTE, the magnetic field strength around the golf cart, and voltage/current spectrums. The fabricated system has achieved a PTE of 96% at the operating frequency of 20.15 kHz with a 156-mm air gap between the coils. At the same time, the highest magnetic field strength measured around the golf cart is 19.8 mG, which is far below the relevant electromagnetic field safety guidelines (ICNIRP 1998/2010). In addition, the third harmonic component of the measured magnetic field is 39 dB lower than the fundamental component. These practical measurement results prove the effectiveness of the proposed coil design formulas and procedure of a WPT system for high-efficiency and low magnetic field leakage.",
"title": ""
},
{
"docid": "a5001e03007f3fd166e15db37dcd3bc7",
"text": "Instrumental learning involves corticostriatal circuitry and the dopaminergic system. This system is typically modeled in the reinforcement learning (RL) framework by incrementally accumulating reward values of states and actions. However, human learning also implicates prefrontal cortical mechanisms involved in higher level cognitive functions. The interaction of these systems remains poorly understood, and models of human behavior often ignore working memory (WM) and therefore incorrectly assign behavioral variance to the RL system. Here we designed a task that highlights the profound entanglement of these two processes, even in simple learning problems. By systematically varying the size of the learning problem and delay between stimulus repetitions, we separately extracted WM-specific effects of load and delay on learning. We propose a new computational model that accounts for the dynamic integration of RL and WM processes observed in subjects' behavior. Incorporating capacity-limited WM into the model allowed us to capture behavioral variance that could not be captured in a pure RL framework even if we (implausibly) allowed separate RL systems for each set size. The WM component also allowed for a more reasonable estimation of a single RL process. Finally, we report effects of two genetic polymorphisms having relative specificity for prefrontal and basal ganglia functions. Whereas the COMT gene coding for catechol-O-methyl transferase selectively influenced model estimates of WM capacity, the GPR6 gene coding for G-protein-coupled receptor 6 influenced the RL learning rate. Thus, this study allowed us to specify distinct influences of the high-level and low-level cognitive functions on instrumental learning, beyond the possibilities offered by simple RL models.",
"title": ""
},
{
"docid": "6300f94dbfa58583e15741e5c86aa372",
"text": "In this paper, we study the problem of retrieving a ranked list of top-N items to a target user in recommender systems. We first develop a novel preference model by distinguishing different rating patterns of users, and then apply it to existing collaborative filtering (CF) algorithms. Our preference model, which is inspired by a voting method, is well-suited for representing qualitative user preferences. In particular, it can be easily implemented with less than 100 lines of codes on top of existing CF algorithms such as user-based, item-based, and matrix-factorizationbased algorithms. When our preference model is combined to three kinds of CF algorithms, experimental results demonstrate that the preference model can improve the accuracy of all existing CF algorithms such as ATOP and NDCG@25 by 3%–24% and 6%–98%, respectively.",
"title": ""
},
{
"docid": "cd42f9eba7e1018f8a21c8830400af59",
"text": "This chapter proposes a conception of lexical meaning as use-potential, in contrast to prevailing atomistic and reificational views. The issues are illustrated on the example of spatial expressions, pre-eminently prepositions. It is argued that the dichotomy between polysemy and semantic generality is a false one, with expressions occupying points on a continuum from full homonymy to full monosemy, and with typical cases of polysemy falling in between. The notion of use-potential is explored in connectionist models of spatial categorization. Some possible objections to the use-potential approach are also addressed.",
"title": ""
},
{
"docid": "c91cb54598965e1111020ab70f9fbe94",
"text": "This paper proposes a parameter estimation method for doubly-fed induction generators (DFIGs) in variable-speed wind turbine systems (WTS). The proposed method employs an extended Kalman filter (EKF) for estimation of all electrical parameters of the DFIG, i.e., the stator and rotor resistances, the leakage inductances of stator and rotor, and the mutual inductance. The nonlinear state space model of the DFIG is derived and the design procedure of the EKF is described. The observability matrix of the linearized DFIG model is computed and the observability is checked online for different operation conditions. The estimation performance of the EKF is illustrated by simulation results. The estimated parameters are plotted against their actual values. The estimation performance of the EKF is also tested under variations of the DFIG parameters to investigate the estimation accuracy for changing parameters.",
"title": ""
},
{
"docid": "7f9b9bef62aed80a918ef78dcd15fb2a",
"text": "Transferring image-based object detectors to domain of videos remains a challenging problem. Previous efforts mostly exploit optical flow to propagate features across frames, aiming to achieve a good trade-off between performance and computational complexity. However, introducing an extra model to estimate optical flow would significantly increase the overall model size. The gap between optical flow and high-level features can hinder it from establishing the spatial correspondence accurately. Instead of relying on optical flow, this paper proposes a novel module called Progressive Sparse Local Attention (PSLA), which establishes the spatial correspondence between features across frames in a local region with progressive sparse strides and uses the correspondence to propagate features. Based on PSLA, Recursive Feature Updating (RFU) and Dense feature Transforming (DFT) are introduced to model temporal appearance and enrich feature representation respectively. Finally, a novel framework for video object detection is proposed. Experiments on ImageNet VID are conducted. Our framework achieves a state-of-the-art speedaccuracy trade-off with significantly reduced model capacity.",
"title": ""
},
{
"docid": "4dd0d34f6b67edee60f2e6fae5bd8dd9",
"text": "Virtual learning environments facilitate online learning, generating and storing large amounts of data during the learning/teaching process. This stored data enables extraction of valuable information using data mining. In this article, we present a systematic mapping, containing 42 papers, where data mining techniques are applied to predict students performance using Moodle data. Results show that decision trees are the most used classification approach. Furthermore, students interactions in forums are the main Moodle attribute analyzed by researchers.",
"title": ""
},
{
"docid": "67755a3dd06b09f458d1ee013e18c8ef",
"text": "Spiking neural networks are naturally asynchronous and use pulses to carry information. In this paper, we consider implementing such networks on a digital chip. We used an event-based simulator and we started from a previously established simulation, which emulates an analog spiking neural network, that can extract complex and overlapping, temporally correlated features. We modified this simulation to allow an easier integration in an embedded digital implementation. We first show that a four bits synaptic weight resolution is enough to achieve the best performance, although the network remains functional down to a 2 bits weight resolution. Then we show that a linear leak could be implemented to simplify the neurons leakage calculation. Finally, we demonstrate that an order-based STDP with a fixed number of potentiated synapses as low as 200 is efficient for features extraction. A simulation including these modifications, which lighten and increase the efficiency of digital spiking neural network implementation shows that the learning behavior is not affected, with a recognition rate of 98% in a cars trajectories detection application.",
"title": ""
},
{
"docid": "af98839cc3e28820c8d79403d58d903a",
"text": "Annotating the increasing amounts of user-contributed images in a personalized manner is in great demand. However, this demand is largely ignored by the mainstream of automated image annotation research. In this paper we aim for personalizing automated image annotation by jointly exploiting personalized tag statistics and content-based image annotation. We propose a cross-entropy based learning algorithm which personalizes a generic annotation model by learning from a user's multimedia tagging history. Using cross-entropy-minimization based Monte Carlo sampling, the proposed algorithm optimizes the personalization process in terms of a performance measurement which can be flexibly chosen. Automatic image annotation experiments with 5,315 realistic users in the social web show that the proposed method compares favorably to a generic image annotation method and a method using personalized tag statistics only. For 4,442 users the performance improves, where for 1,088 users the absolute performance gain is at least 0.05 in terms of average precision. The results show the value of the proposed method.",
"title": ""
},
{
"docid": "e4ce06c8e1dba5f9ec537dc137acf3ec",
"text": "Hemangiomas are relatively common benign proliferative lesion of vascular tissue origin. They are often present at birth and may become more apparent throughout life. They are seen on facial skin, tongue, lips, buccal mucosa and palate as well as muscles. Hemangiomas occur more common in females than males. This case report presents a case of capillary hemangioma in maxillary anterior region in a 10-year-old boy. How to cite this article: Satish V, Bhat M, Maganur PC, Shah P, Biradar V. Capillary Hemangioma in Maxillary Anterior Region: A Case Report. Int J Clin Pediatr Dent 2014;7(2):144-147.",
"title": ""
},
{
"docid": "a6f9dc745682efb871e338b63c0cbbc4",
"text": "Sparse signal representation, analysis, and sensing have received a lot of attention in recent years from the signal processing, optimization, and learning communities. On one hand, learning overcomplete dictionaries that facilitate a sparse representation of the data as a liner combination of a few atoms from such dictionary leads to state-of-the-art results in image and video restoration and classification. On the other hand, the framework of compressed sensing (CS) has shown that sparse signals can be recovered from far less samples than those required by the classical Shannon-Nyquist Theorem. The samples used in CS correspond to linear projections obtained by a sensing projection matrix. It has been shown that, for example, a nonadaptive random sampling matrix satisfies the fundamental theoretical requirements of CS, enjoying the additional benefit of universality. On the other hand, a projection sensing matrix that is optimally designed for a certain class of signals can further improve the reconstruction accuracy or further reduce the necessary number of samples. In this paper, we introduce a framework for the joint design and optimization, from a set of training images, of the nonparametric dictionary and the sensing matrix. We show that this joint optimization outperforms both the use of random sensing matrices and those matrices that are optimized independently of the learning of the dictionary. Particular cases of the proposed framework include the optimization of the sensing matrix for a given dictionary as well as the optimization of the dictionary for a predefined sensing environment. The presentation of the framework and its efficient numerical optimization is complemented with numerous examples on classical image datasets.",
"title": ""
},
{
"docid": "ffa974993a412ddba571e65f8b87f7df",
"text": "Synthetic gene switches are basic building blocks for the construction of complex gene circuits that transform mammalian cells into useful cell-based machines for next-generation biotechnological and biomedical applications. Ligand-responsive gene switches are cellular sensors that are able to process specific signals to generate gene product responses. Their involvement in complex gene circuits results in sophisticated circuit topologies that are reminiscent of electronics and that are capable of providing engineered cells with the ability to memorize events, oscillate protein production, and perform complex information-processing tasks. Microencapsulated mammalian cells that are engineered with closed-loop gene networks can be implanted into mice to sense disease-related input signals and to process this information to produce a custom, fine-tuned therapeutic response that rebalances animal metabolism. Progress in gene circuit design, in combination with recent breakthroughs in genome engineering, may result in tailored engineered mammalian cells with great potential for future cell-based therapies.",
"title": ""
},
{
"docid": "cd977d0e24fd9e26e90f2cf449141842",
"text": "Several leadership and ethics scholars suggest that the transformational leadership process is predicated on a divergent set of ethical values compared to transactional leadership. Theoretical accounts declare that deontological ethics should be associated with transformational leadership while transactional leadership is likely related to teleological ethics. However, very little empirical research supports these claims. Furthermore, despite calls for increasing attention as to how leaders influence their followers’ perceptions of the importance of ethics and corporate social responsibility (CSR) for organizational effectiveness, no empirical study to date has assessed the comparative impact of transformational and transactional leadership styles on follower CSR attitudes. Data from 122 organizational leaders and 458 of their followers indicated that leader deontological ethical values (altruism, universal rights, Kantian principles, etc.) were strongly associated with follower ratings of transformational leadership, while leader teleological ethical values (utilitarianism) were related to follower ratings of transactional leadership. As predicted, only transformational leadership was associated with follower beliefs in the stakeholder view of CSR. Implications for the study and practice of ethical leadership, future research directions, and management education are discussed.",
"title": ""
},
{
"docid": "9078698db240725e1eb9d1f088fb05f4",
"text": "Broadcasting is a common operation in a network to resolve many issues. In a mobile ad hoc network (MANET) in particular, due to host mobility, such operations are expected to be executed more frequently (such as finding a route to a particular host, paging a particular host, and sending an alarm signal). Because radio signals are likely to overlap with others in a geographical area, a straightforward broadcasting by flooding is usually very costly and will result in serious redundancy, contention, and collision, to which we call the broadcast storm problem. In this paper, we identify this problem by showing how serious it is through analyses and simulations. We propose several schemes to reduce redundant rebroadcasts and differentiate timing of rebroadcasts to alleviate this problem. Simulation results are presented, which show different levels of improvement over the basic flooding approach.",
"title": ""
},
{
"docid": "e541ae262655b7f5affefb32ce9267ee",
"text": "Internet of Things (IoT) is a revolutionary technology for the modern society. IoT can connect every surrounding objects for various applications like security, medical fields, monitoring and other industrial applications. This paper considers the application of IoT in the field of medicine. IoT in E-medicine can take the advantage of emerging technologies to provide immediate treatment to the patient as well as monitors and keeps track of health record for healthy person. IoT then performs complex computations on these collected data and can provide health related advice. Though IoT can provide a cost effective medical services to any people of all age groups, there are several key issues that need to be addressed. System security, IoT interoperability, dynamic storage facility and unified access mechanisms are some of the many fundamental issues associated with IoT. This paper proposes a system level design solution for security and flexibility aspect of IoT. In this paper, the functional components are bound in security function group which ensures the management of privacy and secure operation of the system. The security function group comprises of components which offers secure communication using Ciphertext-Policy Attribute-Based Encryption (CP-ABE). Since CP-ABE are delegated to unconstrained devices with the assumption that these devices are trusted, the producer encrypts data using AES and the ABE scheme is protected through symmetric key solutions.",
"title": ""
},
{
"docid": "eaa2ed7e15a3b0a3ada381a8149a8214",
"text": "This paper describes a new robust regular polygon detector. The regular polygon transform is posed as a mixture of regular polygons in a five dimensional space. Given the edge structure of an image, we derive the a posteriori probability for a mixture of regular polygons, and thus the probability density function for the appearance of a mixture of regular polygons. Likely regular polygons can be isolated quickly by discretising and collapsing the search space into three dimensions. The remaining dimensions may be efficiently recovered subsequently using maximum likelihood at the locations of the most likely polygons in the subspace. This leads to an efficient algorithm. Also the a posteriori formulation facilitates inclusion of additional a priori information leading to real-time application to road sign detection. The use of gradient information also reduces noise compared to existing approaches such as the generalised Hough transform. Results are presented for images with noise to show stability. The detector is also applied to two separate applications: real-time road sign detection for on-line driver assistance; and feature detection, recovering stable features in rectilinear environments.",
"title": ""
},
{
"docid": "171e9eef8a23f5fdf05ba61a56415130",
"text": "Human moral judgment depends critically on “theory of mind,” the capacity to represent the mental states of agents. Recent studies suggest that the right TPJ (RTPJ) and, to lesser extent, the left TPJ (LTPJ), the precuneus (PC), and the medial pFC (MPFC) are robustly recruited when participants read explicit statements of an agent's beliefs and then judge the moral status of the agent's action. Real-world interactions, by contrast, often require social partners to infer each other's mental states. The current study uses fMRI to probe the role of these brain regions in supporting spontaneous mental state inference in the service of moral judgment. Participants read descriptions of a protagonist's action and then either (i) “moral” facts about the action's effect on another person or (ii) “nonmoral” facts about the situation. The RTPJ, PC, and MPFC were recruited selectively for moral over nonmoral facts, suggesting that processing moral stimuli elicits spontaneous mental state inference. In a second experiment, participants read the same scenarios, but explicit statements of belief preceded the facts: Protagonists believed their actions would cause harm or not. The response in the RTPJ, PC, and LTPJ was again higher for moral facts but also distinguished between neutral and negative outcomes. Together, the results illuminate two aspects of theory of mind in moral judgment: (1) spontaneous belief inference and (2) stimulus-driven belief integration.",
"title": ""
}
] |
scidocsrr
|
214d3555055146bd6209a393b734d2d6
|
Stress and multitasking in everyday college life: an empirical study of online activity
|
[
{
"docid": "ed34383cada585951e1dcc62445d08c2",
"text": "The increasing volume of e-mail and other technologically enabled communications are widely regarded as a growing source of stress in people’s lives. Yet research also suggests that new media afford people additional flexibility and control by enabling them to communicate from anywhere at any time. Using a combination of quantitative and qualitative data, this paper builds theory that unravels this apparent contradiction. As the literature would predict, we found that the more time people spent handling e-mail, the greater was their sense of being overloaded, and the more e-mail they processed, the greater their perceived ability to cope. Contrary to assumptions of prior studies, we found no evidence that time spent working mediates e-mail-related overload. Instead, e-mail’s material properties entwined with social norms and interpretations in a way that led informants to single out e-mail as a cultural symbol of the overload they experience in their lives. Moreover, by serving as a symbol, e-mail distracted people from recognizing other sources of overload in their work lives. Our study deepens our understanding of the impact of communication technologies on people’s lives and helps untangle those technologies’ seemingly contradictory influences.",
"title": ""
}
] |
[
{
"docid": "fe0587c51c4992aa03f28b18f610232f",
"text": "We show how to find sufficiently small integer solutions to a polynomial in a single variable modulo N, and to a polynomial in two variables over the integers. The methods sometimes extend to more variables. As applications: RSA encryption with exponent 3 is vulnerable if the opponent knows two-thirds of the message, or if two messages agree over eight-ninths of their length; and we can find the factors of N=PQ if we are given the high order $\\frac{1}{4} \\log_2 N$ bits of P.",
"title": ""
},
{
"docid": "124fa48e1e842f2068a8fb55a2b8bb8e",
"text": "We present an augmented reality application for mechanics education. It utilizes a recent physics engine developed for the PC gaming market to simulate physical experiments in the domain of mechanics in real time. Students are enabled to actively build own experiments and study them in a three-dimensional virtual world. A variety of tools are provided to analyze forces, mass, paths and other properties of objects before, during and after experiments. Innovative teaching content is presented that exploits the strengths of our immersive virtual environment. PhysicsPlayground serves as an example of how current technologies can be combined to deliver a new quality in physics education.",
"title": ""
},
{
"docid": "5339554b6f753b69b5ace705af0263cd",
"text": "We explore several oversampling techniques for an imbalanced multi-label classification problem, a setting often encountered when developing models for Computer-Aided Diagnosis (CADx) systems. While most CADx systems aim to optimize classifiers for overall accuracy without considering the relative distribution of each class, we look into using synthetic sampling to increase perclass performance when predicting the degree of malignancy. Using low-level image features and a random forest classifier, we show that using synthetic oversampling techniques increases the sensitivity of the minority classes by an average of 7.22% points, with as much as a 19.88% point increase in sensitivity for a particular minority class. Furthermore, the analysis of low-level image feature distributions for the synthetic nodules reveals that these nodules can provide insights on how to preprocess image data for better classification performance or how to supplement the original datasets when more data acquisition is feasible.",
"title": ""
},
{
"docid": "8183fe0c103e2ddcab5b35549ed8629f",
"text": "The performance of Douglas-Rachford splitting and the alternating direction method of multipliers (ADMM) (i.e. Douglas-Rachford splitting on the dual problem) are sensitive to conditioning of the problem data. For a restricted class of problems that enjoy a linear rate of convergence, we show in this paper how to precondition the optimization data to optimize a bound on that rate. We also generalize the preconditioning methods to problems that do not satisfy all assumptions needed to guarantee a linear convergence. The efficiency of the proposed preconditioning is confirmed in a numerical example, where improvements of more than one order of magnitude are observed compared to when no preconditioning is used.",
"title": ""
},
{
"docid": "25a7f23c146add12bfab3f1fc497a065",
"text": "One of the greatest puzzles of human evolutionary history concerns the how and why of the transition from small-scale, ‘simple’ societies to large-scale, hierarchically complex ones. This paper reviews theoretical approaches to resolving this puzzle. Our discussion integrates ideas and concepts from evolutionary biology, anthropology, and political science. The evolutionary framework of multilevel selection suggests that complex hierarchies can arise in response to selection imposed by intergroup conflict (warfare). The logical coherency of this theory has been investigated with mathematical models, and its predictions were tested empirically by constructing a database of the largest territorial states in the world (with the focus on the preindustrial era).",
"title": ""
},
{
"docid": "f9580093dcf61a9d6905265cfb3a0d32",
"text": "The rapid adoption of electronic health records (EHR) provides a comprehensive source for exploratory and predictive analytic to support clinical decision-making. In this paper, we investigate how to utilize EHR to tailor treatments to individual patients based on their likelihood to respond to a therapy. We construct a heterogeneous graph which includes two domains (patients and drugs) and encodes three relationships (patient similarity, drug similarity, and patient-drug prior associations). We describe a novel approach for performing a label propagation procedure to spread the label information representing the effectiveness of different drugs for different patients over this heterogeneous graph. The proposed method has been applied on a real-world EHR dataset to help identify personalized treatments for hypercholesterolemia. The experimental results demonstrate the effectiveness of the approach and suggest that the combination of appropriate patient similarity and drug similarity analytics could lead to actionable insights for personalized medicine. Particularly, by leveraging drug similarity in combination with patient similarity, our method could perform well even on new or rarely used drugs for which there are few records of known past performance.",
"title": ""
},
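Illustrative sketch only: the label-propagation step described in the passage above can be approximated by the generic iteration F ← αSF + (1−α)Y over a similarity graph. The matrix names, the value of α, and the iteration count are assumptions of this sketch, not the paper's exact construction over its heterogeneous patient–drug graph.

```python
import numpy as np

def propagate_labels(S, Y, alpha=0.8, iters=50):
    """Generic label propagation over a similarity graph.

    S: n x n row-normalized similarity matrix (graph weights).
    Y: n x c initial label scores (e.g., known drug effectiveness per patient).
    Repeats F <- alpha * S @ F + (1 - alpha) * Y and returns the final scores.
    """
    S = np.asarray(S, dtype=np.float64)
    Y = np.asarray(Y, dtype=np.float64)
    F = Y.copy()
    for _ in range(iters):
        F = alpha * S @ F + (1.0 - alpha) * Y
    return F
```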
{
"docid": "733f5029329072adf5635f0b4d0ad1cb",
"text": "We present a new approach to scalable training of deep learning machines by incremental block training with intra-block parallel optimization to leverage data parallelism and blockwise model-update filtering to stabilize learning process. By using an implementation on a distributed GPU cluster with an MPI-based HPC machine learning framework to coordinate parallel job scheduling and collective communication, we have trained successfully deep bidirectional long short-term memory (LSTM) recurrent neural networks (RNNs) and fully-connected feed-forward deep neural networks (DNNs) for large vocabulary continuous speech recognition on two benchmark tasks, namely 309-hour Switchboard-I task and 1,860-hour \"Switch-board+Fisher\" task. We achieve almost linear speedup up to 16 GPU cards on LSTM task and 64 GPU cards on DNN task, with either no degradation or improved recognition accuracy in comparison with that of running a traditional mini-batch based stochastic gradient descent training on a single GPU.",
"title": ""
},
{
"docid": "7b681d1f200c0281beb161b71e6a3604",
"text": "Data quality remains a persistent problem in practice and a challenge for research. In this study we focus on the four dimensions of data quality noted as the most important to information consumers, namely accuracy, completeness, consistency, and timeliness. These dimensions are of particular concern for operational systems, and most importantly for data warehouses, which are often used as the primary data source for analyses such as classification, a general type of data mining. However, the definitions and conceptual models of these dimensions have not been collectively considered with respect to data mining in general or classification in particular. Nor have they been considered for problem complexity. Conversely, these four dimensions of data quality have only been indirectly addressed by data mining research. Using definitions and constructs of data quality dimensions, our research evaluates the effects of both data quality and problem complexity on generated data and tests the results in a real-world case. Six different classification outcomes selected from the spectrum of classification algorithms show that data quality and problem complexity have significant main and interaction effects. From the findings of significant effects, the economics of higher data quality are evaluated for a frequent application of classification and illustrated by the real-world case.",
"title": ""
},
{
"docid": "9a6ce56536585e54d3e15613b2fa1197",
"text": "This paper discusses the Urdu script characteristics, Urdu Nastaleeq and a simple but a novel and robust technique to recognize the printed Urdu script without a lexicon. Urdu being a family of Arabic script is cursive and complex script in its nature, the main complexity of Urdu compound/connected text is not its connections but the forms/shapes the characters change when it is placed at initial, middle or at the end of a word. The characters recognition technique presented here is using the inherited complexity of Urdu script to solve the problem. A word is scanned and analyzed for the level of its complexity, the point where the level of complexity changes is marked for a character, segmented and feeded to Neural Networks. A prototype of the system has been tested on Urdu text and currently achieves 93.4% accuracy on the average. Keywords— Cursive Script, OCR, Urdu.",
"title": ""
},
{
"docid": "02eccb2c0aeae243bf2023b25850890f",
"text": "In order to meet performance goals, it is widely agreed that vehicular ad hoc networks (VANETs) must rely heavily on node-to-node communication, thus allowing for malicious data traffic. At the same time, the easy access to information afforded by VANETs potentially enables the difficult security goal of data validation. We propose a general approach to evaluating the validity of VANET data. In our approach a node searches for possible explanations for the data it has collected based on the fact that malicious nodes may be present. Explanations that are consistent with the node's model of the VANET are scored and the node accepts the data as dictated by the highest scoring explanations. Our techniques for generating and scoring explanations rely on two assumptions: 1) nodes can tell \"at least some\" other nodes apart from one another and 2) a parsimony argument accurately reflects adversarial behavior in a VANET. We justify both assumptions and demonstrate our approach on specific VANETs.",
"title": ""
},
{
"docid": "c166ae2b9085cc4769438b1ca8ac8ee0",
"text": "Texts in web pages, images and videos contain important clues for information indexing and retrieval. Most existing text extraction methods depend on the language type and text appearance. In this paper, a novel and universal method of image text extraction is proposed. A coarse-to-fine text location method is implemented. Firstly, a multi-scale approach is adopted to locate texts with different font sizes. Secondly, projection profiles are used in location refinement step. Color-based k-means clustering is adopted in text segmentation. Compared to grayscale image which is used in most existing methods, color image is more suitable for segmentation based on clustering. It treats corner-points, edge-points and other points equally so that it solves the problem of handling multilingual text. It is demonstrated in experimental results that best performance is obtained when k is 3. Comparative experimental results on a large number of images show that our method is accurate and robust in various conditions.",
"title": ""
},
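Illustrative sketch only: the color-based k-means segmentation step described in the passage above could look like the following plain NumPy k-means over RGB pixel values with k=3. The function name, iteration count, and random initialization are assumptions of this sketch, not the authors' implementation.

```python
import numpy as np

def segment_colors(image_rgb, k=3, iters=20, seed=0):
    """Plain k-means over RGB pixel values; returns an H x W label map.

    k=3 follows the best-performing setting reported in the passage.
    Intended for small text regions (the pairwise distance step is O(N*k)).
    """
    image_rgb = np.asarray(image_rgb, dtype=np.float64)
    pixels = image_rgb.reshape(-1, 3)
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), size=k, replace=False)].copy()
    for _ in range(iters):
        # assign every pixel to its nearest color center
        dists = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # recompute centers, keeping the old center if a cluster is empty
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels.reshape(image_rgb.shape[:2])
```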
{
"docid": "77437d225dcc535fdbe5a7e66e15f240",
"text": "We are interested in automatic scene understanding from geometric cues. To this end, we aim to bring semantic segmentation in the loop of real-time reconstruction. Our semantic segmentation is built on a deep autoencoder stack trained exclusively on synthetic depth data generated from our novel 3D scene library, SynthCam3D. Importantly, our network is able to segment real world scenes without any noise modelling. We present encouraging preliminary results.",
"title": ""
},
{
"docid": "eb8fd891a197e5a028f1ca5eaf3988a3",
"text": "Information-centric networking (ICN) replaces the widely used host-centric networking paradigm in communication networks (e.g., Internet and mobile ad hoc networks) with an information-centric paradigm, which prioritizes the delivery of named content, oblivious of the contents’ origin. Content and client security, provenance, and identity privacy are intrinsic by design in the ICN paradigm as opposed to the current host centric paradigm where they have been instrumented as an after-thought. However, given its nascency, the ICN paradigm has several open security and privacy concerns. In this paper, we survey the existing literature in security and privacy in ICN and present open questions. More specifically, we explore three broad areas: 1) security threats; 2) privacy risks; and 3) access control enforcement mechanisms. We present the underlying principle of the existing works, discuss the drawbacks of the proposed approaches, and explore potential future research directions. In security, we review attack scenarios, such as denial of service, cache pollution, and content poisoning. In privacy, we discuss user privacy and anonymity, name and signature privacy, and content privacy. ICN’s feature of ubiquitous caching introduces a major challenge for access control enforcement that requires special attention. We review existing access control mechanisms including encryption-based, attribute-based, session-based, and proxy re-encryption-based access control schemes. We conclude the survey with lessons learned and scope for future work.",
"title": ""
},
{
"docid": "aed264522ed7ee1d3559fe4863760986",
"text": "A wireless network consisting of a large number of small sensors with low-power transceivers can be an effective tool for gathering data in a variety of environments. The data collected by each sensor is communicated through the network to a single processing center that uses all reported data to determine characteristics of the environment or detect an event. The communication or message passing process must be designed to conserve the limited energy resources of the sensors. Clustering sensors into groups, so that sensors communicate information only to clusterheads and then the clusterheads communicate the aggregated information to the processing center, may save energy. In this paper, we propose a distributed, randomized clustering algorithm to organize the sensors in a wireless sensor network into clusters. We then extend this algorithm to generate a hierarchy of clusterheads and observe that the energy savings increase with the number of levels in the hierarchy. Results in stochastic geometry are used to derive solutions for the values of parameters of our algorithm that minimize the total energy spent in the network when all sensors report data through the clusterheads to the processing center. KeywordsSensor Networks; Clustering Methods; Voronoi Tessellations; Algorithms.",
"title": ""
},
{
"docid": "d269ebe2bc6ab4dcaaac3f603037b846",
"text": "The contribution of power production by photovoltaic (PV) systems to the electricity supply is constantly increasing. An efficient use of the fluctuating solar power production will highly benefit from forecast information on the expected power production. This forecast information is necessary for the management of the electricity grids and for solar energy trading. This paper presents an approach to predict regional PV power output based on forecasts up to three days ahead provided by the European Centre for Medium-Range Weather Forecasts (ECMWF). Focus of the paper is the description and evaluation of the approach of irradiance forecasting, which is the basis for PV power prediction. One day-ahead irradiance forecasts for single stations in Germany show a rRMSE of 36%. For regional forecasts, forecast accuracy is increasing in dependency on the size of the region. For the complete area of Germany, the rRMSE amounts to 13%. Besides the forecast accuracy, also the specification of the forecast uncertainty is an important issue for an effective application. We present and evaluate an approach to derive weather specific prediction intervals for irradiance forecasts. The accuracy of PV power prediction is investigated in a case study.",
"title": ""
},
{
"docid": "101fbbe7760c3961f11da7f1e080e5f7",
"text": "Probiotic ingestion can be recommended as a preventative approach to maintaining the balance of the intestinal microflora and thereby enhance 'well-being'. Research into the use of probiotic intervention in specific illnesses and disorders has identified certain patient populations that may benefit from the approach. Undoubtedly, probiotics will vary in their efficacy and it may not be the case that the same results occur with all species. Those that prove most efficient will likely be strains that are robust enough to survive the harsh physico-chemical conditions present in the gastrointestinal tract. This includes gastric acid, bile secretions and competition with the resident microflora. A survey of the literature indicates positive results in over fifty human trials, with prevention/treatment of infections the most frequently reported output. In theory, increased levels of probiotics may induce a 'barrier' influence against common pathogens. Mechanisms of effect are likely to include the excretion of acids (lactate, acetate), competition for nutrients and gut receptor sites, immunomodulation and the formation of specific antimicrobial agents. As such, persons susceptible to diarrhoeal infections may benefit greatly from probiotic intake. On a more chronic basis, it has been suggested that some probiotics can help maintain remission in the inflammatory conditions, ulcerative colitis and pouchitis. They have also been suggested to repress enzymes responsible for genotoxin formation. Moreover, studies have suggested that probiotics are as effective as anti-spasmodic drugs in the alleviation of irritable bowel syndrome. The approach of modulating the gut flora for improved health has much relevance for the management of those with acute and chronic gut disorders. Other target groups could include those susceptible to nosocomial infections, as well as the elderly, who have an altered microflora, with a decreased number of beneficial microbial species. For the future, it is imperative that mechanistic interactions involved in probiotic supplementation be identified. Moreover, the survival issues associated with their establishment in the competitive gut ecosystem should be addressed. Here, the use of prebiotics in association with useful probiotics may be a worthwhile approach. A prebiotic is a dietary carbohydrate selectively metabolised by probiotics. Combinations of probiotics and prebiotics are known as synbiotics.",
"title": ""
},
{
"docid": "a2082f1b4154cd11e94eff18a016e91e",
"text": "1 During the summer of 2005, I discovered that there was not a copy of my dissertation available from the library at McGill University. I was, however, able to obtain a copy of it on microfilm from another university that had initially obtained it on interlibrary loan. I am most grateful to Vicki Galbraith who typed this version from that copy, which except for some minor variations due to differences in type size and margins (plus this footnote, of course) is identical to that on the microfilm. ACKNOWLEDGEMENTS 1 The writer is grateful to Dr. J. T. McIlhone, Associate General Director in Charge of English Classes of the Montreal Catholic School Board, for his kind cooperation in making subjects available, and to the Principals and French teachers of each high school for their assistance and cooperation during the testing programs. advice on the statistical analysis. In addition, the writer would like to express his appreciation to Mr. K. Tunstall for his assistance in the difficult task of interviewing the parents of each student. Finally, the writer would like to express his gratitude to Janet W. Gardner for her invaluable assistance in all phases of the research program.",
"title": ""
},
{
"docid": "1406e39d95505da3d7ab2b5c74c2e068",
"text": "Context: During requirements engineering, prioritization is performed to grade or rank requirements in their order of importance and subsequent implementation releases. It is a major step taken in making crucial decisions so as to increase the economic value of a system. Objective: The purpose of this study is to identify and analyze existing prioritization techniques in the context of the formulated research questions. Method: Search terms with relevant keywords were used to identify primary studies that relate requirements prioritization classified under journal articles, conference papers, workshops, symposiums, book chapters and IEEE bulletins. Results: 73 Primary studies were selected from the search processes. Out of these studies; 13 were journal articles, 35 were conference papers and 8 were workshop papers. Furthermore, contributions from symposiums as well as IEEE bulletins were 2 each while the total number of book chapters amounted to 13. Conclusion: Prioritization has been significantly discussed in the requirements engineering domain. However , it was generally discovered that, existing prioritization techniques suffer from a number of limitations which includes: lack of scalability, methods of dealing with rank updates during requirements evolution, coordination among stakeholders and requirements dependency issues. Also, the applicability of existing techniques in complex and real setting has not been reported yet.",
"title": ""
},
{
"docid": "0d93bf1b3b891a625daa987652ca1964",
"text": "In this paper, we show that a continuous spectrum of randomis ation exists, in which most existing tree randomisations are only operating around the tw o ends of the spectrum. That leaves a huge part of the spectrum largely unexplored. We propose a ba se le rner VR-Tree which generates trees with variable-randomness. VR-Trees are able to span f rom the conventional deterministic trees to the complete-random trees using a probabilistic pa rameter. Using VR-Trees as the base models, we explore the entire spectrum of randomised ensemb les, together with Bagging and Random Subspace. We discover that the two halves of the spectrum have their distinct characteristics; and the understanding of which allows us to propose a new appr o ch in building better decision tree ensembles. We name this approach Coalescence, which co ales es a number of points in the random-half of the spectrum. Coalescence acts as a committe e of “ xperts” to cater for unforeseeable conditions presented in training data. Coalescence is found to perform better than any single operating point in the spectrum, without the need to tune to a specific level of randomness. In our empirical study, Coalescence ranks top among the benchm arking ensemble methods including Random Forests, Random Subspace and C5 Boosting; and only Co alescence is significantly better than Bagging and Max-Diverse Ensemble among all the methods in the comparison. Although Coalescence is not significantly better than Random Forests , we have identified conditions under which one will perform better than the other.",
"title": ""
},
{
"docid": "a972fb96613715b1d17ac69fdd86c115",
"text": "Saliency detection has been widely studied to predict human fixations, with various applications in computer vision and image processing. For saliency detection, we argue in this paper that the state-of-the-art High Efficiency Video Coding (HEVC) standard can be used to generate the useful features in compressed domain. Therefore, this paper proposes to learn the video saliency model, with regard to HEVC features. First, we establish an eye tracking database for video saliency detection, which can be downloaded from https://github.com/remega/video_database. Through the statistical analysis on our eye tracking database, we find out that human fixations tend to fall into the regions with large-valued HEVC features on splitting depth, bit allocation, and motion vector (MV). In addition, three observations are obtained with the further analysis on our eye tracking database. Accordingly, several features in HEVC domain are proposed on the basis of splitting depth, bit allocation, and MV. Next, a kind of support vector machine is learned to integrate those HEVC features together, for video saliency detection. Since almost all video data are stored in the compressed form, our method is able to avoid both the computational cost on decoding and the storage cost on raw data. More importantly, experimental results show that the proposed method is superior to other state-of-the-art saliency detection methods, either in compressed or uncompressed domain.",
"title": ""
}
] |
scidocsrr
|
92a4cd0463da8ba8b11b8ddc5e4576c6
|
Project management and IT governance. Integrating PRINCE2 and ISO 38500
|
[
{
"docid": "70b9aad14b2fc75dccab0dd98b3d8814",
"text": "This paper describes the first phase of an ongoing program of research into theory and practice of IT governance. It conceptually explores existing IT governance literature and reveals diverse definitions of IT governance, that acknowledge its structures, control frameworks and/or processes. The definitions applied within the literature and the nature and breadth of discussion demonstrate a lack of a clear shared understanding of the term IT governance. This lack of clarity has the potential to confuse and possibly impede useful research in the field and limit valid cross-study comparisons of results. Using a content analysis approach, a number of existing diverse definitions are moulded into a \"definitive\" definition of IT governance and its usefulness is critically examined. It is hoped that this exercise will heighten awareness of the \"broad reach\" of the IT governance concept to assist researchers in the development of research projects and more effectively guide practitioners in the overall assessment of IT governance.",
"title": ""
},
{
"docid": "2eff84064f1d9d183eddc7e048efa8e6",
"text": "Rupinder Kaur, Dr. Jyotsna Sengupta Abstract— The software process model consists of a set of activities undertaken to design, develop and maintain software systems. A variety of software process models have been designed to structure, describe and prescribe the software development process. The software process models play a very important role in software development, so it forms the core of the software product. Software project failure is often devastating to an organization. Schedule slips, buggy releases and missing features can mean the end of the project or even financial ruin for a company. Oddly, there is disagreement over what it means for a project to fail. In this paper, discussion is done on current process models and analysis on failure of software development, which shows the need of new research.",
"title": ""
}
] |
[
{
"docid": "bc49930fa967b93ed1e39b3a45237652",
"text": "In gene expression data, a bicluster is a subset of the genes exhibiting consistent patterns over a subset of the conditions. We propose a new method to detect significant biclusters in large expression datasets. Our approach is graph theoretic coupled with statistical modelling of the data. Under plausible assumptions, our algorithm is polynomial and is guaranteed to find the most significant biclusters. We tested our method on a collection of yeast expression profiles and on a human cancer dataset. Cross validation results show high specificity in assigning function to genes based on their biclusters, and we are able to annotate in this way 196 uncharacterized yeast genes. We also demonstrate how the biclusters lead to detecting new concrete biological associations. In cancer data we are able to detect and relate finer tissue types than was previously possible. We also show that the method outperforms the biclustering algorithm of Cheng and Church (2000).",
"title": ""
},
{
"docid": "d029ce85b17e37abc93ab704fbef3a98",
"text": "Video super-resolution (SR) aims to generate a sequence of high-resolution (HR) frames with plausible and temporally consistent details from their low-resolution (LR) counterparts. The generation of accurate correspondence plays a significant role in video SR. It is demonstrated by traditional video SR methods that simultaneous SR of both images and optical flows can provide accurate correspondences and better SR results. However, LR optical flows are used in existing deep learning based methods for correspondence generation. In this paper, we propose an endto-end trainable video SR framework to super-resolve both images and optical flows. Specifically, we first propose an optical flow reconstruction network (OFRnet) to infer HR optical flows in a coarse-to-fine manner. Then, motion compensation is performed according to the HR optical flows. Finally, compensated LR inputs are fed to a superresolution network (SRnet) to generate the SR results. Extensive experiments demonstrate that HR optical flows provide more accurate correspondences than their LR counterparts and improve both accuracy and consistency performance. Comparative results on the Vid4 and DAVIS10 datasets show that our framework achieves the stateof-the-art performance. The codes will be released soon at: https://github.com/LongguangWang/SOF-VSR-SuperResolving-Optical-Flow-for-Video-Super-Resolution-.",
"title": ""
},
{
"docid": "9b1a4e27c5d387ef091fdb9140eb8795",
"text": "In this study I investigated the relation between normal heterosexual attraction and autogynephilia (a man's propensity to be sexually aroused by the thought or image of himself as a woman). The subjects were 427 adult male outpatients who reported histories of dressing in women's garments, of feeling like women, or both. The data were questionnaire measures of autogynephilia, heterosexual interest, and other psychosexual variables. As predicted, the highest levels of autogynephilia were observed at intermediate rather than high levels of heterosexual interest; that is, the function relating these variables took the form of an inverted U. This finding supports the hypothesis that autogynephilia is a misdirected type of heterosexual impulse, which arises in association with normal heterosexuality but also competes with it.",
"title": ""
},
{
"docid": "c3c3add0c42f3b98962c4682a72b1865",
"text": "This paper compares to investigate output characteristics according to a conventional and novel stator structure of axial flux permanent magnet (AFPM) motor for cooling fan drive system. Segmented core of stator has advantages such as easy winding and fast manufacture speed. However, a unit cost increase due to cutting off tooth tip to constant slot width. To solve the problem, this paper proposes a novel stator structure with three-step segmented core. The characteristics of AFPM were analyzed by time-stepping three dimensional finite element analysis (3D FEA) in two stator models, when stator cores are cutting off tooth tips from rectangular core and three step segmented core. Prototype motors were manufactured based on analysis results, and were tested as a motor.",
"title": ""
},
{
"docid": "3e5041c6883ce6ab59234ed2c8c995b7",
"text": "Self-amputation of the penis treated immediately: case report and review of the literature. Self-amputation of the penis is rare in urological practice. It occurs more often in a context psychotic disease. It can also be secondary to alcohol or drugs abuse. Treatment and care vary according on the severity of the injury, the delay of consultation and the patient's mental state. The authors report a case of self-amputation of the penis in an alcoholic context. The authors analyze the etiological and urological aspects of this trauma.",
"title": ""
},
{
"docid": "1fd51acb02bafb3ea8f5678581a873a4",
"text": "How often has this scenario happened? You are driving at night behind a car that has bright light-emitting diode (LED) taillights. When looking directly at the taillights, the light is not blurry, but when glancing at other objects, a trail of lights appears, known as a phantom array. The reason for this trail of lights might not be what you expected: it is not due to glare, degradation of eyesight, or astigmatism. The culprit may be the flickering of the LED lights caused by pulse-width modulating (PWM) drive circuitry. Actually, many LED taillights flicker on and off at frequencies between 200 and 500 Hz, which is too fast to notice when the eye is not in rapid motion. However, during a rapid eye movement (saccade), the images of the LED lights appear in different positions on the retina, causing a trail of images to be perceived (Figure 1). This disturbance of vision may not occur with all LED taillights because some taillights keep a constant current through the LEDs. However, when there is a PWM current through the LEDs, the biological effect of the light flicker may become noticeable during the eye saccade.",
"title": ""
},
{
"docid": "c60957f1bf90450eb947d2b0ab346ffb",
"text": "Hashing-based approximate nearest neighbor (ANN) search in huge databases has become popular due to its computational and memory efficiency. The popular hashing methods, e.g., Locality Sensitive Hashing and Spectral Hashing, construct hash functions based on random or principal projections. The resulting hashes are either not very accurate or are inefficient. Moreover, these methods are designed for a given metric similarity. On the contrary, semantic similarity is usually given in terms of pairwise labels of samples. There exist supervised hashing methods that can handle such semantic similarity, but they are prone to overfitting when labeled data are small or noisy. In this work, we propose a semi-supervised hashing (SSH) framework that minimizes empirical error over the labeled set and an information theoretic regularizer over both labeled and unlabeled sets. Based on this framework, we present three different semi-supervised hashing methods, including orthogonal hashing, nonorthogonal hashing, and sequential hashing. Particularly, the sequential hashing method generates robust codes in which each hash function is designed to correct the errors made by the previous ones. We further show that the sequential learning paradigm can be extended to unsupervised domains where no labeled pairs are available. Extensive experiments on four large datasets (up to 80 million samples) demonstrate the superior performance of the proposed SSH methods over state-of-the-art supervised and unsupervised hashing techniques.",
"title": ""
},
{
"docid": "f25c0b1fef38b7322197d61dd5dcac41",
"text": "Hepatocellular carcinoma (HCC) is one of the most common malignancies worldwide and one of the few malignancies with an increasing incidence in the USA. While the relationship between HCC and its inciting risk factors (e.g., hepatitis B, hepatitis C and alcohol liver disease) is well defined, driving genetic alterations are still yet to be identified. Clinically, HCC tends to be hypervascular and, for that reason, transarterial chemoembolization has proven to be effective in managing many patients with localized disease. More recently, angiogenesis has been targeted effectively with pharmacologic strategies, including monoclonal antibodies against VEGF and the VEGF receptor, as well as small-molecule kinase inhibitors of the VEGF receptor. Targeting angiogenesis with these approaches has been validated in several different solid tumors since the initial approval of bevacizumab for advanced colon cancer in 2004. In HCC, only sorafenib has been shown to extend survival in patients with advanced HCC and has opened the door for other anti-angiogenic strategies. Here, we will review the data supporting the targeting of the VEGF axis in HCC and the preclinical and early clinical development of bevacizumab.",
"title": ""
},
{
"docid": "291a1927343797d72f50134b97f73d88",
"text": "This paper proposes a half-rate single-loop reference-less binary CDR that operates from 8.5 Gb/s to 12.1 Gb/s (36% capture range). The high capture range is made possible by adding a novel frequency detection mechanism which limits the magnitude of the phase error between the input data and the VCO clock. The proposed frequency detector produces three phases of the data, and feeds into the phase detector the data phase that minimizes the CDR phase error. This frequency detector, implemented within a 10 Gb/s CDR in Fujitsu's 65 nm CMOS, consumes 11 mW and improves the capture range by up to 6 × when it is activated.",
"title": ""
},
{
"docid": "a6c3a4dfd33eb902f5338f7b8c7f78e5",
"text": "A grey wolf optimizer for modular neural network (MNN) with a granular approach is proposed. The proposed method performs optimal granulation of data and design of modular neural networks architectures to perform human recognition, and to prove its effectiveness benchmark databases of ear, iris, and face biometric measures are used to perform tests and comparisons against other works. The design of a modular granular neural network (MGNN) consists in finding optimal parameters of its architecture; these parameters are the number of subgranules, percentage of data for the training phase, learning algorithm, goal error, number of hidden layers, and their number of neurons. Nowadays, there is a great variety of approaches and new techniques within the evolutionary computing area, and these approaches and techniques have emerged to help find optimal solutions to problems or models and bioinspired algorithms are part of this area. In this work a grey wolf optimizer is proposed for the design of modular granular neural networks, and the results are compared against a genetic algorithm and a firefly algorithm in order to know which of these techniques provides better results when applied to human recognition.",
"title": ""
},
{
"docid": "a2b3cdf440dd6aa139ea51865d8f81cc",
"text": "Hyperspectral image (HSI) classification is a hot topic in the remote sensing community. This paper proposes a new framework of spectral-spatial feature extraction for HSI classification, in which for the first time the concept of deep learning is introduced. Specifically, the model of autoencoder is exploited in our framework to extract various kinds of features. First we verify the eligibility of autoencoder by following classical spectral information based classification and use autoencoders with different depth to classify hyperspectral image. Further in the proposed framework, we combine PCA on spectral dimension and autoencoder on the other two spatial dimensions to extract spectral-spatial information for classification. The experimental results show that this framework achieves the highest classification accuracy among all methods, and outperforms classical classifiers such as SVM and PCA-based SVM.",
"title": ""
},
{
"docid": "0d7586e443f265015beed6f8bdc15def",
"text": "With the rapid growth of E-Commerce on the Internet, online product search service has emerged as a popular and effective paradigm for customers to find desired products and select transactions. Most product search engines today are based on adaptations of relevance models devised for information retrieval. However, there is still a big gap between the mechanism of finding products that customers really desire to purchase and that of retrieving products of high relevance to customers' query. In this paper, we address this problem by proposing a new ranking framework for enhancing product search based on dynamic best-selling prediction in E-Commerce. Specifically, we first develop an effective algorithm to predict the dynamic best-selling, i.e. the volume of sales, for each product item based on its transaction history. By incorporating such best-selling prediction with relevance, we propose a new ranking model for product search, in which we rank higher the product items that are not only relevant to the customer's need but with higher probability to be purchased by the customer. Results of a large scale evaluation, conducted over the dataset from a commercial product search engine, demonstrate that our new ranking method is more effective for locating those product items that customers really desire to buy at higher rank positions without hurting the search relevance.",
"title": ""
},
{
"docid": "8bea1f9e107cfcebc080bc62d7ac600d",
"text": "The introduction of wireless transmissions into the data center has shown to be promising in improving cost effectiveness of data center networks DCNs. For high transmission flexibility and performance, a fundamental challenge is to increase the wireless availability and enable fully hybrid and seamless transmissions over both wired and wireless DCN components. Rather than limiting the number of wireless radios by the size of top-of-rack switches, we propose a novel DCN architecture, Diamond, which nests the wired DCN with radios equipped on all servers. To harvest the gain allowed by the rich reconfigurable wireless resources, we propose the low-cost deployment of scalable 3-D ring reflection spaces RRSs which are interconnected with streamlined wired herringbone to enable large number of concurrent wireless transmissions through high-performance multi-reflection of radio signals over metal. To increase the number of concurrent wireless transmissions within each RRS, we propose a precise reflection method to reduce the wireless interference. We build a 60-GHz-based testbed to demonstrate the function and transmission ability of our proposed architecture. We further perform extensive simulations to show the significant performance gain of diamond, in supporting up to five times higher server-to-server capacity, enabling network-wide load balancing, and ensuring high fault tolerance.",
"title": ""
},
{
"docid": "fec16344f8b726b9d232423424c101d3",
"text": "A triboelectric separator manufactured by PlasSep, Ltd., Canada was evaluated at MBA Polymers, Inc. as part of a project sponsored by the American Plastics Council (APC) to explore the potential of triboelectric methods for separating commingled plastics from end-oflife durables. The separator works on a very simple principle: that dissimilar materials will transfer electrical charge to one another when rubbed together, the resulting surface charge differences can then be used to separate these dissimilar materials from one another in an electric field. Various commingled plastics were tested under controlled operating conditions. The feed materials tested include commingled plastics derived from electronic shredder residue (ESR), automobile shredder residue (ASR), refrigerator liners, and water bottle plastics. The separation of ESR ABS and HIPS, and water bottle PC and PVC were very promising. However, this device did not efficiently separate many plastic mixtures, such as rubber and plastics; nylon and acetal; and PE and PP from ASR. All tests were carried out based on the standard operating conditions determined for ESR ABS and HIPS. There is the potential to improve the separation performance for many of the feed materials by individually optimizing their operating conditions. Cursory economics shows that the operation cost is very dependent upon assumed throughput, separation efficiency and requisite purity. Unit operation cost could range from $0.03/lb. to $0.05/lb. at capacities of 2000 lb./hr. and 1000 lb./hr.",
"title": ""
},
{
"docid": "532ded1b0cc25a21464996a15a976125",
"text": "Folded-plate structures provide an efficient design using thin laminated veneer lumber panels. Inspired by Japanese furniture joinery, the multiple tab-and-slot joint was developed for the multi-assembly of timber panels with non-parallel edges without adhesive or metal joints. Because the global analysis of our origami structures reveals that the rotational stiffness at ridges affects the global behaviour, we propose an experimental and numerical study of this linear interlocking connection. Its geometry is governed by three angles that orient the contact faces. Nine combinations of these angles were tested and the rotational slip was measured with two different bending set-ups: closing or opening the fold formed by two panels. The non-linear behaviour was conjointly reproduced numerically using the finite element method and continuum damage mechanics.",
"title": ""
},
{
"docid": "d83853692581644f3a86ad0e846c48d2",
"text": "This paper investigates cyber security issues with automatic dependent surveillance broadcast (ADS-B) based air traffic control. Before wide-scale deployment in civil aviation, any airborne or ground-based technology must be ensured to have no adverse impact on safe and profitable system operations, both under normal conditions and failures. With ADS-B, there is a lack of a clear understanding about vulnerabilities, how they can impact airworthiness and what failure conditions they can potentially induce. The proposed work streamlines a threat assessment methodology for security evaluation of ADS-B based surveillance. To the best of our knowledge, this work is the first to identify the need for mechanisms to secure ADS-B based airborne surveillance and propose a security solution. This paper presents preliminary findings and results of the ongoing investigation.12",
"title": ""
},
{
"docid": "1a5189a09df624d496b83470eed4cfb6",
"text": "Vol. 24, No. 1, 2012 103 Received January 5, 2011, Revised March 9, 2011, Accepted for publication April 6, 2011 Corresponding author: Gyong Moon Kim, M.D., Department of Dermatology, St. Vincent Hospital, College of Medicine, The Catholic University of Korea, 93-6 Ji-dong, Paldal-gu, Suwon 442-723, Korea. Tel: 82-31-249-7465, Fax: 82-31-253-8927, E-mail: gyongmoonkim@ catholic.ac.kr This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http:// creativecommons.org/licenses/by-nc/3.0) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited. Ann Dermatol Vol. 24, No. 1, 2012 http://dx.doi.org/10.5021/ad.2012.24.1.103",
"title": ""
},
{
"docid": "9973de0dc30f8e8f7234819163a15db2",
"text": "Jennifer L. Docktor, Natalie E. Strand, José P. Mestre, and Brian H. Ross Department of Physics, University of Wisconsin–La Crosse, La Crosse, Wisconsin 54601, USA Department of Physics, University of Illinois, Urbana, Illinois 61801, USA Beckman Institute for Advanced Science and Technology, University of Illinois, Urbana, Illinois 61801, USA Department of Educational Psychology, University of Illinois, Champaign, Illinois 61820, USA Department of Psychology, University of Illinois, Champaign, Illinois 61820, USA (Received 30 April 2015; published 1 September 2015)",
"title": ""
},
{
"docid": "39e6ddd04b7fab23dbbeb18f2696536e",
"text": "Moving IoT components from the cloud onto edge hosts helps in reducing overall network traffic and thus minimizes latency. However, provisioning IoT services on the IoT edge devices presents new challenges regarding system design and maintenance. One possible approach is the use of software-defined IoT components in the form of virtual IoT resources. This, in turn, allows exposing the thing/device layer and the core IoT service layer as collections of micro services that can be distributed to a broad range of hosts.\n This paper presents the idea and evaluation of using virtual resources in combination with a permission-based blockchain for provisioning IoT services on edge hosts.",
"title": ""
},
{
"docid": "55a798fd7ec96239251fce2a340ba1ba",
"text": "At EUROCRYPT’88, we introduced an interactive zero-howledge protocol ( G ~ O U and Quisquater [13]) fitted to the authentication of tamper-resistant devices (e.g. smart cads , Guillou and Ugon [14]). Each security device stores its secret authentication number, an RSA-like signature computed by an authority from the device identity. Any transaction between a tamperresistant security device and a verifier is limited to a unique interaction: the device sends its identity and a random test number; then the verifier teUs a random large question; and finally the device answers by a witness number. The transaction is successful when the test number is reconstructed from the witness number, the question and the identity according to numbers published by the authority and rules of redundancy possibly standardized. This protocol allows a cooperation between users in such a way that a group of cooperative users looks like a new entity, having a shadowed identity the product of the individual shadowed identities, while each member reveals nothing about its secret. In another scenario, the secret is partitioned between distinkt devices sharing the same identity. A group of cooperative users looks like a unique user having a larger public exponent which is the greater common multiple of each individual exponent. In this paper, additional features are introduced in order to provide: firstly, a mutual interactive authentication of both communicating entities and previously exchanged messages, and, secondly, a digital signature of messages, with a non-interactive zero-knowledge protocol. The problem of multiple signature is solved here in a very smart way due to the possibilities of cooperation between users. The only secret key is the factors of the composite number chosen by the authority delivering one authentication number to each smart card. This key is not known by the user. At the user level, such a scheme may be considered as a keyless identity-based integrity scheme. This integrity has a new and important property: it cannot be misused, i.e. derived into a confidentiality scheme.",
"title": ""
}
] |
scidocsrr
|
b3d61252436267694daa1f132f6726ca
|
Progress in Tourism Management Tourism supply chain management : A new research agenda
|
[
{
"docid": "5bd3cf8712d04b19226e53fca937e5a6",
"text": "This paper reviews the published studies on tourism demand modelling and forecasting since 2000. One of the key findings of this review is that the methods used in analysing and forecasting the demand for tourism have been more diverse than those identified by other review articles. In addition to the most popular time series and econometric models, a number of new techniques have emerged in the literature. However, as far as the forecasting accuracy is concerned, the study shows that there is no single model that consistently outperforms other models in all situations. Furthermore, this study identifies some new research directions, which include improving the forecasting accuracy through forecast combination; integrating both qualitative and quantitative forecasting approaches, tourism cycles and seasonality analysis, events’ impact assessment and risk forecasting.",
"title": ""
}
] |
[
{
"docid": "1274ab286b1e3c5701ebb73adc77109f",
"text": "In this paper, we propose the first real time rumor debunking algorithm for Twitter. We use cues from 'wisdom of the crowds', that is, the aggregate 'common sense' and investigative journalism of Twitter users. We concentrate on identification of a rumor as an event that may comprise of one or more conflicting microblogs. We continue monitoring the rumor event and generate real time updates dynamically based on any additional information received. We show using real streaming data that it is possible, using our approach, to debunk rumors accurately and efficiently, often much faster than manual verification by professionals.",
"title": ""
},
{
"docid": "5d23af3f778a723b97690f8bf54dfa41",
"text": "Software engineering techniques have been employed for many years to create software products. The selections of appropriate software development methodologies for a given project, and tailoring the methodologies to a specific requirement have been a challenge since the establishment of software development as a discipline. In the late 1990’s, the general trend in software development techniques has changed from traditional waterfall approaches to more iterative incremental development approaches with different combination of old concepts, new concepts, and metamorphosed old concepts. Nowadays, the aim of most software companies is to produce software in short time period with minimal costs, and within unstable, changing environments that inspired the birth of Agile. Agile software development practice have caught the attention of software development teams and software engineering researchers worldwide during the last decade but scientific research and published outcomes still remains quite scarce. Every agile approach has its own development cycle that results in technological, managerial and environmental changes in the software companies. This paper explains the values and principles of ten agile practices that are becoming more and more dominant in the software development industry. Agile processes are not always beneficial, they have some limitations as well, and this paper also discusses the advantages and disadvantages of Agile processes.",
"title": ""
},
{
"docid": "21e235169d37658afee28d5f3f7c831b",
"text": "Two studies assessed the effects of a training procedure (Goal Management Training, GMT), derived from Duncan's theory of goal neglect, on disorganized behavior following TBI. In Study 1, patients with traumatic brain injury (TBI) were randomly assigned to brief trials of GMT or motor skills training. GMT, but not motor skills training, was associated with significant gains on everyday paper-and-pencil tasks designed to mimic tasks that are problematic for patients with goal neglect. In Study 2, GMT was applied in a postencephalitic patient seeking to improve her meal-preparation abilities. Both naturalistic observation and self-report measures revealed improved meal preparation performance following GMT. These studies provide both experimental and clinical support for the efficacy of GMT toward the treatment of executive functioning deficits that compromise independence in patients with brain damage.",
"title": ""
},
{
"docid": "3f1ab17fb722d5a2612675673b200a82",
"text": "In this paper, we show that the recent integration of statistical models with deep recurrent neural networks provides a new way of formulating volatility (the degree of variation of time series) models that have been widely used in time series analysis and prediction in finance. The model comprises a pair of complementary stochastic recurrent neural networks: the generative network models the joint distribution of the stochastic volatility process; the inference network approximates the conditional distribution of the latent variables given the observables. Our focus here is on the formulation of temporal dynamics of volatility over time under a stochastic recurrent neural network framework. Experiments on real-world stock price datasets demonstrate that the proposed model generates a better volatility estimation and prediction that outperforms mainstream methods, e.g., deterministic models such as GARCH and its variants, and stochastic models namely the MCMC-based model stochvol as well as the Gaussian process volatility model GPVol, on average negative log-likelihood.",
"title": ""
},
{
"docid": "8b1bd5243d4512324e451a780c1ec7d3",
"text": "If you get the printed book in on-line book store, you may also find the same problem. So, you must move store to store and search for the available there. But, it will not happen here. The book that we will offer right here is the soft file concept. This is what make you can easily find and get this fundamentals of computer security by reading this site. We offer you the best product, always and always.",
"title": ""
},
{
"docid": "ed63ebf895f1f37ba9b788c36b8e6cfc",
"text": "Melanocyte stem cells (McSCs) and mouse models of hair graying serve as useful systems to uncover mechanisms involved in stem cell self-renewal and the maintenance of regenerating tissues. Interested in assessing genetic variants that influence McSC maintenance, we found previously that heterozygosity for the melanogenesis associated transcription factor, Mitf, exacerbates McSC differentiation and hair graying in mice that are predisposed for this phenotype. Based on transcriptome and molecular analyses of Mitfmi-vga9/+ mice, we report a novel role for MITF in the regulation of systemic innate immune gene expression. We also demonstrate that the viral mimic poly(I:C) is sufficient to expose genetic susceptibility to hair graying. These observations point to a critical suppressor of innate immunity, the consequences of innate immune dysregulation on pigmentation, both of which may have implications in the autoimmune, depigmenting disease, vitiligo.",
"title": ""
},
{
"docid": "cee3833160aa1cc513e96d49b72eeea9",
"text": "Spatial filtering (SF) constitutes an integral part of building EEG-based brain-computer interfaces (BCIs). Algorithms frequently used for SF, such as common spatial patterns (CSPs) and independent component analysis, require labeled training data for identifying filters that provide information on a subject's intention, which renders these algorithms susceptible to overfitting on artifactual EEG components. In this study, beamforming is employed to construct spatial filters that extract EEG sources originating within predefined regions of interest within the brain. In this way, neurophysiological knowledge on which brain regions are relevant for a certain experimental paradigm can be utilized to construct unsupervised spatial filters that are robust against artifactual EEG components. Beamforming is experimentally compared with CSP and Laplacian spatial filtering (LP) in a two-class motor-imagery paradigm. It is demonstrated that beamforming outperforms CSP and LP on noisy datasets, while CSP and beamforming perform almost equally well on datasets with few artifactual trials. It is concluded that beamforming constitutes an alternative method for SF that might be particularly useful for BCIs used in clinical settings, i.e., in an environment where artifact-free datasets are difficult to obtain.",
"title": ""
},
{
"docid": "4af5b29ebda47240d51cd5e7765d990f",
"text": "In this paper, a Rectangular Waveguide (RW) to microstrip transition with Low-Temperature Co-fired Ceramic (LTCC) technology in Ka-band is designed, fabricated and measured. Compared to the traditional transition using a rectangular slot, the proposed Stepped-Impedance Resonator (SIR) slot enlarges the bandwidth of the transition. By introducing an additional design parameter, it generates multi-modes within the transition. To further improve the bandwidth and to adjust the performance of the transition, a resonant strip is embedded between the open microstrip line and its ground plane. Measured results agree well with that of the simulation, showing an effective bandwidth about 22% (from 28.5 GHz to 36.5GHz), an insertion loss approximately 3 dB and return loss better than 15 dB in the pass-band.",
"title": ""
},
{
"docid": "b7eb2c65c459c9d5776c1e2cba84706c",
"text": "Observers, searching for targets among distractor items, guide attention with a mix of top-down information--based on observers' knowledge--and bottom-up information--stimulus-based and largely independent of that knowledge. There are 2 types of top-down guidance: explicit information (e.g., verbal description) and implicit priming by preceding targets (top-down because it implies knowledge of previous searches). Experiments 1 and 2 separate bottom-up and top-down contributions to singleton search. Experiment 3 shows that priming effects are based more strongly on target than on distractor identity. Experiments 4 and 5 show that more difficult search for one type of target (color) can impair search for other types (size, orientation). Experiment 6 shows that priming guides attention and does not just modulate response.",
"title": ""
},
{
"docid": "44480b69d1f49703db82977d1e248946",
"text": "Civic crowdfunding is a sub-type of crowdfunding whereby citizens contribute to funding community-based projects ranging from physical structures to amenities. Though civic crowdfunding has great potential for impact, it remains a developing field in terms of project success and widespread adoption. To explore how technology shapes interactions and outcomes within civic projects, our research addresses two interrelated questions: how do offline communities engage online across civic crowdfunding projects, and, what purpose does this activity serve both projects and communities? These questions are explored through discussion of types of offline communities and description of online activity across civic crowdfunding projects. We conclude by considering the implications of this knowledge for civic crowdfunding and its continued research.",
"title": ""
},
{
"docid": "5efd5fb9caaeadb90a684d32491f0fec",
"text": "The ModelNiew/Controller design pattern is very useful for architecting interactive software systems. This design pattern is partition-independent, because it is expressed in terms of an interactive application running in a single address space. Applying the ModelNiew/Controller design pattern to web-applications is therefore complicated by the fact that current technologies encourage developers to partition the application as early as in the design phase. Subsequent changes to that partitioning require considerable changes to the application's implementation despite the fact that the application logic has not changed. This paper introduces the concept of Flexible Web-Application Partitioning, a programming model and implementation infrastructure, that allows developers to apply the ModeWViewKontroller design pattern in a partition-independent manner: Applications are developed and tested in a single address-space; they can then be deployed to various clientherver architectures without changing the application's source code. In addition, partitioning decisions can be changed without modifying the application.",
"title": ""
},
{
"docid": "a9372375af0500609b7721120181c280",
"text": "Copyright © 2014 Alicia Garcia-Falgueras. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. In accordance of the Creative Commons Attribution License all Copyrights © 2014 are reserved for SCIRP and the owner of the intellectual property Alicia Garcia-Falgueras. All Copyright © 2014 are guarded by law and by SCIRP as a guardian.",
"title": ""
},
{
"docid": "b856ab3760ff0f762fda12cc852903da",
"text": "This paper presents a detection method of small-foreign-metal particles using a 400 kHz SiC-MOSFETs high-frequency inverter. A 400 kHz SiC-MOSFETs high-frequency inverter is developed and applied to the small-foreign-metal particles detection on high-performance chemical films (HPCFs). HPCFs are manufactured with continuous production lines in industries. A new arrangement of IH coils are proposed, which is applicable for the practical production-lines of HPCFs. A prototype experimental model is constructed and tested. Experimental results demonstrate that the newly proposed IH coils with the constructed 400 kHz SiC-MOSFETs can heat small-foreign-metal particles and the heated small-foreign-metal particles can be detected by a thermographic camera. Experimental results with a new arrangement of IH coils also demonstrate that the proposed detection method of small-foreign-metal particles using 400 kHz SiC-MOSFETs high-frequency inverter can be applicable for the practical production lines of HPCFs.",
"title": ""
},
{
"docid": "8f4b873cab626dbf0ebfc79397086545",
"text": "R emote-sensing techniques have transformed ecological research by providing both spatial and temporal perspectives on ecological phenomena that would otherwise be difficult to study (eg Kerr and Ostrovsky 2003; Running et al. 2004; Vierling et al. 2008). In particular, a strong focus has been placed on the use of data obtained from space-borne remote-sensing instruments because these provide regional-to global-scale observations and repeat time-series sampling of ecological indicators (eg Gould 2000). The main limitation of most of the research-focused satellite missions is the mismatch between the pixel resolution of many regional-extent sensors (eg Landsat [spatial resolution of ~30 m] to the Moderate Resolution Imaging Spectro-radiometer [spatial resolution of ~1 km]), the revisit period (eg 18 days for Landsat), and the scale of many ecological processes. Indeed, data provided by these platforms are often \" too general to meet regional or local objectives \" in ecology (Wulder et al. 2004). To address this limitation, a range of new (largely commercially operated) satellite sensors have become operational over the past decade, offering data at finer than 10-m spatial resolution with more responsive capabilities (eg Quickbird, IKONOS, GeoEye-1, OrbView-3, WorldView-2). Such data are useful for ecological studies (Fretwell et al. 2012), but there remain three operational constraints: (1) a high cost per scene; (2) suitable repeat times are often only possible if oblique view angles are used, distorting geometric and radiometric pixel properties; and (3) cloud contamination, which can obscure features of interest (Loarie et al. 2007). Imaging sensors on board civilian aircraft platforms may also be used; these can provide more scale-appropriate data for fine-scale ecological studies, including data from light detection and ranging (LiDAR) sensors (Vierling et al. 2008). In theory, these surveys can be made on demand, but in practice data acquisition is costly, meaning that regular time-series monitoring is operationally constrained. A new method for fine-scale remote sensing is now emerging that could address all of these operational issues and thus potentially revolutionize spatial ecology and environmental science. Unmanned aerial vehicles (UAVs) are lightweight, low-cost aircraft platforms operated from the ground that can carry imaging or non-imaging payloads. UAVs offer ecologists a promising route to responsive, timely, and cost-effective monitoring of environmental phenomena at spatial and temporal resolutions that are appropriate to the scales of many ecologically relevant variables. Emerging from a military background, there are now a growing number of civilian agencies and organizations that have recognized the …",
"title": ""
},
{
"docid": "72d75ebfc728d3b287bcaf429a6b2ee5",
"text": "We present a fully integrated 7nm CMOS platform featuring a 3rd generation finFET architecture, SAQP for fin formation, and SADP for BEOL metallization. This technology reflects an improvement of 2.8X routed logic density and >40% performance over the 14nm reference technology described in [1-3]. A full range of Vts is enabled on-chip through a unique multi-workfunction process. This enables both excellent low voltage SRAM response and highly scaled memory area simultaneously. The HD 6-T bitcell size is 0.0269um2. This 7nm technology is fully enabled by immersion lithography and advanced optical patterning techniques (like SAQP and SADP). However, the technology platform is also designed to leverage EUV insertion for specific multi-patterned (MP) levels for cycle time benefit and manufacturing efficiency. A complete set of foundation and complex IP is available in this advanced CMOS platform to enable both High Performance Compute (HPC) and mobile applications.",
"title": ""
},
{
"docid": "caf5b727bfc59efc9f60697321796920",
"text": "As humans start to spend more time in collaborative virtual environments (CVEs) it becomes important to study their interactions in such environments. One aspect of such interactions is personal space. To begin to address this, we have conducted empirical investigations in a non immersive virtual environment: an experiment to investigate the influence on personal space of avatar gender, and an observational study to further explore the existence of personal space. Experimental results give some evidence to suggest that avatar gender has an influence on personal space although the participants did not register high personal space invasion anxiety, contrary to what one might expect from personal space invasion in the physical world. The observational study suggests that personal space does exist in CVEs, as the users tend to maintain, in a similar way to the physical world, a distance when they are interacting with each other. Our studies provide an improved understanding of personal space in CVEs and the results can be used to further enhance the usability of these environments.",
"title": ""
},
{
"docid": "2b97e03fa089cdee0bf504dd85e5e4bb",
"text": "One of the most severe threats to revenue and quality of service in telecom providers is fraud. The advent of new technologies has provided fraudsters new techniques to commit fraud. SIM box fraud is one of such fraud that has emerged with the use of VOIP technologies. In this work, a total of nine features found to be useful in identifying SIM box fraud subscriber are derived from the attributes of the Customer Database Record (CDR). Artificial Neural Networks (ANN) has shown promising solutions in classification problems due to their generalization capabilities. Therefore, supervised learning method was applied using Multi layer perceptron (MLP) as a classifier. Dataset obtained from real mobile communication company was used for the experiments. ANN had shown classification accuracy of 98.71 %.",
"title": ""
},
{
"docid": "54b4726650b3afcddafb120ff99c9951",
"text": "Online harassment has been a problem to a greater or lesser extent since the early days of the internet. Previous work has applied anti-spam techniques like machine-learning based text classification (Reynolds, 2011) to detecting harassing messages. However, existing public datasets are limited in size, with labels of varying quality. The #HackHarassment initiative (an alliance of 1 tech companies and NGOs devoted to fighting bullying on the internet) has begun to address this issue by creating a new dataset superior to its predecssors in terms of both size and quality. As we (#HackHarassment) complete further rounds of labelling, later iterations of this dataset will increase the available samples by at least an order of magnitude, enabling corresponding improvements in the quality of machine learning models for harassment detection. In this paper, we introduce the first models built on the #HackHarassment dataset v1.0 (a new open dataset, which we are delighted to share with any interested researcherss) as a benchmark for future research.",
"title": ""
},
{
"docid": "4418a2cfd7216ecdd277bde2d7799e4d",
"text": "Most of legacy systems use nowadays were modeled and documented using structured approach. Expansion of these systems in terms of functionality and maintainability requires shift towards object-oriented documentation and design, which has been widely accepted by the industry. In this paper, we present a survey of the existing Data Flow Diagram (DFD) to Unified Modeling language (UML) transformation techniques. We analyze transformation techniques using a set of parameters, identified in the survey. Based on identified parameters, we present an analysis matrix, which describes the strengths and weaknesses of transformation techniques. It is observed that most of the transformation approaches are rule based, which are incomplete and defined at abstract level that does not cover in depth transformation and automation issues. Transformation approaches are data centric, which focuses on datastore for class diagram generation. Very few of the transformation techniques have been applied on case study as a proof of concept, which are not comprehensive and majority of them are partially automated. Keywords-Unified Modeling Language (UML); Data Flow Diagram (DFD); Class Diagram; Model Transformation.",
"title": ""
},
{
"docid": "6ae289d7da3e923c1288f39fd7a162f6",
"text": "The usage of digital evidence from electronic devices has been rapidly expanding within litigation, and along with this increased usage, the reliance upon forensic computer examiners to acquire, analyze, and report upon this evidence is also rapidly growing. This growing demand for forensic computer examiners raises questions concerning the selection of individuals qualified to perform this work. While courts have mechanisms for qualifying witnesses that provide testimony based on scientific data, such as digital data, the qualifying criteria covers a wide variety of characteristics including, education, experience, training, professional certifications, or other special skills. In this study, we compare task performance responses from forensic computer examiners with an expert review panel and measure the relationship with the characteristics of the examiners to their quality responses. The results of this analysis provide insight into identifying forensic computer examiners that provide high-quality responses.",
"title": ""
}
] |
scidocsrr
|
dda7e725f5664f85045c296ce3436776
|
Real-time liquid biopsy in cancer patients: fact or fiction?
|
[
{
"docid": "a4d18ca808d30a25d7f974a8d9093124",
"text": "Metastases, rather than primary tumours, are responsible for most cancer deaths. To prevent these deaths, improved ways to treat metastatic disease are needed. Blood flow and other mechanical factors influence the delivery of cancer cells to specific organs, whereas molecular interactions between the cancer cells and the new organ influence the probability that the cells will grow there. Inhibition of the growth of metastases in secondary sites offers a promising approach for cancer therapy.",
"title": ""
}
] |
[
{
"docid": "20cb30a452bf20c9283314decfb7eb6e",
"text": "In this paper, we apply bidirectional training to a long short term memory (LSTM) network for the first time. We also present a modified, full gradient version of the LSTM learning algorithm. We discuss the significance of framewise phoneme classification to continuous speech recognition, and the validity of using bidirectional networks for online causal tasks. On the TIMIT speech database, we measure the framewise phoneme classification scores of bidirectional and unidirectional variants of both LSTM and conventional recurrent neural networks (RNNs). We find that bidirectional LSTM outperforms both RNNs and unidirectional LSTM.",
"title": ""
},
{
"docid": "10a213a6bbf6269eb7f3d0dae8601b9a",
"text": "Behaviour trees provide the possibility of improving on existing Artificial Intelligence techniques in games by being simple to implement, scalable, able to handle the complexity of games, and modular to improve reusability. This ultimately improves the development process for designing automated game players. We cover here the use of behaviour trees to design and develop an AI-controlled player for the commercial real-time strategy game DEFCON. In particular, we evolved behaviour trees to develop a competitive player which was able to outperform the game’s original AI-bot more than 50% of the time. We aim to highlight the potential for evolving behaviour trees as a practical approach to developing AI-bots in games.",
"title": ""
},
{
"docid": "242e78ed606d13502ace6d5eae00b315",
"text": "Use of information technology management framework plays a major influence on organizational success. This article focuses on the field of Internet of Things (IoT) management. In this study, a number of risks in the field of IoT is investigated, then with review of a number of COBIT5 risk management schemes, some associated strategies, objectives and roles are provided. According to the in-depth studies of this area it is expected that using the best practices of COBIT5 can be very effective, while the use of this standard considerably improve some criteria such as performance, cost and time. Finally, the paper proposes a framework which reflects the best practices and achievements in the field of IoT risk management.",
"title": ""
},
{
"docid": "d399e142488766759abf607defd848f0",
"text": "The high penetration of cell phones in today's global environment offers a wide range of promising mobile marketing activities, including mobile viral marketing campaigns. However, the success of these campaigns, which remains unexplored, depends on the consumers' willingness to actively forward the advertisements that they receive to acquaintances, e.g., to make mobile referrals. Therefore, it is important to identify and understand the factors that influence consumer referral behavior via mobile devices. The authors analyze a three-stage model of consumer referral behavior via mobile devices in a field study of a firm-created mobile viral marketing campaign. The findings suggest that consumers who place high importance on the purposive value and entertainment value of a message are likely to enter the interest and referral stages. Accounting for consumers' egocentric social networks, we find that tie strength has a negative influence on the reading and decision to refer stages and that degree centrality has no influence on the decision-making process. © 2013 Direct Marketing Educational Foundation, Inc. Published by Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "5a9d8c0531a06b5542e8f02b2673b26d",
"text": "Given that e-tailing service failure is inevitable, a better understanding of how service failure and recovery affect customer loyalty represents an important topic for academics and practitioners. This study explores the relationship of service failure severity, service recovery justice (i.e., interactional justice, procedural justice, and distributive justice), and perceived switching costs with customer loyalty; as well, the moderating relationship of service recovery justice and perceived switching costs on the link between service failure severity and customer loyalty in the context of e-tailing are investigated. Data collected from 221 erceived switching costs ustomer loyalty useful respondents are tested against the research model using the partial least squares (PLS) approach. The results indicate that service failure severity, interactional justice, procedural justice and perceived switching costs have a significant relationship with customer loyalty, and that interactional justice can mitigate the negative relationship between service failure severity and customer loyalty. These findings provide several important theoretical and practical implications in terms of e-tailing service failure and",
"title": ""
},
{
"docid": "5b0d5ebe7666334b09a1136c1cb2d8e4",
"text": "In this paper, lesion areas affected by anthracnose are segmented using segmentation techniques, graded based on percentage of affected area and neural network classifier is used to classify normal and anthracnose affected on fruits. We have considered three types of fruit namely mango, grape and pomegranate for our work. The developed processing scheme consists of two phases. In the first phase, segmentation techniques namely thresholding, region growing, K-means clustering and watershed are employed for separating anthracnose affected lesion areas from normal area. Then these affected areas are graded by calculating the percentage of affected area. In the second phase texture features are extracted using Runlength Matrix. These features are then used for classification purpose using ANN classifier. We have conducted experimentation on a dataset of 600 fruits’ image samples. The classification accuracies for normal and affected anthracnose fruit types are 84.65% and 76.6% respectively. The work finds application in developing a machine vision system in horticulture field.",
"title": ""
},
{
"docid": "ae991359d6e76d0038de5a65f8218732",
"text": "Spatial data mining is the process of discovering interesting and previously unknown, but potentially useful patterns from the spatial and spatiotemporal data. However, explosive growth in the spatial and spatiotemporal data, and the emergence of social media and location sensing technologies emphasize the need for developing new and computationally efficient methods tailored for analyzing big data. In this paper, we review major spatial data mining algorithms by closely looking at the computational and I/O requirements and allude to few applications dealing with big spatial data.",
"title": ""
},
{
"docid": "5f3dc141b69eb50e17bdab68a2195e13",
"text": "The purpose of this study is to develop a fuzzy-AHP multi-criteria decision making model for procurement process. It aims to measure the procurement performance in the automotive industry. As such measurement of procurement will enable competitive advantage and provide a model for continuous improvement. The rapid growth in the market and the level of competition in the global economy transformed procurement as a strategic issue; which is broader in scope and responsibilities as compared to purchasing. This study reviews the existing literature in procurement performance measurement to identify the key areas of measurement and a hierarchical model is developed with a set of generic measures. In addition, a questionnaire is developed for pair-wise comparison and to collect opinion from practitioners, researchers, managers etc. The relative importance of the measurement criteria are assessed using Analytical Hierarchy Process (AHP) and fuzzy-AHP. The validity of the model is c onfirmed with the results obtained.",
"title": ""
},
{
"docid": "ef3b9dd6b463940bc57cdf7605c24b1e",
"text": "With the rapid development of cloud storage, data security in storage receives great attention and becomes the top concern to block the spread development of cloud service. In this paper, we systematically study the security researches in the storage systems. We first present the design criteria that are used to evaluate a secure storage system and summarize the widely adopted key technologies. Then, we further investigate the security research in cloud storage and conclude the new challenges in the cloud environment. Finally, we give a detailed comparison among the selected secure storage systems and draw the relationship between the key technologies and the design criteria.",
"title": ""
},
{
"docid": "88ae7446c9a63086bda9109a696459bd",
"text": "OBJECTIVES\nTo perform a systematic review of neurologic involvement in Systemic sclerosis (SSc) and Localized Scleroderma (LS), describing clinical features, neuroimaging, and treatment.\n\n\nMETHODS\nWe performed a literature search in PubMed using the following MeSH terms, scleroderma, systemic sclerosis, localized scleroderma, localized scleroderma \"en coup de sabre\", Parry-Romberg syndrome, cognitive impairment, memory, seizures, epilepsy, headache, depression, anxiety, mood disorders, Center for Epidemiologic Studies Depression (CES-D), SF-36, Beck Depression Inventory (BDI), Beck Anxiety Inventory (BAI), Patient Health Questionnaire-9 (PHQ-9), neuropsychiatric, psychosis, neurologic involvement, neuropathy, peripheral nerves, cranial nerves, carpal tunnel syndrome, ulnar entrapment, tarsal tunnel syndrome, mononeuropathy, polyneuropathy, radiculopathy, myelopathy, autonomic nervous system, nervous system, electroencephalography (EEG), electromyography (EMG), magnetic resonance imaging (MRI), and magnetic resonance angiography (MRA). Patients with other connective tissue disease knowingly responsible for nervous system involvement were excluded from the analyses.\n\n\nRESULTS\nA total of 182 case reports/studies addressing SSc and 50 referring to LS were identified. SSc patients totalized 9506, while data on 224 LS patients were available. In LS, seizures (41.58%) and headache (18.81%) predominated. Nonetheless, descriptions of varied cranial nerve involvement and hemiparesis were made. Central nervous system involvement in SSc was characterized by headache (23.73%), seizures (13.56%) and cognitive impairment (8.47%). Depression and anxiety were frequently observed (73.15% and 23.95%, respectively). Myopathy (51.8%), trigeminal neuropathy (16.52%), peripheral sensorimotor polyneuropathy (14.25%), and carpal tunnel syndrome (6.56%) were the most frequent peripheral nervous system involvement in SSc. Autonomic neuropathy involving cardiovascular and gastrointestinal systems was regularly described. Treatment of nervous system involvement, on the other hand, varied in a case-to-case basis. However, corticosteroids and cyclophosphamide were usually prescribed in severe cases.\n\n\nCONCLUSIONS\nPreviously considered a rare event, nervous system involvement in scleroderma has been increasingly recognized. Seizures and headache are the most reported features in LS en coup de sabre, while peripheral and autonomic nervous systems involvement predominate in SSc. Moreover, recently, reports have frequently documented white matter lesions in asymptomatic SSc patients, suggesting smaller branches and perforating arteries involvement.",
"title": ""
},
{
"docid": "cfe09d26531229bd54a8009b67e9bfd7",
"text": "Rail transportation plays a critical role to safely and efficiently transport hazardous materials. A number of strategies have been implemented or are being developed to reduce the risk of hazardous materials release from train accidents. Each of these risk reduction strategies has its safety benefit and corresponding implementation cost. However, the cost effectiveness of the integration of different risk reduction strategies is not well understood. Meanwhile, there has been growing interest in the U.S. rail industry and government to best allocate resources for improving hazardous materials transportation safety. This paper presents an optimization model that considers the combination of two types of risk reduction strategies, broken rail prevention and tank car safety design enhancement. A Pareto-optimality technique is used to maximize risk reduction at a given level of investment. The framework presented in this paper can be adapted to address a broader set of risk reduction strategies and is intended to assist decision makers for local, regional and system-wide risk management of rail hazardous materials transportation.",
"title": ""
},
{
"docid": "e4c2fcc09b86dc9509a8763e7293cfe9",
"text": "This paperinvestigatesthe useof particle (sub-word) -grams for languagemodelling. One linguistics-basedand two datadriven algorithmsare presentedand evaluatedin termsof perplexity for RussianandEnglish. Interpolatingword trigramand particle6-grammodelsgivesup to a 7.5%perplexity reduction over thebaselinewordtrigrammodelfor Russian.Latticerescor ing experimentsarealsoperformedon1997DARPA Hub4evaluationlatticeswheretheinterpolatedmodelgivesa 0.4%absolute reductionin worderrorrateoverthebaselinewordtrigrammodel.",
"title": ""
},
{
"docid": "1f18623625304f7c47ca144c8acf4bc9",
"text": "Deep neural networks (DNNs) are known to be vulnerable to adversarial perturbations, which imposes a serious threat to DNN-based decision systems. In this paper, we propose to apply the lossy Saak transform to adversarially perturbed images as a preprocessing tool to defend against adversarial attacks. Saak transform is a recently-proposed state-of-the-art for computing the spatial-spectral representations of input images. Empirically, we observe that outputs of the Saak transform are very discriminative in differentiating adversarial examples from clean ones. Therefore, we propose a Saak transform based preprocessing method with three steps: 1) transforming an input image to a joint spatial-spectral representation via the forward Saak transform, 2) apply filtering to its high-frequency components, and, 3) reconstructing the image via the inverse Saak transform. The processed image is found to be robust against adversarial perturbations. We conduct extensive experiments to investigate various settings of the Saak transform and filtering functions. Without harming the decision performance on clean images, our method outperforms state-of-the-art adversarial defense methods by a substantial margin on both the CIFAR10 and ImageNet datasets. Importantly, our results suggest that adversarial perturbations can be effectively and efficiently defended using state-of-the-art frequency analysis.",
"title": ""
},
{
"docid": "7819d359e169ae18f9bb50f464e1233c",
"text": "As large amount of data is generated in medical organizations (hospitals, medical centers) but this data is not properly used. There is a wealth of hidden information present in the datasets. The healthcare environment is still “information rich” but “knowledge poor”. There is a lack of effective analysis tools to discover hidden relationships and trends in data. Advanced data mining techniques can help remedy this situation. For this purpose we can use different data mining techniques. This research paper intends to provide a survey of current techniques of knowledge discovery in databases using data mining techniques that are in use in today’s medical research particularly in Heart Disease Prediction. This research has developed a prototype Heart Disease Prediction System (HDPS) using data mining techniques namely, Decision Trees, Naïve Bayes and Neural Network. This Heart disease prediction system can answer complex “what if” queries which traditional decision support systems cannot. Using medical profiles such as age, sex, blood pressure and blood sugar it can predict the likelihood of patients getting a heart disease. It enables significant knowledge, e.g. patterns, relationships between medical factors related to heart disease, to be established.",
"title": ""
},
{
"docid": "02c41de2c0447eec4c5198bccdb1d414",
"text": "The paper contends that the use of cost-benefit analysis (CBA) for or against capital punishment is problematic insofar as CBA (1) commodifies and thus reduces the value of human life and (2) cannot quantify all costs and benefits. The paramount theories of punishment, retribution and utilitarianism, which are used as rationales for capital punishment, do not justify the use of cost-benefit analysis as part of that rationale. Calling on the theory of restorative justice, the paper recommends a change in the linguistic register used to describe the value of human beings. In particular, abolitionists should emphasize that human beings have essential value. INTRODUCTION Advocates of the death penalty use economics to justify the use of capital punishment. Scott Turow, an Illinois-based lawyer says it well when he comments that two arguments frequently used by death penalty advocates are that, “the death penalty is a deterrent to others and it is more cost effective than keeping an individual in jail for life” (Turow). Edward Elijas takes the point further in writing the following, “Let’s imagine for a moment there was no death penalty. The only reasonable sentence would a life sentence. This would be costly to the tax payers, not only for the cost of housing and feeding the prisoner but because of the numerous appeals which wastes man hours and money. By treating criminals in this manner, we are encouraging behavior that will result in a prison sentence. If there is no threat of death to one who commits a murder, than that person is guaranteed to be provided with a decent living environment until their next parole hearing. They are definitely not getting the punishment they deserve” (http://www.cwrl.utexas.edu/). According to the argument, whether a person convicted",
"title": ""
},
{
"docid": "dad0c9ce47334ca6133392322068dd68",
"text": "A monolithic 64Gb MLC NAND flash based on 21nm process technology has been developed for the first time. The device consists of 4-plane arrays and provides page size of up to 32KB. It also features a newly developed DDR interface that can support up to the maximum bandwidth of 400MB/s. To address performance and reliability, on-chip randomizer, soft data readout, and incremental bit line precharge scheme have been developed.",
"title": ""
},
{
"docid": "21b4f160b73d7dbe934f7a716c667aef",
"text": "The rapid growth of silicon densities has made it feasible to deploy reconfigurable hardware as a highly parallel computing platform. However, in most cases, the application needs to be programmed in hardware description or assembly languages, whereas most application programmers are familiar with the algorithmic programming paradigm. SA-C has been proposed as an expression-oriented language designed to implicitly express data parallel operations. Morphosys is a reconfigurable system-on-chip architecture that supports a data-parallel, SIMD computational model. This paper describes a compiler framework to analyze SA-C programs, perform optimizations, and map the application onto the Morphosys architecture. The mapping process involves operation scheduling, resource allocation and binding and register allocation in the context of the Morphosys architecture. The execution times of some compiled image-processing kernels can achieve up to 42x speed-up over an 800 MHz Pentium III machine.",
"title": ""
},
{
"docid": "b0ea0b7e3900b440cb4e1d5162c6830b",
"text": "Product Lifecycle Management (PLM) solutions have been serving as the basis for collaborative product definition, manufacturing, and service management in many industries. They capture and provide access to product and process information and preserve integrity of information throughout the lifecycle of a product. Efficient growth in the role of Building Information Modeling (BIM) can benefit vastly from unifying solutions to acquire, manage and make use of information and processes from various project and enterprise level systems, selectively adapting functionality from PLM systems. However, there are important differences between PLM’s target industries and the Architecture, Engineering, and Construction (AEC) industry characteristics that require modification and tailoring of some aspects of current PLM technology. In this study we examine the fundamental PLM functionalities that create synergy with the BIM-enabled AEC industry. We propose a conceptual model for the information flow and integration between BIM and PLM systems. Finally, we explore the differences between the AEC industry and traditional scope of service for PLM solutions.",
"title": ""
},
{
"docid": "509fe613e25c9633df2520e4c3a62b74",
"text": "This study, in an attempt to rise above the intricacy of 'being informed on the verge of globalization,' is founded on the premise that Machine Translation (MT) applications searching for an ideal key to find a universal foundation for all natural languages have a restricted say over the translation process at various discourse levels. Our paper favors not judging against the superiority of human translation vs. machine translation or automated translation in non-English speaking settings, but rather referring to the inadequacies and adequacies of MT at certain pragmatic levels, lacking the right sense and dynamic equivalence, but producing syntactically well-formed or meaning-extractable outputs in restricted settings. Reasoning in this way, the present study supports MT before, during, and after translation. It aims at making translators understand that they could cooperate with the software to obtain a synergistic effect. In other words, they could have a say and have an essential part to play in a semi-automated translation process (Rodrigo, 2001). In this respect, semi-automated translation or MT courses should be included in the curricula of translation departments worldwide to keep track of the state of the art as well as make potential translators aware of future trends.",
"title": ""
},
{
"docid": "be3204a5a4430cc3150bf0368a972e38",
"text": "Deep learning has exploded in the public consciousness, primarily as predictive and analytical products suffuse our world, in the form of numerous human-centered smart-world systems, including targeted advertisements, natural language assistants and interpreters, and prototype self-driving vehicle systems. Yet to most, the underlying mechanisms that enable such human-centered smart products remain obscure. In contrast, researchers across disciplines have been incorporating deep learning into their research to solve problems that could not have been approached before. In this paper, we seek to provide a thorough investigation of deep learning in its applications and mechanisms. Specifically, as a categorical collection of state of the art in deep learning research, we hope to provide a broad reference for those seeking a primer on deep learning and its various implementations, platforms, algorithms, and uses in a variety of smart-world systems. Furthermore, we hope to outline recent key advancements in the technology, and provide insight into areas, in which deep learning can improve investigation, as well as highlight new areas of research that have yet to see the application of deep learning, but could nonetheless benefit immensely. We hope this survey provides a valuable reference for new deep learning practitioners, as well as those seeking to innovate in the application of deep learning.",
"title": ""
}
] |
scidocsrr
|
0c00ccb5f363f28347e55517cfb78f95
|
A Measure of Similarity of Time Series Containing Missing Data Using the Mahalanobis Distance
|
[
{
"docid": "d4f1cdfe13fda841edfb31ced34a4ee8",
"text": "ÐMissing data are often encountered in data sets used to construct effort prediction models. Thus far, the common practice has been to ignore observations with missing data. This may result in biased prediction models. In this paper, we evaluate four missing data techniques (MDTs) in the context of software cost modeling: listwise deletion (LD), mean imputation (MI), similar response pattern imputation (SRPI), and full information maximum likelihood (FIML). We apply the MDTs to an ERP data set, and thereafter construct regression-based prediction models using the resulting data sets. The evaluation suggests that only FIML is appropriate when the data are not missing completely at random (MCAR). Unlike FIML, prediction models constructed on LD, MI and SRPI data sets will be biased unless the data are MCAR. Furthermore, compared to LD, MI and SRPI seem appropriate only if the resulting LD data set is too small to enable the construction of a meaningful regression-based prediction model.",
"title": ""
},
{
"docid": "b9b85e8e4824b7f0cb6443d70ef38b38",
"text": "This paper presents methods for analyzing and manipulating unevenly spaced time series without a transformation to equally spaced data. Processing and analyzing such data in its unaltered form avoids the biases and information loss caused by resampling. Care is taken to develop a framework consistent with a traditional analysis of equally spaced data, as in Brockwell and Davis (1991), Hamilton (1994) and Box, Jenkins, and Reinsel (2004).",
"title": ""
}
] |
[
{
"docid": "00527294606231986ba34d68e847e01a",
"text": "In this paper, we describe a new scheme to learn dynamic user's interests in an automated information filtering and gathering system running on the Internet. Our scheme is aimed to handle multiple domains of long-term and short-term user's interests simultaneously, which is learned through positive and negative user's relevance feedback. We developed a 3-descriptor approach to represent the user's interest categories. Using a learning algorithm derived for this representation, our scheme adapts quickly to significant changes in user interest, and is also able to learn exceptions to interest categories.",
"title": ""
},
{
"docid": "d029ce85b17e37abc93ab704fbef3a98",
"text": "Video super-resolution (SR) aims to generate a sequence of high-resolution (HR) frames with plausible and temporally consistent details from their low-resolution (LR) counterparts. The generation of accurate correspondence plays a significant role in video SR. It is demonstrated by traditional video SR methods that simultaneous SR of both images and optical flows can provide accurate correspondences and better SR results. However, LR optical flows are used in existing deep learning based methods for correspondence generation. In this paper, we propose an endto-end trainable video SR framework to super-resolve both images and optical flows. Specifically, we first propose an optical flow reconstruction network (OFRnet) to infer HR optical flows in a coarse-to-fine manner. Then, motion compensation is performed according to the HR optical flows. Finally, compensated LR inputs are fed to a superresolution network (SRnet) to generate the SR results. Extensive experiments demonstrate that HR optical flows provide more accurate correspondences than their LR counterparts and improve both accuracy and consistency performance. Comparative results on the Vid4 and DAVIS10 datasets show that our framework achieves the stateof-the-art performance. The codes will be released soon at: https://github.com/LongguangWang/SOF-VSR-SuperResolving-Optical-Flow-for-Video-Super-Resolution-.",
"title": ""
},
{
"docid": "62e7974231c091845f908a50f5365d7f",
"text": "Sequentiality of access is an inherent characteristic of many database systems. We use this observation to develop an algorithm which selectively prefetches data blocks ahead of the point of reference. The number of blocks prefetched is chosen by using the empirical run length distribution and conditioning on the observed number of sequential block references immediately preceding reference to the current block. The optimal number of blocks to prefetch is estimated as a function of a number of “costs,” including the cost of accessing a block not resident in the buffer (a miss), the cost of fetching additional data blocks at fault times, and the cost of fetching blocks that are never referenced. We estimate this latter cost, described as memory pollution, in two ways. We consider the treatment (in the replacement algorithm) of prefetched blocks, whether they are treated as referenced or not, and find that it makes very little difference. Trace data taken from an operational IMS database system is analyzed and the results are presented. We show how to determine optimal block sizes. We find that anticipatory fetching of data can lead to significant improvements in system operation.",
"title": ""
},
{
"docid": "11cf4c50ced7ceafe7176a597f0f983d",
"text": "All mature hemopoietic lineage cells, with exclusion of platelets and mature erythrocytes, share the surface expression of a transmembrane phosphatase, the CD45 molecule. It is also present on hemopoietic stem cells and most leukemic clones and therefore presents as an appropriate target for immunotherapy with anti-CD45 antibodies. This short review details the biology of CD45 and its recent targeting for both treatment of malignant disorders and tolerance induction. In particular, the question of potential stem cell depletion for induction of central tolerance or depletion of malignant hemopoietic cells is addressed. Mechanisms underlying the effects downstream of CD45 binding to the cell surface are discussed.",
"title": ""
},
{
"docid": "62d63c1177b2426e133daca0ead7e50f",
"text": "⎯The problem of how to plan coal fuel blending and distribution from overseas coal sources to domestic power plants through some possible seaports by certain types of fleet in order to meet operational and environmental requirements is a complex task. The aspects under consideration includes each coal source contract’s supply, quality and price, each power plant’s demand, environmental requirements and limit on maximum number of different coal sources that can supply it, installation of blending facilities, selection of fleet types, and transient seaport’s capacity limit on fleet types. A coal blending and inter-model transportation model is explored to find optimal blending and distribution decisions for coal fuel from overseas contracts to domestic power plants. The objective in this study is to minimize total logistics costs, including procurement cost, shipping cost, and inland delivery cost. The developed model is one type of mix-integer zero-one programming problems. A real-world case problem is presented using the coal logistics system of a local electric utility company to demonstrate the benefit of the proposed approach. A well-known optimization package, AMPL-CPLEX, is utilized to solve this problem. Results from this study suggest that the obtained solution is better than the rule-of-thumb solution and the developed model provides a tool for management to conduct capacity expansion planning and power generation options. Keywords⎯Blending and inter-modal transportation model, Integer programming, Coal fuel. ∗ Corresponding author’s email: [email protected] International Journal of Operations Research",
"title": ""
},
{
"docid": "8583702b48549c5bbf1553fa0e39a882",
"text": "A critical task for question answering is the final answer selection stage, which has to combine multiple signals available about each answer candidate. This paper proposes EviNets: a novel neural network architecture for factoid question answering. EviNets scores candidate answer entities by combining the available supporting evidence, e.g., structured knowledge bases and unstructured text documents. EviNets represents each piece of evidence with a dense embeddings vector, scores their relevance to the question, and aggregates the support for each candidate to predict their final scores. Each of the components is generic and allows plugging in a variety of models for semantic similarity scoring and information aggregation. We demonstrate the effectiveness of EviNets in experiments on the existing TREC QA and WikiMovies benchmarks, and on the new Yahoo! Answers dataset introduced in this paper. EviNets can be extended to other information types and could facilitate future work on combining evidence signals for joint reasoning in question answering.",
"title": ""
},
{
"docid": "493748a07dbf457e191487fe7459ee7e",
"text": "60 Computer T he Web is a hypertext body of approximately 300 million pages that continues to grow at roughly a million pages per day. Page variation is more prodigious than the data's raw scale: Taken as a whole, the set of Web pages lacks a unifying structure and shows far more author-ing style and content variation than that seen in traditional text-document collections. This level of complexity makes an \" off-the-shelf \" database-management and information-retrieval solution impossible. To date, index-based search engines for the Web have been the primary tool by which users search for information. The largest such search engines exploit technology's ability to store and index much of the Web. Such engines can therefore build giant indices that let you quickly retrieve the set of all Web pages containing a given word or string. Experienced users can make effective use of such engines for tasks that can be solved by searching for tightly constrained keywords and phrases. These search engines are, however, unsuited for a wide range of equally important tasks. In particular, a topic of any breadth will typically contain several thousand or million relevant Web pages. Yet a user will be willing, typically , to look at only a few of these pages. How then, from this sea of pages, should a search engine select the correct ones—those of most value to the user? AUTHORITATIVE WEB PAGES First, to distill a large Web search topic to a size that makes sense to a human user, we need a means of identifying the topic's most definitive or authoritative Web pages. The notion of authority adds a crucial second dimension to the concept of relevance: We wish to locate not only a set of relevant pages, but also those relevant pages of the highest quality. Second, the Web consists not only of pages, but hyperlinks that connect one page to another. This hyperlink structure contains an enormous amount of latent human annotation that can help automatically infer notions of authority. Specifically, the creation of a hyperlink by the author of a Web page represents an implicit endorsement of the page being pointed to; by mining the collective judgment contained in the set of such endorsements, we can gain a richer understanding of the relevance and quality of the Web's contents. To address both these parameters, we began development of the Clever system 1-3 three years ago. Clever …",
"title": ""
},
{
"docid": "8cbfb79df2516bb8a06a5ae9399e3685",
"text": "We consider the problem of approximate set similarity search under Braun-Blanquet similarity <i>B</i>(<i>x</i>, <i>y</i>) = |<i>x</i> â© <i>y</i>| / max(|<i>x</i>|, |<i>y</i>|). The (<i>b</i><sub>1</sub>, <i>b</i><sub>2</sub>)-approximate Braun-Blanquet similarity search problem is to preprocess a collection of sets <i>P</i> such that, given a query set <i>q</i>, if there exists <i>x</i> â <i>P</i> with <i>B</i>(<i>q</i>, <i>x</i>) ⥠<i>b</i><sub>1</sub>, then we can efficiently return <i>x</i>â² â <i>P</i> with <i>B</i>(<i>q</i>, <i>x</i>â²) > <i>b</i><sub>2</sub>. \nWe present a simple data structure that solves this problem with space usage <i>O</i>(<i>n</i><sup>1+Ï</sup>log<i>n</i> + â<sub><i>x</i> â <i>P</i></sub>|<i>x</i>|) and query time <i>O</i>(|<i>q</i>|<i>n</i><sup>Ï</sup> log<i>n</i>) where <i>n</i> = |<i>P</i>| and Ï = log(1/<i>b</i><sub>1</sub>)/log(1/<i>b</i><sub>2</sub>). Making use of existing lower bounds for locality-sensitive hashing by OâDonnell et al. (TOCT 2014) we show that this value of Ï is tight across the parameter space, i.e., for every choice of constants 0 < <i>b</i><sub>2</sub> < <i>b</i><sub>1</sub> < 1. \nIn the case where all sets have the same size our solution strictly improves upon the value of Ï that can be obtained through the use of state-of-the-art data-independent techniques in the Indyk-Motwani locality-sensitive hashing framework (STOC 1998) such as Broderâs MinHash (CCS 1997) for Jaccard similarity and Andoni et al.âs cross-polytope LSH (NIPS 2015) for cosine similarity. Surprisingly, even though our solution is data-independent, for a large part of the parameter space we outperform the currently best data-<em>dependent</em> method by Andoni and Razenshteyn (STOC 2015).",
"title": ""
},
{
"docid": "608bf85fa593c7ddff211c5bcc7dd20a",
"text": "We introduce a composite deep neural network architecture for supervised and language independent context sensitive lemmatization. The proposed method considers the task as to identify the correct edit tree representing the transformation between a word-lemma pair. To find the lemma of a surface word, we exploit two successive bidirectional gated recurrent structures the first one is used to extract the character level dependencies and the next one captures the contextual information of the given word. The key advantages of our model compared to the state-of-the-art lemmatizers such as Lemming and Morfette are (i) it is independent of human decided features (ii) except the gold lemma, no other expensive morphological attribute is required for joint learning. We evaluate the lemmatizer on nine languages Bengali, Catalan, Dutch, Hindi, Hungarian, Italian, Latin, Romanian and Spanish. It is found that except Bengali, the proposed method outperforms Lemming and Morfette on the other languages. To train the model on Bengali, we develop a gold lemma annotated dataset1 (having 1, 702 sentences with a total of 20, 257 word tokens), which is an additional contribution of this work.",
"title": ""
},
{
"docid": "ac1b28346ae9df1dd3b455d113551caf",
"text": "The new IEEE 802.11 standard, IEEE 802.11ax, has the challenging goal of serving more Uplink (UL) traffic and users as compared with his predecessor IEEE 802.11ac, enabling consistent and reliable streams of data (average throughput) per station. In this paper we explore several new IEEE 802.11ax UL scheduling mechanisms and compare between the maximum throughputs of unidirectional UDP Multi Users (MU) triadic. The evaluation is conducted based on Multiple-Input-Multiple-Output (MIMO) and Orthogonal Frequency Division Multiple Access (OFDMA) transmission multiplexing format in IEEE 802.11ax vs. the CSMA/CA MAC in IEEE 802.11ac in the Single User (SU) and MU modes for 1, 4, 8, 16, 32 and 64 stations scenario in reliable and unreliable channels. The comparison is conducted as a function of the Modulation and Coding Schemes (MCS) in use. In IEEE 802.11ax we consider two new flavors of acknowledgment operation settings, where the maximum acknowledgment windows are 64 or 256 respectively. In SU scenario the throughputs of IEEE 802.11ax are larger than those of IEEE 802.11ac by 64% and 85% in reliable and unreliable channels respectively. In MU-MIMO scenario the throughputs of IEEE 802.11ax are larger than those of IEEE 802.11ac by 263% and 270% in reliable and unreliable channels respectively. Also, as the number of stations increases, the advantage of IEEE 802.11ax in terms of the access delay also increases.",
"title": ""
},
{
"docid": "2383c90591822bc0c8cec2b1b2309b7a",
"text": "Apple's iPad has attracted a lot of attention since its release in 2010 and one area in which it has been adopted is the education sector. The iPad's large multi-touch screen, sleek profile and the ability to easily download and purchase a huge variety of educational applications make it attractive to educators. This paper presents a case study of the iPad's adoption in a primary school, one of the first in the world to adopt it. From interviews with teachers and IT staff, we conclude that the iPad's main strengths are the way in which it provides quick and easy access to information for students and the support it provides for collaboration. However, staff need to carefully manage both the teaching and the administrative environment in which the iPad is used, and we provide some lessons learned that can help other schools considering adopting the iPad in the classroom.",
"title": ""
},
{
"docid": "c7fb516fbba3293c92a00beaced3e95e",
"text": "Latent Dirichlet Allocation (LDA) is a generative model describing the observed data as being composed of a mixture of underlying unobserved topics, as introduced by Blei et al. (2003). A key hyperparameter of LDA is the number of underlying topics k, which must be estimated empirically in practice. Selecting the appropriate value of k is essentially selecting the correct model to represent the data; an important issue concerning the goodness of fit. We examine in the current work a series of metrics from literature on a quantitative basis by performing benchmarks against a generated dataset with a known value of k and evaluate the ability of each metric to recover the true value, varying over multiple levels of topic resolution in the Dirichlet prior distributions. Finally, we introduce a new metric and heuristic for estimating k and demonstrate improved performance over existing metrics from the literature on several benchmarks.",
"title": ""
},
{
"docid": "f03cc92b0bc69845b9f2b6c0c6f3168b",
"text": "Relational database management systems (RDBMSs) are powerful because they are able to optimize and answer queries against any relational database. A natural language interface (NLI) for a database, on the other hand, is tailored to support that specific database. In this work, we introduce a general purpose transfer-learnable NLI with the goal of learning one model that can be used as NLI for any relational database. We adopt the data management principle of separating data and its schema, but with the additional support for the idiosyncrasy and complexity of natural languages. Specifically, we introduce an automatic annotation mechanism that separates the schema and the data, where the schema also covers knowledge about natural language. Furthermore, we propose a customized sequence model that translates annotated natural language queries to SQL statements. We show in experiments that our approach outperforms previous NLI methods on the WikiSQL dataset and the model we learned can be applied to another benchmark dataset OVERNIGHT without retraining.",
"title": ""
},
{
"docid": "6c7172b5c91601646a7cdc502c88d22f",
"text": "In this paper, a number of options and issues are illustrated which companies and organizations seeking to incorporate environmental issues in product design and realization should consider. A brief overview and classification of a number of approaches for reducing the environmental impact is given, as well as their organizational impact. General characteristics, representative examples, and integration and information management issues of design tools supporting environmentally conscious product design are provided as well. 1 From Design for Manufacture to Design for the Life Cycle and Beyond One can argue that the “good old days” where a product was being designed, manufactured and sold to the customer with little or no subsequent concern are over. In the seventies, with the emergence of life-cycle engineering and concurrent engineering in the United States, companies became more aware of the need to include serviceability and maintenance issues in their design processes. A formal definition for Concurrent Engineering is given in (Winner, et al., 1988), as “a systematic approach to the integrated, concurrent design of products and their related processes, including manufacturing and support. This approach is intended to cause the developers, from the outset, to consider all elements of the product life cycle from conception through disposal, including quality, cost, schedule, and user requirements.” Although concurrent engineering seems to span the entire life-cycle of a product according to the preceding definition, its traditional focus has been on design, manufacturing, and maintenance. Perhaps one of the most striking areas where companies now have to be concerned is with the environment. The concern regarding environmental impact stems from the fact that, whether we want it or not, all our products affect in some way our environment during their life-span. In Figure 1, a schematic representation of a system’s life-cycle is given. Materials are mined from the earth, air and sea, processed into products, and distributed to consumers for usage, as represented by the flow from left to right in the top half of Figure 1.",
"title": ""
},
{
"docid": "962a653490e8afbcf13c47426c85ecec",
"text": "Alzheimer’s disease (AD) and mild cognitive impairment (MCI) are the most prevalent neurodegenerative brain diseases in elderly population. Recent studies on medical imaging and biological data have shown morphological alterations of subcortical structures in patients with these pathologies. In this work, we take advantage of these structural deformations for classification purposes. First, triangulated surface meshes are extracted from segmented hippocampus structures in MRI and point-to-point correspondences are established among population of surfaces using a spectral matching method. Then, a deep learning variational auto-encoder is applied on the vertex coordinates of the mesh models to learn the low dimensional feature representation. A multi-layer perceptrons using softmax activation is trained simultaneously to classify Alzheimer’s patients from normal subjects. Experiments on ADNI dataset demonstrate the potential of the proposed method in classification of normal individuals from early MCI (EMCI), late MCI (LMCI), and AD subjects with classification rates outperforming standard SVM based approach.",
"title": ""
},
{
"docid": "7ab232fbbda235c42e0dabb2b128ed59",
"text": "Learning visual representations from web data has recently attracted attention for object recognition. Previous studies have mainly focused on overcoming label noise and data bias and have shown promising results by learning directly from web data. However, we argue that it might be better to transfer knowledge from existing human labeling resources to improve performance at nearly no additional cost. In this paper, we propose a new semi-supervised method for learning via web data. Our method has the unique design of exploiting strong supervision, i.e., in addition to standard image-level labels, our method also utilizes detailed annotations including object bounding boxes and part landmarks. By transferring as much knowledge as possible from existing strongly supervised datasets to weakly supervised web images, our method can benefit from sophisticated object recognition algorithms and overcome several typical problems found in webly-supervised learning. We consider the problem of fine-grained visual categorization, in which existing training resources are scarce, as our main research objective. Comprehensive experimentation and extensive analysis demonstrate encouraging performance of the proposed approach, which, at the same time, delivers a new pipeline for fine-grained visual categorization that is likely to be highly effective for real-world applications.",
"title": ""
},
{
"docid": "4b012d1dc18f18118a73488e934eff4d",
"text": "In most cases authors are permitted to post their version of the article (e.g. in Word or Tex form) to their personal website or institutional repository. Authors requiring further information regarding Elsevier's archiving and manuscript policies are encouraged to visit: s u m m a r y Current drought information is based on indices that do not capture the joint behaviors of hydrologic variables. To address this limitation, the potential of copulas in characterizing droughts from multiple variables is explored in this study. Starting from the standardized index (SI) algorithm, a modified index accounting for seasonality is proposed for precipitation and streamflow marginals. Utilizing Indiana stations with long-term observations (a minimum of 80 years for precipitation and 50 years for streamflow), the dependence structures of precipitation and streamflow marginals with various window sizes from 1-to 12-months are constructed from empirical copulas. A joint deficit index (JDI) is defined by using the distribution function of copulas. This index provides a probability-based description of the overall drought status. Not only is the proposed JDI able to reflect both emerging and prolonged droughts in a timely manner, it also allows a month-by-month drought assessment such that the required amount of precipitation for achieving normal conditions in future can be computed. The use of JDI is generalizable to other hydrologic variables as evidenced by similar drought severities gleaned from JDIs constructed separately from precipitation and streamflow data. JDI further allows the construction of an inter-variable drought index, where the entire dependence structure of precipitation and streamflow marginals is preserved. Introduction Drought, as a prolonged status of water deficit, has been a challenging topic in water resources management. It is perceived as one of the most expensive and least understood natural disasters. In monetary terms, a typical drought costs American farmers and businesses $6–8 billion each year (WGA, 2004), more than damages incurred from floods and hurricanes. The consequences tend to be more severe in areas such as the mid-western part of the United States, where agriculture is the major economic driver. Unfortunately , though there is a strong need to develop an algorithm for characterizing and predicting droughts, it cannot be achieved easily either through physical or statistical analyses. The main obstacles are identification of complex drought-causing mechanisms, and lack of a precise (universal) scientific definition for droughts. When a drought event occurs, moisture deficits are observed in many hydrologic variables, such as precipitation, …",
"title": ""
},
{
"docid": "8ed247a04a8e5ab201807e0d300135a3",
"text": "We reproduce the Structurally Constrained Recurrent Network (SCRN) model, and then regularize it using the existing widespread techniques, such as naïve dropout, variational dropout, and weight tying. We show that when regularized and optimized appropriately the SCRN model can achieve performance comparable with the ubiquitous LSTMmodel in language modeling task on English data, while outperforming it on non-English data. Title and Abstract in Russian Воспроизведение и регуляризация SCRN модели Мы воспроизводим структурно ограниченную рекуррентную сеть (SCRN), а затем добавляем регуляризацию, используя существующие широко распространенные методы, такие как исключение (дропаут), вариационное исключение и связка параметров. Мы показываем, что при правильной регуляризации и оптимизации показатели SCRN сопоставимы с показателями вездесущей LSTM в задаче языкового моделирования на английских текстах, а также превосходят их на неанглийских данных.",
"title": ""
},
{
"docid": "b518deb76d6a59f6b88d58b563100f4b",
"text": "As part of the 50th anniversary of the Canadian Operational Research Society, we reviewed queueing applications by Canadian researchers and practitioners. We concentrated on finding real applications, but also considered theoretical contributions to applied areas that have been developed by the authors based on real applications. There were a surprising number of applications, many not well documented. Thus, this paper features examples of queueing theory applications over a spectrum of areas, years and types. One conclusion is that some of the successful queueing applications were achieved and ameliorated by using simple principles gained from studying queues and not by complex mathematical models.",
"title": ""
},
{
"docid": "f9692d0410cb97fd9c2ecf6f7b043b9f",
"text": "This paper develops and analyzes four energy scenarios for California that are both exploratory and quantitative. The businessas-usual scenario represents a pathway guided by outcomes and expectations emerging from California’s energy crisis. Three alternative scenarios represent contexts where clean energy plays a greater role in California’s energy system: Split Public is driven by local and individual activities; Golden State gives importance to integrated state planning; Patriotic Energy represents a national drive to increase energy independence. Future energy consumption, composition of electricity generation, energy diversity, and greenhouse gas emissions are analyzed for each scenario through 2035. Energy savings, renewable energy, and transportation activities are identified as promising opportunities for achieving alternative energy pathways in California. A combined approach that brings together individual and community activities with state and national policies leads to the largest energy savings, increases in energy diversity, and reductions in greenhouse gas emissions. Critical challenges in California’s energy pathway over the next decades identified by the scenario analysis include dominance of the transportation sector, dependence on fossil fuels, emissions of greenhouse gases, accounting for electricity imports, and diversity of the electricity sector. The paper concludes with a set of policy lessons revealed from the California energy scenarios. r 2003 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
fe8fcd0de803e1e871c46dae2508eb8d
|
Experiments with SVM to classify opinions in different domains
|
[
{
"docid": "095dbdc1ac804487235cdd0aeffe8233",
"text": "Sentiment analysis is the task of identifying whether the opinion expressed in a document is positive or negative about a given topic. Unfortunately, many of the potential applications of sentiment analysis are currently infeasible due to the huge number of features found in standard corpora. In this paper we systematically evaluate a range of feature selectors and feature weights with both Naı̈ve Bayes and Support Vector Machine classifiers. This includes the introduction of two new feature selection methods and three new feature weighting methods. Our results show that it is possible to maintain a state-of-the art classification accuracy of 87.15% while using less than 36% of the features.",
"title": ""
},
{
"docid": "8a7ea746acbfd004d03d4918953d283a",
"text": "Sentiment analysis is an important current research area. This paper combines rule-based classification, supervised learning andmachine learning into a new combinedmethod. Thismethod is tested onmovie reviews, product reviews and MySpace comments. The results show that a hybrid classification can improve the classification effectiveness in terms of microand macro-averaged F1. F1 is a measure that takes both the precision and recall of a classifier’s effectiveness into account. In addition, we propose a semi-automatic, complementary approach in which each classifier can contribute to other classifiers to achieve a good level of effectiveness.",
"title": ""
}
] |
[
{
"docid": "e5d107b5f81d9cd1b6d5ac58339cc427",
"text": "While one of the first steps in many NLP systems is selecting what embeddings to use, we argue that such a step is better left for neural networks to figure out by themselves. To that end, we introduce a novel, straightforward yet highly effective method for combining multiple types of word embeddings in a single model, leading to state-of-the-art performance within the same model class on a variety of tasks. We subsequently show how the technique can be used to shed new insight into the usage of word embeddings in NLP systems.",
"title": ""
},
{
"docid": "258655a00ea8acde4e2bde42376c1ead",
"text": "A main puzzle of deep networks revolves around the absence of overfitting despite large overparametrization and despite the large capacity demonstrated by zero training error on randomly labeled data. In this note, we show that the dynamics associated to gradient descent minimization of nonlinear networks is topologically equivalent, near the asymptotically stable minima of the empirical error, to linear gradient system in a quadratic potential with a degenerate (for square loss) or almost degenerate (for logistic or crossentropy loss) Hessian. The proposition depends on the qualitative theory of dynamical systems and is supported by numerical results. Our main propositions extend to deep nonlinear networks two properties of gradient descent for linear networks, that have been recently established (1) to be key to their generalization properties: 1. Gradient descent enforces a form of implicit regularization controlled by the number of iterations, and asymptotically converges to the minimum norm solution for appropriate initial conditions of gradient descent. This implies that there is usually an optimum early stopping that avoids overfitting of the loss. This property, valid for the square loss and many other loss functions, is relevant especially for regression. 2. For classification, the asymptotic convergence to the minimum norm solution implies convergence to the maximum margin solution which guarantees good classification error for “low noise” datasets. This property holds for loss functions such as the logistic and cross-entropy loss independently of the initial conditions. The robustness to overparametrization has suggestive implications for the robustness of the architecture of deep convolutional networks with respect to the curse of dimensionality. This work was supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF 1231216. 1 ar X iv :1 80 1. 00 17 3v 2 [ cs .L G ] 1 6 Ja n 20 18",
"title": ""
},
{
"docid": "7c171e744df03df658c02e899e197bd4",
"text": "In rodent models, acoustic exposure too modest to elevate hearing thresholds can nonetheless cause auditory nerve fiber deafferentation, interfering with the coding of supra-threshold sound. Low-spontaneous rate nerve fibers, important for encoding acoustic information at supra-threshold levels and in noise, are more susceptible to degeneration than high-spontaneous rate fibers. The change in auditory brainstem response (ABR) wave-V latency with noise level has been shown to be associated with auditory nerve deafferentation. Here, we measured ABR in a forward masking paradigm and evaluated wave-V latency changes with increasing masker-to-probe intervals. In the same listeners, behavioral forward masking detection thresholds were measured. We hypothesized that 1) auditory nerve fiber deafferentation increases forward masking thresholds and increases wave-V latency and 2) a preferential loss of low-spontaneous rate fibers results in a faster recovery of wave-V latency as the slow contribution of these fibers is reduced. Results showed that in young audiometrically normal listeners, a larger change in wave-V latency with increasing masker-to-probe interval was related to a greater effect of a preceding masker behaviorally. Further, the amount of wave-V latency change with masker-to-probe interval was positively correlated with the rate of change in forward masking detection thresholds. Although we cannot rule out central contributions, these findings are consistent with the hypothesis that auditory nerve fiber deafferentation occurs in humans and may predict how well individuals can hear in noisy environments.",
"title": ""
},
{
"docid": "c12d27988e70e9b3e6987ca2f0ca8bca",
"text": "In this tutorial, we introduce the basic theory behind Stega nography and Steganalysis, and present some recent algorithms and devel opm nts of these fields. We show how the existing techniques used nowadays are relate d to Image Processing and Computer Vision, point out several trendy applicati ons of Steganography and Steganalysis, and list a few great research opportunities j ust waiting to be addressed.",
"title": ""
},
{
"docid": "305f877227516eded75819bdf48ab26d",
"text": "Deep generative models have been successfully applied to many applications. However, existing works experience limitations when generating large images (the literature usually generates small images, e.g. 32× 32 or 128× 128). In this paper, we propose a novel scheme, called deep tensor adversarial generative nets (TGAN), that generates large high-quality images by exploring tensor structures. Essentially, the adversarial process of TGAN takes place in a tensor space. First, we impose tensor structures for concise image representation, which is superior in capturing the pixel proximity information and the spatial patterns of elementary objects in images, over the vectorization preprocess in existing works. Secondly, we propose TGAN that integrates deep convolutional generative adversarial networks and tensor super-resolution in a cascading manner, to generate high-quality images from random distributions. More specifically, we design a tensor super-resolution process that consists of tensor dictionary learning and tensor coefficients learning. Finally, on three datasets, the proposed TGAN generates images with more realistic textures, compared with state-of-the-art adversarial autoencoders. The size of the generated images is increased by over 8.5 times, namely 374× 374 in PASCAL2.",
"title": ""
},
{
"docid": "353500d18d56c0bf6dc13627b0517f41",
"text": "In order to accelerate the learning process in high dimensional reinforcement learning problems, TD methods such as Q-learning and Sarsa are usually combined with eligibility traces. The recently introduced DQN (Deep Q-Network) algorithm, which is a combination of Q-learning with a deep neural network, has achieved good performance on several games in the Atari 2600 domain. However, the DQN training is very slow and requires too many time steps to converge. In this paper, we use the eligibility traces mechanism and propose the deep Q(λ) network algorithm. The proposed method provides faster learning in comparison with the DQN method. Empirical results on a range of games show that the deep Q(λ) network significantly reduces learning time.",
"title": ""
},
{
"docid": "62f5640954e5b731f82599fb52ea816f",
"text": "This paper presents an energy-balance control strategy for a cascaded single-phase grid-connected H-bridge multilevel inverter linking n independent photovoltaic (PV) arrays to the grid. The control scheme is based on an energy-sampled data model of the PV system and enables the design of a voltage loop linear discrete controller for each array, ensuring the stability of the system for the whole range of PV array operating conditions. The control design is adapted to phase-shifted and level-shifted carrier pulsewidth modulations to share the control action among the cascade-connected bridges in order to concurrently synthesize a multilevel waveform and to keep each of the PV arrays at its maximum power operating point. Experimental results carried out on a seven-level inverter are included to validate the proposed approach.",
"title": ""
},
{
"docid": "0d0fd1c837b5e45b83ee590017716021",
"text": "General intelligence and personality traits from the Five-Factor model were studied as predictors of academic achievement in a large sample of Estonian schoolchildren from elementary to secondary school. A total of 3618 students (1746 boys and 1872 girls) from all over Estonia attending Grades 2, 3, 4, 6, 8, 10, and 12 participated in this study. Intelligence, as measured by the Raven’s Standard Progressive Matrices, was found to be the best predictor of students’ grade point average (GPA) in all grades. Among personality traits (measured by self-reports on the Estonian Big Five Questionnaire for Children in Grades 2 to 4 and by the NEO Five Factor Inventory in Grades 6 to 12), Openness, Agreeableness, and Conscientiousness correlated positively and Neuroticism correlated negatively with GPA in almost every grade. When all measured variables were entered together into a regression model, intelligence was still the strongest predictor of GPA, being followed by Agreeableness in Grades 2 to 4 and Conscientiousness in Grades 6 to 12. Interactions between predictor variables and age accounted for only a small percentage of variance in GPA, suggesting that academic achievement relies basically on the same mechanisms through the school years. 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "e9cc899155bd5f88ae1a3d5b88de52af",
"text": "This article reviews research evidence showing to what extent the chronic care model can improve the management of chronic conditions (using diabetes as an example) and reduce health care costs. Thirty-two of 39 studies found that interventions based on chronic care model components improved at least 1 process or outcome measure for diabetic patients. Regarding whether chronic care model interventions can reduce costs, 18 of 27 studies concerned with 3 examples of chronic conditions (congestive heart failure, asthma, and diabetes) demonstrated reduced health care costs or lower use of health care services. Even though the chronic care model has the potential to improve care and reduce costs, several obstacles hinder its widespread adoption.",
"title": ""
},
{
"docid": "2d998d0e0966acf04dfe377cde35aafa",
"text": "This paper proposes a generalization of the multi- Bernoulli filter called the labeled multi-Bernoulli filter that outputs target tracks. Moreover, the labeled multi-Bernoulli filter does not exhibit a cardinality bias due to a more accurate update approximation compared to the multi-Bernoulli filter by exploiting the conjugate prior form for labeled Random Finite Sets. The proposed filter can be interpreted as an efficient approximation of the δ-Generalized Labeled Multi-Bernoulli filter. It inherits the advantages of the multi-Bernoulli filter in regards to particle implementation and state estimation. It also inherits advantages of the δ-Generalized Labeled Multi-Bernoulli filter in that it outputs (labeled) target tracks and achieves better performance.",
"title": ""
},
{
"docid": "855a8cfdd9d01cd65fe32d18b9be4fdf",
"text": "Interest in business intelligence and analytics education has begun to attract IS scholars’ attention. In order to discover new research questions, there is a need for conducting a literature review of extant studies on BI&A education. This study identified 44 research papers through using Google Scholar related to BI&A education. This research contributes to the field of BI&A education by (a) categorizing the existing studies on BI&A education into the key five research foci, and (b) identifying the research gaps and providing the guide for future BI&A and IS research.",
"title": ""
},
{
"docid": "d8c40ed2d2b2970412cc8404576d0c80",
"text": "In this paper an adaptive control technique combined with the so-called IDA-PBC (Interconnexion Damping Assignment, Passivity Based Control) controller is proposed for the stabilization of a class of underactuated mechanical systems, namely, the Inertia Wheel Inverted Pendulum (IWIP). It has two degrees of freedom with one actuator. The IDA-PBC stabilizes for all initial conditions (except a set of zeros measure) the upward position of the IWIP. The efficiency of this controller depends on the tuning of several gains. Motivated by this issue we propose to automatically adapt some of these gains in order to regain performance rapidly. The effectiveness of the proposed adaptive scheme is demonstrated through numerical simulations and experimental results.",
"title": ""
},
{
"docid": "1073c1f4013f6c57259502391d75d356",
"text": "A long-standing dream of Artificial Intelligence (AI) has pursued to enrich computer programs with commonsense knowledge enabling machines to reason about our world. This paper offers a new practical insight towards the automation of commonsense reasoning with first-order logic (FOL) ontologies. We propose a new black-box testing methodology of FOL SUMO-based ontologies by exploiting WordNet and its mapping into SUMO. Our proposal includes a method for the (semi-)automatic creation of a very large set of tests and a procedure for its automated evaluation by using automated theorem provers (ATPs). Applying our testing proposal, we are able to successfully evaluate a) the competency of several translations of SUMO into FOL and b) the performance of various automated ATPs. In addition, we are also able to evaluate the resulting set of tests according to different quality criteria.",
"title": ""
},
{
"docid": "1053359e8374c47d4645c5609ffafaee",
"text": "In this paper, we derive a new infinite series representation for the trivariate non-central chi-squared distribution when the underlying correlated Gaussian variables have tridiagonal form of inverse covariance matrix. We make use of the Miller's approach and the Dougall's identity to derive the joint density function. Moreover, the trivariate cumulative distribution function (cdf) and characteristic function (chf) are also derived. Finally, bivariate noncentral chi-squared distribution and some known forms are shown to be special cases of the more general distribution. However, non-central chi-squared distribution for an arbitrary covariance matrix seems intractable with the Miller's approach.",
"title": ""
},
{
"docid": "31e8d60af8a1f9576d28c4c1e0a3db86",
"text": "Management of bulk sensor data is one of the challenging problems in the development of Internet of Things (IoT) applications. High volume of sensor data induces for optimal implementation of appropriate sensor data compression technique to deal with the problem of energy-efficient transmission, storage space optimization for tiny sensor devices, and cost-effective sensor analytics. The compression performance to realize significant gain in processing high volume sensor data cannot be attained by conventional lossy compression methods, which are less likely to exploit the intrinsic unique contextual characteristics of sensor data. In this paper, we propose SensCompr, a dynamic lossy compression method specific for sensor datasets and it is easily realizable with standard compression methods. Senscompr leverages robust statistical and information theoretic techniques and does not require specific physical modeling. It is an information-centric approach that exhaustively analyzes the inherent properties of sensor data for extracting the embedded useful information content and accordingly adapts the parameters of compression scheme to maximize compression gain while optimizing information loss. Senscompr is successfully applied to compress large sets of heterogeneous real sensor datasets like ECG, EEG, smart meter, accelerometer. To the best of our knowledge, for the first time 'sensor information content'-centric dynamic compression technique is proposed and implemented particularly for IoT-applications and this method is independent to sensor data types.",
"title": ""
},
{
"docid": "c69a480600fea74dab84290e6c0e2204",
"text": "Mobile cloud computing is computing of Mobile application through cloud. As we know market of mobile phones is growing rapidly. According to IDC, the premier global market intelligence firm, the worldwide Smartphone market grew 42. 5% year over year in the first quarter of 2012. With the growing demand of Smartphone the demand for fast computation is also growing. Inspite of comparatively more processing power and storage capability of Smartphone's, they still lag behind Personal Computers in meeting processing and storage demands of high end applications like speech recognition, security software, gaming, health services etc. Mobile cloud computing is an answer to intensive processing and storage demand of real-time and high end applications. Being in nascent stage, Mobile Cloud Computing has privacy and security issues which deter the users from adopting this technology. This review paper throws light on privacy and security issues of Mobile Cloud Computing.",
"title": ""
},
{
"docid": "83f1fc22d029b3a424afcda770a5af23",
"text": "Three species of Xerolycosa: Xerolycosa nemoralis (Westring, 1861), Xerolycosa miniata (C.L. Koch, 1834) and Xerolycosa mongolica (Schenkel, 1963), occurring in the Palaearctic Region are surveyed, illustrated and redescribed. Arctosa mongolica Schenkel, 1963 is removed from synonymy with Xerolycosa nemoralis and transferred to Xerolycosa, and the new combination Xerolycosa mongolica (Schenkel, 1963) comb. n. is established. One new synonymy, Xerolycosa undulata Chen, Song et Kim, 1998 syn.n. from Heilongjiang = Xerolycosa mongolica (Schenkel, 1963), is proposed. In addition, one more new combination is established, Trochosa pelengena (Roewer, 1960) comb. n., ex Xerolycosa.",
"title": ""
},
{
"docid": "e9bd226d50c9a6633c32b9162cbd14f4",
"text": "PURPOSE\nTo report clinical features and treatment outcomes of ocular juvenile xanthogranuloma (JXG).\n\n\nDESIGN\nRetrospective case series.\n\n\nPARTICIPANTS\nThere were 32 tumors in 31 eyes of 30 patients with ocular JXG.\n\n\nMETHODS\nReview of medical records.\n\n\nMAIN OUTCOME MEASURES\nTumor control, intraocular pressure (IOP), and visual acuity.\n\n\nRESULTS\nThe mean patient age at presentation was 51 months (median, 15 months; range, 1-443 months). Eye redness (12/30, 40%) and hyphema (4/30, 13%) were the most common presenting symptoms. Cutaneous JXG was concurrently present in 3 patients (3/30, 10%), and spinal JXG was present in 1 patient (1/30, 3%). The ocular tissue affected by JXG included the iris (21/31, 68%), conjunctiva (6/31, 19%), eyelid (2/31, 6%), choroid (2/31, 6%), and orbit (1/31, 3%). Those with iris JXG presented at a median age of 13 months compared with 30 months for those with conjunctival JXG. In the iris JXG group, mean IOP was 19 mmHg (median, 18 mmHg; range, 11-30 mmHg) and hyphema was noted in 8 eyes (8/21, 38%). The iris tumor was nodular (16/21, 76%) or diffuse (5/21, 24%). Fine-needle aspiration biopsy was used in 10 cases and confirmed JXG cytologically in all cases. The iris lesion was treated with topical (18/21, 86%) and/or periocular (4/21, 19%) corticosteroids. The eyelid, conjunctiva, and orbital JXG were treated with excisional biopsy in 5 patients (5/9, 56%), topical corticosteroids in 2 patients (2/9, 22%), and observation in 2 patients (2/9, 22%). Of 28 patients with a mean follow-up of 15 months (median, 6 months; range, 1-68 months), tumor regression was achieved in all cases, without recurrence. Two patients were lost to follow-up. Upon follow-up of the iris JXG group, visual acuity was stable or improved (18/19 patients, 95%) and IOP was controlled long-term without medication (14/21 patients, 74%). No eyes were managed with enucleation.\n\n\nCONCLUSIONS\nOcular JXG preferentially affects the iris and is often isolated without cutaneous involvement. Iris JXG responds to topical or periocular corticosteroids, often with stabilization or improvement of vision and IOP.",
"title": ""
},
{
"docid": "1e7721225d84896a72f2ea790570ecbd",
"text": "We have developed a Blumlein line pulse generator which utilizes the superposition of electrical pulses launched from two individually switched pulse forming lines. By using a fast power MOSFET as a switch on each end of the Blumlein line, we were able to generate pulses with amplitudes of 1 kV across a 100-Omega load. Pulse duration and polarity can be controlled by the temporal delay in the triggering of the two switches. In addition, the use of identical switches allows us to overcome pulse distortions arising from the use of non-ideal switches in the traditional Blumlein configuration. With this pulse generator, pulses with durations between 8 and 300 ns were applied to Jurkat cells (a leukemia cell line) to investigate the pulse dependent increase in calcium levels. The development of the calcium levels in individual cells was studied by spinning-disc confocal fluorescent microscopy with the calcium indicator, fluo-4. With this fast imaging system, fluorescence changes, representing calcium mobilization, could be resolved with an exposure of 5 ms every 18 ms. For a 60-ns pulse duration, each rise in intracellular calcium was greater as the electric field strength was increased from 25 kV/cm to 100 kV/cm. Only for the highest electric field strength is the response dependent on the presence of extracellular calcium. The results complement ion-exchange mechanisms previously observed during the charging of cellular membranes, which were suggested by observations of membrane potential changes during exposure.",
"title": ""
},
{
"docid": "3348e5aaa5f610f47e11f58aa1094d4d",
"text": "Accountability has emerged as a critical concept related to data protection in cloud ecosystems. It is necessary to maintain chains of accountability across cloud ecosystems. This is to enhance the confidence in the trust that cloud actors have while operating in the cloud. This paper is concerned with accountability in the cloud. It presents a conceptual model, consisting of attributes, practices and mechanisms for accountability in the cloud. The proposed model allows us to explain, in terms of accountability attributes, cloud-mediated interactions between actors. This forms the basis for characterizing accountability relationships between cloud actors, and hence chains of accountability in cloud ecosystems.",
"title": ""
}
] |
scidocsrr
|
afa4b96604b51dfd4b8c09d1433f174b
|
ACOUSTIC SCENE CLASSIFICATION USING PARALLEL COMBINATION OF LSTM AND
|
[
{
"docid": "afee419227629f8044b5eb0addd65ce3",
"text": "Both Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) have shown improvements over Deep Neural Networks (DNNs) across a wide variety of speech recognition tasks. CNNs, LSTMs and DNNs are complementary in their modeling capabilities, as CNNs are good at reducing frequency variations, LSTMs are good at temporal modeling, and DNNs are appropriate for mapping features to a more separable space. In this paper, we take advantage of the complementarity of CNNs, LSTMs and DNNs by combining them into one unified architecture. We explore the proposed architecture, which we call CLDNN, on a variety of large vocabulary tasks, varying from 200 to 2,000 hours. We find that the CLDNN provides a 4-6% relative improvement in WER over an LSTM, the strongest of the three individual models.",
"title": ""
},
{
"docid": "29c91c8d6f7faed5d23126482a2f553b",
"text": "In this article, we present an account of the state of the art in acoustic scene classification (ASC), the task of classifying environments from the sounds they produce. Starting from a historical review of previous research in this area, we define a general framework for ASC and present different implementations of its components. We then describe a range of different algorithms submitted for a data challenge that was held to provide a general and fair benchmark for ASC techniques. The data set recorded for this purpose is presented along with the performance metrics that are used to evaluate the algorithms and statistical significance tests to compare the submitted methods.",
"title": ""
},
{
"docid": "927afdfa9f14c96a034d78be03936ff8",
"text": "Multimedia event detection (MED) is the task of detecting given events (e.g. birthday party, making a sandwich) in a large collection of video clips. While visual features and automatic speech recognition typically provide the best features for this task, nonspeech audio can also contribute useful information, such as crowds cheering, engine noises, or animal sounds. MED is typically formulated as a two-stage process: the first stage generates clip-level feature representations, often by aggregating frame-level features; the second stage performs binary or multi-class classification to decide whether a given event occurs in a video clip. Both stages are usually performed \"statically\", i.e. using only local temporal information, or bag-of-words models. In this paper, we introduce longer-range temporal information with deep recurrent neural networks (RNNs) for both stages. We classify each audio frame among a set of semantic units called \"noisemes\" the sequence of frame-level confidence distributions is used as a variable-length clip-level representation. Such confidence vector sequences are then fed into long short-term memory (LSTM) networks for clip-level classification. We observe improvements in both frame-level and clip-level performance compared to SVM and feed-forward neural network baselines.",
"title": ""
},
{
"docid": "6af09f57f2fcced0117dca9051917a0d",
"text": "We present a novel per-dimension learning rate method for gradient descent called ADADELTA. The method dynamically adapts over time using only first order information and has minimal computational overhead beyond vanilla stochastic gradient descent. The method requires no manual tuning of a learning rate and appears robust to noisy gradient information, different model architecture choices, various data modalities and selection of hyperparameters. We show promising results compared to other methods on the MNIST digit classification task using a single machine and on a large scale voice dataset in a distributed cluster environment.",
"title": ""
}
] |
[
{
"docid": "02156199912027e9230b3c000bcbe87b",
"text": "Voice conversion (VC) using sequence-to-sequence learning of context posterior probabilities is proposed. Conventional VC using shared context posterior probabilities predicts target speech parameters from the context posterior probabilities estimated from the source speech parameters. Although conventional VC can be built from non-parallel data, it is difficult to convert speaker individuality such as phonetic property and speaking rate contained in the posterior probabilities because the source posterior probabilities are directly used for predicting target speech parameters. In this work, we assume that the training data partly include parallel speech data and propose sequence-to-sequence learning between the source and target posterior probabilities. The conversion models perform non-linear and variable-length transformation from the source probability sequence to the target one. Further, we propose a joint training algorithm for the modules. In contrast to conventional VC, which separately trains the speech recognition that estimates posterior probabilities and the speech synthesis that predicts target speech parameters, our proposed method jointly trains these modules along with the proposed probability conversion modules. Experimental results demonstrate that our approach outperforms the conventional VC.",
"title": ""
},
{
"docid": "e648fb690dae270c4e63442a49aacaa9",
"text": "It is argued that the concept of free will, like the concept of truth in formal languages, requires a separation between an object level and a meta-level for being consistently defined. The Jamesian two-stage model, which deconstructs free will into the causally open “free” stage with its closure in the “will” stage, is implicitly a move in this direction. However, to avoid the dilemma of determinism, free will additionally requires an infinite regress of causal meta-stages, making free choice a hypertask. We use this model to define free will of the rationalist-compatibilist type. This is shown to provide a natural three-way distinction between quantum indeterminism, freedom and free will, applicable respectively to artificial intelligence (AI), animal agents and human agents. We propose that the causal hierarchy in our model corresponds to a hierarchy of Turing uncomputability. Possible neurobiological and behavioral tests to demonstrate free will experimentally are suggested. Ramifications of the model for physics, evolutionary biology, neuroscience, neuropathological medicine and moral philosophy are briefly outlined.",
"title": ""
},
{
"docid": "2e3c1fc6daa33ee3a4dc3fe1e11a3c21",
"text": "Cloud computing technologies have matured enough that the service providers are compelled to migrate their services to virtualized infrastructure in cloud data centers. However, moving the computation and network to shared physical infrastructure poses a multitude of questions, both for service providers and for data center owners. In this work, we propose HyViDE - a framework for optimal placement of multiple virtual data center networks on a physical data center network. HyViDE preselects a subset of virtual data center network requests and uses a hybrid strategy for embedding them on the physical data center. Coordinated static and dynamic embedding algorithms are used in this hybrid framework to minimize the rejection of requests and fulfill QoS demands of the embedded networks. HyViDE can employ suitable static and dynamic strategies to meet the objectives of data center owners and customers. Experimental evaluation of our algorithms on HyViDE shows that, the acceptance rate is high with faster servicing of requests.",
"title": ""
},
{
"docid": "12ee85d0fa899e4e864bc1c30dedcd22",
"text": "An object-oriented simulation (OOS) consists of a set of objects that interact with each other over time. This paper provides a thorough introduction to OOS, addresses the important issue of composition versus inheritance, describes frames and frameworks for OOS, and presents an example of a network simulation language as an illustration of OOS.",
"title": ""
},
{
"docid": "62ee277e32395dd9d5883e3160d2cf7a",
"text": "Though impressive results have been achieved in visual captioning, the task of generating abstract stories from photo streams is still a little-tapped problem. Different from captions, stories have more expressive language styles and contain many imaginary concepts that do not appear in the images. Thus it poses challenges to behavioral cloning algorithms. Furthermore, due to the limitations of automatic metrics on evaluating story quality, reinforcement learning methods with hand-crafted rewards also face difficulties in gaining an overall performance boost. Therefore, we propose an Adversarial REward Learning (AREL) framework to learn an implicit reward function from human demonstrations, and then optimize policy search with the learned reward function. Though automatic evaluation indicates slight performance boost over state-of-the-art (SOTA) methods in cloning expert behaviors, human evaluation shows that our approach achieves significant improvement in generating more human-like stories than SOTA systems. Code will be made available here1.",
"title": ""
},
{
"docid": "3003c878b36fa5c7be329cd3bb226dea",
"text": "We decompose the evidence lower bound to show the existence of a term measuring the total correlation between latent variables. We use this to motivate the β-TCVAE (Total Correlation Variational Autoencoder) algorithm, a refinement and plug-in replacement of the β-VAE for learning disentangled representations, requiring no additional hyperparameters during training. We further propose a principled classifier-free measure of disentanglement called the mutual information gap (MIG). We perform extensive quantitative and qualitative experiments, in both restricted and non-restricted settings, and show a strong relation between total correlation and disentanglement, when the model is trained using our framework. Learning disentangled representations without supervision is a difficult open problem. Disentangled variables are generally considered to contain interpretable semantic information and reflect separate factors of variation in the data. While the definition of disentanglement is open to debate, many believe a factorial representation, one with statistically independent variables, is a good starting point [1, 2, 3]. Such representations distill information into a compact form which is oftentimes semantically meaningful and useful for a variety of tasks [2, 4]. For instance, it is found that such representations are more generalizable and robust against adversarial attacks [5]. Many state-of-the-art methods for learning disentangled representations are based on re-weighting parts of an existing objective. For instance, it is claimed that mutual information between latent variables and the observed data can encourage the latents into becoming more interpretable [6]. It is also argued that encouraging independence between latent variables induces disentanglement [7]. However, there is no strong evidence linking factorial representations to disentanglement. In part, this can be attributed to weak qualitative evaluation procedures. While traversals in the latent representation can qualitatively illustrate disentanglement, quantitative measures of disentanglement are in their infancy. In this paper, we: • show a decomposition of the variational lower bound that can be used to explain the success of the β-VAE [7] in learning disentangled representations. • propose a simple method based on weighted minibatches to stochastically train with arbitrary weights on the terms of our decomposition without any additional hyperparameters. • introduce the β-TCVAE, which can be used as a plug-in replacement for the β-VAE with no extra hyperparameters. Empirical evaluations suggest that the β-TCVAE discovers more interpretable representations than existing methods, while also being fairly robust to random initialization. • propose a new information-theoretic disentanglement metric, which is classifier-free and generalizable to arbitrarily-distributed and non-scalar latent variables. While Kim & Mnih [8] have independently proposed augmenting VAEs with an equivalent total correlation penalty to the β-TCVAE, their proposed training method differs from ours and requires an auxiliary discriminator network. 32nd Conference on Neural Information Processing Systems (NIPS 2018), Montréal, Canada.",
"title": ""
},
{
"docid": "32fd7a91091f74a5ea55226aa44403d3",
"text": "Previous research has shown that patients with schizophrenia are impaired in reinforcement learning tasks. However, behavioral learning curves in such tasks originate from the interaction of multiple neural processes, including the basal ganglia- and dopamine-dependent reinforcement learning (RL) system, but also prefrontal cortex-dependent cognitive strategies involving working memory (WM). Thus, it is unclear which specific system induces impairments in schizophrenia. We recently developed a task and computational model allowing us to separately assess the roles of RL (slow, cumulative learning) mechanisms versus WM (fast but capacity-limited) mechanisms in healthy adult human subjects. Here, we used this task to assess patients' specific sources of impairments in learning. In 15 separate blocks, subjects learned to pick one of three actions for stimuli. The number of stimuli to learn in each block varied from two to six, allowing us to separate influences of capacity-limited WM from the incremental RL system. As expected, both patients (n = 49) and healthy controls (n = 36) showed effects of set size and delay between stimulus repetitions, confirming the presence of working memory effects. Patients performed significantly worse than controls overall, but computational model fits and behavioral analyses indicate that these deficits could be entirely accounted for by changes in WM parameters (capacity and reliability), whereas RL processes were spared. These results suggest that the working memory system contributes strongly to learning impairments in schizophrenia.",
"title": ""
},
{
"docid": "3d6886b96d1a6fdf1339ce4c2e2b76af",
"text": "Crisis informatics is a field of research that investigates the use of computer-mediated communication— including social media—by members of the public and other entities during times of mass emergency. Supporting this type of research is challenging because large amounts of ephemeral event data can be generated very quickly and so must then be just as rapidly captured. Such data sets are challenging to analyze because of their heterogeneity and size. We have been designing, developing, and deploying software infrastructure to enable the large-scale collection and analysis of social media data during crisis events. We report on the challenges encountered when working in this space, the desired characteristics of such infrastructure, and the techniques, technology, and architectures that have been most useful in providing both scalability and flexibility. We also discuss the types of analytics this infrastructure supports and implications for future crisis informatics research.",
"title": ""
},
{
"docid": "1940721177615adccce0906e7c93cd28",
"text": "Pattern Matching is a computationally intensive task used in many research fields and real world applications. Due to the ever-growing volume of data to be processed, and increasing link speeds, the number of patterns to be matched has risen significantly. In this paper we explore the parallel capabilities of modern General Purpose Graphics Processing Units (GPGPU) applications for high speed pattern matching. A highly compressed failure-less Aho-Corasick algorithm is presented for Intrusion Detection Systems on off-the-shelf hardware. This approach maximises the bandwidth for data transfers between the host and the Graphics Processing Unit (GPU). Experiments are performed on multiple alphabet sizes, demonstrating the capabilities of the library to be used in different research fields, while sustaining an adequate throughput for intrusion detection systems or DNA sequencing. The work also explores the performance impact of adequate prefix matching for alphabet sizes and varying pattern numbers achieving speeds up to 8Gbps and low memory consumption for intrusion detection systems.",
"title": ""
},
{
"docid": "e8e1bf877e45de0d955d8736c342ec76",
"text": "Parking guidance and information (PGI) systems are becoming important parts of intelligent transportation systems due to the fact that cars and infrastructure are becoming more and more connected. One major challenge in developing efficient PGI systems is the uncertain nature of parking availability in parking facilities (both on-street and off-street). A reliable PGI system should have the capability of predicting the availability of parking at the arrival time with reliable accuracy. In this paper, we study the nature of the parking availability data in a big city and propose a multivariate autoregressive model that takes into account both temporal and spatial correlations of parking availability. The model is used to predict parking availability with high accuracy. The prediction errors are used to recommend the parking location with the highest probability of having at least one parking spot available at the estimated arrival time. The results are demonstrated using real-time parking data in the areas of San Francisco and Los Angeles.",
"title": ""
},
{
"docid": "be3e02812e35000b39e4608afc61f229",
"text": "The growing use of control access systems based on face recognition shed light over the need for even more accurate systems to detect face spoofing attacks. In this paper, an extensive analysis on face spoofing detection works published in the last decade is presented. The analyzed works are categorized by their fundamental parts, i.e., descriptors and classifiers. This structured survey also brings a comparative performance analysis of the works considering the most important public data sets in the field. The methodology followed in this work is particularly relevant to observe temporal evolution of the field, trends in the existing approaches, Corresponding author: Luciano Oliveira, tel. +55 71 3283-9472 Email addresses: [email protected] (Luiz Souza), [email protected] (Luciano Oliveira), [email protected] (Mauricio Pamplona), [email protected] (Joao Papa) to discuss still opened issues, and to propose new perspectives for the future of face spoofing detection.",
"title": ""
},
{
"docid": "75ed4cabbb53d4c75fda3a291ea0ab67",
"text": "Optimization of energy consumption in future intelligent energy networks (or Smart Grids) will be based on grid-integrated near-real-time communications between various grid elements in generation, transmission, distribution and loads. This paper discusses some of the challenges and opportunities of communications research in the areas of smart grid and smart metering. In particular, we focus on some of the key communications challenges for realizing interoperable and future-proof smart grid/metering networks, smart grid security and privacy, and how some of the existing networking technologies can be applied to energy management. Finally, we also discuss the coordinated standardization efforts in Europe to harmonize communications standards and protocols.",
"title": ""
},
{
"docid": "b75336a7470fe2b002e742dbb6bfa8d5",
"text": "In Intelligent Tutoring System (ITS), tracing the student's knowledge state during learning has been studied for several decades in order to provide more supportive learning instructions. In this paper, we propose a novel model for knowledge tracing that i) captures students' learning ability and dynamically assigns students into distinct groups with similar ability at regular time intervals, and ii) combines this information with a Recurrent Neural Network architecture known as Deep Knowledge Tracing. Experimental results confirm that the proposed model is significantly better at predicting student performance than well known state-of-the-art techniques for student modelling.",
"title": ""
},
{
"docid": "14b6af9d7199f724112021f81694c7ea",
"text": "Much research indicates that East Asians, more than Americans, explain events with reference to the context. The authors examined whether East Asians also attend to the context more than Americans do. In Study 1, Japanese and Americans watched animated vignettes of underwater scenes and reported the contents. In a subsequent recognition test, they were shown previously seen objects as well as new objects, either in their original setting or in novel settings, and then were asked to judge whether they had seen the objects. Study 2 replicated the recognition task using photographs of wildlife. The results showed that the Japanese (a) made more statements about contextual information and relationships than Americans did and (b) recognized previously seen objects more accurately when they saw them in their original settings rather than in the novel settings, whereas this manipulation had relatively little effect on Americans.",
"title": ""
},
{
"docid": "adad5599122e63cde59322b7ba46461b",
"text": "Understanding how images of objects and scenes behave in response to specific ego-motions is a crucial aspect of proper visual development, yet existing visual learning methods are conspicuously disconnected from the physical source of their images. We propose to exploit proprioceptive motor signals to provide unsupervised regularization in convolutional neural networks to learn visual representations from egocentric video. Specifically, we enforce that our learned features exhibit equivariance i.e. they respond systematically to transformations associated with distinct ego-motions. With three datasets, we show that our unsupervised feature learning system significantly outperforms previous approaches on visual recognition and next-best-view prediction tasks. In the most challenging test, we show that features learned from video captured on an autonomous driving platform improve large-scale scene recognition in a disjoint domain.",
"title": ""
},
{
"docid": "050c701f2663f4fa85aadd65a5dc96f2",
"text": "The availability of multiple, essentially complete genome sequences of prokaryotes and eukaryotes spurred both the demand and the opportunity for the construction of an evolutionary classification of genes from these genomes. Such a classification system based on orthologous relationships between genes appears to be a natural framework for comparative genomics and should facilitate both functional annotation of genomes and large-scale evolutionary studies. We describe here a major update of the previously developed system for delineation of Clusters of Orthologous Groups of proteins (COGs) from the sequenced genomes of prokaryotes and unicellular eukaryotes and the construction of clusters of predicted orthologs for 7 eukaryotic genomes, which we named KOGs after euk aryotic o rthologous g roups. The COG collection currently consists of 138,458 proteins, which form 4873 COGs and comprise 75% of the 185,505 (predicted) proteins encoded in 66 genomes of unicellular organisms. The euk aryotic o rthologous g roups (KOGs) include proteins from 7 eukaryotic genomes: three animals (the nematode Caenorhabditis elegans, the fruit fly Drosophila melanogaster and Homo sapiens), one plant, Arabidopsis thaliana, two fungi (Saccharomyces cerevisiae and Schizosaccharomyces pombe), and the intracellular microsporidian parasite Encephalitozoon cuniculi. The current KOG set consists of 4852 clusters of orthologs, which include 59,838 proteins, or ~54% of the analyzed eukaryotic 110,655 gene products. Compared to the coverage of the prokaryotic genomes with COGs, a considerably smaller fraction of eukaryotic genes could be included into the KOGs; addition of new eukaryotic genomes is expected to result in substantial increase in the coverage of eukaryotic genomes with KOGs. Examination of the phyletic patterns of KOGs reveals a conserved core represented in all analyzed species and consisting of ~20% of the KOG set. This conserved portion of the KOG set is much greater than the ubiquitous portion of the COG set (~1% of the COGs). In part, this difference is probably due to the small number of included eukaryotic genomes, but it could also reflect the relative compactness of eukaryotes as a clade and the greater evolutionary stability of eukaryotic genomes. The updated collection of orthologous protein sets for prokaryotes and eukaryotes is expected to be a useful platform for functional annotation of newly sequenced genomes, including those of complex eukaryotes, and genome-wide evolutionary studies.",
"title": ""
},
{
"docid": "1377bac68319fcc57fbafe6c21e89107",
"text": "In recent years, robotics in agriculture sector with its implementation based on precision agriculture concept is the newly emerging technology. The main reason behind automation of farming processes are saving the time and energy required for performing repetitive farming tasks and increasing the productivity of yield by treating every crop individually using precision farming concept. Designing of such robots is modeled based on particular approach and certain considerations of agriculture environment in which it is going to work. These considerations and different approaches are discussed in this paper. Also, prototype of an autonomous Agriculture Robot is presented which is specifically designed for seed sowing task only. It is a four wheeled vehicle which is controlled by LPC2148 microcontroller. Its working is based on the precision agriculture which enables efficient seed sowing at optimal depth and at optimal distances between crops and their rows, specific for each crop type.",
"title": ""
},
{
"docid": "116fd1ecd65f7ddfdfad6dca09c12876",
"text": "Malicious hardware Trojan circuitry inserted in safety-critical applications is a major threat to national security. In this work, we propose a novel application of a key-based obfuscation technique to achieve security against hardware Trojans. The obfuscation scheme is based on modifying the state transition function of a given circuit by expanding its reachable state space and enabling it to operate in two distinct modes -- the normal mode and the obfuscated mode. Such a modification obfuscates the rareness of the internal circuit nodes, thus making it difficult for an adversary to insert hard-to-detect Trojans. It also makes some inserted Trojans benign by making them activate only in the obfuscated mode. The combined effect leads to higher Trojan detectability and higher level of protection against such attack. Simulation results for a set of benchmark circuits show that the scheme is capable of achieving high levels of security at modest design overhead.",
"title": ""
},
{
"docid": "789fe916396c5a57a0327618d5efc74d",
"text": "In object detection, an intersection over union (IoU) threshold is required to define positives and negatives. An object detector, trained with low IoU threshold, e.g. 0.5, usually produces noisy detections. However, detection performance tends to degrade with increasing the IoU thresholds. Two main factors are responsible for this: 1) overfitting during training, due to exponentially vanishing positive samples, and 2) inference-time mismatch between the IoUs for which the detector is optimal and those of the input hypotheses. A multi-stage object detection architecture, the Cascade R-CNN, is proposed to address these problems. It consists of a sequence of detectors trained with increasing IoU thresholds, to be sequentially more selective against close false positives. The detectors are trained stage by stage, leveraging the observation that the output of a detector is a good distribution for training the next higher quality detector. The resampling of progressively improved hypotheses guarantees that all detectors have a positive set of examples of equivalent size, reducing the overfitting problem. The same cascade procedure is applied at inference, enabling a closer match between the hypotheses and the detector quality of each stage. A simple implementation of the Cascade R-CNN is shown to surpass all single-model object detectors on the challenging COCO dataset. Experiments also show that the Cascade R-CNN is widely applicable across detector architectures, achieving consistent gains independently of the baseline detector strength. The code is available at https://github.com/zhaoweicai/cascade-rcnn.",
"title": ""
},
{
"docid": "956541e525760ae663028d7b73d6fb46",
"text": "Regression testing is an important activity but can get expensive for large test suites. Test-suite reduction speeds up regression testing by identifying and removing redundant tests based on a given set of requirements. Traditional research on test-suite reduction is rather diverse but most commonly shares three properties: (1) requirements are defined by a coverage criterion such as statement coverage; (2) the reduced test suite has to satisfy all the requirements as the original test suite; and (3) the quality of the reduced test suites is measured on the software version on which the reduction is performed. These properties make it hard for test engineers to decide how to use reduced test suites. We address all three properties of traditional test-suite reduction: (1) we evaluate test-suite reduction with requirements defined by killed mutants; (2) we evaluate inadequate reduction that does not require reduced test suites to satisfy all the requirements; and (3) we propose evolution-aware metrics that evaluate the quality of the reduced test suites across multiple software versions. Our evaluations allow a more thorough exploration of trade-offs in test-suite reduction, and our evolution-aware metrics show how the quality of reduced test suites can change after the version where the reduction is performed. We compare the trade-offs among various reductions on 18 projects with a total of 261,235 tests over 3,590 commits and a cumulative history spanning 35 years of development. Our results help test engineers make a more informed decision about balancing size, coverage, and fault-detection loss of reduced test suites.",
"title": ""
}
] |
scidocsrr
|
1f4cf2423f05ef835580dd2811cf2555
|
Putting Your Best Face Forward: The Accuracy of Online Dating Photographs
|
[
{
"docid": "34fb2f437c5135297ec2ad52556440e9",
"text": "This study investigates self-disclosure in the novel context of online dating relationships. Using a national random sample of Match.com members (N = 349), the authors tested a model of relational goals, self-disclosure, and perceived success in online dating. The authors’findings provide support for social penetration theory and the social information processing and hyperpersonal perspectives as well as highlight the positive effect of anticipated future face-to-face interaction on online self-disclosure. The authors find that perceived online dating success is predicted by four dimensions of self-disclosure (honesty, amount, intent, and valence), although honesty has a negative effect. Furthermore, online dating experience is a strong predictor of perceived success in online dating. Additionally, the authors identify predictors of strategic success versus self-presentation success. This research extends existing theory on computer-mediated communication, selfdisclosure, and relational success to the increasingly important arena of mixed-mode relationships, in which participants move from mediated to face-to-face communication.",
"title": ""
},
{
"docid": "47aec03cf18dc3abd4d46ee017f25a16",
"text": "Cues of phenotypic condition should be among those used by women in their choice of mates. One marker of better phenotypic condition is thought to be symmetrical bilateral body and facial features. However, it is not clear whether women use symmetry as the primary cue in assessing the phenotypic quality of potential mates or whether symmetry is correlated with other facial markers affecting physical attractiveness. Using photographs of men's faces, for which facial symmetry had been measured, we found a relationship between women's attractiveness ratings of these faces and symmetry, but the subjects could not rate facial symmetry accurately. Moreover, the relationship between facial attractiveness and symmetry was still observed, even when symmetry cues were removed by presenting only the left or right half of faces. These results suggest that attractive features other than symmetry can be used to assess phenotypic condition. We identified one such cue, facial masculinity (cheek-bone prominence and a relatively longer lower face), which was related to both symmetry and full- and half-face attractiveness.",
"title": ""
}
] |
[
{
"docid": "401bad1d0373acb71a855a28d2aeea38",
"text": "mechanobullous epidermolysis bullosa acquisita to combined treatment with immunoadsorption and rituximab (anti-CD20 monoclonal antibodies). Arch Dermatol 2007; 143: 192–198. 6 Sadler E, Schafleitner B, Lanschuetzer C et al. Treatment-resistant classical epidermolysis bullosa acquisita responding to rituximab. Br J Dermatol 2007; 157: 417–419. 7 Crichlow SM, Mortimer NJ, Harman KE. A successful therapeutic trial of rituximab in the treatment of a patient with recalcitrant, high-titre epidermolysis bullosa acquisita. Br J Dermatol 2007; 156: 194–196. 8 Saha M, Cutler T, Bhogal B, Black MM, Groves RW. Refractory epidermolysis bullosa acquisita: successful treatment with rituximab. Clin Exp Dermatol 2009; 34: e979–e980. 9 Kubisch I, Diessenbacher P, Schmidt E, Gollnick H, Leverkus M. Premonitory epidermolysis bullosa acquisita mimicking eyelid dermatitis: successful treatment with rituximab and protein A immunoapheresis. Am J Clin Dermatol 2010; 11: 289–293. 10 Meissner C, Hoefeld-Fegeler M, Vetter R et al. Severe acral contractures and nail loss in a patient with mechano-bullous epidermolysis bullosa acquisita. Eur J Dermatol 2010; 20: 543–544.",
"title": ""
},
{
"docid": "91c0658dbd6f078fdf53e9ae276a6f73",
"text": "Given a photo collection of \"unconstrained\" face images of one individual captured under a variety of unknown pose, expression, and illumination conditions, this paper presents a method for reconstructing a 3D face surface model of the individual along with albedo information. Unlike prior work on face reconstruction that requires large photo collections, we formulate an approach to adapt to photo collections with a high diversity in both the number of images and the image quality. To achieve this, we incorporate prior knowledge about face shape by fitting a 3D morphable model to form a personalized template, following by using a novel photometric stereo formulation to complete the fine details, under a coarse-to-fine scheme. Our scheme incorporates a structural similarity-based local selection step to help identify a common expression for reconstruction while discarding occluded portions of faces. The evaluation of reconstruction performance is through a novel quality measure, in the absence of ground truth 3D scans. Superior large-scale experimental results are reported on synthetic, Internet, and personal photo collections.",
"title": ""
},
{
"docid": "41a0b9797c556368f84e2a05b80645f3",
"text": "This paper describes and evaluates log-linear parsing models for Combinatory Categorial Grammar (CCG). A parallel implementation of the L-BFGS optimisation algorithm is described, which runs on a Beowulf cluster allowing the complete Penn Treebank to be used for estimation. We also develop a new efficient parsing algorithm for CCG which maximises expected recall of dependencies. We compare models which use all CCG derivations, including nonstandard derivations, with normal-form models. The performances of the two models are comparable and the results are competitive with existing wide-coverage CCG parsers.",
"title": ""
},
{
"docid": "75e9253b7c6333db1aa3cef2ab364f99",
"text": "We used single-pulse transcranial magnetic stimulation of the left primary hand motor cortex and motor evoked potentials of the contralateral right abductor pollicis brevis to probe motor cortex excitability during a standard mental rotation task. Based on previous findings we tested the following hypotheses. (i) Is the hand motor cortex activated more strongly during mental rotation than during reading aloud or reading silently? The latter tasks have been shown to increase motor cortex excitability substantially in recent studies. (ii) Is the recruitment of the motor cortex for mental rotation specific for the judgement of rotated but not for nonrotated Shepard & Metzler figures? Surprisingly, motor cortex activation was higher during mental rotation than during verbal tasks. Moreover, we found strong motor cortex excitability during the mental rotation task but significantly weaker excitability during judgements of nonrotated figures. Hence, this study shows that the primary hand motor area is generally involved in mental rotation processes. These findings are discussed in the context of current theories of mental rotation, and a likely mechanism for the global excitability increase in the primary motor cortex during mental rotation is proposed.",
"title": ""
},
{
"docid": "90b6b0ff4b60e109fc111b26aab4a25c",
"text": "Due to its damage to Internet security, malware and its detection has caught the attention of both anti-malware industry and researchers for decades. Many research efforts have been conducted on developing intelligent malware detection systems. In these systems, resting on the analysis of file contents extracted from the file samples, like Application Programming Interface (API) calls, instruction sequences, and binary strings, data mining methods such as Naive Bayes and Support Vector Machines have been used for malware detection. However, driven by the economic benefits, both diversity and sophistication of malware have significantly increased in recent years. Therefore, anti-malware industry calls for much more novel methods which are capable to protect the users against new threats, and more difficult to evade. In this paper, other than based on file contents extracted from the file samples, we study how file relation graphs can be used for malware detection and propose a novel Belief Propagation algorithm based on the constructed graphs to detect newly unknown malware. A comprehensive experimental study on a real and large data collection from Comodo Cloud Security Center is performed to compare various malware detection approaches. Promising experimental results demonstrate that the accuracy and efficiency of our proposed method outperform other alternate data mining based detection techniques.",
"title": ""
},
{
"docid": "703696ca3af2a485ac34f88494210007",
"text": "Cells navigate environments, communicate and build complex patterns by initiating gene expression in response to specific signals. Engineers seek to harness this capability to program cells to perform tasks or create chemicals and materials that match the complexity seen in nature. This Review describes new tools that aid the construction of genetic circuits. Circuit dynamics can be influenced by the choice of regulators and changed with expression 'tuning knobs'. We collate the failure modes encountered when assembling circuits, quantify their impact on performance and review mitigation efforts. Finally, we discuss the constraints that arise from circuits having to operate within a living cell. Collectively, better tools, well-characterized parts and a comprehensive understanding of how to compose circuits are leading to a breakthrough in the ability to program living cells for advanced applications, from living therapeutics to the atomic manufacturing of functional materials.",
"title": ""
},
{
"docid": "2bb535ff25532ccdbf85a301a872c8bd",
"text": "Simultaneous Localization and Mapping (SLAM) consists in the concurrent construction of a representation of the environment (the map), and the estimation of the state of the robot moving within it. The SLAM community has made astonishing progress over the last 30 years, enabling large-scale real-world applications, and witnessing a steady transition of this technology to industry. We survey the current state of SLAM. We start by presenting what is now the de-facto standard formulation for SLAM. We then review related work, covering a broad set of topics including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers. The paper serves as a tutorial for the non-expert reader. It is also a position paper: by looking at the published research with a critical eye, we delineate open challenges and new research issues, that still deserve careful scientific investigation. The paper also contains the authors’ take on two questions that often animate discussions during robotics conferences: do robots need SLAM? Is SLAM solved?",
"title": ""
},
{
"docid": "3c8ac7bd31d133b4d43c0d3a0f08e842",
"text": "How we teach and learn is undergoing a revolution, due to changes in technology and connectivity. Education may be one of the best application areas for advanced NLP techniques, and NLP researchers have much to contribute to this problem, especially in the areas of learning to write, mastery learning, and peer learning. In this paper I consider what happens when we convert natural language processors into natural language coaches. 1 Why Should You Care, NLP Researcher? There is a revolution in learning underway. Students are taking Massive Open Online Courses as well as online tutorials and paid online courses. Technology and connectivity makes it possible for students to learn from anywhere in the world, at any time, to fit their schedules. And in today’s knowledge-based economy, going to school only in one’s early years is no longer enough; in future most people are going to need continuous, lifelong education. Students are changing too — they expect to interact with information and technology. Fortunately, pedagogical research shows significant benefits of active learning over passive methods. The modern view of teaching means students work actively in class, talk with peers, and are coached more than graded by their instructors. In this new world of education, there is a great need for NLP research to step in and help. I hope in this paper to excite colleagues about the possibilities and suggest a few new ways of looking at them. I do not attempt to cover the field of language and learning comprehensively, nor do I claim there is no work in the field. In fact there is quite a bit, such as a recent special issue on language learning resources (Sharoff et al., 2014), the long running ACL workshops on Building Educational Applications using NLP (Tetreault et al., 2015), and a recent shared task competition on grammatical error detection for second language learners (Ng et al., 2014). But I hope I am casting a few interesting thoughts in this direction for those colleagues who are not focused on this particular topic.",
"title": ""
},
{
"docid": "40df4f2d0537bca3cf92dc3005d2b9f3",
"text": "The pages of this Sample Chapter may have slight variations in final published form. H istorically, we talk of first-force psychodynamic, second-force cognitive-behavioral, and third-force existential-humanistic counseling and therapy theories. Counseling and psychotherapy really began with Freud and psychoanalysis. James Watson and, later, B. F. Skinner challenged Freud's emphasis on the unconscious and focused on observable behavior. Carl Rogers, with his person-centered counseling, revolutionized the helping professions by focusing on the importance of nurturing a caring therapist-client relationship in the helping process. All three approaches are still alive and well in the fields of counseling and psychology, as discussed in Chapters 5 through 10. As you reflect on the new knowledge and skills you exercised by reading the preceding chapters and completing the competency-building activities in those chapters, hopefully you part three 319 will see that you have gained a more sophisticated foundational understanding of the three traditional theoretical forces that have shaped the fields of counseling and therapy over the past one hundred years. Efforts in this book have been intended to bring your attention to both the strengths and limitations of psychodynamic, cognitive-behavioral, and existential-humanistic perspectives. With these perspectives in mind, the following chapters examine the fourth major theoretical force that has emerged in the mental health professions over the past 40 years: the multicultural-feminist-social justice counseling world-view. The perspectives of the fourth force challenge you to learn new competencies you will need to acquire to work effectively, respectfully, and ethically in a culturally diverse 21st-century society. Part Three begins by discussing the rise of the feminist counseling and therapy perspective (Chapter 11) and multicultural counseling and therapy (MCT) theories (Chapter 12). To assist you in synthesizing much of the information contained in all of the preceding chapters, Chapter 13 presents a comprehensive and integrative helping theory referred to as developmental counseling and therapy (DCT). Chapter 14 offers a comprehensive examination of family counseling and therapy theories to further extend your knowledge of ways that mental health practitioners can assist entire families in realizing new and untapped dimensions of their collective well-being. Finally Chapter 15 provides guidelines to help you develop your own approach to counseling and therapy that complements a growing awareness of your own values, biases, preferences, and relational compe-tencies as a mental health professional. Throughout, competency-building activities offer you opportunities to continue to exercise new skills associated with the different theories discussed in Part Three. …",
"title": ""
},
{
"docid": "21f45ec969ba3852d731a2e2119fc86e",
"text": "When a large number of people with heterogeneous knowledge and skills run a project together, it is important to use a sensible engineering process. This especially holds for a project building an intelligent autonomously driving car to participate in the 2007 DARPA Urban Challenge. In this article, we present essential elements of a software and systems engineering process for the development of artificial intelligence capable of driving autonomously in complex urban situations. The process includes agile concepts, like test first approach, continuous integration of every software module and a reliable release and configuration management assisted by software tools in integrated development environments. However, the most important ingredients for an efficient and stringent development are the ability to efficiently test the behavior of the developed system in a flexible and modular simulator for urban situations.",
"title": ""
},
{
"docid": "3df76261ff7981794e9c3d1332efe023",
"text": "The complete sequence of the 16,569-base pair human mitochondrial genome is presented. The genes for the 12S and 16S rRNAs, 22 tRNAs, cytochrome c oxidase subunits I, II and III, ATPase subunit 6, cytochrome b and eight other predicted protein coding genes have been located. The sequence shows extreme economy in that the genes have none or only a few noncoding bases between them, and in many cases the termination codons are not coded in the DNA but are created post-transcriptionally by polyadenylation of the mRNAs.",
"title": ""
},
{
"docid": "a412c41fe943120a513ad9b6fb70cb8b",
"text": "Blockchains based on proofs of work (PoW) currently account for more than 90% of the total market capitalization of existing digital cryptocurrencies. The security of PoWbased blockchains requires that new transactions are verified, making a proper replication of the blockchain data in the system essential. While existing PoW mining protocols offer considerable incentives for workers to generate blocks, workers do not have any incentives to store the blockchain. This resulted in a sharp decrease in the number of full nodes that store the full blockchain, e.g., in Bitcoin, Litecoin, etc. However, the smaller is the number of replicas or nodes storing the replicas, the higher is the vulnerability of the system against compromises and DoS-attacks. In this paper, we address this problem and propose a novel solution, EWoK (Entangled proofs of WOrk and Knowledge). EWoK regulates in a decentralized-manner the minimum number of replicas that should be stored by tying replication to the only directly-incentivized process in PoW-blockchains—which is PoW itself. EWoK only incurs small modifications to existing PoW protocols, and is fully compliant with the specifications of existing mining hardware—which is likely to increase its adoption by the existing PoW ecosystem. EWoK plugs an efficient in-memory hash-based proof of knowledge and couples them with the standard PoW mechanism. We implemented EWoK and integrated it within commonly used mining protocols, such as GetBlockTemplate and Stratum mining; our results show that EWoK can be easily integrated within existing mining pool protocols and does not impair the mining efficiency.",
"title": ""
},
{
"docid": "f415b38e6d43c8ed81ce97fd924def1b",
"text": "Collaborative filtering is one of the most successful and widely used methods of automated product recommendation in online stores. The most critical component of the method is the mechanism of finding similarities among users using product ratings data so that products can be recommended based on the similarities. The calculation of similarities has relied on traditional distance and vector similarity measures such as Pearson’s correlation and cosine which, however, have been seldom questioned in terms of their effectiveness in the recommendation problem domain. This paper presents a new heuristic similarity measure that focuses on improving recommendation performance under cold-start conditions where only a small number of ratings are available for similarity calculation for each user. Experiments using three different datasets show the superiority of the measure in new user cold-start conditions. 2007 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "a33f962c4a6ea61d3400ca9feea50bd7",
"text": "Now, we come to offer you the right catalogues of book to open. artificial intelligence techniques for rational decision making is one of the literary work in this world in suitable to be reading material. That's not only this book gives reference, but also it will show you the amazing benefits of reading a book. Developing your countless minds is needed; moreover you are kind of people with great curiosity. So, the book is very appropriate for you.",
"title": ""
},
{
"docid": "b41ee70f93fe7c52f4fc74727f43272e",
"text": "It is no secret that pornographic material is now a one-clickaway from everyone, including children and minors. General social media networks are striving to isolate adult images and videos from normal ones. Intelligent image analysis methods can help to automatically detect and isolate questionable images in media. Unfortunately, these methods require vast experience to design the classifier including one or more of the popular computer vision feature descriptors. We propose to build a classifier based on one of the recently flourishing deep learning techniques. Convolutional neural networks contain many layers for both automatic features extraction and classification. The benefit is an easier system to build (no need for hand-crafting features and classifiers). Additionally, our experiments show that it is even more accurate than the state of the art methods on the most recent benchmark dataset.",
"title": ""
},
{
"docid": "ea86e4d0581dc3be3f3671cf25b064ae",
"text": "Transfer learning allows leveraging the knowledge of source domains, available a priori, to help training a classifier for a target domain, where the available data is scarce. The effectiveness of the transfer is affected by the relationship between source and target. Rather than improving the learning, brute force leveraging of a source poorly related to the target may decrease the classifier performance. One strategy to reduce this negative transfer is to import knowledge from multiple sources to increase the chance of finding one source closely related to the target. This work extends the boosting framework for transferring knowledge from multiple sources. Two new algorithms, MultiSource-TrAdaBoost, and TaskTrAdaBoost, are introduced, analyzed, and applied for object category recognition and specific object detection. The experiments demonstrate their improved performance by greatly reducing the negative transfer as the number of sources increases. TaskTrAdaBoost is a fast algorithm enabling rapid retraining over new targets.",
"title": ""
},
{
"docid": "eb34d154a1547db6e0a9612abc0adcf3",
"text": "Soft robots are challenging to model due to their nonlinear behavior. However, their soft bodies make it possible to safely observe their behavior under random control inputs, making them amenable to large-scale data collection and system identification. This paper implements and evaluates a system identification method based on Koopman operator theory. This theory offers a way to represent a nonlinear system as a linear system in the infinite-dimensional space of real-valued functions called observables, enabling models of nonlinear systems to be constructed via linear regression of observed data. The approach does not suffer from some of the shortcomings of other nonlinear system identification methods, which typically require the manual tuning of training parameters and have limited convergence guarantees. A dynamic model of a pneumatic soft robot arm is constructed via this method, and used to predict the behavior of the real system. The total normalized-root-mean-square error (NRMSE) of its predictions over twelve validation trials is lower than that of several other identified models including a neural network, NLARX, nonlinear Hammerstein-Wiener, and linear state space model.",
"title": ""
},
{
"docid": "9634245d2a71804083fa90a6555d13a8",
"text": "In far-field speech recognition systems, training acoustic models with alignments generated from parallel close-talk microphone data provides significant improvements. However it is not practical to assume the availability of large corpora of parallel close-talk microphone data, for training. In this paper we explore methods to reduce the performance gap between far-field ASR systems trained with alignments from distant microphone data and those trained with alignments from parallel close-talk microphone data. These methods include the use of a lattice-free sequence objective function which tolerates minor mis-alignment errors; and the use of data selection techniques to discard badly aligned data. We present results on single distant microphone and multiple distant microphone scenarios of the AMI LVCSR task. We identify prominent causes of alignment errors in AMI data.",
"title": ""
},
{
"docid": "05a35ab061a0d5ce18a3ceea8dde78f6",
"text": "A single feed grid array antenna for 24 GHz Doppler sensor is proposed in this paper. It is designed on 0.787 mm thick substrate made of Rogers Duroid 5880 (ε<sub>r</sub>= 2.2 and tan δ= 0.0009) with 0.017 mm copper claddings. Dimension of the antenna is 60 mm × 60 mm × 0.787 mm. This antenna exhibits 2.08% impedance bandwidth, 6.25% radiation bandwidth and 20.6 dBi gain at 24.2 GHz. The beamwidth is 14°and 16°in yoz and xoz planes, respectively.",
"title": ""
},
{
"docid": "ff18792f352429df42358d6b435ae813",
"text": "Recently, micro-expression recognition has seen an increase of interest from psychological and computer vision communities. As microexpressions are generated involuntarily on a person’s face, and are usually a manifestation of repressed feelings of the person. Most existing works pay attention to either the detection or spotting of micro-expression frames or the categorization of type of micro-expression present in a short video shot. In this paper, we introduced a novel automatic approach to micro-expression recognition from long video that combines both spotting and recognition mechanisms. To achieve this, the apex frame, which provides the instant when the highest intensity of facial movement occurs, is first spotted from the entire video sequence. An automatic eye masking technique is also presented to improve the robustness of apex frame spotting. With the single apex, we describe the spotted micro-expression instant using a state-of-the-art feature extractor before proceeding to classification. This is the first known work that recognizes micro-expressions from a long video sequence without the knowledge of onset and offset frames, which are typically used to determine a cropped sub-sequence containing the micro-expression. We evaluated the spotting and recognition tasks on four spontaneous micro-expression databases comprising only of raw long videos – CASME II-RAW, SMICE-HS, SMIC-E-VIS and SMIC-E-NIR. We obtained compelling results that show the effectiveness of the proposed approach, which outperform most methods that rely on human annotated sub-sequences.",
"title": ""
}
] |
scidocsrr
|
494618e843cad4d38743b862d5b3d3a7
|
Measuring the Lifetime Value of Customers Acquired from Google Search Advertising
|
[
{
"docid": "bfe762fc6e174778458b005be75d8285",
"text": "The Gibbs sampler, the algorithm of Metropolis and similar iterative simulation methods are potentially very helpful for summarizing multivariate distributions. Used naively, however, iterative simulation can give misleading answers. Our methods are simple and generally applicable to the output of any iterative simulation; they are designed for researchers primarily interested in the science underlying the data and models they are analyzing, rather than for researchers interested in the probability theory underlying the iterative simulations themselves. Our recommended strategy is to use several independent sequences, with starting points sampled from an overdispersed istribution. At each step of the iterative simulation, we obtain, for each univariate estimand of interest, a distributional estimate and an estimate of how much sharper the distributional estimate might become if the simulations were continued indefinitely. Because our focus is on applied inference for Bayesian posterior distributions in real problems, which often tend toward normality after transformations and marginalization, we derive our results as normal-theory approximations to exact Bayesian inference, conditional on the observed simulations. The methods are illustrated on a randomeffects mixture model applied to experimental measurements of reaction times of normal and schizophrenic patients.",
"title": ""
}
] |
[
{
"docid": "5b9488755fb3146adf5b6d8d767b7c8f",
"text": "This paper presents an overview of our activities for spoken and written language resources for Vietnamese implemented at CLIPSIMAG Laboratory and International Research Center MICA. A new methodology for fast text corpora acquisition for minority languages which has been applied to Vietnamese is proposed. The first results of a process of building a large Vietnamese speech database (VNSpeechCorpus) and a phonetic dictionary, which is used for automatic alignment process, are also presented.",
"title": ""
},
{
"docid": "bda892eb6cdcc818284f56b74c932072",
"text": "In this paper, a low power and low jitter 12-bit CMOS digitally controlled oscillator (DCO) design is presented. The CMOS DCO is designed based on a ring oscillator implemented with Schmitt trigger based inverters. Simulations of the proposed DCO using 32 nm CMOS predictive transistor model (PTM) achieves controllable frequency range of 570 MHz~850 MHz with a wide linearity. Monte Carlo simulation demonstrates that the time-period jitter due to random power supply fluctuation is under 75 ps and the power consumption is 2.3 mW at 800 MHz with 0.9 V power supply.",
"title": ""
},
{
"docid": "24d0d2a384b2f9cefc6e5162cdc52c45",
"text": "Food classification from images is a fine-grained classification problem. Manual curation of food images is cost, time and scalability prohibitive. On the other hand, web data is available freely but contains noise. In this paper, we address the problem of classifying food images with minimal data curation. We also tackle a key problems with food images from the web where they often have multiple cooccuring food types but are weakly labeled with a single label. We first demonstrate that by sequentially adding a few manually curated samples to a larger uncurated dataset from two web sources, the top-1 classification accuracy increases from 50.3% to 72.8%. To tackle the issue of weak labels, we augment the deep model with Weakly Supervised learning (WSL) that results in an increase in performance to 76.2%. Finally, we show some qualitative results to provide insights into the performance improvements using the proposed ideas.",
"title": ""
},
{
"docid": "723f7d157cacfcad4523f7544a9d1c77",
"text": "The superiority of deeply learned pedestrian representations has been reported in very recent literature of person re-identification (re-ID). In this article, we consider the more pragmatic issue of learning a deep feature with no or only a few labels. We propose a progressive unsupervised learning (PUL) method to transfer pretrained deep representations to unseen domains. Our method is easy to implement and can be viewed as an effective baseline for unsupervised re-ID feature learning. Specifically, PUL iterates between (1) pedestrian clustering and (2) fine-tuning of the convolutional neural network (CNN) to improve the initialization model trained on the irrelevant labeled dataset. Since the clustering results can be very noisy, we add a selection operation between the clustering and fine-tuning. At the beginning, when the model is weak, CNN is fine-tuned on a small amount of reliable examples that locate near to cluster centroids in the feature space. As the model becomes stronger, in subsequent iterations, more images are being adaptively selected as CNN training samples. Progressively, pedestrian clustering and the CNN model are improved simultaneously until algorithm convergence. This process is naturally formulated as self-paced learning. We then point out promising directions that may lead to further improvement. Extensive experiments on three large-scale re-ID datasets demonstrate that PUL outputs discriminative features that improve the re-ID accuracy. Our code has been released at https://github.com/hehefan/Unsupervised-Person-Re-identification-Clustering-and-Fine-tuning.",
"title": ""
},
{
"docid": "faf83822de9f583bebc120aecbcd107a",
"text": "Relapsed B-cell lymphomas are incurable with conventional chemotherapy and radiation therapy, although a fraction of patients can be cured with high-dose chemoradiotherapy and autologous stemcell transplantation (ASCT). We conducted a phase I/II trial to estimate the maximum tolerated dose (MTD) of iodine 131 (131I)–tositumomab (anti-CD20 antibody) that could be combined with etoposide and cyclophosphamide followed by ASCT in patients with relapsed B-cell lymphomas. Fifty-two patients received a trace-labeled infusion of 1.7 mg/kg 131Itositumomab (185-370 MBq) followed by serial quantitative gamma-camera imaging and estimation of absorbed doses of radiation to tumor sites and normal organs. Ten days later, patients received a therapeutic infusion of 1.7 mg/kg tositumomab labeled with an amount of 131I calculated to deliver the target dose of radiation (20-27 Gy) to critical normal organs (liver, kidneys, and lungs). Patients were maintained in radiation isolation until their total-body radioactivity was less than 0.07 mSv/h at 1 m. They were then given etoposide and cyclophosphamide followed by ASCT. The MTD of 131Itositumomab that could be safely combined with 60 mg/kg etoposide and 100 mg/kg cyclophosphamide delivered 25 Gy to critical normal organs. The estimated overall survival (OS) and progressionfree survival (PFS) of all treated patients at 2 years was 83% and 68%, respectively. These findings compare favorably with those in a nonrandomized control group of patients who underwent transplantation, external-beam total-body irradiation, and etoposide and cyclophosphamide therapy during the same period (OS of 53% and PFS of 36% at 2 years), even after adjustment for confounding variables in a multivariable analysis. (Blood. 2000;96:2934-2942)",
"title": ""
},
{
"docid": "6838d497f81c594cb1760c075b0f5d48",
"text": "Unsupervised learning with generative adversarial networks (GANs) has proven hugely successful. Regular GANs hypothesize the discriminator as a classifier with the sigmoid cross entropy loss function. However, we found that this loss function may lead to the vanishing gradients problem during the learning process. To overcome such a problem, we propose in this paper the Least Squares Generative Adversarial Networks (LSGANs) which adopt the least squares loss function for the discriminator. We show that minimizing the objective function of LSGAN yields minimizing the Pearson $x^{2}$ divergence. There are two benefits of LSGANs over regular GANs. First, LSGANs are able to generate higher quality images than regular GANs. Second, LSGANs perform more stable during the learning process. We train LSGANs on several datasets, and the experimental results show that the images generated by LSGANs are of better quality than regular GANs. Furthermore, we evaluate the stability of LSGANs in two groups. One is to compare between LSGANs and regular GANs without gradient penalty. The other one is to compare between LSGANs with gradient penalty and WGANs with gradient penalty. We conduct four experiments to illustrate the stability of LSGANs. The other one is to compare between LSGANs with gradient penalty (LSGANs-GP) and WGANs with gradient penalty (WGANs-GP). The experimental results show that LSGANs-GP succeed in training for all the difficult architectures used in WGANs-GP, including 101-layer ResNet.",
"title": ""
},
{
"docid": "ea1d408c4e4bfe69c099412da30949b0",
"text": "The amount of scientific papers in the Molecular Biology field has experienced an enormous growth in the last years, prompting the need of developing automatic Information Extraction (IE) systems. This work is a first step towards the ontology-based domain-independent generalization of a system that identifies Escherichia coli regulatory networks. First, a domain ontology based on the RegulonDB database was designed and populated. After that, the steps of the existing IE system were generalized to use the knowledge contained in the ontology, so that it could be potentially applied to other domains. The resulting system has been tested both with abstract and full articles that describe regulatory interactions for E. coli, obtaining satisfactory results. 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "4b94082787aed8e947ae798b74bdd552",
"text": "AIM\nThe aim of the study was to determine the prevalence of high anxiety and substance use among university students in the Republic of Macedonia.\n\n\nMATERIAL AND METHODS\nThe sample comprised 742 students, aged 18-22 years, who attended the first (188 students) and second year studies at the Medical Faculty (257), Faculty of Dentistry (242), and Faculty of Law (55) within Ss. Cyril and Methodius University in Skopje. As a psychometric test the Beck Anxiety Inventory (BAI) was used. It is a self-rating questionnaire used for measuring the severity of anxiety. A psychiatric interview was performed with students with BAI scores > 25. A self-administered questionnaire consisted of questions on the habits of substance (alcohol, nicotine, sedative-hypnotics, and illicit drugs) use and abuse was also used. For statistical evaluation Statistica 7 software was used.\n\n\nRESULTS\nThe highest mean BAI scores were obtained by first year medical students (16.8 ± 9.8). Fifteen percent of all students and 20% of first year medical students showed high levels of anxiety. Law students showed the highest prevalence of substance use and abuse.\n\n\nCONCLUSION\nHigh anxiety and substance use as maladaptive behaviours among university students are not systematically investigated in our country. The study showed that students show these types of unhealthy reactions, regardless of the curriculum of education. More attention should be paid to students in the early stages of their education. A student counselling service which offers mental health assistance needs to be established within University facilities in R. Macedonia alongside the existing services in our health system.",
"title": ""
},
{
"docid": "d1525fdab295a16d5610210e80fb8104",
"text": "The analysis of big data requires powerful, scalable, and accurate data analytics techniques that the traditional data mining and machine learning do not have as a whole. Therefore, new data analytics frameworks are needed to deal with the big data challenges such as volumes, velocity, veracity, variety of the data. Distributed data mining constitutes a promising approach for big data sets, as they are usually produced in distributed locations, and processing them on their local sites will reduce significantly the response times, communications, etc. In this paper, we propose to study the performance of a distributed clustering, called Dynamic Distributed Clustering (DDC). DDC has the ability to remotely generate clusters and then aggregate them using an efficient aggregation algorithm. The technique is developed for spatial datasets. We evaluated the DDC using two types of communications (synchronous and asynchronous), and tested using various load distributions. The experimental results show that the approach has super-linear speed-up, scales up very well, and can take advantage of the recent programming models, such as MapReduce model, as its results are not affected by the types of communications.",
"title": ""
},
{
"docid": "7884c51de6f53d379edccac50fd55caa",
"text": "Objective. We analyze the process of changing ethical attitudes over time by focusing on a specific set of ‘‘natural experiments’’ that occurred over an 18-month period, namely, the accounting scandals that occurred involving Enron/Arthur Andersen and insider-trader allegations related to ImClone. Methods. Given the amount of media attention devoted to these ethical scandals, we test whether respondents in a cross-sectional sample taken over 18 months become less accepting of ethically charged vignettes dealing with ‘‘accounting tricks’’ and ‘‘insider trading’’ over time. Results. We find a significant and gradual decline in the acceptance of the vignettes over the 18-month period. Conclusions. Findings presented here may provide valuable insight into potential triggers of changing ethical attitudes. An intriguing implication of these results is that recent highly publicized ethical breaches may not be only a symptom, but also a cause of changing attitudes.",
"title": ""
},
{
"docid": "8d208bb5318dcbc5d941df24906e121f",
"text": "Applications based on eye-blink detection have increased, as a result of which it is essential for eye-blink detection to be robust and non-intrusive irrespective of the changes in the user's facial pose. However, most previous studies on camera-based blink detection have the disadvantage that their performances were affected by the facial pose. They also focused on blink detection using only frontal facial images. To overcome these disadvantages, we developed a new method for blink detection, which maintains its accuracy despite changes in the facial pose of the subject. This research is novel in the following four ways. First, the face and eye regions are detected by using both the AdaBoost face detector and a Lucas-Kanade-Tomasi (LKT)-based method, in order to achieve robustness to facial pose. Secondly, the determination of the state of the eye (being open or closed), needed for blink detection, is based on two features: the ratio of height to width of the eye region in a still image, and the cumulative difference of the number of black pixels of the eye region using an adaptive threshold in successive images. These two features are robustly extracted irrespective of the lighting variations by using illumination normalization. Thirdly, the accuracy of determining the eye state - open or closed - is increased by combining the above two features on the basis of the support vector machine (SVM). Finally, the SVM classifier for determining the eye state is adaptively selected according to the facial rotation. Experimental results using various databases showed that the blink detection by the proposed method is robust to various facial poses.",
"title": ""
},
{
"docid": "584de328ade02c34e36e2006f3e66332",
"text": "The HP-ASD technology has experienced a huge development in the last decade. This can be appreciated by the large number of recently introduced drive configurations on the market. In addition, many industrial applications are reaching MV operation and megawatt range or have experienced changes in requirements on efficiency, performance, and power quality, making the use of HP-ASDs more attractive. It can be concluded that, HP-ASDs is an enabling technology ready to continue powering the future of industry for the decades to come.",
"title": ""
},
{
"docid": "7aa6b9cb3a7a78ec26aff130a1c9015a",
"text": "As critical infrastructures in the Internet, data centers have evolved to include hundreds of thousands of servers in a single facility to support dataand/or computing-intensive applications. For such large-scale systems, it becomes a great challenge to design an interconnection network that provides high capacity, low complexity, low latency and low power consumption. The traditional approach is to build a hierarchical packet network using switches and routers. This approach suffers from limited scalability in the aspects of power consumption, wiring and control complexity, and delay caused by multi-hop store-andforwarding. In this paper we tackle the challenge by designing a novel switch architecture that supports direct interconnection of huge number of server racks and provides switching capacity at the level of Petabit/s. Our design combines the best features of electronics and optics. Exploiting recent advances in optics, we propose to build a bufferless optical switch fabric that includes interconnected arrayed waveguide grating routers (AWGRs) and tunable wavelength converters (TWCs). The optical fabric is integrated with electronic buffering and control to perform highspeed switching with nanosecond-level reconfiguration overhead. In particular, our architecture reduces the wiring complexity from O(N) to O(sqrt(N)). We design a practical and scalable scheduling algorithm to achieve high throughput under various traffic load. We also discuss implementation issues to justify the feasibility of this design. Simulation shows that our design achieves good throughput and delay performance.",
"title": ""
},
{
"docid": "ef9cea211dfdc79f5044a0da606bafb5",
"text": "Gender identity disorder (GID) refers to transsexual individuals who feel that their assigned biological gender is incongruent with their gender identity and this cannot be explained by any physical intersex condition. There is growing scientific interest in the last decades in studying the neuroanatomy and brain functions of transsexual individuals to better understand both the neuroanatomical features of transsexualism and the background of gender identity. So far, results are inconclusive but in general, transsexualism has been associated with a distinct neuroanatomical pattern. Studies mainly focused on male to female (MTF) transsexuals and there is scarcity of data acquired on female to male (FTM) transsexuals. Thus, our aim was to analyze structural MRI data with voxel based morphometry (VBM) obtained from both FTM and MTF transsexuals (n = 17) and compare them to the data of 18 age matched healthy control subjects (both males and females). We found differences in the regional grey matter (GM) structure of transsexual compared with control subjects, independent from their biological gender, in the cerebellum, the left angular gyrus and in the left inferior parietal lobule. Additionally, our findings showed that in several brain areas, regarding their GM volume, transsexual subjects did not differ significantly from controls sharing their gender identity but were different from those sharing their biological gender (areas in the left and right precentral gyri, the left postcentral gyrus, the left posterior cingulate, precuneus and calcarinus, the right cuneus, the right fusiform, lingual, middle and inferior occipital, and inferior temporal gyri). These results support the notion that structural brain differences exist between transsexual and healthy control subjects and that majority of these structural differences are dependent on the biological gender.",
"title": ""
},
{
"docid": "459a3bc8f54b8f7ece09d5800af7c37b",
"text": "This material is brought to you by the Journals at AIS Electronic Library (AISeL). It has been accepted for inclusion in Communications of the Association for Information Systems by an authorized administrator of AIS Electronic Library (AISeL). For more information, please contact [email protected]. As companies are increasingly exposed to information security threats, decision makers are permanently forced to pay attention to security issues. Information security risk management provides an approach for measuring the security through risk assessment, risk mitigation, and risk evaluation. Although a variety of approaches have been proposed, decision makers lack well-founded techniques that (1) show them what they are getting for their investment, (2) show them if their investment is efficient, and (3) do not demand in-depth knowledge of the IT security domain. This article defines a methodology for management decision makers that effectively addresses these problems. This work involves the conception, design, and implementation of the methodology into a software solution. The results from two qualitative case studies show the advantages of this methodology in comparison to established methodologies.",
"title": ""
},
{
"docid": "f740191f7c6d27811bb09bf40e8da021",
"text": "Collaboration Engineering is an approach for the design and deployment of repeatable collaboration processes that can be executed by practitioners without the support of collaboration professionals such as facilitators. A critical challenge in Collaboration Engineering concerns how the design activities have to be executed and which design choices have to be made to create a process design. We report on a four year design science study, in which we developed a design approach for Collaboration Engineering that",
"title": ""
},
{
"docid": "af1ddb07f08ad6065c004edae74a3f94",
"text": "Human decisions are prone to biases, and this is no less true for decisions made within data visualizations. Bias mitigation strategies often focus on the person, by educating people about their biases, typically with little success. We focus instead on the system, presenting the first evidence that altering the design of an interactive visualization tool can mitigate a strong bias – the attraction effect. Participants viewed 2D scatterplots where choices between superior alternatives were affected by the placement of other suboptimal points. We found that highlighting the superior alternatives weakened the bias, but did not eliminate it. We then tested an interactive approach where participants completely removed locally dominated points from the view, inspired by the elimination by aspects strategy in the decision-making literature. This approach strongly decreased the bias, leading to a counterintuitive suggestion: tools that allow removing inappropriately salient or distracting data from a view may help lead users to make more rational decisions.",
"title": ""
},
{
"docid": "b141c5a1b7a92856b9dc3e3958a91579",
"text": "Field-programmable analog arrays (FPAAs) provide a method for rapidly prototyping analog systems. Currently available commercial and academic FPAAs are typically based on operational amplifiers (or other similar analog primitives) with only a few computational elements per chip. While their specific architectures vary, their small sizes and often restrictive interconnect designs leave current FPAAs limited in functionality and flexibility. For FPAAs to enter the realm of large-scale reconfigurable devices such as modern field-programmable gate arrays (FPGAs), new technologies must be explored to provide area-efficient accurately programmable analog circuitry that can be easily integrated into a larger digital/mixed-signal system. Recent advances in the area of floating-gate transistors have led to a core technology that exhibits many of these qualities, and current research promises a digitally controllable analog technology that can be directly mated to commercial FPGAs. By leveraging these advances, a new generation of FPAAs is introduced in this paper that will dramatically advance the current state of the art in terms of size, functionality, and flexibility. FPAAs have been fabricated using floating-gate transistors as the sole programmable element, and the results of characterization and system-level experiments on the most recent FPAA are shown.",
"title": ""
},
{
"docid": "3dcce7058de4b41ad3614561832448a4",
"text": "Declarative models play an important role in most software design activities, by allowing designs to be constructed that selectively abstract over complex implementation details. In the user interface setting, Model-Based User Interface Development Environments (MB-UIDEs) provide a context within which declarative models can be constructed and related, as part of the interface design process. However, such declarative models are not usually directly executable, and may be difficult to relate to existing software components. It is therefore important that MB-UIDEs both fit in well with existing software architectures and standards, and provide an effective route from declarative interface specification to running user interfaces. This paper describes how user interface software is generated from declarative descriptions in the Teallach MB-UIDE. Distinctive features of Teallach include its open architecture, which connects directly to existing applications and widget sets, and the generation of executable interface applications in Java. This paper focuses on how Java programs, organized using the model-view-controller pattern (MVC), are generated from the task, domain and presentation models of Teallach.",
"title": ""
}
] |
scidocsrr
|
ea9b364a78fc2387e1dad358f0192471
|
Advances in Clickstream Data Analysis in Marketing
|
[
{
"docid": "6db749b222a44764cf07bde527c230a3",
"text": "There have been many claims that the Internet represents a new “frictionless market.” Our research empirically analyzes the characteristics of the Internet as a channel for two categories of homogeneous products — books and CDs. Using a data set of over 8,500 price observations collected over a period of 15 months, we compare pricing behavior at 41 Internet and conventional retail outlets. We find that prices on the Internet are 9-16% lower than prices in conventional outlets, depending on whether taxes, shipping and shopping costs are included in the price. Additionally, we find that Internet retailers’ price adjustments over time are up to 100 times smaller than conventional retailers’ price adjustments — presumably reflecting lower menu costs in Internet channels. We also find that levels of price dispersion depend importantly on the measures employed. When we simply compare the prices posted by different Internet retailers we find substantial dispersion. Internet retailer prices differ by an average of 33% for books and 25% for CDs. However, when we weight these prices by proxies for market share, we find dispersion is lower in Internet channels than in conventional channels, reflecting the dominance of certain heavily branded retailers. We conclude that while there is lower friction in many dimensions of Internet competition, branding, awareness, and trust remain important sources of heterogeneity among Internet retailers.",
"title": ""
},
{
"docid": "c02d207ed8606165e078de53a03bf608",
"text": "School of Business, University of Maryland (e-mail: mtrusov@rhsmith. umd.edu). Anand V. Bodapati is Associate Professor of Marketing (e-mail: [email protected]), and Randolph E. Bucklin is Peter W. Mullin Professor (e-mail: [email protected]), Anderson School of Management, University of California, Los Angeles. The authors are grateful to Christophe Van den Bulte and Dawn Iacobucci for their insightful and thoughtful comments on this work. John Hauser served as associate editor for this article. MICHAEL TRUSOV, ANAND V. BODAPATI, and RANDOLPH E. BUCKLIN*",
"title": ""
}
] |
[
{
"docid": "87be04b184d27c006bb06dd9906a9422",
"text": "With the significant growth of the markets for consumer electronics and various embedded systems, flash memory is now an economic solution for storage systems design. Because index structures require intensively fine-grained updates/modifications, block-oriented access over flash memory could introduce a significant number of redundant writes. This might not only severely degrade the overall performance, but also damage the reliability of flash memory. In this paper, we propose a very different approach, which can efficiently handle fine-grained updates/modifications caused by B-tree index access over flash memory. The implementation is done directly over the flash translation layer (FTL); hence, no modifications to existing application systems are needed. We demonstrate that when index structures are adopted over flash memory, the proposed methodology can significantly improve the system performance and, at the same time, reduce both the overhead of flash-memory management and the energy dissipation. The average response time of record insertions and deletions was also significantly reduced.",
"title": ""
},
{
"docid": "742dbd75ad995d5c51c4cbce0cc7f8cc",
"text": "Grasping objects under uncertainty remains an open problem in robotics research. This uncertainty is often due to noisy or partial observations of the object pose or shape. To enable a robot to react appropriately to unforeseen effects, it is crucial that it continuously takes sensor feedback into account. While visual feedback is important for inferring a grasp pose and reaching for an object, contact feedback offers valuable information during manipulation and grasp acquisition. In this paper, we use model-free deep reinforcement learning to synthesize control policies that exploit contact sensing to generate robust grasping under uncertainty. We demonstrate our approach on a multi-fingered hand that exhibits more complex finger coordination than the commonly used twofingered grippers. We conduct extensive experiments in order to assess the performance of the learned policies, with and without contact sensing. While it is possible to learn grasping policies without contact sensing, our results suggest that contact feedback allows for a significant improvement of grasping robustness under object pose uncertainty and for objects with a complex shape.",
"title": ""
},
{
"docid": "c02697087e8efd4c1ba9f9a26fa1115b",
"text": "OBJECTIVE\nTo estimate the current prevalence of limb loss in the United States and project the future prevalence to the year 2050.\n\n\nDESIGN\nEstimates were constructed using age-, sex-, and race-specific incidence rates for amputation combined with age-, sex-, and race-specific assumptions about mortality. Incidence rates were derived from the 1988 to 1999 Nationwide Inpatient Sample of the Healthcare Cost and Utilization Project, corrected for the likelihood of reamputation among those undergoing amputation for vascular disease. Incidence rates were assumed to remain constant over time and applied to historic mortality and population data along with the best available estimates of relative risk, future mortality, and future population projections. To investigate the sensitivity of our projections to increasing or decreasing incidence, we developed alternative sets of estimates of limb loss related to dysvascular conditions based on assumptions of a 10% or 25% increase or decrease in incidence of amputations for these conditions.\n\n\nSETTING\nCommunity, nonfederal, short-term hospitals in the United States.\n\n\nPARTICIPANTS\nPersons who were discharged from a hospital with a procedure code for upper-limb or lower-limb amputation or diagnosis code of traumatic amputation.\n\n\nINTERVENTIONS\nNot applicable.\n\n\nMAIN OUTCOME MEASURES\nPrevalence of limb loss by age, sex, race, etiology, and level in 2005 and projections to the year 2050.\n\n\nRESULTS\nIn the year 2005, 1.6 million persons were living with the loss of a limb. Of these subjects, 42% were nonwhite and 38% had an amputation secondary to dysvascular disease with a comorbid diagnosis of diabetes mellitus. It is projected that the number of people living with the loss of a limb will more than double by the year 2050 to 3.6 million. If incidence rates secondary to dysvascular disease can be reduced by 10%, this number would be lowered by 225,000.\n\n\nCONCLUSIONS\nOne in 190 Americans is currently living with the loss of a limb. Unchecked, this number may double by the year 2050.",
"title": ""
},
{
"docid": "74ca823c5dfb41e3566a29549c8137ab",
"text": "\"Experimental realization of quantum algorithm for solving linear systems of equations\" (2014). Many important problems in science and engineering can be reduced to the problem of solving linear equations. The quantum algorithm discovered recently indicates that one can solve an N-dimensional linear equation in O(log N) time, which provides an exponential speedup over the classical counterpart. Here we report an experimental demonstration of the quantum algorithm when the scale of the linear equation is 2 × 2 using a nuclear magnetic resonance quantum information processor. For all sets of experiments, the fidelities of the final four-qubit states are all above 96%. This experiment gives the possibility of solving a series of practical problems related to linear systems of equations and can serve as the basis to realize many potential quantum algorithms.",
"title": ""
},
{
"docid": "c3b07d5c9a88c1f9430615d5e78675b6",
"text": "Two new algorithms and associated neuron-like network architectures are proposed for solving the eigenvalue problem in real-time. The first approach is based on the solution of a set of nonlinear algebraic equations by employing optimization techniques. The second approach employs a multilayer neural network with linear artificial neurons and it exploits the continuous-time error back-propagation learning algorithm. The second approach enables us to find all the eigenvalues and the associated eigenvectors simultaneously by training the network to match some desired patterns, while the first approach is suitable to find during one run only one particular eigenvalue (e.g. an extreme eigenvalue) and the corresponding eigenvector in realtime. In order to find all eigenpairs the optimization process must be repeated in this case many times for different initial conditions. The performance and convergence behaviour of the proposed neural network architectures are investigated by extensive computer simulations.",
"title": ""
},
{
"docid": "2b09ae15fe7756df3da71cfc948e9506",
"text": "Repair of the injured spinal cord by regeneration therapy remains an elusive goal. In contrast, progress in medical care and rehabilitation has resulted in improved health and function of persons with spinal cord injury (SCI). In the absence of a cure, raising the level of achievable function in mobility and self-care will first and foremost depend on creative use of the rapidly advancing technology that has been so widely applied in our society. Building on achievements in microelectronics, microprocessing and neuroscience, rehabilitation medicine scientists have succeeded in developing functional electrical stimulation (FES) systems that enable certain individuals with SCI to use their paralyzed hands, arms, trunk, legs and diaphragm for functional purposes and gain a degree of control over bladder and bowel evacuation. This review presents an overview of the progress made, describes the current challenges and suggests ways to improve further FES systems and make these more widely available.",
"title": ""
},
{
"docid": "79e2e4af34e8a2b89d9439ff83b9fd5a",
"text": "PROBLEM\nThe current nursing workforce is composed of multigenerational staff members creating challenges and at times conflict for managers.\n\n\nMETHODS\nGenerational cohorts are defined and two multigenerational scenarios are presented and discussed using the ACORN imperatives and Hahn's Five Managerial Strategies for effectively managing a multigenerational staff.\n\n\nFINDINGS\nCommunication and respect are the underlying key strategies to understanding and bridging the generational gap in the workplace.\n\n\nCONCLUSION\nEmbracing and respecting generational differences can bring strength and cohesiveness to nursing teams on the managerial or unit level.",
"title": ""
},
{
"docid": "6ad90319d07abce021eda6f3a1d3886e",
"text": "Despite recent progress in generative image modeling, successfully generating high-resolution, diverse samples from complex datasets such as ImageNet remains an elusive goal. To this end, we train Generative Adversarial Networks at the largest scale yet attempted, and study the instabilities specific to such scale. We find that applying orthogonal regularization to the generator renders it amenable to a simple “truncation trick,” allowing fine control over the trade-off between sample fidelity and variety by truncating the latent space. Our modifications lead to models which set the new state of the art in class-conditional image synthesis. When trained on ImageNet at 128×128 resolution, our models (BigGANs) achieve an Inception Score (IS) of 166.3 and Fréchet Inception Distance (FID) of 9.6, improving over the previous best IS of 52.52 and FID of 18.65.",
"title": ""
},
{
"docid": "eba25ae59603328f3ef84c0994d46472",
"text": "We address the problem of how to personalize educational content to students in order to maximize their learning gains over time. We present a new computational approach to this problem called MAPLE (Multi-Armed Bandits based Personalization for Learning Environments) that combines difficulty ranking with multi-armed bandits. Given a set of target questions MAPLE estimates the expected learning gains for each question and uses an exploration-exploitation strategy to choose the next question to pose to the student. It maintains a personalized ranking over the difficulties of question in the target set and updates it in real-time according to students’ progress. We show in simulations that MAPLE was able to improve students’ learning gains compared to approaches that sequence questions in increasing level of difficulty, or rely on content experts. When implemented in a live e-learning system in the wild, MAPLE showed promising initial results.",
"title": ""
},
{
"docid": "78744205cf17be3ee5a61d12e6a44180",
"text": "Modeling of photovoltaic (PV) systems is essential for the designers of solar generation plants to do a yield analysis that accurately predicts the expected power output under changing environmental conditions. This paper presents a comparative analysis of PV module modeling methods based on the single-diode model with series and shunt resistances. Parameter estimation techniques within a modeling method are used to estimate the five unknown parameters in the single diode model. Two sets of estimated parameters were used to plot the I-V characteristics of two PV modules, i.e., SQ80 and KC200GT, for the different sets of modeling equations, which are classified into models 1 to 5 in this study. Each model is based on the different combinations of diode saturation current and photogenerated current plotted under varying irradiance and temperature. Modeling was done using MATLAB/Simulink software, and the results from each model were first verified for correctness against the results produced by their respective authors. Then, a comparison was made among the different models (models 1 to 5) with respect to experimentally measured and datasheet I-V curves. The resultant plots were used to draw conclusions on which combination of parameter estimation technique and modeling method best emulates the manufacturer specified characteristics.",
"title": ""
},
{
"docid": "b266069e91c24120b1732c5576087a90",
"text": "Reactions of organic molecules on Montmorillonite c lay mineral have been investigated from various asp ects. These include catalytic reactions for organic synthesis, chemical evolution, the mechanism of humus-formatio n, and environmental problems. Catalysis by clay minerals has attracted much interest recently, and many repo rts including the catalysis by synthetic or modified cl ays have been published. In this review, we will li mit the review to organic reactions using Montmorillonite clay as cat alyst.",
"title": ""
},
{
"docid": "b9652cf6647d9c7c1f91a345021731db",
"text": "Context: The processes of estimating, planning and managing are crucial for software development projects, since the results must be related to several business strategies. The broad expansion of the Internet and the global and interconnected economy make Web development projects be often characterized by expressions like delivering as soon as possible, reducing time to market and adapting to undefined requirements. In this kind of environment, traditional methodologies based on predictive techniques sometimes do not offer very satisfactory results. The rise of Agile methodologies and practices has provided some useful tools that, combined with Web Engineering techniques, can help to establish a framework to estimate, manage and plan Web development projects. Objective: This paper presents a proposal for estimating, planning and managing Web projects, by combining some existing Agile techniques with Web Engineering principles, presenting them as an unified framework which uses the business value to guide the delivery of features. Method: The proposal is analyzed by means of a case study, including a real-life project, in order to obtain relevant conclusions. Results: The results achieved after using the framework in a development project are presented, including interesting results on project planning and estimation, as well as on team productivity throughout the project. Conclusion: It is concluded that the framework can be useful in order to better manage Web-based projects, through a continuous value-based estimation and management process.",
"title": ""
},
{
"docid": "69a6cfb649c3ccb22f7a4467f24520f3",
"text": "We propose a two-stage neural model to tackle question generation from documents. First, our model estimates the probability that word sequences in a document are ones that a human would pick when selecting candidate answers by training a neural key-phrase extractor on the answers in a question-answering corpus. Predicted key phrases then act as target answers and condition a sequence-tosequence question-generation model with a copy mechanism. Empirically, our keyphrase extraction model significantly outperforms an entity-tagging baseline and existing rule-based approaches. We further demonstrate that our question generation system formulates fluent, answerable questions from key phrases. This twostage system could be used to augment or generate reading comprehension datasets, which may be leveraged to improve machine reading systems or in educational settings.",
"title": ""
},
{
"docid": "85719d4bc86c7c8bbe5799a716d6533b",
"text": "We propose Sparse Neural Network architectures that are based on random or structured bipartite graph topologies. Sparse architectures provide compression of the models learned and speed-ups of computations, they can also surpass their unstructured or fully connected counterparts. As we show, even more compact topologies of the so-called SNN (Sparse Neural Network) can be achieved with the use of structured graphs of connections between consecutive layers of neurons. In this paper, we investigate how the accuracy and training speed of the models depend on the topology and sparsity of the neural network. Previous approaches using sparcity are all based on fully connected neural network models and create sparcity during training phase, instead we explicitly define a sparse architectures of connections before the training. Building compact neural network models is coherent with empirical observations showing that there is much redundancy in learned neural network models. We show experimentally that the accuracy of the models learned with neural networks depends on ”expander-like” properties of the underlying topologies such as the spectral gap and algebraic connectivity rather than the density of the graphs of connections. 1 ar X iv :1 70 6. 05 68 3v 1 [ cs .L G ] 1 8 Ju n 20 17",
"title": ""
},
{
"docid": "e5a18d6df921ab96da8e106cdb4eeac7",
"text": "This article extends psychological methods and concepts into a domain that is as profoundly consequential as it is poorly understood: intelligence analysis. We report findings from a geopolitical forecasting tournament that assessed the accuracy of more than 150,000 forecasts of 743 participants on 199 events occurring over 2 years. Participants were above average in intelligence and political knowledge relative to the general population. Individual differences in performance emerged, and forecasting skills were surprisingly consistent over time. Key predictors were (a) dispositional variables of cognitive ability, political knowledge, and open-mindedness; (b) situational variables of training in probabilistic reasoning and participation in collaborative teams that shared information and discussed rationales (Mellers, Ungar, et al., 2014); and (c) behavioral variables of deliberation time and frequency of belief updating. We developed a profile of the best forecasters; they were better at inductive reasoning, pattern detection, cognitive flexibility, and open-mindedness. They had greater understanding of geopolitics, training in probabilistic reasoning, and opportunities to succeed in cognitively enriched team environments. Last but not least, they viewed forecasting as a skill that required deliberate practice, sustained effort, and constant monitoring of current affairs.",
"title": ""
},
{
"docid": "7e9dbc7f1c3855972dbe014e2223424c",
"text": "Speech disfluencies (filled pauses, repe titions, repairs, and false starts) are pervasive in spontaneous speech. The ab ility to detect and correct disfluencies automatically is important for effective natural language understanding, as well as to improve speech models in general. Previous approaches to disfluency detection have relied heavily on lexical information, which makes them less applicable when word recognition is unreliable. We have developed a disfluency detection method using decision tree classifiers that use only local and automatically extracted prosodic features. Because the model doesn’t rely on lexical information, it is widely applicable even when word recognition is unreliable. The model performed significantly better than chance at detecting four disfluency types. It also outperformed a language model in the detection of false starts, given the correct transcription. Combining the prosody model with a specialized language model improved accuracy over either model alone for the detection of false starts. Results suggest that a prosody-only model can aid the automatic detection of disfluencies in spontaneous speech.",
"title": ""
},
{
"docid": "7340866fa3965558e1571bcc5294b896",
"text": "The human stress response has been characterized, both physiologically and behaviorally, as \"fight-or-flight.\" Although fight-or-flight may characterize the primary physiological responses to stress for both males and females, we propose that, behaviorally, females' responses are more marked by a pattern of \"tend-and-befriend.\" Tending involves nurturant activities designed to protect the self and offspring that promote safety and reduce distress; befriending is the creation and maintenance of social networks that may aid in this process. The biobehavioral mechanism that underlies the tend-and-befriend pattern appears to draw on the attachment-caregiving system, and neuroendocrine evidence from animal and human studies suggests that oxytocin, in conjunction with female reproductive hormones and endogenous opioid peptide mechanisms, may be at its core. This previously unexplored stress regulatory system has manifold implications for the study of stress.",
"title": ""
},
{
"docid": "ad2546a681a3b6bcef689f0bb71636b5",
"text": "Data and computation integrity and security are major concerns for users of cloud computing facilities. Many production-level clouds optimistically assume that all cloud nodes are equally trustworthy when dispatching jobs; jobs are dispatched based on node load, not reputation. This increases their vulnerability to attack, since compromising even one node suffices to corrupt the integrity of many distributed computations. This paper presents and evaluates Hatman: the first full-scale, data-centric, reputation-based trust management system for Hadoop clouds. Hatman dynamically assesses node integrity by comparing job replica outputs for consistency. This yields agreement feedback for a trust manager based on EigenTrust. Low overhead and high scalability is achieved by formulating both consistency-checking and trust management as secure cloud computations; thus, the cloud's distributed computing power is leveraged to strengthen its security. Experiments demonstrate that with feedback from only 100 jobs, Hatman attains over 90% accuracy when 25% of the Hadoop cloud is malicious.",
"title": ""
}
] |
scidocsrr
|
03ace445db37807e2c9f592683978456
|
Filicide-suicide: common factors in parents who kill their children and themselves.
|
[
{
"docid": "5636a228fea893cd48cebe15f72c0bb0",
"text": "A familicide is a multiple-victim homicide incident in which the killer’s spouse and one or more children are slain. National archives of Canadian and British homicides, containing 109 familicide incidents, permit some elucidation of the characteristic and epidemiology of this crime. Familicides were almost exclusively perpetrated by men, unlike other spouse-killings and other filicides. Half the familicidal men killed themselves as well, a much higher rate of suicide than among other uxoricidal or filicidal men. De facto unions were overrepresented, compared to their prevalence in the populations-atlarge, but to a much lesser extent in familicides than in other uxoricides. Stepchildren were overrepresented as familicide victims, compared to their numbers in the populations-at-large, but to a much lesser extent than in other filicides; unlike killers of their genetic offspring, men who killed their stepchildren were rarely suicidal. An initial binary categorization of familicides as accusatory versus despondent is tentatively proposed. @ 19% wiley-Liss, Inc.",
"title": ""
}
] |
[
{
"docid": "773bd34632ce1afe27f994edf906fea3",
"text": "Crossed-guide X-band waveguide couplers with bandwidths of up to 40% and coupling factors of better than 5 dB are presented. The tight coupling and wide bandwidth are achieved by using reduced height waveguide. Design graphs and measured data are presented.",
"title": ""
},
{
"docid": "bc03f442a0785b4179f6eefb2c5d0a35",
"text": "Internet of Things (IoT)-generated data are characterized by its continuous generation, large amount, and unstructured format. The existing relational database technologies are inadequate to handle such IoT-generated data due to the limited processing speed and the significant storage-expansion cost. Thus, big data processing technologies, which are normally based on distributed file systems, distributed database management, and parallel processing technologies, have arisen as a core technology to implement IoT-generated data repositories. In this paper, we propose a sensor-integrated radio frequency identification (RFID) data repository-implementation model using MongoDB, the most popular big data-savvy document-oriented database system now. First, we devise a data repository schema that can effectively integrate and store the heterogeneous IoT data sources, such as RFID, sensor, and GPS, by extending the event data types in electronic product code information services standard, a de facto standard for the information exchange services for RFID-based traceability. Second, we propose an effective shard key to maximize query speed and uniform data distribution over data servers. Last, through a series of experiments measuring query speed and the level of data distribution, we show that the proposed design strategy, which is based on horizontal data partitioning and a compound shard key, is effective and efficient for the IoT-generated RFID/sensor big data.",
"title": ""
},
{
"docid": "6eb4eb9b80b73bdcd039dfc8e07c3f5a",
"text": "Code duplication or copying a code fragment and then reuse by pasting with or without any modifications is a well known code smell in software maintenance. Several studies show that about 5% to 20% of a software systems can contain duplicated code, which is basically the results of copying existing code fragments and using then by pasting with or without minor modifications. One of the major shortcomings of such duplicated fragments is that if a bug is detected in a code fragment, all the other fragments similar to it should be investigated to check the possible existence of the same bug in the similar fragments. Refactoring of the duplicated code is another prime issue in software maintenance although several studies claim that refactoring of certain clones are not desirable and there is a risk of removing them. However, it is also widely agreed that clones should at least be detected. In this paper, we survey the state of the art in clone detection research. First, we describe the clone terms commonly used in the literature along with their corresponding mappings to the commonly used clone types. Second, we provide a review of the existing clone taxonomies, detection approaches and experimental evaluations of clone detection tools. Applications of clone detection research to other domains of software engineering and in the same time how other domain can assist clone detection research have also been pointed out. Finally, this paper concludes by pointing out several open problems related to clone detection research. ∗This document represents our initial findings and a further study is being carried on. Reader’s feedback is welcome at [email protected].",
"title": ""
},
{
"docid": "858f15a9fc0e014dd9ffa953ac0e70f7",
"text": "Canny (IEEE Trans. Pattern Anal. Image Proc. 8(6):679-698, 1986) suggested that an optimal edge detector should maximize both signal-to-noise ratio and localization, and he derived mathematical expressions for these criteria. Based on these criteria, he claimed that the optimal step edge detector was similar to a derivative of a gaussian. However, Canny’s work suffers from two problems. First, his derivation of localization criterion is incorrect. Here we provide a more accurate localization criterion and derive the optimal detector from it. Second, and more seriously, the Canny criteria yield an infinitely wide optimal edge detector. The width of the optimal detector can however be limited by considering the effect of the neighbouring edges in the image. If we do so, we find that the optimal step edge detector, according to the Canny criteria, is the derivative of an ISEF filter, proposed by Shen and Castan (Graph. Models Image Proc. 54:112–133, 1992). In addition, if we also consider detecting blurred (or non-sharp) gaussian edges of different widths, we find that the optimal blurred-edge detector is the above optimal step edge detector convolved with a gaussian. This implies that edge detection must be performed at multiple scales to cover all the blur widths in the image. We derive a simple scale selection procedure for edge detection, and demonstrate it in one and two dimensions.",
"title": ""
},
{
"docid": "767179a47047435dd2d49db15598c2ef",
"text": "We determine when a join/outerjoin query can be expressed unambiguously as a query graph, without an explicit specification of the order of evaluation. To do so, we first characterize the set of expression trees that implement a given join/outerjoin query graph, and investigate the existence of transformations among the various trees. Our main theorem is that a join/outerjoin query is freely reorderable if the query graph derived from it falls within a particular class, every tree that “implements” such a graph evaluates to the same result.\nThe result has applications to language design and query optimization. Languages that generate queries within such a class do not require the user to indicate priority among join operations, and hence may present a simplified syntax. And it is unnecessary to add extensive analyses to a conventional query optimizer in order to generate legal reorderings for a freely-reorderable language.",
"title": ""
},
{
"docid": "79fdfee8b42fe72a64df76e64e9358bc",
"text": "An algorithm is described to solve multiple-phase optimal control problems using a recently developed numerical method called the Gauss pseudospectral method. The algorithm is well suited for use in modern vectorized programming languages such as FORTRAN 95 and MATLAB. The algorithm discretizes the cost functional and the differential-algebraic equations in each phase of the optimal control problem. The phases are then connected using linkage conditions on the state and time. A large-scale nonlinear programming problem (NLP) arises from the discretization and the significant features of the NLP are described in detail. A particular reusable MATLAB implementation of the algorithm, called GPOPS, is applied to three classical optimal control problems to demonstrate its utility. The algorithm described in this article will provide researchers and engineers a useful software tool and a reference when it is desired to implement the Gauss pseudospectral method in other programming languages.",
"title": ""
},
{
"docid": "b499ded5996db169e65282dd8b65f289",
"text": "For complex tasks, such as manipulation and robot navigation, reinforcement learning (RL) is well-known to be difficult due to the curse of dimensionality. To overcome this complexity and making RL feasible, hierarchical RL (HRL) has been suggested. The basic idea of HRL is to divide the original task into elementary subtasks, which can be learned using RL. In this paper, we propose a HRL architecture for learning robot’s movements, e.g. robot navigation. The proposed HRL consists of two layers: (i) movement planning and (ii) movement execution. In the planning layer, e.g. generating navigation trajectories, discrete RL is employed while using movement primitives. Given the movement planning and corresponding primitives, the policy for the movement execution can be learned in the second layer using continuous RL. The proposed approach is implemented and evaluated on a mobile robot platform for a",
"title": ""
},
{
"docid": "8a325971d268cafc25845654c8a520cf",
"text": "Lokale onkologische Tumorkontrolle bei malignen Knochentumoren. Erhalt der Arm- und Handfunktion ab Ellenbogen mit der Möglichkeit, die Hand zum Mund zu führen. Vermeiden der Amputation. Stabile Aufhängung des Arms im Schulter-/Neogelenk. Primäre Knochensarkome des proximalen Humerus oder der Skapula mit Gelenkbeteiligung ohne Infiltration der Gefäßnervenstraße bei Primärmanifestation. Knochenmetastasen solider Tumoren mit großen Knochendefekten bei Primärmanifestation in palliativer/kurativer Intention oder im Revisions-/Rezidivfall nach Versagen vorhergehender Versorgungen. Tumorinfiltration der Gefäßnervenstraße. Fehlende Möglichkeit der muskulären Prothesendeckung durch ausgeprägte Tumorinfiltration der Oberarmweichteile. Transdeltoidaler Zugang unter Splitt der Deltamuskulatur. Präparation des tumortragenden Humerus unter langstreckiger Freilegung des Gefäßnervenbündels. Belassen eines onkologisch ausreichenden allseitigen Sicherheitsabstands auf dem Resektat sowohl seitens der Weichteile als auch des knöchernen Absetzungsrands. Zementierte oder zementfreie Implantation der Tumorprothese. Rekonstruktion des Gelenks und Fixation des Arms unter Verwendung eines Anbindungsschlauchs. Ggf. Bildung eines artifiziellen Gelenks bei extraartikulärer Resektion. Möglichst anatomische Refixation der initial abgesetzten Muskulatur auf dem Implantat zur Wiederherstellung der Funktion. Lagerung des Arms im z. B. Gilchrist-Verband für 4–6 Wochen postoperativ. Passive Beübung im Ellenbogengelenk nach 3–4 Wochen. Aktive Beübung der Schulter und des Ellenbogengelenks frühestens nach 4–6 Wochen. Lymphdrainage und Venenpumpe ab dem 1.–2. postoperativen Tag. The aim of the operation is local tumor control in malignant primary and secondary bone tumors of the proximal humerus. Limb salvage and preservation of function with the ability to lift the hand to the mouth. Stable suspension of the arm in the shoulder joint or the artificial joint. Primary malignant bone tumors of the proximal humerus or the scapula with joint infiltration but without involvement of the vessel/nerve bundle. Metastases of solid tumors with osteolytic defects in palliative or curative intention or after failure of primary osteosynthesis. Tumor infiltration of the vessel/nerve bundle. Massive tumor infiltration of the soft tissues without the possibility of sufficient soft tissue coverage of the implant. Transdeltoid approach with splitting of the deltoid muscle. Preparation and removal of the tumor-bearing humerus with exposure of the vessel/nerve bundle. Ensure an oncologically sufficient soft tissue and bone margin in all directions of the resection. Cementless or cemented stem implantation. Reconstruction of the joint capsule and fixation of the prosthesis using a synthetic tube. Soft tissue coverage of the prosthesis with anatomical positioning of the muscle to regain function. Immobilization of the arm/shoulder joint for 4–6 weeks in a Gilchrist bandage. Passive mobilization of the elbow joint after 3–4 weeks. Active mobilization of the shoulder and elbow joint at the earliest after 4–6 weeks.",
"title": ""
},
{
"docid": "befc74d8dc478a67c009894c3ef963d3",
"text": "In this paper, we demonstrate that the essentials of image classification and retrieval are the same, since both tasks could be tackled by measuring the similarity between images. To this end, we propose ONE (Online Nearest-neighbor Estimation), a unified algorithm for both image classification and retrieval. ONE is surprisingly simple, which only involves manual object definition, regional description and nearest-neighbor search. We take advantage of PCA and PQ approximation and GPU parallelization to scale our algorithm up to large-scale image search. Experimental results verify that ONE achieves state-of-the-art accuracy in a wide range of image classification and retrieval benchmarks.",
"title": ""
},
{
"docid": "dc3495ec93462e68f606246205a8416d",
"text": "State-of-the-art methods for zero-shot visual recognition formulate learning as a joint embedding problem of images and side information. In these formulations the current best complement to visual features are attributes: manually-encoded vectors describing shared characteristics among categories. Despite good performance, attributes have limitations: (1) finer-grained recognition requires commensurately more attributes, and (2) attributes do not provide a natural language interface. We propose to overcome these limitations by training neural language models from scratch, i.e. without pre-training and only consuming words and characters. Our proposed models train end-to-end to align with the fine-grained and category-specific content of images. Natural language provides a flexible and compact way of encoding only the salient visual aspects for distinguishing categories. By training on raw text, our model can do inference on raw text as well, providing humans a familiar mode both for annotation and retrieval. Our model achieves strong performance on zero-shot text-based image retrieval and significantly outperforms the attribute-based state-of-the-art for zero-shot classification on the Caltech-UCSD Birds 200-2011 dataset.",
"title": ""
},
{
"docid": "86497dcdfd05162804091a3368176ad5",
"text": "This paper reviews the current status and implementation of battery chargers, charging power levels and infrastructure for plug-in electric vehicles and hybrids. Battery performance depends both on types and design of the batteries, and on charger characteristics and charging infrastructure. Charger systems are categorized into off-board and on-board types with unidirectional or bidirectional power flow. Unidirectional charging limits hardware requirements and simplifies interconnection issues. Bidirectional charging supports battery energy injection back to the grid. Typical onboard chargers restrict the power because of weight, space and cost constraints. They can be integrated with the electric drive for avoiding these problems. The availability of a charging infrastructure reduces on-board energy storage requirements and costs. On-board charger systems can be conductive or inductive. While conductive chargers use direct contact, inductive chargers transfer power magnetically. An off-board charger can be designed for high charging rates and is less constrained by size and weight. Level 1 (convenience), Level 2 (primary), and Level 3 (fast) power levels are discussed. These system configurations vary from country to country depending on the source and plug capacity standards. Various power level chargers and infrastructure configurations are presented, compared, and evaluated based on amount of power, charging time and location, cost, equipment, effect on the grid, and other factors.",
"title": ""
},
{
"docid": "19937d689287ba81d2d01efd9ce8f2e4",
"text": "We present a fast, fully parameterizable GPU implementation of Convolutional Neural Network variants. Our feature extractors are neither carefully designed nor pre-wired, but rather learned in a supervised way. Our deep hierarchical architectures achieve the best published results on benchmarks for object classification (NORB, CIFAR10) and handwritten digit recognition (MNIST), with error rates of 2.53%, 19.51%, 0.35%, respectively. Deep nets trained by simple back-propagation perform better than more shallow ones. Learning is surprisingly rapid. NORB is completely trained within five epochs. Test error rates on MNIST drop to 2.42%, 0.97% and 0.48% after 1, 3 and 17 epochs, respectively.",
"title": ""
},
{
"docid": "79833f074b2e06d5c56898ca3f008c00",
"text": "Regular expressions have served as the dominant workhorse of practical information extraction for several years. However, there has been little work on reducing the manual effort involved in building high-quality, complex regular expressions for information extraction tasks. In this paper, we propose ReLIE, a novel transformation-based algorithm for learning such complex regular expressions. We evaluate the performance of our algorithm on multiple datasets and compare it against the CRF algorithm. We show that ReLIE, in addition to being an order of magnitude faster, outperforms CRF under conditions of limited training data and cross-domain data. Finally, we show how the accuracy of CRF can be improved by using features extracted by ReLIE.",
"title": ""
},
{
"docid": "14cc42c141a420cb354473a38e755091",
"text": "During software evolution, information about changes between different versions of a program is useful for a number of software engineering tasks. For example, configuration-management systems can use change information to assess possible conflicts among updates from different users. For another example, in regression testing, knowledge about which parts of a program are unchanged can help in identifying test cases that need not be rerun. For many of these tasks, a purely syntactic differencing may not provide enough information for the task to be performed effectively. This problem is especially relevant in the case of object-oriented software, for which a syntactic change can have subtle and unforeseen effects. In this paper, we present a technique for comparing object-oriented programs that identifies both differences and correspondences between two versions of a program. The technique is based on a representation that handles object-oriented features and, thus, can capture the behavior of object-oriented programs. We also present JDiff, a tool that implements the technique for Java programs. Finally, we present the results of four empirical studies, performed on many versions of two medium-sized subjects, that show the efficiency and effectiveness of the technique when used on real programs.",
"title": ""
},
{
"docid": "053b069a59b938c183c19e2938f89e66",
"text": "This paper examines the role and value of information security awareness efforts in defending against social engineering attacks. It categories the different social engineering threats and tactics used in targeting employees and the approaches to defend against such attacks. While we review these techniques, we attempt to develop a thorough understanding of human security threats, with a suitable balance between structured improvements to defend human weaknesses, and efficiently focused security training and awareness building. Finally, the paper shows that a multi-layered shield can mitigate various security risks and minimize the damage to systems and data.",
"title": ""
},
{
"docid": "da476e5448fa34e9f6fd7034dfa53576",
"text": "In this paper we propose a multi-agent approach for traffic-light control. According to this approach, our system consists of agents and their world. In this context, the world consists of cars, road networks, traffic lights, etc. Each of these agents controls all traffic lights at one road junction by an observe-think-act cycle. That is, each agent repeatedly observes the current traffic condition surrounding its junction, and then uses this information to reason with condition-action rules to determine in what traffic condition how the agent can efficiently control the traffic flows at its junction, or collaborate with neighboring agents so that they can efficiently control the traffic flows, at their junctions, in such a way that would affect the traffic flows at its junction. This research demonstrates that a rather complicated problem of traffic-light control on a large road network can be solved elegantly by our rule-based multi-agent approach.",
"title": ""
},
{
"docid": "51505087f5ae1a9f57fe04f5e9ad241e",
"text": "Microblogs have recently received widespread interest from NLP researchers. However, current tools for Japanese word segmentation and POS tagging still perform poorly on microblog texts. We developed an annotated corpus and proposed a joint model for overcoming this situation. Our annotated corpus of microblog texts enables not only training of accurate statistical models but also quantitative evaluation of their performance. Our joint model with lexical normalization handles the orthographic diversity of microblog texts. We conducted an experiment to demonstrate that the corpus and model substantially contribute to boosting accuracy.",
"title": ""
},
{
"docid": "ed3ed757804a423eef8b7394b64a971a",
"text": "This work is part of an eort aimed at developing computer-based systems for language instruction; we address the task of grading the pronunciation quality of the speech of a student of a foreign language. The automatic grading system uses SRI's Decipher continuous speech recognition system to generate phonetic segmentations. Based on these segmentations and probabilistic models we produce dierent pronunciation scores for individual or groups of sentences that can be used as predictors of the pronunciation quality. Dierent types of these machine scores can be combined to obtain a better prediction of the overall pronunciation quality. In this paper we review some of the bestperforming machine scores and discuss the application of several methods based on linear and nonlinear mapping and combination of individual machine scores to predict the pronunciation quality grade that a human expert would have given. We evaluate these methods in a database that consists of pronunciation-quality-graded speech from American students speaking French. With predictors based on spectral match and on durational characteristics, we ®nd that the combination of scores improved the prediction of the human grades and that nonlinear mapping and combination methods performed better than linear ones. Characteristics of the dierent nonlinear methods studied are discussed. Ó 2000 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "e48f1b661691f941ea9c648c2c597b84",
"text": "Cloud Gaming is a new kind of service, which combines the successful concepts of Cloud Computing and Online Gaming. It provides the entire game experience to the users remotely from a data center. The player is no longer dependent on a specific type or quality of gaming hardware, but is able to use common devices. The end device only needs a broadband internet connection and the ability to display High Definition (HD) video. While this may reduce hardware costs for users and increase the revenue for developers by leaving out the retail chain, it also raises new challenges for service quality in terms of bandwidth and latency for the underlying network. In this paper we present the results of a subjective user study we conducted into the user-perceived quality of experience (QoE) in Cloud Gaming. We design a measurement environment, that emulates this new type of service, define tests for users to assess the QoE, derive Key Influence Factors (KIF) and influences of content and perception from our results. © 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "959b487a51ae87b2d993e6f0f6201513",
"text": "The two-wheel differential drive mobile robots, are one of the simplest and most used structures in mobile robotics applications, it consists of a chassis with two fixed and in-line with each other electric motors. This paper presents new models for differential drive mobile robots and some considerations regarding design, modeling and control solutions. The presented models are to be used to help in facing the two top challenges in developing mechatronic mobile robots system; early identifying system level problems and ensuring that all design requirements are met, as well as, to simplify and accelerate Mechatronics mobile robots design process, including proper selection, analysis, integration and verification of the overall system and sub-systems performance throughout the development process.",
"title": ""
}
] |
scidocsrr
|
a67a6049fe809bf7f232ba7aed418aa2
|
Use of SIMD Vector Operations to Accelerate Application Code Performance on Low-Powered ARM and Intel Platforms
|
[
{
"docid": "9200498e7ef691b83bf804d4c5581ba2",
"text": "Mobile computer-vision technology will soon become as ubiquitous as touch interfaces.",
"title": ""
}
] |
[
{
"docid": "39070a1f503e60b8709050fc2a250378",
"text": "Plants in their natural habitats adapt to drought stress in the environment through a variety of mechanisms, ranging from transient responses to low soil moisture to major survival mechanisms of escape by early flowering in absence of seasonal rainfall. However, crop plants selected by humans to yield products such as grain, vegetable, or fruit in favorable environments with high inputs of water and fertilizer are expected to yield an economic product in response to inputs. Crop plants selected for their economic yield need to survive drought stress through mechanisms that maintain crop yield. Studies on model plants for their survival under stress do not, therefore, always translate to yield of crop plants under stress, and different aspects of drought stress response need to be emphasized. The crop plant model rice ( Oryza sativa) is used here as an example to highlight mechanisms and genes for adaptation of crop plants to drought stress.",
"title": ""
},
{
"docid": "d7a143bdb62e4aaeaf18b0aabe35588e",
"text": "BACKGROUND\nShort-acting insulin analogue use for people with diabetes is still controversial, as reflected in many scientific debates.\n\n\nOBJECTIVES\nTo assess the effects of short-acting insulin analogues versus regular human insulin in adults with type 1 diabetes.\n\n\nSEARCH METHODS\nWe carried out the electronic searches through Ovid simultaneously searching the following databases: Ovid MEDLINE(R), Ovid MEDLINE(R) In-Process & Other Non-Indexed Citations, Ovid MEDLINE(R) Daily and Ovid OLDMEDLINE(R) (1946 to 14 April 2015), EMBASE (1988 to 2015, week 15), the Cochrane Central Register of Controlled Trials (CENTRAL; March 2015), ClinicalTrials.gov and the European (EU) Clinical Trials register (both March 2015).\n\n\nSELECTION CRITERIA\nWe included all randomised controlled trials with an intervention duration of at least 24 weeks that compared short-acting insulin analogues with regular human insulins in the treatment of adults with type 1 diabetes who were not pregnant.\n\n\nDATA COLLECTION AND ANALYSIS\nTwo review authors independently extracted data and assessed trials for risk of bias, and resolved differences by consensus. We graded overall study quality using the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) instrument. We used random-effects models for the main analyses and presented the results as odds ratios (OR) with 95% confidence intervals (CI) for dichotomous outcomes.\n\n\nMAIN RESULTS\nWe identified nine trials that fulfilled the inclusion criteria including 2693 participants. The duration of interventions ranged from 24 to 52 weeks with a mean of about 37 weeks. The participants showed some diversity, mainly with regard to diabetes duration and inclusion/exclusion criteria. The majority of the trials were carried out in the 1990s and participants were recruited from Europe, North America, Africa and Asia. None of the trials was carried out in a blinded manner so that the risk of performance bias, especially for subjective outcomes such as hypoglycaemia, was present in all of the trials. Furthermore, several trials showed inconsistencies in the reporting of methods and results.The mean difference (MD) in glycosylated haemoglobin A1c (HbA1c) was -0.15% (95% CI -0.2% to -0.1%; P value < 0.00001; 2608 participants; 9 trials; low quality evidence) in favour of insulin analogues. The comparison of the risk of severe hypoglycaemia between the two treatment groups showed an OR of 0.89 (95% CI 0.71 to 1.12; P value = 0.31; 2459 participants; 7 trials; very low quality evidence). For overall hypoglycaemia, also taking into account mild forms of hypoglycaemia, the data were generally of low quality, but also did not indicate substantial group differences. Regarding nocturnal severe hypoglycaemic episodes, two trials reported statistically significant effects in favour of the insulin analogue, insulin aspart. However, due to inconsistent reporting in publications and trial reports, the validity of the result remains questionable.We also found no clear evidence for a substantial effect of insulin analogues on health-related quality of life. However, there were few results only based on subgroups of the trial populations. None of the trials reported substantial effects regarding weight gain or any other adverse events. 
No trial was designed to investigate possible long-term effects (such as all-cause mortality, diabetic complications), in particular in people with diabetes related complications.\n\n\nAUTHORS' CONCLUSIONS\nOur analysis suggests only a minor benefit of short-acting insulin analogues on blood glucose control in people with type 1 diabetes. To make conclusions about the effect of short acting insulin analogues on long-term patient-relevant outcomes, long-term efficacy and safety data are needed.",
"title": ""
},
{
"docid": "91123d18f56d5aef473394e871c099ec",
"text": "Image-to-Image translation was proposed as a general form of many image learning problems. While generative adversarial networks were successfully applied on many image-to-image translations, many models were limited to specific translation tasks and were difficult to satisfy practical needs. In this work, we introduce a One-to-Many conditional generative adversarial network, which could learn from heterogeneous sources of images. This is achieved by training multiple generators against a discriminator in synthesized learning way. This framework supports generative models to generate images in each source, so output images follow corresponding target patterns. Two implementations, hybrid fake and cascading learning, of the synthesized adversarial training scheme are also proposed, and experimented on two benchmark datasets, UTZap50K and MVOD5K, as well as a new high-quality dataset BehTex7K. We consider five challenging image-to-image translation tasks: edges-to-photo, edges-to-similar-photo translation on UTZap50K, cross-view translation on MVOD5K, and grey-to-color, grey-to-Oil-Paint on BehTex7K. We show that both implementations are able to faithfully translate from an image to another image in edges-to-photo, edges-to-similar-photo, grey-to-color, and grey-to-Oil-Paint translation tasks. The quality of output images in cross-view translation need to be further boosted.",
"title": ""
},
{
"docid": "e0fb10bf5f0206c8cf3f97f5daa33fc0",
"text": "Existing techniques on adversarial malware generation employ feature mutations based on feature vectors extracted from malware. However, most (if not all) of these techniques suffer from a common limitation: feasibility of these attacks is unknown. The synthesized mutations may break the inherent constraints posed by code structures of the malware, causing either crashes or malfunctioning of malicious payloads. To address the limitation, we present Malware Recomposition Variation (MRV), an approach that conducts semantic analysis of existing malware to systematically construct new malware variants for malware detectors to test and strengthen their detection signatures/models. In particular, we use two variation strategies (i.e., malware evolution attack and malware confusion attack) following structures of existing malware to enhance feasibility of the attacks. Upon the given malware, we conduct semantic-feature mutation analysis and phylogenetic analysis to synthesize mutation strategies. Based on these strategies, we perform program transplantation to automatically mutate malware bytecode to generate new malware variants. We evaluate our MRV approach on actual malware variants, and our empirical evaluation on 1,935 Android benign apps and 1,917 malware shows that MRV produces malware variants that can have high likelihood to evade detection while still retaining their malicious behaviors. We also propose and evaluate three defense mechanisms to counter MRV.",
"title": ""
},
{
"docid": "20f05b48fa88283d649a3bcadf2ed818",
"text": "A great variety of native and introduced plant species were used as foods, medicines and raw materials by the Rumsen and Mutsun Costanoan peoples of central California. The information presented here has been abstracted from original unpublished field notes recorded during the 1920s and 1930s by John Peabody Harrington, who also directed the collection of some 500 plant specimens. The nature of Harrington’s data and their significance for California ethnobotany are described, followed by a summary of information on the ethnographic uses of each plant.",
"title": ""
},
{
"docid": "6b8be9199593200a58b4d265687fb1ae",
"text": "China is a large agricultural country with the largest population in the world. This creates a high demand for food, which is prompting the study of high quality and high-yielding crops. China's current agricultural production is sufficient to feed the nation; however, compared with developed countries agricultural farming is still lagging behind, mainly due to the fact that the system of growing agricultural crops is not based on maximizing output, the latter would include scientific sowing, irrigation and fertilization. In the past few years many seasonal fruits have been offered for sale in markets, but these crops are grown in traditional backward agricultural greenhouses and large scale changes are needed to modernize production. The reform of small-scale greenhouse agricultural production is relatively easy and could be implemented. The concept of the Agricultural Internet of Things utilizes networking technology in agricultural production, the hardware part of this agricultural IoT include temperature, humidity and light sensors and processors with a large data processing capability; these hardware devices are connected by short-distance wireless communication technology, such as Bluetooth, WIFI or Zigbee. In fact, Zigbee technology, because of its convenient networking and low power consumption, is widely used in the agricultural internet. The sensor network is combined with well-established web technology, in the form of a wireless sensor network, to remotely control and monitor data from the sensors.In this paper a smart system of greenhouse management based on the Internet of Things is proposed using sensor networks and web-based technologies. The system consists of sensor networks and asoftware control system. The sensor network consists of the master control center and various sensors using Zigbee protocols. The hardware control center communicates with a middleware system via serial network interface converters. The middleware communicates with a hardware network using an underlying interface and it also communicates with a web system using an upper interface. The top web system provides users with an interface to view and manage the hardware facilities ; administrators can thus view the status of agricultural greenhouses and issue commands to the sensors through this system in order to remotely manage the temperature, humidity and irrigation in the greenhouses. The main topics covered in this paper are:1. To research the current development of new technologies applicable to agriculture and summarizes the strong points concerning the application of the Agricultural Internet of Things both at home and abroad. Also proposed are some new methods of agricultural greenhouse management.2. An analysis of system requirements, the users’ expectations of the system and the response to needs analysis, and the overall design of the system to determine it’s architecture.3. Using software engineering to ensure that functional modules of the system, as far as possible, meet the requirements of high cohesion and low coupling between modules, also detailed design and implementation of each module is considered.",
"title": ""
},
{
"docid": "0366ab38a45f45a8655f4beb6d11d358",
"text": "BACKGROUND\nDeep learning methods for radiomics/computer-aided diagnosis (CADx) are often prohibited by small datasets, long computation time, and the need for extensive image preprocessing.\n\n\nAIMS\nWe aim to develop a breast CADx methodology that addresses the aforementioned issues by exploiting the efficiency of pre-trained convolutional neural networks (CNNs) and using pre-existing handcrafted CADx features.\n\n\nMATERIALS & METHODS\nWe present a methodology that extracts and pools low- to mid-level features using a pretrained CNN and fuses them with handcrafted radiomic features computed using conventional CADx methods. Our methodology is tested on three different clinical imaging modalities (dynamic contrast enhanced-MRI [690 cases], full-field digital mammography [245 cases], and ultrasound [1125 cases]).\n\n\nRESULTS\nFrom ROC analysis, our fusion-based method demonstrates, on all three imaging modalities, statistically significant improvements in terms of AUC as compared to previous breast cancer CADx methods in the task of distinguishing between malignant and benign lesions. (DCE-MRI [AUC = 0.89 (se = 0.01)], FFDM [AUC = 0.86 (se = 0.01)], and ultrasound [AUC = 0.90 (se = 0.01)]).\n\n\nDISCUSSION/CONCLUSION\nWe proposed a novel breast CADx methodology that can be used to more effectively characterize breast lesions in comparison to existing methods. Furthermore, our proposed methodology is computationally efficient and circumvents the need for image preprocessing.",
"title": ""
},
{
"docid": "ca75798a9090810682f99400f6a8ff4e",
"text": "We present the first empirical analysis of Bitcoin-based scams: operations established with fraudulent intent. By amalgamating reports gathered by voluntary vigilantes and tracked in online forums, we identify 192 scams and categorize them into four groups: Ponzi schemes, mining scams, scam wallets and fraudulent exchanges. In 21% of the cases, we also found the associated Bitcoin addresses, which enables us to track payments into and out of the scams. We find that at least $11 million has been contributed to the scams from 13 000 distinct victims. Furthermore, we present evidence that the most successful scams depend on large contributions from a very small number of victims. Finally, we discuss ways in which the scams could be countered.",
"title": ""
},
{
"docid": "a129f0b1c95e17d7e6a587121b267fa9",
"text": "Gait analysis using wearable sensors is an inexpensive, convenient, and efficient manner of providing useful information for multiple health-related applications. As a clinical tool applied in the rehabilitation and diagnosis of medical conditions and sport activities, gait analysis using wearable sensors shows great prospects. The current paper reviews available wearable sensors and ambulatory gait analysis methods based on the various wearable sensors. After an introduction of the gait phases, the principles and features of wearable sensors used in gait analysis are provided. The gait analysis methods based on wearable sensors is divided into gait kinematics, gait kinetics, and electromyography. Studies on the current methods are reviewed, and applications in sports, rehabilitation, and clinical diagnosis are summarized separately. With the development of sensor technology and the analysis method, gait analysis using wearable sensors is expected to play an increasingly important role in clinical applications.",
"title": ""
},
{
"docid": "f6feb6789c0c9d2d5c354e73d2aaf9ad",
"text": "In this paper we present SimpleElastix, an extension of SimpleITK designed to bring the Elastix medical image registration library to a wider audience. Elastix is a modular collection of robust C++ image registration algorithms that is widely used in the literature. However, its command-line interface introduces overhead during prototyping, experimental setup, and tuning of registration algorithms. By integrating Elastix with SimpleITK, Elastix can be used as a native library in Python, Java, R, Octave, Ruby, Lua, Tcl and C# on Linux, Mac and Windows. This allows Elastix to intregrate naturally with many development environments so the user can focus more on the registration problem and less on the underlying C++ implementation. As means of demonstration, we show how to register MR images of brains and natural pictures of faces using minimal amount of code. SimpleElastix is open source, licensed under the permissive Apache License Version 2.0 and available at https://github.com/kaspermarstal/SimpleElastix.",
"title": ""
},
{
"docid": "a0f24500f3729b0a2b6e562114eb2a45",
"text": "In this work, the smallest reported inkjet-printed UWB antenna is proposed that utilizes a fractal matching network to increase the performance of a UWB microstrip monopole. The antenna is inkjet-printed on a paper substrate to demonstrate the ability to produce small and low-cost UWB antennas with inkjet-printing technology which can enable compact, low-cost, and environmentally friendly wireless sensor network.",
"title": ""
},
{
"docid": "35b286999957396e1f5cab6e2370ed88",
"text": "Text summarization condenses a text to a shorter version while retaining the important informations. Abstractive summarization is a recent development that generates new phrases, rather than simply copying or rephrasing sentences within the original text. Recently neural sequence-to-sequence models have achieved good results in the field of abstractive summarization, which opens new possibilities and applications for industrial purposes. However, most practitioners observe that these models still use large parts of the original text in the output summaries, making them often similar to extractive frameworks. To address this drawback, we first introduce a new metric to measure how much of a summary is extracted from the input text. Secondly, we present a novel method, that relies on a diversity factor in computing the neural network loss, to improve the diversity of the summaries generated by any neural abstractive model implementing beam search. Finally, we show that this method not only makes the system less extractive, but also improves the overall rouge score of state-of-the-art methods by at least 2 points.",
"title": ""
},
{
"docid": "013b0ae55c64f322d61e1bf7e8d4c55a",
"text": "Binary neural networks for object recognition are desirable especially for small and embedded systems because of their arithmetic and memory efficiency coming from the restriction of the bit-depth of network weights and activations. Neural networks in general have a tradeoff between the accuracy and efficiency in choosing a model architecture, and this tradeoff matters more for binary networks because of the limited bit-depth. This paper then examines the performance of binary networks by modifying architecture parameters (depth and width parameters) and reports the best-performing settings for specific datasets. These findings will be useful for designing binary networks for practical uses.",
"title": ""
},
{
"docid": "64bcd606e039f731aec7cc4722a4d3cb",
"text": "Current neural network-based classifiers are susceptible to adversarial examples even in the black-box setting, where the attacker only has query access to the model. In practice, the threat model for real-world systems is often more restrictive than the typical black-box model where the adversary can observe the full output of the network on arbitrarily many chosen inputs. We define three realistic threat models that more accurately characterize many real-world classifiers: the query-limited setting, the partialinformation setting, and the label-only setting. We develop new attacks that fool classifiers under these more restrictive threat models, where previous methods would be impractical or ineffective. We demonstrate that our methods are effective against an ImageNet classifier under our proposed threat models. We also demonstrate a targeted black-box attack against a commercial classifier, overcoming the challenges of limited query access, partial information, and other practical issues to break the Google Cloud Vision API.",
"title": ""
},
{
"docid": "78e21364224b9aa95f86ac31e38916ef",
"text": "Gamification is the use of game design elements and game mechanics in non-game contexts. This idea has been used successfully in many web based businesses to increase user engagement. Some researchers suggest that it could also be used in web based education as a tool to increase student motivation and engagement. In an attempt to verify those theories, we have designed and built a gamification plugin for a well-known e-learning platform. We have made an experiment using this plugin in a university course, collecting quantitative and qualitative data in the process. Our findings suggest that some common beliefs about the benefits obtained when using games in education can be challenged. Students who completed the gamified experience got better scores in practical assignments and in overall score, but our findings also suggest that these students performed poorly on written assignments and participated less on class activities, although their initial motivation was higher. 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "568bc5272373a4e3fd38304f2c381e0f",
"text": "With the growing complexity of web applications, identifying web interfaces that can be used for testing such applications has become increasingly challenging. Many techniques that work effectively when applied to simple web applications are insufficient when used on modern, dynamic web applications, and may ultimately result in inadequate testing of the applications' functionality. To address this issue, we present a technique for automatically discovering web application interfaces based on a novel static analysis algorithm. We also report the results of an empirical evaluation in which we compare our technique against a traditional approach. The results of the comparison show that our technique can (1) discover a higher number of interfaces and (2) help generate test inputs that achieve higher coverage.",
"title": ""
},
{
"docid": "8335faee33da234e733d8f6c95332ec3",
"text": "Myanmar script uses no space between words and syllable segmentation represents a significant process in many NLP tasks such as word segmentation, sorting, line breaking and so on. In this study, a rulebased approach of syllable segmentation algorithm for Myanmar text is proposed. Segmentation rules were created based on the syllable structure of Myanmar script and a syllable segmentation algorithm was designed based on the created rules. A segmentation program was developed to evaluate the algorithm. A training corpus containing 32,283 Myanmar syllables was tested in the program and the experimental results show an accuracy rate of 99.96% for segmentation.",
"title": ""
},
{
"docid": "0a967b130a6c4dbc93d6b135eeb3c0db",
"text": "This paper presents a universal ontology for smart environments aiming to overcome the limitations of the existing ontologies. We enrich our ontology by adding new environmental aspects such as the referentiality and environmental change, that can be used to describe domains as well as applications. We show through a case study how our ontology is used and integrated in a self-organising middleware for smart environments.",
"title": ""
},
{
"docid": "999c0785975052bda742f0620e95fe84",
"text": "List-based implementations of sets are a fundamental building block of many concurrent algorithms. A skiplist based on the lock-free list-based set algorithm of Michael will be included in the Java Concurrency Package of JDK 1.6.0. However, Michael’s lock-free algorithm has several drawbacks, most notably that it requires all list traversal operations, including membership tests, to perform cleanup operations of logically removed nodes, and that it uses the equivalent of an atomically markable reference, a pointer that can be atomically “marked,” which is expensive in some languages and unavailable in others. We present a novel “lazy” list-based implementation of a concurrent set object. It is based on an optimistic locking scheme for inserts and removes, eliminating the need to use the equivalent of an atomically markable reference. It also has a novel wait-free membership test operation (as opposed to Michael’s lock-free one) that does not need to perform cleanup operations and is more efficient than that of all previous algorithms. Empirical testing shows that the new lazy-list algorithm consistently outperforms all known algorithms, including Michael’s lock-free algorithm, throughout the concurrency range. At high load, with 90% membership tests, the lazy algorithm is more than twice as fast as Michael’s. This is encouraging given that typical search structure usage patterns include around 90% membership tests. By replacing the lock-free membership test of Michael’s algorithm with our new wait-free one, we achieve an algorithm that slightly outperforms our new lazy-list (though it may not be as efficient in other contexts as it uses Java’s RTTI mechanism to create pointers that can be atomically marked).",
"title": ""
},
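The lazy-list record above describes the algorithm only in prose, so a small illustration may help. Below is a minimal Python sketch of the core idea: per-node locks with optimistic validation for insertion, and a wait-free membership test that consults a logical "marked" flag instead of performing cleanup. The class and method names are illustrative, removal is omitted, and this is a sketch of the published lazy-list idea rather than the paper's exact code.

```python
import threading

class Node:
    """Sorted-list node with a per-node lock and a logical-deletion mark."""
    def __init__(self, key):
        self.key = key
        self.next = None
        self.marked = False            # logically removed?
        self.lock = threading.Lock()

class LazyList:
    """Set implemented as a sorted linked list with sentinel head/tail nodes."""
    def __init__(self):
        self.head = Node(float("-inf"))
        self.head.next = Node(float("inf"))

    def contains(self, key):
        # Wait-free membership test: traverse without locking or cleanup,
        # then check the logical mark instead of re-validating links.
        curr = self.head
        while curr.key < key:
            curr = curr.next
        return curr.key == key and not curr.marked

    def _validate(self, pred, curr):
        # Optimistic validation: neither node is marked and they are still adjacent.
        return (not pred.marked) and (not curr.marked) and pred.next is curr

    def add(self, key):
        while True:
            pred, curr = self.head, self.head.next
            while curr.key < key:
                pred, curr = curr, curr.next
            with pred.lock, curr.lock:
                if self._validate(pred, curr):
                    if curr.key == key:
                        return False        # already present
                    node = Node(key)
                    node.next = curr
                    pred.next = node
                    return True
            # Validation failed because of a concurrent update: retry the traversal.
```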
{
"docid": "7d7db3f70ba6bcb5f9bf615bd8110eba",
"text": "Freshwater and energy are essential commodities for well being of mankind. Due to increasing population growth on the one hand, and rapid industrialization on the other, today’s world is facing unprecedented challenge of meeting the current needs for these two commodities as well as ensuring the needs of future generations. One approach to this global crisis of water and energy supply is to utilize renewable energy sources to produce freshwater from impaired water sources by desalination. Sustainable practices and innovative desalination technologies for water reuse and energy recovery (staging, waste heat utilization, hybridization) have the potential to reduce the stress on the existing water and energy sources with a minimal impact to the environment. This paper discusses existing and emerging desalination technologies and possible combinations of renewable energy sources to drive them and associated desalination costs. It is suggested that a holistic approach of coupling renewable energy sources with technologies for recovery, reuse, and recycle of both energy and water can be a sustainable and environment friendly approach to meet the world’s energy and water needs. High capital costs for renewable energy sources for small-scale applications suggest that a hybrid energy source comprising both grid-powered energy and renewable energy will reduce the desalination costs considering present economics of energy. 2010 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
8b36ff5c2e3231681101f569f07189d4
|
Physical Human Activity Recognition Using Wearable Sensors
|
[
{
"docid": "e700afa9064ef35f7d7de40779326cb0",
"text": "Human activity recognition is important for many applications. This paper describes a human activity recognition framework based on feature selection techniques. The objective is to identify the most important features to recognize human activities. We first design a set of new features (called physical features) based on the physical parameters of human motion to augment the commonly used statistical features. To systematically analyze the impact of the physical features on the performance of the recognition system, a single-layer feature selection framework is developed. Experimental results indicate that physical features are always among the top features selected by different feature selection methods and the recognition accuracy is generally improved to 90%, or 8% better than when only statistical features are used. Moreover, we show that the performance is further improved by 3.8% by extending the single-layer framework to a multi-layer framework which takes advantage of the inherent structure of human activities and performs feature selection and classification in a hierarchical manner.",
"title": ""
},
{
"docid": "931c75847fdfec787ad6a31a6568d9e3",
"text": "This paper introduces concepts and algorithms of feature selection, surveys existing feature selection algorithms for classification and clustering, groups and compares different algorithms with a categorizing framework based on search strategies, evaluation criteria, and data mining tasks, reveals unattempted combinations, and provides guidelines in selecting feature selection algorithms. With the categorizing framework, we continue our efforts toward-building an integrated system for intelligent feature selection. A unifying platform is proposed as an intermediate step. An illustrative example is presented to show how existing feature selection algorithms can be integrated into a meta algorithm that can take advantage of individual algorithms. An added advantage of doing so is to help a user employ a suitable algorithm without knowing details of each algorithm. Some real-world applications are included to demonstrate the use of feature selection in data mining. We conclude this work by identifying trends and challenges of feature selection research and development.",
"title": ""
}
] |
[
{
"docid": "bdffdfe92df254d0b13c1a1c985c0400",
"text": "We propose a model to automatically describe changes introduced in the source code of a program using natural language. Our method receives as input a set of code commits, which contains both the modifications and message introduced by an user. These two modalities are used to train an encoder-decoder architecture. We evaluated our approach on twelve real world open source projects from four different programming languages. Quantitative and qualitative results showed that the proposed approach can generate feasible and semantically sound descriptions not only in standard in-project settings, but also in a cross-project setting.",
"title": ""
},
{
"docid": "d7d0fa6279b356d37c2f64197b3d721d",
"text": "Estimating the pose of a human in 3D given an image or a video has recently received significant attention from the scientific community. The main reasons for this trend are the ever increasing new range of applications (e.g., humanrobot interaction, gaming, sports performance analysis) which are driven by current technological advances. Although recent approaches have dealt with several challenges and have reported remarkable results, 3D pose estimation remains a largely unsolved problem because real-life applications impose several challenges which are not fully addressed by existing methods. For example, estimating the 3D pose of multiple people in an outdoor environment remains a largely unsolved problem. In this paper, we review the recent advances in 3D human pose estimation from RGB images or image sequences. We propose a taxonomy of the approaches based on the input (e.g., single image or video, monocular or multi-view) and in each case we categorize the methods according to their key characteristics. To provide an overview of the current capabilities, we conducted an extensive experimental evaluation of state-of-the-art approaches in a synthetic dataset created specifically for this task, which along with its ground truth is made publicly available for research purposes. Finally, we provide an in-depth discussion of the insights obtained from reviewing the literature and the results of our experiments. Future directions and challenges are identified.",
"title": ""
},
{
"docid": "24a117cf0e59591514dd8630bcd45065",
"text": "This work presents a coarse-grained distributed genetic algorithm (GA) for RNA secondary structure prediction. This research builds on previous work and contains two new thermodynamic models, INN and INN-HB, which add stacking-energies using base pair adjacencies. Comparison tests were performed against the original serial GA on known structures that are 122, 543, and 784 nucleotides in length on a wide variety of parameter settings. The effects of the new models are investigated, the predicted structures are compared to known structures and the GA is compared against a serial GA with identical models. Both algorithms perform well and are able to predict structures with high accuracy for short sequences.",
"title": ""
},
{
"docid": "bd60ecd918eba443e0772d4edbec6ba4",
"text": "Le ModeÁ le de Culture Fit explique la manieÁ re dont l'environnement socioculturel influence la culture interne au travail et les pratiques de la direction des ressources humaines. Ce modeÁ le a e te teste sur 2003 salarie s d'entreprises prive es dans 10 pays. Les participants ont rempli un questionnaire de 57 items, destine aÁ mesurer les perceptions de la direction sur 4 dimensions socioculturelles, 6 dimensions de culture interne au travail, et les pratiques HRM (Management des Ressources Humaines) dans 3 zones territoiriales. Une analyse ponde re e par re gressions multiples, au niveau individuel, a montre que les directeurs qui caracte risaient leurs environnement socio-culturel de facË on fataliste, supposaient aussi que les employe s n'e taient pas malle ables par nature. Ces directeurs ne pratiquaient pas l'enrichissement des postes et donnaient tout pouvoir au controà le et aÁ la re mune ration en fonction des performances. Les directeurs qui appre ciaient une grande loyaute des APPLIED PSYCHOLOGY: AN INTERNATIONAL REVIEW, 2000, 49 (1), 192±221",
"title": ""
},
{
"docid": "b91833ae4e659fc1a0943eadd5da955d",
"text": "In this paper, we present a factor graph framework to solve both estimation and deterministic optimal control problems, and apply it to an obstacle avoidance task on Unmanned Aerial Vehicles (UAVs). We show that factor graphs allow us to consistently use the same optimization method, system dynamics, uncertainty models and other internal and external parameters, which potentially improves the UAV performance as a whole. To this end, we extended the modeling capabilities of factor graphs to represent nonlinear dynamics using constraint factors. For inference, we reformulate Sequential Quadratic Programming as an optimization algorithm on a factor graph with nonlinear constraints. We demonstrate our framework on a simulated quadrotor in an obstacle avoidance application.",
"title": ""
},
{
"docid": "dbde47a4142bffc2bcbda988781e5229",
"text": "Grasping individual objects from an unordered pile in a box has been investigated in static scenarios so far. In this paper, we demonstrate bin picking with an anthropomorphic mobile robot. To this end, we extend global navigation techniques by precise local alignment with a transport box. Objects are detected in range images using a shape primitive-based approach. Our approach learns object models from single scans and employs active perception to cope with severe occlusions. Grasps and arm motions are planned in an efficient local multiresolution height map. All components are integrated and evaluated in a bin picking and part delivery task.",
"title": ""
},
{
"docid": "525f188960eeb7a66ef9734118609f79",
"text": "Creativity is important for young children learning mathematics. However, much literature has claimed creativity in the learning of mathematics for young children is not adequately supported by teachers in the classroom due to such reasons as teachers’ poor college preparation in mathematics content knowledge, teachers’ negativity toward creative students, teachers’ occupational pressure, and low quality curriculum. The purpose of this grounded theory study was to generate a model that describes and explains how a particular group of early childhood teachers make sense of creativity in the learning of mathematics and how they think they can promote or fail to promote creativity in the classroom. In-depth interviews with 30 Kto Grade-3 teachers, participating in a graduate mathematics specialist certificate program in a medium-sized Midwestern city, were conducted. In contrast to previous findings, these teachers did view mathematics in young children (age 5 to 9) as requiring creativity, in ways that aligned with Sternberg and Lubart’s (1995) investment theory of creativity. Teachers felt they could support creativity in student learning and knew strategies for how to promote creativity in their practices.",
"title": ""
},
{
"docid": "49f0371f84d7874a6ccc6f9dd0779d3b",
"text": "Managing customer satisfaction has become a crucial issue in fast-food industry. This study aims at identifying determinant factor related to customer satisfaction in fast-food restaurant. Customer data are analyzed by using data mining method with two classification techniques such as decision tree and neural network. Classification models are developed using decision tree and neural network to determine underlying attributes of customer satisfaction. Generated rules are beneficial for managerial and practical implementation in fast-food industry. Decision tree and neural network yield more than 80% of predictive accuracy.",
"title": ""
},
{
"docid": "f8f36ef5822446478b154c9d98847070",
"text": "The objective of this research is to improve traffic safety through collecting and distributing up-to-date road surface condition information using mobile phones. Road surface condition information is seen useful for both travellers and for the road network maintenance. The problem we consider is to detect road surface anomalies that, when left unreported, can cause wear of vehicles, lesser driving comfort and vehicle controllability, or an accident. In this work we developed a pattern recognition system for detecting road condition from accelerometer and GPS readings. We present experimental results from real urban driving data that demonstrate the usefulness of the system. Our contributions are: 1) Performing a throughout spectral analysis of tri-axis acceleration signals in order to get reliable road surface anomaly labels. 2) Comprehensive preprocessing of GPS and acceleration signals. 3) Proposing a speed dependence removal approach for feature extraction and demonstrating its positive effect in multiple feature sets for the road surface anomaly detection task. 4) A framework for visually analyzing the classifier predictions over the validation data and labels.",
"title": ""
},
{
"docid": "d9493bec4d01a39ce230b82a98800bb3",
"text": "Biometrics, an integral component of Identity Science, is widely used in several large-scale-county-wide projects to provide a meaningful way of recognizing individuals. Among existing modalities, ocular biometric traits such as iris, periocular, retina, and eye movement have received significant attention in the recent past. Iris recognition is used in Unique Identification Authority of India’s Aadhaar Program and the United Arab Emirate’s border security programs, whereas the periocular recognition is used to augment the performance of face or iris when only ocular region is present in the image. This paper reviews the research progression in these modalities. The paper discusses existing algorithms and the limitations of each of the biometric traits and information fusion approaches which combine ocular modalities with other modalities. We also propose a path forward to advance the research on ocular recognition by (i) improving the sensing technology, (ii) heterogeneous recognition for addressing interoperability, (iii) utilizing advanced machine learning algorithms for better representation and classification, (iv) developing algorithms for ocular recognition at a distance, (v) using multimodal ocular biometrics for recognition, and (vi) encouraging benchmarking standards and open-source software development. ! 2015 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "d956c805ee88d1b0ca33ce3f0f838441",
"text": "The task of relation classification in the biomedical domain is complex due to the presence of samples obtained from heterogeneous sources such as research articles, discharge summaries, or electronic health records. It is also a constraint for classifiers which employ manual feature engineering. In this paper, we propose a convolutional recurrent neural network (CRNN) architecture that combines RNNs and CNNs in sequence to solve this problem. The rationale behind our approach is that CNNs can effectively identify coarse-grained local features in a sentence, while RNNs are more suited for long-term dependencies. We compare our CRNN model with several baselines on two biomedical datasets, namely the i2b22010 clinical relation extraction challenge dataset, and the SemEval-2013 DDI extraction dataset. We also evaluate an attentive pooling technique and report its performance in comparison with the conventional max pooling method. Our results indicate that the proposed model achieves state-of-the-art performance on both datasets.1",
"title": ""
},
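As a rough illustration of the CNN-followed-by-RNN arrangement described in the CRNN abstract above (convolution for coarse-grained local features, recurrence for longer-range dependencies, then pooling over time), here is a short PyTorch sketch. The layer sizes, the bidirectional LSTM, and plain max pooling over time are assumptions made for illustration, not the paper's exact configuration.

```python
import torch.nn as nn

class CRNNClassifier(nn.Module):
    """CNN over word embeddings for local n-gram features, an RNN on top for
    longer-range dependencies, then pooling over time and a linear classifier."""
    def __init__(self, vocab_size, num_relations, emb_dim=100,
                 conv_channels=128, rnn_hidden=128, kernel_size=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.conv = nn.Conv1d(emb_dim, conv_channels, kernel_size, padding=1)
        self.rnn = nn.LSTM(conv_channels, rnn_hidden,
                           batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * rnn_hidden, num_relations)

    def forward(self, token_ids):                    # (batch, seq_len)
        x = self.embed(token_ids)                    # (batch, seq_len, emb_dim)
        x = self.conv(x.transpose(1, 2)).relu()      # (batch, channels, seq_len)
        h, _ = self.rnn(x.transpose(1, 2))           # (batch, seq_len, 2*hidden)
        pooled, _ = h.max(dim=1)                     # max pooling over time
        return self.out(pooled)                      # relation logits
```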
{
"docid": "9f74e665d5ca8c84d7b17806163a16ee",
"text": "‘‘This is really still a nightmare — a German nightmare,’’ asserted Mechtilde Maier, Deutsche Telekom’s head of diversity. A multinational company with offices in about 50 countries, Deutsche Telekom is struggling at German headquarters to bring women into its leadership ranks. It is a startling result; at headquarters, one might expect the greatest degree of compliance to commands on high. With only 13% of its leadership positions represented by women, the headquarters is lagging far behind its offices outside Germany, which average 24%. Even progress has been glacial, with an improvement of a mere 0.5% since 2010 versus a 4% increase among its foreign subsidiaries. The phenomenon at Deutsche Telekom reflects a broader pattern, one that manifests in other organizations, in other nations, and in the highest reaches of leadership, including the boardroom. According to the Deloitte Global Centre for Corporate Governance, only about 12% of boardroom seats in the United States are held by women and less than 10% in the United Kingdom (9%), China (8.5%), and India (5%). In stark contrast, these rates are 2—3 times higher in Bulgaria (30%) and Norway (approximately 40%). Organizations are clearly successful in some nations more than others in promoting women to leadership ranks, but why? Instead of a culture’s wealth, values, or practices, our own research concludes that the emergence of women as leaders can be explained in part by a culture’s tightness. Cultural tightness refers to the degree to which a culture has strong norms and low tolerance for deviance. In a tight culture, people might be arrested for spitting, chewing gum, or jaywalking. In loose cultures, although the same behaviors may be met with disapproving glances or fines, they are not sanctioned to the same degree nor are they necessarily seen as taboo. We discovered that women are more likely to emerge as leaders in loose than tight cultures, but with an important exception. Women can emerge as leaders in tight cultures too. Our discoveries highlight that, to promote women to leadership positions, global leaders need to employ strategies that are compatible with the culture’s tightness. Before presenting our findings and their implications, we first discuss the process by which leaders tend to emerge.",
"title": ""
},
{
"docid": "9983792c37341cca7666e2f0d7b42d2b",
"text": "Domain modeling is an important step in the transition from natural-language requirements to precise specifications. For large systems, building a domain model manually is a laborious task. Several approaches exist to assist engineers with this task, whereby candidate domain model elements are automatically extracted using Natural Language Processing (NLP). Despite the existing work on domain model extraction, important facets remain under-explored: (1) there is limited empirical evidence about the usefulness of existing extraction rules (heuristics) when applied in industrial settings; (2) existing extraction rules do not adequately exploit the natural-language dependencies detected by modern NLP technologies; and (3) an important class of rules developed by the information retrieval community for information extraction remains unutilized for building domain models.\n Motivated by addressing the above limitations, we develop a domain model extractor by bringing together existing extraction rules in the software engineering literature, extending these rules with complementary rules from the information retrieval literature, and proposing new rules to better exploit results obtained from modern NLP dependency parsers. We apply our model extractor to four industrial requirements documents, reporting on the frequency of different extraction rules being applied. We conduct an expert study over one of these documents, investigating the accuracy and overall effectiveness of our domain model extractor.",
"title": ""
},
{
"docid": "8fa721c98dac13157bcc891c06561ec7",
"text": "Childcare robots are being manufactured and developed with the long term aim of creating surrogate carers. While total child-care is not yet being promoted, there are indications that it is „on the cards‟. We examine recent research and developments in childcare robots and speculate on progress over the coming years by extrapolating from other ongoing robotics work. Our main aim is to raise ethical questions about the part or full-time replacement of primary carers. The questions are about human rights, privacy, robot use of restraint, deception of children and accountability. But the most pressing ethical issues throughout the paper concern the consequences for the psychological and emotional wellbeing of children. We set these in the context of the child development literature on the pathology and causes of attachment disorders. We then consider the adequacy of current legislation and international ethical guidelines on the protection of children from the overuse of robot care.",
"title": ""
},
{
"docid": "74beaea9eccab976dc1ee7b2ddf3e4ca",
"text": "We develop theory that distinguishes trust among employees in typical task contexts (marked by low levels of situational unpredictability and danger) from trust in “highreliability” task contexts (those marked by high levels of situational unpredictability and danger). A study of firefighters showed that trust in high-reliability task contexts was based on coworkers’ integrity, whereas trust in typical task contexts was also based on benevolence and identification. Trust in high-reliability contexts predicted physical symptoms, whereas trust in typical contexts predicted withdrawal. Job demands moderated linkages with performance: trust in high-reliability task contexts was a more positive predictor of performance when unpredictable and dangerous calls were more frequent.",
"title": ""
},
{
"docid": "36fbc5f485d44fd7c8726ac0df5648c0",
"text": "We present “Ouroboros Praos”, a proof-of-stake blockchain protocol that, for the first time, provides security against fully-adaptive corruption in the semi-synchronous setting : Specifically, the adversary can corrupt any participant of a dynamically evolving population of stakeholders at any moment as long the stakeholder distribution maintains an honest majority of stake; furthermore, the protocol tolerates an adversarially-controlled message delivery delay unknown to protocol participants. To achieve these guarantees we formalize and realize in the universal composition setting a suitable form of forward secure digital signatures and a new type of verifiable random function that maintains unpredictability under malicious key generation. Our security proof develops a general combinatorial framework for the analysis of semi-synchronous blockchains that may be of independent interest. We prove our protocol secure under standard cryptographic assumptions in the random oracle model.",
"title": ""
},
{
"docid": "d5eb643385b573706c48cbb2cb3262df",
"text": "This article identifies problems and conditions that contribute to nipple pain during lactation and that may lead to early cessation or noninitiation of breastfeeding. Signs and symptoms of poor latch-on and positioning, oral anomalies, and suckling disorders are reviewed. Diagnosis and treatment of infectious agents that may cause nipple pain are presented. Comfort measures for sore nipples and current treatment recommendations for nipple wound healing are discussed. Suggestions are made for incorporating in-depth breastfeeding content into midwifery education programs.",
"title": ""
},
{
"docid": "ec9f793761ebd5199c6a2cc8c8215ac4",
"text": "A dual-frequency compact printed antenna for Wi-Fi (IEEE 802.11x at 2.45 and 5.5 GHz) applications is presented. The design is successfully optimized using a finite-difference time-domain (FDTD)-algorithm-based procedure. Some prototypes have been fabricated and measured, displaying a very good performance.",
"title": ""
},
{
"docid": "b2aad34d91b5c38f794fc2577593798c",
"text": "We present a model for pricing and hedging derivative securities and option portfolios in an environment where the volatility is not known precisely but is assumed instead to lie between two extreme values min and max These bounds could be inferred from extreme values of the implied volatilities of liquid options or from high low peaks in historical stock or option implied volatilities They can be viewed as de ning a con dence interval for future volatility values We show that the extremal non arbitrageable prices for the derivative asset which arise as the volatility paths vary in such a band can be described by a non linear PDE which we call the Black Scholes Barenblatt equation In this equation the pricing volatility is selected dynamically from the two extreme values min max according to the convexity of the value function A simple algorithm for solving the equation by nite di erencing or a trinomial tree is presented We show that this model captures the importance of diversi cation in managing derivatives positions It can be used systematically to construct e cient hedges using other derivatives in conjunction with the underlying asset y Courant Institute of Mathematical Sciences Mercer st New York NY Institute for Advanced Study Princeton NJ J P Morgan Securities New York NY The uncertain volatility model According to Arbitrage Pricing Theory if the market presents no arbitrage opportunities there exists a probability measure on future scenarios such that the price of any security is the expectation of its discounted cash ows Du e Such a probability is known as a mar tingale measure Harrison and Kreps or a pricing measure Determining the appropriate martingale measure associated with a sector of the security space e g the stock of a company and a riskless short term bond permits the valuation of any contingent claim based on these securities However pricing measures are often di cult to calculate precisely and there may exist more than one measure consistent with a given market It is useful to view the non uniqueness of pricing measures as re ecting the many choices for derivative asset prices that can exist in an uncertain economy For example option prices re ect the market s expectation about the future value of the underlying asset as well as its projection of future volatility Since this projection changes as the market reacts to new information implied volatility uctuates unpredictably In these circumstances fair option values and perfectly replicating hedges cannot be determined with certainty The existence of so called volatility risk in option trading is a concrete manifestation of market incompleteness This paper addresses the issue of derivative asset pricing and hedging in an uncertain future volatility environment For this purpose instead of choosing a pricing model that incorporates a complete view of the forward volatility as a single number or a predetermined function of time and price term structure of volatilities or even a stochastic process with given statistics we propose to operate under the less stringent assumption that that the volatility of future prices is restricted to lie in a bounded set but is otherwise undetermined For simplicity we restrict our discussion to derivative securities based on a single liquidly traded stock which pays no dividends over the contract s lifetime and assume a constant interest rate The basic assumption then reduces to postulating that under all admissible pricing mea sures future volatility paths will be restricted to 
lie within a band Accordingly we assume that the paths followed by future stock prices are It o processes viz dSt St t dZt t dt where t and t are non anticipative functions such that",
"title": ""
},
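For reference, the non-linear PDE that the abstract above calls the Black-Scholes-Barenblatt equation is usually written as below. This is a sketch reconstructed from the description (W(S, t) is the worst-case value function, r the constant interest rate, Γ the convexity of W); the convexity-dependent choice between σmax and σmin is exactly the dynamic volatility selection the abstract mentions.

```latex
% Black--Scholes--Barenblatt equation, upper (worst-case) price W(S, t):
\[
\frac{\partial W}{\partial t}
  + \tfrac{1}{2}\,\Sigma(\Gamma)^{2}\, S^{2}\,
    \frac{\partial^{2} W}{\partial S^{2}}
  + r S \frac{\partial W}{\partial S} - r W = 0,
\qquad
\Gamma = \frac{\partial^{2} W}{\partial S^{2}},
\qquad
\Sigma(\Gamma) =
\begin{cases}
\sigma_{\max}, & \Gamma \ge 0,\\
\sigma_{\min}, & \Gamma < 0.
\end{cases}
\]
```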
{
"docid": "9aefccc6fc6f628d374c1ffccfcc656a",
"text": "Keeping up with rapidly growing research fields, especially when there are multiple interdisciplinary sources, requires substantial effort for researchers, program managers, or venture capital investors. Current theories and tools are directed at finding a paper or website, not gaining an understanding of the key papers, authors, controversies, and hypotheses. This report presents an effort to integrate statistics, text analytics, and visualization in a multiple coordinated window environment that supports exploration. Our prototype system, Action Science Explorer (ASE), provides an environment for demonstrating principles of coordination and conducting iterative usability tests of them with interested and knowledgeable users. We developed an understanding of the value of reference management, statistics, citation context extraction, natural language summarization for single and multiple documents, filters to interactively select key papers, and network visualization to see citation patterns and identify clusters. The three-phase usability study guided our revisions to ASE and led us to improve the testing methods.",
"title": ""
}
] |
scidocsrr
|
38148009e005b5936464f4a362758271
|
Passwords and Perceptions
|
[
{
"docid": "8715a3b9ac7487adbb6d58e8a45ceef6",
"text": "Before the computer age, authenticating a user was a relatively simple process. One person could authenticate another by visual recognition, interpersonal communication, or, more formally, mutually agreed upon authentication methods. With the onset of the computer age, authentication has become more complicated. Face-to-face visual authentication has largely dissipated, with computers and networks intervening. Sensitive information is exchanged daily between humans and computers, and from computer to computer. This complexity demands more formal protection methods; in short, authentication processes to manage our routine interactions with such machines and networks. Authentication is the process of positively verifying identity, be it that of a user, device, or entity in a computer system. Often authentication is the prerequisite to accessing system resources. Positive verification is accomplished by means of matching some indicator of identity, such as a shared secret prearranged at the time a person was authorized to use the system. The most familiar user authenticator in use today is the password. The secure sockets layer (SSL) is an example of machine to machine authentication. Human–machine authentication is known as user authentication and it consists of verifying the identity of a user: is this person really who she claims to be? User authentication is much less secure than machine authentication and is known as the Achilles’ heel of secure systems. This paper introduces various human authenticators and compares them based on security, convenience, and cost. The discussion is set in the context of a larger analysis of security issues, namely, measuring a system’s vulnerability to attack. The focus is kept on remote computer authentication. Authenticators can be categorized into three main types: secrets (what you know), tokens (what you have), and IDs (who you are). A password is a secret word, phrase, or personal identification number. Although passwords are ubiquitously used, they pose vulnerabilities, the biggest being that a short mnemonic password can be guessed or searched by an ambitious attacker, while a longer, random password is difficult for a person to remember. A token is a physical device used to aid authentication. Examples include bank cards and smart cards. A token can be an active device that yields one-time passcodes (time-synchronous or",
"title": ""
}
] |
[
{
"docid": "2fd7cc65c34551c90a72fc3cb4665336",
"text": "Generating natural language requires conveying content in an appropriate style. We explore two related tasks on generating text of varying formality: monolingual formality transfer and formality-sensitive machine translation. We propose to solve these tasks jointly using multi-task learning, and show that our models achieve state-of-the-art performance for formality transfer and are able to perform formality-sensitive translation without being explicitly trained on styleannotated translation examples.",
"title": ""
},
{
"docid": "3d45b63a4643c34c56633afd7e270922",
"text": "In this paper we perform a comparative analysis of three models for feature representation of text documents in the context of document classification. In particular, we consider the most often used family of models bag-of-words, recently proposed continuous space models word2vec and doc2vec, and the model based on the representation of text documents as language networks. While the bag-of-word models have been extensively used for the document classification task, the performance of the other two models for the same task have not been well understood. This is especially true for the network-based model that have been rarely considered for representation of text documents for classification. In this study, we measure the performance of the document classifiers trained using the method of random forests for features generated the three models and their variants. The results of the empirical comparison show that the commonly used bag-of-words model has performance comparable to the one obtained by the emerging continuous-space model of doc2vec. In particular, the low-dimensional variants of doc2vec generating up to 75 features are among the top-performing document representation models. The results finally point out that doc2vec shows a superior performance in the tasks of classifying large Corresponding Author: Department of Informatics, University of Rijeka, Radmile Matejčić 2, 51000 Rijeka, Croatia, +385 51 584 714 Email addresses: [email protected] (Sanda Martinčić-Ipšić), [email protected] (Tanja Miličić), [email protected] (Ljupčo Todorovski) Preprint submitted to ?? July 6, 2017 documents.",
"title": ""
},
{
"docid": "f4cb0eb6d39c57779cf9aa7b13abef14",
"text": "Algorithms that learn to generate data whose distributions match that of the training data, such as generative adversarial networks (GANs), have been a focus of much recent work in deep unsupervised learning. Unfortunately, GAN models have drawbacks, such as instable training due to the minmax optimization formulation and the issue of zero gradients. To address these problems, we explore and develop a new family of nonparametric objective functions and corresponding training algorithms to train a DNN generator that learn the probability distribution of the training data. Preliminary results presented in the paper demonstrate that the proposed approach converges faster and the trained models provide very good quality results even with a small number of iterations. Special cases of our formulation yield new algorithms for the Wasserstein and the MMD metrics. We also develop a new algorithm based on the Prokhorov metric between distributions, which we believe can provide promising results on certain kinds of data. We conjecture that the nonparametric approach for training DNNs can provide a viable alternative to the popular GAN formulations.",
"title": ""
},
{
"docid": "a57dc1e93116aa99ce00c671208bbd9f",
"text": "According to the IEEE 802.11aj (45 GHz) standard, a millimeter-wave planar substrate-integrated endfire antenna with wide beamwidths in both <italic>E</italic>- and <italic>H</italic>-planes, and good impedance matching over 42.3–48.4 GHz is proposed for the Q-band wireless local area network (WLAN) system. The proposed antenna comprises a printed angled dipole with bilateral symmetrical directors for generating wide-angle radiation, and the beamwidth in both <italic>E</italic>- and <italic>H</italic>-planes can be easily adjusted. The antenna is prototyped using the conventional printed circuit board process with a size of 6 × 26 × 0.508 mm<sup>3</sup>, and achieves beamwidths greater than 120° in two main planes, <inline-formula><tex-math notation=\"LaTeX\">${S}$ </tex-math></inline-formula><sub>11</sub> of less than –12.5 dB, and peak gain of 3.67–5.2 dBi over 42.3–48.4 GHz. The measurements are in good agreement with simulations, which shows that the proposed antenna is very promising for Q-band millimeter-wave WLAN system access-point applications.",
"title": ""
},
{
"docid": "c117da74c302d9e108970854d79e54fd",
"text": "Entailment recognition is a primary generic task in natural language inference, whose focus is to detect whether the meaning of one expression can be inferred from the meaning of the other. Accordingly, many NLP applications would benefit from high coverage knowledgebases of paraphrases and entailment rules. To this end, learning such knowledgebases from the Web is especially appealing due to its huge size as well as its highly heterogeneous content, allowing for a more scalable rule extraction of various domains. However, the scalability of state-of-the-art entailment rule acquisition approaches from the Web is still limited. We present a fully unsupervised learning algorithm for Webbased extraction of entailment relations. We focus on increased scalability and generality with respect to prior work, with the potential of a large-scale Web-based knowledgebase. Our algorithm takes as its input a lexical–syntactic template and searches the Web for syntactic templates that participate in an entailment relation with the input template. Experiments show promising results, achieving performance similar to a state-of-the-art unsupervised algorithm, operating over an offline corpus, but with the benefit of learning rules for different domains with no additional effort.",
"title": ""
},
{
"docid": "7490d342ffb59bd396421e198b243775",
"text": "Antioxidant activities of defatted sesame meal extract increased as the roasting temperature of sesame seed increased, but the maximum antioxidant activity was achieved when the seeds were roasted at 200 °C for 60 min. Roasting sesame seeds at 200 °C for 60 min significantly increased the total phenolic content, radical scavenging activity (RSA), reducing powers, and antioxidant activity of sesame meal extract; and several low-molecularweight phenolic compounds such as 2-methoxyphenol, 4-methoxy-3-methylthio-phenol, 5-amino-3-oxo-4hexenoic acid, 3,4-methylenedioxyphenol (sesamol), 3-hydroxy benzoic acid, 4-hydroxy benzoic acid, vanillic acid, filicinic acid, and 3,4-dimethoxy phenol were newly formed in the sesame meal after roasting sesame seeds at 200 °C for 60 min. These results indicate that antioxidant activity of defatted sesame meal extracts was significantly affected by roasting temperature and time of sesame seeds.",
"title": ""
},
{
"docid": "b32e0f8195780d15a61c9c3cc0213864",
"text": "With access to large datasets, deep neural networks (DNN) have achieved humanlevel accuracy in image and speech recognition tasks. However, in chemistry, availability of large standardized and labelled datasets is scarce, and many chemical properties of research interest, chemical data is inherently small and fragmented. In this work, we explore transfer learning techniques in conjunction with the existing Chemception CNN model, to create a transferable and generalizable deep neural network for small-molecule property prediction. Our latest model, ChemNet learns in a semi-supervised manner from inexpensive labels computed from the ChEMBL database. When fine-tuned to the Tox21, HIV and FreeSolv dataset, which are 3 separate chemical properties that ChemNet was not originally trained on, we demonstrate that ChemNet exceeds the performance of existing Chemception models and other contemporary DNN models. Furthermore, as ChemNet has been pre-trained on a large diverse chemical database, it can be used as a general-purpose plug-and-play deep neural network for the prediction of novel small-molecule chemical properties.",
"title": ""
},
{
"docid": "be101b30dd67232c1973b4c4a78c7f98",
"text": "Recently, many colleges and universities have made significant investments in upgraded classrooms and learning centers, incorporating such factors as tiered seating, customized lighting packages, upgraded desk and seat quality, and individual computers. To date, few studies have examined the impact of classroom environment at post-secondary institutions. The purpose of this study is to analyze the impact of classroom environment factors on individual student satisfaction measures and on student evaluation of teaching in the university environment. Two-hundred thirty-seven undergraduate business students were surveyed regarding their perceptions of classroom environment factors and their satisfaction with their classroom, instructor, and course. The results of the study indicate that students do perceive significant differences between standard and upgraded classrooms. Additionally, students express a preference for several aspects of upgraded classrooms, including tiered seating, lighting, and classroom noise control. Finally, students rate course enjoyment, classroom learning, and instructor organization higher in upgraded classrooms than in standard classrooms. The results of this study should benefit administrators who make capital and infrastructure decisions regarding college and university classroom improvements, faculty members who develop and rely upon student evaluations of teaching, and researchers who examine the factors impacting student satisfaction and learning.",
"title": ""
},
{
"docid": "f35b8aec287285d18df656881642eb66",
"text": "We consider the problem of training generative models with deep neural networks as generators, i.e. to map latent codes to data points. Whereas the dominant paradigm combines simple priors over codes with complex deterministic models, we propose instead to use more flexible code distributions. These distributions are estimated non-parametrically by reversing the generator map during training. The benefits include: more powerful generative models, better modeling of latent structure and explicit control of the degree of generalization.",
"title": ""
},
{
"docid": "18c90883c96b85dc8b3ef6e1b84c3494",
"text": "Data Selection is a popular step in Machine Translation pipelines. Feature Decay Algorithms (FDA) is a technique for data selection that has shown a good performance in several tasks. FDA aims to maximize the coverage of n-grams in the test set. However, intuitively, more ambiguous n-grams require more training examples in order to adequately estimate their translation probabilities. This ambiguity can be measured by alignment entropy. In this paper we propose two methods for calculating the alignment entropies for n-grams of any size, which can be used for improving the performance of FDA. We evaluate the substitution of the n-gramspecific entropy values computed by these methods to the parameters of both the exponential and linear decay factor of FDA. The experiments conducted on German-to-English and Czechto-English translation demonstrate that the use of alignment entropies can lead to an increase in the quality of the results of FDA.",
"title": ""
},
{
"docid": "6f942f8ead4684f4943d1c82ea140b9a",
"text": "This paper considers the problem of approximate nearest neighbor search in the compressed domain. We introduce polysemous codes, which offer both the distance estimation quality of product quantization and the efficient comparison of binary codes with Hamming distance. Their design is inspired by algorithms introduced in the 90’s to construct channel-optimized vector quantizers. At search time, this dual interpretation accelerates the search. Most of the indexed vectors are filtered out with Hamming distance, letting only a fraction of the vectors to be ranked with an asymmetric distance estimator. The method is complementary with a coarse partitioning of the feature space such as the inverted multi-index. This is shown by our experiments performed on several public benchmarks such as the BIGANN dataset comprising one billion vectors, for which we report state-of-the-art results for query times below 0.3 millisecond per core. Last but not least, our approach allows the approximate computation of the k-NN graph associated with the Yahoo Flickr Creative Commons 100M, described by CNN image descriptors, in less than 8 hours on a single machine.",
"title": ""
},
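The two-stage search described in the polysemous-codes abstract above (cheap Hamming filtering on the binary view of the codes, then asymmetric-distance re-ranking of the survivors) can be sketched in a few lines of NumPy. The array layout, the threshold value, and the function name are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def polysemous_search(db_codes, query_code, adc_lookup,
                      hamming_threshold=40, k=10):
    """Two-stage nearest-neighbor search in the spirit of polysemous codes.

    db_codes:   (n, M) uint8 array; byte m is both the m-th product-quantizer
                index and, viewed as 8 bits, part of the binary code.
    query_code: (M,) uint8 code of the query, used only for the Hamming stage.
    adc_lookup: (M, 256) table of squared distances from the real-valued query
                sub-vectors to each centroid, used for the asymmetric stage.
    """
    # Stage 1: Hamming filtering on the binary interpretation of the codes.
    ham = np.unpackbits(np.bitwise_xor(db_codes, query_code), axis=1).sum(axis=1)
    survivors = np.nonzero(ham <= hamming_threshold)[0]
    # Stage 2: asymmetric distance (ADC) re-ranking of the surviving items.
    M = db_codes.shape[1]
    adc = adc_lookup[np.arange(M), db_codes[survivors]].sum(axis=1)
    order = np.argsort(adc)[:k]
    return survivors[order], adc[order]
```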
{
"docid": "3cd7c3b3676626440ddd27de43fa5e1f",
"text": "A survey of the use of belief functions to quantify the beliefs held by an agent, and in particular of their interpretation in the transferable belief model.",
"title": ""
},
{
"docid": "545a7a98c79d14ba83766aa26cff0291",
"text": "Existing extreme learning algorithm have not taken into account four issues: 1) complexity; 2) uncertainty; 3) concept drift; and 4) high dimensionality. A novel incremental type-2 meta-cognitive extreme learning machine (ELM) called evolving type-2 ELM (eT2ELM) is proposed to cope with the four issues in this paper. The eT2ELM presents three main pillars of human meta-cognition: 1) what-to-learn; 2) how-to-learn; and 3) when-to-learn. The what-to-learn component selects important training samples for model updates by virtue of the online certainty-based active learning method, which renders eT2ELM as a semi-supervised classifier. The how-to-learn element develops a synergy between extreme learning theory and the evolving concept, whereby the hidden nodes can be generated and pruned automatically from data streams with no tuning of hidden nodes. The when-to-learn constituent makes use of the standard sample reserved strategy. A generalized interval type-2 fuzzy neural network is also put forward as a cognitive component, in which a hidden node is built upon the interval type-2 multivariate Gaussian function while exploiting a subset of Chebyshev series in the output node. The efficacy of the proposed eT2ELM is numerically validated in 12 data streams containing various concept drifts. The numerical results are confirmed by thorough statistical tests, where the eT2ELM demonstrates the most encouraging numerical results in delivering reliable prediction, while sustaining low complexity.",
"title": ""
},
{
"docid": "e0f7c82754694084c6d05a2d37be3048",
"text": "Introducing variability while maintaining coherence is a core task in learning to generate utterances in conversation. Standard neural encoder-decoder models and their extensions using conditional variational autoencoder often result in either trivial or digressive responses. To overcome this, we explore a novel approach that injects variability into neural encoder-decoder via the use of external memory as a mixture model, namely Variational Memory Encoder-Decoder (VMED). By associating each memory read with a mode in the latent mixture distribution at each timestep, our model can capture the variability observed in sequential data such as natural conversations. We empirically compare the proposed model against other recent approaches on various conversational datasets. The results show that VMED consistently achieves significant improvement over others in both metricbased and qualitative evaluations.",
"title": ""
},
{
"docid": "d164ead192d1ba25472935f517608faa",
"text": "Real-world machine learning applications may require functions to be fast-to-evaluate and interpretable, in particular, guaranteed monotonicity of the learned function can be critical to user trust. We propose meeting these goals for low-dimensional machine learning problems by learning flexible, monotonic functions using calibrated interpolated look-up tables. We extend the structural risk minimization framework of lattice regression to train monotonic functions by solving a convex problem with appropriate linear inequality constraints. In addition, we propose jointly learning interpretable calibrations of each feature to normalize continuous features and handle categorical or missing data, at the cost of making the objective non-convex. We address large-scale learning through parallelization, mini-batching, and propose random sampling of additive regularizer terms. Case studies for six real-world problems with five to sixteen features and thousands to millions of training samples demonstrate the proposed monotonic functions can achieve state-of-the-art accuracy on practical problems while providing greater transparency to users.",
"title": ""
},
{
"docid": "0be92a74f0ff384c66ef88dd323b3092",
"text": "When facing uncertainty, adaptive behavioral strategies demand that the brain performs probabilistic computations. In this probabilistic framework, the notion of certainty and confidence would appear to be closely related, so much so that it is tempting to conclude that these two concepts are one and the same. We argue that there are computational reasons to distinguish between these two concepts. Specifically, we propose that confidence should be defined as the probability that a decision or a proposition, overt or covert, is correct given the evidence, a critical quantity in complex sequential decisions. We suggest that the term certainty should be reserved to refer to the encoding of all other probability distributions over sensory and cognitive variables. We also discuss strategies for studying the neural codes for confidence and certainty and argue that clear definitions of neural codes are essential to understanding the relative contributions of various cortical areas to decision making.",
"title": ""
},
{
"docid": "bf338661988fd28c9bafe7ea1ca59f34",
"text": "We propose a system for landing unmanned aerial vehicles (UAV), specifically an autonomous rotorcraft, in uncontrolled, arbitrary, terrains. We present plans for and progress on a vision-based system for the recovery of the geometry and material properties of local terrain from a mounted stereo rig for the purposes of finding an optimal landing site. A system is developed which integrates motion estimation from tracked features, and an algorithm for approximate estimation of a dense elevation map in a world coordinate system.",
"title": ""
},
{
"docid": "28c0ce094c4117157a27f272dbb94b91",
"text": "This paper reports the design of a color dynamic and active-pixel vision sensor (C-DAVIS) for robotic vision applications. The C-DAVIS combines monochrome eventgenerating dynamic vision sensor pixels and 5-transistor active pixels sensor (APS) pixels patterned with an RGBW color filter array. The C-DAVIS concurrently outputs rolling or global shutter RGBW coded VGA resolution frames and asynchronous monochrome QVGA resolution temporal contrast events. Hence the C-DAVIS is able to capture spatial details with color and track movements with high temporal resolution while keeping the data output sparse and fast. The C-DAVIS chip is fabricated in TowerJazz 0.18um CMOS image sensor technology. An RGBW 2×2-pixel unit measures 20um × 20um. The chip die measures 8mm × 6.2mm.",
"title": ""
},
{
"docid": "74d4f8c69938eeae611696727286a1a7",
"text": "AES-GCM(Advanced Encryption Standard with Galois Counter Mode) is an encryption authentication algorithm, which includes two main components: an AES engine and Ghash module. Because of the computation feedback in Ghash operation, the Ghash module limits the performance of the whole AES-GCM system. In this study, an efficient architecture of Ghash is presented. The architecture uses an optimized bit-parallel multiplier. In addition, based on this multiplier, pipelined method is adopted to achieve higher clock rate and throughput. We also introduce a redundant register method, which is never mentioned before, for solving the big fan- out problem derived from the bit-parallel multiplier. In the end, the performance of proposed design is evaluated on Xilinx virtex4 FPGA platform. The experimental results show that our Ghash core has less clock delay and can easily achieve higher throughput, which is up to 40Gbps.",
"title": ""
},
{
"docid": "a442a5fd2ec466cac18f4c148661dd96",
"text": "BACKGROUND\nLong waiting times for registration to see a doctor is problematic in China, especially in tertiary hospitals. To address this issue, a web-based appointment system was developed for the Xijing hospital. The aim of this study was to investigate the efficacy of the web-based appointment system in the registration service for outpatients.\n\n\nMETHODS\nData from the web-based appointment system in Xijing hospital from January to December 2010 were collected using a stratified random sampling method, from which participants were randomly selected for a telephone interview asking for detailed information on using the system. Patients who registered through registration windows were randomly selected as a comparison group, and completed a questionnaire on-site.\n\n\nRESULTS\nA total of 5641 patients using the online booking service were available for data analysis. Of them, 500 were randomly selected, and 369 (73.8%) completed a telephone interview. Of the 500 patients using the usual queuing method who were randomly selected for inclusion in the study, responses were obtained from 463, a response rate of 92.6%. Between the two registration methods, there were significant differences in age, degree of satisfaction, and total waiting time (P<0.001). However, gender, urban residence, and valid waiting time showed no significant differences (P>0.05). Being ignorant of online registration, not trusting the internet, and a lack of ability to use a computer were three main reasons given for not using the web-based appointment system. The overall proportion of non-attendance was 14.4% for those using the web-based appointment system, and the non-attendance rate was significantly different among different hospital departments, day of the week, and time of the day (P<0.001).\n\n\nCONCLUSION\nCompared to the usual queuing method, the web-based appointment system could significantly increase patient's satisfaction with registration and reduce total waiting time effectively. However, further improvements are needed for broad use of the system.",
"title": ""
}
] |
scidocsrr
|
0519aa1993e289d59e4c9fa9eef00d99
|
Propp's Morphology of the Folk Tale as a Grammar for Generation
|
[
{
"docid": "c5f6a559d8361ad509ec10bbb6c3cc9b",
"text": "In this paper we present a system for automatic story generation that reuses existing stories to produce a new story that matches a given user query. The plot structure is obtained by a case-based reasoning (CBR) process over a case base of tales and an ontology of explicitly declared relevant knowledge. The resulting story is generated as a sketch of a plot described in natural language by means of natural language generation (NLG) techniques.",
"title": ""
},
{
"docid": "683bad69cfb2c8980020dd1f8bd8cea4",
"text": "BRUTUS is a program that tells stories. The stories are intriguing, they hold a hint of mystery, and—not least impressive—they are written in correct English prose. An example (p. 124) is shown in Figure 1. This remarkable feat is grounded in a complex architecture making use of a number of levels, each of which is parameterized so as to become a locus of possible variation. The specific BRUTUS1 implementation that illustrates the program’s prowess exploits the theme of betrayal, which receives an elaborate analysis, culminating in a set",
"title": ""
}
] |
[
{
"docid": "bc890d9ecf02a89f5979053444daebdf",
"text": "The continued growth of mobile and interactive computing requires devices manufactured with low-cost processes, compatible with large-area and flexible form factors, and with additional functionality. We review recent advances in the design of electronic and optoelectronic devices that use colloidal semiconductor quantum dots (QDs). The properties of materials assembled of QDs may be tailored not only by the atomic composition but also by the size, shape, and surface functionalization of the individual QDs and by the communication among these QDs. The chemical and physical properties of QD surfaces and the interfaces in QD devices are of particular importance, and these enable the solution-based fabrication of low-cost, large-area, flexible, and functional devices. We discuss challenges that must be addressed in the move to solution-processed functional optoelectronic nanomaterials.",
"title": ""
},
{
"docid": "88aed0f7fe9022cfc2e2b95a1ed6d2fb",
"text": "Since the terrorist attacks of September 11, 2001, and the subsequent establishment of the U.S. Department of Homeland Security (DHS), considerable efforts have been made to estimate the risks of terrorism and the cost effectiveness of security policies to reduce these risks. DHS, industry, and the academic risk analysis communities have all invested heavily in the development of tools and approaches that can assist decisionmakers in effectively allocating limited resources across the vast array of potential investments that could mitigate risks from terrorism and other threats to the homeland. Decisionmakers demand models, analyses, and decision support that are useful for this task and based on the state of the art. Since terrorism risk analysis is new, no single method is likely to meet this challenge. In this article we explore a number of existing and potential approaches for terrorism risk analysis, focusing particularly on recent discussions regarding the applicability of probabilistic and decision analytic approaches to bioterrorism risks and the Bioterrorism Risk Assessment methodology used by the DHS and criticized by the National Academies and others.",
"title": ""
},
{
"docid": "96055f0e41d62dc0ef318772fa6d6d9f",
"text": "Building Information Modeling (BIM) has rapidly grown from merely being a three-dimensional (3D) model of a facility to serving as “a shared knowledge resource for information about a facility, forming a reliable basis for decisions during its life cycle from inception onward” [1]. BIM with three primary spatial dimensions (width, height, and depth) becomes 4D BIM when time (construction scheduling information) is added, and 5D BIM when cost information is added to it. Although the sixth dimension of the 6D BIM is often attributed to asset information useful for Facility Management (FM) processes, there is no agreement in the research literature on what each dimension represents beyond the fifth dimension [2]. BIM ultimately seeks to digitize the different stages of a building lifecycle such as planning, design, construction, and operation such that consistent digital information of a building project can be used by stakeholders throughout the building life-cycle [3]. The United States National Building Information Model Standard (NBIMS) initially characterized BIMs as digital representations of physical and functional aspects of a facility. But, in the most recent version released in July 2015, the NBIMS’ definition of BIM includes three separate but linked functions, namely business process, digital representation, and organization and control [4]. A number of national-level initiatives are underway in various countries to formally encourage the adoption of BIM technologies in the Architecture, Engineering, and Construction (AEC) and FM industries. Building SMART, with 18 chapters across the globe, including USA, UK, Australasia, etc., was established in 1995 with the aim of developing and driving the active use of open internationally-recognized standards to support the wider adoption of BIM across the building and infrastructure sectors [5]. The UK BIM Task Group, with experts from industry, government, public sector, institutes, and academia, is committed to facilitate the implementation of ‘collaborative 3D BIM’, a UK Government Construction Strategy initiative [6]. Similarly, the EUBIM Task Group was started with a vision to foster the common use of BIM in public works and produce a handbook containing the common BIM principles, guidance and practices for public contracting entities and policy makers [7].",
"title": ""
},
{
"docid": "580e0cc120ea9fd7aa9bb0a8e2a73cb3",
"text": "In the emerging field of micro-blogging and social communication services, users post millions of short messages every day. Keeping track of all the messages posted by your friends and the conversation as a whole can become tedious or even impossible. In this paper, we presented a study on automatically clustering and classifying Twitter messages, also known as “tweets”, into different categories, inspired by the approaches taken by news aggregating services like Google News. Our results suggest that the clusters produced by traditional unsupervised methods can often be incoherent from a topical perspective, but utilizing a supervised methodology that utilize the hash-tags as indicators of topics produce surprisingly good results. We also offer a discussion on temporal effects of our methodology and training set size considerations. Lastly, we describe a simple method of finding the most representative tweet in a cluster, and provide an analysis of the results.",
"title": ""
},
{
"docid": "0946b5cb25e69f86b074ba6d736cd50f",
"text": "Increase of malware and advanced cyber-attacks are now becoming a serious problem. Unknown malware which has not determined by security vendors is often used in these attacks, and it is becoming difficult to protect terminals from their infection. Therefore, a countermeasure for after infection is required. There are some malware infection detection methods which focus on the traffic data comes from malware. However, it is difficult to perfectly detect infection only using traffic data because it imitates benign traffic. In this paper, we propose malware process detection method based on process behavior in possible infected terminals. In proposal, we investigated stepwise application of Deep Neural Networks to classify malware process. First, we train the Recurrent Neural Network (RNN) to extract features of process behavior. Second, we train the Convolutional Neural Network (CNN) to classify feature images which are generated by the extracted features from the trained RNN. The evaluation result in several image size by comparing the AUC of obtained ROC curves and we obtained AUC= 0:96 in best case.",
"title": ""
},
{
"docid": "22beed9d31913f09e81063dbcb751c42",
"text": "In this paper an approach for 360 degree multi sensor fusion for static and dynamic obstacles is presented. The perception of static and dynamic obstacles is achieved by combining the advantages of model based object tracking and an occupancy map. For the model based object tracking a novel multi reference point tracking system, called best knowledge model, is introduced. The best knowledge model allows to track and describe objects with respect to a best suitable reference point. It is explained how the object tracking and the occupancy map closely interact and benefit from each other. Experimental results of the 360 degree multi sensor fusion system from an automotive test vehicle are shown.",
"title": ""
},
{
"docid": "c4b6df3abf37409d6a6a19646334bffb",
"text": "Classification in imbalanced domains is a recent challenge in data mining. We refer to imbalanced classification when data presents many examples from one class and few from the other class, and the less representative class is the one which has more interest from the point of view of the learning task. One of the most used techniques to tackle this problem consists in preprocessing the data previously to the learning process. This preprocessing could be done through under-sampling; removing examples, mainly belonging to the majority class; and over-sampling, by means of replicating or generating new minority examples. In this paper, we propose an under-sampling procedure guided by evolutionary algorithms to perform a training set selection for enhancing the decision trees obtained by the C4.5 algorithm and the rule sets obtained by PART rule induction algorithm. The proposal has been compared with other under-sampling and over-sampling techniques and the results indicate that the new approach is very competitive in terms of accuracy when comparing with over-sampling and it outperforms standard under-sampling. Moreover, the obtained models are smaller in terms of number of leaves or rules generated and they can considered more interpretable. The results have been contrasted through non-parametric statistical tests over multiple data sets. Crown Copyright 2009 Published by Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "89dcd15d3f7e2f538af4a2654f144dfb",
"text": "E-waste comprises discarded electronic appliances, of which computers and mobile telephones are disproportionately abundant because of their short lifespan. The current global production of E-waste is estimated to be 20-25 million tonnes per year, with most E-waste being produced in Europe, the United States and Australasia. China, Eastern Europe and Latin America will become major E-waste producers in the next ten years. Miniaturisation and the development of more efficient cloud computing networks, where computing services are delivered over the internet from remote locations, may offset the increase in E-waste production from global economic growth and the development of pervasive new technologies. E-waste contains valuable metals (Cu, platinum group) as well as potential environmental contaminants, especially Pb, Sb, Hg, Cd, Ni, polybrominated diphenyl ethers (PBDEs), and polychlorinated biphenyls (PCBs). Burning E-waste may generate dioxins, furans, polycyclic aromatic hydrocarbons (PAHs), polyhalogenated aromatic hydrocarbons (PHAHs), and hydrogen chloride. The chemical composition of E-waste changes with the development of new technologies and pressure from environmental organisations on electronics companies to find alternatives to environmentally damaging materials. Most E-waste is disposed in landfills. Effective reprocessing technology, which recovers the valuable materials with minimal environmental impact, is expensive. Consequently, although illegal under the Basel Convention, rich countries export an unknown quantity of E-waste to poor countries, where recycling techniques include burning and dissolution in strong acids with few measures to protect human health and the environment. Such reprocessing initially results in extreme localised contamination followed by migration of the contaminants into receiving waters and food chains. E-waste workers suffer negative health effects through skin contact and inhalation, while the wider community are exposed to the contaminants through smoke, dust, drinking water and food. There is evidence that E-waste associated contaminants may be present in some agricultural or manufactured products for export.",
"title": ""
},
{
"docid": "e79646606570464bccd27c3316a1f086",
"text": "BACKGROUND\nLower lid blepharoplasty has potential for significant long-lasting complications and marginal aesthetic outcomes if not performed correctly, or if one disregards the anatomical aspects of the orbicularis oculi muscle. This has detracted surgeons from performing the technical maneuvers necessary for optimal periorbital rejuvenation. A simplified, \"five-step\" clinical approach based on sound anatomical principles is presented.\n\n\nMETHODS\nA review of 50 lower lid blepharoplasty patients (each bilateral) using the five-step technique was conducted to delineate the efficacy in improving lower eyelid aesthetics. Digital images from 50 consecutive primary lower blepharoplasty patients (100 lower lids: 37 women and 13 men) were measured using a computer program with standardized data points that were later converted to ratios.\n\n\nRESULTS\nOf the 100 lower eyelid five-step blepharoplasties analyzed, complication rates were low and data points measured demonstrated improvements in all aesthetic parameters. The width and position of the tear trough, position of the lower lid relative to the pupil, and the intercanthal angle were all improved. There were no cases of lower lid malposition.\n\n\nCONCLUSIONS\nAesthetic outcomes in lower lid blepharoplasty can be improved using a five-step technical sequence that addresses all of the anatomical findings. Lower lid blepharoplasty results are improved when (1) the supportive deep malar fat compartment is augmented; (2) lower lid orbicularis oculi muscle is preserved with minimal fat removal (if at all); (3) the main retaining structure (orbicularis retaining ligament) is selectively released; (4) lateral canthal support is established or strengthened (lateral retinacular suspension); and (5) minimal skin is removed.\n\n\nCLINICAL QUESTION/LEVEL OF EVIDENCE\nTherapeutic, IV.",
"title": ""
},
{
"docid": "3c83c75715f1c69d6b2add2560994146",
"text": "Interacting Storytelling systems integrate AI techniques such as planning with narrative representations to generate stories. In this paper, we discuss the use of planning formalisms in Interactive Storytelling from the perspective of story generation and authoring. We compare two different planning formalisms, Hierarchical Task Network (HTN) planning and Heuristic Search Planning (HSP). While HTN provide a strong basis for narrative coherence in the context of interactivity, HSP offer additional flexibility and the generation of stories and the mechanisms for generating comic situations.",
"title": ""
},
{
"docid": "463eb90754d21c43ee61e7e18256c66b",
"text": "A low-profile metamaterial loaded antenna array with anti-interference and polarization reconfigurable features is proposed for base-station communication. Owing to the dual notches etched on the radiating electric dipoles, an impedance bandwidth of 75.6% ranging from 1.68 to 3.72 GHz with a notch band from 2.38 to 2.55 GHz can be achieved. By employing the metamaterial loadings that are arranged in the center of the magnetic dipole, the thickness of the proposed antenna can be decreased from 28 to 20 mm. Furthermore, a serial feeding network that consists of several Wilkinson power dividers and phase shifters is introduced to attain the conversion between dual-linear polarization and triple-circular polarization. Hence, the antenna could meet the demand of the future 5G intelligent application.",
"title": ""
},
{
"docid": "c5e2930d5a0f80a8d4a59a70db64cd68",
"text": "Gamification has become a trend over the last years, especially in non-game environments such as business systems. With the aim to increase the users' engagement and motivation, existing or new information systems are enriched with game design elements. Before the technical implementation, the gamification concept is created. However, creation of such concepts is an informal and error-prone process, i.e., the definition and exchange of game mechanics is done in natural language or using spreadsheets. This becomes especially relevant, if the gamification concept is handed over to the implementation phase in which IT-experts have to manually translate informal to formal concepts without having gamification expertise. In this paper, we describe a novel, declarative, and formal domain-specific language to define gamification concepts. Besides that the language is designed to be readable and partially write able by gamification experts, the language is automatically compilable into gamification platforms without involving IT-experts.",
"title": ""
},
{
"docid": "400be1fdbd0f1aebfb0da220fd62e522",
"text": "Understanding users' interactions with highly subjective content---like artistic images---is challenging due to the complex semantics that guide our preferences. On the one hand one has to overcome `standard' recommender systems challenges, such as dealing with large, sparse, and long-tailed datasets. On the other, several new challenges present themselves, such as the need to model content in terms of its visual appearance, or even social dynamics, such as a preference toward a particular artist that is independent of the art they create. In this paper we build large-scale recommender systems to model the dynamics of a vibrant digital art community, Behance, consisting of tens of millions of interactions (clicks and 'appreciates') of users toward digital art. Methodologically, our main contributions are to model (a) rich content, especially in terms of its visual appearance; (b) temporal dynamics, in terms of how users prefer 'visually consistent' content within and across sessions; and (c) social dynamics, in terms of how users exhibit preferences both towards certain art styles, as well as the artists themselves.",
"title": ""
},
{
"docid": "f9d8954e2061b5466e655552a5e13a24",
"text": "Sports tracking applications are increasingly available on the market, and research has recently picked up this topic. Tracking a user's running track and providing feedback on the performance are among the key features of such applications. However, little attention has been paid to the accuracy of the applications' localization measurements. In evaluating the nine currently most popular running applications, we found tremendous differences in the GPS measurements. Besides this finding, our study contributes to the scientific knowledge base by qualifying the findings of previous studies concerning accuracy with smartphones' GPS components.",
"title": ""
},
{
"docid": "0b407f1f4d771a34e6d0bc59bf2ef4c4",
"text": "Social advertisement is one of the fastest growing sectors in the digital advertisement landscape: ads in the form of promoted posts are shown in the feed of users of a social networking platform, along with normal social posts; if a user clicks on a promoted post, the host (social network owner) is paid a fixed amount from the advertiser. In this context, allocating ads to users is typically performed by maximizing click-through-rate, i.e., the likelihood that the user will click on the ad. However, this simple strategy fails to leverage the fact the ads can propagate virally through the network, from endorsing users to their followers. In this paper, we study the problem of allocating ads to users through the viral-marketing lens. Advertisers approach the host with a budget in return for the marketing campaign service provided by the host. We show that allocation that takes into account the propensity of ads for viral propagation can achieve significantly better performance. However, uncontrolled virality could be undesirable for the host as it creates room for exploitation by the advertisers: hoping to tap uncontrolled virality, an advertiser might declare a lower budget for its marketing campaign, aiming at the same large outcome with a smaller cost. This creates a challenging trade-off: on the one hand, the host aims at leveraging virality and the network effect to improve advertising efficacy, while on the other hand the host wants to avoid giving away free service due to uncontrolled virality. We formalize this as the problem of ad allocation with minimum regret, which we show is NP-hard and inapproximable w.r.t. any factor. However, we devise an algorithm that provides approximation guarantees w.r.t. the total budget of all advertisers. We develop a scalable version of our approximation algorithm, which we extensively test on four real-world data sets, confirming that our algorithm delivers high quality solutions, is scalable, and significantly outperforms several natural baselines.",
"title": ""
},
{
"docid": "f11dbf9c32b126de695801957171465c",
"text": "Continuum robots, which are composed of multiple concentric, precurved elastic tubes, can provide dexterity at diameters equivalent to standard surgical needles. Recent mechanics-based models of these “active cannulas” are able to accurately describe the curve of the robot in free space, given the preformed tube curves and the linear and angular positions of the tube bases. However, in practical applications, where the active cannula must interact with its environment or apply controlled forces, a model that accounts for deformation under external loading is required. In this paper, we apply geometrically exact rod theory to produce a forward kinematic model that accurately describes large deflections due to a general collection of externally applied point and/or distributed wrench loads. This model accommodates arbitrarily many tubes, with each having a general preshaped curve. It also describes the independent torsional deformation of the individual tubes. Experimental results are provided for both point and distributed loads. Average tip error under load was 2.91 mm (1.5% - 3% of total robot length), which is similar to the accuracy of existing free-space models.",
"title": ""
},
{
"docid": "1720517b913ce3974ab92239ff8a177e",
"text": "Honeypot is a closely monitored computer resource that emulates behaviors of production host within a network in order to lure and attract the attackers. The workability and effectiveness of a deployed honeypot depends on its technical configuration. Since honeypot is a resource that is intentionally made attractive to the attackers, it is crucial to make it intelligent and self-manageable. This research reviews at artificial intelligence techniques such as expert system and case-based reasoning, in order to build an intelligent honeypot.",
"title": ""
},
{
"docid": "a30de4a213fe05c606fb16d204b9b170",
"text": "– The recent work on cross-country regressions can be compared to looking at “a black cat in a dark room”. Whether or not all this work has accomplished anything on the substantive economic issues is a moot question. But the search for “a black cat ” has led to some progress on the econometric front. The purpose of this paper is to comment on this progress. We discuss the problems with the use of cross-country panel data in the context of two problems: The analysis of economic growth and that of the purchasing power parity (PPP) theory. A propos de l’emploi des méthodes de panel sur des données inter-pays RÉSUMÉ. – Les travaux récents utilisant des régressions inter-pays peuvent être comparés à la recherche d'« un chat noir dans une pièce sans lumière ». La question de savoir si ces travaux ont apporté quelque chose de significatif à la connaissance économique est assez controversée. Mais la recherche du « chat noir » a conduit à quelques progrès en économétrie. L'objet de cet article est de discuter de ces progrès. Les problèmes posés par l'utilisation de panels de pays sont discutés dans deux contextes : celui de la croissance économique et de la convergence d'une part ; celui de la théorie de la parité des pouvoirs d'achat d'autre part. * G.S. MADDALA: Department of Economics, The Ohio State University. I would like to thank M. NERLOVE, P. SEVESTRE and an anonymous referee for helpful comments. Responsability for the omissions and any errors is my own. ANNALES D’ÉCONOMIE ET DE STATISTIQUE. – N° 55-56 – 1999 « The Gods love the obscure and hate the obvious » BRIHADARANYAKA UPANISHAD",
"title": ""
},
{
"docid": "ecd144226fdb065c2325a0d3131fd802",
"text": "The unknown and the invisible exploit the unwary and the uninformed for illicit financial gain and reputation damage.",
"title": ""
},
{
"docid": "0cb0c5f181ef357cd81d4a290d2cbc14",
"text": "With 3D sensing becoming cheaper, environment-aware and visually-guided robot arms capable of safely working in collaboration with humans will become common. However, a reliable calibration is needed, both for camera internal calibration, as well as Eye-to-Hand calibration, to make sure the whole system functions correctly. We present a framework, using a novel combination of well proven methods, allowing a quick automatic calibration for the integration of systems consisting of the robot and a varying number of 3D cameras by using a standard checkerboard calibration grid. Our approach allows a quick camera-to-robot recalibration after any changes to the setup, for example when cameras or robot have been repositioned. Modular design of the system ensures flexibility regarding a number of sensors used as well as different hardware choices. The framework has been proven to work by practical experiments to analyze the quality of the calibration versus the number of positions of the checkerboard used for each of the calibration procedures.",
"title": ""
}
] |
scidocsrr
|
f0f07e5aec207f7edfc75e2136b028a7
|
Author ' s personal copy The role of RFID in agriculture : Applications , limitations and challenges
|
[
{
"docid": "dc67945b32b2810a474acded3c144f68",
"text": "This paper presents an overview of the eld of Intelligent Products. As Intelligent Products have many facets, this paper is mainly focused on the concept behind Intelligent Products, the technical foundations, and the achievable practical goals of Intelligent Products. A novel classi cation of Intelligent Products is introduced, which distinguishes between three orthogonal dimensions. Furthermore, the technical foundations in the areas of automatic identi cation and embedded processing, distributed information storage and processing, and agent-based systems are discussed, as well as the achievable practical goals in the contexts of manufacturing, supply chains, asset management, and product life cycle management.",
"title": ""
}
] |
[
{
"docid": "48168ed93d710d3b85b7015f2c238094",
"text": "ion and hierarchical information processing are hallmarks of human and animal intelligence underlying the unrivaled flexibility of behavior in biological systems. Achieving such flexibility in artificial systems is challenging, even with more and more computational power. Here, we investigate the hypothesis that abstraction and hierarchical information processing might in fact be the consequence of limitations in information-processing power. In particular, we study an information-theoretic framework of bounded rational decision-making that trades off utility maximization against information-processing costs. We apply the basic principle of this framework to perception-action systems with multiple information-processing nodes and derive bounded-optimal solutions. We show how the formation of abstractions and decision-making hierarchies depends on information-processing costs. We illustrate the theoretical ideas with example simulations and conclude by formalizing a mathematically unifying optimization principle that could potentially be extended to more complex systems.",
"title": ""
},
{
"docid": "0685c33de763bdedf2a1271198569965",
"text": "The use of virtual-reality technology in the areas of rehabilitation and therapy continues to grow, with encouraging results being reported for applications that address human physical, cognitive, and psychological functioning. This article presents a SWOT (Strengths, Weaknesses, Opportunities, and Threats) analysis for the field of VR rehabilitation and therapy. The SWOT analysis is a commonly employed framework in the business world for analyzing the factors that influence a company's competitive position in the marketplace with an eye to the future. However, the SWOT framework can also be usefully applied outside of the pure business domain. A quick check on the Internet will turn up SWOT analyses for urban-renewal projects, career planning, website design, youth sports programs, and evaluation of academic research centers, and it becomes obvious that it can be usefully applied to assess and guide any organized human endeavor designed to accomplish a mission. It is hoped that this structured examination of the factors relevant to the current and future status of VR rehabilitation will provide a good overview of the key issues and concerns that are relevant for understanding and advancing this vital application area.",
"title": ""
},
{
"docid": "10d8bbea398444a3fb6e09c4def01172",
"text": "INTRODUCTION\nRecent years have witnessed a growing interest in improving bus safety operations worldwide. While in the United States buses are considered relatively safe, the number of bus accidents is far from being negligible, triggering the introduction of the Motor-coach Enhanced Safety Act of 2011.\n\n\nMETHOD\nThe current study investigates the underlying risk factors of bus accident severity in the United States by estimating a generalized ordered logit model. Data for the analysis are retrieved from the General Estimates System (GES) database for the years 2005-2009.\n\n\nRESULTS\nResults show that accident severity increases: (i) for young bus drivers under the age of 25; (ii) for drivers beyond the age of 55, and most prominently for drivers over 65 years old; (iii) for female drivers; (iv) for very high (over 65 mph) and very low (under 20 mph) speed limits; (v) at intersections; (vi) because of inattentive and risky driving.",
"title": ""
},
{
"docid": "f47019a78ee833dcb8c5d15a4762ccf9",
"text": "It has recently been shown that Bondi-van der Burg-Metzner-Sachs supertranslation symmetries imply an infinite number of conservation laws for all gravitational theories in asymptotically Minkowskian spacetimes. These laws require black holes to carry a large amount of soft (i.e., zero-energy) supertranslation hair. The presence of a Maxwell field similarly implies soft electric hair. This Letter gives an explicit description of soft hair in terms of soft gravitons or photons on the black hole horizon, and shows that complete information about their quantum state is stored on a holographic plate at the future boundary of the horizon. Charge conservation is used to give an infinite number of exact relations between the evaporation products of black holes which have different soft hair but are otherwise identical. It is further argued that soft hair which is spatially localized to much less than a Planck length cannot be excited in a physically realizable process, giving an effective number of soft degrees of freedom proportional to the horizon area in Planck units.",
"title": ""
},
{
"docid": "2f1ba4ba5cff9a6e614aa1a781bf1b13",
"text": "Face information processing relies on the quality of data resource. From the data modality point of view, a face database can be 2D or 3D, and static or dynamic. From the task point of view, the data can be used for research of computer based automatic face recognition, face expression recognition, face detection, or cognitive and psychological investigation. With the advancement of 3D imaging technologies, 3D dynamic facial sequences (called 4D data) have been used for face information analysis. In this paper, we focus on the modality of 3D dynamic data for the task of facial expression recognition. We present a newly created high-resolution 3D dynamic facial expression database, which is made available to the scientific research community. The database contains 606 3D facial expression sequences captured from 101 subjects of various ethnic backgrounds. The database has been validated through our facial expression recognition experiment using an HMM based 3D spatio-temporal facial descriptor. It is expected that such a database shall be used to facilitate the facial expression analysis from a static 3D space to a dynamic 3D space, with a goal of scrutinizing facial behavior at a higher level of detail in a real 3D spatio-temporal domain.",
"title": ""
},
{
"docid": "70c6aaf0b0fc328c677d7cb2249b68bf",
"text": "In this paper, we discuss and review how combined multiview imagery from satellite to street level can benefit scene analysis. Numerous works exist that merge information from remote sensing and images acquired from the ground for tasks such as object detection, robots guidance, or scene understanding. What makes the combination of overhead and street-level images challenging are the strongly varying viewpoints, the different scales of the images, their illuminations and sensor modality, and time of acquisition. Direct (dense) matching of images on a per-pixel basis is thus often impossible, and one has to resort to alternative strategies that will be discussed in this paper. For such purpose, we review recent works that attempt to combine images taken from the ground and overhead views for purposes like scene registration, reconstruction, or classification. After the theoretical review, we present three recent methods to showcase the interest and potential impact of such fusion on real applications (change detection, image orientation, and tree cataloging), whose logic can then be reused to extend the use of ground-based images in remote sensing and vice versa. Through this review, we advocate that cross fertilization between remote sensing, computer vision, and machine learning is very valuable to make the best of geographic data available from Earth observation sensors and ground imagery. Despite its challenges, we believe that integrating these complementary data sources will lead to major breakthroughs in Big GeoData. It will open new perspectives for this exciting and emerging field.",
"title": ""
},
{
"docid": "b51fcfa32dbcdcbcc49f1635b44601ed",
"text": "An adjusted rank correlation test is proposed as a technique for identifying publication bias in a meta-analysis, and its operating characteristics are evaluated via simulations. The test statistic is a direct statistical analogue of the popular \"funnel-graph.\" The number of component studies in the meta-analysis, the nature of the selection mechanism, the range of variances of the effect size estimates, and the true underlying effect size are all observed to be influential in determining the power of the test. The test is fairly powerful for large meta-analyses with 75 component studies, but has only moderate power for meta-analyses with 25 component studies. However, in many of the configurations in which there is low power, there is also relatively little bias in the summary effect size estimate. Nonetheless, the test must be interpreted with caution in small meta-analyses. In particular, bias cannot be ruled out if the test is not significant. The proposed technique has potential utility as an exploratory tool for meta-analysts, as a formal procedure to complement the funnel-graph.",
"title": ""
},
{
"docid": "2956f80e896a660dbd268f9212e6d00f",
"text": "Writing as a productive skill in EFL classes is outstandingly significant. In writing classes there needs to be an efficient relationship between the teacher and students. The teacher as the only audience in many writing classes responds to students’ writing. In the early part of the 21 century the range of technologies available for use in classes has become very diverse and the ways they are being used in classrooms all over the world might affect the outcome we expect from our classes. As the present generations of students are using new technologies, the application of these recent technologies in classes might be useful. Using technology in writing classes provides opportunities for students to hand their written work to the teacher without the need for any face-to-face interaction. This present study investigates the effect of Edmodo on EFL learners’ writing performance. A quasi-experimental design was used in this study. The participants were 40 female advanced-level students attending advanced writing classes at Irana English Institute, Razan Hamedan. The focus was on the composition writing ability. The students were randomly assigned to two groups, experimental and control. Edmodo was used in the experimental group. Mann-Whitney U test was used for data analysis; the results indicated that the use of Edmodo in writing was more effective on EFL learners’ writing performance participating in this study.",
"title": ""
},
{
"docid": "1d8cd32e2a2748b9abd53cf32169d798",
"text": "Optimizing the weights of Artificial Neural Networks (ANNs) is a great important of a complex task in the research of machine learning due to dependence of its performance to the success of learning process and the training method. This paper reviews the implementation of meta-heuristic algorithms in ANNs’ weight optimization by studying their advantages and disadvantages giving consideration to some meta-heuristic members such as Genetic algorithim, Particle Swarm Optimization and recently introduced meta-heuristic algorithm called Harmony Search Algorithm (HSA). Also, the application of local search based algorithms to optimize the ANNs weights and their benefits as well as their limitations are briefly elaborated. Finally, a comparison between local search methods and global optimization methods is carried out to speculate the trends in the progresses of ANNs’ weight optimization in the current resrearch.",
"title": ""
},
{
"docid": "3ece1c9f619899d5bab03c24fd3cd34a",
"text": "A new technique for obtaining high performance, low power, radio direction finding (RDF) using a single receiver is presented. For man-portable applications, multichannel systems consume too much power, are too expensive, and are too heavy to easily be carried by a single individual. Most single channel systems are not accurate enough or do not provide the capability to listen while direction finding (DF) is being performed. By employing feedback in a pseudo-Doppler system via a vector modulator in the IF of a single receiver and an adaptive algorithm to control it, the accuracy of a pseudoDoppler system can be enhanced to the accuracy of an interferometer based system without the expense of a multichannel receiver. And, it will maintain audio listenthrough while direction finding is being performed all with a single inexpensive low power receiver. The use of these techniques provides performance not attainable by other single channel methods.",
"title": ""
},
{
"docid": "6ac3d776d686f873ab931071c75aeed2",
"text": "GridRPC, which is an RPC mechanism tailored for the Grid, is an attractive programming model for Grid computing. This paper reports on the design and implementation of a GridRPC programming system called Ninf-G. Ninf-G is a reference implementation of the GridRPC API which has been proposed for standardization at the Global Grid Forum. In this paper, we describe the design, implementations and typical usage of Ninf-G. A preliminary performance evaluation in both WAN and LAN environments is also reported. Implemented on top of the Globus Toolkit, Ninf-G provides a simple and easy programming interface based on standard Grid protocols and the API for Grid Computing. The overhead of remote procedure calls in Ninf-G is acceptable in both WAN and LAN environments.",
"title": ""
},
{
"docid": "f152838edb23a40e895dea2e1ee709d1",
"text": "We present two uncommon cases of adolescent girls with hair-thread strangulation of the labia minora. The first 14-year-old girl presented with a painful pedunculated labial lump (Fig. 1). The lesion was covered with exudate. She was examined under sedation and found a coil of long hair forming a tourniquet around a labial segment. Thread removal resulted to immediate relief from pain, and gradual return to normal appearance. Another 10-year-old girl presented with a similar labial swelling. The recent experience of the first case led us straight to the problem. A long hair-thread was found at the neck of the lesion. Hair removal resulted in settling of the pain. The labial swelling subsided in few days.",
"title": ""
},
{
"docid": "ef09bc08cc8e94275e652e818a0af97f",
"text": "The biosynthetic pathway of L-tartaric acid, the form most commonly encountered in nature, and its catabolic ties to vitamin C, remain a challenge to plant scientists. Vitamin C and L-tartaric acid are plant-derived metabolites with intrinsic human value. In contrast to most fruits during development, grapes accumulate L-tartaric acid, which remains within the berry throughout ripening. Berry taste and the organoleptic properties and aging potential of wines are intimately linked to levels of L-tartaric acid present in the fruit, and those added during vinification. Elucidation of the reactions relating L-tartaric acid to vitamin C catabolism in the Vitaceae showed that they proceed via the oxidation of L-idonic acid, the proposed rate-limiting step in the pathway. Here we report the use of transcript and metabolite profiling to identify candidate cDNAs from genes expressed at developmental times and in tissues appropriate for L-tartaric acid biosynthesis in grape berries. Enzymological analyses of one candidate confirmed its activity in the proposed rate-limiting step of the direct pathway from vitamin C to tartaric acid in higher plants. Surveying organic acid content in Vitis and related genera, we have identified a non-tartrate-forming species in which this gene is deleted. This species accumulates in excess of three times the levels of vitamin C than comparably ripe berries of tartrate-accumulating species, suggesting that modulation of tartaric acid biosynthesis may provide a rational basis for the production of grapes rich in vitamin C.",
"title": ""
},
{
"docid": "a1f05b8954434a782f9be3d9cd10bb8b",
"text": "Because of their avid use of new media and their increased spending power, children and teens have become primary targets of a new \"media and marketing ecosystem.\" The digital marketplace is undergoing rapid innovation as new technologies and software applications continue to reshape the media landscape and user behaviors. The advertising industry, in many instances led by food and beverage marketers, is purposefully exploiting the special relationship that youth have with new media, as online marketing campaigns create unprecedented intimacies between adolescents and the brands and products that now literally surround them.",
"title": ""
},
{
"docid": "38301e7db178d7072baf0226a1747c03",
"text": "We present an algorithm for ray tracing displacement maps that requires no additional storage over the base model. Displacement maps are rarely used in ray tracing due to the cost associated with storing and intersecting the displaced geometry. This is unfortunate because displacement maps allow the addition of large amounts of geometric complexity into models. Our method works for models composed of triangles with normals at the vertices. In addition, we discuss a special purpose displacement that creates a smooth surface that interpolates the triangle vertices and normals of a mesh. The combination allows relatively coarse models to be displacement mapped and ray traced effectively.",
"title": ""
},
{
"docid": "3e8535bc48ce88ba6103a68dd3ad1d5d",
"text": "This letter reports the concept and design of the active-braid, a novel bioinspired continuum manipulator with the ability to contract, extend, and bend in three-dimensional space with varying stiffness. The manipulator utilizes a flexible crossed-link helical array structure as its main supporting body, which is deformed by using two radial actuators and a total of six longitudinal tendons, analogously to the three major types of muscle layers found in muscular hydrostats. The helical array structure ensures that the manipulator behaves similarly to a constant volume structure (expanding while shortening and contracting while elongating). Numerical simulations and experimental prototypes are used in order to evaluate the feasibility of the concept.",
"title": ""
},
{
"docid": "e0f84798289c06abcacd14df1df4a018",
"text": "PARP inhibitors (PARPi), a cancer therapy targeting poly(ADP-ribose) polymerase, are the first clinically approved drugs designed to exploit synthetic lethality, a genetic concept proposed nearly a century ago. Tumors arising in patients who carry germline mutations in either BRCA1 or BRCA2 are sensitive to PARPi because they have a specific type of DNA repair defect. PARPi also show promising activity in more common cancers that share this repair defect. However, as with other targeted therapies, resistance to PARPi arises in advanced disease. In addition, determining the optimal use of PARPi within drug combination approaches has been challenging. Nevertheless, the preclinical discovery of PARPi synthetic lethality and the route to clinical approval provide interesting lessons for the development of other therapies. Here, we discuss current knowledge of PARP inhibitors and potential ways to maximize their clinical effectiveness.",
"title": ""
},
{
"docid": "62d63357923c5a7b1ea21b8448e3cba3",
"text": "This paper presents a monocular and purely vision based pedestrian trajectory tracking and prediction framework with integrated map-based hazard inference. In Advanced Driver Assistance systems research, a lot of effort has been put into pedestrian detection over the last decade, and several pedestrian detection systems are indeed showing impressive results. Considerably less effort has been put into processing the detections further. We present a tracking system for pedestrians, which based on detection bounding boxes tracks pedestrians and is able to predict their positions in the near future. The tracking system is combined with a module which, based on the car's GPS position acquires a map and uses the road information in the map to know where the car can drive. Then the system warns the driver about pedestrians at risk, by combining the information about hazardous areas for pedestrians with a probabilistic position prediction for all observed pedestrians.",
"title": ""
},
{
"docid": "931f8ada4fdf90466b0b9ff591fb67d1",
"text": "Cognition results from interactions among functionally specialized but widely distributed brain regions; however, neuroscience has so far largely focused on characterizing the function of individual brain regions and neurons therein. Here we discuss recent studies that have instead investigated the interactions between brain regions during cognitive processes by assessing correlations between neuronal oscillations in different regions of the primate cerebral cortex. These studies have opened a new window onto the large-scale circuit mechanisms underlying sensorimotor decision-making and top-down attention. We propose that frequency-specific neuronal correlations in large-scale cortical networks may be 'fingerprints' of canonical neuronal computations underlying cognitive processes.",
"title": ""
}
] |
scidocsrr
|
3cf3840371b5e9515a49b1c4f17bd44e
|
ICT Governance: A Reference Framework
|
[
{
"docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
}
] |
[
{
"docid": "33a9c1b32f211ea13a70b1ce577b71dc",
"text": "In this work, we propose a face recognition library, with the objective of lowering the implementation complexity of face recognition features on applications in general. The library is based on Convolutional Neural Networks; a special kind of Neural Network specialized for image data. We present the main motivations for the use of face recognition, as well as the main interface for using the library features. We describe the overall architecture structure of the library and evaluated it on a large scale scenario. The proposed library achieved an accuracy of 98.14% when using a required confidence of 90%, and an accuracy of 99.86% otherwise. Keywords—Artificial Intelligence, CNNs, Face Recognition, Image Recognition, Machine Learning, Neural Networks.",
"title": ""
},
{
"docid": "1876319faa49a402ded2af46a9fcd966",
"text": "One, and two, and three police persons spring out of the shadows Down the corner comes one more And we scream into that city night: \" three plus one makes four! \" Well, they seem to think we're disturbing the peace But we won't let them make us sad 'Cause kids like you and me baby, we were born to add Born To Add, Sesame Street (sung to the tune of Bruce Springsteen's Born to Run) to Ursula Preface In October 1996, I got a position as a research assistant working on the Twenty-One project. The project aimed at providing a software architecture that supports a multilingual community of people working on local Agenda 21 initiatives in exchanging ideas and publishing their work. Local Agenda 21 initiatives are projects of local governments, aiming at sustainable processes in environmental , human, and economic terms. The projects cover themes like combating poverty, protecting the atmosphere, human health, freshwater resources, waste management, education, etc. Documentation on local Agenda 21 initiatives are usually written in the language of the local government, very much unlike documentation on research in e.g. information retrieval for which English is the language of international communication. Automatic cross-language retrieval systems are therefore a helpful tool in the international cooperation between local governments. Looking back, I regret not being more involved in the non-technical aspects of the Twenty-One project. To make up for this loss, many of the examples in this thesis are taken from the project's domain. Working on the Twenty-One project convinced me that solutions to cross-language information retrieval should explicitly combine translation models and retrieval models into one unifying framework. Working in a language technology group, the use of language models seemed a natural choice. A choice that simplifies things considerably for that matter. The use of language models for information retrieval practically reduces ranking to simply adding the occurrences of terms: complex weighting algorithms are no longer needed. \" Born to add \" is therefore the motto of this thesis. By adding out loud, it hopefully annoys-no offence, and with all due respect-some of the well-established information retrieval approaches, like Bruce Stringbean and The Sesame Street Band annoys the Sesame Street police. Acknowledgements The research presented in this thesis is funded in part by the European Union projects Twenty-One, Pop-Eye and Olive, and the Telematics Institute project Druid. I am most grateful to Wessel Kraaij of TNO-TPD …",
"title": ""
},
{
"docid": "8e6efa696b960cf08cf1616efc123cbd",
"text": "SLAM (Simultaneous Localization and Mapping) for underwater vehicles is a challenging research topic due to the limitations of underwater localization sensors and error accumulation over long-term operations. Furthermore, acoustic sensors for mapping often provide noisy and distorted images or low-resolution ranging, while video images provide highly detailed images but are often limited due to turbidity and lighting. This paper presents a review of the approaches used in state-of-the-art SLAM techniques: Extended Kalman Filter SLAM (EKF-SLAM), FastSLAM, GraphSLAM and its application in underwater environments.",
"title": ""
},
{
"docid": "e6d4d23df1e6d21bd988ca462526fe15",
"text": "Reinforcement learning, driven by reward, addresses tasks by optimizing policies for expected return. Need the supervision be so narrow? Reward is delayed and sparse for many tasks, so we argue that reward alone is a difficult and impoverished signal for end-to-end optimization. To augment reward, we consider a range of self-supervised tasks that incorporate states, actions, and successors to provide auxiliary losses. These losses offer ubiquitous and instantaneous supervision for representation learning even in the absence of reward. While current results show that learning from reward alone is feasible, pure reinforcement learning methods are constrained by computational and data efficiency issues that can be remedied by auxiliary losses. Self-supervised pre-training improves the data efficiency and policy returns of end-to-end reinforcement learning.",
"title": ""
},
{
"docid": "d58425a613f9daea2677d37d007f640e",
"text": "Recently the improved bag of features (BoF) model with locality-constrained linear coding (LLC) and spatial pyramid matching (SPM) achieved state-of-the-art performance in image classification. However, only adopting SPM to exploit spatial information is not enough for satisfactory performance. In this paper, we use hierarchical temporal memory (HTM) cortical learning algorithms to extend this LLC & SPM based model. HTM regions consist of HTM cells are constructed to spatial pool the LLC codes. Each cell receives a subset of LLC codes, and adjacent subsets are overlapped so that more spatial information can be captured. Additionally, HTM cortical learning algorithms have two processes: learning phase which make the HTM cell only receive most frequent LLC codes, and inhibition phase which ensure that the output of HTM regions is sparse. The experimental results on Caltech 101 and UIUC-Sport dataset show the improvement on the original LLC & SPM based model.",
"title": ""
},
{
"docid": "ab2c0a23ed71295ee4aa51baf9209639",
"text": "An expert system to diagnose the main childhood diseases among the tweens is proposed. The diagnosis is made taking into account the symptoms that can be seen or felt. The childhood diseases have many common symptoms and some of them are very much alike. This creates many difficulties for the doctor to reach at a right decision or diagnosis. The proposed system can remove these difficulties and it is having knowledge of many childhood diseases. The proposed expert system is implemented using SWI-Prolog.",
"title": ""
},
{
"docid": "263ac34590609435b2a104a385f296ca",
"text": "Efficient computation of curvature-based energies is important for practical implementations of geometric modeling and physical simulation applications. Building on a simple geometric observation, we provide a version of a curvature-based energy expressed in terms of the Laplace operator acting on the embedding of the surface. The corresponding energy--being quadratic in positions--gives rise to a constant Hessian in the context of isometric deformations. The resulting isometric bending model is shown to significantly speed up common cloth solvers, and when applied to geometric modeling situations built onWillmore flow to provide runtimes which are close to interactive rates.",
"title": ""
},
{
"docid": "d82c11c5a6981f1d3496e0838519704d",
"text": "This paper presents a detailed study of the nonuniform bipolar conduction phenomenon under electrostatic discharge (ESD) events in single-finger NMOS transistors and analyzes its implications for the design of ESD protection for deep-submicron CMOS technologies. It is shown that the uniformity of the bipolar current distribution under ESD conditions is severely degraded depending on device finger width ( ) and significantly influenced by the substrate and gate-bias conditions as well. This nonuniform current distribution is identified as a root cause of the severe reduction in ESD failure threshold current for the devices with advanced silicided processes. Additionally, the concept of an intrinsic second breakdown triggering current ( 2 ) is introduced, which is substrate-bias independent and represents the maximum achievable ESD failure strength for a given technology. With this improved understanding of ESD behavior involved in advanced devices, an efficient design window can be constructed for robust deep submicron ESD protection.",
"title": ""
},
{
"docid": "89513d2cf137e60bf7f341362de2ba84",
"text": "In this paper, we present a visual analytics approach that provides decision makers with a proactive and predictive environment in order to assist them in making effective resource allocation and deployment decisions. The challenges involved with such predictive analytics processes include end-users' understanding, and the application of the underlying statistical algorithms at the right spatiotemporal granularity levels so that good prediction estimates can be established. In our approach, we provide analysts with a suite of natural scale templates and methods that enable them to focus and drill down to appropriate geospatial and temporal resolution levels. Our forecasting technique is based on the Seasonal Trend decomposition based on Loess (STL) method, which we apply in a spatiotemporal visual analytics context to provide analysts with predicted levels of future activity. We also present a novel kernel density estimation technique we have developed, in which the prediction process is influenced by the spatial correlation of recent incidents at nearby locations. We demonstrate our techniques by applying our methodology to Criminal, Traffic and Civil (CTC) incident datasets.",
"title": ""
},
{
"docid": "26abfdd9af796a2903b0f7cef235b3b4",
"text": "Argumentation mining is an advanced form of human language understanding by the machine. This is a challenging task for a machine. When sufficient explicit discourse markers are present in the language utterances, the argumentation can be interpreted by the machine with an acceptable degree of accuracy. However, in many real settings, the mining task is difficult due to the lack or ambiguity of the discourse markers, and the fact that a substantial amount of knowledge needed for the correct recognition of the argumentation, its composing elements and their relationships is not explicitly present in the text, but makes up the background knowledge that humans possess when interpreting language. In this article1 we focus on how the machine can automatically acquire the needed common sense and world knowledge. As very few research has been done in this respect, many of the ideas proposed in this article are tentative, but start being researched. We give an overview of the latest methods for human language understanding that map language to a formal knowledge representation that facilitates other tasks (for instance, a representation that is used to visualize the argumentation or that is easily shared in a decision or argumentation support system). Most current systems are trained on texts that are manually annotated. Then we go deeper into the new field of representation learning that nowadays is very much studied in computational linguistics. This field investigates methods for representing language as statistical concepts or as vectors, allowing straightforward methods of compositionality. The methods often use deep learning and its underlying neural network technologies to learn concepts from large text collections in an unsupervised way (i.e., without the need for manual annotations). We show how these methods can help the argumentation mining process, but also demonstrate that these methods need further research to automatically acquire the necessary background knowledge and more specifically common sense and world knowledge. We propose a number of ways to improve the learning of common sense and world knowledge by exploiting textual and visual data, and touch upon how we can integrate the learned knowledge in the argumentation mining process.",
"title": ""
},
{
"docid": "c049f188b31bbc482e16d22a8061abfa",
"text": "SDN deployments rely on switches that come from various vendors and differ in terms of performance and available features. Understanding these differences and performance characteristics is essential for ensuring successful deployments. In this paper we measure, report, and explain the performance characteristics of flow table updates in three hardware OpenFlow switches. Our results can help controller developers to make their programs efficient. Further, we also highlight differences between the OpenFlow specification and its implementations, that if ignored, pose a serious threat to network security and correctness.",
"title": ""
},
{
"docid": "2bf2e36bbbbdd9e091395636fcc2a729",
"text": "An open-source framework for real-time structured light is presented. It is called “SLStudio”, and enables real-time capture of metric depth images. The framework is modular, and extensible to support new algorithms for scene encoding/decoding, triangulation, and aquisition hardware. It is the aim that this software makes real-time 3D scene capture more widely accessible and serves as a foundation for new structured light scanners operating in real-time, e.g. 20 depth images per second and more. The use cases for such scanners are plentyfull, however due to the computational constraints, all public implementations so far are limited to offline processing. With “SLStudio”, we are making a platform available which enables researchers from many different fields to build application specific real time 3D scanners. The software is hosted at http://compute.dtu.dk/~jakw/slstudio.",
"title": ""
},
{
"docid": "6830ca98632f86ef2a0cb4c19183d9b4",
"text": "In success or failure of any firm/industry or organization employees plays the most vital and important role. Airline industry is one of service industry the job of which is to sell seats to their travelers/costumers and passengers; hence employees inspiration towards their work plays a vital part in serving client’s requirements. This research focused on the influence of employee’s enthusiasm and its apparatuses e.g. pay and benefits, working atmosphere, vision of organization towards customer satisfaction and management systems in Pakistani airline industry. For analysis correlation and regression methods were used. Results of the research highlighted that workers motivation and its four major components e.g. pay and benefits, working atmosphere, vision of organization and management systems have a significant positive impact on customer’s gratification. Those employees of the industry who directly interact with client highly impact the client satisfaction level. It is obvious from results of this research that pay and benefits performs a key role in employee’s motivation towards achieving their organizational objectives of greater customer satisfaction.",
"title": ""
},
{
"docid": "b27038accdabab12d8e0869aba20a083",
"text": "Two architectures that generalize convolutional neural networks (CNNs) for the processing of signals supported on graphs are introduced. We start with the selection graph neural network (GNN), which replaces linear time invariant filters with linear shift invariant graph filters to generate convolutional features and reinterprets pooling as a possibly nonlinear subsampling stage where nearby nodes pool their information in a set of preselected sample nodes. A key component of the architecture is to remember the position of sampled nodes to permit computation of convolutional features at deeper layers. The second architecture, dubbed aggregation GNN, diffuses the signal through the graph and stores the sequence of diffused components observed by a designated node. This procedure effectively aggregates all components into a stream of information having temporal structure to which the convolution and pooling stages of regular CNNs can be applied. A multinode version of aggregation GNNs is further introduced for operation in large-scale graphs. An important property of selection and aggregation GNNs is that they reduce to conventional CNNs when particularized to time signals reinterpreted as graph signals in a circulant graph. Comparative numerical analyses are performed in a source localization application over synthetic and real-world networks. Performance is also evaluated for an authorship attribution problem and text category classification. Multinode aggregation GNNs are consistently the best-performing GNN architecture.",
"title": ""
},
{
"docid": "6daa1bc00a4701a2782c1d5f82c518e2",
"text": "An 8-year-old Caucasian girl was referred with perineal bleeding of sudden onset during micturition. There was no history of trauma, fever or dysuria, but she had a history of constipation. Family history was unremarkable. Physical examination showed a prepubertal girl with a red ‘doughnut’-shaped lesion surrounding the urethral meatus (figure 1). Laboratory findings, including platelet count and coagulation, were normal. A vaginoscopy, performed using sedation, was negative. Swabs tested negative for sexually transmitted pathogens. A diagnosis of urethral prolapse (UP) was made on clinical appearance. Treatment with topical oestrogen cream was started and constipation treated with oral polyethylene glycol. On day 10, the bleeding stopped, and at week 5 there was a moderate regression of the UP. However, occasional mild bleeding persisted at 10 months, so she was referred to a urologist (figure 2). UP is an eversion of the distal urethral mucosa through the external meatus. It is most commonly seen in postmenopausal women and is uncommon in prepubertal girls. UP is rare in Caucasian children and more common in patients of African descent. 2 It may be asymptomatic or present with bleeding, spotting or urinary symptoms. The exact pathophysiological process of UP is unknown. Increased intra-abdominal pressure with straining, inadequate periurethral supporting tissue, neuromuscular dysfunction and a relative oestrogen deficiency are possible predisposing factors. Differential diagnoses include ureterocele, polyps, tumours and non-accidental injury. 3 Management options include conservative treatments such as tepid water baths and topical oestrogens. Surgery is indicated if bleeding, dysuria or pain persist. 5 Vaginoscopy in this case was possibly unnecessary, as there were no signs of trauma to the perineal area or other concerning signs or history of abuse. In the presence of typical UP, invasive diagnostic procedures should not be considered as first-line investigations and they should be reserved for cases of diagnostic uncertainty.",
"title": ""
},
{
"docid": "5deae44a9c14600b1a2460836ed9572d",
"text": "Grasping an object in a cluttered, unorganized environment is challenging because of unavoidable contacts and interactions between the robot and multiple immovable (static) and movable (dynamic) obstacles in the environment. Planning an approach trajectory for grasping in such situations can benefit from physics-based simulations that describe the dynamics of the interaction between the robot manipulator and the environment. In this work, we present a physics-based trajectory optimization approach for planning grasp approach trajectories. We present novel cost objectives and identify failure modes relevant to grasping in cluttered environments. Our approach uses rollouts of physics-based simulations to compute the gradient of the objective and of the dynamics. Our approach naturally generates behaviors such as choosing to push objects that are less likely to topple over, recognizing and avoiding situations which might cause a cascade of objects to fall over, and adjusting the manipulator trajectory to push objects aside in a direction orthogonal to the grasping direction. We present results in simulation for grasping in a variety of cluttered environments with varying levels of density of obstacles in the environment. Our experiments in simulation indicate that our approach outperforms a baseline approach that considers multiple straight-line trajectories modified to account for static obstacles by an aggregate success rate of 14% with varying degrees of object clutter.",
"title": ""
},
{
"docid": "68a5192778ae203ea1e31ba4e29b4330",
"text": "Mobile crowdsensing is becoming a vital technique for environment monitoring, infrastructure management, and social computing. However, deploying mobile crowdsensing applications in large-scale environments is not a trivial task. It creates a tremendous burden on application developers as well as mobile users. In this paper we try to reveal the barriers hampering the scale-up of mobile crowdsensing applications, and to offer our initial thoughts on the potential solutions to lowering the barriers.",
"title": ""
},
{
"docid": "14a90781132fa3932d41b21b382ba362",
"text": "In this paper, a prevalent type of zero-voltage- transition bidirectional converters is analyzed with the inclusion of the reverse recovery effect of the diodes. The main drawback of this type is missing the soft-switching condition of the main switches at operating duty cycles smaller than 0.5. As a result, soft-switching condition would be lost in one of the bidirectional converter operating modes (forward or reverse modes) since the duty cycles of the forward and reverse modes are complement of each other. Analysis shows that the rectifying diode reverse recovery would assist in providing the soft-switching condition for the duty cycles below 0.5, which is done by a proper design of the snubber capacitor and with no limitation on the rectifying diode current rate at turn-off. Hence, the problems associated with the soft-switching range and the reverse recovery of the rectifying diode are solved simultaneously, and soft-switching condition for both operating modes of the bidirectional converter is achieved with no extra auxiliary components and no complex control. The theoretical analysis for a bidirectional buck and boost converter is presented in detail, and the validity of the theoretical analysis is justified using the experimental results of a 250-W 135- to 200-V prototype converter.",
"title": ""
},
{
"docid": "67fb91119ba2464e883616ffd324f864",
"text": "Significant improvements in automobile suspension performance are achieved by active systems. However, current active suspension systems are too expensive and complex. Developments occurring in power electronics, permanent magnet materials, and microelectronic systems justifies analysis of the possibility of implementing electromagnetic actuators in order to improve the performance of automobile suspension systems without excessively increasing complexity and cost. In this paper, the layouts of hydraulic and electromagnetic active suspensions are compared. The actuator requirements are calculated, and some experimental results proving that electromagnetic suspension could become a reality in the future are shown.",
"title": ""
},
{
"docid": "e5c625ceaf78c66c2bfb9562970c09ec",
"text": "A continuing question in neural net research is the size of network needed to solve a particular problem. If training is started with too small a network for the problem no learning can occur. The researcher must then go through a slow process of deciding that no learning is taking place, increasing the size of the network and training again. If a network that is larger than required is used, then processing is slowed, particularly on a conventional von Neumann computer. An approach to this problem is discussed that is based on learning with a net which is larger than the minimum size network required to solve the problem and then pruning the solution network. The result is a small, efficient network that performs as well or better than the original which does not give a complete answer to the question, since the size of the initial network is still largely based on guesswork but it gives a very useful partial answer and sheds some light on the workings of a neural network in the process.<<ETX>>",
"title": ""
}
] |
scidocsrr
|
0ac76efb44bc30022c891168e76bdec6
|
UNIQ: Uniform Noise Injection for the Quantization of Neural Networks
|
[
{
"docid": "b9aa1b23ee957f61337e731611a6301a",
"text": "We propose DoReFa-Net, a method to train convolutional neural networks that have low bitwidth weights and activations using low bitwidth parameter gradients. In particular, during backward pass, parameter gradients are stochastically quantized to low bitwidth numbers before being propagated to convolutional layers. As convolutions during forward/backward passes can now operate on low bitwidth weights and activations/gradients respectively, DoReFa-Net can use bit convolution kernels to accelerate both training and inference. Moreover, as bit convolutions can be efficiently implemented on CPU, FPGA, ASIC and GPU, DoReFatNet opens the way to accelerate training of low bitwidth neural network on these hardware. Our experiments on SVHN and ImageNet datasets prove that DoReFa-Net can achieve comparable prediction accuracy as 32-bit counterparts. For example, a DoReFa-Net derived from AlexNet that has 1-bit weights, 2-bit activations, can be trained from scratch using 4-bit gradients to get 47% top-1 accuracy on ImageNet validation set.1 The DoReFa-Net AlexNet model is released publicly.",
"title": ""
}
] |
[
{
"docid": "a65b11ebb320e4883229f4a50d51ae2f",
"text": "Vast quantities of text are becoming available in electronic form, ranging from published documents (e.g., electronic dictionaries, encyclopedias, libraries and archives for information retrieval services), to private databases (e.g., marketing information, legal records, medical histories), to personal email and faxes. Online information services are reaching mainstream computer users. There were over 15 million Internet users in 1993, and projections are for 30 million in 1997. With media attention reaching all-time highs, hardly a day goes by without a new article on the National Information Infrastructure, digital libraries, networked services, digital convergence or intelligent agents. This attention is moving natural language processing along the critical path for all kinds of novel applications.",
"title": ""
},
{
"docid": "38382c04e7dc46f5db7f2383dcae11fb",
"text": "Motor schemas serve as the basic unit of behavior specification for the navigation of a mobile robot. They are multiple concurrent processes that operate in conjunction with associated perceptual schemas and contribute independently to the overall concerted action of the vehicle. The motivation behind the use of schemas for this domain is drawn from neuroscientific, psychological, and robotic sources. A variant of the potential field method is used to produce the appropriate velocity and steering commands for the robot. Simulation results and actual mobile robot experiments demonstrate the feasibility of this approach.",
"title": ""
},
{
"docid": "1cf9b5be1bc849a25a45123c95ac6217",
"text": "In the discipline of accounting, the resource-event-agent (REA) ontology is a well accepted conceptual accounting framework to analyze the economic phenomena within and across enterprises. Accordingly, it seems to be appropriate to use REA in the requirements elicitation to develop an information architecture of accounting and enterprise information systems. However, REA has received comparatively less attention in the field of business informatics and computer science. Some of the reasons may be that the REA ontology despite of its well grounded core concepts is (1) sometimes vague in the definition of the relationships between these core concepts, (2) misses a precise language to describe the models, and (3) does not come with an easy to understand graphical notation. Accordingly, we have started developing a domain specific modeling language specifically dedicated to REA models and corresponding tool support to overcome these limitations. In this paper we present our REA DSL which supports the basic set of REA concepts.",
"title": ""
},
{
"docid": "6534e22a4160d547094c0bb38588b5d5",
"text": "This paper presents the comparative analysis between constant duty cycle and Perturb & Observe (P&O) algorithm for extracting the power from Photovoltaic Array (PVA). Because of nonlinear characteristics of PV cell, the maximum power can be extract under particular voltage condition. Therefore, Maximum Power Point Tracking (MPPT) algorithms are used in PVA to maximize the output power. In this paper the MPPT algorithm is implemented using Ćuk converter. The dynamics of PVA is simulated at different solar irradiance and cell temperature. The P&O MPPT technique is a direct control method enables ease to implement and less complexity.",
"title": ""
},
{
"docid": "d69b8c991e66ff274af63198dba2ee01",
"text": "Nowadays, there are two significant tendencies, how to process the enormous amount of data, big data, and how to deal with the green issues related to sustainability and environmental concerns. An interesting question is whether there are inherent correlations between the two tendencies in general. To answer this question, this paper firstly makes a comprehensive literature survey on how to green big data systems in terms of the whole life cycle of big data processing, and then this paper studies the relevance between big data and green metrics and proposes two new metrics, effective energy efficiency and effective resource efficiency in order to bring new views and potentials of green metrics for the future times of big data.",
"title": ""
},
{
"docid": "9cf3df49790c1d2107035ef868f8be1e",
"text": "As computational thinking becomes a fundamental skill for the 21st century, K-12 teachers should be exposed to computing principles. This paper describes the implementation and evaluation of a computational thinking module in a required course for elementary and secondary education majors. We summarize the results from open-ended and multiple-choice questionnaires given both before and after the module to assess the students' attitudes toward and understanding of computational thinking. The results suggest that given relevant information about computational thinking, education students' attitudes toward computer science becomes more favorable and they will be more likely to integrate computing principles in their future teaching.",
"title": ""
},
{
"docid": "669c6fee3153c88a8e8a35d6331a11ca",
"text": "We present a method for classifying products into a set of known categories by using supervised learning. That is, given a product with accompanying informational details such as name and descriptions, we group the product into a particular category with similar products, e.g., ‘Electronics’ or ‘Automotive’. To do this, we analyze product catalog information from different distributors on Amazon.com to build features for a classifier. Our implementation results show significant improvement over baseline results. Taking into particular criteria, our implementation is potentially able to substantially increase automation of categorization of products. General Terms Supervised and Unsupervised Learning, E-Commerce",
"title": ""
},
{
"docid": "ccf7390abc2924e4d2136a2b82639115",
"text": "The proposition of increased innovation in network applications and reduced cost for network operators has won over the networking world to the vision of software-defined networking (SDN). With the excitement of holistic visibility across the network and the ability to program network devices, developers have rushed to present a range of new SDN-compliant hardware, software, and services. However, amidst this frenzy of activity, one key element has only recently entered the debate: Network Security. In this paper, security in SDN is surveyed presenting both the research community and industry advances in this area. The challenges to securing the network from the persistent attacker are discussed, and the holistic approach to the security architecture that is required for SDN is described. Future research directions that will be key to providing network security in SDN are identified.",
"title": ""
},
{
"docid": "339aa2d53be2cf1215caa142ad5c58d2",
"text": "A true random number generator (TRNG) is an important component in cryptographic systems. Designing a fast and secure TRNG in an FPGA is a challenging task. In this paper we analyze the TRNG designed by Sunar et al. based on XOR of the outputs of many oscillator rings. We propose an enhanced TRNG that does not require post-processing to pass statistical tests and with better randomness characteristics on the output. We have shown by experiment that the frequencies of the equal length oscillator rings in the TRNG are not identical but different due to the placement of the inverters in the FPGA. We have implemented our proposed TRNG in an Altera Cyclone II FPGA. Our implementation has passed the NIST and DIEHARD statistical tests with a throughput of 100 Mbps and with a usage of less than 100 logic elements in the FPGA.",
"title": ""
},
{
"docid": "b4352773c64dea1e8d354dad0cd76dfa",
"text": "Objective: to describe the epidemiological and sociodemographic characteristics of patients hospitalized in an ICU. Method: an epidemiological, descriptive and retrospective study. Population: 695 patients admitted from January to December 2011. The data collected were statistically analyzed with both absolute and relative frequency distribution. Results: 61.6% of the patients are male, aged 40 to 69 years, and most of them came from the surgery rooms. The most frequent reason for admission was diseases of the circulatory system (23.3%). At discharge from the ICU, 72.4% of the patients were sent to other units of the same institution, 31.1% to the intermediate care unit, and 20.4% died, of which 24.6% from diseases of the circulatory system. The afternoon shift had 45.8% of the admissions and 53.3% of the discharges. Conclusion: the description of the sociodemographic and epidemiological features guides the planning of nursing actions, providing a better quality service.",
"title": ""
},
{
"docid": "55a1bedc3aa007a4e8bbc77d6f710d7f",
"text": "The purpose of the present study was to develop and validate a self-report instrument that measures the nature of the coach-athlete relationship. Jowett et al.'s (Jowett & Meek, 2000; Jowett, in press) qualitative case studies and relevant literature were used to generate items for an instrument that measures affective, cognitive, and behavioral aspects of the coach-athlete relationship. Two studies were carried out in an attempt to assess content, predictive, and construct validity, as well as internal consistency, of the Coach-Athlete Relationship Questionnaire (CART-Q), using two independent British samples. Principal component analysis and confirmatory factor analysis were used to reduce the number of items, identify principal components, and confirm the latent structure of the CART-Q. Results supported the multidimensional nature of the coach-athlete relationship. The latent structure of the CART-Q was underlined by the latent variables of coaches' and athletes' Closeness (emotions), Commitment (cognitions), and Complementarity (behaviors).",
"title": ""
},
{
"docid": "b3eefd1fa34f0eb02541b598881396f9",
"text": "We present a complete scalable system for 6 d.o.f. camera tracking based on natural features. Crucially, the calculation is based only on pre-captured reference images and previous estimates of the camera pose and is hence suitable for online applications. We match natural features in the current frame to two spatially separated reference images. We overcome the wide baseline matching problem by matching to the previous frame and transferring point positions to the reference images. We then minimize deviations from the two-view and three-view constraints between the reference images and the current frame as a function of the camera position parameters. We stabilize this calculation using a recursive form of temporal regularization that is similar in spirit to the Kalman filter. We can track camera pose over hundreds of frames and realistically integrate virtual objects with only slight jitter.",
"title": ""
},
{
"docid": "109525927d05ea8dcf4e2785204895f3",
"text": "Information network embedding is an effective way for efficient graph analytics. However, it still faces with computational challenges in problems such as link prediction and node recommendation, particularly with increasing scale of networks. Hashing is a promising approach for accelerating these problems by orders of magnitude. However, no prior studies have been focused on seeking binary codes for information networks to preserve high-order proximity. Since matrix factorization (MF) unifies and outperforms several well-known embedding methods with high-order proximity preserved, we propose a MF-based \\underlineI nformation \\underlineN etwork \\underlineH ashing (INH-MF) algorithm, to learn binary codes which can preserve high-order proximity. We also suggest Hamming subspace learning, which only updates partial binary codes each time, to scale up INH-MF. We finally evaluate INH-MF on four real-world information network datasets with respect to the tasks of node classification and node recommendation. The results demonstrate that INH-MF can perform significantly better than competing learning to hash baselines in both tasks, and surprisingly outperforms network embedding methods, including DeepWalk, LINE and NetMF, in the task of node recommendation. The source code of INH-MF is available online\\footnote\\urlhttps://github.com/DefuLian/network .",
"title": ""
},
{
"docid": "7697aa5665f4699f2000779db2b0d24f",
"text": "The majority of smart devices used nowadays (e.g., smartphones, laptops, tablets) is capable of both Wi-Fi and Bluetooth wireless communications. Both network interfaces are identified by a unique 48-bits MAC address, assigned during the manufacturing process and unique worldwide. Such addresses, fundamental for link-layer communications and contained in every frame transmitted by the device, can be easily collected through packet sniffing and later used to perform higher level analysis tasks (user tracking, crowd density estimation, etc.). In this work we propose a system to pair the Wi-Fi and Bluetooth MAC addresses belonging to a physical unique device, starting from packets captured through a network of wireless sniffers. We propose several algorithms to perform such a pairing and we evaluate their performance through experiments in a controlled scenario. We show that the proposed algorithms can pair the MAC addresses with good accuracy. The findings of this paper may be useful to improve the precision of indoor localization and crowd density estimation systems and open some questions on the privacy issues of Wi-Fi and Bluetooth enabled devices.",
"title": ""
},
{
"docid": "93fad9723826fcf99ae229b4e7298a31",
"text": "In this work, we provide the first construction of Attribute-Based Encryption (ABE) for general circuits. Our construction is based on the existence of multilinear maps. We prove selective security of our scheme in the standard model under the natural multilinear generalization of the BDDH assumption. Our scheme achieves both Key-Policy and Ciphertext-Policy variants of ABE. Our scheme and its proof of security directly translate to the recent multilinear map framework of Garg, Gentry, and Halevi.",
"title": ""
},
{
"docid": "2d0765e6b695348dea8822f695dcbfa1",
"text": "Social networks are currently gaining increasing impact especially in the light of the ongoing growth of web-based services like facebook.com. A central challenge for the social network analysis is the identification of key persons within a social network. In this context, the article aims at presenting the current state of research on centrality measures for social networks. In view of highly variable findings about the quality of various centrality measures, we also illustrate the tremendous importance of a reflected utilization of existing centrality measures. For this purpose, the paper analyzes five common centrality measures on the basis of three simple requirements for the behavior of centrality measures.",
"title": ""
},
{
"docid": "30ae1d2d45e11c8f6212ff0a54abec7a",
"text": "This paper describes the fifth year of the Sentiment Analysis in Twitter task. SemEval-2017 Task 4 continues with a rerun of the subtasks of SemEval-2016 Task 4, which include identifying the overall sentiment of the tweet, sentiment towards a topic with classification on a twopoint and on a five-point ordinal scale, and quantification of the distribution of sentiment towards a topic across a number of tweets: again on a two-point and on a five-point ordinal scale. Compared to 2016, we made two changes: (i) we introduced a new language, Arabic, for all subtasks, and (ii) we made available information from the profiles of the Twitter users who posted the target tweets. The task continues to be very popular, with a total of 48 teams participating this year.",
"title": ""
},
{
"docid": "4d26d3823e3889c22fe517857a49d508",
"text": "As an object moves through the field of view of a camera, the images of the object may change dramatically. This is not simply due to the translation of the object across the image plane. Rather, complications arise due to the fact that the object undergoes changes in pose relative to the viewing camera, changes in illumination relative to light sources, and may even become partially or fully occluded. In this paper, we develop an efficient, general framework for object tracking—one which addresses each of these complications. We first develop a computationally efficient method for handling the geometric distortions produced by changes in pose. We then combine geometry and illumination into an algorithm that tracks large image regions using no more computation than would be required to track with no accommodation for illumination changes. Finally, we augment these methods with techniques from robust statistics and treat occluded regions on the object as statistical outliers. Throughout, we present experimental results performed on live video sequences demonstrating the effectiveness and efficiency of our methods.",
"title": ""
},
{
"docid": "27316b23e7a7cd163abd40f804caf61b",
"text": "Attention based recurrent neural networks (RNN) have shown a great success for question answering (QA) in recent years. Although significant improvements have been achieved over the non-attentive models, the position information is not well studied within the attention-based framework. Motivated by the effectiveness of using the word positional context to enhance information retrieval, we assume that if a word in the question (i.e., question word) occurs in an answer sentence, the neighboring words should be given more attention since they intuitively contain more valuable information for question answering than those far away. Based on this assumption, we propose a positional attention based RNN model, which incorporates the positional context of the question words into the answers' attentive representations. Experiments on two benchmark datasets show the great advantages of our proposed model. Specifically, we achieve a maximum improvement of 8.83% over the classical attention based RNN model in terms of mean average precision. Furthermore, our model is comparable to if not better than the state-of-the-art approaches for question answering.",
"title": ""
}
] |
scidocsrr
|
a29aaf59db2df5191a0b2cf777dd87d5
|
LSDSem 2017 Shared Task: The Story Cloze Test
|
[
{
"docid": "da816b4a0aea96feceefe22a67c45be4",
"text": "Representation and learning of commonsense knowledge is one of the foundational problems in the quest to enable deep language understanding. This issue is particularly challenging for understanding casual and correlational relationships between events. While this topic has received a lot of interest in the NLP community, research has been hindered by the lack of a proper evaluation framework. This paper attempts to address this problem with a new framework for evaluating story understanding and script learning: the ‘Story Cloze Test’. This test requires a system to choose the correct ending to a four-sentence story. We created a new corpus of 50k five-sentence commonsense stories, ROCStories, to enable this evaluation. This corpus is unique in two ways: (1) it captures a rich set of causal and temporal commonsense relations between daily events, and (2) it is a high quality collection of everyday life stories that can also be used for story generation. Experimental evaluation shows that a host of baselines and state-of-the-art models based on shallow language understanding struggle to achieve a high score on the Story Cloze Test. We discuss these implications for script and story learning, and offer suggestions for deeper language understanding.",
"title": ""
}
] |
[
{
"docid": "fadcc9f3d85e6409636b27d4487af309",
"text": "AIM\nThis is to report the case of a ten year old child affected by a numeric dental anomaly showing the pathologic condition characterised by the simultaneous presence of supernumerary and supplemental teeth. The anomaly was analysed to plan the best surgical and orthodontic treatments.\n\n\nCASE REPORT\nDental history, clinical and instrumental examinations were made to perform a correct orthodontic examination and diagnosis. A young patient was affected by numeric dental anomaly in the upper jaw. We observed a high number of teeth, specifically two normally formed supplemental lateral permanent incisors and an unerupted mesiodens placed between the upper central incisors. Firstly, the supplemental lateral teeth were extracted. This surgical therapy and the application of a space maintainer were made to permit the eruption of the permanent canines. Then the mesiodens also underwent surgical treatment (i.e. extraction). Eventually, physiologic eruption of permanent teeth was allowed by the planned surgical-orthodontic treatment.\n\n\nDISCUSSION\nAim of the surgical-orthodontic treatment was extraction of the unerupted supernumerary teeth to obtain the physiologic eruption of the permanent ones. Orthodontic treatment is important to solve malocclusions and maintaining the space for the eruption of permanent teeth.\n\n\nCONCLUSION\nAesthetics and function are two important parameters in modern dentistry. All clinicians should try to make a correct and rational diagnosis for both simple and complex dental pathologies. Particularly in young children, invasive and surgical disinclusive techniques can be substituted by interceptive orthodontic treatments.",
"title": ""
},
{
"docid": "b5e7cabce6982aa3b1a198d76524e0c5",
"text": "BACKGROUND\nAdvancements in technology have always had major impacts in medicine. The smartphone is one of the most ubiquitous and dynamic trends in communication, in which one's mobile phone can also be used for communicating via email, performing Internet searches, and using specific applications. The smartphone is one of the fastest growing sectors in the technology industry, and its impact in medicine has already been significant.\n\n\nOBJECTIVE\nTo provide a comprehensive and up-to-date summary of the role of the smartphone in medicine by highlighting the ways in which it can enhance continuing medical education, patient care, and communication. We also examine the evidence base for this technology.\n\n\nMETHODS\nWe conducted a review of all published uses of the smartphone that could be applicable to the field of medicine and medical education with the exclusion of only surgical-related uses.\n\n\nRESULTS\nIn the 60 studies that were identified, we found many uses for the smartphone in medicine; however, we also found that very few high-quality studies exist to help us understand how best to use this technology.\n\n\nCONCLUSIONS\nWhile the smartphone's role in medicine and education appears promising and exciting, more high-quality studies are needed to better understand the role it will have in this field. We recommend popular smartphone applications for physicians that are lacking in evidence and discuss future studies to support their use.",
"title": ""
},
{
"docid": "dc92e3feb9ea6a20d73962c0905f623b",
"text": "Software maintenance consumes around 70% of the software life cycle. Improving software maintainability could save software developers significant time and money. This paper examines whether the pattern of dependency injection significantly reduces dependencies of modules in a piece of software, therefore making the software more maintainable. This hypothesis is tested with 20 sets of open source projects from sourceforge.net, where each set contains one project that uses the pattern of dependency injection and one similar project that does not use the pattern. The extent of the dependency injection use in each project is measured by a new Number of DIs metric created specifically for this analysis. Maintainability is measured using coupling and cohesion metrics on each project, then performing statistical analysis on the acquired results. After completing the analysis, no correlation was evident between the use of dependency injection and coupling and cohesion numbers. However, a trend towards lower coupling numbers in projects with a dependency injection count of 10% or more was observed.",
"title": ""
},
{
"docid": "5fc29eb195cb68768e8f79a81e64c214",
"text": "Word embeddings, which represent a word as a point in a vector space, have become ubiquitous to several NLP tasks. A recent line of work uses bilingual (two languages) corpora to learn a different vector for each sense of a word, by exploiting crosslingual signals to aid sense identification. We present a multi-view Bayesian non-parametric algorithm which improves multi-sense word embeddings by (a) using multilingual (i.e., more than two languages) corpora to significantly improve sense embeddings beyond what one achieves with bilingual information, and (b) uses a principled approach to learn a variable number of senses per word, in a data-driven manner. Ours is the first approach with the ability to leverage multilingual corpora efficiently for multi-sense representation learning. Experiments show that multilingual training significantly improves performance over monolingual and bilingual training, by allowing us to combine different parallel corpora to leverage multilingual context. Multilingual training yields comparable performance to a state of the art monolingual model trained on five times more training data.",
"title": ""
},
{
"docid": "92daaebd657bda6ea340893d8608f459",
"text": "Many crimes can happen every day in a major city, and figuring out which ones are committed by the same individual or group is an important and difficult data mining challenge. To do this, we propose a pattern detection algorithm called Series Finder, that grows a pattern of discovered crimes from within a database, starting from a “seed” of a few crimes. Series Finder incorporates both the common characteristics of all patterns and the unique aspects of each specific pattern. We compared Series Finder with classic clustering and classification models applied to crime analysis. It has promising results on a decade’s worth of crime pattern data from the Cambridge Police Department.",
"title": ""
},
{
"docid": "b8b1c342a2978f74acd38bed493a77a5",
"text": "With the rapid growth of battery-powered portable electronics, an efficient power management solution is necessary for extending battery life. Generally, basic switching regulators, such as buck and boost converters, may not be capable of using the entire battery output voltage range (e.g., 2.5-4.7 V for Li-ion batteries) to provide a fixed output voltage (e.g., 3.3 V). In this paper, an average-current-mode noninverting buck-boost dc-dc converter is proposed. It is not only able to use the full output voltage range of a Li-ion battery, but it also features high power efficiency and excellent noise immunity. The die area of this chip is 2.14 × 1.92 mm2, fabricated by using TSMC 0.35 μm 2P4M 3.3 V/5 V mixed-signal polycide process. The input voltage of the converter may range from 2.3 to 5 V with its output voltage set to 3.3 V, and its switching frequency is 500 kHz. Moreover, it can provide up to 400-mA load current, and the maximal measured efficiency is 92.01%.",
"title": ""
},
{
"docid": "ef92f3f230a7eedee7555b5fc35f5558",
"text": "Smart home technologies offer potential benefits for assisting clinicians by automating health monitoring and well-being assessment. In this paper, we examine the actual benefits of smart home-based analysis by monitoring daily behavior in the home and predicting clinical scores of the residents. To accomplish this goal, we propose a clinical assessment using activity behavior (CAAB) approach to model a smart home resident's daily behavior and predict the corresponding clinical scores. CAAB uses statistical features that describe characteristics of a resident's daily activity performance to train machine learning algorithms that predict the clinical scores. We evaluate the performance of CAAB utilizing smart home sensor data collected from 18 smart homes over two years. We obtain a statistically significant correlation ( r=0.72) between CAAB-predicted and clinician-provided cognitive scores and a statistically significant correlation (r=0.45) between CAAB-predicted and clinician-provided mobility scores. These prediction results suggest that it is feasible to predict clinical scores using smart home sensor data and learning-based data analysis.",
"title": ""
},
{
"docid": "738f60fbfe177eec52057c8e5ab43e55",
"text": "From social science to biology, numerous applications often rely on graphlets for intuitive and meaningful characterization of networks at both the global macro-level as well as the local micro-level. While graphlets have witnessed a tremendous success and impact in a variety of domains, there has yet to be a fast and efficient approach for computing the frequencies of these subgraph patterns. However, existing methods are not scalable to large networks with millions of nodes and edges, which impedes the application of graphlets to new problems that require large-scale network analysis. To address these problems, we propose a fast, efficient, and parallel algorithm for counting graphlets of size k={3,4}-nodes that take only a fraction of the time to compute when compared with the current methods used. The proposed graphlet counting algorithms leverages a number of proven combinatorial arguments for different graphlets. For each edge, we count a few graphlets, and with these counts along with the combinatorial arguments, we obtain the exact counts of others in constant time. On a large collection of 300+ networks from a variety of domains, our graphlet counting strategies are on average 460x faster than current methods. This brings new opportunities to investigate the use of graphlets on much larger networks and newer applications as we show in the experiments. To the best of our knowledge, this paper provides the largest graphlet computations to date as well as the largest systematic investigation on over 300+ networks from a variety of domains.",
"title": ""
},
{
"docid": "35d3dcb77620a69388e90318085c744d",
"text": "2-D face recognition in the presence of large pose variations presents a significant challenge. When comparing a frontal image of a face to a near profile image, one must cope with large occlusions, non-linear correspondences, and significant changes in appearance due to viewpoint. Stereo matching has been used to handle these problems, but performance of this approach degrades with large pose changes. We show that some of this difficulty is due to the effect that foreshortening of slanted surfaces has on window-based matching methods, which are needed to provide robustness to lighting change. We address this problem by designing a new, dynamic programming stereo algorithm that accounts for surface slant. We show that on the CMU PIE dataset this method results in significant improvements in recognition performance.",
"title": ""
},
{
"docid": "495be81dda82d3e4d90a34b6716acf39",
"text": "Botnets such as Conficker and Torpig utilize high entropy domains for fluxing and evasion. Bots may query a large number of domains, some of which may fail. In this paper, we present techniques where the failed domain queries (NXDOMAIN) may be utilized for: (i) Speeding up the present detection strategies which rely only on successful DNS domains. (ii) Detecting Command and Control (C&C) server addresses through features such as temporal correlation and information entropy of both successful and failed domains. We apply our technique to a Tier-1 ISP dataset obtained from South Asia, and a campus DNS trace, and thus validate our methods by detecting Conficker botnet IPs and other anomalies with a false positive rate as low as 0.02%. Our technique can be applied at the edge of an autonomous system for real-time detection.",
"title": ""
},
{
"docid": "71cf493e0026fe057b1100c5ad1118ad",
"text": "We explore story generation: creative systems that can build coherent and fluent passages of text about a topic. We collect a large dataset of 300K human-written stories paired with writing prompts from an online forum. Our dataset enables hierarchical story generation, where the model first generates a premise, and then transforms it into a passage of text. We gain further improvements with a novel form of model fusion that improves the relevance of the story to the prompt, and adding a new gated multi-scale self-attention mechanism to model long-range context. Experiments show large improvements over strong baselines on both automated and human evaluations. Human judges prefer stories generated by our approach to those from a strong non-hierarchical model by a factor of two to one.",
"title": ""
},
{
"docid": "22df80330dd228749fdf743ee041138c",
"text": "This paper introduces novel architecture for Radix-10 decimal multiplier. The new generation of high-performance decimal floating-point units (DFUs) is demanding efficient implementations of parallel decimal multiplier. The parallel generation of partial products is performed using signed-digit radix-10 recoding of the multiplier and a simplified set of multiplicand multiples. The reduction of partial products is implemented in a tree structure based on a new algorithm decimal multioperand carry-save addition that uses a unconventional decimal-coded number systems. We further detail these techniques and it significantly improves the area and latency of the previous design, which include: optimized digit recoders, decimal carry-save adders (CSA's) combining different decimal-coded operands, and carry free adders implemented by special designed bit counters. Keywords— Decimal computer arithmetic, parallel decimal multiplication, partial product generation and reduction, Decimal carry-save addition.",
"title": ""
},
{
"docid": "f1a1bfe24eb812a1710a6ef3f2d50dc9",
"text": "BACKGROUND\nDepression is a highly prevalent disorder associated with reduced social functioning, impaired quality of life, and increased mortality. Music therapy has been used in the treatment of a variety of mental disorders, but its impact on those with depression is unclear.\n\n\nOBJECTIVES\nTo examine the efficacy of music therapy with standard care compared to standard care alone among people with depression and to compare the effects of music therapy for people with depression against other psychological or pharmacological therapies.\n\n\nSEARCH STRATEGY\nCCDANCTR-Studies and CCDANCTR-References were searched on 7/11/2007, MEDLINE, PsycINFO, EMBASE, PsycLit, PSYindex, and other relevant sites were searched in November 2006. Reference lists of retrieved articles were hand searched, as well as specialist music and arts therapies journals.\n\n\nSELECTION CRITERIA\nAll randomised controlled trials comparing music therapy with standard care or other interventions for depression.\n\n\nDATA COLLECTION AND ANALYSIS\nData on participants, interventions and outcomes were extracted and entered onto a database independently by two review authors. The methodological quality of each study was also assessed independently by two review authors. The primary outcome was reduction in symptoms of depression, based on a continuous scale.\n\n\nMAIN RESULTS\nFive studies met the inclusion criteria of the review. Marked variations in the interventions offered and the populations studied meant that meta-analysis was not appropriate. Four of the five studies individually reported greater reduction in symptoms of depression among those randomised to music therapy than to those in standard care conditions. The fifth study, in which music therapy was used as an active control treatment, reported no significant change in mental state for music therapy compared with standard care. Dropout rates from music therapy conditions appeared to be low in all studies.\n\n\nAUTHORS' CONCLUSIONS\nFindings from individual randomised trials suggest that music therapy is accepted by people with depression and is associated with improvements in mood. However, the small number and low methodological quality of studies mean that it is not possible to be confident about its effectiveness. High quality trials evaluating the effects of music therapy on depression are required.",
"title": ""
},
{
"docid": "f02b44ff478952f1958ba33d8a488b8e",
"text": "Plagiarism is an illicit act of using other’s work wholly or partially as one’s own in any field such as art, poetry literature, cinema, research and other creative forms of study. It has become a serious crime in academia and research fields and access to wide range of resources on the internet has made the situation even worse. Therefore, there is a need for automatic detection of plagiarism in text. This paper presents a survey of various plagiarism detection techniques used for different languages.",
"title": ""
},
{
"docid": "6470c8a921a9095adb96afccaa0bf97b",
"text": "Complex tasks with a visually rich component, like diagnosing seizures based on patient video cases, not only require the acquisition of conceptual but also of perceptual skills. Medical education has found that besides biomedical knowledge (knowledge of scientific facts) clinical knowledge (actual experience with patients) is crucial. One important aspect of clinical knowledge that medical education has hardly focused on, yet, are perceptual skills, like visually searching, detecting, and interpreting relevant features. Research on instructional design has shown that in a visually rich, but simple classification task perceptual skills could be conveyed by means of showing the eye movements of a didactically behaving expert. The current study applied this method to medical education in a complex task. This was done by example video cases, which were verbally explained by an expert. In addition the experimental groups saw a display of the expert’s eye movements recorded, while he performed the task. Results show that blurring non-attended areas of the expert enhances diagnostic performance of epileptic seizures by medical students in contrast to displaying attended areas as a circle and to a control group without attention guidance. These findings show that attention guidance fosters learning of perceptual aspects of clinical knowledge, if implemented in a spotlight manner.",
"title": ""
},
{
"docid": "3eeacf0fb315910975e5ff0ffc4fe800",
"text": "Social networks are rich in various kinds of contents such as text and multimedia. The ability to apply text mining algorithms effectively in the context of text data is critical for a wide variety of applications. Social networks require text mining algorithms for a wide variety of applications such as keyword search, classi cation, and clustering. While search and classi cation are well known applications for a wide variety of scenarios, social networks have a much richer structure both in terms of text and links. Much of the work in the area uses either purely the text content or purely the linkage structure. However, many recent algorithms use a combination of linkage and content information for mining purposes. In many cases, it turns out that the use of a combination of linkage and content information provides much more effective results than a system which is based purely on either of the two. This paper provides a survey of such algorithms, and the advantages observed by using such algorithms in different scenarios. We also present avenues for future research in this area.",
"title": ""
},
{
"docid": "6e837f73398e1f2da537b31d5a696ec6",
"text": "With the development of high computational devices, deep neural networks (DNNs), in recent years, have gained significant popularity in many Artificial Intelligence (AI) applications. However, previous efforts have shown that DNNs were vulnerable to strategically modified samples, named adversarial examples. These samples are generated with some imperceptible perturbations, but can fool the DNNs to give false predictions. Inspired by the popularity of generating adversarial examples for image DNNs, research efforts on attacking DNNs for textual applications emerges in recent years. However, existing perturbation methods for images cannot be directly applied to texts as text data is discrete. In this article, we review research works that address this difference and generate textual adversarial examples on DNNs. We collect, select, summarize, discuss and analyze these works in a comprehensive way and cover all the related information to make the article self-contained. Finally, drawing on the reviewed literature, we provide further discussions and suggestions on this topic.",
"title": ""
},
{
"docid": "d934b2894d065e43d75343f40e582e7c",
"text": "BACKGROUND\nDual-antiplatelet therapy with aspirin and a thienopyridine is a cornerstone of treatment to prevent thrombotic complications of acute coronary syndromes and percutaneous coronary intervention.\n\n\nMETHODS\nTo compare prasugrel, a new thienopyridine, with clopidogrel, we randomly assigned 13,608 patients with moderate-to-high-risk acute coronary syndromes with scheduled percutaneous coronary intervention to receive prasugrel (a 60-mg loading dose and a 10-mg daily maintenance dose) or clopidogrel (a 300-mg loading dose and a 75-mg daily maintenance dose), for 6 to 15 months. The primary efficacy end point was death from cardiovascular causes, nonfatal myocardial infarction, or nonfatal stroke. The key safety end point was major bleeding.\n\n\nRESULTS\nThe primary efficacy end point occurred in 12.1% of patients receiving clopidogrel and 9.9% of patients receiving prasugrel (hazard ratio for prasugrel vs. clopidogrel, 0.81; 95% confidence interval [CI], 0.73 to 0.90; P<0.001). We also found significant reductions in the prasugrel group in the rates of myocardial infarction (9.7% for clopidogrel vs. 7.4% for prasugrel; P<0.001), urgent target-vessel revascularization (3.7% vs. 2.5%; P<0.001), and stent thrombosis (2.4% vs. 1.1%; P<0.001). Major bleeding was observed in 2.4% of patients receiving prasugrel and in 1.8% of patients receiving clopidogrel (hazard ratio, 1.32; 95% CI, 1.03 to 1.68; P=0.03). Also greater in the prasugrel group was the rate of life-threatening bleeding (1.4% vs. 0.9%; P=0.01), including nonfatal bleeding (1.1% vs. 0.9%; hazard ratio, 1.25; P=0.23) and fatal bleeding (0.4% vs. 0.1%; P=0.002).\n\n\nCONCLUSIONS\nIn patients with acute coronary syndromes with scheduled percutaneous coronary intervention, prasugrel therapy was associated with significantly reduced rates of ischemic events, including stent thrombosis, but with an increased risk of major bleeding, including fatal bleeding. Overall mortality did not differ significantly between treatment groups. (ClinicalTrials.gov number, NCT00097591 [ClinicalTrials.gov].)",
"title": ""
},
{
"docid": "b34bc241b9bc6260bff92d66715d5651",
"text": "Recently, cross-modal search has attracted considerable attention but remains a very challenging task because of the integration complexity and heterogeneity of the multi-modal data. To address both challenges, in this paper, we propose a novel method termed hetero-manifold regularisation (HMR) to supervise the learning of hash functions for efficient cross-modal search. A hetero-manifold integrates multiple sub-manifolds defined by homogeneous data with the help of cross-modal supervision information. Taking advantages of the hetero-manifold, the similarity between each pair of heterogeneous data could be naturally measured by three order random walks on this hetero-manifold. Furthermore, a novel cumulative distance inequality defined on the hetero-manifold is introduced to avoid the computational difficulty induced by the discreteness of hash codes. By using the inequality, cross-modal hashing is transformed into a problem of hetero-manifold regularised support vector learning. Therefore, the performance of cross-modal search can be significantly improved by seamlessly combining the integrated information of the hetero-manifold and the strong generalisation of the support vector machine. Comprehensive experiments show that the proposed HMR achieve advantageous results over the state-of-the-art methods in several challenging cross-modal tasks.",
"title": ""
}
] |
scidocsrr
|
fc0442dd3ca87c3139a0b3a2baf4daea
|
Gradient Diversity: a Key Ingredient for Scalable Distributed Learning
|
[
{
"docid": "77655e3ed587676df9284c78eb36a438",
"text": "We show that parametric models trained by a stochastic gradient method (SGM) with few iterations have vanishing generalization error. We prove our results by arguing that SGM is algorithmically stable in the sense of Bousquet and Elisseeff. Our analysis only employs elementary tools from convex and continuous optimization. We derive stability bounds for both convex and non-convex optimization under standard Lipschitz and smoothness assumptions. Applying our results to the convex case, we provide new insights for why multiple epochs of stochastic gradient methods generalize well in practice. In the non-convex case, we give a new interpretation of common practices in neural networks, and formally show that popular techniques for training large deep models are indeed stability-promoting. Our findings conceptually underscore the importance of reducing training time beyond its obvious benefit.",
"title": ""
}
] |
[
{
"docid": "380838601f3233b01a40b5e0314b507e",
"text": "Effective organizational beaming requires high absorptive capacity, which has two major elements: prior knowledge base and intensity of effort. Hyundai Motor Company, the most dynamic automobile producer in developing countries, pursued a strategy of independence in developing absorptive capacity. In its process of advancing from one phase to the next through the preparation for and acquisition, assimilation, and improvement of foreign technologies, Hyundai acquired migratory knowledge to expand its prior knowledge base and proactively constructed crises as a strategic means of intensifying its beaming effort. Unlike externally evoked crises, proactively constructed internal crises present a clew performance gap, shift beaming orientation from imitation to innovation, and increase the intensity of effort in organizational learning. Such crisis construction is an evocative and galvanizing device in the personal repertoires of proactive top managers. A similar process of opportunistic learning is also evident in other industries in Korea. (Organizational Learning; Absorptive Capacity; Crisis Construction; Knowledge; Catching-up; Hyundai Motor; Korea) Organizational learning and innovation have become crucially important subjects in management. Research on these subjects, however, is concentrated mainly in advanced countries (e.g., Argyris and Schon 1978, Dodgson 1993, Nonaka and Takeuchi 1995, Utterback 1994, von Hippel 1988). Despite the fact that many developing countries have made significant progress in industrial, educational, and technological development, research on learning, capability building, and innovation in those countries is scanty (e.g., Fransman and King 1984, Kim 1997, Kim and Kirn 1985). Models that capture organizational learning and technological change in developing countries are essential to understand the dynamic process of capability building in catching-up in such countries and to extend the theories developed in advanced countries. Understanding the catching-up process is also relevant and important to firins in advanced countries. Not all firrns can be pioneers of novel breakdiroughs, even in those countries. Most firms must invest in second-hand learning to remain competitive. Nevertheless, much less attention is paid to the imitative catching-up process than to the innovative pioneering process. For instance, ABI/Inform, a computerized business database, lists a total of 9,006 articles on the subject of innovation but only 145 on imitation (Schnaars 1994). A crisis is usually regarded as an unpopular, largely negative phenomenon in management. It can, however, be an appropriate metaphor for strategic and technological transformation. Several observers postulate that constructing and then resolving organizational crises can be an effective means of opportunistic learning (e.g., Nonaka 1988, Pitt 1990, Schon 1967, Weick 1988), but no one has clearly linked the construct variable to corresponding empirical evidence. The purpose of this article is to develop a model of organizational beaming in an imitative catching-up process, and at the same time a model of crisis construction and organizational learning, by empirically analysing the history of technological transformation at the Hyundai Motor Company (hereinafter Hyundai), the most dynamic automaker in developing countries, as a case in point. 
Despite the prediction that none of South Korea's automakers will survive the global shakeout of the 1990s, having been driven out or relegated to niche markets dependent on alliances with leading foreign car producers (Far Eastern Economic Review 1992), Hyundai is determined to become a leading automaker on its own. Unlike most other automobile companies in developing countries, Hyundai followed an explicit policy of maintaining full ownership of all of its 45 subsidiaries, entering the auto industry in 1967 as a latecomer without foreign equity participation. Hyundai has progressed remarkably since then. In quantitative terms, Hyundai increased its production more than tenfold every decade, from 614 cars in 1968, to 7,009 in 1973, to 103,888 in 1983, and to 1,134,611 in 1994, rapidly surpassing other automakers in Korea, and steadily ascending from being the sixteenth-largest producer in the world in 1991 to being the thirteenth largest in 1994. Hyundai is now the largest automobile producer in a developing country. It produced its one millionth car in January 1986, taking 18 years to reach that level of production in contrast to 29 years for Toyota and 43 years for Mazda (Hyun and Lee 1989). In qualitative terms, Hyundai began assembling a Ford compact car on a knockdown basis in 1967. It rapidly assimilated foreign technology and developed sufficient capability to unveil its own designs, Accent and Avante, in 1994 and 1995, respectively. The company thus eliminated the royalty payment on the foreign license and was able to export production and design technology abroad. Hyundai's rapid surge raises several research questions: (1) How did Hyundai acquire the technological capability to transform itself so expeditiously from imitative \"learning by doing\" to innovative \"learning by research\"? (2) How does learning in the catching-up process in a developing country differ from learning in the pioneering process in advanced countries? (3) Why is crisis construction an effective mechanism for organizational learning? (4) Can Hyundai's learning model be emulated by other catching-up firms? (5) What are the implications of Hyundai's model for future research? The following section briefly reviews theories related to organizational learning and knowledge creation. Then Hyundai is analyzed as a case in point to illustrate how the Korean firm has expedited organizational learning and to answer the research questions. Crises and Organizational Learning Organizational learning, whether to imitate or to innovate, takes place at two levels: the individual and organizational. The prime actors in the process of organizational learning are individuals within the firm. Organizational learning is not, however, the simple sum of individual learning (Hedberg 1981); rather, it is the process whereby knowledge is created, is distributed across the organization, is communicated among organization members, has consensual validity, and is integrated into the strategy and management of the organization (Duncan and Weiss 1978). Individual learning is therefore an indispensable condition for organizational learning but cannot be the sufficient condition. Organizations learn only when individual insights and skills become embodied in organizational routines, practices, and beliefs (Attewell 1992). Only effective organizations can translate individual learning into organizational learning (Hedberg 1980, Kim 1993, Shrivastava 1983). 
Absorptive Capacity Organizational learning is a function of an organization's absorptive capacity. Absorptive capacity requires learning capability and develops problem-solving skills. Learning capability is the capacity to assimilate knowledge (for innovation), whereas problem-solving skills represent a capacity to create new knowledge (for innovation). Absorptive capacity has two important elements, prior knowledge base and intensity of effort (Cohen and Levinthal 1990). Prior knowledge base consists of individual units of knowledge available within the organization. Accumulated prior knowledge increases the ability to make sense of and to assimilate and use new knowledge. Relevant prior knowledge base comprises basic skills and general knowledge in the case of developing countries, but includes the most recent scientific and technological knowledge in the case of industrially advanced countries. Hence, prior knowledge base should be assessed in relation to task difficulty (Kim 1995). Intensity of effort represents the amount of energy expended by organizational members to solve problems. Exposure of a firm to relevant external knowledge is insufficient unless an effort is made to internalize it. Learning how to solve problems is usually accomplished through many practice trials involving related problems (Harlow 1959). Hence, considerable time and effort must be directed to learning how to solve problems before complex problems can be addressed. Such effort intensifies interaction among organizational members, thus facilitating knowledge conversion and creation at the organizational level. As shown in Figure 1, prior knowledge base and intensity of effort in the organization constitute a 2 X 2 matrix that indicates the level of absorptive capacity. When both are high (quadrant 1), absorptive capacity is high; when both are low (quadrant 4), absorptive capacity is low. Organizations with high prior knowledge in relation to task difficulty and low intensity of effort (quadrant 2) will gradually lose their absorptive capacity, moving rapidly down to quadrant 4, because their prior knowledge base will become obsolete as task-related technology moves along its trajectory. In contrast, organizations with low prior knowledge in relation to task difficulty and high intensity of effort (quadrant 3) will be able to acquire absorptive capacity, moving progressively to quadrant 1, as repeated efforts to learn and solve problems elevate the level of relevant prior knowledge (Kim 1995). Knowledge and Learning Many social scientists have attempted to delineate knowledge dimensions (Garud and Nayyar 1994, Kogut and Zander 1992, Polanyi 1966, Rogers 1983, Winter 1987). Polanyi's two dimensions, explicit and tacit, are the most widely accepted. Explicit knowledge is knowledge that is codified and transmittable in formal, systematic language. It therefore can be acquired in the form of books, technical specifications, and designs, or as embodied in machines. Tacit knowledge, in contrast, is so deeply rooted in the human mind and body that it is difficult to codify and communicate and can be expressed only th",
"title": ""
},
{
"docid": "6dc4e4949d4f37f884a23ac397624922",
"text": "Research indicates that maladaptive patterns of Internet use constitute behavioral addiction. This article explores the research on the social effects of Internet addiction. There are four major sections. The Introduction section overviews the field and introduces definitions, terminology, and assessments. The second section reviews research findings and focuses on several key factors related to Internet addiction, including Internet use and time, identifiable problems, gender differences, psychosocial variables, and computer attitudes. The third section considers the addictive potential of the Internet in terms of the Internet, its users, and the interaction of the two. The fourth section addresses current and projected treatments of Internet addiction, suggests future research agendas, and provides implications for educational psychologists.",
"title": ""
},
{
"docid": "6c992cd88e3531abc63b835a2a0fd67f",
"text": "Bitcoin introduces a revolutionary decentralized consensus mechanism. However, Bitcoin-derived consensus mechanisms applied to public blockchain are inadequate for the deployment scenarios of budding consortium blockchain. We propose a new consensus algorithm, Proof of Vote (POV). The consensus is coordinated by the distributed nodes controlled by consortium partners which will come to a decentralized arbitration by voting. The key idea is to establish different security identity for network participants, so that the submission and verification of the blocks are decided by the agencies' voting in the league without the depending on a third-party intermediary or uncontrollable public awareness. Compared with the fully decentralized consensus-Proof of Work (POW), POV has controllable security, convergence reliability, only one block confirmation to achieve the transaction finality, and low-delay transaction verification time.",
"title": ""
},
{
"docid": "627587e2503a2555846efb5f0bca833b",
"text": "Image generation has been successfully cast as an autoregressive sequence generation or transformation problem. Recent work has shown that self-attention is an effective way of modeling textual sequences. In this work, we generalize a recently proposed model architecture based on self-attention, the Transformer, to a sequence modeling formulation of image generation with a tractable likelihood. By restricting the selfattention mechanism to attend to local neighborhoods we significantly increase the size of images the model can process in practice, despite maintaining significantly larger receptive fields per layer than typical convolutional neural networks. While conceptually simple, our generative models significantly outperform the current state of the art in image generation on ImageNet, improving the best published negative log-likelihood on ImageNet from 3.83 to 3.77. We also present results on image super-resolution with a large magnification ratio, applying an encoder-decoder configuration of our architecture. In a human evaluation study, we find that images generated by our super-resolution model fool human observers three times more often than the previous state of the art.",
"title": ""
},
{
"docid": "7161122eaa9c9766e9914ba0f2ee66ef",
"text": "Cross-linguistically consistent annotation is necessary for sound comparative evaluation and cross-lingual learning experiments. It is also useful for multilingual system development and comparative linguistic studies. Universal Dependencies is an open community effort to create cross-linguistically consistent treebank annotation for many languages within a dependency-based lexicalist framework. In this paper, we describe v1 of the universal guidelines, the underlying design principles, and the currently available treebanks for 33 languages.",
"title": ""
},
{
"docid": "a55b44543510713a7fdc4f7cb8c123b2",
"text": "The mechanisms that allow cancer cells to adapt to the typical tumor microenvironment of low oxygen and glucose and high lactate are not well understood. GPR81 is a lactate receptor recently identified in adipose and muscle cells that has not been investigated in cancer. In the current study, we examined GPR81 expression and function in cancer cells. We found that GPR81 was present in colon, breast, lung, hepatocellular, salivary gland, cervical, and pancreatic carcinoma cell lines. Examination of tumors resected from patients with pancreatic cancer indicated that 94% (148 of 158) expressed high levels of GPR81. Functionally, we observed that the reduction of GPR81 levels using shRNA-mediated silencing had little effect on pancreatic cancer cells cultured in high glucose, but led to the rapid death of cancer cells cultured in conditions of low glucose supplemented with lactate. We also observed that lactate addition to culture media induced the expression of genes involved in lactate metabolism, including monocarboxylase transporters in control, but not in GPR81-silenced cells. In vivo, GPR81 expression levels correlated with the rate of pancreatic cancer tumor growth and metastasis. Cells in which GPR81 was silenced showed a dramatic decrease in growth and metastasis. Implantation of cancer cells in vivo was also observed to lead to greatly elevated levels of GPR81. These data support that GPR81 is important for cancer cell regulation of lactate transport mechanisms. Furthermore, lactate transport is important for the survival of cancer cells in the tumor microenvironment. Cancer Res; 74(18); 5301-10. ©2014 AACR.",
"title": ""
},
{
"docid": "5177263bc8a9b2cc7a5e9848b63e1b17",
"text": "This research introduces an efficient method to obtain three novel frequency selective surface (FSS) slots. The method is based on the modification of Jerusalem cross (JC) slot which is called modified Jerusalem cross (MJC). The modifications of JC slot consist of the element type and geometry, the substrate and superstrate parameters, and inter-element spacing; however the FSSs have same dimension of unit cells and periodicity(Ltlambda). The array of slots which acts as a bandpass filter, were positioned normal to incidence plane wave. The array of metallic patches constitutes a capacitive surface and the inductive surface, which together act as a resonant structure in the path of an incident plane wave. The Si- parameters, resonance frequency, bandwidth and null between two adjacent resonance frequencies for each structure have been compared with the JC FSS. Three type of MJC have been designed and compared with JC slot. The result shows MJC1 has higher resonance frequency and higher bandwidth, MJC2 in the first resonance frequency works at higher resonance frequency and higher bandwidth but in second resonance frequency the bandwidth is lower and resonance frequency is higher than JC, and MJC3 has approximately same resonance frequency and higher band. This work provides new viewpoints to design novel structures for special purposes.",
"title": ""
},
{
"docid": "2937b605179b3a0f7657f7ddf5dbcf1a",
"text": "This article presents a survey on crowd analysis using computer vision techniques, covering different aspects such as people tracking, crowd density estimation, event detection, validation, and simulation. It also reports how related the areas of computer vision and computer graphics should be to deal with current challenges in crowd analysis.",
"title": ""
},
{
"docid": "54e2dfd355e9e082d9a6f8c266c84360",
"text": "The wealth and value of organizations are increasingly based on intellectual capital. Although acquiring talented individuals and investing in employee learning adds value to the organization, reaping the benefits of intellectual capital involves translating the wisdom of employees into reusable and sustained actions. This requires a culture that creates employee commitment, encourages learning, fosters sharing, and involves employees in decision making. An infrastructure to recognize and embed promising and best practices through social networks, evidence-based practice, customization of innovations, and use of information technology results in increased productivity, stronger financial performance, better patient outcomes, and greater employee and customer satisfaction.",
"title": ""
},
{
"docid": "c871bbc85056be37de29093fb6089544",
"text": "Today’s global economy requires increased attention to the issue of business competitiveness. Business information system or Artificial Intelligence and expert system raise the competitiveness of enterprises in the global market. Business intelligence as the basis for the development and application in business information is becoming an important information technology framework that can help organization to manage, develop and communicate their intangible assets, such as information and knowledge based economy. Competency Based Management (CBM) has become vital to any firm’s strategic position and organizational decision making. The role of Artificial intelligence & Expert System (ES) is to provide a knowledge based information system that is expected to have human attributes in order to replicate human capacity in ethical decision making. In this paper a holistic framework is proposed to review the ES approach that will be practically feasible for organizational settings. It will also provide executives and scholars with pragmatic understanding about integrating knowledge management strategy and technologies in business processes for successful performance. It is based on the psychological conceptions of human competence and performance in the workplace. Using an ES approach to Educational Institute with CBM will be able to more effectively use their limited resources to reap the more benefits from their investments in both people and technology. The idea of robotic domestic workers is still farfetched but companies are making progress even here. There is already a Robot Vacuum Cleaner marketed by Electrolux and doubtless improved systems with better functionality.",
"title": ""
},
{
"docid": "0141a40bd055b8e510b6794d6efb005c",
"text": "Individualism appears to have increased over the past several decades, yet most research documenting this shift has been limited to the study of a handful of highly developed countries. Is the world becoming more individualist as a whole? If so, why? To answer these questions, we examined 51 years of data on individualist practices and values across 78 countries. Our findings suggest that individualism is indeed rising in most of the societies we tested. Despite dramatic shifts toward greater individualism around the world, however, cultural differences remain sizable. Moreover, cultural differences are primarily linked to changes in socioeconomic development, and to a lesser extent to shifts in pathogen prevalence and disaster frequency.",
"title": ""
},
{
"docid": "68edeb881fc35f3e3065f6980d57a492",
"text": "Object tracking is one of the major fundamental challenging problems in computer vision applications due to difficulties in tracking of objects can arises due to intrinsic and extrinsic factors like deformation, camera motion, motion blur and occlusion. This paper proposes a literature review on several state -- of -- the-art object detection and tracking algorithms in order to reduce the tracking drift.",
"title": ""
},
{
"docid": "5589cff623a84d33f4bda4b17cd2105b",
"text": "In the past two decades, a great deal of information on the role of endophytic microorganisms in nature has been collected. The capability of colonizing internal host tissues has made endophytes valuable for agriculture as a tool to improve crop performance. In this review, we addressed the major topics concerning the control of insects-pests by endophytic microorganisms. Several examples of insect control are described, notably those involving the interactions between fungi and grazing grasses from temperate countries. The mechanisms by which endophytic fungi control insect attacks are listed and include toxin production as well as the influence of these compounds on plant and livestock and how their production may be affected by genetic and environmental conditions. The importance of endophytic entomopathogenic fungi for insect control is also addressed. As the literature has shown, there is a lack of information on endophytes from tropical hosts, which are more severely affected by pests and diseases. Having this in mind, we have included an updated and extensive literature in this review, concerning new findings from tropical plants, including the characterization of endophytic fungi and bacteria microbiota from several Amazon trees, citrus and",
"title": ""
},
{
"docid": "c5e553148657a26e87f1d20c90b40a1e",
"text": "Literature citation analysis plays a very important role in bibliometrics and scientometrics, such as the Science Citation Index (SCI ) impact factor, h-index. Existing citation analysis methods assume that all citations in a paper are equally important, and they simply count the number of citations. Here we argue that the citations in a paper are not equally important and some citations are more important than the others. We use a strength value to assess the importance of each citation and propose to use the regression method with a few useful features for automatically estimating the strength value of each citation. Evaluation results on a manually labeled data set in the computer science field show that the estimated values can achieve good correlation with human-labeled values. We further apply the estimated citation strength values for evaluating paper influence and author influence, and the preliminary evaluation results demonstrate the usefulness of the citation strength values.",
"title": ""
},
{
"docid": "48ba8ea879ba854e5b38ab187602721e",
"text": "With the advent of video-on-demand services and digital video recorders, the way in which we consume media is undergoing a fundamental change. People today are less likely to watch shows at the same time, let alone the same place. As a result, television viewing, which was once a social activity, has been reduced to a passive and isolated experience. To study this issue, we developed a system called CollaboraTV and demonstrated its ability to support the communal viewing experience through a month-long field study. Our study shows that users understand and appreciate the utility of asynchronous interaction, are enthusiastic about CollaboraTV's engaging social communication primitives and value implicit show recommendations from friends. Our results both provide a compelling demonstration of a social television system and raise new challenges for social television communication modalities.",
"title": ""
},
{
"docid": "7a5edda3bc5b271b6c1305c6a13d50eb",
"text": "Feature-sensitive verification pursues effective analysis of the exponentially many variants of a program family. However, researchers lack examples of concrete bugs induced by variability, occurring in real large-scale systems. Such a collection of bugs is a requirement for goal-oriented research, serving to evaluate tool implementations of feature-sensitive analyses by testing them on real bugs. We present a qualitative study of 42 variability bugs collected from bug-fixing commits to the Linux kernel repository. We analyze each of the bugs, and record the results in a database. In addition, we provide self-contained simplified C99 versions of the bugs, facilitating understanding and tool evaluation. Our study provides insights into the nature and occurrence of variability bugs in a large C software system, and shows in what ways variability affects and increases the complexity of software bugs.",
"title": ""
},
{
"docid": "ca0ede1b7a0f81e3f17f2bb8804b2eeb",
"text": "WiFi in indoor environments exhibits spatio-temporal variations in terms of coverage and interference in typical WLAN deployments with multiple APs, motivating the need for automated monitoring to aid network administrators to adapt the WLAN deployment in order to match the user expectations. We develop Pazl, a mobile crowdsensing based indoor WiFi monitoring system that is enabled by a novel hybrid localization mechanism to locate individual measurements taken from participant phones. The localization mechanism in Pazl integrates the best aspects of two well known localization techniques, pedestrian dead reckoning and WiFi fingerprinting; it also relies on crowdsourcing for constructing the WiFi fingerprint database. Compared to existing WiFi monitoring systems based on static sniffers, Pazl is low cost and provides a user-side perspective. Pazl is significantly more automated than wireless site survey tools such as Ekahau Mobile Survey tool by drastically reducing the manual point-and-click based measurement location determination. We implement Pazl through a combination of Android mobile app and cloud backend application on the Google App Engine. Experimental evaluation of Pazl with a trial set of users shows that it yields similar results to manual site surveys but without the tedium.",
"title": ""
},
{
"docid": "8328b1dd52bcc081548a534dc40167a3",
"text": "This work aims to address the problem of imagebased question-answering (QA) with new models and datasets. In our work, we propose to use neural networks and visual semantic embeddings, without intermediate stages such as object detection and image segmentation, to predict answers to simple questions about images. Our model performs 1.8 times better than the only published results on an existing image QA dataset. We also present a question generation algorithm that converts image descriptions, which are widely available, into QA form. We used this algorithm to produce an order-of-magnitude larger dataset, with more evenly distributed answers. A suite of baseline results on this new dataset are also presented.",
"title": ""
},
{
"docid": "143111f5fe59b99279d71cf70c588fe2",
"text": "In neural architecture search (NAS), the space of neural network architectures is automatically explored to maximize predictive accuracy for a given task. Despite the success of recent approaches, most existing methods cannot be directly applied to large scale problems because of their prohibitive computational complexity or high memory usage. In this work, we propose a Probabilistic approach to neural ARchitecture SEarCh (PARSEC) that drastically reduces memory requirements while maintaining state-of-the-art computational complexity, making it possible to directly search over more complex architectures and larger datasets. Our approach only requires as much memory as is needed to train a single architecture from our search space. This is due to a memory-efficient sampling procedure wherein we learn a probability distribution over high-performing neural network architectures. Importantly, this framework enables us to transfer the distribution of architectures learnt on smaller problems to larger ones, further reducing the computational cost. We showcase the advantages of our approach in applications to CIFAR-10 and ImageNet, where our approach outperforms methods with double its computational cost and matches the performance of methods with costs that are three orders of magnitude larger.",
"title": ""
}
] |
scidocsrr
|
c93a476afc35a3cc919bc06906b0d5cc
|
Semantic Complex Event Processing for Social Media Monitoring-A Survey
|
[
{
"docid": "2c2be931e456761824920fcc9e4666ec",
"text": "The resource description framework (RDF) is a metadata model and language recommended by the W3C. This paper presents a framework to incorporate temporal reasoning into RDF, yielding temporal RDF graphs. We present a semantics for these kinds of graphs which includes the notion of temporal entailment and a syntax to incorporate this framework into standard RDF graphs, using the RDF vocabulary plus temporal labels. We give a characterization of temporal entailment in terms of RDF entailment and show that the former does not yield extra asymptotic complexity with respect to nontemporal RDF graphs. We also discuss temporal RDF graphs with anonymous timestamps, providing a theoretical framework for the study of temporal anonymity. Finally, we sketch a temporal query language for RDF, along with complexity results for query evaluation that show that the time dimension preserves the tractability of answers",
"title": ""
}
] |
[
{
"docid": "3b442860310e3617184f9ccc89e5cddc",
"text": "A pneumatic muscle (PM) system was studied to determine whether a three-element model could describe its dynamics. As far as the authors are aware, this model has not been used to describe the dynamics of PM. A new phenomenological model consists of a contractile (force-generating) element, spring element, and damping element in parallel. The PM system was investigated using an apparatus that allowed precise and accurate actuation pressure (P) control by a linear servovalve. Length change of the PM was measured by a linear potentiometer. Spring and damping element functions of P were determined by a static perturbation method at several constant P values. These results indicate that at constant P, PM behaves as a spring and damper in parallel. The contractile element function of P was determined by the response to a step input in P, using values of spring and damping elements from the perturbation study. The study showed that the resulting coefficient functions of the three-element model describe the dynamic response to the step input of P accurately, indicating that the static perturbation results can be applied to the dynamic case. This model is further validated by accurately predicting the contraction response to a triangular P waveform. All three elements have pressure-dependent coefficients for pressure P in the range 207 ⩽ P⩽ 621 kPa (30⩽ P⩽ 90 psi). Studies with a step decrease in P (relaxation of the PM) indicate that the damping element coefficient is smaller during relaxation than contraction.© 2003 Biomedical Engineering Society. PAC2003: 8719Rr, 8719Ff, 8710+e, 8768+z",
"title": ""
},
{
"docid": "06605d7a6538346f3bb0771fd3c92c12",
"text": "Measurements show that the IGBT is able to clamp the collector-emitter voltage to a certain value at short-circuit turn-off despite a very low gate turn-off resistor in combination with a high parasitic inductance is applied. The IGBT itself reduces the turn-off diC/dt by avalanche injection. However, device destructions during fast turn-off were observed which cannot be linked with an overvoltage failure mode. Measurements and semiconductor simulations of high-voltage IGBTs explain the self-clamping mechanism in detail. Possible failures which can be connected with filamentation processes are described. Options for improving the IGBT robustness during short-circuit turn-off are discussed.",
"title": ""
},
{
"docid": "2caf31154811099e68644c3e3e7e1792",
"text": "In this paper, we study the effective semi-supervised hashing method under the framework of regularized learning-based hashing. A nonlinear hash function is introduced to capture the underlying relationship among data points. Thus, the dimensionality of the matrix for computation is not only independent from the dimensionality of the original data space but also much smaller than the one using linear hash function. To effectively deal with the error accumulated during converting the real-value embeddings into the binary code after relaxation, we propose a semi-supervised nonlinear hashing algorithm using bootstrap sequential projection learning which effectively corrects the errors by taking into account of all the previous learned bits holistically without incurring the extra computational overhead. Experimental results on the six benchmark data sets demonstrate that the presented method outperforms the state-of-the-art hashing algorithms at a large margin.",
"title": ""
},
{
"docid": "f5be73d82f441b5f0d6011bbbec8b759",
"text": "Abnormal crowd behavior detection is an important research issue in computer vision. The traditional methods first extract the local spatio-temporal cuboid from video. Then the cuboid is described by optical flow or gradient features, etc. Unfortunately, because of the complex environmental conditions, such as severe occlusion, over-crowding, etc., the existing algorithms cannot be efficiently applied. In this paper, we derive the high-frequency and spatio-temporal (HFST) features to detect the abnormal crowd behaviors in videos. They are obtained by applying the wavelet transform to the plane in the cuboid which is parallel to the time direction. The high-frequency information characterize the dynamic properties of the cuboid. The HFST features are applied to the both global and local abnormal crowd behavior detection. For the global abnormal crowd behavior detection, Latent Dirichlet allocation is used to model the normal scenes. For the local abnormal crowd behavior detection, Multiple Hidden Markov Models, with an competitive mechanism, is employed to model the normal scenes. The comprehensive experiment results show that the speed of detection has been greatly improved using our approach. Moreover, a good accuracy has been achieved considering the false positive and false negative detection rates.",
"title": ""
},
{
"docid": "596ef2efc6d35ba2d507d630945ed3d1",
"text": "The paper presents a high performance system for stepper motor control in a microstepping mode, which was designed and performed with a L292 specialized integrated circuits, made by SGS-THOMSON, Microelectronics Company. The microstepping control system improves the positioning accuracy and eliminates low speed ripple and resonance effects in a stepper motor electrical drive.",
"title": ""
},
{
"docid": "0d00fb427296aff5aa31c88852635ee5",
"text": "OBJECTIVE\nTo examine the relation between milk and calcium intake in midlife and the risk of Parkinson disease (PD).\n\n\nMETHODS\nFindings are based on dietary intake observed from 1965 to 1968 in 7,504 men ages 45 to 68 in the Honolulu Heart Program. Men were followed for 30 years for incident PD.\n\n\nRESULTS\nIn the course of follow-up, 128 developed PD (7.1/10,000 person-years). Age-adjusted incidence of PD increased with milk intake from 6.9/10,000 person-years in men who consumed no milk to 14.9/10,000 person-years in men who consumed >16 oz/day (p = 0.017). After further adjustment for dietary and other factors, there was a 2.3-fold excess of PD (95% CI 1.3 to 4.1) in the highest intake group (>16 oz/day) vs those who consumed no milk. The effect of milk consumption on PD was also independent of the intake of calcium. Calcium from dairy and nondairy sources had no apparent relation with the risk of PD.\n\n\nCONCLUSIONS\nFindings suggest that milk intake is associated with an increased risk of Parkinson disease. Whether observed effects are mediated through nutrients other than calcium or through neurotoxic contaminants warrants further study.",
"title": ""
},
{
"docid": "fceb43462f77cf858ef9747c1c5f0728",
"text": "MapReduce has become a dominant parallel computing paradigm for big data, i.e., colossal datasets at the scale of tera-bytes or higher. Ideally, a MapReduce system should achieve a high degree of load balancing among the participating machines, and minimize the space usage, CPU and I/O time, and network transfer at each machine. Although these principles have guided the development of MapReduce algorithms, limited emphasis has been placed on enforcing serious constraints on the aforementioned metrics simultaneously. This paper presents the notion of minimal algorithm, that is, an algorithm that guarantees the best parallelization in multiple aspects at the same time, up to a small constant factor. We show the existence of elegant minimal algorithms for a set of fundamental database problems, and demonstrate their excellent performance with extensive experiments.",
"title": ""
},
{
"docid": "a25adeae7e1cdc9260c7d059f9fa5f82",
"text": "This work presents a generic computer vision system designed for exploiting trained deep Convolutional Neural Networks (CNN) as a generic feature extractor and mixing these features with more traditional hand-crafted features. Such a system is a single structure that can be used for synthesizing a large number of different image classification tasks. Three substructures are proposed for creating the generic computer vision system starting from handcrafted and non-handcrafter features: i) one that remaps the output layer of a trained CNN to classify a different problem using an SVM; ii) a second for exploiting the output of the penultimate layer of a trained CNN as a feature vector to feed an SVM; and iii) a third for merging the output of some deep layers, applying a dimensionality reduction method, and using these features as the input to an SVM. The application of feature transform techniques to reduce the dimensionality of feature sets coming from the deep layers represents one of the main contributions of this paper. Three approaches are used for the non-handcrafted features: deep",
"title": ""
},
{
"docid": "397d6f645f5607140cf7d16597b8ec83",
"text": "OBJECTIVES\nTo determine if differences between dyslexic and typical readers in their reading scores and verbal IQ are evident as early as first grade and whether the trajectory of these differences increases or decreases from childhood to adolescence.\n\n\nSTUDY DESIGN\nThe subjects were the 414 participants comprising the Connecticut Longitudinal Study, a sample survey cohort, assessed yearly from 1st to 12th grade on measures of reading and IQ. Statistical analysis employed longitudinal models based on growth curves and multiple groups.\n\n\nRESULTS\nAs early as first grade, compared with typical readers, dyslexic readers had lower reading scores and verbal IQ, and their trajectories over time never converge with those of typical readers. These data demonstrate that such differences are not so much a function of increasing disparities over time but instead because of differences already present in first grade between typical and dyslexic readers.\n\n\nCONCLUSIONS\nThe achievement gap between typical and dyslexic readers is evident as early as first grade, and this gap persists into adolescence. These findings provide strong evidence and impetus for early identification of and intervention for young children at risk for dyslexia. Implementing effective reading programs as early as kindergarten or even preschool offers the potential to close the achievement gap.",
"title": ""
},
{
"docid": "560cadfecdf5207851d333b4a122a06d",
"text": "Over the past years, state-of-the-art information extraction (IE) systems such as NELL [5] and ReVerb [9] have achieved impressive results by producing very large knowledge resources at web scale with minimal supervision. However, these resources lack the schema information, exhibit a high degree of ambiguity, and are difficult even for humans to interpret. Working with such resources becomes easier if there is a structured information base to which the resources can be linked. In this paper, we introduce the integration of open information extraction projects with Wikipedia-based IE projects that maintain a logical schema, as an important challenge for the NLP, semantic web, and machine learning communities. We describe the problem, present a gold-standard benchmark, and take the first steps towards a data-driven solution to the problem. This is especially promising, since NELL and ReVerb typically achieve a very large coverage, but still still lack a fullfledged clean ontological structure which, on the other hand, could be provided by large-scale ontologies like DBpedia [2] or YAGO [13].",
"title": ""
},
{
"docid": "df808fcf51612bf81e8fd328d298291d",
"text": "Chemomechanical preparation of the root canal includes both mechanical instrumentation and antibacterial irrigation, and is principally directed toward the elimination of micro-organisms from the root canal system. A variety of instruments and techniques have been developed and described for this critical stage of root canal treatment. Since their introduction in 1988, nickel-titanium (NiTi) rotary instruments have become a mainstay in clinical endodontics because of their exceptional ability to shape root canals with potentially fewer procedural complications. Safe clinical usage of NiTi instruments requires an understanding of basic metallurgy of the alloy including fracture mechanisms and their correlation to canal anatomy. This paper reviews the biologic principles of preparing root canals with an emphasis on correct use of current rotary NiTi instrumentation techniques and systems. The role and properties of contemporary root canal irrigants is also discussed.",
"title": ""
},
{
"docid": "1e4cb8960a99ad69e54e8c44fb21e855",
"text": "Over the last decade, the endocannabinoid system has emerged as a pivotal mediator of acute and chronic liver injury, with the description of the role of CB1 and CB2 receptors and their endogenous lipidic ligands in various aspects of liver pathophysiology. A large number of studies have demonstrated that CB1 receptor antagonists represent an important therapeutic target, owing to beneficial effects on lipid metabolism and in light of its antifibrogenic properties. Unfortunately, the brain-penetrant CB1 antagonist rimonabant, initially approved for the management of overweight and related cardiometabolic risks, was withdrawn because of an alarming rate of mood adverse effects. However, the efficacy of peripherally-restricted CB1 antagonists with limited brain penetrance has now been validated in preclinical models of NAFLD, and beneficial effects on fibrosis and its complications are anticipated. CB2 receptor is currently considered as a promising anti-inflammatory and antifibrogenic target, although clinical development of CB2 agonists is still awaited. In this review, we highlight the latest advances on the impact of the endocannabinoid system on the key steps of chronic liver disease progression and discuss the therapeutic potential of molecules targeting cannabinoid receptors.",
"title": ""
},
{
"docid": "0a20a3c9e4da2b87a6fdc4e4a66fee2d",
"text": "In this paper, we propose a probabilistic survival model derived from the survival analysis theory for measuring aspect novelty. The retrieved documents' query-relevance and novelty are combined at the aspect level for re-ranking. Experiments conducted on the TREC 2006 and 2007 Genomics collections demonstrate the effectiveness of the proposed approach in promoting ranking diversity for biomedical information retrieval.",
"title": ""
},
{
"docid": "107bb53e3ceda3ee29fc348febe87f11",
"text": "The objective here is to develop a flat surface area measuring system which is used to calculate the surface area of any irregular sheet. The irregular leather sheet is used in this work. The system is self protected by user name and password set through software for security purpose. Only authorize user can enter into the system by entering the valid pin code. After entering into the system, the user can measure the area of any irregular sheet, monitor and control the system. The heart of the system is Programmable Logic Controller (Master K80S) which controls the complete working of the system. The controlling instructions for the system are given through the designed Human to Machine Interface (HMI). For communication purpose the GSM modem is also interfaced with the Programmable Logic Controller (PLC). The remote user can also monitor the current status of the devices by sending SMS message to the GSM modem.",
"title": ""
},
{
"docid": "ac5ba63b30562827a27607fd2b91f5d3",
"text": "Understanding unstructured texts is an essential skill for human beings as it enables knowledge acquisition. Although understanding unstructured texts is easy for we human beings with good education, it is a great challenge for machines. Recently, with the rapid development of artificial intelligence techniques, researchers put efforts to teach machines to understand texts and justify the educated machines by letting them solve the questions upon the given unstructured texts, inspired by the reading comprehension test as we humans do. However, feature effectiveness with respect to different questions significantly hinders the performance of answer selection, because different questions may focus on various aspects of the given text and answer candidates. To solve this problem, we propose a question-oriented feature attention (QFA) mechanism, which learns to weight different engineering features according to the given question, so that important features with respect to the specific question is emphasized accordingly. Experiments on MCTest dataset have well-validated the effectiveness of the proposed method. Additionally, the proposed QFA is applicable to various IR tasks, such as question answering and answer selection. We have verified the applicability on a crawled community-based question-answering dataset.",
"title": ""
},
{
"docid": "25e04f534f2a1d0d3d7e20c3c17ef387",
"text": "Recent techniques enable folding planer sheets to create complex 3D shapes, however, even a small 3D shape can have large 2D unfoldings. The huge dimension of the flattened structure makes fabrication difficult. In this paper, we propose a novel approach for folding a single thick strip into two target shapes: folded 3D shape and stacked shape. The folded shape is an approximation of a complex 3D shape provided by the user. The provided 3D shape may be too large to be fabricated (e.g. 3D-printed) due to limited workspace. Meanwhile, the stacked shape could be the compactest form of the 3D shape which makes its fabrication possible. The compactness of the stacked state also makes packing and transportation easier. The key technical contribution of this work is an efficient method for finding strips for quadrilateral meshes without refinement. We demonstrate our results using both simulation and fabricated models.",
"title": ""
},
{
"docid": "8c086dec1e59a2f0b81d6ce74e92eae7",
"text": "A necessary attribute of a mobile robot planning algorithm is the ability to accurately predict the consequences of robot actions to make informed decisions about where and how to drive. It is also important that such methods are efficient, as onboard computational resources are typically limited and fast planning rates are often required. In this article, we present several practical mobile robot motion planning algorithms for local and global search, developed with a common underlying trajectory generation framework for use in model-predictive control. These techniques all center on the idea of generating informed, feasible graphs at scales and resolutions that respect computational and temporal constraints of the application. Connectivity in these graphs is provided by a trajectory generator that searches in a parameterized space of robot inputs subject to an arbitrary predictive motion model. Local search graphs connect the currently observed state-to-states at or near the planning or perception horizon. Global search graphs repeatedly expand a precomputed trajectory library in a uniformly distributed state lattice to form a recombinant search space that respects differential constraints. In this article, we discuss the trajectory generation algorithm, methods for online or offline calibration of predictive motion models, sampling strategies for local search graphs that exploit global guidance and environmental information for real-time obstacle avoidance and navigation, and methods for efficient design of global search graphs with attention to optimality, feasibility, and computational complexity of heuristic search. The model-invariant nature of our approach to local and global motions planning has enabled a rapid and successful application of these techniques to a variety of platforms. Throughout the article, we also review experiments performed on planetary rovers, field robots, mobile manipulators, and autonomous automobiles and discuss future directions of the article.",
"title": ""
},
{
"docid": "9accdf3edad1e9714282e58758d3c382",
"text": "We present initial results from and quantitative analysis of two leading open source hypervisors, Xen and KVM. This study focuses on the overall performance, performance isolation, and scalability of virtual machines running on these hypervisors. Our comparison was carried out using a benchmark suite that we developed to make the results easily repeatable. Our goals are to understand how the different architectural decisions taken by different hypervisor developers affect the resulting hypervisors, to help hypervisor developers realize areas of improvement for their hypervisors, and to help users make informed decisions about their choice of hypervisor.",
"title": ""
},
{
"docid": "310b8159894bc88b74a907c924277de6",
"text": "We present a set of clustering algorithms that identify cluster boundaries by searching for a hyperplanar gap in unlabeled data sets. It turns out that the Normalized Cuts algorithm of Shi and Malik [1], originally presented as a graph-theoretic algorithm, can be interpreted as such an algorithm. Viewing Normalized Cuts under this light reveals that it pays more attention to points away from the center of the data set than those near the center of the data set. As a result, it can sometimes split long clusters and display sensitivity to outliers. We derive a variant of Normalized Cuts that assigns uniform weight to all points, eliminating the sensitivity to outliers.",
"title": ""
},
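The record above describes a family of normalized-cut-style clustering algorithms. As a purely illustrative aid, the sketch below shows the standard spectral relaxation of a two-way normalized cut on a toy 2-D point set; the Gaussian similarity, the median threshold, and all sizes are assumptions of this sketch, and it implements the classic Shi-Malik relaxation rather than the uniform-weight variant the record proposes.

```python
# Minimal normalized-cut-style bipartition on a toy point set (illustrative only).
import numpy as np

def normalized_cut_bipartition(X, sigma=1.0):
    """Split points into two clusters using the second eigenvector of the
    symmetric normalized Laplacian (Shi & Malik style relaxation)."""
    # Pairwise Gaussian similarities.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)

    # Symmetric normalized Laplacian L = I - D^{-1/2} W D^{-1/2}.
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L = np.eye(len(X)) - D_inv_sqrt @ W @ D_inv_sqrt

    # Second-smallest eigenvector gives the relaxed cut indicator.
    vals, vecs = np.linalg.eigh(L)
    fiedler = vecs[:, 1]
    return (fiedler > np.median(fiedler)).astype(int)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(3, 0.3, (30, 2))])
    print(normalized_cut_bipartition(X))
```

The degree normalization in this relaxation is exactly where the record's point about weighting points near versus away from the data center enters; the uniform-weight variant it describes changes that weighting.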
{
"docid": "acbb1a68d9e0e1768fff8acc8ae42b32",
"text": "The rapid increase in the number of Android malware poses great challenges to anti-malware systems, because the sheer number of malware samples overwhelms malware analysis systems. The classification of malware samples into families, such that the common features shared by malware samples in the same family can be exploited in malware detection and inspection, is a promising approach for accelerating malware analysis. Furthermore, the selection of representative malware samples in each family can drastically decrease the number of malware to be analyzed. However, the existing classification solutions are limited because of the following reasons. First, the legitimate part of the malware may misguide the classification algorithms because the majority of Android malware are constructed by inserting malicious components into popular apps. Second, the polymorphic variants of Android malware can evade detection by employing transformation attacks. In this paper, we propose a novel approach that constructs frequent subgraphs (fregraphs) to represent the common behaviors of malware samples that belong to the same family. Moreover, we propose and develop FalDroid, a novel system that automatically classifies Android malware and selects representative malware samples in accordance with fregraphs. We apply it to 8407 malware samples from 36 families. Experimental results show that FalDroid can correctly classify 94.2% of malware samples into their families using approximately 4.6 sec per app. FalDroid can also dramatically reduce the cost of malware investigation by selecting only 8.5% to 22% representative samples that exhibit the most common malicious behavior among all samples.",
"title": ""
}
] |
scidocsrr
|
ef672d1005138956e24b42c5fa2c62fe
|
A Survey on Internet of Things: Security and Privacy Issues
|
[
{
"docid": "3c778c71f621b2c887dc81e7a919058e",
"text": "We have witnessed the Fixed Internet emerging with virtually every computer being connected today; we are currently witnessing the emergence of the Mobile Internet with the exponential explosion of smart phones, tablets and net-books. However, both will be dwarfed by the anticipated emergence of the Internet of Things (IoT), in which everyday objects are able to connect to the Internet, tweet or be queried. Whilst the impact onto economies and societies around the world is undisputed, the technologies facilitating such a ubiquitous connectivity have struggled so far and only recently commenced to take shape. To this end, this paper introduces in a timely manner and for the first time the wireless communications stack the industry believes to meet the important criteria of power-efficiency, reliability and Internet connectivity. Industrial applications have been the early adopters of this stack, which has become the de-facto standard, thereby bootstrapping early IoT developments with already thousands of wireless nodes deployed. Corroborated throughout this paper and by emerging industry alliances, we believe that a standardized approach, using latest developments in the IEEE 802.15.4 and IETF working groups, is the only way forward. We introduce and relate key embodiments of the power-efficient IEEE 802.15.4-2006 PHY layer, the power-saving and reliable IEEE 802.15.4e MAC layer, the IETF 6LoWPAN adaptation layer enabling universal Internet connectivity, the IETF ROLL routing protocol enabling availability, and finally the IETF CoAP enabling seamless transport and support of Internet applications. The protocol stack proposed in the present work converges towards the standardized notations of the ISO/OSI and TCP/IP stacks. What thus seemed impossible some years back, i.e., building a clearly defined, standards-compliant and Internet-compliant stack given the extreme restrictions of IoT networks, is commencing to become reality.",
"title": ""
}
] |
[
{
"docid": "795a4d9f2dc10563dfee28c3b3cd0f08",
"text": "A wide-band probe fed patch antenna with low cross polarization and symmetrical broadside radiation pattern is proposed and studied. By employing a novel meandering probe feed and locating a patch about 0.1/spl lambda//sub 0/ above a ground plane, a patch antenna with 30% impedance bandwidth (SWR<2) and 9 dBi gain is designed. The far field radiation pattern of the antenna is stable across the operating bandwidth. Parametric studies and design guidelines of the proposed feeding structure are provided.",
"title": ""
},
{
"docid": "ef9b5b0fbfd71c8d939bfe947c60292d",
"text": "OBJECTIVE\nSome prolonged and turbulent grief reactions include symptoms that differ from the DSM-IV criteria for major depressive disorder. The authors investigated a new diagnosis that would include these symptoms.\n\n\nMETHOD\nThey developed observer-based definitions of 30 symptoms noted clinically in previous longitudinal interviews of bereaved persons and then designed a plan to investigate whether any combination of these would serve as criteria for a possible new diagnosis of complicated grief disorder. Using a structured diagnostic interview, they assessed 70 subjects whose spouses had died. Latent class model analyses and signal detection procedures were used to calibrate the data against global clinical ratings and self-report measures of grief-specific distress.\n\n\nRESULTS\nComplicated grief disorder was found to be characterized by a smaller set of the assessed symptoms. Subjects elected by an algorithm for these symptoms patterns did not significantly overlap with subjects who received a diagnosis of major depressive disorder.\n\n\nCONCLUSIONS\nA new diagnosis of complicated grief disorder may be indicated. Its criteria would include the current experience (more than a year after a loss) of intense intrusive thoughts, pangs of severe emotion, distressing yearnings, feeling excessively alone and empty, excessively avoiding tasks reminiscent of the deceased, unusual sleep disturbances, and maladaptive levels of loss of interest in personal activities.",
"title": ""
},
{
"docid": "67825e84cb2e636deead618a0868fa4a",
"text": "Image compression is used specially for the compression of images where tolerable degradation is required. With the wide use of computers and consequently need for large scale storage and transmission of data, efficient ways of storing of data have become necessary. With the growth of technology and entrance into the Digital Age, the world has found itself amid a vast amount of information. Dealing with such enormous information can often present difficulties. Image compression is minimizing the size in bytes of a graphics file without degrading the quality of the image to an unacceptable level. The reduction in file size allows more images to be stored in a given amount of disk or memory space. It also reduces the time required for images to be sent over the Internet or downloaded from Web pages.JPEG and JPEG 2000 are two important techniques used for image compression. In this paper, we discuss about lossy image compression techniques and reviews of different basic lossy image compression methods are considered. The methods such as JPEG and JPEG2000 are considered. A conclusion is derived on the basis of these methods Keywords— Data compression, Lossy image compression, JPEG, JPEG2000, DCT, DWT",
"title": ""
},
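The record above surveys lossy image compression with JPEG and JPEG 2000. As a hedged illustration of the DCT idea behind JPEG, the sketch below transforms one 8x8 block, applies uniform quantization, and reconstructs it; the single scalar quantization step is an assumption standing in for JPEG's perceptually tuned quantization tables, and entropy coding is omitted entirely.

```python
# Illustrative JPEG-style sketch: 8x8 block DCT, uniform quantization, inverse DCT.
import numpy as np
from scipy.fft import dctn, idctn

def compress_block(block, q=20.0):
    """Quantize the DCT coefficients of one 8x8 block."""
    coeffs = dctn(block - 128.0, norm="ortho")   # level shift then 2-D DCT
    return np.round(coeffs / q)

def decompress_block(q_coeffs, q=20.0):
    return idctn(q_coeffs * q, norm="ortho") + 128.0

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, size=(8, 8)).astype(np.float64)
    q_coeffs = compress_block(img)
    rec = decompress_block(q_coeffs)
    print("nonzero coefficients:", int(np.count_nonzero(q_coeffs)))
    print("mean absolute error:", float(np.abs(img - rec).mean()))
```

Raising the quantization step q zeroes out more coefficients (better compressibility) at the cost of larger reconstruction error, which is the lossy trade-off the record discusses.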
{
"docid": "aeba4012971d339a9a953a7b86f57eb8",
"text": "Bridging the ‘reality gap’ that separates simulated robotics from experiments on hardware could accelerate robotic research through improved data availability. This paper explores domain randomization, a simple technique for training models on simulated images that transfer to real images by randomizing rendering in the simulator. With enough variability in the simulator, the real world may appear to the model as just another variation. We focus on the task of object localization, which is a stepping stone to general robotic manipulation skills. We find that it is possible to train a real-world object detector that is accurate to 1.5 cm and robust to distractors and partial occlusions using only data from a simulator with non-realistic random textures. To demonstrate the capabilities of our detectors, we show they can be used to perform grasping in a cluttered environment. To our knowledge, this is the first successful transfer of a deep neural network trained only on simulated RGB images (without pre-training on real images) to the real world for the purpose of robotic control.",
"title": ""
},
{
"docid": "6111427d19826acdd38c80cb7f421405",
"text": "We introduce a novel method for representation learning that uses an artificial supervision signal based on counting visual primitives. This supervision signal is obtained from an equivariance relation, which does not require any manual annotation. We relate transformations of images to transformations of the representations. More specifically, we look for the representation that satisfies such relation rather than the transformations that match a given representation. In this paper, we use two image transformations in the context of counting: scaling and tiling. The first transformation exploits the fact that the number of visual primitives should be invariant to scale. The second transformation allows us to equate the total number of visual primitives in each tile to that in the whole image. These two transformations are combined in one constraint and used to train a neural network with a contrastive loss. The proposed task produces representations that perform on par or exceed the state of the art in transfer learning benchmarks.",
"title": ""
},
{
"docid": "db3bb02dde6c818b173cf12c9c7440b7",
"text": "PURPOSE\nThe authors conducted a systematic review of the published literature on social media use in medical education to answer two questions: (1) How have interventions using social media tools affected outcomes of satisfaction, knowledge, attitudes, and skills for physicians and physicians-in-training? and (2) What challenges and opportunities specific to social media have educators encountered in implementing these interventions?\n\n\nMETHOD\nThe authors searched the MEDLINE, CINAHL, ERIC, Embase, PsycINFO, ProQuest, Cochrane Library, Web of Science, and Scopus databases (from the start of each through September 12, 2011) using keywords related to social media and medical education. Two authors independently reviewed the search results to select peer-reviewed, English-language articles discussing social media use in educational interventions at any level of physician training. They assessed study quality using the Medical Education Research Study Quality Instrument.\n\n\nRESULTS\nFourteen studies met inclusion criteria. Interventions using social media tools were associated with improved knowledge (e.g., exam scores), attitudes (e.g., empathy), and skills (e.g., reflective writing). The most commonly reported opportunities related to incorporating social media tools were promoting learner engagement (71% of studies), feedback (57%), and collaboration and professional development (both 36%). The most commonly cited challenges were technical issues (43%), variable learner participation (43%), and privacy/security concerns (29%). Studies were generally of low to moderate quality; there was only one randomized controlled trial.\n\n\nCONCLUSIONS\nSocial media use in medical education is an emerging field of scholarship that merits further investigation. Educators face challenges in adapting new technologies, but they also have opportunities for innovation.",
"title": ""
},
{
"docid": "fc522482dbbcdeaa06e3af9a2f82b377",
"text": "Background/Objectives:As rates of obesity have increased throughout much of the world, so too have bias and prejudice toward people with higher body weight (that is, weight bias). Despite considerable evidence of weight bias in the United States, little work has examined its extent and antecedents across different nations. The present study conducted a multinational examination of weight bias in four Western countries with comparable prevalence rates of adult overweight and obesity.Methods:Using comprehensive self-report measures with 2866 individuals in Canada, the United States, Iceland and Australia, the authors assessed (1) levels of explicit weight bias (using the Fat Phobia Scale and the Universal Measure of Bias) and multiple sociodemographic predictors (for example, sex, age, race/ethnicity and educational attainment) of weight-biased attitudes and (2) the extent to which weight-related variables, including participants’ own body weight, personal experiences with weight bias and causal attributions of obesity, play a role in expressions of weight bias in different countries.Results:The extent of weight bias was consistent across countries, and in each nation attributions of behavioral causes of obesity predicted stronger weight bias, as did beliefs that obesity is attributable to lack of willpower and personal responsibility. In addition, across all countries the magnitude of weight bias was stronger among men and among individuals without family or friends who had experienced this form of bias.Conclusions:These findings offer new insights and important implications regarding sociocultural factors that may fuel weight bias across different cultural contexts, and for targets of stigma-reduction efforts in different countries.",
"title": ""
},
{
"docid": "e473e6b4c5d825582f3a5afe00a005de",
"text": "This paper explores and quantifies garbage collection behavior for three whole heap collectors and generational counterparts: copying semi-space, mark-sweep, and reference counting, the canonical algorithms from which essentially all other collection algorithms are derived. Efficient implementations in MMTk, a Java memory management toolkit, in IBM's Jikes RVM share all common mechanisms to provide a clean experimental platform. Instrumentation separates collector and program behavior, and performance counters measure timing and memory behavior on three architectures.Our experimental design reveals key algorithmic features and how they match program characteristics to explain the direct and indirect costs of garbage collection as a function of heap size on the SPEC JVM benchmarks. For example, we find that the contiguous allocation of copying collectors attains significant locality benefits over free-list allocators. The reduced collection costs of the generational algorithms together with the locality benefit of contiguous allocation motivates a copying nursery for newly allocated objects. These benefits dominate the overheads of generational collectors compared with non-generational and no collection, disputing the myth that \"no garbage collection is good garbage collection.\" Performance is less sensitive to the mature space collection algorithm in our benchmarks. However the locality and pointer mutation characteristics for a given program occasionally prefer copying or mark-sweep. This study is unique in its breadth of garbage collection algorithms and its depth of analysis.",
"title": ""
},
{
"docid": "ac0119255806976213d61029247b14f1",
"text": "Virtual reality training systems are commonly used in a variety of domains, and it is important to understand how the realism of a training simulation influences training effectiveness. We conducted a controlled experiment to test the effects of display and scenario properties on training effectiveness for a visual scanning task in a simulated urban environment. The experiment varied the levels of field of view and visual complexity during a training phase and then evaluated scanning performance with the simulator's highest levels of fidelity and scene complexity. To assess scanning performance, we measured target detection and adherence to a prescribed strategy. The results show that both field of view and visual complexity significantly affected target detection during training; higher field of view led to better performance and higher visual complexity worsened performance. Additionally, adherence to the prescribed visual scanning strategy during assessment was best when the level of visual complexity during training matched that of the assessment conditions, providing evidence that similar visual complexity was important for learning the technique. The results also demonstrate that task performance during training was not always a sufficient measure of mastery of an instructed technique. That is, if learning a prescribed strategy or skill is the goal of a training exercise, performance in a simulation may not be an appropriate indicator of effectiveness outside of training-evaluation in a more realistic setting may be necessary.",
"title": ""
},
{
"docid": "f2640838cfc3938d1a717229e77b3afc",
"text": "Defenders of enterprise networks have a critical need to quickly identify the root causes of malware and data leakage. Increasingly, USB storage devices are the media of choice for data exfiltration, malware propagation, and even cyber-warfare. We observe that a critical aspect of explaining and preventing such attacks is understanding the provenance of data (i.e., the lineage of data from its creation to current state) on USB devices as a means of ensuring their safe usage. Unfortunately, provenance tracking is not offered by even sophisticated modern devices. This work presents ProvUSB, an architecture for fine-grained provenance collection and tracking on smart USB devices. ProvUSB maintains data provenance by recording reads and writes at the block layer and reliably identifying hosts editing those blocks through attestation over the USB channel. Our evaluation finds that ProvUSB imposes a one-time 850 ms overhead during USB enumeration, but approaches nearly-bare-metal runtime performance (90% of throughput) on larger files during normal execution, and less than 0.1% storage overhead for provenance in real-world workloads. ProvUSB thus provides essential new techniques in the defense of computer systems and USB storage devices.",
"title": ""
},
{
"docid": "edcdae3f9da761cedd52273ccd850520",
"text": "Extracting information from Web pages requires the ability to work at Web scale in terms of the number of documents, the number of domains and domain complexity. Recent approaches have used existing knowledge bases to learn to extract information with promising results. In this paper we propose the use of distant supervision for relation extraction from the Web. Distant supervision is a method which uses background information from the Linking Open Data cloud to automatically label sentences with relations to create training data for relation classifiers. Although the method is promising, existing approaches are still not suitable for Web extraction as they suffer from three main issues: data sparsity, noise and lexical ambiguity. Our approach reduces the impact of data sparsity by making entity recognition tools more robust across domains, as well as extracting relations across sentence boundaries. We reduce the noise caused by lexical ambiguity by employing statistical methods to strategically select training data. Our experiments show that using a more robust entity recognition approach and expanding the scope of relation extraction results in about 8 times the number of extractions, and that strategically selecting training data can result in an error reduction of about 30%.",
"title": ""
},
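The record above uses distant supervision, labeling sentences by looking up entity pairs in a knowledge base. The toy sketch below shows only that labeling step with a made-up two-fact KB; note how the second sentence becomes a noisy positive example, which is exactly the problem the record's noise-reduction methods target.

```python
# Toy distant-supervision labeling: a sentence mentioning an entity pair that a
# (made-up) knowledge base links with a relation becomes a training example.
kb = {("Berlin", "Germany"): "capitalOf", ("Paris", "France"): "capitalOf"}

sentences = [
    "Berlin is the capital and largest city of Germany.",
    "Paris hosted the exhibition in France last year.",   # noisy positive
    "Berlin has a vibrant startup scene.",                # no pair -> no label
]

training_examples = []
for sent in sentences:
    for (e1, e2), relation in kb.items():
        if e1 in sent and e2 in sent:
            training_examples.append((sent, e1, e2, relation))

for ex in training_examples:
    print(ex)
```

Real systems replace the substring match with proper entity recognition and then filter or down-weight noisy matches, as the record describes.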
{
"docid": "e6300989e5925d38d09446b3e43092e5",
"text": "Cloud computing provides resources as services in pay-as-you-go mode to customers by using virtualization technology. As virtual machine (VM) is hosted on physical server, great energy is consumed by maintaining the servers in data center. More physical servers means more energy consumption and more money cost. Therefore, the VM placement (VMP) problem is significant in cloud computing. This paper proposes an approach based on ant colony optimization (ACO) to solve the VMP problem, named as ACO-VMP, so as to effectively use the physical resources and to reduce the number of running physical servers. The number of physical servers is the same as the number of the VMs at the beginning. Then the ACO approach tries to reduce the physical server one by one. We evaluate the performance of the proposed ACO-VMP approach in solving VMP with the number of VMs being up to 600. Experimental results compared with the ones obtained by the first-fit decreasing (FFD) algorithm show that ACO-VMP can solve VMP more efficiently to reduce the number of physical servers significantly, especially when the number of VMs is large.",
"title": ""
},
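The record above compares an ACO-based VM placement algorithm against a first-fit decreasing (FFD) baseline. The sketch below implements only the FFD baseline as one-dimensional bin packing; the single-resource assumption, capacities, and demands are illustrative, and the ACO-VMP algorithm itself is not reproduced here.

```python
# First-fit-decreasing baseline for VM placement, treating each server as a
# single-dimensional bin (illustrative assumption: one normalized resource).
def first_fit_decreasing(vm_demands, server_capacity):
    """Return a list of servers, each a list of (vm_index, demand)."""
    servers = []   # each entry: [remaining_capacity, [(idx, demand), ...]]
    order = sorted(range(len(vm_demands)), key=lambda i: vm_demands[i], reverse=True)
    for i in order:
        d = vm_demands[i]
        for s in servers:
            if s[0] >= d:            # first server with enough room
                s[0] -= d
                s[1].append((i, d))
                break
        else:
            servers.append([server_capacity - d, [(i, d)]])
    return [s[1] for s in servers]

if __name__ == "__main__":
    demands = [0.5, 0.7, 0.2, 0.4, 0.3, 0.6, 0.1]
    placement = first_fit_decreasing(demands, server_capacity=1.0)
    print(f"{len(placement)} servers used:", placement)
```

The number of servers returned by this baseline is the quantity the record's ACO approach tries to reduce further.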
{
"docid": "a58cbbff744568ae7abd2873d04d48e9",
"text": "Training real-world Deep Neural Networks (DNNs) can take an eon (i.e., weeks or months) without leveraging distributed systems. Even distributed training takes inordinate time, of which a large fraction is spent in communicating weights and gradients over the network. State-of-the-art distributed training algorithms use a hierarchy of worker-aggregator nodes. The aggregators repeatedly receive gradient updates from their allocated group of the workers, and send back the updated weights. This paper sets out to reduce this significant communication cost by embedding data compression accelerators in the Network Interface Cards (NICs). To maximize the benefits of in-network acceleration, the proposed solution, named INCEPTIONN (In-Network Computing to Exchange and Process Training Information Of Neural Networks), uniquely combines hardware and algorithmic innovations by exploiting the following three observations. (1) Gradients are significantly more tolerant to precision loss than weights and as such lend themselves better to aggressive compression without the need for the complex mechanisms to avert any loss. (2) The existing training algorithms only communicate gradients in one leg of the communication, which reduces the opportunities for in-network acceleration of compression. (3) The aggregators can become a bottleneck with compression as they need to compress/decompress multiple streams from their allocated worker group. To this end, we first propose a lightweight and hardware-friendly lossy-compression algorithm for floating-point gradients, which exploits their unique value characteristics. This compression not only enables significantly reducing the gradient communication with practically no loss of accuracy, but also comes with low complexity for direct implementation as a hardware block in the NIC. To maximize the opportunities for compression and avoid the bottleneck at aggregators, we also propose an aggregator-free training algorithm that exchanges gradients in both legs of communication in the group, while the workers collectively perform the aggregation in a distributed manner. Without changing the mathematics of training, this algorithm leverages the associative property of the aggregation operator and enables our in-network accelerators to (1) apply compression for all communications, and (2) prevent the aggregator nodes from becoming bottlenecks. Our experiments demonstrate that INCEPTIONN reduces the communication time by 70.9~80.7% and offers 2.2~3.1x speedup over the conventional training system, while achieving the same level of accuracy.",
"title": ""
},
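The record above argues that gradients tolerate aggressive lossy compression. The sketch below illustrates that general point with the simplest possible scheme, truncating float32 gradients to float16 before "transmission"; this is an illustrative stand-in, not INCEPTIONN's hardware-friendly encoding, whose details are not reproduced here.

```python
# Illustrative lossy gradient compression: float32 -> float16 round trip.
import numpy as np

def compress(grad: np.ndarray) -> bytes:
    return grad.astype(np.float16).tobytes()

def decompress(payload: bytes, shape) -> np.ndarray:
    return np.frombuffer(payload, dtype=np.float16).reshape(shape).astype(np.float32)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    grad = rng.normal(0, 1e-2, size=(1024, 256)).astype(np.float32)
    payload = compress(grad)
    restored = decompress(payload, grad.shape)
    ratio = grad.nbytes / len(payload)
    err = float(np.abs(grad - restored).max())
    print(f"compression ratio: {ratio:.1f}x, max abs error: {err:.2e}")
```

Even this naive 2x scheme shows the precision-versus-bandwidth trade-off; the record's contribution is a much more aggressive, value-aware encoding implemented in the NIC.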
{
"docid": "df808fcf51612bf81e8fd328d298291d",
"text": "Chemomechanical preparation of the root canal includes both mechanical instrumentation and antibacterial irrigation, and is principally directed toward the elimination of micro-organisms from the root canal system. A variety of instruments and techniques have been developed and described for this critical stage of root canal treatment. Since their introduction in 1988, nickel-titanium (NiTi) rotary instruments have become a mainstay in clinical endodontics because of their exceptional ability to shape root canals with potentially fewer procedural complications. Safe clinical usage of NiTi instruments requires an understanding of basic metallurgy of the alloy including fracture mechanisms and their correlation to canal anatomy. This paper reviews the biologic principles of preparing root canals with an emphasis on correct use of current rotary NiTi instrumentation techniques and systems. The role and properties of contemporary root canal irrigants is also discussed.",
"title": ""
},
{
"docid": "f48f55963cf3beb43170df96a463feba",
"text": "This article proposes and implements a class of chaotic motors for electric compaction. The key is to develop a design approach for the permanent magnets PMs of doubly salient PM DSPM motors in such a way that chaotic motion can be naturally produced. The bifurcation diagram is employed to derive the threshold of chaoization in terms of PM flux, while the corresponding phase-plane trajectories are used to characterize the chaotic motion. A practical three-phase 12/8-pole DSPM motor is used for exemplification. The proposed chaotic motor is critically assessed for application to a vibratory soil compactor, which is proven to offer better compaction performance than its counterparts. Both computer simulation and experimental results are given to illustrate the proposed chaotic motor. © 2006 American Institute of Physics. DOI: 10.1063/1.2165783",
"title": ""
},
{
"docid": "b82b46fc0d886e3e87b757a6ca14d4bb",
"text": "Objective: To study the efficacy and safety of an indigenously designed low cost nasal bubble continuous positive airway pressure (NB-CPAP) in neonates admitted with respiratory distress. Study Design: A descriptive study. Place and Duration of Study: Combined Military Hospital (CMH), Peshawar from Jan 2014 to May 2014. Material and Methods: Fifty neonates who developed respiratory distress within 6 hours of life were placed on an indigenous NB-CPAP device (costing 220 PKR) and evaluated for gestational age, weight, indications, duration on NB-CPAP, pre-defined outcomes and complications. Results: A total of 50 consecutive patients with respiratory distress were placed on NB-CPAP. Male to Female ratio was 2.3:1. Mean weight was 2365.85 ± 704 grams and mean gestational age was 35.41 ± 2.9 weeks. Indications for applying NB-CPAP were transient tachypnea of the newborn (TTN, 52%) and respiratory distress syndrome (RDS, 44%). Most common complications were abdominal distension (15.6%) and pulmonary hemorrhage (6%). Out of 50 infants placed on NB-CPAP, 35 (70%) were managed on NB-CPAP alone while 15 (30%) needed mechanical ventilation following a trial of NB-CPAP. Conclusion: In 70% of babies invasive mechanical ventilation was avoided using NB-CPAP.",
"title": ""
},
{
"docid": "e9b8787e5bb1f099e914db890e04dc23",
"text": "This paper presents the design of a compact UHF-RFID tag antenna with several miniaturization techniques including meandering technique and capacitive tip-loading structure. Additionally, T-matching technique is also utilized in the antenna design for impedance matching. This antenna was designed on Rogers 5880 printed circuit board (PCB) with the dimension of 43 × 26 × 0.787 mm3 and relative permittivity, □r of 2.2. The performance of the proposed antenna was analyzed in terms of matched impedance, antenna gain, return loss and tag reading range through the simulation in CST Microwave Studio software. As a result, the proposed antenna obtained a gain of 0.97dB and a maximum reading range of 5.15 m at 921 MHz.",
"title": ""
},
{
"docid": "d8b3eb944d373741747eb840a18a490b",
"text": "Natural scenes contain large amounts of geometry, such as hundreds of thousands or even millions of tree leaves and grass blades. Subtle lighting effects present in such environments usually include a significant amount of occlusion effects and lighting variation. These effects are important for realistic renderings of such natural environments; however, plausible lighting and full global illumination computation come at prohibitive costs especially for interactive viewing. As a solution to this problem, we present a simple approximation to integrated visibility over a hemisphere (ambient occlusion) that allows interactive rendering of complex and dynamic scenes. Based on a set of simple assumptions, we show that our method allows the rendering of plausible variation in lighting at modest additional computation and little or no precomputation, for complex and dynamic scenes.",
"title": ""
},
{
"docid": "96f42b3a653964cffa15d9b3bebf0086",
"text": "The brain processes information through many layers of neurons. This deep architecture is representationally powerful1,2,3,4, but it complicates learning by making it hard to identify the responsible neurons when a mistake is made1,5. In machine learning, the backpropagation algorithm1 assigns blame to a neuron by computing exactly how it contributed to an error. To do this, it multiplies error signals by matrices consisting of all the synaptic weights on the neuron’s axon and farther downstream. This operation requires a precisely choreographed transport of synaptic weight information, which is thought to be impossible in the brain1,6,7,8,9,10,11,12,13,14. Here we present a surprisingly simple algorithm for deep learning, which assigns blame by multiplying error signals by random synaptic weights. We show that a network can learn to extract useful information from signals sent through these random feedback connections. In essence, the network learns to learn. We demonstrate that this new mechanism performs as quickly and accurately as backpropagation on a variety of problems and describe the principles which underlie its function. Our demonstration provides a plausible basis for how a neuron can be adapted using error signals generated at distal locations in the brain, and thus dispels long-held assumptions about the algorithmic constraints on learning in neural circuits. 1 ar X iv :1 41 1. 02 47 v1 [ qbi o. N C ] 2 N ov 2 01 4 Networks in the brain compute via many layers of interconnected neurons15,16. To work properly neurons must adjust their synapses so that the network’s outputs are appropriate for its tasks. A longstanding mystery is how upstream synapses (e.g. the synapse between α and β in Fig. 1a) are adjusted on the basis of downstream errors (e.g. e in Fig. 1a). In artificial intelligence this problem is solved by an algorithm called backpropagation of error1. Backprop works well in real-world applications17,18,19, and networks trained with it can account for cell response properties in some areas of cortex20,21. But it is biologically implausible because it requires that neurons send each other precise information about large numbers of synaptic weights — i.e. it needs weight transport1,6,7,8,12,14,22 (Fig. 1a, b). Specifically, backprop multiplies error signals e by the matrix W T , the transpose of the forward synaptic connections, W (Fig. 1b). This implies that feedback is computed using knowledge of all the synaptic weights W in the forward path. For this reason, current theories of biological learning have turned to simpler schemes such as reinforcement learning23, and “shallow” mechanisms which use errors to adjust only the final layer of a network4,11. But reinforcement learning, which delivers the same reward signal to each neuron, is slow and scales poorly with network size5,13,24. And shallow mechanisms waste the representational power of deep networks3,4,25. Here we describe a new deep-learning algorithm that is as fast and accurate as backprop, but much simpler, avoiding all transport of synaptic weight information. This makes it a mechanism the brain could easily exploit. It is based on three insights: (i) The feedback weights need not be exactly W T . In fact, any matrix B will suffice, so long as on average,",
"title": ""
},
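The record above summarizes learning with fixed random feedback weights. Below is a minimal, hedged sketch of that idea on a toy regression problem: the backward pass sends the output error through a fixed random matrix B instead of the transpose of the forward weights. The network sizes, activation, learning rate, and data are illustrative assumptions, not the paper's experimental setup.

```python
# Illustrative "feedback alignment" training loop on a toy regression task.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 10, 32, 1

# Forward weights (learned) and a fixed random feedback matrix B.
W1 = rng.normal(0, 0.1, (n_hid, n_in))
W2 = rng.normal(0, 0.1, (n_out, n_hid))
B = rng.normal(0, 0.1, (n_hid, n_out))   # replaces W2.T in the backward pass

# Toy linear target mapping.
W_true = rng.normal(0, 1.0, (n_out, n_in))
X = rng.normal(0, 1.0, (1000, n_in))
Y = X @ W_true.T

lr = 0.02
for epoch in range(200):
    h = np.tanh(X @ W1.T)          # hidden activity
    y_hat = h @ W2.T               # linear readout
    e = y_hat - Y                  # output error

    # Error is sent back through the fixed random matrix B, not W2.T.
    delta_h = (e @ B.T) * (1.0 - h ** 2)

    W2 -= lr * e.T @ h / len(X)
    W1 -= lr * delta_h.T @ X / len(X)

print("final MSE:", float((e ** 2).mean()))
```

Swapping `e @ B.T` for `e @ W2` recovers ordinary backpropagation, which makes the single-line difference between the two schemes easy to see.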
{
"docid": "aa0d6d4fb36c2a1d18dac0930e89179e",
"text": "The interest in biomass is increasing in the light of the growing concern about global warming and the resulting climate change. The emission of the greenhouse gas CO2 can be reduced when 'green' biomass-derived transportation fuels are used. One of the most promising routes to produce green fuels is the combination of biomass gasification (BG) and Fischer-Tropsch (FT) synthesis, wherein biomass is gasified and after cleaning the biosyngas is used for FT synthesis to produce long-chain hydrocarbons that are converted into ‘green diesel’. To demonstrate this route, a small FT unit based on Shell technology was operated for in total 650 hours on biosyngas produced by gasification of willow. In the investigated system, tars were removed in a high-temperature tar cracker and other impurities, like NH3 and H2S were removed via wet scrubbing followed by active-carbon and ZnO filters. The experimental work and the supporting system analysis afforded important new insights on the desired gas cleaning and the optimal line-up for biomass gasification processes with a maximised conversion to FT liquids. Two approaches were considered: a front-end approach with reference to the (small) scale of existing CFB gasifiers (1-100 M Wth) and a back-end approach with reference to the desired (large) scale for FT synthesis (500-1000 MWth). In general, the sum of H2 and CO in the raw biosyngas is an important parameter, whereas the H2/CO ratio is less relevant. BTX (i.e . benzene, toluene, and xylenes) are the design guideline for the gas cleaning and with this the tar issue is de-facto solved (as tars are easier to remove than BTX). To achieve high yields of FT products the presence of a tar cracker in the system is required. Oxygen gasification allows a further increase in yield of FT products as a N2-free gas is required for off-gas recycling. The scale of the BG-FT installation determines the line-up of the gas cleaning and the integrated process. It is expected that the future of BG-FT systems will be large plants with pressurised oxygen blown gasifiers and maximised Fischer-Tropsch synthesis.",
"title": ""
}
] |
scidocsrr
|
ae4925716c46b95a6ffa2b4b2307cc67
|
Detecting authorship deception: a supervised machine learning approach using author writeprints
|
[
{
"docid": "7f652be9bde8f47d166e7bbeeb3a535b",
"text": "One of the problems often associated with online anonymity is that it hinders social accountability, as substantiated by the high levels of cybercrime. Although identity cues are scarce in cyberspace, individuals often leave behind textual identity traces. In this study we proposed the use of stylometric analysis techniques to help identify individuals based on writing style. We incorporated a rich set of stylistic features, including lexical, syntactic, structural, content-specific, and idiosyncratic attributes. We also developed the Writeprints technique for identification and similarity detection of anonymous identities. Writeprints is a Karhunen-Loeve transforms-based technique that uses a sliding window and pattern disruption algorithm with individual author-level feature sets. The Writeprints technique and extended feature set were evaluated on a testbed encompassing four online datasets spanning different domains: email, instant messaging, feedback comments, and program code. Writeprints outperformed benchmark techniques, including SVM, Ensemble SVM, PCA, and standard Karhunen-Loeve transforms, on the identification and similarity detection tasks with accuracy as high as 94% when differentiating between 100 authors. The extended feature set also significantly outperformed a baseline set of features commonly used in previous research. Furthermore, individual-author-level feature sets generally outperformed use of a single group of attributes.",
"title": ""
}
] |
[
{
"docid": "8f78f2efdd2fecaf32fbc7f5ffa79218",
"text": "Evolutionary population dynamics (EPD) deal with the removal of poor individuals in nature. It has been proven that this operator is able to improve the median fitness of the whole population, a very effective and cheap method for improving the performance of meta-heuristics. This paper proposes the use of EPD in the grey wolf optimizer (GWO). In fact, EPD removes the poor search agents of GWO and repositions them around alpha, beta, or delta wolves to enhance exploitation. The GWO is also required to randomly reinitialize its worst search agents around the search space by EPD to promote exploration. The proposed GWO–EPD algorithm is benchmarked on six unimodal and seven multi-modal test functions. The results are compared to the original GWO algorithm for verification. It is demonstrated that the proposed operator is able to significantly improve the performance of the GWO algorithm in terms of exploration, local optima avoidance, exploitation, local search, and convergence rate.",
"title": ""
},
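The record above adds an evolutionary population dynamics (EPD) step to the grey wolf optimizer: poor search agents are repositioned around the alpha, beta, or delta wolves or randomly re-initialized. The sketch below implements one plausible version of that operator in isolation; the 50/50 reposition rule, the noise scale, and the sphere test function are assumptions, not the paper's exact procedure.

```python
# Illustrative EPD step: resample the worst half of the pack around a leader
# or uniformly within the bounds (minimization assumed).
import numpy as np

def epd_step(positions, fitness, lower, upper, rng):
    n, dim = positions.shape
    order = np.argsort(fitness)               # ascending: best first
    leaders = positions[order[:3]]            # alpha, beta, delta
    worst = order[n // 2:]                    # poorer half of the pack
    for i in worst:
        if rng.random() < 0.5:
            # Reposition near a randomly chosen leader (exploitation).
            leader = leaders[rng.integers(3)]
            positions[i] = leader + rng.normal(0, 0.1 * (upper - lower), dim)
        else:
            # Random re-initialization within the bounds (exploration).
            positions[i] = rng.uniform(lower, upper, dim)
        positions[i] = np.clip(positions[i], lower, upper)
    return positions

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pos = rng.uniform(-5, 5, (10, 2))
    fit = (pos ** 2).sum(axis=1)              # sphere function as a stand-in
    print(epd_step(pos, fit, -5.0, 5.0, rng))
```

In a full GWO loop this step would be interleaved with the usual encircling/hunting position updates; here it is shown alone to make the operator explicit.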
{
"docid": "b07cbf3da9e3ff9691dcb49040c7e6a5",
"text": "Few years ago, the information flow in library was relatively simple and the application of technology was limited. However, as we progress into a more integrated world where technology has become an integral part of the business processes, the process of transfer of information has become more complicated. Today, one of the biggest challenges that libraries face is the explosive growth of library data and to use this data to improve the quality of managerial decisions. Data mining techniques are analytical tools that can be used to extract meaningful knowledge from large data sets. This paper addresses the applications of data mining in library to extract useful information from the huge data sets and providing analytical tool to view and use this information for decision making processes by taking real life examples.",
"title": ""
},
{
"docid": "b591667db2fd53ac9332464b4babd877",
"text": "Health Insurance fraud is a major crime that imposes significant financial and personal costs on individuals, businesses, government and society as a whole. So there is a growing concern among the insurance industry about the increasing incidence of abuse and fraud in health insurance. Health Insurance frauds are driving up the overall costs of insurers, premiums for policyholders, providers and then intern countries finance system. It encompasses a wide range of illicit practices and illegal acts. This paper provides an approach to detect and predict potential frauds by applying big data, hadoop environment and analytic methods which can lead to rapid detection of claim anomalies. The solution is based on a high volume of historical data from various insurance company data and hospital data of a specific geographical area. Such sources are typically voluminous, diverse, and vary significantly over the time. Therefore, distributed and parallel computing tools collectively termed big data have to be developed. Paper demonstrate the effectiveness and efficiency of the open-source predictive modeling framework we used, describe the results from various predictive modeling techniques .The platform is able to detect erroneous or suspicious records in submitted health care data sets and gives an approach of how the hospital and other health care data is helpful for the detecting health care insurance fraud by implementing various data analytic module such as decision tree, clustering and naive Bayesian classification. Aim is to build a model that can identify the claim is a fraudulent or not by relating data from hospitals and insurance company to make health insurance more efficient and to ensure that the money is spent on legitimate causes. Critical objectives included the development of a fraud detection engine with an aim to help those in the health insurance business and minimize the loss of funds to fraud.",
"title": ""
},
{
"docid": "d1041afcb50a490034740add2cce3f0d",
"text": "Inverse synthetic aperture radar imaging of moving targets with a stepped frequency waveform presents unique challenges. Intra-step target motion introduces phase discontinuities between frequency bands, which in turn produce degraded range side lobes. Frequency stitching of the stepped-frequency waveform to emulate a contiguous bandwidth can dramatically reduce the effective pulse repetition frequency, which then may impact the maximize target size that can be unambiguously measured and imaged via ISAR. This paper analyzes these effects and validates results via simulated data.",
"title": ""
},
{
"docid": "885281566381b396594a7508e5f255c8",
"text": "The last decade has witnessed the emergence and aesthetic maturation of amateur multimedia on an unprecedented scale, from video podcasts to machinima, and Flash animations to user-created metaverses. Today, especially in academic circles, this pop culture phenomenon is little recognized and even less understood. This paper explores creativity in amateur multimedia using three theorizations of creativity—those of HCI, postructuralism, and technological determinism. These theorizations frame a semiotic analysis of numerous commonly used multimedia authoring platforms, which demonstrates a deep convergence of multimedia authoring tool strategies that collectively project a conceptualization and practice of digital creativity. This conceptualization of digital creativity in authoring tools is then compared with hundreds of amateur-created artifacts. These analyses reveal relationships among emerging amateur multimedia aesthetics, common software authoring tools, and the three theorizations of creativity discussed.",
"title": ""
},
{
"docid": "13c2dea57aed95f7b937a9d329dd5af8",
"text": "Understanding topic hierarchies in text streams and their evolution patterns over time is very important in many applications. In this paper, we propose an evolutionary multi-branch tree clustering method for streaming text data. We build evolutionary trees in a Bayesian online filtering framework. The tree construction is formulated as an online posterior estimation problem, which considers both the likelihood of the current tree and conditional prior given the previous tree. We also introduce a constraint model to compute the conditional prior of a tree in the multi-branch setting. Experiments on real world news data demonstrate that our algorithm can better incorporate historical tree information and is more efficient and effective than the traditional evolutionary hierarchical clustering algorithm.",
"title": ""
},
{
"docid": "6f854ac470ce9ffb615b5457bad2dcad",
"text": "Efficient CNN designs like ResNets and DenseNet were proposed to improve accuracy vs efficiency trade-offs. They essentially increased the connectivity, allowing efficient information flow across layers. Inspired by these techniques, we propose to model connections between filters of a CNN using graphs which are simultaneously sparse and well connected. Sparsity results in efficiency while well connectedness can preserve the expressive power of the CNNs. We use a well-studied class of graphs from theoretical computer science that satisfies these properties known as Expander graphs. Expander graphs are used to model connections between filters in CNNs to design networks called X-Nets. We present two guarantees on the connectivity of X-Nets: Each node influences every node in a layer in logarithmic steps, and the number of paths between two sets of nodes is proportional to the product of their sizes. We also propose efficient training and inference algorithms, making it possible to train deeper and wider X-Nets effectively. Expander based models give a 4% improvement in accuracy on MobileNet over grouped convolutions, a popular technique, which has the same sparsity but worse connectivity. X-Nets give better performance trade-offs than the original ResNet and DenseNet-BC architectures. We achieve model sizes comparable to state-of-the-art pruning techniques using our simple architecture design, without any pruning. We hope that this work motivates other approaches to utilize results from graph theory to develop efficient network architectures.",
"title": ""
},
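The record above models connections between filters as sparse, well-connected graphs. As an illustrative sketch only, the code below builds a random mask in which each output unit keeps a fixed number of randomly chosen inputs and applies it to a dense weight matrix; this is a plain random sparse construction for intuition, not the paper's X-Net implementation or a certified expander graph.

```python
# Illustrative sparse random bipartite connectivity mask applied to a linear layer.
import numpy as np

def random_d_regular_mask(n_in, n_out, d, rng):
    """0/1 mask of shape (n_out, n_in) with exactly d ones per row."""
    mask = np.zeros((n_out, n_in), dtype=np.float32)
    for o in range(n_out):
        cols = rng.choice(n_in, size=d, replace=False)
        mask[o, cols] = 1.0
    return mask

rng = np.random.default_rng(0)
n_in, n_out, d = 256, 128, 16
mask = random_d_regular_mask(n_in, n_out, d, rng)

W = rng.normal(0, 0.05, (n_out, n_in)).astype(np.float32) * mask  # sparse weights
x = rng.normal(size=(8, n_in)).astype(np.float32)
y = x @ W.T
print("sparsity:", 1.0 - mask.mean(), "output shape:", y.shape)
```

Keeping d fixed per output bounds the compute and parameter count, while random choice keeps the layer well connected in expectation, which is the trade-off the record emphasizes.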
{
"docid": "51ddbc18a9e5a460038676b7d5dc6f10",
"text": "The selection, development, or comparison of machine learning methods in data mining can be a difficult task based on the target problem and goals of a particular study. Numerous publicly available real-world and simulated benchmark datasets have emerged from different sources, but their organization and adoption as standards have been inconsistent. As such, selecting and curating specific benchmarks remains an unnecessary burden on machine learning practitioners and data scientists. The present study introduces an accessible, curated, and developing public benchmark resource to facilitate identification of the strengths and weaknesses of different machine learning methodologies. We compare meta-features among the current set of benchmark datasets in this resource to characterize the diversity of available data. Finally, we apply a number of established machine learning methods to the entire benchmark suite and analyze how datasets and algorithms cluster in terms of performance. From this study, we find that existing benchmarks lack the diversity to properly benchmark machine learning algorithms, and there are several gaps in benchmarking problems that still need to be considered. This work represents another important step towards understanding the limitations of popular benchmarking suites and developing a resource that connects existing benchmarking standards to more diverse and efficient standards in the future.",
"title": ""
},
{
"docid": "9697137a72f41fb4fb841e4e1b41be62",
"text": "Cast shadows are an informative cue to the shape of objects. They are particularly valuable for discovering object’s concavities which are not available from other cues such as occluding boundaries. We propose a new method for recovering shape from shadows which we call shadow carving. Given a conservative estimate of the volume occupied by an object, it is possible to identify and carve away regions of this volume that are inconsistent with the observed pattern of shadows. We prove a theorem that guarantees that when these regions are carved away from the shape, the shape still remains conservative. Shadow carving overcomes limitations of previous studies on shape from shadows because it is robust with respect to errors in shadows detection and it allows the reconstruction of objects in the round, rather than just bas-reliefs. We propose a reconstruction system to recover shape from silhouettes and shadow carving. The silhouettes are used to reconstruct the initial conservative estimate of the object’s shape and shadow carving is used to carve out the concavities. We have simulated our reconstruction system with a commercial rendering package to explore the design parameters and assess the accuracy of the reconstruction. We have also implemented our reconstruction scheme in a table-top system and present the results of scanning of several objects.",
"title": ""
},
{
"docid": "7f53f16a4806d8179725cd9aa4537800",
"text": "Corpus linguistics is one of the fastest-growing methodologies in contemporary linguistics. In a conversational format, this article answers a few questions that corpus linguists regularly face from linguists who have not used corpus-based methods so far. It discusses some of the central assumptions (‘formal distributional differences reflect functional differences’), notions (corpora, representativity and balancedness, markup and annotation), and methods of corpus linguistics (frequency lists, concordances, collocations), and discusses a few ways in which the discipline still needs to mature. At a recent LSA meeting ... [with an obvious bow to Frederick Newmeyer] Question: So, I hear you’re a corpus linguist. Interesting, I get to see more and more abstracts and papers and even job ads where experience with corpus-based methods are mentioned, but I actually know only very little about this area. So, what’s this all about? Answer: Yes, it’s true, it’s really an approach that’s gaining more and more prominence in the field. In an editorial of the flagship journal of the discipline, Joseph (2004:382) actually wrote ‘we seem to be witnessing as well a shift in the way some linguists find and utilize data – many papers now use corpora as their primary data, and many use internet data’. Question: My impression exactly. Now, you say ‘approach’, but that’s something I’ve never really understood. Corpus linguistics – is that a theory or model or a method or what? Answer: Good question and, as usual, people differ in their opinions. One well-known corpus linguist, for example, considers corpus linguistics – he calls it computer corpus linguistics – a ‘new philosophical approach [...]’ Leech (1992:106). Many others, including myself, consider it a method(ology), no more, but also no less (cf. McEnery et al. 2006:7f ). However, I don’t think this difference would result in many practical differences. Taylor (2008) discusses this issue in more detail, and for an amazingly comprehensive overview of how huge and diverse the field has become, cf. Lüdeling and Kytö (2008, 2009). Question: Hm ... But if you think corpus linguistics is a methodology, .... Well, let me ask you this: usually, linguists try to interpret the data they investigate against the background of some theory. Generative grammarians interpret their acceptability judgments within Government and Binding Theory or the Minimalist Program; some psycholinguists interpret their reaction time data within, for example, a connectionist interactive Language and Linguistics Compass 3 (2009): 1–17, 10.1111/j.1749-818x.2009.00149.x a 2009 The Author Journal Compilation a 2009 Blackwell Publishing Ltd activation model – now if corpus linguistics is only a methodology, then what is the theory within which you interpret your findings? Answer: Again as usual, there’s no simple answer to this question; it depends .... There are different perspectives one can take. One is that many corpus linguists would perhaps even say that for them, linguistic theory is not of the same prime importance as it is in, for example, generative approaches. Correspondingly, I think it’s fair to say that a large body of corpus-linguistic work has a rather descriptive or applied focus and does actually not involve much linguistic theory. Another one is that corpus linguistic methods are a method just as acceptability judgments, experimental data, etc. and that linguists of every theoretical persuasion can use corpus data. 
If a linguist investigates how lexical items become more and more used as grammatical markers in a corpus, then the results are descriptive and ⁄ or most likely interpreted within some form of grammaticalization theory. If a linguist studies how German second language learners of English acquire the formation of complex clauses, then he will either just describe what he finds or interpret it within some theory of second language acquisition and so on... . There’s one other, more general way to look at it, though. I can of course not speak for all corpus linguists, but I myself think that a particular kind of linguistic theory is actually particularly compatible with corpus-linguistic methods. These are usage-based cognitive-linguistic theories, and they’re compatible with corpus linguistics in several ways. (You’ll find some discussion in Schönefeld 1999.) First, the units of language assumed in cognitive linguistics and corpus linguistics are very similar: what is a unit in probably most versions of cognitive linguistics or construction grammar is a symbolic unit or a construction, which is an element that covers morphemes, words, etc. Such symbolic units or constructions are often defined broadly enough to match nearly all of the relevant corpus-linguistic notions (cf. Gries 2008a): collocations, colligations, phraseologisms, .... Lastly, corpus-linguistic analyses are always based on the evaluation of some kind of frequencies, and frequency as well as its supposed mental correlate of cognitive entrenchment is one of several central key explanatory mechanisms within cognitively motivated approaches (cf., e.g. Bybee and Hopper 1997; Barlow and Kemmer 2000; Ellis 2002a,b; Goldberg 2006). Question: Wait a second – ‘corpus-linguistic analyses are always based on the evaluation of some kind of frequencies?’ What does that mean? I mean, most linguistic research I know is not about frequencies at all – if corpus linguistics is all about frequencies, then what does corpus linguistics have to contribute? Answer: Well, many corpus linguists would probably not immediately agree to my statement, but I think it’s true anyway. There are two things to be clarified here. First, frequency of what? The answer is, there are no meanings, no functions, no concepts in corpora – corpora are (usually text) files and all you can get out of such files is distributional (or quantitative ⁄ statistical) information: ) frequencies of occurrence of linguistic elements, i.e. how often morphemes, words, grammatical patterns etc. occur in (parts of) a corpus, etc.; this information is usually represented in so-called frequency lists; ) frequencies of co-occurrence of these elements, i.e. how often morphemes occur with particular words, how often particular words occur in a certain grammatical 2 Stefan Th. Gries a 2009 The Author Language and Linguistics Compass 3 (2009): 1–17, 10.1111/j.1749-818x.2009.00149.x Journal Compilation a 2009 Blackwell Publishing Ltd construction, etc.; this information is mostly shown in so-called concordances in which all occurrences of, say, the word searched for are shown in their respective contexts. Figure 1 is an example. As a linguist, you don’t just want to talk about frequencies or distributional information, which is why corpus linguists must make a particular fundamental assumption or a conceptual leap, from frequencies to the things linguists are interested in, but frequencies is where it all starts. Second, what kind of frequency? 
The answer is that the notion frequency doesn’t presuppose that the relevant linguistic phenomenon occurs in a corpus 100 or 1000 times – the notion of frequency also includes phenomena that occur only once or not at all. For example, there are statistical methods and models out there that can handle non-occurrence or estimate frequencies of unseen items. Thus, corpus linguistics is concerned with whether ) something (an individual element or the co-occurrence of more than one individual element) is attested in corpora; i.e. whether the observed frequency (of occurrence or co-occurrence) is 0 or larger; ) something is attested in corpora more often than something else; i.e. whether an observed frequency is larger than the observed frequency of something else; ) something is observed more or less often than you would expect by chance [this is a more profound issue than it may seem at first; Stefanowitsch (2006) discusses this in more detail]. This also implies that statistical methods can play a large part in corpus linguistics, but this is one area where I think the discipline must still mature or evolve. Fig. 1. A concordance output from AntConc 3.2.2w. What is Corpus Linguistics? 3 a 2009 The Author Language and Linguistics Compass 3 (2009): 1–17, 10.1111/j.1749-818x.2009.00149.x Journal Compilation a 2009 Blackwell Publishing Ltd Question: What do you mean? Answer: Well, this is certainly a matter of debate, but I think that a field that developed in part out of a dissatisfaction concerning methods and data in linguistics ought to be very careful as far as its own methods and data are concerned. It is probably fair to say that many linguists turned to corpus data because they felt there must be more to data collection than researchers intuiting acceptability judgments about what one can say and what one cannot; cf. Labov (1975) and, say, Wasow and Arnold (2005:1485) for discussion and exemplification of the mismatch between the reliability of judgment data by prominent linguists of that time and the importance that was placed on them, as well as McEnery and Wilson (2001: Ch. 1), Sampson (2001: Chs 2, 8, and 10), and the special issue of Corpus Linguistics and Linguistic Theory (CLLT ) 5.1 (2008) on corpus linguistic positions regarding many of Chomsky’s claims in general and the method of acceptability judgments in particular. However, since corpus data only provide distributional information in the sense mentioned earlier, this also means that corpus data must be evaluated with tools that have been designed to deal with distributional information and the discipline that provides such tools is statistics. And this is actually completely natural: psychologists and psycholinguists undergo comprehensive training in experimental methods and the statistical tools relevant to these methods so it’s only fair that corpus linguists do the same in their domain. After all, it would be kind of a double standard to on the one hand bash many theoretical li",
"title": ""
},
{
"docid": "be8c7050c87ad0344f8ddd10c7832bb4",
"text": "A novel method of map matching using the Global Positioning System (GPS) has been developed for civilian use, which uses digital mapping data to infer the <100 metres systematic position errors which result largely from “selective availability” (S/A) imposed by the U.S. military. The system tracks a vehicle on all possible roads (road centre-lines) in a computed error region, then uses a method of rapidly detecting inappropriate road centre-lines from the set of all those possible. This is called the Road Reduction Filter (RRF) algorithm. Point positioning is computed using C/A code pseudorange measurements direct from a GPS receiver. The least squares estimation is performed in the software developed for the experiment described in this paper. Virtual differential GPS (VDGPS) corrections are computed and used from a vehicle’s previous positions, thus providing an autonomous alternative to DGPS for in-car navigation and fleet management. Height aiding is used to augment the solution and reduce the number of satellites required for a position solution. Ordnance Survey (OS) digital map data was used for the experiment, i.e. OSCAR 1m resolution road centre-line geometry and Land Form PANORAMA 1:50,000, 50m-grid digital terrain model (DTM). Testing of the algorithm is reported and results are analysed. Vehicle positions provided by RRF are compared with the “true” position determined using high precision (cm) GPS carrier phase techniques. It is shown that height aiding using a DTM and the RRF significantly improve the accuracy of position provided by inexpensive single frequency GPS receivers. INTRODUCTION The accurate location of a vehicle on a highway network model is fundamental to any in-car-navigation system, personal navigation assistant, fleet management system, National Mayday System (Carstensen, 1998) and many other applications that provide a current vehicle location, a digital map and perhaps directions or route guidance. A great",
"title": ""
},
{
"docid": "27ad33ce5672964e11e499d0c5b3fe0f",
"text": "In this research, we investigated whether a learning process has unique information searching characteristics. The results of this research show that information searching is a learning process with unique searching characteristics specific to particular learning levels. In a laboratory experiment, we studied the searching characteristics of 72 participants engaged in 426 searching tasks. We classified the searching tasks according to Anderson and Krathwohl’s taxonomy of the cognitive learning domain. Research results indicate that applying and analyzing, the middle two of the six categories, generally take the most searching effort in terms of queries per session, topics searched per session, and total time searching. Interestingly, the lowest two learning categories, remembering and understanding, exhibit searching characteristics similar to the highest order learning categories of evaluating and creating. Our results suggest the view of Web searchers having simple information needs may be incorrect. Instead, we discovered that users applied simple searching expressions to support their higher-level information needs. It appears that searchers rely primarily on their internal knowledge for evaluating and creating information needs, using search primarily for fact checking and verification. Overall, results indicate that a learning theory may better describe the information searching process than more commonly used paradigms of decision making or problem solving. The learning style of the searcher does have some moderating effect on exhibited searching characteristics. The implication of this research is that rather than solely addressing a searcher’s expressed information need, searching systems can also address the underlying learning need of the user. 2009 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "5d2ab1a4f28aa9286a3ef19c2c822af1",
"text": "Stray current control is essential in direct current (DC) mass transit systems where the rail insulation is not of sufficient quality to prevent a corrosion risk to the rails, supporting and third-party infrastructure. This paper details the principles behind the need for stray current control and examines the relationship between the stray current collection system design and its efficiency. The use of floating return rails is shown to provide a reduction in stray current level in comparison to a grounded system, significantly reducing the corrosion level of the traction system running rails. An increase in conductivity of the stray current collection system or a reduction in the soil resistivity surrounding the traction system is shown to decrease the corrosion risk to the supporting and third party infrastructure.",
"title": ""
},
{
"docid": "ccd5f02b97643b3c724608a4e4a67fdb",
"text": "Modular robotic systems that integrate distally with commercially available endoscopic equipment have the potential to improve the standard-of-care in therapeutic endoscopy by granting clinicians with capabilities not present in commercial tools, such as precision dexterity and feedback sensing. With the desire to integrate both sensing and actuation distally for closed-loop position control in fully deployable, endoscope-based robotic modules, commercial sensor and actuator options that acquiesce to the strict form-factor requirements are sparse or nonexistent. Herein, we describe a proprioceptive angle sensor for potential closed-loop position control applications in distal robotic modules. Fabricated monolithically using printed-circuit MEMS, the sensor employs a kinematic linkage and the principle of light intensity modulation to sense the angle of articulation with a high degree of fidelity. Onboard temperature and environmental irradiance measurements, coupled with linear regression techniques, provide robust angle measurements that are insensitive to environmental disturbances. The sensor is capable of measuring $\\pm$45 degrees of articulation with an RMS error of 0.98 degrees. An ex vivo demonstration shows that the sensor can give real-time proprioceptive feedback when coupled with an actuator module, opening up the possibility of fully distal closed-loop control.",
"title": ""
},
{
"docid": "261e71f72d0901d732dae974afdf2559",
"text": "The authors developed and validated a measure of teachers’ motivation toward specific work tasks: The Work Tasks Motivation Scale for Teachers (WTMST). The WTMST is designed to assess five motivational constructs toward six work tasks (e.g., class preparation, teaching). The authors conducted a preliminary (n = 42) and a main study among elementary and high school teachers (n = 609) to develop and validate the scale. Overall, results from the main study reveal that the WTMST is composed of 30 reliable and valid factors reflecting five types of motivation among six work tasks carried out by teachers. Results based on an extension of the multitrait–multimethod approach provide very good support for assessing teachers’ motivation toward various work tasks. Support for the invariance of the WTMST over gender and teaching levels was also obtained. Results are discussed in light of self-determination theory and the multitask perspective.",
"title": ""
},
{
"docid": "64dc61e853f41654dba602c7362546b5",
"text": "This paper introduces our work on the communication stack of wireless sensor networks. We present the IPv6 approach for wireless sensor networks called 6LoWPAN in its IETF charter. We then compare the different implementations of 6LoWPAN subsets for several sensor nodes platforms. We present our approach for the 6LoWPAN implementation which aims to preserve the advantages of modularity while keeping a small memory footprint and a good efficiency.",
"title": ""
},
{
"docid": "6a2b9761b745f4ece1bba3fab9f5d8b1",
"text": "Driven by the evolution of consumer-to-consumer (C2C) online marketplaces, we examine the role of communication tools (i.e., an instant messenger, internal message box and a feedback system), in facilitating dyadic online transactions in the Chinese C2C marketplace. Integrating the Chinese concept of guanxi with theories of social translucence and social presence, we introduce a structural model that explains how rich communication tools influence a website’s interactivity and presence, subsequently building trust and guanxi among buyers and sellers, and ultimately predicting buyers’ repurchase intentions. The data collected from 185 buyers in TaoBao, China’s leading C2C online marketplace, strongly support the proposed model. We believe that this research is the first formal study to show evidence of guanxi in online C2C marketplaces, and it is attributed to the role of communication tools to enhance a website’s interactivity and presence.",
"title": ""
},
{
"docid": "d52a178526eac0438757c20c5a91e51e",
"text": "Recent convolutional neural networks, especially end-to-end disparity estimation models, achieve remarkable performance on stereo matching task. However, existed methods, even with the complicated cascade structure, may fail in the regions of non-textures, boundaries and tiny details. Focus on these problems, we propose a multi-task network EdgeStereo that is composed of a backbone disparity network and an edge sub-network. Given a binocular image pair, our model enables end-to-end prediction of both disparity map and edge map. Basically, we design a context pyramid to encode multi-scale context information in disparity branch, followed by a compact residual pyramid for cascaded refinement. To further preserve subtle details, our EdgeStereo model integrates edge cues by feature embedding and edge-aware smoothness loss regularization. Comparative results demonstrates that stereo matching and edge detection can help each other in the unified model. Furthermore, our method achieves state-of-art performance on both KITTI Stereo and Scene Flow benchmarks, which proves the effectiveness of our design.",
"title": ""
},
{
"docid": "16709c54458167634803100605a4f4a5",
"text": "Automatic Web page segmentation is the basis to adaptive Web browsing on mobile devices. It breaks a large page into smaller blocks, in which contents with coherent semantics are keeping together. Then, various adaptations like single column and thumbnail view can be developed. However, page segmentation remains a challenging task, and its poor result directly yields a frustrating user experience. As human usually understand the Web page well, in this paper, we start from Gestalt theory, a psychological theory that can explain human's visual perceptive processes. Four basic laws, proximity, similarity, closure, and simplicity, are drawn from Gestalt theory and then implemented in a program to simulate how human understand the layout of Web pages. The experiments show that this method outperforms existing methods.",
"title": ""
}
] |
scidocsrr
|
e0f6fd7e65776ae72bd68fa542266bef
|
A Hybrid Approach for Music Recommendation
|
[
{
"docid": "062c970a14ac0715ccf96cee464a4fec",
"text": "A goal of statistical language modeling is to learn the joint probability function of sequences of words in a language. This is intrinsically difficult because of the curse of dimensionality: a word sequence on which the model will be tested is likely to be different from all the word sequences seen during training. Traditional but very successful approaches based on n-grams obtain generalization by concatenating very short overlapping sequences seen in the training set. We propose to fight the curse of dimensionality by learning a distributed representation for words which allows each training sentence to inform the model about an exponential number of semantically neighboring sentences. The model learns simultaneously (1) a distributed representation for each word along with (2) the probability function for word sequences, expressed in terms of these representations. Generalization is obtained because a sequence of words that has never been seen before gets high probability if it is made of words that are similar (in the sense of having a nearby representation) to words forming an already seen sentence. Training such large models (with millions of parameters) within a reasonable time is itself a significant challenge. We report on experiments using neural networks for the probability function, showing on two text corpora that the proposed approach significantly improves on state-of-the-art n-gram models, and that the proposed approach allows to take advantage of longer contexts.",
"title": ""
}
] |
[
{
"docid": "1736f9feebc2b6568bbc617a210a0494",
"text": "Power and bandwidth requirements have become more stringent for DRAMs in recent years. This is largely because mobile devices (such as smart phones) are more intensively relying on the use of graphics. Current DDR memory I/Os operate at 5Gb/s with a power efficiency of 17.4mW/Gb/s (i.e., 17.4pJ/b)[1], and graphic DRAM I/Os operate at 7Gb/s/pin [3] with a power efficiency worse than that of DDR. High-speed serial links [5], with a better power efficiency of ∼1mW/Gb/s, would be favored for mobile memory I/O interface. However, serial links typically require long initialization time (∼1000 clock cycles), and do not meet mobile DRAM I/O requirements for fast switching between active, standby, self-refresh and power-down operation modes [4]. Also, traditional baseband-only (or BB-only) signaling tends to consume power super-linearly [4] for extended bandwidth due to the need of power hungry pre-emphasis, and equalization circuits.",
"title": ""
},
{
"docid": "8c043576bd1a73b783890cdba3a5e544",
"text": "We present a novel approach to collaborative prediction, using low-norm instead of low-rank factorizations. The approach is inspired by, and has strong connections to, large-margin linear discrimination. We show how to learn low-norm factorizations by solving a semi-definite program, and discuss generalization error bounds for them.",
"title": ""
},
{
"docid": "93043b729dc5f46860847e1ffb6a7b0c",
"text": "This experiment investigated the effects of three corrective feedback methods, using different combinations of correction, or error cues and positive feedback for learning two badminton skills with different difficulty (forehand clear - low difficulty, backhand clear - high difficulty). Outcome and self-confidence scores were used as dependent variables. The 48 participants were randomly assigned into four groups. Group A received correction cues and positive feedback. Group B received cues on errors of execution. Group C received positive feedback, correction cues and error cues. Group D was the control group. A pre, post and a retention test was conducted. A three way analysis of variance ANOVA (4 groups X 2 task difficulty X 3 measures) with repeated measures on the last factor revealed significant interactions for each depended variable. All the corrective feedback methods groups, increased their outcome scores over time for the easy skill, but only groups A and C for the difficult skill. Groups A and B had significantly better outcome scores than group C and the control group for the easy skill on the retention test. However, for the difficult skill, group C was better than groups A, B and D. The self confidence scores of groups A and C improved over time for the easy skill but not for group B and D. Again, for the difficult skill, only group C improved over time. Finally a regression analysis depicted that the improvement in performance predicted a proportion of the improvement in self confidence for both the easy and the difficult skill. It was concluded that when young athletes are taught skills of different difficulty, different type of instruction, might be more appropriate in order to improve outcome and self confidence. A more integrated approach on teaching will assist coaches or physical education teachers to be more efficient and effective. Key pointsThe type of the skill is a critical factor in determining the effectiveness of the feedback types.Different instructional methods of corrective feedback could have beneficial effects in the outcome and self-confidence of young athletesInstructions focusing on the correct cues or errors increase performance of easy skills.Positive feedback or correction cues increase self-confidence of easy skills but only the combination of error and correction cues increase self confidence and outcome scores of difficult skills.",
"title": ""
},
{
"docid": "ce6e5532c49b02988588f2ac39724558",
"text": "hlany modern computing environments involve dynamic peer groups. Distributed Simdation, mtiti-user games, conferencing and replicated servers are just a few examples. Given the openness of today’s networks, communication among group members must be secure and, at the same time, efficient. This paper studies the problem of authenticated key agreement. in dynamic peer groups with the emphasis on efficient and provably secure key authentication, key confirmation and integrity. It begins by considering 2-party authenticateed key agreement and extends the restits to Group Dfi*Hehart key agreement. In the process, some new security properties (unique to groups) are discussed.",
"title": ""
},
{
"docid": "4337f8c11a71533d38897095e5e6847a",
"text": "A number of recent works have proposed attention models for Visual Question Answering (VQA) that generate spatial maps highlighting image regions relevant to answering the question. In this paper, we argue that in addition to modeling “where to look” or visual attention, it is equally important to model “what words to listen to” or question attention. We present a novel co-attention model for VQA that jointly reasons about image and question attention. In addition, our model reasons about the question (and consequently the image via the co-attention mechanism) in a hierarchical fashion via a novel 1-dimensional convolution neural networks (CNN). Our model improves the state-of-the-art on the VQA dataset from 60.3% to 60.5%, and from 61.6% to 63.3% on the COCO-QA dataset. By using ResNet, the performance is further improved to 62.1% for VQA and 65.4% for COCO-QA.1. 1 Introduction Visual Question Answering (VQA) [2, 7, 16, 17, 29] has emerged as a prominent multi-discipline research problem in both academia and industry. To correctly answer visual questions about an image, the machine needs to understand both the image and question. Recently, visual attention based models [20, 23–25] have been explored for VQA, where the attention mechanism typically produces a spatial map highlighting image regions relevant to answering the question. So far, all attention models for VQA in literature have focused on the problem of identifying “where to look” or visual attention. In this paper, we argue that the problem of identifying “which words to listen to” or question attention is equally important. Consider the questions “how many horses are in this image?” and “how many horses can you see in this image?\". They have the same meaning, essentially captured by the first three words. A machine that attends to the first three words would arguably be more robust to linguistic variations irrelevant to the meaning and answer of the question. Motivated by this observation, in addition to reasoning about visual attention, we also address the problem of question attention. Specifically, we present a novel multi-modal attention model for VQA with the following two unique features: Co-Attention: We propose a novel mechanism that jointly reasons about visual attention and question attention, which we refer to as co-attention. Unlike previous works, which only focus on visual attention, our model has a natural symmetry between the image and question, in the sense that the image representation is used to guide the question attention and the question representation(s) are used to guide image attention. Question Hierarchy: We build a hierarchical architecture that co-attends to the image and question at three levels: (a) word level, (b) phrase level and (c) question level. At the word level, we embed the words to a vector space through an embedding matrix. At the phrase level, 1-dimensional convolution neural networks are used to capture the information contained in unigrams, bigrams and trigrams. The source code can be downloaded from https://github.com/jiasenlu/HieCoAttenVQA 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. ar X iv :1 60 6. 00 06 1v 3 [ cs .C V ] 2 6 O ct 2 01 6 Ques%on:\t\r What\t\r color\t\r on\t\r the stop\t\r light\t\r is\t\r lit\t\r up\t\r \t\r ? ...\t\r ... color\t\r stop\t\r light\t\r lit co-‐a7en%on color\t\r ...\t\r stop\t\r \t\r light\t\r \t\r ... What color\t\r ... the stop light light\t\r \t\r ... 
What color What\t\r color\t\r on\t\r the\t\r stop\t\r light\t\r is\t\r lit\t\r up ...\t\r ... the\t\r stop\t\r light ...\t\r ... stop Image Answer:\t\r green Figure 1: Flowchart of our proposed hierarchical co-attention model. Given a question, we extract its word level, phrase level and question level embeddings. At each level, we apply co-attention on both the image and question. The final answer prediction is based on all the co-attended image and question features. Specifically, we convolve word representations with temporal filters of varying support, and then combine the various n-gram responses by pooling them into a single phrase level representation. At the question level, we use recurrent neural networks to encode the entire question. For each level of the question representation in this hierarchy, we construct joint question and image co-attention maps, which are then combined recursively to ultimately predict a distribution over the answers. Overall, the main contributions of our work are: • We propose a novel co-attention mechanism for VQA that jointly performs question-guided visual attention and image-guided question attention. We explore this mechanism with two strategies, parallel and alternating co-attention, which are described in Sec. 3.3; • We propose a hierarchical architecture to represent the question, and consequently construct image-question co-attention maps at 3 different levels: word level, phrase level and question level. These co-attended features are then recursively combined from word level to question level for the final answer prediction; • At the phrase level, we propose a novel convolution-pooling strategy to adaptively select the phrase sizes whose representations are passed to the question level representation; • Finally, we evaluate our proposed model on two large datasets, VQA [2] and COCO-QA [17]. We also perform ablation studies to quantify the roles of different components in our model.",
"title": ""
},
{
"docid": "2b0aadf1f4000a630d96f85880af4a03",
"text": "The visualization community has developed to date many intuitions and understandings of how to judge the quality of views in visualizing data. The computation of a visualization’s quality and usefulness ranges from measuring clutter and overlap, up to the existence and perception of specific (visual) patterns. This survey attempts to report, categorize and unify the diverse understandings and aims to establish a common vocabulary that will enable a wide audience to understand their differences and subtleties. For this purpose, we present a commonly applicable quality metric formalization that should detail and relate all constituting parts of a quality metric. We organize our corpus of reviewed research papers along the data types established in the information visualization community: multiand high-dimensional, relational, sequential, geospatial and text data. For each data type, we select the visualization subdomains in which quality metrics are an active research field and report their findings, reason on the underlying concepts, describe goals and outline the constraints and requirements. One central goal of this survey is to provide guidance on future research opportunities for the field and outline how different visualization communities could benefit from each other by applying or transferring knowledge to their respective subdomain. Additionally, we aim to motivate the visualization community to compare computed measures to the perception of humans.",
"title": ""
},
{
"docid": "002d6e5a13bc605746b4c8a6b9ecd498",
"text": "The properties of the so-called time dependent dielectric breakdown (TDDB) of silicon dioxide-based gate dielectric for microelectronics technology have been investigated and reviewed. Experimental data covering a wide range of oxide thickness, stress voltage, temperature, and for the two bias polarities were gathered using structures with a wide range of gate oxide areas, and over very long stress times. Thickness dependence of oxide breakdown was shown to be in excellent agreement with statistical models founded in the percolation theory which explain the drastic reduction of the time-to-breakdown with decreasing oxide thickness. The voltage dependence of time-to-breakdown was found to follow a power-law behavior rather than an exponential law as commonly assumed. Our investigation on the inter-relationship between voltage and temperature dependencies of oxide breakdown reveals that a strong temperature activation with non-Arrhenius behavior is consistent with the power-law voltage dependence. The power-law voltage dependence in combination with strong temperature activation provides the most important reliability relief in compensation for the strong decrease of time-to-breakdown resulting from the reduction of the oxide thickness. Using the maximum energy of injected electrons at the anode interface as breakdown variable, we have resolved the polarity gap of timeand charge-to-breakdown (TBD and QBD), confirming that the fluency and the electron energy at anode interface are the fundamental quantities controlling oxide breakdown. Combining this large database with a recently proposed cell-based analytical version of the percolation model, we extract the defect generation efficiency responsible for breakdown. Following a review of different breakdown mechanisms and models, we discuss how the release of hydrogen through the coupling between vibrational and electronic degrees of freedom can explain the power-law dependence of defect generation efficiency. On the basis of these results, a unified and global picture of oxide breakdown is constructed and the resulting model is applied to project reliability limits. In this regard, it is concluded that SiO2-based dielectrics can provide reliable gate dielectric, even to a thickness of 1 nm, and that CMOS scaling may well be viable for the 50 nm technology node. 2005 Elsevier Ltd. All rights reserved. 0026-2714/$ see front matter 2005 Elsevier Ltd. All rights reserv doi:10.1016/j.microrel.2005.04.004 * Corresponding author. Tel.: +1 802 769 1217; fax: +1 802 769 1220. E-mail address: [email protected] (E.Y. Wu).",
"title": ""
},
{
"docid": "f00a35dbc463b7d46bab88fcdb8df2c9",
"text": "The quantum CP model is in the confining (or unbroken) phase with a full mass gap in an infinite space, while it is in the Higgs (broken or deconfinement) phase accompanied with Nambu-Goldstone modes in a finite space such as a ring or finite interval smaller than a certain critical size. We find a new self-consistent exact solution describing a soliton in the Higgs phase of the CP model in the large-N limit on a ring. We call it a confining soliton. We show that all eigenmodes have real and positive energy and thus it is stable.",
"title": ""
},
{
"docid": "026a0651177ee631a80aaa7c63a1c32f",
"text": "This paper is an introduction to natural language interfaces to databases (Nlidbs). A brief overview of the history of Nlidbs is rst given. Some advantages and disadvantages of Nlidbs are then discussed, comparing Nlidbs to formal query languages, form-based interfaces, and graphical interfaces. An introduction to some of the linguistic problems Nlidbs have to confront follows, for the beneet of readers less familiar with computational linguistics. The discussion then moves on to Nlidb architectures, porta-bility issues, restricted natural language input systems (including menu-based Nlidbs), and Nlidbs with reasoning capabilities. Some less explored areas of Nlidb research are then presented, namely database updates, meta-knowledge questions, temporal questions, and multi-modal Nlidbs. The paper ends with reeections on the current state of the art.",
"title": ""
},
{
"docid": "2ae53bfe80e74c27ea9ed5e5efadfbe7",
"text": "The use of multiple features has been shown to be an effective strategy for visual tracking because of their complementary contributions to appearance modeling. The key problem is how to learn a fused representation from multiple features for appearance modeling. Different features extracted from the same object should share some commonalities in their representations while each feature should also have some feature-specific representation patterns which reflect its complementarity in appearance modeling. Different from existing multi-feature sparse trackers which only consider the commonalities among the sparsity patterns of multiple features, this paper proposes a novel multiple sparse representation framework for visual tracking which jointly exploits the shared and feature-specific properties of different features by decomposing multiple sparsity patterns. Moreover, we introduce a novel online multiple metric learning to efficiently and adaptively incorporate the appearance proximity constraint, which ensures that the learned commonalities of multiple features are more representative. Experimental results on tracking benchmark videos and other challenging videos demonstrate the effectiveness of the proposed tracker.",
"title": ""
},
{
"docid": "f9bc2b91d31b3aa8ccbdfbfdae363fd8",
"text": "Motor control is the study of how organisms make accurate goal-directed movements. Here we consider two problems that the motor system must solve in order to achieve such control. The first problem is that sensory feedback is noisy and delayed, which can make movements inaccurate and unstable. The second problem is that the relationship between a motor command and the movement it produces is variable, as the body and the environment can both change. A solution is to build adaptive internal models of the body and the world. The predictions of these internal models, called forward models because they transform motor commands into sensory consequences, can be used to both produce a lifetime of calibrated movements, and to improve the ability of the sensory system to estimate the state of the body and the world around it. Forward models are only useful if they produce unbiased predictions. Evidence shows that forward models remain calibrated through motor adaptation: learning driven by sensory prediction errors.",
"title": ""
},
{
"docid": "ad7862047259112ac01bfa68950cf95b",
"text": "In deep learning, depth, as well as nonlinearity, create non-convex loss surfaces. Then, does depth alone create bad local minima? In this paper, we prove that without nonlinearity, depth alone does not create bad local minima, although it induces non-convex loss surface. Using this insight, we greatly simplify a recently proposed proof to show that all of the local minima of feedforward deep linear neural networks are global minima. Our theoretical results generalize previous results with fewer assumptions, and this analysis provides a method to show similar results beyond square loss in deep linear models.",
"title": ""
},
{
"docid": "885542ef60e8c2dbcfe73d7158244f82",
"text": "Three decades of active research on the teaching of introductory programming has had limited effect on classroom practice. Although relevant research exists across several disciplines including education and cognitive science, disciplinary differences have made this material inaccessible to many computing educators. Furthermore, computer science instructors have not had access to a comprehensive survey of research in this area. This paper collects and classifies this literature, identifies important work and mediates it to computing educators and professional bodies.\n We identify research that gives well-supported advice to computing academics teaching introductory programming. Limitations and areas of incomplete coverage of existing research efforts are also identified. The analysis applies publication and research quality metrics developed by a previous ITiCSE working group [74].",
"title": ""
},
{
"docid": "b4b4af6eeb22c23475047a2f3c36cba1",
"text": "Workflow systems are gaining importance as an infrastructure for automating inter-organizational interactions, such as those in Electronic Commerce. Execution of inter-organiz-ational workflows may raise a number of security issues including those related to conflict-of-interest among competing organizations. Moreover, in such an environment, a centralized Workflow Management System is not desirable because: (i) it can be a performance bottleneck, and (ii) the systems are inherently distributed, heterogeneous and autonomous in nature. In this paper, we propose an approach to realize decentralized workflow execution, in which the workflow is divided into partitions called self-describing workflows, and handled by a light weight workflow management component, called workflow stub, located at each organizational agent. We argue that placing the task execution agents that belong to the same conflict-of-interest class in one self-describing workflow may lead to unfair, and in some cases, undesirable results, akin to being on the wrong side of the Chinese wall. We propose a Chinese wall security model for the decentralized workflow environment to resolve such problems, and a restrictive partitioning solution to enforce the proposed model.",
"title": ""
},
{
"docid": "c19f986d747f4d6a3448607f76d961ab",
"text": "We propose Stochastic Neural Architecture Search (SNAS), an economical endto-end solution to Neural Architecture Search (NAS) that trains neural operation parameters and architecture distribution parameters in same round of backpropagation, while maintaining the completeness and differentiability of the NAS pipeline. In this work, NAS is reformulated as an optimization problem on parameters of a joint distribution for the search space in a cell. To leverage the gradient information in generic differentiable loss for architecture search, a novel search gradient is proposed. We prove that this search gradient optimizes the same objective as reinforcement-learning-based NAS, but assigns credits to structural decisions more efficiently. This credit assignment is further augmented with locally decomposable reward to enforce a resource-efficient constraint. In experiments on CIFAR-10, SNAS takes fewer epochs to find a cell architecture with state-of-theart accuracy than non-differentiable evolution-based and reinforcement-learningbased NAS, which is also transferable to ImageNet. It is also shown that child networks of SNAS can maintain the validation accuracy in searching, with which attention-based NAS requires parameter retraining to compete, exhibiting potentials to stride towards efficient NAS on big datasets.",
"title": ""
},
{
"docid": "9fdf625f46c227c819cec1e4c00160b1",
"text": "Employment of ground-based positioning systems has been consistently growing over the past decades due to the growing number of applications that require location information where the conventional satellite-based systems have limitations. Such systems have been successfully adopted in the context of wireless emergency services, tactical military operations, and various other applications offering location-based services. In current and previous generation of cellular systems, i.e., 3G, 4G, and LTE, the base stations, which have known locations, have been assumed to be stationary and fixed. However, with the possibility of having mobile relays in 5G networks, there is a demand for novel algorithms that address the challenges that did not exist in the previous generations of localization systems. This paper includes a review of various fundamental techniques, current trends, and state-of-the-art systems and algorithms employed in wireless position estimation using moving receivers. Subsequently, performance criteria comparisons are given for the aforementioned techniques and systems. Moreover, a discussion addressing potential research directions when dealing with moving receivers, e.g., receiver's movement pattern for efficient and accurate localization, non-line-of-sight problem, sensor fusion, and cooperative localization, is briefly given.",
"title": ""
},
{
"docid": "f1eb96dd2109aad21ac1bccfe8dcd012",
"text": "In imitation learning, an agent learns how to behave in an environment with an unknown cost function by mimicking expert demonstrations. Existing imitation learning algorithms typically involve solving a sequence of planning or reinforcement learning problems. Such algorithms are therefore not directly applicable to large, high-dimensional environments, and their performance can significantly degrade if the planning problems are not solved to optimality. Under the apprenticeship learning formalism, we develop alternative model-free algorithms for finding a parameterized stochastic policy that performs at least as well as an expert policy on an unknown cost function, based on sample trajectories from the expert. Our approach, based on policy gradients, scales to large continuous environments with guaranteed convergence to local minima.",
"title": ""
},
{
"docid": "f57fddbff1acaf3c4c58f269b6221cf7",
"text": "PURPOSE OF REVIEW\nCry-fuss problems are among the most common clinical presentations in the first few months of life and are associated with adverse outcomes for some mothers and babies. Cry-fuss behaviour emerges out of a complex interplay of cultural, psychosocial, environmental and biologic factors, with organic disturbance implicated in only 5% of cases. A simplistic approach can have unintended consequences. This article reviews recent evidence in order to update clinical management.\n\n\nRECENT FINDINGS\nNew research is considered in the domains of organic disturbance, feed management, maternal health, sleep management, and sensorimotor integration. This transdisciplinary approach takes into account the variable neurodevelopmental needs of healthy infants, the effects of feeding management on the highly plastic neonatal brain, and the bi-directional brain-gut-enteric microbiota axis. An individually tailored, mother-centred and family-centred approach is recommended.\n\n\nSUMMARY\nThe family of the crying baby requires early intervention to assess for and manage potentially treatable problems. Cross-disciplinary collaboration is often necessary if outcomes are to be optimized.",
"title": ""
},
{
"docid": "05540e05370b632f8b8cd165ae7d1d29",
"text": "We describe FreeCam a system capable of generating live free-viewpoint video by simulating the output of a virtual camera moving through a dynamic scene. The FreeCam sensing hardware consists of a small number of static color video cameras and state-of-the-art Kinect depth sensors, and the FreeCam software uses a number of advanced GPU processing and rendering techniques to seamlessly merge the input streams, providing a pleasant user experience. A system such as FreeCam is critical for applications such as telepresence, 3D video-conferencing and interactive 3D TV. FreeCam may also be used to produce multi-view video, which is critical to drive newgeneration autostereoscopic lenticular 3D displays.",
"title": ""
}
] |
scidocsrr
|
d88f5b18c0573f44b72e4e888aa499bf
|
Trusted 5G Vehicular Networks: Blockchains and Content-Centric Networking
|
[
{
"docid": "1913c6ce69e543a3ae9a90b73c9efddd",
"text": "Cooperative Intelligent Transportation Systems, mainly represented by vehicular ad hoc networks (VANETs), are among the key components contributing to the Smart City and Smart World paradigms. Based on the continuous exchange of both periodic and event triggered messages, smart vehicles can enhance road safety, while also providing support for comfort applications. In addition to the different communication protocols, securing such communications and establishing a certain trustiness among vehicles are among the main challenges to address, since the presence of dishonest peers can lead to unwanted situations. To this end, existing security solutions are typically divided into two main categories, cryptography and trust, where trust appeared as a complement to cryptography on some specific adversary models and environments where the latter was not enough to mitigate all possible attacks. In this paper, we provide an adversary-oriented survey of the existing trust models for VANETs. We also show when trust is preferable to cryptography, and the opposite. In addition, we show how trust models are usually evaluated in VANET contexts, and finally, we point out some critical scenarios that existing trust models cannot handle, together with some possible solutions.",
"title": ""
}
] |
[
{
"docid": "6af7bb1d2a7d8d44321a5b162c9781a2",
"text": "In this paper, we propose a deep metric learning (DML) approach for robust visual tracking under the particle filter framework. Unlike most existing appearance-based visual trackers, which use hand-crafted similarity metrics, our DML tracker learns a nonlinear distance metric to classify the target object and background regions using a feed-forward neural network architecture. Since there are usually large variations in visual objects caused by varying deformations, illuminations, occlusions, motions, rotations, scales, and cluttered backgrounds, conventional linear similarity metrics cannot work well in such scenarios. To address this, our proposed DML tracker first learns a set of hierarchical nonlinear transformations in the feed-forward neural network to project both the template and particles into the same feature space where the intra-class variations of positive training pairs are minimized and the interclass variations of negative training pairs are maximized simultaneously. Then, the candidate that is most similar to the template in the learned deep network is identified as the true target. Experiments on the benchmark data set including 51 challenging videos show that our DML tracker achieves a very competitive performance with the state-of-the-art trackers.",
"title": ""
},
{
"docid": "a1bff389a9a95926a052ded84c625a9e",
"text": "Automatically assessing the subjective quality of a photo is a challenging area in visual computing. Previous works study the aesthetic quality assessment on a general set of photos regardless of the photo's content and mainly use features extracted from the entire image. In this work, we focus on a specific genre of photos: consumer photos with faces. This group of photos constitutes an important part of consumer photo collections. We first conduct an online study on Mechanical Turk to collect ground-truth and subjective opinions for a database of consumer photos with faces. We then extract technical features, perceptual features, and social relationship features to represent the aesthetic quality of a photo, by focusing on face-related regions. Experiments show that our features perform well for categorizing or predicting the aesthetic quality.",
"title": ""
},
{
"docid": "7e7739bfddbae8cfa628d67eb582c121",
"text": "When firms implement enterprise resource planning, they need to redesign their business processes to make information flow smooth within organizations. ERP thus results in changes in processes and responsibilities. Firms cannot realize expected returns from ERP investments unless these changes are effectively managed after ERP systems are put into operation. This research proposes a conceptual framework to highlight the importance of the change management after firms implement ERP systems. Our research model is empirically tested using data collected from over 170 firms that had used ERP systems for more than one year. Our analysis reveals that the eventual success of ERP systems depends on effective change management after ERP implementation, supporting the existence of the valley of despair.",
"title": ""
},
{
"docid": "17812cae7547ba46d7170b99f6be1efc",
"text": "Developing supernumerary limbs is a rare congenital condition that only a few cases have been documented. Depending on the cause and developmental conditions, they may be single, multiple or complicated, and occur as a syndrome or associated with other anomalies. Polymelia is defined as the presence of extra limb(s) which have been reported in human, mouse, chicken, calf and lamb. It seems that the precise mechanism regulating this type of congenital malformations is not yet clearly understood. While hereditary trait of some limb anomalies was proven in human and the responsible genetic impairments were found, this has not been confirmed in the other animals especially the birds. Regarding the different susceptibilities of various vertebrate species to the environmental and genetic factors in embryonic period, the probable cause of an embryonic defect in one species cannot be generalized to the all other species class. The present study reports a case of polymelia in an Iranian indigenous young fowl and discusses its possible causes.",
"title": ""
},
{
"docid": "7526ae3542d1e922bd73be0da7c1af72",
"text": "Cooperative coevolutionary algorithms (CCEAs) rely on multiple coevolving populations for the evolution of solutions composed of coadapted components. CCEAs enable, for instance, the evolution of cooperative multiagent systems composed of heterogeneous agents, where each agent is modelled as a component of the solution. Previous works have, however, shown that CCEAs are biased toward stability: the evolutionary process tends to converge prematurely to stable states instead of (near-)optimal solutions. In this study, we show how novelty search can be used to avoid the counterproductive attraction to stable states in coevolution. Novelty search is an evolutionary technique that drives evolution toward behavioural novelty and diversity rather than exclusively pursuing a static objective. We evaluate three novelty-based approaches that rely on, respectively (1) the novelty of the team as a whole, (2) the novelty of the agents’ individual behaviour, and (3) the combination of the two. We compare the proposed approaches with traditional fitness-driven cooperative coevolution in three simulated multirobot tasks. Our results show that team-level novelty scoring is the most effective approach, significantly outperforming fitness-driven coevolution at multiple levels. Novelty-driven cooperative coevolution can substantially increase the potential of CCEAs while maintaining a computational complexity that scales well with the number of populations.",
"title": ""
},
{
"docid": "9082dc8e8d60b05255487232fdbec189",
"text": "Energy harvesting has been widely investigated as a promising method of providing power for ultra-low-power applications. Such energy sources include solar energy, radio-frequency (RF) radiation, piezoelectricity, thermal gradients, etc. However, the power supplied by these sources is highly unreliable and dependent upon ambient environment factors. Hence, it is necessary to develop specialized systems that are tolerant to this power variation, and also capable of making forward progress on the computation tasks. The simulation platform in this paper is calibrated using measured results from a fabricated nonvolatile processor and used to explore the design space for a nonvolatile processor with different architectures, different input power sources, and policies for maximizing forward progress.",
"title": ""
},
{
"docid": "66c493b14b7ab498e67f6d29cf91733a",
"text": "A digitally controlled low-dropout voltage regulator (LDO) that can perform fast-transient and autotuned voltage is introduced in this paper. Because there are still several arguments regarding the digital implementation on the LDOs, pros and cons of the digital control are first discussed in this paper to illustrate its opportunity in the LDO applications. Following that, the architecture and configuration of the digital scheme are demonstrated. The working principles and design flows of the functional algorithms are also illustrated and then verified by the simulation before the circuit implementation. The proposed LDO was implemented by the 0.18-μm manufacturing process for the performance test. Experimental results show that the LDO's output voltage Vout can accurately perform the dynamic voltage scaling function at various Vout levels (1/2, 5/9, 2/3, and 5/6 of the input voltage VDD) from a wide VDD range (from 1.8 to 0.9 V). The transient time is within 2 μs and the voltage spikes are within 50 mV when a 1-μF output capacitor is used. Test of the autotuning algorithm shows that the proposed LDO is able to work at its optimal performance under various uncertain conditions.",
"title": ""
},
{
"docid": "4ba81ce5756f2311dde3fa438f81e527",
"text": "To prevent password breaches and guessing attacks, banks increasingly turn to two-factor authentication (2FA), requiring users to present at least one more factor, such as a one-time password generated by a hardware token or received via SMS, besides a password. We can expect some solutions – especially those adding a token – to create extra work for users, but little research has investigated usability, user acceptance, and perceived security of deployed 2FA. This paper presents an in-depth study of 2FA usability with 21 UK online banking customers, 16 of whom had accounts with more than one bank. We collected a rich set of qualitative and quantitative data through two rounds of semi-structured interviews, and an authentication diary over an average of 11 days. Our participants reported a wide range of usability issues, especially with the use of hardware tokens, showing that the mental and physical workload involved shapes how they use online banking. Key targets for improvements are (i) the reduction in the number of authentication steps, and (ii) removing features that do not add any security but negatively affect the user experience.",
"title": ""
},
{
"docid": "2f336490567c50c0b59ebae2aa1d2903",
"text": "Psychosomatic medicine, with its prevailing biopsychosocial model, aims to integrate human and exact sciences with their divergent conceptual models. Therefore, its own conceptual foundations, which often remain implicit and unknown, may be critically relevant. We defend the thesis that choosing between different metaphysical views on the 'mind-body problem' may have important implications for the conceptual foundations of psychosomatic medicine, and therefore potentially also for its methods, scientific status and relationship with the scientific disciplines it aims to integrate: biomedical sciences (including neuroscience), psychology and social sciences. To make this point, we introduce three key positions in the philosophical 'mind-body' debate (emergentism, reductionism, and supervenience physicalism) and investigate their consequences for the conceptual basis of the biopsychosocial model in general and its 'psycho-biological' part ('mental causation') in particular. Despite the clinical merits of the biopsychosocial model, we submit that it is conceptually underdeveloped or even flawed, which may hamper its use as a proper scientific model.",
"title": ""
},
{
"docid": "250f83a255cdd13bcbe0347b3092f44b",
"text": "Current state-of-the-art remote photoplethysmography (rPPG) algorithms are capable of extracting a clean pulse signal in ambient light conditions using a regular color camera, even when subjects move significantly. In this study, we investigate the feasibility of rPPG in the (near)-infrared spectrum, which broadens the scope of applications for rPPG. Two camera setups are investigated: one setup consisting of three monochrome cameras with different optical filters, and one setup consisting of a single RGB camera with a visible light blocking filter. Simulation results predict the monochrome setup to be more motion robust, but this simulation neglects parallax. To verify this, a challenging benchmark dataset consisting of 30 videos is created with various motion scenarios and skin tones. Experiments show that both camera setups are capable of accurate pulse extraction in all motion scenarios, with an average SNR of +6.45 and +7.26 dB, respectively. The single camera setup proves to be superior in scenarios involving scaling, likely due to parallax of the multicamera setup. To further improve motion robustness of the RGB camera, dedicated LED illumination with two distinct wavelengths is proposed and verified. This paper demonstrates that accurate rPPG measurements in infrared are feasible, even with severe subject motion.",
"title": ""
},
{
"docid": "914f41b9f3c0d74f888c7dd83e226468",
"text": "We present a new algorithm for inferring the home location of Twitter users at different granularities, including city, state, time zone, or geographic region, using the content of users’ tweets and their tweeting behavior. Unlike existing approaches, our algorithm uses an ensemble of statistical and heuristic classifiers to predict locations and makes use of a geographic gazetteer dictionary to identify place-name entities. We find that a hierarchical classification approach, where time zone, state, or geographic region is predicted first and city is predicted next, can improve prediction accuracy. We have also analyzed movement variations of Twitter users, built a classifier to predict whether a user was travelling in a certain period of time, and use that to further improve the location detection accuracy. Experimental evidence suggests that our algorithm works well in practice and outperforms the best existing algorithms for predicting the home location of Twitter users.",
"title": ""
},
{
"docid": "a37498a6fbaabd220bad848d440e889b",
"text": "Deep multitask learning boosts performance by sharing learned structure across related tasks. This paper adapts ideas from deep multitask learning to the setting where only a single task is available. The method is formalized as pseudo-task augmentation, in which models are trained with multiple decoders for each task. Pseudo-tasks simulate the effect of training towards closelyrelated tasks drawn from the same universe. In a suite of experiments, pseudo-task augmentation improves performance on single-task learning problems. When combined with multitask learning, further improvements are achieved, including state-of-the-art performance on the CelebA dataset, showing that pseudo-task augmentation and multitask learning have complementary value. All in all, pseudo-task augmentation is a broadly applicable and efficient way to boost performance in deep learning systems.",
"title": ""
},
{
"docid": "28352dd6b60b511ff812820f4e712cde",
"text": "Extreme multi-label classification methods have been widely used in Web-scale classification tasks such as Web page tagging and product recommendation. In this paper, we present a novel graph embedding method called \"AnnexML\". At the training step, AnnexML constructs a k-nearest neighbor graph of label vectors and attempts to reproduce the graph structure in the embedding space. The prediction is efficiently performed by using an approximate nearest neighbor search method that efficiently explores the learned k-nearest neighbor graph in the embedding space. We conducted evaluations on several large-scale real-world data sets and compared our method with recent state-of-the-art methods. Experimental results show that our AnnexML can significantly improve prediction accuracy, especially on data sets that have larger a label space. In addition, AnnexML improves the trade-off between prediction time and accuracy. At the same level of accuracy, the prediction time of AnnexML was up to 58 times faster than that of SLEEC, which is a state-of-the-art embedding-based method.",
"title": ""
},
{
"docid": "9c8204510362de8a5362400fc4d26e24",
"text": "We focus on predicting sleep stages from radio measurements without any attached sensors on subjects. We introduce a new predictive model that combines convolutional and recurrent neural networks to extract sleep-specific subjectinvariant features from RF signals and capture the temporal progression of sleep. A key innovation underlying our approach is a modified adversarial training regime that discards extraneous information specific to individuals or measurement conditions, while retaining all information relevant to the predictive task. We analyze our game theoretic setup and empirically demonstrate that our model achieves significant improvements over state-of-the-art solutions.",
"title": ""
},
{
"docid": "a9f9f918d0163e18cf6df748647ffb05",
"text": "In previous work, we have shown that using terms from around citations in citing papers to index the cited paper, in addition to the cited paper's own terms, can improve retrieval effectiveness. Now, we investigate how to select text from around the citations in order to extract good index terms. We compare the retrieval effectiveness that results from a range of contexts around the citations, including no context, the entire citing paper, some fixed windows and several variations with linguistic motivations. We conclude with an analysis of the benefits of more complex, linguistically motivated methods for extracting citation index terms, over using a fixed window of terms. We speculate that there might be some advantage to using computational linguistic techniques for this task.",
"title": ""
},
{
"docid": "04b7ad51d2464052ebd3d32baeb5b57b",
"text": "Rob Antrobus Security Lancaster Research Centre Lancaster University Lancaster LA1 4WA UK security-centre.lancs.ac.uk [email protected] Sylvain Frey Security Lancaster Research Centre Lancaster University Lancaster LA1 4WA UK security-centre.lancs.ac.uk [email protected] Benjamin Green Security Lancaster Research Centre Lancaster University Lancaster LA1 4WA UK security-centre.lancs.ac.uk [email protected]",
"title": ""
},
{
"docid": "4d7b93ee9c6036c5915dd1166c9ae2f8",
"text": "In this paper, we present a developed NS-3 based emulation platform for evaluating and optimizing the performance of the LTE networks. The developed emulation platform is designed to provide real-time measurements. Thus it eliminates the need for the high cost spent on real equipment. The developed platform consists of three main parts, which are video server, video client(s), and NS-3 based simulation environment for LTE network. Using the developed platform, the server streams video clips to the existing clients going through the LTE simulated network. We utilize this setup to evaluate multiple cases such as mobility and handover. Moreover, we use it for evaluating multiple streaming protocols such as UDP, RTP, and Dynamic Adaptive Streaming over HTTP (DASH). Keywords-DASH, Emulation, LTE, NS-3, Real-time, RTP, UDP.",
"title": ""
},
{
"docid": "b6043969fad2b2fd195a069fcf003ca1",
"text": "In recent years, deep learning (DL), a rebranding of neural networks (NNs), has risen to the top in numerous areas, namely computer vision (CV), speech recognition, and natural language processing. Whereas remote sensing (RS) possesses a number of unique challenges, primarily related to sensors and applications, inevitably RS draws from many of the same theories as CV, e.g., statistics, fusion, and machine learning, to name a few. This means that the RS community should not only be aware of advancements such as DL, but also be leading researchers in this area. Herein, we provide the most comprehensive survey of state-of-the-art RS DL research. We also review recent new developments in the DL field that can be used in DL for RS. Namely, we focus on theories, tools, and challenges for the RS community. Specifically, we focus on unsolved challenges and opportunities as they relate to (i) inadequate data sets, (ii) human-understandable solutions for modeling physical phenomena, (iii) big data, (iv) nontraditional heterogeneous data sources, (v) DL architectures and learning algorithms for spectral, spatial, and temporal data, (vi) transfer learning, (vii) an improved theoretical understanding of DL systems, (viii) high barriers to entry, and (ix) training and optimizing the DL. © The Authors. Published by SPIE under a Creative Commons Attribution 3.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI. [DOI: 10.1117/1.JRS.11.042609]",
"title": ""
},
{
"docid": "e9aac361f8ca1bb8f10409859aef718d",
"text": "MapReduce has become an important distributed processing model for large-scale data-intensive applications like data mining and web indexing. Hadoop-an open-source implementation of MapReduce is widely used for short jobs requiring low response time. The current Hadoop implementation assumes that computing nodes in a cluster are homogeneous in nature. Data locality has not been taken into account for launching speculative map tasks, because it is assumed that most maps are data-local. Unfortunately, both the homogeneity and data locality assumptions are not satisfied in virtualized data centers. We show that ignoring the data-locality issue in heterogeneous environments can noticeably reduce the MapReduce performance. In this paper, we address the problem of how to place data across nodes in a way that each node has a balanced data processing load. Given a dataintensive application running on a Hadoop MapReduce cluster, our data placement scheme adaptively balances the amount of data stored in each node to achieve improved data-processing performance. Experimental results on two real data-intensive applications show that our data placement strategy can always improve the MapReduce performance by rebalancing data across nodes before performing a data-intensive application in a heterogeneous Hadoop cluster.",
"title": ""
},
{
"docid": "c5b2655f44471f007c02af03a41eec06",
"text": "Case reports of conjoined twins (\"Siamese twins\") in wild mammals are scarce. Most published reports of conjoined twins in mammals concern cases in man and domestic mammals. This article describes a case of cephalopagus conjoined twins in a leopard cat (Prionailurus bengalensis) collected on the island of Sumatra, Indonesia, in the period 1873-76. A review of known cases of conjoined twinning in wild mammals is given.",
"title": ""
}
] |
scidocsrr
|
93bf5d3133f6fafbf53ff522208ac54e
|
A Low-Power Broad-Bandwidth Noise Cancellation VLSI Circuit Design for In-Ear Headphones
|
[
{
"docid": "d0a086d03ffeaebede1dd3779de9c449",
"text": "In this paper, we present design and real-time implementation of a single-channel adaptive feedback active noise control (AFANC) headset for audio and communication applications. Several important design and implementation considerations, such as the ideal position of error microphone, training signal used, selection of adaptive algorithms and structures will be addressed in this paper. Real-time measurements and comparisons are also carried out with the latest commercial headset to evaluate its performance. In addition, several new extensions to the AFANC headset are described and evaluated.",
"title": ""
}
] |
[
{
"docid": "c8a2ba8f47266d0a63281a5abb5fa47f",
"text": "Hair plays an important role in human appearance. However, hair segmentation is still a challenging problem partially due to the lack of an effective model to handle its arbitrary shape variations. In this paper, we present a part-based model robust to hair shape and environment variations. The key idea of our method is to identify local parts by promoting the effectiveness of the part-based model. To this end, we propose a measurable statistic, called Subspace Clustering Dependency (SC-Dependency), to estimate the co-occurrence probabilities between local shapes. SC-Dependency guarantees output reasonability and allows us to evaluate the effectiveness of part-wise constraints in an information-theoretic way. Then we formulate the part identification problem as an MRF that aims to optimize the effectiveness of the potential functions. Experiments are performed on a set of consumer images and show our algorithm's capability and robustness to handle hair shape variations and extreme environment conditions.",
"title": ""
},
{
"docid": "e1d1ccf5d257340aa87b6f4f246565fa",
"text": "Genes vs environment The rising prevalence of childhood obesity is largely driven by recent changes in diet and levels of physical activity; however, there is strong evidence to suggest that like height, weight is a highly heritable trait (40–70% heritability). It is very likely that the ability to store fat in times of nutritional abundance was a positive trait selected over thousands of years of evolution only to emerge recently on a large scale as a result of changes in our environment. There is increasing recognition that studies aimed at identifying these polygenic or oligogenic influences on weight gain in childhood are needed and a number of loci have been identified in genome-wide scans in different populations, although as yet few have been replicated. As well as a detectable shift in the mean BMI of children and adults in most populations, we are seeing a greater proportion of patients of all ages with severe obesity. It is clear that these individuals have a certain genetic propensity to store excessive caloric intake as fat and it is important to have a practical approach to the investigation and management of these vulnerable patients who have considerably increased morbidity and mortality. Although there is no accepted definition for severe or morbid obesity in childhood, a BMI s.d. 42.5 (weight off the chart) is often used in Specialist Centres and the crossing of major growth percentile lines upward is an early indication of risk of severe obesity.",
"title": ""
},
{
"docid": "a09cb533a0a90a056857d597213efdf2",
"text": "一 引言 图像的边缘是图像的重要的特征,它给出了图像场景中物体的轮廓特征信息。当要对图 像中的某一个物体进行识别时,边缘信息是重要的可以利用的信息,例如在很多系统中采用 的模板匹配的识别算法。基于此,我们设计了一套基于 PCI Bus和 Vision Bus的可重构的机 器人视觉系统[3]。此系统能够实时的对图像进行采集,并可以通过系统实时的对图像进行 边缘的提取。 对于图像的边缘提取,采用二阶的边缘检测算子处理后要进行过零点检测,计算量很大 而且用硬件实现资源占用大且速度慢,所以在我们的视觉系统中,卷积器中选择的是一阶的 边缘检测算子。采用一阶的边缘检测算子进行卷积运算之后,仅仅需要对卷积得到的图像进 行阈值处理就可以得到图像的边缘,而阈值处理的操作用硬件实现占用资源少且速度快。由 于本视觉系统要求与应用环境下的精密装配机器人配合使用,系统的实时性要求非常高。因 此,如何对实时采集图像进行快速实时的边缘提取阈值的自动选取,是我们必须要考虑的问 题。 遗传算法是一种仿生物系统的基因进化的迭代搜索算法,其基本思想是由美国Michigan 大学的 J.Holland 教授提出的。由于遗传算法的整体寻优策略以及优化计算时不依赖梯度信 息,所以它具有很强的全局搜索能力,即对于解空间中的全局最优解有着很强的逼近能力。 它适用于问题结构不是十分清楚,总体很大,环境复杂的场合,而对于实时采集的图像进行 边缘检测阈值的选取就是此类问题。本文在对传统的遗传算法进行改进的基础上,提出了一 种对于实时采集图像进行边缘检测的阈值的自动选取方法。",
"title": ""
},
{
"docid": "72e7e5ab98cb660921c3479c5682dc10",
"text": "In this paper we adopt general sum stochas tic games as a framework for multiagent re inforcement learning Our work extends pre vious work by Littman on zero sum stochas tic games to a broader framework We de sign a multiagent Q learning method under this framework and prove that it converges to a Nash equilibrium under speci ed condi tions This algorithm is useful for nding the optimal strategy when there exists a unique Nash equilibrium in the game When there exist multiple Nash equilibria in the game this algorithm should be combined with other learning techniques to nd optimal strategies",
"title": ""
},
{
"docid": "47fb3483c8f4a5c0284fec3d3a309c09",
"text": "The Knowledge Base Population (KBP) track at the Text Analysis Conference 2010 marks the second year of this important information extraction evaluation. This paper describes the design and implementation of LCC’s systems which participated in the tasks of Entity Linking, Slot Filling, and the new task of Surprise Slot Filling. For the entity linking task, our top score was achieved through a robust context modeling approach which incorporates topical evidence. For slot filling, we used the output of the entity linking system together with a combination of different types of relation extractors. For surprise slot filling, our customizable extraction system was extremely useful due to the time sensitive nature of the task.",
"title": ""
},
{
"docid": "c8e029658bf4c298cb6e77128d19eac0",
"text": "Cloud Computing Business Framework (CCBF) is proposed to help organisations achieve good Cloud design, deployment, migration and services. While organisations adopt Cloud Computing for Web Services, technical and business challenges emerge and one of these includes the measurement of Cloud business performance. Organisational Sustainability Modelling (OSM) is a new way to measure Cloud business performance quantitatively and accurately. It combines statistical computation and 3D Visualisation to present the Return on Investment arising from the adoption of Cloud Computing by organisations. 3D visualisation simplifies the review process and is an innovative way for Return of Investment (ROI) valuation. Two detailed case studies with SAP and Vodafone have been presented, where OSM has analysed the business performance and explained how CCBF offers insights, which are relatively helpful for WS and Grid businesses. Comparisons and discussions between CCBF and other approaches related to WS are presented, where lessons learned are useful for Web Services, Cloud and Grid communities.",
"title": ""
},
{
"docid": "29e500aa57f82d63596ae13639d46cbf",
"text": "In this paper we present a intrusion detection module capable of detecting malicious network traffic in a SCADA (Supervisory Control and Data Acquisition) system. Malicious data in a SCADA system disrupt its correct functioning and tamper with its normal operation. OCSVM (One-Class Support Vector Machine) is an intrusion detection mechanism that does not need any labeled data for training or any information about the kind of anomaly is expecting for the detection process. This feature makes it ideal for processing SCADA environment data and automate SCADA performance monitoring. The OCSVM module developed is trained by network traces off line and detect anomalies in the system real time. The module is part of an IDS (Intrusion Detection System) system developed under CockpitCI project and communicates with the other parts of the system by the exchange of IDMEF (Intrusion Detection Message Exchange Format) messages that carry information about the source of the incident, the time and a classification of the alarm.",
"title": ""
},
{
"docid": "54722f4851707c2bf51d18910728a31c",
"text": "Many modern companies wish to maintain knowledge in the form of a corporate knowledge graph and to use and manage this knowledge via a knowledge graph management system (KGMS). We formulate various requirements for a fully-fledged KGMS. In particular, such a system must be capable of performing complex reasoning tasks but, at the same time, achieve efficient and scalable reasoning over Big Data with an acceptable computational complexity. Moreover, a KGMS needs interfaces to corporate databases, the web, and machine-learning and analytics packages. We present KRR formalisms and a system achieving these goals. To this aim, we use specific suitable fragments from the Datalog± family of languages, and we introduce the vadalog system, which puts these swift logics into action. This system exploits the theoretical underpinning of relevant Datalog± languages and combines it with existing and novel techniques from database and AI practice.",
"title": ""
},
{
"docid": "a7af0135b2214ca88883fe136bb13e70",
"text": "ITIL is one of the most used frameworks for IT service management. Implementing ITIL processes through an organization is not an easy task and present many difficulties. This paper explores the ITIL implementation's challenges and tries to experiment how Business Process Management Systems can help organization overtake those challenges.",
"title": ""
},
{
"docid": "5afb121d5e4a5ab8daa80580c8bd8253",
"text": "In this paper, we give a short introduction to machine learning and survey its applications in radiology. We focused on six categories of applications in radiology: medical image segmentation, registration, computer aided detection and diagnosis, brain function or activity analysis and neurological disease diagnosis from fMR images, content-based image retrieval systems for CT or MRI images, and text analysis of radiology reports using natural language processing (NLP) and natural language understanding (NLU). This survey shows that machine learning plays a key role in many radiology applications. Machine learning identifies complex patterns automatically and helps radiologists make intelligent decisions on radiology data such as conventional radiographs, CT, MRI, and PET images and radiology reports. In many applications, the performance of machine learning-based automatic detection and diagnosis systems has shown to be comparable to that of a well-trained and experienced radiologist. Technology development in machine learning and radiology will benefit from each other in the long run. Key contributions and common characteristics of machine learning techniques in radiology are discussed. We also discuss the problem of translating machine learning applications to the radiology clinical setting, including advantages and potential barriers.",
"title": ""
},
{
"docid": "84845323a1dcb318bb01fef5346c604d",
"text": "This paper introduced a centrifugal impeller-based wall-climbing robot with the μCOS-II System. Firstly, the climber's basic configurations of mechanical were described. Secondly, the mechanic analyses of walking mechanism was presented, which was essential to the suction device design. Thirdly, the control system including the PC remote control system and the STM32 master slave system was designed. Finally, an experiment was conducted to test the performance of negative pressure generating system and general abilities of wall-climbing robot.",
"title": ""
},
{
"docid": "bf48f9ac763b522b8d43cfbb281fbffa",
"text": "We present a declarative framework for collective deduplication of entity references in the presence of constraints. Constraints occur naturally in many data cleaning domains and can improve the quality of deduplication. An example of a constraint is \"each paper has a unique publication venue''; if two paper references are duplicates, then their associated conference references must be duplicates as well. Our framework supports collective deduplication, meaning that we can dedupe both paper references and conference references collectively in the example above. Our framework is based on a simple declarative Datalog-style language with precise semantics. Most previous work on deduplication either ignoreconstraints or use them in an ad-hoc domain-specific manner. We also present efficient algorithms to support the framework. Our algorithms have precise theoretical guarantees for a large subclass of our framework. We show, using a prototype implementation, that our algorithms scale to very large datasets. We provide thoroughexperimental results over real-world data demonstrating the utility of our framework for high-quality and scalable deduplication.",
"title": ""
},
{
"docid": "8bea1f9e107cfcebc080bc62d7ac600d",
"text": "The introduction of wireless transmissions into the data center has shown to be promising in improving cost effectiveness of data center networks DCNs. For high transmission flexibility and performance, a fundamental challenge is to increase the wireless availability and enable fully hybrid and seamless transmissions over both wired and wireless DCN components. Rather than limiting the number of wireless radios by the size of top-of-rack switches, we propose a novel DCN architecture, Diamond, which nests the wired DCN with radios equipped on all servers. To harvest the gain allowed by the rich reconfigurable wireless resources, we propose the low-cost deployment of scalable 3-D ring reflection spaces RRSs which are interconnected with streamlined wired herringbone to enable large number of concurrent wireless transmissions through high-performance multi-reflection of radio signals over metal. To increase the number of concurrent wireless transmissions within each RRS, we propose a precise reflection method to reduce the wireless interference. We build a 60-GHz-based testbed to demonstrate the function and transmission ability of our proposed architecture. We further perform extensive simulations to show the significant performance gain of diamond, in supporting up to five times higher server-to-server capacity, enabling network-wide load balancing, and ensuring high fault tolerance.",
"title": ""
},
{
"docid": "767d61512bb9d2db5d6bb7b3afb7b150",
"text": "Recent advances in deep generative models have shown promising potential in image inpanting, which refers to the task of predicting missing pixel values of an incomplete image using the known context. However, existing methods can be slow or generate unsatisfying results with easily detectable flaws. In addition, there is often perceivable discontinuity near the holes and require further post-processing to blend the results. We present a new approach to address the difficulty of training a very deep generative model to synthesize high-quality photo-realistic inpainting. Our model uses conditional generative adversarial networks (conditional GANs) as the backbone, and we introduce a novel block-wise procedural training scheme to stabilize the training while we increase the network depth. We also propose a new strategy called adversarial loss annealing to reduce the artifacts. We further describe several losses specifically designed for inpainting and show their effectiveness. Extensive experiments and user-study show that our approach outperforms existing methods in several tasks such as inpainting, face completion and image harmonization. Finally, we show our framework can be easily used as a tool for interactive guided inpainting, demonstrating its practical value to solve common real-world challenges.",
"title": ""
},
{
"docid": "23bc28928a00ba437660efcb1d93c1a8",
"text": "Mental disorders occur in people in all countries, societies and in all ethnic groups, regardless socio-economic order with more frequent anxiety disorders. Through the process of time many treatment have been applied in order to address this complex mental issue. People with anxiety disorders can benefit from a variety of treatments and services. Following an accurate diagnosis, possible treatments include psychological treatments and mediation. Complementary and alternative medicine (CAM) plays a significant role in health care systems. Patients with chronic pain conditions, including arthritis, chronic neck and backache, headache, digestive problems and mental health conditions (including insomnia, depression, and anxiety) were high users of CAM therapies. Aromatherapy is a holistic method of treatment, using essential oils. There are several essential oils that can help in reducing anxiety disorders and as a result the embodied events that they may cause.",
"title": ""
},
{
"docid": "de4e2e131a0ceaa47934f4e9209b1cdd",
"text": "With the popularity of mobile devices, spatial crowdsourcing is rising as a new framework that enables human workers to solve tasks in the physical world. With spatial crowdsourcing, the goal is to crowdsource a set of spatiotemporal tasks (i.e., tasks related to time and location) to a set of workers, which requires the workers to physically travel to those locations in order to perform the tasks. In this article, we focus on one class of spatial crowdsourcing, in which the workers send their locations to the server and thereafter the server assigns to every worker tasks in proximity to the worker’s location with the aim of maximizing the overall number of assigned tasks. We formally define this maximum task assignment (MTA) problem in spatial crowdsourcing, and identify its challenges. We propose alternative solutions to address these challenges by exploiting the spatial properties of the problem space, including the spatial distribution and the travel cost of the workers. MTA is based on the assumptions that all tasks are of the same type and all workers are equally qualified in performing the tasks. Meanwhile, different types of tasks may require workers with various skill sets or expertise. Subsequently, we extend MTA by taking the expertise of the workers into consideration. We refer to this problem as the maximum score assignment (MSA) problem and show its practicality and generality. Extensive experiments with various synthetic and two real-world datasets show the applicability of our proposed framework.",
"title": ""
},
{
"docid": "e12410e92e3f4c0f9c78bc5988606c93",
"text": "Semiarid environments are known for climate extremes such as high temperatures, low humidity, irregular precipitations, and apparent resource scarcity. We aimed to investigate how a small neotropical primate (Callithrix jacchus; the common marmoset) manages to survive under the harsh conditions that a semiarid environment imposes. The study was carried out in a 400-ha area of Caatinga in the northeast of Brazil. During a 6-month period (3 months of dry season and 3 months of wet season), we collected data on the diet of 19 common marmosets (distributed in five groups) and estimated their behavioral time budget during both the dry and rainy seasons. Resting significantly increased during the dry season, while playing was more frequent during the wet season. No significant differences were detected regarding other behaviors. In relation to the diet, we recorded the consumption of prey items such as insects, spiders, and small vertebrates. We also observed the consumption of plant items, including prickly cladodes, which represents a previously undescribed food item for this species. Cladode exploitation required perceptual and motor skills to safely access the food resource, which is protected by sharp spines. Our findings show that common marmosets can survive under challenging conditions in part because of adjustments in their behavior and in part because of changes in their diet.",
"title": ""
},
{
"docid": "2910fe6ac9958d9cbf9014c5d3140030",
"text": "We present a novel variational approach to estimate dense depth maps from multiple images in real-time. By using robust penalizers for both data term and regularizer, our method preserves discontinuities in the depth map. We demonstrate that the integration of multiple images substantially increases the robustness of estimated depth maps to noise in the input images. The integration of our method into recently published algorithms for camera tracking allows dense geometry reconstruction in real-time using a single handheld camera. We demonstrate the performance of our algorithm with real-world data.",
"title": ""
},
{
"docid": "b0901a572ecaaeb1233b92d5653c2f12",
"text": "This qualitative study offers a novel exploration of the links between social media, virtual intergroup contact, and empathy by examining how empathy is expressed through interactions on a popular social media blog. Global leaders are encouraging individuals to engage in behaviors and support policies that provide basic social foundations. It is difficult to motivate people to undertake such actions. However, research shows that empathy intensifies motivation to help others. It can cause individuals to see the world from the perspective of stigmatized group members and increase positive feelings. Social media offers a new pathway for virtual intergroup contact, providing opportunities to increase conversation about disadvantaged others and empathy. We examined expressions of empathy within a popular blog, Humans of New York (HONY), and engaged in purposeful case selection by focusing on (1) events where specific prosocial action was taken corresponding to interactions on the HONY blog and (2) presentation of people in countries other than the United States. Nine overarching themes; (1) perspective taking, (2) fantasy, (3) empathic concern, (4) personal distress, (5) relatability, (6) prosocial action, (7) community appreciation, (8) anti-empathy, and (9) rejection of anti-empathy, exemplify how the HONY community expresses and shares empathic thoughts and feelings.",
"title": ""
},
{
"docid": "f11ff738aaf7a528302e6ec5ed99c43c",
"text": "Vehicles equipped with GPS localizers are an important sensory device for examining people’s movements and activities. Taxis equipped with GPS localizers serve the transportation needs of a large number of people driven by diverse needs; their traces can tell us where passengers were picked up and dropped off, which route was taken, and what steps the driver took to find a new passenger. In this article, we provide an exhaustive survey of the work on mining these traces. We first provide a formalization of the data sets, along with an overview of different mechanisms for preprocessing the data. We then classify the existing work into three main categories: social dynamics, traffic dynamics and operational dynamics. Social dynamics refers to the study of the collective behaviour of a city’s population, based on their observed movements; Traffic dynamics studies the resulting flow of the movement through the road network; Operational dynamics refers to the study and analysis of taxi driver’s modus operandi. We discuss the different problems currently being researched, the various approaches proposed, and suggest new avenues of research. Finally, we present a historical overview of the research work in this field and discuss which areas hold most promise for future research.",
"title": ""
}
] |
scidocsrr
|
693e578c14483342cffaa27440f71599
|
Syntax highlighting in business process models
|
[
{
"docid": "8ec4ffa9226b9e6357ba64918f7659e9",
"text": "Purpose – This paper summarizes typical pitfalls as they can be observed in larger process modeling projects. Design/methodology/approach – The identified pitfalls have been derived from a series of focus groups and semi-structured interviews with business process analysts and managers of process management and modeling projects. Findings – The paper provides a list of typical characteristics of unsuccessful process modeling. It covers six pitfalls related to strategy and governance (1-3) and the involved stakeholders (4-6). Further issues related to tools and related requirements (7-10), the practice of modeling (11-16), the way we design to-be models (17-19), and how we deal with success of modeling and maintenance issues (19-21) will be discussed in the second part of this paper. Research limitations/implications – This paper is a personal viewpoint, and does not report on the outcomes of a structured qualitative research project. Practical implications – The provided list of total 22 pitfalls increases the awareness for the main challenges related to process modeling and helps to identify common mistakes. Originality/value – This paper is one of the very few contributions in the area of challenges related to process modeling.",
"title": ""
}
] |
[
{
"docid": "24151cf5d4481ba03e6ffd1ca29f3441",
"text": "The design, fabrication and characterization of 79 GHz slot antennas based on substrate integrated waveguides (SIW) are presented in this paper. All the prototypes are fabricated in a polyimide flex foil using printed circuit board (PCB) fabrication processes. A novel concept is used to minimize the leakage losses of the SIWs at millimeter wave frequencies. Different losses in the SIWs are analyzed. SIW-based single slot antenna, longitudinal and four-by-four slot array antennas are numerically and experimentally studied. Measurements of the antennas show approximately 4.7%, 5.4% and 10.7% impedance bandwidth (S11=-10 dB) with 2.8 dBi, 6.0 dBi and 11.0 dBi maximum antenna gain around 79 GHz, respectively. The measured results are in good agreement with the numerical simulations.",
"title": ""
},
{
"docid": "bb03f7d799b101966b4ea6e75cd17fea",
"text": "Fuzzy decision trees (FDTs) have shown to be an effective solution in the framework of fuzzy classification. The approaches proposed so far to FDT learning, however, have generally neglected time and space requirements. In this paper, we propose a distributed FDT learning scheme shaped according to the MapReduce programming model for generating both binary and multiway FDTs from big data. The scheme relies on a novel distributed fuzzy discretizer that generates a strong fuzzy partition for each continuous attribute based on fuzzy information entropy. The fuzzy partitions are, therefore, used as an input to the FDT learning algorithm, which employs fuzzy information gain for selecting the attributes at the decision nodes. We have implemented the FDT learning scheme on the Apache Spark framework. We have used ten real-world publicly available big datasets for evaluating the behavior of the scheme along three dimensions: 1) performance in terms of classification accuracy, model complexity, and execution time; 2) scalability varying the number of computing units; and 3) ability to efficiently accommodate an increasing dataset size. We have demonstrated that the proposed scheme turns out to be suitable for managing big datasets even with a modest commodity hardware support. Finally, we have used the distributed decision tree learning algorithm implemented in the MLLib library and the Chi-FRBCS-BigData algorithm, a MapReduce distributed fuzzy rule-based classification system, for comparative analysis.",
"title": ""
},
{
"docid": "b43e14cdca5bb58633a8f1530068d9ac",
"text": "Oxygen reduction reaction (ORR) and oxygen evolution reaction (OER) are essential reactions for energy-storage and -conversion devices relying on oxygen electrochemistry. High-performance, nonprecious metal-based hybrid catalysts are developed from postsynthesis integration of dual-phase spinel MnCo2O4 (dp-MnCo2O4) nanocrystals with nanocarbon materials, e.g., carbon nanotube (CNT) and nitrogen-doped reduced graphene oxide (N-rGO). The synergic covalent coupling between dp-MnCo2O4 and nanocarbons effectively enhances both the bifunctional ORR and OER activities of the spinel/nanocarbon hybrid catalysts. The dp-MnCo2O4/N-rGO hybrid catalysts exhibited comparable ORR activity and superior OER activity compared to commercial 30 wt % platinum supported on carbon black (Pt/C). An electrically rechargeable zinc-air battery using dp-MnCo2O4/CNT hybrid catalysts on the cathode was successfully operated for 64 discharge-charge cycles (or 768 h equivalent), significantly outperforming the Pt/C counterpart, which could only survive up to 108 h under similar conditions.",
"title": ""
},
{
"docid": "0742314b8099dce0eadaa12f96579209",
"text": "Smart utility network (SUN) communications are an essential part of the smart grid. Major vendors realized the importance of universal standards and participated in the IEEE802.15.4g standardization effort. Due to the fact that many vendors already have proprietary solutions deployed in the field, the standardization effort was a challenge, but after three years of hard work, the IEEE802.15.4g standard published on April 28th, 2012. The publication of this standard is a first step towards establishing common and consistent communication specifications for utilities deploying smart grid technologies. This paper summaries the technical essence of the standard and how it can be used in smart utility networks.",
"title": ""
},
{
"docid": "0c9a76222f885b95f965211e555e16cd",
"text": "In this paper we address the following question: “Can we approximately sample from a Bayesian posterior distribution if we are only allowed to touch a small mini-batch of data-items for every sample we generate?”. An algorithm based on the Langevin equation with stochastic gradients (SGLD) was previously proposed to solve this, but its mixing rate was slow. By leveraging the Bayesian Central Limit Theorem, we extend the SGLD algorithm so that at high mixing rates it will sample from a normal approximation of the posterior, while for slow mixing rates it will mimic the behavior of SGLD with a pre-conditioner matrix. As a bonus, the proposed algorithm is reminiscent of Fisher scoring (with stochastic gradients) and as such an efficient optimizer during burn-in.",
"title": ""
},
{
"docid": "65031bb814a4812e499a8906d3a67fc4",
"text": "The training process in industries is assisted with computer solutions to reduce costs. Normally, computer systems created to simulate assembly or machine manipulation are implemented with traditional Human-Computer interfaces (keyboard, mouse, etc). But, this usually leads to systems that are far from the real procedures, and thus not efficient in term of training. Two techniques could improve this procedure: mixed-reality and haptic feedback. We propose in this paper to investigate the integration of both of them inside a single framework. We present the hardware used to design our training system. A feasibility study allows one to establish testing protocol. The results of these tests convince us that such system should not try to simulate realistically the interaction between real and virtual objects as if it was only real objects.",
"title": ""
},
{
"docid": "d11d6df22b5c6212b27dad4e3ed96826",
"text": "We propose learning sentiment-specific word embeddings dubbed sentiment embeddings in this paper. Existing word embedding learning algorithms typically only use the contexts of words but ignore the sentiment of texts. It is problematic for sentiment analysis because the words with similar contexts but opposite sentiment polarity, such as good and bad, are mapped to neighboring word vectors. We address this issue by encoding sentiment information of texts (e.g., sentences and words) together with contexts of words in sentiment embeddings. By combining context and sentiment level evidences, the nearest neighbors in sentiment embedding space are semantically similar and it favors words with the same sentiment polarity. In order to learn sentiment embeddings effectively, we develop a number of neural networks with tailoring loss functions, and collect massive texts automatically with sentiment signals like emoticons as the training data. Sentiment embeddings can be naturally used as word features for a variety of sentiment analysis tasks without feature engineering. We apply sentiment embeddings to word-level sentiment analysis, sentence level sentiment classification, and building sentiment lexicons. Experimental results show that sentiment embeddings consistently outperform context-based embeddings on several benchmark datasets of these tasks. This work provides insights on the design of neural networks for learning task-specific word embeddings in other natural language processing tasks.",
"title": ""
},
{
"docid": "17c5f3ca9171cabddc13a6c0ad00e040",
"text": "Contingency planning is the first stage in developing a formal set of production planning and control activities for the reuse of products obtained via return flows in a closed-loop supply chain. The paper takes a contingency approach to explore the factors that impact production planning and control for closed-loop supply chains that incorporate product recovery. A series of three cases are presented, and a framework developed that shows the common activities required for all remanufacturing operations. To build on the similarities and illustrate and integrate the differences in closed-loop supply chains, Hayes and Wheelwright’s product–process matrix is used as a foundation to examine the three cases representing Remanufacture-to-Stock (RMTS), Reassemble-to-Order (RATO), and Remanufacture-to-Order (RMTO). These three cases offer end-points and an intermediate point for closed-loop supply operations. Since they represent different positions on the matrix, characteristics such as returns volume, timing, quality, product complexity, test and evaluation complexity, and remanufacturing complexity are explored. With a contingency theory for closed-loop supply chains that incorporate product recovery in place, past cases can now be reexamined and the potential for generalizability of the approach to similar types of other problems and applications can be assessed and determined. © 2002 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "f82a9c15e88ba24dbf8f5d4678b8dffd",
"text": "Numerous existing object segmentation frameworks commonly utilize the object bounding box as a prior. In this paper, we address semantic segmentation assuming that object bounding boxes are provided by object detectors, but no training data with annotated segments are available. Based on a set of segment hypotheses, we introduce a simple voting scheme to estimate shape guidance for each bounding box. The derived shape guidance is used in the subsequent graph-cut-based figure-ground segmentation. The final segmentation result is obtained by merging the segmentation results in the bounding boxes. We conduct an extensive analysis of the effect of object bounding box accuracy. Comprehensive experiments on both the challenging PASCAL VOC object segmentation dataset and GrabCut-50 image segmentation dataset show that the proposed approach achieves competitive results compared to previous detection or bounding box prior based methods, as well as other state-of-the-art semantic segmentation methods.",
"title": ""
},
{
"docid": "609bbd3b066cf7a56d11ea545c0b0e71",
"text": "Subgingival margins are often required for biologic, mechanical, or esthetic reasons. Several investigations have demonstrated that their use is associated with adverse periodontal reactions, such as inflammation or recession. The purpose of this prospective randomized clinical study was to determine if two different subgingival margin designs influence the periodontal parameters and patient perception. Deep chamfer and feather-edge preparations were compared on 58 patients with 6 months follow-up. Statistically significant differences were present for bleeding on probing, gingival recession, and patient satisfaction. Feather-edge preparation was associated with increased bleeding on probing and deep chamfer with increased recession; improved patient comfort was registered with chamfer margin design. Subgingival margins are technique sensitive, especially when feather-edge design is selected. This margin design may facilitate soft tissue stability but can expose the patient to an increased risk of gingival inflammation.",
"title": ""
},
{
"docid": "c8cd0f14edee76888e4f1fd0ccc72dfa",
"text": "BACKGROUND\nTotal hip and total knee arthroplasties are well accepted as reliable and suitable surgical procedures to return patients to function. Health-related quality-of-life instruments have been used to document outcomes in order to optimize the allocation of resources. The objective of this study was to review the literature regarding the outcomes of total hip and knee arthroplasties as evaluated by health-related quality-of-life instruments.\n\n\nMETHODS\nThe Medline and EMBASE medical literature databases were searched, from January 1980 to June 2003, to identify relevant studies. Studies were eligible for review if they met the following criteria: (1). the language was English or French, (2). at least one well-validated and self-reported health-related quality of life instrument was used, and (3). a prospective cohort study design was used.\n\n\nRESULTS\nOf the seventy-four studies selected for the review, thirty-two investigated both total hip and total knee arthroplasties, twenty-six focused on total hip arthroplasty, and sixteen focused on total knee arthroplasty exclusively. The most common diagnosis was osteoarthritis. The duration of follow-up ranged from seven days to seven years, with the majority of studies describing results at six to twelve months. The Short Form-36 and the Western Ontario and McMaster University Osteoarthritis Index, the most frequently used instruments, were employed in forty and twenty-eight studies, respectively. Seventeen studies used a utility index. Overall, total hip and total knee arthroplasties were found to be quite effective in terms of improvement in health-related quality-of-life dimensions, with the occasional exception of the social dimension. Age was not found to be an obstacle to effective surgery, and men seemed to benefit more from the intervention than did women. When improvement was found to be modest, the role of comorbidities was highlighted. Total hip arthroplasty appears to return patients to function to a greater extent than do knee procedures, and primary surgery offers greater improvement than does revision. Patients who had poorer preoperative health-related quality of life were more likely to experience greater improvement.\n\n\nCONCLUSIONS\nHealth-related quality-of-life data are valuable, can provide relevant health-status information to health professionals, and should be used as a rationale for the implementation of the most adequate standard of care. Additional knowledge and scientific dissemination of surgery outcomes should help to ensure better management of patients undergoing total hip or total knee arthroplasty and to optimize the use of these procedures.",
"title": ""
},
{
"docid": "82f8bfc9bb01105ccab46005d3df18d7",
"text": "This paper presents a comparative study of different classification methodologies for the task of fine-art genre classification. 2-level comparative study is performed for this classification problem. 1st level reviews the performance of discriminative vs. generative models while 2nd level touches the features aspect of the paintings and compares semantic-level features vs low-level and intermediate level features present in the painting.",
"title": ""
},
{
"docid": "5fb640a9081f72fcf994b1691470d7bc",
"text": "Omnidirectional cameras are widely used in such areas as robotics and virtual reality as they provide a wide field of view. Their images are often processed with classical methods, which might unfortunately lead to non-optimal solutions as these methods are designed for planar images that have different geometrical properties than omnidirectional ones. In this paper we study image classification task by taking into account the specific geometry of omnidirectional cameras with graph-based representations. In particular, we extend deep learning architectures to data on graphs; we propose a principled way of graph construction such that convolutional filters respond similarly for the same pattern on different positions of the image regardless of lens distortions. Our experiments show that the proposed method outperforms current techniques for the omnidirectional image classification problem.",
"title": ""
},
{
"docid": "0e5187e6d72082618bd5bda699adab93",
"text": "Many applications of mobile deep learning, especially real-time computer vision workloads, are constrained by computation power. This is particularly true for workloads running on older consumer phones, where a typical device might be powered by a singleor dual-core ARMv7 CPU. We provide an open-source implementation and a comprehensive analysis of (to our knowledge) the state of the art ultra-low-precision (<4 bit precision) implementation of the core primitives required for modern deep learning workloads on ARMv7 devices, and demonstrate speedups of 4x-20x over our additional state-of-the-art float32 and int8 baselines.",
"title": ""
},
{
"docid": "e4f26f4ed55e51fb2a9a55fd0f04ccc0",
"text": "Nowadays, the Web has revolutionized our vision as to how deliver courses in a radically transformed and enhanced way. Boosted by Cloud computing, the use of the Web in education has revealed new challenges and looks forward to new aspirations such as MOOCs (Massive Open Online Courses) as a technology-led revolution ushering in a new generation of learning environments. Expected to deliver effective education strategies, pedagogies and practices, which lead to student success, the massive open online courses, considered as the “linux of education”, are increasingly developed by elite US institutions such MIT, Harvard and Stanford by supplying open/distance learning for large online community without paying any fees, MOOCs have the potential to enable free university-level education on an enormous scale. Nevertheless, a concern often is raised about MOOCs is that a very small proportion of learners complete the course while thousands enrol for courses. In this paper, we present LASyM, a learning analytics system for massive open online courses. The system is a Hadoop based one whose main objective is to assure Learning Analytics for MOOCs’ communities as a mean to help them investigate massive raw data, generated by MOOC platforms around learning outcomes and assessments, and reveal any useful information to be used in designing learning-optimized MOOCs. To evaluate the effectiveness of the proposed system we developed a method to identify, with low latency, online learners more likely to drop out. Keywords—Cloud Computing; MOOCs; Hadoop; Learning",
"title": ""
},
{
"docid": "96669cea810d2918f2d35875f87d45f2",
"text": "In this paper, a new probabilistic tagging method is presented which avoids problems that Markov Model based taggers face, when they have to estimate transition probabilities from sparse data. In this tagging method, transition probabilities are estimated using a decision tree. Based on this method, a part-of-speech tagger (called TreeTagger) has been implemented which achieves 96.36 % accuracy on Penn-Treebank data which is better than that of a trigram tagger (96.06 %) on the same data.",
"title": ""
},
{
"docid": "6649a5635cffce83cc32887e2f6b0b04",
"text": "Alexander Serenko is a Professor at Faculty of Business Administration, Lakehead University, Thunder Bay, Canada. Nick Bontis is an Associate Professor at DeGroote School of Business, McMaster University, Hamilton, Canada. Abstract Purpose – The purpose of this paper is to investigate the impact of exchange modes – negotiated, reciprocal, generalized, and productive – on inter-employee knowledge sharing. Design/methodology/approach – Based on the affect theory of social exchange, a theoretical model was developed and empirically tested using a survey of 691 employees from 15 North American credit unions. Findings – The negotiated mode of knowledge exchange, i.e. when a knowledge contributor explicitly establishes reciprocation conditions with a recipient, develops negative knowledge sharing attitude. The reciprocal mode, i.e. when a knowledge donor assumes that a receiver will reciprocate, has no effect on knowledge sharing attitude. The generalized exchange form, i.e. when a knowledge contributor believes that other organizational members may reciprocate, is weakly related to knowledge sharing attitude. The productive exchange mode, i.e. when a knowledge provider assumes he or she is a responsible citizen within a cooperative enterprise, strongly facilitates the development of knowledge sharing attitude, which, in turn, leads to knowledge sharing intentions. Practical implications – To facilitate inter-employee knowledge sharing, managers should focus on the development of positive knowledge sharing culture when all employees believe they contribute to a common good instead of expecting reciprocal benefits. Originality/value – This is one of the first studies to apply the affect theory of social exchange to study knowledge sharing.",
"title": ""
},
{
"docid": "1e6ea96d9aafb244955ff38423562a1c",
"text": "Many statistical methods rely on numerical optimization to estimate a model’s parameters. Unfortunately, conventional algorithms sometimes fail. Even when they do converge, there is no assurance that they have found the global, rather than a local, optimum. We test a new optimization algorithm, simulated annealing, on four econometric problems and compare it to three common conventional algorithms. Not only can simulated annealing find the global optimum, it is also less likely to fail on difficult functions because it is a very robust algorithm. The promise of simulated annealing is demonstrated on the four econometric problems.",
"title": ""
},
{
"docid": "64770c350dc1d260e24a43760d4e641b",
"text": "A first step in the task of automatically generating questions for testing reading comprehension is to identify questionworthy sentences, i.e. sentences in a text passage that humans find it worthwhile to ask questions about. We propose a hierarchical neural sentence-level sequence tagging model for this task, which existing approaches to question generation have ignored. The approach is fully data-driven — with no sophisticated NLP pipelines or any hand-crafted rules/features — and compares favorably to a number of baselines when evaluated on the SQuAD data set. When incorporated into an existing neural question generation system, the resulting end-to-end system achieves stateof-the-art performance for paragraph-level question generation for reading comprehension.",
"title": ""
}
] |
scidocsrr
|
49c482e85fce56d255b2c45da53b4a69
|
New neutrosophic approach to image segmentation
|
[
{
"docid": "67aac8ddbd97ea2aeb56a954fcf099f3",
"text": "Image segmentation is very essential and critical to image processing and pattern recognition. This survey provides a summary of color image segmentation techniques available now. Basically, color segmentation approaches are based on monochrome segmentation approaches operating in di!erent color spaces. Therefore, we \"rst discuss the major segmentation approaches for segmenting monochrome images: histogram thresholding, characteristic feature clustering, edge detection, region-based methods, fuzzy techniques, neural networks, etc.; then review some major color representation methods and their advantages/disadvantages; \"nally summarize the color image segmentation techniques using di!erent color representations. The usage of color models for image segmentation is also discussed. Some novel approaches such as fuzzy method and physics-based method are investigated as well. 2001 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "0160d50bdd4bef68e4bc23e362283b0f",
"text": "Segmentation is a fundamental step in image description or classi1cation. In recent years, several computational models have been used to implement segmentation methods but without establishing a single analytic solution. However, the intrinsic properties of neural networks make them an interesting approach, despite some measure of ine5ciency. This paper presents a clustering approach for image segmentation based on a modi1ed fuzzy approach for image segmentation (ART) model. The goal of the proposed approach is to 1nd a simple model able to instance a prototype for each cluster avoiding complex post-processing phases. Results and comparisons with other similar models presented in the literature (like self-organizing maps and original fuzzy ART) are also discussed. Qualitative and quantitative evaluations con1rm the validity of the approach proposed. ? 2004 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "82592f60e0039089e3c16d9534780ad5",
"text": "A model for grey-tone image enhancement using the concept of fuzzy sets is suggested. It involves primary enhancement, smoothing, and then final enhancement. The algorithm for both the primary and final enhancements includes the extraction of fuzzy properties corresponding to pixels and then successive applications of the fuzzy operator \"contrast intensifier\" on the property plane. The three different smoothing techniques considered in the experiment are defocussing, averaging, and max-min rule over the neighbors of a pixel. The reduction of the \"index of fuzziness\" and \"entropy\" for different enhanced outputs (corresponding to different values of fuzzifiers) is demonstrated for an English script input. Enhanced output as obtained by histogram modification technique is also presented for comparison.",
"title": ""
},
{
"docid": "68990d2cb2ed45e1c8d30b2d7cb45926",
"text": "Methods for histogram thresholding based on the minimization of a threshold-dependent criterion function might not work well for images having multimodal histograms. We propose an approach to threshold the histogram according to the similarity between gray levels. Such a similarity is assessed through a fuzzy measure. In this way, we overcome the local minima that affect most of the conventional methods. The experimental results demonstrate the effectiveness of the proposed approach for both bimodal and multimodal histograms.",
"title": ""
}
] |
[
{
"docid": "2476c8b7f6fe148ab20c29e7f59f5b23",
"text": "A high temperature, wire-bondless power electronics module with a double-sided cooling capability is proposed and successfully fabricated. In this module, a low-temperature co-fired ceramic (LTCC) substrate was used as the dielectric and chip carrier. Conducting vias were created on the LTCC carrier to realize the interconnection. The absent of a base plate reduced the overall thermal resistance and also improved the fatigue life by eliminating a large-area solder layer. Nano silver paste was used to attach power devices to the DBC substrate as well as to pattern the gate connection. Finite element simulations were used to compare the thermal performance to several reported double-sided power modules. Electrical measurements of a SiC MOSFET and SiC diode switching position demonstrated the functionality of the module.",
"title": ""
},
{
"docid": "94f39416ba9918e664fb1cd48732e3ae",
"text": "In this paper, a nanostructured biosensor is developed to detect glucose in tear by using fluorescence resonance energy transfer (FRET) quenching mechanism. The designed FRET pair, including the donor, CdSe/ZnS quantum dots (QDs), and the acceptor, dextran-binding malachite green (MG-dextran), was conjugated to concanavalin A (Con A), an enzyme with specific affinity to glucose. In the presence of glucose, the quenched emission of QDs through the FRET mechanism is restored by displacing the dextran from Con A. To have a dual-modulation sensor for convenient and accurate detection, the nanostructured FRET sensors were assembled onto a patterned ZnO nanorod array deposited on the synthetic silicone hydrogel. Consequently, the concentration of glucose detected by the patterned sensor can be converted to fluorescence spectra with high signal-to-noise ratio and calibrated image pixel value. The photoluminescence intensity of the patterned FRET sensor increases linearly with increasing concentration of glucose from 0.03mmol/L to 3mmol/L, which covers the range of tear glucose levels for both diabetics and healthy subjects. Meanwhile, the calibrated values of pixel intensities of the fluorescence images captured by a handhold fluorescence microscope increases with increasing glucose. Four male Sprague-Dawley rats with different blood glucose concentrations were utilized to demonstrate the quick response of the patterned FRET sensor to 2µL of tear samples.",
"title": ""
},
{
"docid": "33cc033661cd680d11dfa14d5fe74d31",
"text": "Authentication and authorization are essential parts of basic security processes and are sorely needed in the Internet of Things (IoT). The emergence of edge and fog computing creates new opportunities for security and trust management in the IoT. In this article, the authors discuss existing solutions to establish and manage trust in networked systems and argue that these solutions face daunting challenges when scaled to the IoT. They give a vision of efficient and scalable trust management for the IoT based on locally centralized, globally distributed trust management using an open source infrastructure with local authentication and authorization entities to be deployed on edge devices.",
"title": ""
},
{
"docid": "43ac7e674624615c9906b2bd58b72b7b",
"text": "OBJECTIVE\nTo develop a method enabling human-like, flexible supervisory control via delegation to automation.\n\n\nBACKGROUND\nReal-time supervisory relationships with automation are rarely as flexible as human task delegation to other humans. Flexibility in human-adaptable automation can provide important benefits, including improved situation awareness, more accurate automation usage, more balanced mental workload, increased user acceptance, and improved overall performance.\n\n\nMETHOD\nWe review problems with static and adaptive (as opposed to \"adaptable\") automation; contrast these approaches with human-human task delegation, which can mitigate many of the problems; and revise the concept of a \"level of automation\" as a pattern of task-based roles and authorizations. We argue that delegation requires a shared hierarchical task model between supervisor and subordinates, used to delegate tasks at various levels, and offer instruction on performing them. A prototype implementation called Playbook is described.\n\n\nRESULTS\nOn the basis of these analyses, we propose methods for supporting human-machine delegation interactions that parallel human-human delegation in important respects. We develop an architecture for machine-based delegation systems based on the metaphor of a sports team's \"playbook.\" Finally, we describe a prototype implementation of this architecture, with an accompanying user interface and usage scenario, for mission planning for uninhabited air vehicles.\n\n\nCONCLUSION\nDelegation offers a viable method for flexible, multilevel human-automation interaction to enhance system performance while maintaining user workload at a manageable level.\n\n\nAPPLICATION\nMost applications of adaptive automation (aviation, air traffic control, robotics, process control, etc.) are potential avenues for the adaptable, delegation approach we advocate. We present an extended example for uninhabited air vehicle mission planning.",
"title": ""
},
{
"docid": "723f2a824bba1167b462b528a34b4b72",
"text": "The Korea Advanced Institute of Science and Technology (KAIST) humanoid robot 1 (KHR-1) was developed for the purpose of researching the walking action of bipeds. KHR-1, which has no hands or head, has 21 degrees of freedom (DOF): 12 DOF in the legs, 1 DOF in the torso, and 8 DOF in the arms. The second version of this humanoid robot, KHR-2, (which has 41 DOF) can walk on a living-room floor; it also moves and looks like a human. The third version, KHR-3 (HUBO), has more human-like features, a greater variety of movements, and a more human-friendly character. We present the mechanical design of HUBO, including the design concept, the lower body design, the upper body design, and the actuator selection of joints. Previously we developed and published details of KHR-1 and KHR-2. The HUBO platform, which is based on KHR-2, has 41 DOF, stands 125 cm tall, and weighs 55 kg. From a mechanical point of view, HUBO has greater mechanical stiffness and a more detailed frame design than KHR-2. The stiffness of the frame was increased and the detailed design around the joints and link frame were either modified or fully redesigned. We initially introduced an exterior art design concept for KHR-2, and that concept was implemented in HUBO at the mechanical design stage.",
"title": ""
},
{
"docid": "00575265d0a6338e3eeb23d234107206",
"text": "We introduce the concept of mode-k generalized eigenvalues and eigenvectors of a tensor and prove some properties of such eigenpairs. In particular, we derive an upper bound for the number of equivalence classes of generalized tensor eigenpairs using mixed volume. Based on this bound and the structures of tensor eigenvalue problems, we propose two homotopy continuation type algorithms to solve tensor eigenproblems. With proper implementation, these methods can find all equivalence classes of isolated generalized eigenpairs and some generalized eigenpairs contained in the positive dimensional components (if there are any). We also introduce an algorithm that combines a heuristic approach and a Newton homotopy method to extract real generalized eigenpairs from the found complex generalized eigenpairs. A MATLAB software package TenEig has been developed to implement these methods. Numerical results are presented to illustrate the effectiveness and efficiency of TenEig for computing complex or real generalized eigenpairs.",
"title": ""
},
{
"docid": "8adb0817a437ceebc40fabc05f168d0d",
"text": "Internet of Things (IoT) has been a major research topic for almost a decade now, where physical objects would be interconnected as a result of convergence of various existing technologies. IoT is rapidly developing; however there are uncertainties about its security and privacy which could affect its sustainable development. This paper analyzes the security issues and challenges and provides a well defined security architecture as a confidentiality of the user's privacy and security which could result in its wider adoption by masses.",
"title": ""
},
{
"docid": "53440b741e4fa53f786fd96aaa96bb58",
"text": "To investigate and develop unmanned vehicle systems technologies for autonomous multiagent mission platforms, we are using an indoor multivehicle testbed called real-time indoor autonomous vehicle test environment (RAVEN) to study long-duration multivehicle missions in a controlled environment. Normally, demonstrations of multivehicle coordination and control technologies require that multiple human operators simultaneously manage flight hardware, navigation, control, and vehicle tasking. However, RAVEN simplifies all of these issues to allow researchers to focus, if desired, on the algorithms associated with high-level tasks. Alternatively, RAVEN provides a facility for testing low-level control algorithms on both fixed- and rotary-wing aerial platforms. RAVEN is also being used to analyze and implement techniques for embedding the fleet and vehicle health state (for instance, vehicle failures, refueling, and maintenance) into UAV mission planning. These characteristics facilitate the rapid prototyping of new vehicle configurations and algorithms without requiring a redesign of the vehicle hardware. This article describes the main components and architecture of RAVEN and presents recent flight test results illustrating the applications discussed above.",
"title": ""
},
{
"docid": "eeff8eeb391e789a40cb8f900fa241e3",
"text": "We extend Stochastic Gradient Variational Bayes to perform posterior inference for the weights of Stick-Breaking processes. This development allows us to define a Stick-Breaking Variational Autoencoder (SB-VAE), a Bayesian nonparametric version of the variational autoencoder that has a latent representation with stochastic dimensionality. We experimentally demonstrate that the SB-VAE, and a semisupervised variant, learn highly discriminative latent representations that often outperform the Gaussian VAE’s.",
"title": ""
},
{
"docid": "e15405f1c0fb52be154e79a2976fbb6d",
"text": "The generalized Poisson regression model has been used to model dispersed count data. It is a good competitor to the negative binomial regression model when the count data is over-dispersed. Zero-inflated Poisson and zero-inflated negative binomial regression models have been proposed for the situations where the data generating process results into too many zeros. In this paper, we propose a zero-inflated generalized Poisson (ZIGP) regression model to model domestic violence data with too many zeros. Estimation of the model parameters using the method of maximum likelihood is provided. A score test is presented to test whether the number of zeros is too large for the generalized Poisson model to adequately fit the domestic violence data.",
"title": ""
},
{
"docid": "0a3988fd53a4634853b4ab7e6522f870",
"text": "DBSCAN is a well-known density based clustering algorithm capable of discovering arbitrary shaped clusters and eliminating noise data. However, parallelization of Dbscan is challenging as it exhibits an inherent sequential data access order. Moreover, existing parallel implementations adopt a master-slave strategy which can easily cause an unbalanced workload and hence result in low parallel efficiency.\n We present a new parallel Dbscan algorithm (Pdsdbscan) using graph algorithmic concepts. More specifically, we employ the disjoint-set data structure to break the access sequentiality of Dbscan. In addition, we use a tree-based bottom-up approach to construct the clusters. This yields a better-balanced workload distribution. We implement the algorithm both for shared and for distributed memory.\n Using data sets containing up to several hundred million high-dimensional points, we show that Pdsdbscan significantly outperforms the master-slave approach, achieving speedups up to 25.97 using 40 cores on shared memory architecture, and speedups up to 5,765 using 8,192 cores on distributed memory architecture.",
"title": ""
},
{
"docid": "362301e0a25d8e14054b2eee20d9ba31",
"text": "Preterm birth is “a birth which takes place after at least 20, but less than 37, completed weeks of gestation. This includes both live births, and stillbirths” [15]. Preterm birth may cause problems such as perinatal mortality, serious neonatal morbidity and moderate to severe childhood disability. Between 6-10% of all births in Western countries are preterm and preterm deaths are the cause for more than two-third of all perinatal deaths [9]. While the recent advances in neonatal medicine has greatly increase the chance of survival of infants born after 20 weeks of gestation, these infants still frequently suffer from lifelong handicaps, and their care can exceed a million dollars during the first year of life [5 as cited in 6]. As a first step for preventing preterm birth, decision support tools are needed to help doctors predict preterm birth [6].",
"title": ""
},
{
"docid": "cdf6ad3e7846c510620a2b9184058200",
"text": "In this paper, a method for the parallel operation of inverters in an ac-distributed system is proposed. The paper explores the control of active and reactive power flow through the analysis of the output impedance of the inverters and its impact on the power sharing. As a result, adaptive virtual output impedance is proposed in order to achieve a proper reactive power sharing regardless of the line impedance unbalances. A soft-start operation is also included, avoiding the initial current peak, which results in a seamless hot-swap operation. Active power sharing is achieved by adjusting the frequency in load transient situations only, thanks to which the proposed method obtains constant steady-state frequency and amplitude. As opposed to the conventional droop method, the transient response can be modified by acting on the main control parameters. Linear and nonlinear loads can be properly shared due to the addition of a current harmonic loop in the control strategy. Experimental results are presented from a two 6-kVA parallel-connected inverters system, showing the feasibility of the proposed approach.",
"title": ""
},
{
"docid": "c618caa277af7a0a64dd676bffab9cd3",
"text": "Theoretical and empirical research documents a negative relation between the cross-section of stock returns and individual skewness. Individual skewness has been de
ned with coskewness, industry groups, predictive models, and even with options skewness. However, measures of skewness computed only from stock returns, such as historical skewness, do not con
rm this negative relation. In this paper, we propose a model-free measure of individual stock skewness directly obtained from high-frequency intraday prices, which we call realized skewness. We hypothesize that realized skewness predicts future stock returns. To test this hypothesis, we sort stocks every week according to realized skewness, form
ve portfolios and analyze subsequent weekly returns. We
nd a negative relation between realized skewness and stock returns in the cross section. A trading strategy that buys stocks in the lowest realized skewness quintile and sells stocks in the highest realized skewness quintile generates an average raw return of 38 basis points per week with a t-statistic of 9.15. This result is robust to di¤erent market periods, portfolio weightings,
rm characteristics and is not explained by linear factor models. Comments are welcome. We both want to thank IFM for
nancial support. Any remaining inadequacies are ours alone. Correspondence to: Aurelio Vasquez, Faculty of Management, McGill University, 1001 Sherbrooke Street West, Montreal, Quebec, Canada, H3A 1G5; Tel: (514) 398-4000 x.00231; E-mail: [email protected].",
"title": ""
},
{
"docid": "24615e8513ce50d229b64eecaa5af8c8",
"text": "Driver's gaze direction is a critical information in understanding driver state. In this paper, we present a distributed camera framework to estimate driver's coarse gaze direction using both head and eye cues. Coarse gaze direction is often sufficient in a number of applications, however, the challenge is to estimate gaze direction robustly in naturalistic real-world driving. Towards this end, we propose gaze-surrogate features estimated from eye region via eyelid and iris analysis. We present a novel iris detection computational framework. We are able to extract proposed features robustly and determine driver's gaze zone effectively. We evaluated the proposed system on a dataset, collected from naturalistic on-road driving in urban streets and freeways. A human expert annotated driver's gaze zone ground truth using information from the driver's eyes and the surrounding context. We conducted two experiments to compare the performance of the gaze zone estimation with and without eye cues. The head-alone experiment has a reasonably good result for most of the gaze zones with an overall 79.8% of weighted accuracy. By adding eye cues, the experimental result shows that the overall weighted accuracy is boosted to 94.9%, and all the individual gaze zones have a better true detection rate especially between the adjacent zones. Therefore, our experimental evaluations show efficacy of the proposed features and very promising results for robust gaze zone estimation.",
"title": ""
},
{
"docid": "febf797870da28d6492885095b92ef1f",
"text": "Most methods for learning object categories require large amounts of labeled training data. However, obtaining such data can be a difficult and time-consuming endeavor. We have developed a novel, entropy-based ldquoactive learningrdquo approach which makes significant progress towards this problem. The main idea is to sequentially acquire labeled data by presenting an oracle (the user) with unlabeled images that will be particularly informative when labeled. Active learning adaptively prioritizes the order in which the training examples are acquired, which, as shown by our experiments, can significantly reduce the overall number of training examples required to reach near-optimal performance. At first glance this may seem counter-intuitive: how can the algorithm know whether a group of unlabeled images will be informative, when, by definition, there is no label directly associated with any of the images? Our approach is based on choosing an image to label that maximizes the expected amount of information we gain about the set of unlabeled images. The technique is demonstrated in several contexts, including improving the efficiency of Web image-search queries and open-world visual learning by an autonomous agent. Experiments on a large set of 140 visual object categories taken directly from text-based Web image searches show that our technique can provide large improvements (up to 10 x reduction in the number of training examples needed) over baseline techniques.",
"title": ""
},
{
"docid": "c0840aaf0ad2124d9411d1a718b0a624",
"text": "To secure the network communication for the smart grid, it is important that a secure key management scheme is needed. The focus of this paper is on the secure key distribution for the smart grid. In this paper, we first overview the key management scheme recently proposed by Wu-Zhou and show it is vulnerable to the man-in-the-middle attack. Then we propose a new key distribution protocol and demonstrate it is secure and efficient for smart grid network. Applying traditional PKI to the smart grid requires significant work and maintenance of the public key. By using Kerberos to smart grid may lose authentication from the third party due to power outages. Therefore, we propose a scheme for a smart grid using a trusted third party which not only has no issue on key revocation, but also the third party can be easily duplicated in case power outages occur.",
"title": ""
},
{
"docid": "2d0c16376e71989031b99f3e5d79025c",
"text": "In this paper, we present a novel and general network structure towards accelerating the inference process of convolutional neural networks, which is more complicated in network structure yet with less inference complexity. The core idea is to equip each original convolutional layer with another low-cost collaborative layer (LCCL), and the element-wise multiplication of the ReLU outputs of these two parallel layers produces the layer-wise output. The combined layer is potentially more discriminative than the original convolutional layer, and its inference is faster for two reasons: 1) the zero cells of the LCCL feature maps will remain zero after element-wise multiplication, and thus it is safe to skip the calculation of the corresponding high-cost convolution in the original convolutional layer, 2) LCCL is very fast if it is implemented as a 1*1 convolution or only a single filter shared by all channels. Extensive experiments on the CIFAR-10, CIFAR-100 and ILSCRC-2012 benchmarks show that our proposed network structure can accelerate the inference process by 32% on average with negligible performance drop.",
"title": ""
},
{
"docid": "6a2c7d43cde643f295ace71f5681285f",
"text": "Quantum mechanics and information theory are among the most important scientific discoveries of the last century. Although these two areas initially developed separately, it has emerged that they are in fact intimately related. In this review the author shows how quantum information theory extends traditional information theory by exploring the limits imposed by quantum, rather than classical, mechanics on information storage and transmission. The derivation of many key results differentiates this review from the usual presentation in that they are shown to follow logically from one crucial property of relative entropy. Within the review, optimal bounds on the enhanced speed that quantum computers can achieve over their classical counterparts are outlined using information-theoretic arguments. In addition, important implications of quantum information theory for thermodynamics and quantum measurement are intermittently discussed. A number of simple examples and derivations, including quantum superdense coding, quantum teleportation, and Deutsch’s and Grover’s algorithms, are also included.",
"title": ""
},
{
"docid": "3b09a6442c408601bf65078910c1ff46",
"text": "Eukaryotic cells respond to unfolded proteins in their endoplasmic reticulum (ER stress), amino acid starvation, or oxidants by phosphorylating the alpha subunit of translation initiation factor 2 (eIF2alpha). This adaptation inhibits general protein synthesis while promoting translation and expression of the transcription factor ATF4. Atf4(-/-) cells are impaired in expressing genes involved in amino acid import, glutathione biosynthesis, and resistance to oxidative stress. Perk(-/-) cells, lacking an upstream ER stress-activated eIF2alpha kinase that activates Atf4, accumulate endogenous peroxides during ER stress, whereas interference with the ER oxidase ERO1 abrogates such accumulation. A signaling pathway initiated by eIF2alpha phosphorylation protects cells against metabolic consequences of ER oxidation by promoting the linked processes of amino acid sufficiency and resistance to oxidative stress.",
"title": ""
}
] |
scidocsrr
|
b495f664e9f2408e4a338e5dc3c14456
|
Machine learning based handover management for improved QoE in LTE
|
[
{
"docid": "4c50dd5905ce7e1f772e69673abe1094",
"text": "The wireless industry has been experiencing an explosion of data traffic usage in recent years and is now facing an even bigger challenge, an astounding 1000-fold data traffic increase in a decade. The required traffic increase is in bits per second per square kilometer, which is equivalent to bits per second per Hertz per cell × Hertz × cell per square kilometer. The innovations through higher utilization of the spectrum (bits per second per Hertz per cell) and utilization of more bandwidth (Hertz) are quite limited: spectral efficiency of a point-to-point link is very close to the theoretical limits, and utilization of more bandwidth is a very costly solution in general. Hyper-dense deployment of heterogeneous and small cell networks (HetSNets) that increase cells per square kilometer by deploying more cells in a given area is a very promising technique as it would provide a huge capacity gain by bringing small base stations closer to mobile devices. This article presents a holistic view on hyperdense HetSNets, which include fundamental preference in future wireless systems, and technical challenges and recent technological breakthroughs made in such networks. Advancements in modeling and analysis tools for hyper-dense HetSNets are also introduced with some additional interference mitigation and higher spectrum utilization techniques. This article ends with a promising view on the hyper-dense HetSNets to meet the upcoming 1000× data challenge.",
"title": ""
}
] |
[
{
"docid": "a94d8b425aed0ade657aa1091015e529",
"text": "Generative models for source code are an interesting structured prediction problem, requiring to reason about both hard syntactic and semantic constraints as well as about natural, likely programs. We present a novel model for this problem that uses a graph to represent the intermediate state of the generated output. Our model generates code by interleaving grammar-driven expansion steps with graph augmentation and neural message passing steps. An experimental evaluation shows that our new model can generate semantically meaningful expressions, outperforming a range of strong baselines.",
"title": ""
},
{
"docid": "49880a6cad6b00b9dfbd517c6675338e",
"text": "Associations between large cavum septum pellucidum and functional psychosis disorders, especially schizophrenia, have been reported. We report a case of late-onset catatonia associated with enlarged CSP and cavum vergae. A 66-year-old woman was presented with altered mental status and stereotypic movement. She was initially treated with aripiprazole and lorazepam. After 4 weeks, she was treated with electroconvulsive therapy. By 10 treatments, echolalia vanished, and catatonic behavior was alleviated. Developmental anomalies in the midline structure may increase susceptibility to psychosis, even in the elderly.",
"title": ""
},
{
"docid": "dd5a45464936906e7b4c987274c66839",
"text": "Visual analytic systems, especially mixed-initiative systems, can steer analytical models and adapt views by making inferences from users’ behavioral patterns with the system. Because such systems rely on incorporating implicit and explicit user feedback, they are particularly susceptible to the injection and propagation of human biases. To ultimately guard against the potentially negative effects of systems biased by human users, we must first qualify what we mean by the term bias. Thus, in this paper we describe four different perspectives on human bias that are particularly relevant to visual analytics. We discuss the interplay of human and computer system biases, particularly their roles in mixed-initiative systems. Given that the term bias is used to describe several different concepts, our goal is to facilitate a common language in research and development efforts by encouraging researchers to mindfully choose the perspective(s) considered in their work.",
"title": ""
},
{
"docid": "8dc2f16d4f4ed1aa0acf6a6dca0ccc06",
"text": "This is the second paper in a four-part series detailing the relative merits of the treatment strategies, clinical techniques and dental materials for the restoration of health, function and aesthetics for the dentition. In this paper the management of wear in the anterior dentition is discussed, using three case studies as illustration.",
"title": ""
},
{
"docid": "93c928adef35a409acaa9b371a1498f3",
"text": "The acquisition of a new motor skill is characterized first by a short-term, fast learning stage in which performance improves rapidly, and subsequently by a long-term, slower learning stage in which additional performance gains are incremental. Previous functional imaging studies have suggested that distinct brain networks mediate these two stages of learning, but direct comparisons using the same task have not been performed. Here we used a task in which subjects learn to track a continuous 8-s sequence demanding variable isometric force development between the fingers and thumb of the dominant, right hand. Learning-associated changes in brain activation were characterized using functional MRI (fMRI) during short-term learning of a novel sequence, during short-term learning after prior, brief exposure to the sequence, and over long-term (3 wk) training in the task. Short-term learning was associated with decreases in activity in the dorsolateral prefrontal, anterior cingulate, posterior parietal, primary motor, and cerebellar cortex, and with increased activation in the right cerebellar dentate nucleus, the left putamen, and left thalamus. Prefrontal, parietal, and cerebellar cortical changes were not apparent with short-term learning after prior exposure to the sequence. With long-term learning, increases in activity were found in the left primary somatosensory and motor cortex and in the right putamen. Our observations extend previous work suggesting that distinguishable networks are recruited during the different phases of motor learning. While short-term motor skill learning seems associated primarily with activation in a cortical network specific for the learned movements, long-term learning involves increased activation of a bihemispheric cortical-subcortical network in a pattern suggesting \"plastic\" development of new representations for both motor output and somatosensory afferent information.",
"title": ""
},
{
"docid": "088df7d8d71c00f7129d5249844edbc5",
"text": "Intense multidisciplinary research has provided detailed knowledge of the molecular pathogenesis of Alzheimer disease (AD). This knowledge has been translated into new therapeutic strategies with putative disease-modifying effects. Several of the most promising approaches, such as amyloid-β immunotherapy and secretase inhibition, are now being tested in clinical trials. Disease-modifying treatments might be at their most effective when initiated very early in the course of AD, before amyloid plaques and neurodegeneration become too widespread. Thus, biomarkers are needed that can detect AD in the predementia phase or, ideally, in presymptomatic individuals. In this Review, we present the rationales behind and the diagnostic performances of the core cerebrospinal fluid (CSF) biomarkers for AD, namely total tau, phosphorylated tau and the 42 amino acid form of amyloid-β. These biomarkers reflect AD pathology, and are candidate markers for predicting future cognitive decline in healthy individuals and the progression to dementia in patients who are cognitively impaired. We also discuss emerging plasma and CSF biomarkers, and explore new proteomics-based strategies for identifying additional CSF markers. Furthermore, we outline the roles of CSF biomarkers in drug discovery and clinical trials, and provide perspectives on AD biomarker discovery and the validation of such markers for use in the clinic.",
"title": ""
},
{
"docid": "4bbcaa76b20afecc8e6002d155acf23e",
"text": "We study the problem of learning mixtures of distributions, a natural formalization of clustering. A mixture of distributions is a collection of distributionsD = {D1, . . .DT }, andmixing weights , {w1, . . . , wT } such that",
"title": ""
},
{
"docid": "0a842427c2c03d08f9950765ee0fb625",
"text": "For centuries, several hundred pesticides have been used to control insects. These pesticides differ greatly in their mode of action, uptake by the body, metabolism, elimination from the body, and toxicity to humans. Potential exposure from the environment can be estimated by environmental monitoring. Actual exposure (uptake) is measured by the biological monitoring of human tissues and body fluids. Biomarkers are used to detect the effects of pesticides before adverse clinical health effects occur. Pesticides and their metabolites are measured in biological samples, serum, fat, urine, blood, or breast milk by the usual analytical techniques. Biochemical responses to environmental chemicals provide a measure of toxic effect. A widely used biochemical biomarker, cholinesterase depression, measures exposure to organophosphorus insecticides. Techniques that measure DNA damage (e.g., detection of DNA adducts) provide a powerful tool in measuring environmental effects. Adducts to hemoglobin have been detected with several pesticides. Determination of chromosomal aberration rates in cultured lymphocytes is an established method of monitoring populations occupationally or environmentally exposed to known or suspected mutagenic-carcinogenic agents. There are several studies on the cytogenetic effects of work with pesticide formulations. The majority of these studies report increases in the frequency of chromosomal aberrations and/or sister chromatid exchanges among the exposed workers. Biomarkers will have a major impact on the study of environmental risk factors. The basic aim of scientists exploring these issues is to determine the nature and consequences of genetic change or variation, with the ultimate purpose of predicting or preventing disease.",
"title": ""
},
{
"docid": "a7090eb926dee4b648e307559db4fc36",
"text": "Technology incubators are university-based technology initiatives that should facilitate knowledge flows from the university to the incubator firms. We thus investigate the research question of how knowledge actually flows from universities to incubator firms. Moreover, we assess the effect of these knowledge flows on incubator firm-level differential performance. Based on the resource-based view of the firm and the absorptive capacity construct, we advance the overarching hypothesis that knowledge flows should enhance incubator firm performance. Drawing on longitudinal and fine-grained firm-level data of 79 technology ventures incubated between 1998 and 2003 at the Advanced Technology Development Center, a technology incubator sponsored by the Georgia Institute of Technology, we find some support for knowledge flows from universities to incubator firms. Our evidence suggests that incubator firms’ absorptive capacity is an important factor when transforming university knowledge into",
"title": ""
},
{
"docid": "6954c2a51c589987ba7e37bd81289ba1",
"text": "TYAs paper looks at some of the algorithms that can be used for effective detection and tracking of vehicles, in particular for statistical analysis. The main methods for tracking discussed and implemented are blob analysis, optical flow and foreground detection. A further analysis is also done testing two of the techniques using a number of video sequences that include different levels of difficulties.",
"title": ""
},
{
"docid": "7dcc7cdff8a9196c716add8a1faf0203",
"text": "Power modulators for compact, repetitive systems are continually faced with new requirements as the corresponding system objectives increase. Changes in pulse rate frequency or number of pulses significantly impact the design of the power conditioning system. In order to meet future power supply requirements, we have developed several high voltage (HV) capacitor charging power supplies (CCPS). This effort focuses on a volume of 6\" x 6\" x 14\" and a weight of 25 lbs. The primary focus was to increase the effective capacitor charge rate, or power output, for the given size and weight. Although increased power output was the principal objective, efficiency and repeatability were also considered. A number of DC-DC converter topologies were compared to determine the optimal design. In order to push the limits of output power, numerous resonant converter parameters were examined. Comparisons of numerous topologies, HV transformers and rectifiers, and switching frequency ranges are presented. The impacts of the control system and integration requirements are also considered.",
"title": ""
},
{
"docid": "b6983a5ccdac40607949e2bfe2beace2",
"text": "A focus on novel, confirmatory, and statistically significant results leads to substantial bias in the scientific literature. One type of bias, known as \"p-hacking,\" occurs when researchers collect or select data or statistical analyses until nonsignificant results become significant. Here, we use text-mining to demonstrate that p-hacking is widespread throughout science. We then illustrate how one can test for p-hacking when performing a meta-analysis and show that, while p-hacking is probably common, its effect seems to be weak relative to the real effect sizes being measured. This result suggests that p-hacking probably does not drastically alter scientific consensuses drawn from meta-analyses.",
"title": ""
},
{
"docid": "36a694668a10bc0475f447adb1e09757",
"text": "Previous findings indicated that when people observe someone’s behavior, they spontaneously infer the traits and situations that cause the target person’s behavior. These inference processes are called spontaneous trait inferences (STIs) and spontaneous situation inferences (SSIs). While both patterns of inferences have been observed, no research has examined the extent to which people from different cultural backgrounds produce these inferences when information affords both trait and situation inferences. Based on the theoretical frameworks of social orientations and thinking styles, we hypothesized that European Canadians would be more likely to produce STIs than SSIs because of the individualistic/independent social orientation and the analytic thinking style dominant in North America, whereas Japanese would produce both STIs and SSIs equally because of the collectivistic/interdependent social orientation and the holistic thinking style dominant in East Asia. Employing the savings-in-relearning paradigm, we presented information that affords both STIs and SSIs and examined cultural differences in the extent of both inferences. The results supported our hypotheses. The relationships between culturally dominant styles of thought and the inference processes in impression formation are discussed.",
"title": ""
},
{
"docid": "9d60842315ad481ac55755160a581d74",
"text": "This paper presents an efficient DNN design with stochastic computing. Observing that directly adopting stochastic computing to DNN has some challenges including random error fluctuation, range limitation, and overhead in accumulation, we address these problems by removing near-zero weights, applying weight-scaling, and integrating the activation function with the accumulator. The approach allows an easy implementation of early decision termination with a fixed hardware design by exploiting the progressive precision characteristics of stochastic computing, which was not easy with existing approaches. Experimental results show that our approach outperforms the conventional binary logic in terms of gate area, latency, and power consumption.",
"title": ""
},
{
"docid": "989bdb2cf2e2587b854d8411f945d4fe",
"text": "In this paper, we propose a combination of mean-shift-based tracking processes to establish migrating cell trajectories through in vitro phase-contrast video microscopy. After a recapitulation on how the mean-shift algorithm permits efficient object tracking we describe the proposed extension and apply it to the in vitro cell tracking problem. In this application, the cells are unmarked (i.e., no fluorescent probe is used) and are observed under classical phase-contrast microscopy. By introducing an adaptive combination of several kernels, we address several problems such as variations in size and shape of the tracked objects (e.g., those occurring in the case of cell membrane extensions), the presence of incomplete (or noncontrasted) object boundaries, partially overlapping objects and object splitting (in the case of cell divisions or mitoses). Comparing the tracking results automatically obtained to those generated manually by a human expert, we tested the stability of the different algorithm parameters and their effects on the tracking results. We also show how the method is resistant to a decrease in image resolution and accidental defocusing (which may occur during long experiments, e.g., dozens of hours). Finally, we applied our methodology on cancer cell tracking and showed that cytochalasin-D significantly inhibits cell motility.",
"title": ""
},
{
"docid": "34e6ff966bead1eb91d1f21209cf992c",
"text": "UR robotic arms are from a series of lightweight, fast, easy to program, flexible, and safe robotic arms with 6 degrees of freedom. The fairly open control structure and low level programming access with high control bandwidth have made them of interest for many researchers. This paper presents a complete set of mathematical kinematic and dynamic, Matlab, and Simmechanics models for the UR5 robot. The accuracy of the developed mathematical models are demonstrated through kinematic and dynamic analysis. The Simmechanics model is developed based on these models to provide high quality visualisation of this robot for simulation of it in Matlab environment. The models are developed for public access and readily usable in Matlab environment. A position control system has been developed to demonstrate the use of the models and for cross validation purpose.",
"title": ""
},
{
"docid": "148b7445ec2cd811d64fd81c61c20e02",
"text": "Using sensors to measure parameters of interest in rotating environments and communicating the measurements in real-time over wireless links, requires a reliable power source. In this paper, we have investigated the possibility to generate electric power locally by evaluating six different energy-harvesting technologies. The applicability of the technology is evaluated by several parameters that are important to the functionality in an industrial environment. All technologies are individually presented and evaluated, a concluding table is also summarizing the technologies strengths and weaknesses. To support the technology evaluation on a more theoretical level, simulations has been performed to strengthen our claims. Among the evaluated and simulated technologies, we found that the variable reluctance-based harvesting technology is the strongest candidate for further technology development for the considered use-case.",
"title": ""
},
{
"docid": "ad0fb1877ac6323a6f17f885295517bc",
"text": "In current business practice, an integrated approach to business and IT is indispensable. Take for example a company that needs to assess the impact of introducing a new product in its portfolio. This may require defining additional business processes, hiring extra personnel, changing the supporting applications, and augmenting the technological infrastructure to support the additional load of these applications. Perhaps this may even require a change of the organizational structure.",
"title": ""
},
{
"docid": "7fe6bba98c9d3bda246d5cc40c62c27d",
"text": "A large proportion of online comments present on public domains are usually constructive, however a significant proportion are toxic in nature. The comments contain lot of typos which increases the number of features manifold, making the ML model difficult to train. Considering the fact that the data scientists spend approximately 80% of their time in collecting, cleaning and organizing their data [1], we explored how much effort should we invest in the preprocessing (transformation) of raw comments before feeding it to the state-of-the-art classification models. With the help of four models on Jigsaw toxic comment classification data, we demonstrated that the training of model without any transformation produce relatively decent model. Applying even basic transformations, in some cases, lead to worse performance and should be applied with caution.",
"title": ""
},
{
"docid": "e9497a16e9d12ea837c7a0ec44d71860",
"text": "This article surveys existing and emerging disaggregation techniques for energy-consumption data and highlights signal features that might be used to sense disaggregated data in an easily installed and cost-effective manner.",
"title": ""
}
] |
scidocsrr
|
89e2fb2941f1b1656894e7cae810ffe8
|
Improving the grid power quality using virtual synchronous machines
|
[
{
"docid": "e100a602848dcba4a2e9575148486f9c",
"text": "The increasing integration of decentralized electrical sources is attended by problems with power quality, safe grid operation and grid stability. The concept of the Virtual Synchronous Machine (VISMA) [1] discribes an inverter to particularly connect renewable electrical sources to the grid that provides a wide variety of static an dynamic properties they are also suitable to achieve typical transient and oscillation phenomena in decentralized as well as weak grids. Furthermore in static operation, power plant controlled VISMA systems are capable to cope with critical surplus production of renewable electrical energy without additional communication systems only conducted by the grid frequency. This paper presents the dynamic properties \"damping\" and \"virtual mass\" of the VISMA and their contribution to the stabilization of the grid frequency and the attenuation of grid oscillations examined in an experimental grid set.",
"title": ""
}
] |
[
{
"docid": "84a32cdf9531b70d356ee06d4e2769df",
"text": "In this article we present mechanical measurements of three representative elastomers used in soft robotic systems: Sylgard 184, Smooth-Sil 950, and EcoFlex 00-30. Our aim is to demonstrate the effects of the nonlinear, time-dependent properties of these materials to facilitate improved dynamic modeling of soft robotic components. We employ uniaxial pull-to-failure tests, cyclic loading tests, and stress relaxation tests to provide a qualitative assessment of nonlinear behavior, batch-to-batch repeatability, and effects of prestraining, cyclic loading, and viscoelastic stress relaxation. Strain gauges composed of the elastomers embedded with a microchannel of conductive liquid (eutectic gallium–indium) are also tested to quantify the interaction between material behaviors and measured strain output. It is found that all of the materials tested exhibit the Mullins effect, where the material properties in the first loading cycle differ from the properties in all subsequent cycles, as well as response sensitivity to loading rate and production variations. Although the materials tested show stress relaxation effects, the measured output from embedded resistive strain gauges is found to be uncoupled from the changes to the material properties and is only a function of strain.",
"title": ""
},
{
"docid": "e06005f63efd6f8ca77f8b91d1b3b4a9",
"text": "Natural language generators for taskoriented dialogue must effectively realize system dialogue actions and their associated semantics. In many applications, it is also desirable for generators to control the style of an utterance. To date, work on task-oriented neural generation has primarily focused on semantic fidelity rather than achieving stylistic goals, while work on style has been done in contexts where it is difficult to measure content preservation. Here we present three different sequence-to-sequence models and carefully test how well they disentangle content and style. We use a statistical generator, PERSONAGE, to synthesize a new corpus of over 88,000 restaurant domain utterances whose style varies according to models of personality, giving us total control over both the semantic content and the stylistic variation in the training data. We then vary the amount of explicit stylistic supervision given to the three models. We show that our most explicit model can simultaneously achieve high fidelity to both semantic and stylistic goals: this model adds a context vector of 36 stylistic parameters as input to the hidden state of the encoder at each time step, showing the benefits of explicit stylistic supervision, even when the amount of training data is large.",
"title": ""
},
{
"docid": "14fe4e2fb865539ad6f767b9fc9c1ff5",
"text": "BACKGROUND\nFetal tachyarrhythmia may result in low cardiac output and death. Consequently, antiarrhythmic treatment is offered in most affected pregnancies. We compared 3 drugs commonly used to control supraventricular tachycardia (SVT) and atrial flutter (AF).\n\n\nMETHODS AND RESULTS\nWe reviewed 159 consecutive referrals with fetal SVT (n=114) and AF (n=45). Of these, 75 fetuses with SVT and 36 with AF were treated nonrandomly with transplacental flecainide (n=35), sotalol (n=52), or digoxin (n=24) as a first-line agent. Prenatal treatment failure was associated with an incessant versus intermittent arrhythmia pattern (n=85; hazard ratio [HR]=3.1; P<0.001) and, for SVT, with fetal hydrops (n=28; HR=1.8; P=0.04). Atrial flutter had a lower rate of conversion to sinus rhythm before delivery than SVT (HR=2.0; P=0.005). Cardioversion at 5 and 10 days occurred in 50% and 63% of treated SVT cases, respectively, but in only 25% and 41% of treated AF cases. Sotalol was associated with higher rates of prenatal AF termination than digoxin (HR=5.4; P=0.05) or flecainide (HR=7.4; P=0.03). If incessant AF/SVT persisted to day 5 (n=45), median ventricular rates declined more with flecainide (-22%) and digoxin (-13%) than with sotalol (-5%; P<0.001). Flecainide (HR=2.1; P=0.02) and digoxin (HR=2.9; P=0.01) were also associated with a higher rate of conversion of fetal SVT to a normal rhythm over time. No serious drug-related adverse events were observed, but arrhythmia-related mortality was 5%.\n\n\nCONCLUSION\nFlecainide and digoxin were superior to sotalol in converting SVT to a normal rhythm and in slowing both AF and SVT to better-tolerated ventricular rates and therefore might be considered first to treat significant fetal tachyarrhythmia.",
"title": ""
},
{
"docid": "9c35b7e3bf0ef3f3117c6ba8a9ad1566",
"text": "Stochastic gradient descent (SGD) is a widely used optimization algorithm in machine learning. In order to accelerate the convergence of SGD, a few advanced techniques have been developed in recent years, including variance reduction, stochastic coordinate sampling, and Nesterov’s acceleration method. Furthermore, in order to improve the training speed and/or leverage larger-scale training data, asynchronous parallelization of SGD has also been studied. Then, a natural question is whether these techniques can be seamlessly integrated with each other, and whether the integration has desirable theoretical guarantee on its convergence. In this paper, we provide our formal answer to this question. In particular, we consider the asynchronous parallelization of SGD, accelerated by leveraging variance reduction, coordinate sampling, and Nesterov’s method. We call the new algorithm asynchronous accelerated SGD (AASGD). Theoretically, we proved a convergence rate of AASGD, which indicates that (i) the three acceleration methods are complementary to each other and can make their own contributions to the improvement of convergence rate; (ii) asynchronous parallelization does not hurt the convergence rate, and can achieve considerable speedup under appropriate parameter setting. Empirically, we tested AASGD on a few benchmark datasets. The experimental results verified our theoretical findings and indicated that AASGD could be a highly effective and efficient algorithm for practical use.",
"title": ""
},
{
"docid": "2ed9db3d174d95e5b97c4fe26ca6c8ac",
"text": "One of the more startling effects of road related accidents is the economic and social burden they cause. Between 750,000 and 880,000 people died globally in road related accidents in 1999 alone, with an estimated cost of US$518 billion [11]. One way of combating this problem is to develop Intelligent Vehicles that are selfaware and act to increase the safety of the transportation system. This paper presents the development and application of a novel multiple-cue visual lane tracking system for research into Intelligent Vehicles (IV). Particle filtering and cue fusion technologies form the basis of the lane tracking system which robustly handles several of the problems faced by previous lane tracking systems such as shadows on the road, unreliable lane markings, dramatic lighting changes and discontinuous changes in road characteristics and types. Experimental results of the lane tracking system running at 15Hz will be discussed, focusing on the particle filter and cue fusion technology used.",
"title": ""
},
{
"docid": "3f5f8e75af4cc24e260f654f8834a76c",
"text": "The Balanced Scorecard (BSC) methodology focuses on major critical issues of modern business organisations: the effective measurement of corporate performance and the evaluation of the successful implementation of corporate strategy. Despite the increased adoption of the BSC methodology by numerous business organisations during the last decade, limited case studies concern non-profit organisations (e.g. public sector, educational institutions, healthcare organisations, etc.). The main aim of this study is to present the development of a performance measurement system for public health care organisations, in the context of BSC methodology. The proposed approach considers the distinguished characteristics of the aforementioned sector (e.g. lack of competition, social character of organisations, etc.). The proposed measurement system contains the most important financial performance indicators, as well as non-financial performance indicators that are able to examine the quality of the provided services, the satisfaction of internal and external customers, the selfimprovement system of the organisation and the ability of the organisation to adapt and change. These indicators play the role of Key Performance Indicators (KPIs), in the context of BSC methodology. The presented analysis is based on a MCDA approach, where the UTASTAR method is used in order to aggregate the marginal performance of KPIs. This approach is able to take into account the preferences of the management of the organisation regarding the achievement of the defined strategic objectives. The main results of the proposed approach refer to the evaluation of the overall scores for each one of the main dimensions of the BSC methodology (i.e. financial, customer, internal business process, and innovation-learning). These results are able to help the organisation to evaluate and revise its strategy, and generally to adopt modern management approaches in every day practise. & 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "b59e332c086a8ce6d6ddc0526b8848c7",
"text": "We propose Generative Adversarial Tree Search (GATS), a sample-efficient Deep Reinforcement Learning (DRL) algorithm. While Monte Carlo Tree Search (MCTS) is known to be effective for search and planning in RL, it is often sampleinefficient and therefore expensive to apply in practice. In this work, we develop a Generative Adversarial Network (GAN) architecture to model an environment’s dynamics and a predictor model for the reward function. We exploit collected data from interaction with the environment to learn these models, which we then use for model-based planning. During planning, we deploy a finite depth MCTS, using the learned model for tree search and a learned Q-value for the leaves, to find the best action. We theoretically show that GATS improves the bias-variance tradeoff in value-based DRL. Moreover, we show that the generative model learns the model dynamics using orders of magnitude fewer samples than the Q-learner. In non-stationary settings where the environment model changes, we find the generative model adapts significantly faster than the Q-learner to the new environment.",
"title": ""
},
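The GATS abstract combines a learned dynamics model and reward predictor with a Q-function at the leaves of a shallow search. The sketch below is a hedged illustration only: the paper uses a sampled MCTS and a GAN-based model, whereas here `dynamics`, `reward` and `q_value` are stand-ins for those learned components and the search is exhaustive over a small discrete action set.

    def gats_plan(state, depth, actions, dynamics, reward, q_value, gamma=0.99):
        # Depth-limited lookahead: roll the learned model forward, bootstrap with Q at the leaves.
        if depth == 0:
            return max(q_value(state, a) for a in actions)
        best = float("-inf")
        for a in actions:
            next_state = dynamics(state, a)        # model-generated successor state
            value = reward(state, a) + gamma * gats_plan(
                next_state, depth - 1, actions, dynamics, reward, q_value, gamma)
            best = max(best, value)
        return best

    def best_action(state, depth, actions, dynamics, reward, q_value, gamma=0.99):
        return max(actions, key=lambda a: reward(state, a) + gamma * gats_plan(
            dynamics(state, a), depth - 1, actions, dynamics, reward, q_value, gamma))

The point of the sketch is the division of labour: the generative model supplies short rollouts while the learned Q-value truncates the search, which is where the claimed bias-variance trade-off comes from.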
{
"docid": "4f2fa764996d666762e0b6ba01a799a2",
"text": "A critical assumption of the Technology Acceptance Model (TAM) is that its belief constructs - perceived ease of use (PEOU) and perceived usefulness (PU) - fully mediate the influence of external variables on IT usage behavior. If this assumption is true, researchers can effectively \"assume away\" the effects of broad categories of external variables, those relating to the specific task, the technology, and user differences. One recent study did indeed find that belief constructs fully mediated individual differences, and its authors suggest that further studies with similar results could pave the way for simpler acceptance models that ignore such differences. To test the validity of these authors' results, we conducted a similar study to determine the effect of staff seniority, age, and education level on usage behavior. Our study involved 106 professional and administrative staff in the IT division of a large manufacturing company who voluntarily use email and word processing. We found that these individual user differences have significant direct effects on both the frequency and volume of usage. These effects are beyond the indirect effects as mediated through the TAM belief constructs. Thus, rather than corroborating the recent study, our findings underscore the importance of users' individual differences and suggest that TAM's belief constructs are accurate but incomplete predictors of usage behavior.",
"title": ""
},
{
"docid": "74a91327b85ac9681f618d4ba6a86151",
"text": "In this paper, a miniaturized planar antenna with enhanced bandwidth is designed for the ISM 433 MHz applications. The antenna is realized by cascading two resonant structures with meander lines, thus introducing two different radiating branches to realize two neighboring resonant frequencies. The techniques of shorting pin and novel ground plane are adopted for bandwidth enhancement. Combined with these structures, a novel antenna with a total size of 23 mm × 49.5 mm for the ISM band application is developed and fabricated. Measured results show that the proposed antenna has good performance with the -10 dB impedance bandwidth is about 12.5 MHz and the maximum gain is about -2.8 dBi.",
"title": ""
},
{
"docid": "931af201822969eb10871ccf10d47421",
"text": "Latent tree learning models represent sentences by composing their words according to an induced parse tree, all based on a downstream task. These models often outperform baselines which use (externally provided) syntax trees to drive the composition order. This work contributes (a) a new latent tree learning model based on shift-reduce parsing, with competitive downstream performance and non-trivial induced trees, and (b) an analysis of the trees learned by our shift-reduce model and by a chart-based model.",
"title": ""
},
{
"docid": "b6ad0aeb5efbde0a9b340e88e68c884a",
"text": "Conservative non-pharmacological evidence-based management options for Chronic Obstructive Pulmonary Disease (COPD) primarily focus on developing physiological capacity. With co-morbidities, including those of the musculoskeletal system, contributing to the overall severity of the disease, further research was needed. This thesis presents a critical review of the active and passive musculoskeletal management approaches currently used in COPD. The evidence for using musculoskeletal interventions in COPD management was inconclusive. Whilst an evaluation of musculoskeletal changes and their influence on pulmonary function was required, it was apparent that this would necessitate a significant programme of research. In view of this a narrative review of musculoskeletal changes in the cervico-thoracic region was undertaken. With a paucity of literature exploring chest wall flexibility and recent clinical guidelines advocating research into thoracic mobility exercises in COPD, a focus on thoracic spine motion analysis literature was taken. On critically reviewing the range of current in vivo measurement techniques it was evident that soft tissue artefact was a potential source of measurement error. As part of this thesis, soft tissue artefact during thoracic spine axial rotation was quantified. Given the level was deemed unacceptable, an alternative approach was developed and tested for intra-rater reliability. This technique, in conjunction with a range of other measures, was subsequently used to evaluate cervico-thoracic musculoskeletal changes and their relationship with pulmonary function in COPD. In summary, subjects with COPD were found to have reduced spinal motion, altered posture and increased muscle sensitivity compared to controls. Reduced spinal motion and altered neck posture were associated with reduced pulmonary function and having diagnosed COPD. Results from this thesis provide evidence to support inception of a clinical trial of flexibility or mobility exercises",
"title": ""
},
{
"docid": "91c6903902eb4edc3d9cf2c3dec66d9e",
"text": "WordNets – lexical databases in which groups of synonyms are arranged according to the semantic relationships between them – are crucial resources in semantically-focused natural language processing tasks, but are extremely costly and labour intensive to produce. In languages besides English, this has led to growing interest in constructing and extending WordNets automatically, as an alternative to producing them from scratch. This paper describes various approaches to constructing WordNets automatically – by leveraging traditional lexical resources and newer trends such as word embeddings – and also offers a discussion of the issues affecting the evaluation of automatically constructed WordNets.",
"title": ""
},
{
"docid": "20746cd01ff3b67b204cd2453f1d8ecb",
"text": "Quantification of human group-behavior has so far defied an empirical, falsifiable approach. This is due to tremendous difficulties in data acquisition of social systems. Massive multiplayer online games (MMOG) provide a fascinating new way of observing hundreds of thousands of simultaneously socially interacting individuals engaged in virtual economic activities. We have compiled a data set consisting of practically all actions of all players over a period of 3 years from a MMOG played by 300,000 people. This largescale data set of a socio-economic unit contains all social and economic data from a single and coherent source. Players have to generate a virtual income through economic activities to ‘survive’ and are typically engaged in a multitude of social activities offered within the game. Our analysis of high-frequency log files focuses on three types of social networks, and tests a series of social-dynamics hypotheses. In particular we study the structure and dynamics of friend-, enemyand communication networks. We find striking differences in topological structure between positive (friend) and negative (enemy) tie networks. All networks confirm the recently observed phenomenon of network densification. We propose two approximate social laws in communication networks, the first expressing betweenness centrality as the inverse square of the overlap, the second relating communication strength to the cube of the overlap. These empirical laws provide strong quantitative evidence for the Weak ties hypothesis of Granovetter. Further, the analysis of triad significance profiles validates well-established assertions from social balance theory. We find overrepresentation (underrepresentation) of complete (incomplete) triads in networks of positive ties, and vice versa for networks of negative ties. Empirical transition probabilities between triad classes provide evidence for triadic closure with extraordinarily high precision. For the first time we provide empirical results for large-scale networks of negative social ties. Whenever possible we compare our findings with data from non-virtual human groups and provide further evidence that online game communities serve as a valid model for a wide class of human societies. With this setup we demonstrate the feasibility for establishing a ‘socio-economic laboratory’ which allows to operate at levels of precision approaching those of the natural sciences. All data used in this study is fully anonymized; the authors have the written consent to publish from the legal department of the Medical University of Vienna. © 2010 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "f6193fa2ac2ea17c7710241a42d34a33",
"text": "BACKGROUND\nThe most common microcytic and hypochromic anemias are iron deficiency anemia and thalassemia trait. Several indices to discriminate iron deficiency anemia from thalassemia trait have been proposed as simple diagnostic tools. However, some of the best discriminative indices use parameters in the formulas that are only measured in modern counters and are not always available in small laboratories. The development of an index with good diagnostic accuracy based only on parameters derived from the blood cell count obtained using simple counters would be useful in the clinical routine. Thus, the aim of this study was to develop and validate a discriminative index to differentiate iron deficiency anemia from thalassemia trait.\n\n\nMETHODS\nTo develop and to validate the new formula, blood count data from 106 (thalassemia trait: 23 and iron deficiency: 83) and 185 patients (thalassemia trait: 30 and iron deficiency: 155) were used, respectively. Iron deficiency, β-thalassemia trait and α-thalassemia trait were confirmed by gold standard tests (low serum ferritin for iron deficiency anemia, HbA2>3.5% for β-thalassemia trait and using molecular biology for the α-thalassemia trait).\n\n\nRESULTS\nThe sensitivity, specificity, efficiency, Youden's Index, area under receiver operating characteristic curve and Kappa coefficient of the new formula, called the Matos & Carvalho Index were 99.3%, 76.7%, 95.7%, 76.0, 0.95 and 0.83, respectively.\n\n\nCONCLUSION\nThe performance of this index was excellent with the advantage of being solely dependent on the mean corpuscular hemoglobin concentration and red blood cell count obtained from simple automatic counters and thus may be of great value in underdeveloped and developing countries.",
"title": ""
},
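The abstract above reports sensitivity, specificity, efficiency and Youden's index for the proposed discriminator. The index formula itself (a function of MCHC and red blood cell count) is not reproduced in the passage, so the sketch below only shows how those evaluation metrics follow from a 2x2 confusion matrix; the variable names are assumptions.

    def screening_metrics(tp, fn, tn, fp):
        # tp/fn are counted on whichever class (iron deficiency or thalassemia trait)
        # is defined as 'positive' for the discriminator.
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        efficiency = (tp + tn) / (tp + fn + tn + fp)   # overall accuracy
        youden = sensitivity + specificity - 1.0       # often reported as a percentage
        return sensitivity, specificity, efficiency, youden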
{
"docid": "950a6a611f1ceceeec49534c939b4e0f",
"text": "Often signals and system parameters are most conveniently represented as complex-valued vectors. This occurs, for example, in array processing [1], as well as in communication systems [7] when processing narrowband signals using the equivalent complex baseband representation [2]. Furthermore, in many important applications one attempts to optimize a scalar real-valued measure of performance over the complex parameters defining the signal or system of interest. This is the case, for example, in LMS adaptive filtering where complex filter coefficients are adapted on line. To effect this adaption one attempts to optimize the performance measure by adjustments of the coefficients along its gradient direction [16, 23].",
"title": ""
},
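Complex LMS adaptive filtering, mentioned above, is the canonical example of optimizing a real cost over complex parameters: the weight update follows the conjugate (Wirtinger) gradient of E|d - w^H x|^2. Below is a minimal sketch assuming a simple tapped-delay-line filter; filter order and step size are arbitrary illustration values.

    import numpy as np

    def complex_lms(x, d, order=4, mu=0.01):
        # x: complex input samples, d: complex desired samples (same length).
        w = np.zeros(order, dtype=complex)
        y = np.zeros(len(d), dtype=complex)
        for n in range(order, len(d)):
            xn = np.asarray(x[n - order:n][::-1])   # most recent samples first
            y[n] = np.vdot(w, xn)                   # filter output w^H x
            e = d[n] - y[n]                         # error against the desired signal
            w = w + mu * xn * np.conj(e)            # w <- w + mu * x * e*
        return w, y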
{
"docid": "bca81a5b34376e5a6090e528a583b4f4",
"text": "There has been considerable debate in the literature about the relative merits of information processing versus dynamical approaches to understanding cognitive processes. In this article, we explore the relationship between these two styles of explanation using a model agent evolved to solve a relational categorization task. Specifically, we separately analyze the operation of this agent using the mathematical tools of information theory and dynamical systems theory. Information-theoretic analysis reveals how task-relevant information flows through the system to be combined into a categorization decision. Dynamical analysis reveals the key geometrical and temporal interrelationships underlying the categorization decision. Finally, we propose a framework for directly relating these two different styles of explanation and discuss the possible implications of our analysis for some of the ongoing debates in cognitive science.",
"title": ""
},
{
"docid": "6f4479d224c1546040bee39d50eaba55",
"text": "Bag-of-words (BOW) is now the most popular way to model text in statistical machine learning approaches in sentiment analysis. However, the performance of BOW sometimes remains limited due to some fundamental deficiencies in handling the polarity shift problem. We propose a model called dual sentiment analysis (DSA), to address this problem for sentiment classification. We first propose a novel data expansion technique by creating a sentiment-reversed review for each training and test review. On this basis, we propose a dual training algorithm to make use of original and reversed training reviews in pairs for learning a sentiment classifier, and a dual prediction algorithm to classify the test reviews by considering two sides of one review. We also extend the DSA framework from polarity (positive-negative) classification to 3-class (positive-negative-neutral) classification, by taking the neutral reviews into consideration. Finally, we develop a corpus-based method to construct a pseudo-antonym dictionary, which removes DSA's dependency on an external antonym dictionary for review reversion. We conduct a wide range of experiments including two tasks, nine datasets, two antonym dictionaries, three classification algorithms, and two types of features. The results demonstrate the effectiveness of DSA in supervised sentiment classification.",
"title": ""
},
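The data-expansion step of DSA builds a sentiment-reversed copy of each review. The following is a rough sketch of that reversal, assuming a token list and any antonym dictionary (for example the corpus-built pseudo-antonym dictionary mentioned above); real negation handling in the paper is more involved.

    def reverse_review(tokens, antonyms):
        reversed_tokens = []
        skip_antonym = False
        for i, tok in enumerate(tokens):
            if skip_antonym:
                reversed_tokens.append(tok)          # negated word kept as-is
                skip_antonym = False
                continue
            if tok.lower() in ("not", "n't") and i + 1 < len(tokens):
                skip_antonym = True                  # drop the negation: "not good" -> "good"
                continue
            reversed_tokens.append(antonyms.get(tok.lower(), tok))
        return reversed_tokens

    # reverse_review("this is a great phone".split(), {"great": "terrible"})
    # -> ['this', 'is', 'a', 'terrible', 'phone']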
{
"docid": "73f6ba4ad9559cd3c6f7a88223e4b556",
"text": "A recurring problem faced when training neural networks is that there is typically not enough data to maximize the generalization capability of deep neural networks. There are many techniques to address this, including data augmentation, dropout, and transfer learning. In this paper, we introduce an additional method, which we call smart augmentation and we show how to use it to increase the accuracy and reduce over fitting on a target network. Smart augmentation works, by creating a network that learns how to generate augmented data during the training process of a target network in a way that reduces that networks loss. This allows us to learn augmentations that minimize the error of that network. Smart augmentation has shown the potential to increase accuracy by demonstrably significant measures on all data sets tested. In addition, it has shown potential to achieve similar or improved performance levels with significantly smaller network sizes in a number of tested cases.",
"title": ""
}
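Smart augmentation trains an augmenter network jointly with the target network so that generated samples reduce the target's loss. The following is a heavily simplified sketch of one joint update in PyTorch, not the authors' code: the actual method also uses same-class sample pairs and an additional similarity loss, and `aug_net`, `target_net` and the two optimizers are assumed to be defined by the caller.

    import torch
    import torch.nn.functional as F

    def smart_aug_step(aug_net, target_net, x, y, opt_aug, opt_target):
        x_aug = aug_net(x)                                   # learned augmentation of the batch
        logits = target_net(torch.cat([x, x_aug], dim=0))    # train on real + generated samples
        loss = F.cross_entropy(logits, torch.cat([y, y], dim=0))
        opt_aug.zero_grad()
        opt_target.zero_grad()
        loss.backward()                                      # gradients flow into both networks
        opt_aug.step()                                       # augmenter learns to lower this loss
        opt_target.step()
        return float(loss)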
] |
scidocsrr
|
c9d8f73da91b19104e3ab129444342ec
|
Sentence Ordering and Coherence Modeling using Recurrent Neural Networks
|
[
{
"docid": "ee46ee9e45a87c111eb14397c99cd653",
"text": "This is a review of unsupervised learning applied to videos with the aim of learning visual representations. We look at different realizations of the notion of temporal coherence across various models. We try to understand the challenges being faced, the strengths and weaknesses of different approaches and identify directions for future work. Unsupervised Learning of Visual Representations using Videos Nitish Srivastava Department of Computer Science, University of Toronto",
"title": ""
},
{
"docid": "e7ac73f581ae7799021374ddd3e4d3a2",
"text": "Table: Coherence evaluation results on Discrimination and Insertion tasks. † indicates a neural model is significantly superior to its non-neural counterpart with p-value < 0.01. Discr. Ins. Acc F1 Random 50.00 50.00 12.60 Graph-based (G&S) 64.23 65.01 11.93 Dist. sentence (L&H) 77.54 77.54 19.32 Grid-all nouns (E&C) 81.58 81.60 22.13 Extended Grid (E&C) 84.95 84.95 23.28 Grid-CNN 85.57† 85.57† 23.12 Extended Grid-CNN 88.69† 88.69† 25.95†",
"title": ""
}
] |
[
{
"docid": "dbf26735e4bba4f1259a876137dd6f0c",
"text": "A complex waveguide to microstrip line transition is proposed for system-on-package (SOP) on low temperature cofired ceramic (LTCC). Transition is designed to operate around 60 GHz and is used to feed the 16 elements microstrip antenna array. Transition includes waveguide to stripline transition, stripline to embedded microstrip line transition, and finally embedded microstrip line to microstrip line transition. Return loss characteristics for single transitions are presented. For the assembled complex transition 10-dB return loss bandwidth is from 52 GHz up to 75 GHz. System with antenna array and feed line has gain more then 17 dB. Analysis has been performed using full-wave simulation software.",
"title": ""
},
{
"docid": "bc58f2f9f6f5773f5f8b2696d9902281",
"text": "Software development is a complicated process and requires careful planning to produce high quality software. In large software development projects, release planning may involve a lot of unique challenges. Due to time, budget and some other constraints, potentially there are many problems that may possibly occur. Subsequently, project managers have been trying to identify and understand release planning, challenges and possible resolutions which might help them in developing more effective and successful software products. This paper presents the findings from an empirical study which investigates release planning challenges. It takes a qualitative approach using interviews and observations with practitioners and project managers at five large software banking projects in Informatics Services Corporation (ISC) in Iran. The main objective of this study is to explore and increase the understanding of software release planning challenges in several software companies in a developing country. A number of challenges were elaborated and discussed in this study within the domain of software banking projects. These major challenges are classified into two main categories: the human-originated including people cooperation, disciplines and abilities; and the system-oriented including systematic approaches, resource constraints, complexity, and interdependency among the systems.",
"title": ""
},
{
"docid": "028eb67d71987c33c4a331cf02c6ff00",
"text": "We explore the feasibility of using crowd workers from Amazon Mechanical Turk to identify and rank sidewalk accessibility issues from a manually curated database of 100 Google Street View images. We examine the effect of three different interactive labeling interfaces (Point, Rectangle, and Outline) on task accuracy and duration. We close the paper by discussing limitations and opportunities for future work.",
"title": ""
},
{
"docid": "1000855a500abc1f8ef93d286208b600",
"text": "Nowadays, the most widely used variable speed machine for wind turbine above 1MW is the doubly fed induction generator (DFIG). As the wind power penetration continues to increase, wind turbines are required to provide Low Voltage Ride-Through (LVRT) capability. Crowbars are commonly used to protect the power converters during voltage dips. Its main drawback is that the DFIG absorbs reactive power from the grid during grid faults. This paper proposes an improved control strategy for the crowbar protection to reduce its operation time. And a simple demagnetization method is adopted to decrease the oscillations of the transient current. Moreover, reactive power can be provided to assist the recovery of the grid voltage. Simulation results show the effectiveness of the proposed control schemes.",
"title": ""
},
{
"docid": "959a43b6b851a4a255466296efac7299",
"text": "Technology in football has been debated by pundits, players and fans all over the world for the past decade. FIFA has recently commissioned the use of ‘Hawk-Eye’ and ‘Goal Ref’ goal line technology systems at the 2014 World Cup in Brazil. This paper gives an in depth evaluation of the possible technologies that could be used in football and determines the potential benefits and implications these systems could have on the officiating of football matches. The use of technology in other sports is analyzed to come to a conclusion as to whether officiating technology should be used in football. Will football be damaged by the loss of controversial incidents such as Frank Lampard’s goal against Germany at the 2010 World Cup? Will cost, accuracy and speed continue to prevent the use of officiating technology in football? Time will tell, but for now, any advancement in the use of technology in football will be met by some with discontent, whilst others see it as moving the sport into the 21 century.",
"title": ""
},
{
"docid": "154c5c644171c63647e5a1c83ed06440",
"text": "Recommender System are new generation internet tool that help user in navigating through information on the internet and receive information related to their preferences. Although most of the time recommender systems are applied in the area of online shopping and entertainment domains like movie and music, yet their applicability is being researched upon in other area as well. This paper presents an overview of the Recommender Systems which are currently working in the domain of online book shopping. This paper also proposes a new book recommender system that combines user choices with not only similar users but other users as well to give diverse recommendation that change over time. The overall architecture of the proposed system is presented and its implementation with a prototype design is described. Lastly, the paper presents empirical evaluation of the system based on a survey reflecting the impact of such diverse recommendations on the user choices. Key-Words: Recommender system; Collaborative filtering; Content filtering; Data mining; Time; Book",
"title": ""
},
{
"docid": "bd3cedfd42e261e9685cf402fc44c914",
"text": "OBJECTIVES\nThe objective of this study was to compile existing scientific evidence regarding the effects of essential oils (EOs) administered via inhalation for the alleviation of nausea and vomiting.\n\n\nMETHODS\nCINAHL, PubMed, and EBSCO Host and Science Direct databases were searched for articles related to the use of EOs and/or aromatherapy for nausea and vomiting. Only articles using English as a language of publication were included. Eligible articles included all forms of evidence (nonexperimental, experimental, case report). Interventions were limited to the use of EOs by inhalation of their vapors to treat symptoms of nausea and vomiting in various conditions regardless of age group. Studies where the intervention did not utilize EOs or were concerned with only alcohol inhalation and trials that combined the use of aromatherapy with other treatments (massage, relaxations, or acupressure) were excluded.\n\n\nRESULTS\nFive (5) articles met the inclusion criteria encompassing trials with 328 respondents. Their results suggest that the inhaled vapor of peppermint or ginger essential oils not only reduced the incidence and severity of nausea and vomiting but also decreased antiemetic requirements and consequently improved patient satisfaction. However, a definitive conclusion could not be drawn due to methodological flaws in the existing research articles and an acute lack of additional research in this area.\n\n\nCONCLUSIONS\nThe existing evidence is encouraging but yet not compelling. Hence, further well-designed large trials are needed before confirmation of EOs effectiveness in treating nausea and vomiting can be strongly substantiated.",
"title": ""
},
{
"docid": "0ffe744bfa62726930406065399e6bca",
"text": "In this paper we present an annotated corpus created with the aim of analyzing the informative behaviour of emoji – an issue of importance for sentiment analysis and natural language processing. The corpus consists of 2475 tweets all containing at least one emoji, which has been annotated using one of the three possible classes: Redundant, Non Redundant, and Non Redundant + POS. We explain how the corpus was collected, describe the annotation procedure and the interface developed for the task. We provide an analysis of the corpus, considering also possible predictive features, discuss the problematic aspects of the annotation, and suggest future improvements.",
"title": ""
},
{
"docid": "9e0186c53e0a55744f60074145d135e3",
"text": "Two new low-power, and high-performance 1bit Full Adder cells are proposed in this paper. These cells are based on low-power XOR/XNOR circuit and Majority-not gate. Majority-not gate, which produces Cout (Output Carry), is implemented with an efficient method, using input capacitors and a static CMOS inverter. This kind of implementation benefits from low power consumption, a high degree of regularity and simplicity. Eight state-of-the-art 1-bit Full Adders and two proposed Full Adders are simulated with HSPICE using 0.18μm CMOS technology at several supply voltages ranging from 2.4v down to 0.8v. Although low power consumption is targeted in implementation of our designs, simulation results demonstrate great improvement in terms of power consumption and also PDP.",
"title": ""
},
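The logical behaviour described above reduces to Sum = A xor B xor Cin with the carry given by the majority function, which the proposed cells realise through a capacitive majority-not stage. The following is a bit-level behavioural check in Python only; it says nothing about the transistor-level implementation.

    def full_adder(a, b, cin):
        s = a ^ b ^ cin                    # Sum = A xor B xor Cin
        cout = int(a + b + cin >= 2)       # Cout = Majority(A, B, Cin)
        return s, cout

    # exhaustive check of all eight input combinations: a + b + cin == s + 2*cout
    assert all(a + b + c == sum_bit + 2 * carry
               for a in (0, 1) for b in (0, 1) for c in (0, 1)
               for sum_bit, carry in [full_adder(a, b, c)])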
{
"docid": "de8661c2e63188464de6b345bfe3a908",
"text": "Modern computer games show potential not just for engaging and entertaining users, but also in promoting learning. Game designers employ a range of techniques to promote long-term user engagement and motivation. These techniques are increasingly being employed in so-called serious games, games that have nonentertainment purposes such as education or training. Although such games share the goal of AIED of promoting deep learner engagement with subject matter, the techniques employed are very different. Can AIED technologies complement and enhance serious game design techniques, or does good serious game design render AIED techniques superfluous? This paper explores these questions in the context of the Tactical Language Training System (TLTS), a program that supports rapid acquisition of foreign language and cultural skills. The TLTS combines game design principles and game development tools with learner modelling, pedagogical agents, and pedagogical dramas. Learners carry out missions in a simulated game world, interacting with non-player characters. A virtual aide assists the learners if they run into difficulties, and gives performance feedback in the context of preparatory exercises. Artificial intelligence plays a key role in controlling the behaviour of the non-player characters in the game; intelligent tutoring provides supplementary scaffolding.",
"title": ""
},
{
"docid": "135e3fa3b9487255b6ee67465b645fc9",
"text": "In the past few decades, the concepts of personalization in the forms of recommender system, information filtering, or customization not only are quickly accepted by the public but also draw considerable attention from enterprises. Therefore, a number of studies based on personalized recommendations have subsequently been produced. Most of these studies apply on E-commerce, website, and information, and some of them apply on teaching, tourism, and TV programs. Because the recent rise of Web 3.0 emphasizes on providing more complete personal information and service through the efficient method, the recommender application gradually develops towards mobile commerce, mobile information, or social network. Many studies have adopted Content-Based (CB), Collaborative Filtering (CF), and hybrid approach as the main recommender style in the analysis. There are few or even no studies that have emphasized on the review of recommendation recently. For this reason, this study aims to collect, analyze, and review the research topics of recommender systems and their application in the past few decades. This study collects the research types and from various researchers. The literature arrangement of this study can help researchers to understand the recommender system researches in a clear sense and in a short time.",
"title": ""
},
{
"docid": "7c8948433cf6c0d35fe29ccfac75d5b5",
"text": "The EMIB dense MCP technology is a new packaging paradigm that provides localized high density interconnects between two or more die on an organic package substrate, opening up new opportunities for heterogeneous on-package integration. This paper provides an overview of EMIB architecture and package capabilities. First, EMIB is compared with other approaches for high density interconnects. Some of the inherent advantages of the technology, such as the ability to cost effectively implement high density interconnects without requiring TSVs, and the ability to support the integration of many large die in an area much greater than the typical reticle size limit are highlighted. Next, the overall EMIB architecture envelope is discussed along with its constituent building blocks, the package construction with the embedded bridge, die to package interconnect features. Next, the EMIB assembly process is described at a high level. Finally, high bandwidth signaling between the die is discussed and the link bandwidth envelope is quantified.",
"title": ""
},
{
"docid": "6e1eee6355865bffd6af4c5c1d4a5d31",
"text": "Most of the prior work on multi-agent reinforcement learning (MARL) achieves optimal collaboration by directly learning a policy for each agent to maximize a common reward. In this paper, we aim to address this from a different angle. In particular, we consider scenarios where there are self-interested agents (i.e., worker agents) which have their own minds (preferences, intentions, skills, etc.) and can not be dictated to perform tasks they do not want to do. For achieving optimal coordination among these agents, we train a super agent (i.e., the manager) to manage them by first inferring their minds based on both current and past observations and then initiating contracts to assign suitable tasks to workers and promise to reward them with corresponding bonuses so that they will agree to work together. The objective of the manager is to maximize the overall productivity as well as minimize payments made to the workers for ad-hoc worker teaming. To train the manager, we propose Mind-aware Multi-agent Management Reinforcement Learning (MRL), which consists of agent modeling and policy learning. We have evaluated our approach in two environments, Resource Collection and Crafting, to simulate multi-agent management problems with various task settings and multiple designs for the worker agents. The experimental results have validated the effectiveness of our approach in modeling worker agents’ minds online, and in achieving optimal ad-hoc teaming with good generalization and fast adaptation.1",
"title": ""
},
{
"docid": "36b0ace93b5a902966e96e4649d83b98",
"text": "We introduce a novel matching algorithm, called DeepMatching, to compute dense correspondences between images. DeepMatching relies on a hierarchical, multi-layer, correlational architecture designed for matching images and was inspired by deep convolutional approaches. The proposed matching algorithm can handle non-rigid deformations and repetitive textures and efficiently determines dense correspondences in the presence of significant changes between images. We evaluate the performance of DeepMatching, in comparison with state-of-the-art matching algorithms, on the Mikolajczyk (Mikolajczyk et al. A comparison of affine region detectors, 2005), the MPI-Sintel (Butler et al. A naturalistic open source movie for optical flow evaluation, 2012) and the Kitti (Geiger et al. Vision meets robotics: The KITTI dataset, 2013) datasets. DeepMatching outperforms the state-of-the-art algorithms and shows excellent results in particular for repetitive textures. We also apply DeepMatching to the computation of optical flow, called DeepFlow, by integrating it in the large displacement optical flow (LDOF) approach of Brox and Malik (Large displacement optical flow: descriptor matching in variational motion estimation, 2011). Additional robustness to large displacements and complex motion is obtained thanks to our matching approach. DeepFlow obtains competitive performance on public benchmarks for optical flow estimation.",
"title": ""
},
{
"docid": "cd29357697fafb5aa5b66807f746b682",
"text": "Autonomous path planning algorithms are significant to planetary exploration rovers, since relying on commands from Earth will heavily reduce their efficiency of executing exploration missions. This paper proposes a novel learning-based algorithm to deal with global path planning problem for planetary exploration rovers. Specifically, a novel deep convolutional neural network with double branches (DB-CNN) is designed and trained, which can plan path directly from orbital images of planetary surfaces without implementing environment mapping. Moreover, the planning procedure requires no prior knowledge about planetary surface terrains. Finally, experimental results demonstrate that DBCNN achieves better performance on global path planning and faster convergence during training compared with the existing Value Iteration Network (VIN).",
"title": ""
},
{
"docid": "a620202abaa0f11d2d324b05a29986dd",
"text": "Haze is an atmospheric phenomenon that significantly degrades the visibility of outdoor scenes. This is mainly due to the atmosphere particles that absorb and scatter the light. This paper introduces a novel single image approach that enhances the visibility of such degraded images. Our method is a fusion-based strategy that derives from two original hazy image inputs by applying a white balance and a contrast enhancing procedure. To blend effectively the information of the derived inputs to preserve the regions with good visibility, we filter their important features by computing three measures (weight maps): luminance, chromaticity, and saliency. To minimize artifacts introduced by the weight maps, our approach is designed in a multiscale fashion, using a Laplacian pyramid representation. We are the first to demonstrate the utility and effectiveness of a fusion-based technique for dehazing based on a single degraded image. The method performs in a per-pixel fashion, which is straightforward to implement. The experimental results demonstrate that the method yields results comparative to and even better than the more complex state-of-the-art techniques, having the advantage of being appropriate for real-time applications.",
"title": ""
},
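The fusion step described above blends the two derived inputs according to normalised per-pixel weight maps. Below is a single-scale sketch: the paper performs the blend in a Laplacian/Gaussian pyramid to avoid halo artifacts, which is omitted here, and the inputs are assumed to be float images in [0, 1] with the weight maps supplied by the caller.

    import numpy as np

    def naive_fusion(inputs, weight_maps, eps=1e-6):
        # inputs: list of HxWx3 images; weight_maps: list of HxW maps (one per input).
        weights = [w + eps for w in weight_maps]            # avoid division by zero
        total = sum(weights)
        weights = [w / total for w in weights]              # per-pixel normalisation
        return sum(w[..., None] * img for w, img in zip(weights, inputs))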
{
"docid": "3f904e591a46f770e9a1425e6276041b",
"text": "Several decades of research in underwater communication and networking has resulted in novel and innovative solutions to combat challenges such as long delay spread, rapid channel variation, significant Doppler, high levels of non-Gaussian noise, limited bandwidth and long propagation delays. Many of the physical layer solutions can be tested by transmitting carefully designed signals, recording them after passing through the underwater channel, and then processing them offline using appropriate algorithms. However some solutions requiring online feedback to the transmitter cannot be tested without real-time processing capability in the field. Protocols and algorithms for underwater networking also require real-time communication capability for experimental testing. Although many modems are commercially available, they provide limited flexibility in physical layer signaling and sensing. They also provide limited control over the exact timing of transmission and reception, which can be critical for efficient implementation of some networking protocols with strict time constraints. To aid in our physical and higher layer research, we developed the UNET-2 software-defined modem with flexibility and extensibility as primary design objectives. We present the hardware and software architecture of the modem, focusing on the flexibility and adaptability that it provides researchers with. We describe the network stack that the modem uses, and show how it can also be used as a powerful tool for underwater network simulation. We illustrate the flexibility provided by the modem through a number of practical examples and experiments.",
"title": ""
},
{
"docid": "ad7a5bccf168ac3b13e13ccf12a94f7d",
"text": "As one of the most popular social media platforms today, Twitter provides people with an effective way to communicate and interact with each other. Through these interactions, influence among users gradually emerges and changes people's opinions. Although previous work has studied interpersonal influence as the probability of activating others during information diffusion, they ignore an important fact that information diffusion is the result of influence, while dynamic interactions among users produce influence. In this article, the authors propose a novel temporal influence model to learn users' opinion behaviors regarding a specific topic by exploring how influence emerges during communications. The experiments show that their model performs better than other influence models with different influence assumptions when predicting users' future opinions, especially for the users with high opinion diversity.",
"title": ""
},
{
"docid": "c3b652b561e38a51f1fa40483532e22d",
"text": "Vertical integration refers to one of the options that firms make decisions in the supply of oligopoly market. It was impacted by competition game between upstream firms and downstream firms. Based on the game theory and other previous studies,this paper built a dynamic game model of two-stage competition between the oligopoly suppliers of upstream and the vertical integration firms of downstream manufacturers. In the first stage, it analyzed the influences on integration degree by prices of intermediate goods when an oligopoly firm engages in a Bertrand-game if outputs are not limited. Moreover, it analyzed the influences on integration degree by price-diverge of intermediate goods if outputs were not restricted within a Bertrand Duopoly game equilibrium. In the second stage, there is a Cournot duopoly game between downstream specialization firms and downstream integration firms. Their marginal costs are affected by the integration degree and their yields are affected either under indifferent manufacture conditions. Finally, prices of intermediate goods are determined by the competition of upstream firms, the prices of intermediate goods affect the changes of integration degree between upstream firms and downstream firms. The conclusions can be referenced to decision-making of integration in market competition.",
"title": ""
},
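The second-stage interaction described above is a Cournot duopoly. As a worked illustration only (a textbook linear-demand case, not the paper's exact model), take inverse demand P = a - b*(q1 + q2) and constant marginal costs c1, c2 that could reflect different degrees of integration.

    def cournot_equilibrium(a, b, c1, c2):
        # Closed-form Nash quantities from the two best-response functions
        # qi = (a - ci - b*qj) / (2b).
        q1 = (a - 2 * c1 + c2) / (3 * b)
        q2 = (a - 2 * c2 + c1) / (3 * b)
        price = a - b * (q1 + q2)          # equals (a + c1 + c2) / 3
        return q1, q2, price

Lowering c1 relative to c2 (for instance, through integration that removes the intermediate-good margin) raises q1 and lowers q2, which is the kind of mechanism the abstract appeals to.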
{
"docid": "332d517d07187d2403a672b08365e5ef",
"text": "Please cite this article in press as: C. Galleguillos doi:10.1016/j.cviu.2010.02.004 The goal of object categorization is to locate and identify instances of an object category within an image. Recognizing an object in an image is difficult when images include occlusion, poor quality, noise or background clutter, and this task becomes even more challenging when many objects are present in the same scene. Several models for object categorization use appearance and context information from objects to improve recognition accuracy. Appearance information, based on visual cues, can successfully identify object classes up to a certain extent. Context information, based on the interaction among objects in the scene or global scene statistics, can help successfully disambiguate appearance inputs in recognition tasks. In this work we address the problem of incorporating different types of contextual information for robust object categorization in computer vision. We review different ways of using contextual information in the field of object categorization, considering the most common levels of extraction of context and the different levels of contextual interactions. We also examine common machine learning models that integrate context information into object recognition frameworks and discuss scalability, optimizations and possible future approaches. 2010 Elsevier Inc. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
315025d0cb659bcb820d9b1393503b08
|
Efficient placement of multi-component applications in edge computing systems
|
[
{
"docid": "bbf5561f88f31794ca95dd991c074b98",
"text": "O CTO B E R 2014 | Volume 18, Issue 4 GetMobile Every time you use a voice command on your smartphone, you are benefitting from a technique called cloud offload. Your speech is captured by a microphone, pre-processed, then sent over a wireless network to a cloud service that converts speech to text. The result is then forwarded to another cloud service or sent back to your mobile device, depending on the application. Speech recognition and many other resource-intensive mobile services require cloud offload. Otherwise, the service would be too slow and drain too much of your battery. Research projects on cloud offload are hot today, with MAUI [4] in 2010, Odessa [13] and CloneCloud [2] in 2011, and COMET [8] in 2012. These build on a rich heritage of work dating back to the mid-1990s on a theme that is broadly characterized as cyber foraging. They are also relevant to the concept of cloudlets [18] that has emerged as an important theme in mobile-cloud convergence. Reflecting my participation in this evolution from its origins, this article is a personal account of the key developments in this research area. It focuses on mobile computing, ignoring many other uses of remote execution since the 1980s such as distributed processing, query processing, distributed object systems, and distributed partitioning.",
"title": ""
}
] |
[
{
"docid": "1e82d6acef7e5b5f0c2446d62cf03415",
"text": "The purpose of this research is to characterize and model the self-heating effect of multi-finger n-channel MOSFETs. Self-heating effect (SHE) does not need to be analyzed for single-finger bulk CMOS devices. However, it should be considered for multi-finger n-channel MOSFETs that are mainly used for RF-CMOS applications. The SHE mechanism was analyzed based on a two-dimensional device simulator. A compact model, which is a BSIM6 model with additional equations, was developed and implemented in a SPICE simulator with Verilog-A language. Using the proposed model and extracted parameters excellent agreements have been obtained between measurements and simulations in DC and S-parameter domain whereas the original BSIM6 shows inconsistency between static DC and small signal AC simulations due to the lack of SHE. Unlike the generally-used sub-circuits based SHE models including in BSIMSOI models, the proposed SHE model can converge in large scale circuits.",
"title": ""
},
{
"docid": "bc49930fa967b93ed1e39b3a45237652",
"text": "In gene expression data, a bicluster is a subset of the genes exhibiting consistent patterns over a subset of the conditions. We propose a new method to detect significant biclusters in large expression datasets. Our approach is graph theoretic coupled with statistical modelling of the data. Under plausible assumptions, our algorithm is polynomial and is guaranteed to find the most significant biclusters. We tested our method on a collection of yeast expression profiles and on a human cancer dataset. Cross validation results show high specificity in assigning function to genes based on their biclusters, and we are able to annotate in this way 196 uncharacterized yeast genes. We also demonstrate how the biclusters lead to detecting new concrete biological associations. In cancer data we are able to detect and relate finer tissue types than was previously possible. We also show that the method outperforms the biclustering algorithm of Cheng and Church (2000).",
"title": ""
},
{
"docid": "b56d61ac3e807219b3caa9ed4362abd9",
"text": "Secure communication is critical in military environments where the network infrastructure is vulnerable to various attacks and compromises. A conventional centralized solution breaks down when the security servers are destroyed by the enemies. In this paper we design and evaluate a security framework for multi-layer ad-hoc wireless networks with unmanned aerial vehicles (UAVs). In battlefields, the framework adapts to the contingent damages on the network infrastructure. Depending on the availability of the network infrastructure, our design is composed of two modes. In infrastructure mode, security services, specifically the authentication services, are implemented on UAVs that feature low overhead and flexible managements. When the UAVs fail or are destroyed, our system seamlessly switches to infrastructureless mode, a backup mechanism that maintains comparable security services among the surviving units. In the infrastructureless mode, the security services are localized to each node’s vicinity to comply with the ad-hoc communication mechanism in the scenario. We study the instantiation of these two modes and the transitions between them. Our implementation and simulation measurements confirm the effectiveness of our design.",
"title": ""
},
{
"docid": "59a16f229e5c205176639843521310d0",
"text": "In the ancient Egypt seven goddesses, represented by seven cows, composed the celestial herd that provides the nourishment to her worshippers. This herd is observed in the sky as a group of stars, the Pleiades, close to Aldebaran, the main star in the Taurus constellation. For many ancient populations, Pleiades were relevant stars and their rising was marked as a special time of the year. In this paper, we will discuss the presence of these stars in ancient cultures. Moreover, we will report some results of archeoastronomy on the role for timekeeping of these stars, results which show that for hunter-gatherers at Palaeolithic times, they were linked to the seasonal cycles of aurochs.",
"title": ""
},
{
"docid": "98a647d378a06c0314a60e220d10976a",
"text": "Driven by the confluence between the need to collect data about people's physical, physiological, psychological, cognitive, and behavioral processes in spaces ranging from personal to urban and the recent availability of the technologies that enable this data collection, wireless sensor networks for healthcare have emerged in the recent years. In this review, we present some representative applications in the healthcare domain and describe the challenges they introduce to wireless sensor networks due to the required level of trustworthiness and the need to ensure the privacy and security of medical data. These challenges are exacerbated by the resource scarcity that is inherent with wireless sensor network platforms. We outline prototype systems spanning application domains from physiological and activity monitoring to large-scale physiological and behavioral studies and emphasize ongoing research challenges.",
"title": ""
},
{
"docid": "760f9f91a845726bc79b874978d5b9ab",
"text": "Data sharing is increasingly recognized as critical to cross-disciplinary research and to assuring scientific validity. Despite National Institutes of Health and National Science Foundation policies encouraging data sharing by grantees, little data sharing of clinical data has in fact occurred. A principal reason often given is the potential of inadvertent violation of the Health Insurance Portability and Accountability Act privacy regulations. While regulations specify the components of private health information that should be protected, there are no commonly accepted methods to de-identify clinical data objects such as images. This leads institutions to take conservative risk-averse positions on data sharing. In imaging trials, where images are coded according to the Digital Imaging and Communications in Medicine (DICOM) standard, the complexity of the data objects and the flexibility of the DICOM standard have made it especially difficult to meet privacy protection objectives. The recent release of DICOM Supplement 142 on image de-identification has removed much of this impediment. This article describes the development of an open-source software suite that implements DICOM Supplement 142 as part of the National Biomedical Imaging Archive (NBIA). It also describes the lessons learned by the authors as NBIA has acquired more than 20 image collections encompassing over 30 million images.",
"title": ""
},
{
"docid": "d59e21319b9915c2f6d7a8931af5503c",
"text": "The effect of directional antenna elements in uniform circular arrays (UCAs) for direction of arrival (DOA) estimation is studied in this paper. While the vast majority of previous work assumes isotropic antenna elements or omnidirectional dipoles, this work demonstrates that improved DOA estimation accuracy and increased bandwidth is achievable with appropriately-designed directional antennas. The Cramer-Rao Lower Bound (CRLB) is derived for UCAs with directional antennas and is compared to isotropic antennas for 4- and 8-element arrays using a theoretical radiation pattern. The directivity that minimizes the CRLB is identified and microstrip patch antennas approximating the optimal theoretical gain pattern are designed to compare the resulting DOA estimation accuracy with a UCA using dipole antenna elements. Simulation results show improved DOA estimation accuracy and robustness using microstrip patch antennas as opposed to conventional dipoles. Additionally, it is shown that the bandwidth of a UCA for DOA estimation is limited only by the broadband characteristics of the directional antenna elements and not by the electrical size of the array as is the case with omnidirectional antennas.",
"title": ""
},
{
"docid": "4122fb29bb82d4432391f4362ddcf512",
"text": "In this paper we propose three techniques to improve the performance of one of the major algorithms for large scale continuous global function optimization. Multilevel Cooperative Co-evolution (MLCC) is based on a Cooperative Co-evolutionary framework and employs a technique called random grouping in order to group interacting variables in one subcomponent. It also uses another technique called adaptive weighting for co-adaptation of subcomponents. We prove that the probability of grouping interacting variables in one subcomponent using random grouping drops significantly as the number of interacting variables increases. This calls for more frequent random grouping of variables. We show how to increase the frequency of random grouping without increasing the number of fitness evaluations. We also show that adaptive weighting is ineffective and in most cases fails to improve the quality of found solution, and hence wastes considerable amount of CPU time by extra evaluations of objective function. Finally we propose a new technique for self-adaptation of the subcomponent sizes in CC. We demonstrate how a substantial improvement can be gained by applying these three techniques.",
"title": ""
},
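The first claim above concerns how unlikely random grouping is to place all interacting variables in one subcomponent. The following is a quick Monte Carlo check of that probability, an illustration of the quantity being discussed rather than the paper's analytical derivation; the first v indices are taken to be the interacting variables.

    import random

    def prob_same_group(n, k, v, trials=20000):
        # n variables randomly split into k equal subcomponents; estimate the chance
        # that v designated interacting variables all land in the same subcomponent.
        size, hits = n // k, 0
        interacting = set(range(v))
        for _ in range(trials):
            perm = random.sample(range(n), n)
            groups = [set(perm[i * size:(i + 1) * size]) for i in range(k)]
            if any(interacting <= g for g in groups):
                hits += 1
        return hits / trials

For example, prob_same_group(1000, 10, 5) is tiny, which is why the paper argues for regrouping variables more frequently.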
{
"docid": "d580f60d48331b37c55f1e9634b48826",
"text": "The fifth generation (5G) wireless network technology is to be standardized by 2020, where main goals are to improve capacity, reliability, and energy efficiency, while reducing latency and massively increasing connection density. An integral part of 5G is the capability to transmit touch perception type real-time communication empowered by applicable robotics and haptics equipment at the network edge. In this regard, we need drastic changes in network architecture including core and radio access network (RAN) for achieving end-to-end latency on the order of 1 ms. In this paper, we present a detailed survey on the emerging technologies to achieve low latency communications considering three different solution domains: 1) RAN; 2) core network; and 3) caching. We also present a general overview of major 5G cellular network elements such as software defined network, network function virtualization, caching, and mobile edge computing capable of meeting latency and other 5G requirements.",
"title": ""
},
{
"docid": "fae3b6d1415e5f1d95aa2126c14e7a09",
"text": "This paper presents an active RF phase shifter with 10 bit control word targeted toward the upcoming 5G wireless systems. The circuit is designed and fabricated using 45 nm CMOS SOI technology. An IQ vector modulator (IQVM) topology is used which provides both amplitude and phase control. The design is programmable with exhaustive digital controls available for parameters like bias voltage, resonance frequency, and gain. The frequency of operation is tunable from 12.5 GHz to 15.7 GHz. The mean angular separation between phase points is 1.5 degree at optimum amplitude levels. The rms phase error over the operating band is as low as 0.8 degree. Active area occupied is 0.18 square millimeter. The total DC power consumed from 1 V supply is 75 mW.",
"title": ""
},
{
"docid": "37bdc258e652fb4a21d9516400428f8b",
"text": "In many Internet of Things (IoT) applications, large numbers of small sensor data are delivered in the network, which may cause heavy traffics. To reduce the number of messages delivered from the sensor devices to the IoT server, a promising approach is to aggregate several small IoT messages into a large packet before they are delivered through the network. When the packets arrive at the destination, they are disaggregated into the original IoT messages. In the existing solutions, packet aggregation/disaggregation is performed by software at the server, which results in long delays and low throughputs. To resolve the above issue, this paper utilizes the programmable Software Defined Networking (SDN) switch to program quick packet aggregation and disaggregation. Specifically, we consider the Programming Protocol-Independent Packet Processor (P4) technology. We design and develop novel P4 programs for aggregation and disaggregation in commercial P4 switches. Our study indicates that packet aggregation can be achieved in a P4 switch with its line rate (without extra packet processing cost). On the other hand, to disaggregate a packet that combines N IoT messages, the processing time is about the same as processing N individual IoT messages. Our implementation conducts IoT message aggregation at the highest bit rate (100 Gbps) that has not been found in the literature. We further propose to provide a small buffer in the P4 switch to significantly reduce the processing power for disaggregating a packet.",
"title": ""
},
{
"docid": "c091e5b24dc252949b3df837969e263a",
"text": "The emergence of powerful portable computers, along with advances in wireless communication technologies, has made mobile computing a reality. Among the applications that are finding their way to the market of mobile computingthose that involve data managementhold a prominent position. In the past few years, there has been a tremendous surge of research in the area of data management in mobile computing. This research has produced interesting results in areas such as data dissemination over limited bandwith channels, location-dependent querying of data, and advanced interfaces for mobile computers. This paper is an effort to survey these techniques and to classify this research in a few broad areas.",
"title": ""
},
{
"docid": "b91f54fd70da385625d9df127834d8c7",
"text": "This commentary was stimulated by Yeping Li’s first editorial (2014) citing one of the journal’s goals as adding multidisciplinary perspectives to current studies of single disciplines comprising the focus of other journals. In this commentary, I argue for a greater focus on STEM integration, with a more equitable representation of the four disciplines in studies purporting to advance STEM learning. The STEM acronym is often used in reference to just one of the disciplines, commonly science. Although the integration of STEM disciplines is increasingly advocated in the literature, studies that address multiple disciplines appear scant with mixed findings and inadequate directions for STEM advancement. Perspectives on how discipline integration can be achieved are varied, with reference to multidisciplinary, interdisciplinary, and transdisciplinary approaches adding to the debates. Such approaches include core concepts and skills being taught separately in each discipline but housed within a common theme; the introduction of closely linked concepts and skills from two or more disciplines with the aim of deepening understanding and skills; and the adoption of a transdisciplinary approach, where knowledge and skills from two or more disciplines are applied to real-world problems and projects with the aim of shaping the total learning experience. Research that targets STEM integration is an embryonic field with respect to advancing curriculum development and various student outcomes. For example, we still need more studies on how student learning outcomes arise not only from different forms of STEM integration but also from the particular disciplines that are being integrated. As noted in this commentary, it seems that mathematics learning benefits less than the other disciplines in programs claiming to focus on STEM integration. Factors contributing to this finding warrant more scrutiny. Likewise, learning outcomes for engineering within K-12 integrated STEM programs appear under-researched. This commentary advocates a greater focus on these two disciplines within integrated STEM education research. Drawing on recommendations from the literature, suggestions are offered for addressing the challenges of integrating multiple disciplines faced by the STEM community.",
"title": ""
},
{
"docid": "46209913057e33c17d38a565e50097a3",
"text": "Power-on reset circuits are available as discrete devices as well as on-chip solutions and are indispensable to initialize some critical nodes of analog and digital designs during power-on. In this paper, we present a power-on reset circuit specifically designed for on-chip applications. The mentioned POR circuit should meet certain design requirements necessary to be integrated on-chip, some of them being area-efficiency, power-efficiency, supply rise-time insensitivity and ambient temperature insensitivity. The circuit is implemented within a small area (60mum times 35mum) using the 2.5V tolerant MOSFETs of a 0.28mum CMOS technology. It has a maximum quiescent current consumption of 40muA and works over infinite range of supply rise-times and ambient temperature range of -40degC to 150degC",
"title": ""
},
{
"docid": "ac4d208a022717f6389d8b754abba80b",
"text": "This paper presents a new approach to detect tabular structures present in document images and in low resolution video images. The algorithm for table detection is based on identifying the unique table start pattern and table trailer pattern. We have formulated perceptual attributes to characterize the patterns. The performance of our table detection system is tested on a set of document images picked from UW-III (University of Washington) dataset, UNLV dataset, video images of NPTEL videos, and our own dataset. Our approach demonstrates improved detection for different types of table layouts, with or without ruling lines. We have obtained correct table localization on pages with multiple tables aligned side-by-side.",
"title": ""
},
{
"docid": "e49ea1a6aa8d7ffec9ca16ac18cfc43a",
"text": "Simultaneous Localization And Mapping (SLAM) is a fundamental problem in mobile robotics. While point-based SLAM methods provide accurate camera localization, the generated maps lack semantic information. On the other hand, state of the art object detection methods provide rich information about entities present in the scene from a single image. This work marries the two and proposes a method for representing generic objects as quadrics which allows object detections to be seamlessly integrated in a SLAM framework. For scene coverage, additional dominant planar structures are modeled as infinite planes. Experiments show that the proposed points-planes-quadrics representation can easily incorporate Manhattan and object affordance constraints, greatly improving camera localization and leading to semantically meaningful maps. The performance of our SLAM system is demonstrated in https://youtu.be/dR-rB9keF8M.",
"title": ""
},
{
"docid": "3ff55193d10980cbb8da5ec757b9161c",
"text": "The growth of social web contributes vast amount of user generated content such as customer reviews, comments and opinions. This user generated content can be about products, people, events, etc. This information is very useful for businesses, governments and individuals. While this content meant to be helpful analyzing this bulk of user generated content is difficult and time consuming. So there is a need to develop an intelligent system which automatically mine such huge content and classify them into positive, negative and neutral category. Sentiment analysis is the automated mining of attitudes, opinions, and emotions from text, speech, and database sources through Natural Language Processing (NLP). The objective of this paper is to discover the concept of Sentiment Analysis in the field of Natural Language Processing, and presents a comparative study of its techniques in this field. Keywords— Natural Language Processing, Sentiment Analysis, Sentiment Lexicon, Sentiment Score.",
"title": ""
},
{
"docid": "da4b2452893ca0734890dd83f5b63db4",
"text": "Diabetic retinopathy is when damage occurs to the retina due to diabetes, which affects up to 80 percent of all patients who have had diabetes for 10 years or more. The expertise and equipment required are often lacking in areas where diabetic retinopathy detection is most needed. Most of the work in the field of diabetic retinopathy has been based on disease detection or manual extraction of features, but this paper aims at automatic diagnosis of the disease into its different stages using deep learning. This paper presents the design and implementation of GPU accelerated deep convolutional neural networks to automatically diagnose and thereby classify high-resolution retinal images into 5 stages of the disease based on severity. The single model accuracy of the convolutional neural networks presented in this paper is 0.386 on a quadratic weighted kappa metric and ensembling of three such similar models resulted in a score of 0.3996.",
"title": ""
},
{
"docid": "948295ca3a97f7449548e58e02dbdd62",
"text": "Neural computations are often compared to instrument-measured distance or duration, and such relationships are interpreted by a human observer. However, neural circuits do not depend on human-made instruments but perform computations relative to an internally defined rate-of-change. While neuronal correlations with external measures, such as distance or duration, can be observed in spike rates or other measures of neuronal activity, what matters for the brain is how such activity patterns are utilized by downstream neural observers. We suggest that hippocampal operations can be described by the sequential activity of neuronal assemblies and their internally defined rate of change without resorting to the concept of space or time.",
"title": ""
},
{
"docid": "4b95b6d7991ea1b774ac8730df6ec21c",
"text": "We address the problem of automatically learning the main steps to complete a certain task, such as changing a car tire, from a set of narrated instruction videos. The contributions of this paper are three-fold. First, we develop a new unsupervised learning approach that takes advantage of the complementary nature of the input video and the associated narration. The method solves two clustering problems, one in text and one in video, applied one after each other and linked by joint constraints to obtain a single coherent sequence of steps in both modalities. Second, we collect and annotate a new challenging dataset of real-world instruction videos from the Internet. The dataset contains about 800,000 frames for five different tasks1 that include complex interactions between people and objects, and are captured in a variety of indoor and outdoor settings. Third, we experimentally demonstrate that the proposed method can automatically discover, in an unsupervised manner, the main steps to achieve the task and locate the steps in the input videos.",
"title": ""
}
] |
scidocsrr
|
e3743032e23258c4b1874b76ac169833
|
Cloud computing for Internet of Things & sensing based applications
|
[
{
"docid": "00614d23a028fe88c3f33db7ace25a58",
"text": "Cloud Computing and The Internet of Things are the two hot points in the Internet field. The application of the two new technologies is in hot discussion and research, but quite less on the field of agriculture and forestry. Thus, in this paper, we analyze the study and application of Cloud Computing and The Internet of Things on agriculture and forestry. Then we put forward an idea that making a combination of the two techniques and analyze the feasibility, applications and future prospect of the combination.",
"title": ""
}
] |
[
{
"docid": "9490ca6447448c0aba919871b1fa9791",
"text": "The study's goal was to examine the socially responsible power use in the context of ethical leadership as an explanatory mechanism of the ethical leadership-follower outcomes link. Drawing on the attachment theory (Bowlby, 1969/1982), we explored a power-based process model, which assumes that a leader's personal power is an intervening variable in the relationship between ethical leadership and follower outcomes, while incorporating the moderating role of followers' moral identity in this transformation process. The results of a two-wave field study (N = 235) that surveyed employees and a scenario experiment (N = 169) fully supported the proposed (moderated) mediation models, as personal power mediated the positive relationship between ethical leadership and a broad range of tested follower outcomes (i.e., leader effectiveness, follower extra effort, organizational commitment, job satisfaction, and work engagement), as well as the interactive effects of ethical leadership and follower moral identity on these follower outcomes. Theoretical and practical implications are discussed.",
"title": ""
},
{
"docid": "e9a154af3a041cadc5986b7369ce841b",
"text": "Metrological characterization of high-performance ΔΣ Analog-to-Digital Converters (ADCs) poses severe challenges to reference instrumentation and standard methods. In this paper, most important tests related to noise and effective resolution, nonlinearity, environmental uncertainty, and stability are proved and validated in the specific case of a high-performance ΔΣ ADC. In particular, tests setups are proposed and discussed and the definitions used to assess the performance are clearly stated in order to identify procedures and guidelines for high-resolution ADCs characterization. An experimental case study of the high-performance ΔΣ ADC DS-22 developed at CERN is reported and discussed by presenting effective alternative test setups. Experimental results show that common characterization methods by the IEEE standards 1241 [1] and 1057 [2] cannot be used and alternative strategies turn out to be effective.",
"title": ""
},
{
"docid": "012bcbc6b5e7b8aaafd03f100489961c",
"text": "DNA is an attractive medium to store digital information. Here we report a storage strategy, called DNA Fountain, that is highly robust and approaches the information capacity per nucleotide. Using our approach, we stored a full computer operating system, movie, and other files with a total of 2.14 × 106 bytes in DNA oligonucleotides and perfectly retrieved the information from a sequencing coverage equivalent to a single tile of Illumina sequencing. We also tested a process that can allow 2.18 × 1015 retrievals using the original DNA sample and were able to perfectly decode the data. Finally, we explored the limit of our architecture in terms of bytes per molecule and obtained a perfect retrieval from a density of 215 petabytes per gram of DNA, orders of magnitude higher than previous reports.",
"title": ""
},
{
"docid": "65dd0e6e143624c644043507cf9465a7",
"text": "Let G \" be a non-directed graph having n vertices, without parallel edges and slings. Let the vertices of Gn be denoted by F 1 ,. . ., Pn. Let v(P j) denote the valency of the point P i and put (0. 1) V(G,) = max v(Pj). 1ninn Let E(G.) denote the number of edges of Gn. Let H d (n, k) denote the set of all graphs Gn for which V (G n) = k and the diameter D (Gn) of which is-d, In the present paper we shall investigate the quantity (0 .2) Thus we want to determine the minimal number N such that there exists a graph having n vertices, N edges and diameter-d and the maximum of the valencies of the vertices of the graph is equal to k. To help the understanding of the problem let us consider the following interpretation. Let be given in a country n airports ; suppose we want to plan a network of direct flights between these airports so that the maximal number of airports to which a given airport can be connected by a direct flight should be equal to k (i .e. the maximum of the capacities of the airports is prescribed), further it should be possible to fly from every airport to any other by changing the plane at most d-1 times ; what is the minimal number of flights by which such a plan can be realized? For instance, if n = 7, k = 3, d= 2 we have F2 (7, 3) = 9 and the extremal graph is shown by Fig. 1. The problem of determining Fd (n, k) has been proposed and discussed recently by two of the authors (see [1]). In § 1 we give a short summary of the results of the paper [1], while in § 2 and 3 we give some new results which go beyond those of [1]. Incidentally we solve a long-standing problem about the maximal number of edges of a graph not containing a cycle of length 4. In § 4 we mention some unsolved problems. Let us mention that our problem can be formulated also in terms of 0-1 matrices as follows : Let M=(a il) be a symmetrical n by n zero-one matrix such 2",
"title": ""
},
{
"docid": "c3ee2beee84cd32e543c4b634062eeac",
"text": "In this paper, a hierarchical feature extraction method is proposed for image recognition. The key idea of the proposed method is to extract an effective feature, called local neural response (LNR), of the input image with nontrivial discrimination and invariance properties by alternating between local coding and maximum pooling operation. The local coding, which is carried out on the locally linear manifold, can extract the salient feature of image patches and leads to a sparse measure matrix on which maximum pooling is carried out. The maximum pooling operation builds the translation invariance into the model. We also show that other invariant properties, such as rotation and scaling, can be induced by the proposed model. In addition, a template selection algorithm is presented to reduce computational complexity and to improve the discrimination ability of the LNR. Experimental results show that our method is robust to local distortion and clutter compared with state-of-the-art algorithms.",
"title": ""
},
{
"docid": "9dd8ab91929e3c4e7ddd90919eb79d22",
"text": "–Graphs are currently becoming more important in modeling and demonstrating information. In the recent years, graph mining is becoming an interesting field for various processes such as chemical compounds, protein structures, social networks and computer networks. One of the most important concepts in graph mining is to find frequent subgraphs. The major advantage of utilizing subgraphs is speeding up the search for similarities, finding graph specifications and graph classifications. In this article we classify the main algorithms in the graph mining field. Some fundamental algorithms are reviewed and categorized. Some issues for any algorithm are graph representation, search strategy, nature of input and completeness of output that are discussed in this article. Keywords––Frequent subgraph, Graph mining, Graph mining algorithms",
"title": ""
},
{
"docid": "dff0752eace9db08e25904a844533338",
"text": "The authors investigated whether accuracy in identifying deception from demeanor in high-stake lies is specific to those lies or generalizes to other high-stake lies. In Experiment 1, 48 observers judged whether 2 different groups of men were telling lies about a mock theft (crime scenario) or about their opinion (opinion scenario). The authors found that observers' accuracy in judging deception in the crime scenario was positively correlated with their accuracy in judging deception in the opinion scenario. Experiment 2 replicated the results of Experiment 1, as well as P. Ekman and M. O'Sullivan's (1991) finding of a positive correlation between the ability to detect deceit and the ability to identify micromomentary facial expressions of emotion. These results show that the ability to detect high-stake lies generalizes across high-stake situations and is most likely due to the presence of emotional clues that betray deception in high-stake lies.",
"title": ""
},
{
"docid": "88615ac1788bba148f547ca52bffc473",
"text": "This paper describes a probabilistic framework for faithful reproduction of dynamic facial expressions on a synthetic face model with MPEG-4 facial animation parameters (FAPs) while achieving very low bitrate in data transmission. The framework consists of a coupled Bayesian network (BN) to unify the facial expression analysis and synthesis into one coherent structure. At the analysis end, we cast the FAPs and facial action coding system (FACS) into a dynamic Bayesian network (DBN) to account for uncertainties in FAP extraction and to model the dynamic evolution of facial expressions. At the synthesizer, a static BN reconstructs the FAPs and their intensity. The two BNs are connected statically through a data stream link. Using the coupled BN to analyze and synthesize the dynamic facial expressions is the major novelty of this work. The novelty brings about several benefits. First, very low bitrate (9 bytes per frame) in data transmission can be achieved. Second, a facial expression is inferred through both spatial and temporal inference so that the perceptual quality of animation is less affected by the misdetected FAPs. Third, more realistic looking facial expressions can be reproduced by modelling the dynamics of human expressions.",
"title": ""
},
{
"docid": "b477893ecccb3aee1de3b6f12f3186ca",
"text": "Obesity is a global health problem characterized as an increase in the mass of adipose tissue. Adipogenesis is one of the key pathways that increases the mass of adipose tissue, by which preadipocytes mature into adipocytes through cell differentiation. Peroxisome proliferator-activated receptor γ (PPARγ), the chief regulator of adipogenesis, has been acutely investigated as a molecular target for natural products in the development of anti-obesity treatments. In this review, the regulation of PPARγ expression by natural products through inhibition of CCAAT/enhancer-binding protein β (C/EBPβ) and the farnesoid X receptor (FXR), increased expression of GATA-2 and GATA-3 and activation of the Wnt/β-catenin pathway were analyzed. Furthermore, the regulation of PPARγ transcriptional activity associated with natural products through the antagonism of PPARγ and activation of Sirtuin 1 (Sirt1) and AMP-activated protein kinase (AMPK) were discussed. Lastly, regulation of mitogen-activated protein kinase (MAPK) by natural products, which might regulate both PPARγ expression and PPARγ transcriptional activity, was summarized. Understanding the role natural products play, as well as the mechanisms behind their regulation of PPARγ activity is critical for future research into their therapeutic potential for fighting obesity.",
"title": ""
},
{
"docid": "e7ae72f3bb2c24259dd122bff0f5d04e",
"text": "In this paper we introduce a novel linear precoding technique. The approach used for the design of the precoding matrix is general and the resulting algorithm can address several optimization criteria with an arbitrary number of antennas at the user terminals. We have achieved this by designing the precoding matrices in two steps. In the first step we minimize the overlap of the row spaces spanned by the effective channel matrices of different users using a new cost function. In the next step, we optimize the system performance with respect to specific optimization criteria assuming a set of parallel single- user MIMO channels. By combining the closed form solution with Tomlinson-Harashima precoding we reach the maximum sum-rate capacity when the total number of antennas at the user terminals is less or equal to the number of antennas at the base station. By iterating the closed form solution with appropriate power loading we are able to extract the full diversity in the system and reach the maximum sum-rate capacity in case of high multi-user interference. Joint processing over a group of multi-user MIMO channels in different frequency and time slots yields maximum diversity regardless of the level of multi-user interference.",
"title": ""
},
{
"docid": "d9617ed486a1b5488beab08652f736e0",
"text": "The paper shows how Combinatory Categorial Grammar (CCG) can be adapted to take advantage of the extra resourcesensitivity provided by the Categorial Type Logic framework. The resulting reformulation, Multi-Modal CCG, supports lexically specified control over the applicability of combinatory rules, permitting a universal rule component and shedding the need for language-specific restrictions on rules. We discuss some of the linguistic motivation for these changes, define the Multi-Modal CCG system and demonstrate how it works on some basic examples. We furthermore outline some possible extensions and address computational aspects of Multi-Modal CCG.",
"title": ""
},
{
"docid": "869cc834f84bc88a258b2d9d9d4f3096",
"text": "Obesity is a multifactorial disease characterized by an excessive weight for height due to an enlarged fat deposition such as adipose tissue, which is attributed to a higher calorie intake than the energy expenditure. The key strategy to combat obesity is to prevent chronic positive impairments in the energy equation. However, it is often difficult to maintain energy balance, because many available foods are high-energy yielding, which is usually accompanied by low levels of physical activity. The pharmaceutical industry has invested many efforts in producing antiobesity drugs; but only a lipid digestion inhibitor obtained from an actinobacterium is currently approved and authorized in Europe for obesity treatment. This compound inhibits the activity of pancreatic lipase, which is one of the enzymes involved in fat digestion. In a similar way, hundreds of extracts are currently being isolated from plants, fungi, algae, or bacteria and screened for their potential inhibition of pancreatic lipase activity. Among them, extracts isolated from common foodstuffs such as tea, soybean, ginseng, yerba mate, peanut, apple, or grapevine have been reported. Some of them are polyphenols and saponins with an inhibitory effect on pancreatic lipase activity, which could be applied in the management of the obesity epidemic.",
"title": ""
},
{
"docid": "bb774fed5d447fdc181cb712c74925c2",
"text": "Test-driven development is a discipline that helps professional software developers ship clean, flexible code that works, on time. In this article, the author discusses how test-driven development can help software developers achieve a higher degree of professionalism",
"title": ""
},
{
"docid": "c94d01ee0aaa8a70ce4e3441850316a6",
"text": "Convolutional neural networks (CNNs) are inherently subject to invariable filters that can only aggregate local inputs with the same topological structures. It causes that CNNs are allowed to manage data with Euclidean or grid-like structures (e.g., images), not ones with non-Euclidean or graph structures (e.g., traffic networks). To broaden the reach of CNNs, we develop structure-aware convolution to eliminate the invariance, yielding a unified mechanism of dealing with both Euclidean and non-Euclidean structured data. Technically, filters in the structure-aware convolution are generalized to univariate functions, which are capable of aggregating local inputs with diverse topological structures. Since infinite parameters are required to determine a univariate function, we parameterize these filters with numbered learnable parameters in the context of the function approximation theory. By replacing the classical convolution in CNNs with the structure-aware convolution, Structure-Aware Convolutional Neural Networks (SACNNs) are readily established. Extensive experiments on eleven datasets strongly evidence that SACNNs outperform current models on various machine learning tasks, including image classification and clustering, text categorization, skeleton-based action recognition, molecular activity detection, and taxi flow prediction.",
"title": ""
},
{
"docid": "5d5014506bdf0c16b566edc8bba3b730",
"text": "This paper surveys recent literature in the domain of machine learning techniques and artificial intelligence used to predict stock market movements. Artificial Neural Networks (ANNs) are identified to be the dominant machine learning technique in stock market prediction area. Keywords— Artificial Neural Networks (ANNs); Stock Market; Prediction",
"title": ""
},
{
"docid": "8b2c83868c16536910e7665998b2d87e",
"text": "Nowadays organizations turn to any standard procedure to gain a competitive advantage. If sustainable, competitive advantage can bring about benefit to the organization. The aim of the present study was to introduce competitive advantage as well as to assess the impacts of the balanced scorecard as a means to measure the performance of organizations. The population under study included employees of organizations affiliated to the Social Security Department in North Khorasan Province, of whom a total number of 120 employees were selected as the participants in the research sample. Two researcher-made questionnaires with a 5-point Likert scale were used to measure the competitive advantage and the balanced scorecard. Besides, Cronbach's alpha coefficient was used to measure the reliability of the instruments that was equal to 0.74 and 0.79 for competitive advantage and the balanced scorecard, respectively. The data analysis was performed using the structural equation modeling and the results indicated the significant and positive impact of the implementation of the balanced scorecard on the sustainable competitive advantage. © 2015 AESS Publications. All Rights Reserved.",
"title": ""
},
{
"docid": "50edb29954ee6cbb3e38055d7b01e99a",
"text": "Security has becoming an important issue everywhere. Home security is becoming necessary nowadays as the possibilities of intrusion are increasing day by day. Safety from theft, leaking of raw gas and fire are the most important requirements of home security system for people. A traditional home security system gives the signals in terms of alarm. However, the GSM (Global System for Mobile communications) based security systems provides enhanced security as whenever a signal from sensor occurs, a text message is sent to a desired number to take necessary actions. This paper suggests two methods for home security system. The first system uses web camera. Whenever there is a motion in front of the camera, it gives security alert in terms of sound and a mail is delivered to the owner. The second method sends SMS which uses GSMGPS Module (sim548c) and Atmega644p microcontroller, sensors, relays and buzzers.",
"title": ""
},
{
"docid": "0b79fc06afe7782e7bdcdbd96cc1c1a0",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use. Please contact the publisher regarding any further use of this work. Publisher contact information may be obtained at http://www.jstor.org/journals/annals.html. Each copy of any part of a JSTOR transmission must contain the same copyright notice that appears on the screen or printed page of such transmission.",
"title": ""
},
{
"docid": "92ff221950df6e7fd266926c305200cd",
"text": "The authors provide a didactic treatment of nonlinear (categorical) principal components analysis (PCA). This method is the nonlinear equivalent of standard PCA and reduces the observed variables to a number of uncorrelated principal components. The most important advantages of nonlinear over linear PCA are that it incorporates nominal and ordinal variables and that it can handle and discover nonlinear relationships between variables. Also, nonlinear PCA can deal with variables at their appropriate measurement level; for example, it can treat Likert-type scales ordinally instead of numerically. Every observed value of a variable can be referred to as a category. While performing PCA, nonlinear PCA converts every category to a numeric value, in accordance with the variable's analysis level, using optimal quantification. The authors discuss how optimal quantification is carried out, what analysis levels are, which decisions have to be made when applying nonlinear PCA, and how the results can be interpreted. The strengths and limitations of the method are discussed. An example applying nonlinear PCA to empirical data using the program CATPCA (J. J. Meulman, W. J. Heiser, & SPSS, 2004) is provided.",
"title": ""
},
{
"docid": "981cbb9140570a6a6f3d4f4f49cd3654",
"text": "OBJECTIVES\nThe study sought to evaluate clinical outcomes in clinical practice with rhythm control versus rate control strategy for management of atrial fibrillation (AF).\n\n\nBACKGROUND\nRandomized trials have not demonstrated significant differences in stroke, heart failure, or mortality between rhythm and rate control strategies. The comparative outcomes in contemporary clinical practice are not well described.\n\n\nMETHODS\nPatients managed with a rhythm control strategy targeting maintenance of sinus rhythm were retrospectively compared with a strategy of rate control alone in a AF registry across various U.S. practice settings. Unadjusted and adjusted (inverse-propensity weighted) outcomes were estimated.\n\n\nRESULTS\nThe overall study population (N = 6,988) had a median of 74 (65 to 81) years of age, 56% were males, 77% had first detected or paroxysmal AF, and 68% had CHADS2 score ≥2. In unadjusted analyses, rhythm control was associated with lower all-cause death, cardiovascular death, first stroke/non-central nervous system systemic embolization/transient ischemic attack, or first major bleeding event (all p < 0.05); no difference in new onset heart failure (p = 0.28); and more frequent cardiovascular hospitalizations (p = 0.0006). There was no difference in the incidence of pacemaker, defibrillator, or cardiac resynchronization device implantations (p = 0.99). In adjusted analyses, there were no statistical differences in clinical outcomes between rhythm control and rate control treated patients (all p > 0.05); however, rhythm control was associated with more cardiovascular hospitalizations (hazard ratio: 1.24; 95% confidence interval: 1.10 to 1.39; p = 0.0003).\n\n\nCONCLUSIONS\nAmong patients with AF, rhythm control was not superior to rate control strategy for outcomes of stroke, heart failure, or mortality, but was associated with more cardiovascular hospitalizations.",
"title": ""
}
] |
scidocsrr
|
5ff6288bf1a883014805687745c56ca8
|
Effects of missing data in social networks
|
[
{
"docid": "236896835b48994d7737b9152c0e435f",
"text": "A network is said to show assortative mixing if the nodes in the network that have many connections tend to be connected to other nodes with many connections. Here we measure mixing patterns in a variety of networks and find that social networks are mostly assortatively mixed, but that technological and biological networks tend to be disassortative. We propose a model of an assortatively mixed network, which we study both analytically and numerically. Within this model we find that networks percolate more easily if they are assortative and that they are also more robust to vertex removal.",
"title": ""
}
] |
[
{
"docid": "4ba3ac9a0ef8f46fe92401843b1eaba7",
"text": "This paper explores gender-based differences in multimodal deception detection. We introduce a new large, gender-balanced dataset, consisting of 104 subjects with 520 different responses covering multiple scenarios, and perform an extensive analysis of different feature sets extracted from the linguistic, physiological, and thermal data streams recorded from the subjects. We describe a multimodal deception detection system, and show how the two genders achieve different detection rates for different individual and combined feature sets, with accuracy figures reaching 80%. Our experiments and results allow us to make interesting observations concerning the differences in the multimodal detection of deception in males and females.",
"title": ""
},
{
"docid": "7182921f825bd924be6e6441f1fa6433",
"text": "Word embeddings are increasingly being used as a tool to study word associations in specific corpora. However, it is unclear whether such embeddings reflect enduring properties of language or if they are sensitive to inconsequential variations in the source documents. We find that nearest-neighbor distances are highly sensitive to small changes in the training corpus for a variety of algorithms. For all methods, including specific documents in the training set can result in substantial variations. We show that these effects are more prominent for smaller training corpora. We recommend that users never rely on single embedding models for distance calculations, but rather average over multiple bootstrap samples, especially for small corpora.",
"title": ""
},
{
"docid": "4b6a4f9d91bc76c541f4879a1a684a3f",
"text": "Query auto-completion (QAC) is one of the most prominent features of modern search engines. The list of query candidates is generated according to the prefix entered by the user in the search box and is updated on each new key stroke. Query prefixes tend to be short and ambiguous, and existing models mostly rely on the past popularity of matching candidates for ranking. However, the popularity of certain queries may vary drastically across different demographics and users. For instance, while instagram and imdb have comparable popularities overall and are both legitimate candidates to show for prefix i, the former is noticeably more popular among young female users, and the latter is more likely to be issued by men.\n In this paper, we present a supervised framework for personalizing auto-completion ranking. We introduce a novel labelling strategy for generating offline training labels that can be used for learning personalized rankers. We compare the effectiveness of several user-specific and demographic-based features and show that among them, the user's long-term search history and location are the most effective for personalizing auto-completion rankers. We perform our experiments on the publicly available AOL query logs, and also on the larger-scale logs of Bing. The results suggest that supervised rankers enhanced by personalization features can significantly outperform the existing popularity-based base-lines, in terms of mean reciprocal rank (MRR) by up to 9%.",
"title": ""
},
{
"docid": "f0f16472cdb6b52b05d1d324e55da081",
"text": "We propose a new distributed algorithm for empirical risk minimization in machine learning. The algorithm is based on an inexact damped Newton method, where the inexact Newton steps are computed by a distributed preconditioned conjugate gradient method. We analyze its iteration complexity and communication efficiency for minimizing self-concordant empirical loss functions, and discuss the results for distributed ridge regression, logistic regression and binary classification with a smoothed hinge loss. In a standard setting for supervised learning, where the n data points are i.i.d. sampled and when the regularization parameter scales as 1/ √ n, we show that the proposed algorithm is communication efficient: the required round of communication does not increase with the sample size n, and only grows slowly with the number of machines.",
"title": ""
},
{
"docid": "6ab38099b989f1d9bdc504c9b50b6bbe",
"text": "Users' search tactics often appear naïve. Much research has endeavored to understand the rudimentary query typically seen in log analyses and user studies. Researchers have tested a number of approaches to supporting query development, including information literacy training and interaction design these have tried and often failed to induce users to use more complex search strategies. To further investigate this phenomenon, we combined established HCI methods with models from cultural studies, and observed customers' mediated searches for books in bookstores. Our results suggest that sophisticated search techniques demand mental models that many users lack.",
"title": ""
},
{
"docid": "7c5f2c92cb3d239674f105a618de99e0",
"text": "We consider the isolated spelling error correction problem as a specific subproblem of the more general string-to-string translation problem. In this context, we investigate four general string-to-string transformation models that have been suggested in recent years and apply them within the spelling error correction paradigm. In particular, we investigate how a simple ‘k-best decoding plus dictionary lookup’ strategy performs in this context and find that such an approach can significantly outdo baselines such as edit distance, weighted edit distance, and the noisy channel Brill and Moore model to spelling error correction. We also consider elementary combination techniques for our models such as language model weighted majority voting and center string combination. Finally, we consider real-world OCR post-correction for a dataset sampled from medieval Latin texts.",
"title": ""
},
{
"docid": "db8cd5dad5c3d3bda0f10f3369351bbd",
"text": "The massive diffusion of online social media allows for the rapid and uncontrolled spreading of conspiracy theories, hoaxes, unsubstantiated claims, and false news. Such an impressive amount of misinformation can influence policy preferences and encourage behaviors strongly divergent from recommended practices. In this paper, we study the statistical properties of viral misinformation in online social media. By means of methods belonging to Extreme Value Theory, we show that the number of extremely viral posts over time follows a homogeneous Poisson process, and that the interarrival times between such posts are independent and identically distributed, following an exponential distribution. Moreover, we characterize the uncertainty around the rate parameter of the Poisson process through Bayesian methods. Finally, we are able to derive the predictive posterior probability distribution of the number of posts exceeding a certain threshold of shares over a finite interval of time.",
"title": ""
},
{
"docid": "1df9ac95778bbe7ad750810e9b5a9756",
"text": "To characterize muscle synergy organization underlying multidirectional control of stance posture, electromyographic activity was recorded from 11 lower limb and trunk muscles of 7 healthy subjects while they were subjected to horizontal surface translations in 12 different, randomly presented directions. The latency and amplitude of muscle responses were quantified for each perturbation direction. Tuning curves for each muscle were examined to relate the amplitude of the muscle response to the direction of surface translation. The latencies of responses for the shank and thigh muscles were constant, regardless of perturbation direction. In contrast, the latencies for another thigh [tensor fascia latae (TFL)] and two trunk muscles [rectus abdominis (RAB) and erector spinae (ESP)] were either early or late, depending on the perturbation direction. These three muscles with direction-specific latencies may play different roles in postural control as prime movers or as stabilizers for different translation directions, depending on the timing of recruitment. Most muscle tuning curves were within one quadrant, having one direction of maximal activity, generally in response to diagonal surface translations. Two trunk muscles (RAB and ESP) and two lower limb muscles (semimembranosus and peroneus longus) had bipolar tuning curves, with two different directions of maximal activity, suggesting that these muscle can play different roles as part of different synergies, depending on translation direction. Muscle tuning curves tended to group into one of three regions in response to 12 different directions of perturbations. Two muscles [rectus femoris (RFM) and TFL] were maximally active in response to lateral surface translations. The remaining muscles clustered into one of two diagonal regions. The diagonal regions corresponded to the two primary directions of active horizontal force vector responses. Two muscles (RFM and adductor longus) were maximally active orthogonal to their predicted direction of maximal activity based on anatomic orientation. Some of the muscles in each of the synergic regions were not anatomic synergists, suggesting a complex central organization for recruitment of muscles. The results suggest that neither a simple reflex mechanism nor a fixed muscle synergy organization is adequate to explain the muscle activation patterns observed in this postural control task. Our results are consistent with a centrally mediated pattern of muscle latencies combined with peripheral influence on muscle magnitude. We suggest that a flexible continuum of muscle synergies that are modifiable in a task-dependent manner be used for equilibrium control in stance.",
"title": ""
},
{
"docid": "b0cba371bb9628ac96a9ae2bb228f5a9",
"text": "Graph-based recommendation approaches can model associations between users and items alongside additional contextual information. Recent studies demonstrated that representing features extracted from social media (SM) auxiliary data, like friendships, jointly with traditional users/items ratings in the graph, contribute to recommendation accuracy. In this work, we take a step further and propose an extended graph representation that includes socio-demographic and personal traits extracted from the content posted by the user on SM. Empirical results demonstrate that processing unstructured textual information collected from Twitter and representing it in structured form in the graph improves recommendation performance, especially in cold start conditions.",
"title": ""
},
{
"docid": "9a3a73f35b27d751f237365cc34c8b28",
"text": "The development of brain metastases in patients with advanced stage melanoma is common, but the molecular mechanisms responsible for their development are poorly understood. Melanoma brain metastases cause significant morbidity and mortality and confer a poor prognosis; traditional therapies including whole brain radiation, stereotactic radiotherapy, or chemotherapy yield only modest increases in overall survival (OS) for these patients. While recently approved therapies have significantly improved OS in melanoma patients, only a small number of studies have investigated their efficacy in patients with brain metastases. Preliminary data suggest that some responses have been observed in intracranial lesions, which has sparked new clinical trials designed to evaluate the efficacy in melanoma patients with brain metastases. Simultaneously, recent advances in our understanding of the mechanisms of melanoma cell dissemination to the brain have revealed novel and potentially therapeutic targets. In this review, we provide an overview of newly discovered mechanisms of melanoma spread to the brain, discuss preclinical models that are being used to further our understanding of this deadly disease and provide an update of the current clinical trials for melanoma patients with brain metastases.",
"title": ""
},
{
"docid": "08844c98f9d6b92f84d272516af64281",
"text": "This paper describes the synthesis of Dynamic Differential Logic to increase the resistance of FPGA implementations against Differential Power Analysis. The synthesis procedure is developed and a detailed description is given of how EDA tools should be used appropriately to implement a secure digital design flow. Compared with an existing technique to implement Dynamic Differential Logic on FPGA, the technique saves a factor 2 in slice utilization. Experimental results also indicate that a secure version of the AES encryption algorithm can now be implemented with a mere 50% increase in time delay and 90% increase in slice utilization when compared with a normal non-secure single ended implementation.",
"title": ""
},
{
"docid": "43c9afd57b35c2db2c285b9c0b79b81a",
"text": "We present SfSNet, an end-to-end learning framework for producing an accurate decomposition of an unconstrained image of a human face into shape, reflectance and illuminance. Our network is designed to reflect a physical lambertian rendering model. SfSNet learns from a mixture of labeled synthetic and unlabeled real world images. This allows the network to capture low frequency variations from synthetic images and high frequency details from real images through the photometric reconstruction loss. SfSNet consists of a new decomposition architecture with residual blocks that learns a complete separation of albedo and normal. This is used along with the original image to predict lighting. SfSNet produces significantly better quantitative and qualitative results than state-of-the-art methods for inverse rendering and independent normal and illumination estimation.",
"title": ""
},
{
"docid": "9309ce05609d1cbdadcdc89fe8937473",
"text": "There is an increase use of ontology-driven approaches to support requirements engineering (RE) activities, such as elicitation, analysis, specification, validation and management of requirements. However, the RE community still lacks a comprehensive understanding of how ontologies are used in RE process. Thus, the main objective of this work is to investigate and better understand how ontologies support RE as well as identify to what extent they have been applied to this field. In order to meet our goal, we conducted a systematic literature review (SLR) to identify the primary studies on the use of ontologies in RE, following a predefined review protocol. We then identified the main RE phases addressed, the requirements modelling styles that have been used in conjunction with ontologies, the types of requirements that have been supported by the use of ontologies and the ontology languages that have been adopted. We also examined the types of contributions reported and looked for evidences of the benefits of ontology-driven RE. In summary, the main findings of this work are: (1) there are empirical evidences of the benefits of using ontologies in RE activities both in industry and academy, specially for reducing ambiguity, inconsistency and incompleteness of requirements; (2) the majority of studies only partially address the RE process; (3) there is a great diversity of RE modelling styles supported by ontologies; (4) most studies addressed only functional requirements; (5) several studies describe the use/development of tools to support different types of ontology-driven RE approaches; (6) about half of the studies followed W3C recommendations on ontology-related languages; and (7) a great variety of RE ontologies were identified; nevertheless, none of them has been broadly adopted by the community. Finally, we conclude this work by showing several promising research opportunities that are quite important and interesting but underexplored in current research and practice.",
"title": ""
},
{
"docid": "ac8a620e752144e3f4e20c16efb56ebc",
"text": "or as ventricular fibrillation, the circulation must be restored promptly; otherwise anoxia will result in irreversible damage. There are two techniques that may be used to meet the emergency: one is to open the chest and massage the heart directly and the other is to accomplish the same end by a new method of closed-chest cardiac massage. The latter method is described in this communication. The closed-chest alternating current defibrillator ' that",
"title": ""
},
{
"docid": "bf11d9a1ef46b24f5d13dc119e715005",
"text": "This paper explores the relationship between the three beliefs about online shopping ie. perceived usefulness, perceived ease of use and perceived enjoyment and intention to shop online. A sample of 150 respondents was selected using a purposive sampling method whereby the respondents have to be Internet users to be included in the survey. A structured, self-administered questionnaire was used to elicit responses from these respondents. The findings indicate that perceived ease of use (β = 0.70, p<0.01) and perceived enjoyment (β = 0.32, p<0.05) were positively related to intention to shop online whereas perceived usefulness was not significantly related to intention to shop online. Furthermore, perceived ease of use (β = 0.78, p<0.01) was found to be a significant predictor of perceived usefulness. This goes to show that ease of use and enjoyment are the 2 main drivers of intention to shop online. Implications of the findings for developers are discussed further.",
"title": ""
},
{
"docid": "18c885e8cb799086219585e419140ba5",
"text": "Reaction-time and eye-fixation data are analyzed to investigate how people infer the kinematics of simple mechanical systems (pulley systems) from diagrams showing their static configuration. It is proposed that this mental animation process involves decomposing the representation of a pulley system into smaller units corresponding to the machine components and animating these components in a sequence corresponding to the causal sequence of events in the machine's operation. Although it is possible for people to make inferences against the chain of causality in the machine, these inferences are more difficult, and people have a preference for inferences in the direction of causality. The mental animation process reflects both capacity limitations and limitations of mechanical knowledge.",
"title": ""
},
{
"docid": "13c2c1a1bd4ff886f93d8f89a14e39e2",
"text": "One of the key elements in qualitative data analysis is the systematic coding of text (Strauss and Corbin 1990:57%60; Miles and Huberman 1994:56). Codes are the building blocks for theory or model building and the foundation on which the analyst’s arguments rest. Implicitly or explicitly, they embody the assumptions underlying the analysis. Given the context of the interdisciplinary nature of research at the Centers for Disease Control and Prevention (CDC), we have sought to develop explicit guidelines for all aspects of qualitative data analysis, including codebook development.",
"title": ""
},
{
"docid": "f41f4e3b27bda4b3000f3ab5ae9ef22a",
"text": "This paper, first analysis the performance of image segmentation techniques; K-mean clustering algorithm and region growing for cyst area extraction from liver images, then enhances the performance of K-mean by post-processing. The K-mean algorithm makes the clusters effectively. But it could not separate out the desired cluster (cyst) from the image. So, to enhance its performance for cyst region extraction, morphological opening-by-reconstruction is applied on the output of K-mean clustering algorithm. The results are presented both qualitatively and quantitatively, which demonstrate the superiority of enhanced K-mean as compared to standard K-mean and region growing algorithm.",
"title": ""
},
{
"docid": "c8d2092150e1e50232a5bc3847520d19",
"text": "Thermoregulation disorders are associated with Body temperature fluctuation. Both hyper- and hypothermia are evidence of an ongoing pathological process. Contralateral symmetry in the Body heat spread is considered normal, while asymmetry, if above a certain level, implies an underlying pathology. Infrared thermography (IRT) is employed in many medical fields including ophthalmology. The earliest attempts of eye surface temperature evaluation were made in the 19th century. Over the last 50 years, different authors have been using this method to assess ocular adnexa, however, the technique remains insufficiently studied. The reported IRT data is often contradictory, which may be due to heterogeneity (in terms of severity) of patient groups and disparities between research parameters.",
"title": ""
},
{
"docid": "af5a8f2811ff334d742f802c6c1b7833",
"text": "Kalman filter extensions are commonly used algorithms for nonlinear state estimation in time series. The structure of the state and measurement models in the estimation problem can be exploited to reduce the computational demand of the algorithms. We review algorithms that use different forms of structure and show how they can be combined. We show also that the exploitation of the structure of the problem can lead to improved accuracy of the estimates while reducing the computational load.",
"title": ""
}
] |
scidocsrr
|
e5c8a4269c855f196ddfb34d5c58c304
|
xPrint: A Modularized Liquid Printer for Smart Materials Deposition
|
[
{
"docid": "9c800a53208bf1ded97e963ed4f80b28",
"text": "We have developed a multi-material 3D printing platform that is high-resolution, low-cost, and extensible. The key part of our platform is an integrated machine vision system. This system allows for self-calibration of printheads, 3D scanning, and a closed-feedback loop to enable print corrections. The integration of machine vision with 3D printing simplifies the overall platform design and enables new applications such as 3D printing over auxiliary parts. Furthermore, our platform dramatically expands the range of parts that can be 3D printed by simultaneously supporting up to 10 different materials that can interact optically and mechanically. The platform achieves a resolution of at least 40 μm by utilizing piezoelectric inkjet printheads adapted for 3D printing. The hardware is low cost (less than $7,000) since it is built exclusively from off-the-shelf components. The architecture is extensible and modular -- adding, removing, and exchanging printing modules can be done quickly. We provide a detailed analysis of the system's performance. We also demonstrate a variety of fabricated multi-material objects.",
"title": ""
},
{
"docid": "b6d856bf3b61883e3755cf00810b98c7",
"text": "The development of cell printing is vital for establishing biofabrication approaches as clinically relevant tools. Achieving this requires bio-inks which must not only be easily printable, but also allow controllable and reproducible printing of cells. This review outlines the general principles and current progress and compares the advantages and challenges for the most widely used biofabrication techniques for printing cells: extrusion, laser, microvalve, inkjet and tissue fragment printing. It is expected that significant advances in cell printing will result from synergistic combinations of these techniques and lead to optimised resolution, throughput and the overall complexity of printed constructs.",
"title": ""
}
] |
[
{
"docid": "61ff8f4f212aa0a307b228ab48beec77",
"text": "One of the most important features of the Web graph and social networks is that they are constantly evolving. The classical computational paradigm, which assumes a fixed data set as an input to an algorithm that terminates, is inadequate for such settings. In this paper we study the problem of computing PageRank on an evolving graph. We propose an algorithm that, at any moment in the time and by crawling a small portion of the graph, provides an estimate of the PageRank that is close to the true PageRank of the graph at that moment. We will also evaluate our algorithm experimentally on real data sets and on randomly generated inputs. Under a stylized model of graph evolution, we show that our algorithm achieves a provable performance guarantee that is significantly better than the naive algorithm that crawls the nodes in a round-robin fashion.",
"title": ""
},
{
"docid": "cf219b9093dc55f09d067954d8049aeb",
"text": "In this work we explore a straightforward variational Bayes scheme for Recurrent Neural Networks. Firstly, we show that a simple adaptation of truncated backpropagation through time can yield good quality uncertainty estimates and superior regularisation at only a small extra computational cost during training, also reducing the amount of parameters by 80%. Secondly, we demonstrate how a novel kind of posterior approximation yields further improvements to the performance of Bayesian RNNs. We incorporate local gradient information into the approximate posterior to sharpen it around the current batch statistics. We show how this technique is not exclusive to recurrent neural networks and can be applied more widely to train Bayesian neural networks. We also empirically demonstrate how Bayesian RNNs are superior to traditional RNNs on a language modelling benchmark and an image captioning task, as well as showing how each of these methods improve our model over a variety of other schemes for training them. We also introduce a new benchmark for studying uncertainty for language models so future methods can be easily compared.",
"title": ""
},
{
"docid": "02dab9e102d1b8f5e4f6ab66e04b3aad",
"text": "CHILD CARE PRACTICES ANTECEDING THREE PATTERNS OF PRESCHOOL BEHAVIOR. STUDIED SYSTEMATICALLY CHILD-REARING PRACTICES ASSOCIATED WITH COMPETENCE IN THE PRESCHOOL CHILD. 2015 American Psychological Association PDF documents require Adobe Acrobat Reader.Effects of Authoritative Parental Control on Child Behavior, Child. Child care practices anteceding three patterns of preschool behavior. Genetic.She is best known for her work on describing parental styles of child care and. Anteceding Three Patterns of Preschool Behavior, Genetic Psychology.Child care practices anteceding three patterns of preschool behavior.",
"title": ""
},
{
"docid": "d6adda476cc8bd69c37bd2d00f0dace4",
"text": "The conceptualization of a distinct construct known as statistics anxiety has led to the development of numerous rating scales, including the Statistical Anxiety Rating Scale (STARS), designed to assess levels of statistics anxiety. In the current study, the STARS was administered to a sample of 423 undergraduate and graduate students from a midsized, western United States university. The Rasch measurement rating scale model was used to analyze scores from the STARS. Misfitting items were removed from the analysis. In general, items from the six subscales represented a broad range of abilities, with the major exception being a lack of items at the lower extremes of the subscales. Additionally, a differential item functioning (DIF) analysis was performed across sex and student classification. Several items displayed DIF, which indicates subgroups may ascribe different meanings to those items. The paper concludes with several recommendations for researchers considering using the STARS.",
"title": ""
},
{
"docid": "b8fa50df3c76c2192c67cda7ae4d05f5",
"text": "Task parallelism has increasingly become a trend with programming models such as OpenMP 3.0, Cilk, Java Concurrency, X10, Chapel and Habanero-Java (HJ) to address the requirements of multicore programmers. While task parallelism increases productivity by allowing the programmer to express multiple levels of parallelism, it can also lead to performance degradation due to increased overheads. In this article, we introduce a transformation framework for optimizing task-parallel programs with a focus on task creation and task termination operations. These operations can appear explicitly in constructs such as async, finish in X10 and HJ, task, taskwait in OpenMP 3.0, and spawn, sync in Cilk, or implicitly in composite code statements such as foreach and ateach loops in X10, forall and foreach loops in HJ, and parallel loop in OpenMP.\n Our framework includes a definition of data dependence in task-parallel programs, a happens-before analysis algorithm, and a range of program transformations for optimizing task parallelism. Broadly, our transformations cover three different but interrelated optimizations: (1) finish-elimination, (2) forall-coarsening, and (3) loop-chunking. Finish-elimination removes redundant task termination operations, forall-coarsening replaces expensive task creation and termination operations with more efficient synchronization operations, and loop-chunking extracts useful parallelism from ideal parallelism. All three optimizations are specified in an iterative transformation framework that applies a sequence of relevant transformations until a fixed point is reached. Further, we discuss the impact of exception semantics on the specified transformations, and extend them to handle task-parallel programs with precise exception semantics. Experimental results were obtained for a collection of task-parallel benchmarks on three multicore platforms: a dual-socket 128-thread (16-core) Niagara T2 system, a quad-socket 16-core Intel Xeon SMP, and a quad-socket 32-core Power7 SMP. We have observed that the proposed optimizations interact with each other in a synergistic way, and result in an overall geometric average performance improvement between 6.28× and 10.30×, measured across all three platforms for the benchmarks studied.",
"title": ""
},
{
"docid": "c62742c65b105a83fa756af9b1a45a37",
"text": "This article treats numerical methods for tracking an implicitly defined path. The numerical precision required to successfully track such a path is difficult to predict a priori, and indeed, it may change dramatically through the course of the path. In current practice, one must either choose a conservatively large numerical precision at the outset or re-run paths multiple times in successively higher precision until success is achieved. To avoid unnecessary computational cost, it would be preferable to adaptively adjust the precision as the tracking proceeds in response to the local conditioning of the path. We present an algorithm that can be set to either reactively adjust precision in response to step failure or proactively set the precision using error estimates. We then test the relative merits of reactive and proactive adaptation on several examples arising as homotopies for solving systems of polynomial equations.",
"title": ""
},
{
"docid": "149d9a316e4c5df0c9300d26da685bc6",
"text": "Multiport dc-dc converters are particularly interesting for sustainable energy generation systems where diverse sources and storage elements are to be integrated. This paper presents a zero-voltage switching (ZVS) three-port bidirectional dc-dc converter. A simple and effective duty ratio control method is proposed to extend the ZVS operating range when input voltages vary widely. Soft-switching conditions over the full operating range are achievable by adjusting the duty ratio of the voltage applied to the transformer winding in response to the dc voltage variations at the port. Keeping the volt-second product (half-cycle voltage-time integral) equal for all the windings leads to ZVS conditions over the entire operating range. A detailed analysis is provided for both the two-port and the three-port converters. Furthermore, for the three-port converter a dual-PI-loop based control strategy is proposed to achieve constant output voltage, power flow management, and soft-switching. The three-port converter is implemented and tested for a fuel cell and supercapacitor system.",
"title": ""
},
{
"docid": "e939e98e090c57e269444ae5d503884b",
"text": "Bayesian hypothesis testing presents an attractive alternative to p value hypothesis testing. Part I of this series outlined several advantages of Bayesian hypothesis testing, including the ability to quantify evidence and the ability to monitor and update this evidence as data come in, without the need to know the intention with which the data were collected. Despite these and other practical advantages, Bayesian hypothesis tests are still reported relatively rarely. An important impediment to the widespread adoption of Bayesian tests is arguably the lack of user-friendly software for the run-of-the-mill statistical problems that confront psychologists for the analysis of almost every experiment: the t-test, ANOVA, correlation, regression, and contingency tables. In Part II of this series we introduce JASP ( http://www.jasp-stats.org ), an open-source, cross-platform, user-friendly graphical software package that allows users to carry out Bayesian hypothesis tests for standard statistical problems. JASP is based in part on the Bayesian analyses implemented in Morey and Rouder's BayesFactor package for R. Armed with JASP, the practical advantages of Bayesian hypothesis testing are only a mouse click away.",
"title": ""
},
{
"docid": "d5faccc7187a185f6e287a7cc29f0878",
"text": "The revival of deep neural networks and the availability of ImageNet laid the foundation for recent success in highly complex recognition tasks. However, ImageNet does not cover all visual concepts of all possible application scenarios. Hence, application experts still record new data constantly and expect the data to be used upon its availability. In this paper, we follow this observation and apply the classical concept of fine-tuning deep neural networks to scenarios where data from known or completely new classes is continuously added. Besides a straightforward realization of continuous fine-tuning, we empirically analyze how computational burdens of training can be further reduced. Finally, we visualize how the network’s attention maps evolve over time which allows for visually investigating what the network learned during continuous fine-tuning.",
"title": ""
},
{
"docid": "99c6fb7c765bf749fd40a78eadf3e723",
"text": "This paper presents a new design approach to nonlinear observers for Itô stochastic nonlinear systems with guaranteed stability. A stochastic contraction lemma is presented which is used to analyze incremental stability of the observer. A bound on the mean-squared distance between the trajectories of original dynamics and the observer dynamics is obtained as a function of the contraction rate and maximum noise intensity. The observer design is based on a non-unique state-dependent coefficient (SDC) form, which parametrizes the nonlinearity in an extended linear form. The observer gain synthesis algorithm, called linear matrix inequality state-dependent algebraic Riccati equation (LMI-SDARE), is presented. The LMI-SDARE uses a convex combination of multiple SDC parametrizations. An optimization problem with state-dependent linear matrix inequality (SDLMI) constraints is formulated to select the coefficients of the convex combination for maximizing the convergence rate and robustness against disturbances. Two variations of LMI-SDARE algorithm are also proposed. One of them named convex state-dependent Riccati equation (CSDRE) uses a chosen convex combination of multiple SDC matrices; and the other named Fixed-SDARE uses constant SDC matrices that are pre-computed by using conservative bounds of the system states while using constant coefficients of the convex combination pre-computed by a convex LMI optimization problem. A connection between contraction analysis and L2 gain of the nonlinear system is established in the presence of noise and disturbances. Results of simulation show superiority of the LMI-SDARE algorithm to the extended Kalman filter (EKF) and state-dependent differential Riccati equation (SDDRE) filter.",
"title": ""
},
{
"docid": "1f4b3ad078c42404c6aa27d107026b18",
"text": "This paper presents circuit design methodologies to enhance the electromagnetic immunity of an output-capacitor-free low-dropout (LDO) regulator. To evaluate the noise performance of an LDO regulator in the small-signal domain, power-supply rejection (PSR) is used. We optimize a bandgap reference circuit for optimum dc PSR, and propose a capacitor cancelation technique circuit for bandwidth compensation, and a low-noise biasing circuit for immunity enhancement in the bias circuit. For large-signal, transient performance enhancement, we suggest using a unity-gain amplifier to minimize the voltage difference of the differential inputs of the error amplifier, and an auxiliary N-channel metal oxide semiconductor (NMOS) pass transistor was used to maintain a stable gate voltage in the pass transistor. The effectiveness of the design methodologies proposed in this paper is verified using circuit simulations using an LDO regulator designed by 0.18-$\\mu$m CMOS process. When sine and pulse signals are applied to the input, the worst dc offset variations were enhanced from 36% to 16% and from 31.7% to 9.7%, respectively, as compared with those of the conventional LDO. We evaluated the noise performance versus the conducted electromagnetic interference generated by the dc–dc converter; the noise reduction level was significantly improved.",
"title": ""
},
{
"docid": "23d2349831a364e6b77e3c263a8321c8",
"text": "lmost a decade has passed since we started advocating a process of usability design [20-22]. This article is a status report about the value of this process and, mainly, a description of new ideas for enhancing the use of the process. We first note that, when followed , the process leads to usable, useful, likeable computer systems and applications. Nevertheless, experience and observational evidence show that (because of the way development work is organized and carried out) the process is often not followed, despite designers' enthusiasm and motivation to do so. To get around these organizational and technical obstacles, we propose a) greater reliance on existing methodologies for establishing test-able usability and productivity-enhancing goals; b) a new method for identifying and focuging attention on long-term, trends about the effects that computer applications have on end-user productivity; and c) a new approach, now under way, to application development, particularly the development of user interfaces. The process consists of four activities [18, 20-22]. Early Focus On Users. Designers should have direct contact with intended or actual users-via interviews , observations, surveys, partic-ipatory design. The aim is to understand users' cognitive, behav-ioral, attitudinal, and anthropomet-ric characteristics-and the characteristics of the jobs they will be doing. Integrated Design. All aspects of usability (e.g., user interface, help system, training plan, documentation) should evolve in parallel, rather than be defined sequentially, and should be under one management. Early~And Continual~User Testing. The only presently feasible approach to successful design is an empirical one, requiring observation and measurement of user behavior , careful evaluation of feedback , insightful solutions to existing problems, and strong motivation to make design changes. Iterative Design. A system under development must be modified based upon the results of behav-ioral tests of functions, user interface , help system, documentation, training approach. This process of implementation, testing, feedback, evaluation, and change must be repeated to iteratively improve the system. We, and others proposing similar ideas (see below), have worked hard at spreading this process of usabil-ity design. We have used numerous channels to accomplish this: frequent talks, workshops, seminars, publications, consulting, addressing arguments used against it [22], conducting a direct case study of the process [20], and identifying methods for people not fully trained as human factors professionals to use in carrying out this process [18]. The Process Works. Several lines of evidence indicate that this usabil-ity design process leads to systems, applications, and products …",
"title": ""
},
{
"docid": "87e52d72533c26f59af13aaea0ea4b7f",
"text": "This study investigated the work role attachment and retirement intentions of public school teachers in Calabar, Nigeria. It was motivated by the observation that most public school workers lack plans for retirement and as such do not prepare for it until it suddenly dawns on them. Few empirical studies were reviewed. Questionnaire was the main instrument used for data collection from a sample of 200 teachers. Independent t-test was used to test the stated hypotheses at 0.05 level of significance. Results showed that the committed/attached/involved workers have retirement intention to take a part-time job after retirement. The uncommitted/unattached/uninvolved workers have intention to retire earlier than those attached to their work. It was recommended that pre-retirement counselling should be adopted to assist teachers to develop good retirement plans.",
"title": ""
},
{
"docid": "a76a1aea4861dfd1e1f426ce55747b2a",
"text": "Which topics spark the most heated debates in social media? Identifying these topics is a first step towards creating systems which pierce echo chambers. In this paper, we perform a systematic methodological study of controversy detection using social media network structure and content.\n Unlike previous work, rather than identifying controversy in a single hand-picked topic and use domain-specific knowledge, we focus on comparing topics in any domain. Our approach to quantifying controversy is a graph-based three-stage pipeline, which involves (i) building a conversation graph about a topic, which represents alignment of opinion among users; (ii) partitioning the conversation graph to identify potential sides of the controversy; and (iii)measuring the amount of controversy from characteristics of the~graph.\n We perform an extensive comparison of controversy measures, as well as graph building approaches and data sources. We use both controversial and non-controversial topics on Twitter, as well as other external datasets. We find that our new random-walk-based measure outperforms existing ones in capturing the intuitive notion of controversy, and show that content features are vastly less helpful in this task.",
"title": ""
},
{
"docid": "936c4fb60d37cce15ed22227d766908f",
"text": "English. The SENTIment POLarity Classification Task 2016 (SENTIPOLC), is a rerun of the shared task on sentiment classification at the message level on Italian tweets proposed for the first time in 2014 for the Evalita evaluation campaign. It includes three subtasks: subjectivity classification, polarity classification, and irony detection. In 2016 SENTIPOLC has been again the most participated EVALITA task with a total of 57 submitted runs from 13 different teams. We present the datasets – which includes an enriched annotation scheme for dealing with the impact on polarity of a figurative use of language – the evaluation methodology, and discuss results and participating systems. Italiano. Descriviamo modalità e risultati della seconda edizione della campagna di valutazione di sistemi di sentiment analysis (SENTIment POLarity Classification Task), proposta nel contesto di “EVALITA 2016: Evaluation of NLP and Speech Tools for Italian”. In SENTIPOLC è stata valutata la capacità dei sistemi di riconoscere diversi aspetti del sentiment espresso nei messaggi Twitter in lingua italiana, con un’articolazione in tre sottotask: subjectivity classification, polarity classification e irony detection. La campagna ha suscitato nuovamente grande interesse, con un totale di 57 run inviati da 13 gruppi di partecipanti.",
"title": ""
},
{
"docid": "0e9c48c7e6871a0cec1cacc1bb0603c4",
"text": "The current banking crisis highlights the challenges faced in the traditional lending model, particularly in terms of screening smaller borrowers. The recent growth in online peer-to-peer lending marketplaces offers opportunities to examine different lending models that rely on screening by multiple peers. This paper evaluates the screening ability of lenders in such peer-topeer markets. Our methodology takes advantage of the fact that lenders do not observe a borrower’s true credit score but only see an aggregate credit category. We find that lenders are able to use available information to infer a third of the variation in creditworthiness that is captured by a borrower’s credit score. This inference is economically significant and allows lenders to lend at a 140-basis-points lower rate for borrowers with (unobserved to lenders) better credit scores within a credit category. While lenders infer the most from standard banking “hard” information, they also use non-standard (subjective) information. Our methodology shows, without needing to code subjective information that lenders learn even from such “softer” information, particularly when it is likely to provide credible signals regarding borrower creditworthiness. Our findings highlight the screening ability of peer-to-peer markets and suggest that these emerging markets may provide a viable complement to traditional lending markets, especially for smaller borrowers. JEL codes: D53, D8, G21, L81",
"title": ""
},
{
"docid": "e7d8f97d7d76ae089842e602b91df21c",
"text": "In this paper, we propose a novel text representation paradigm and a set of follow-up text representation models based on cognitive psychology theories. The intuition of our study is that the knowledge implied in a large collection of documents may improve the understanding of single documents. Based on cognitive psychology theories, we propose a general text enrichment framework, study the key factors to enable activation of implicit information, and develop new text representation methods to enrich text with the implicit information. Our study aims to mimic some aspects of human cognitive procedure in which given stimulant words serve to activate understanding implicit concepts. By incorporating human cognition into text representation, the proposed models advance existing studies by mining implicit information from given text and coordinating with most existing text representation approaches at the same time, which essentially bridges the gap between explicit and implicit information. Experiments on multiple tasks show that the implicit information activated by our proposed models matches human intuition and significantly improves the performance of the text mining tasks as well.",
"title": ""
},
{
"docid": "c8d2092150e1e50232a5bc3847520d19",
"text": "Thermoregulation disorders are associated with Body temperature fluctuation. Both hyper- and hypothermia are evidence of an ongoing pathological process. Contralateral symmetry in the Body heat spread is considered normal, while asymmetry, if above a certain level, implies an underlying pathology. Infrared thermography (IRT) is employed in many medical fields including ophthalmology. The earliest attempts of eye surface temperature evaluation were made in the 19th century. Over the last 50 years, different authors have been using this method to assess ocular adnexa, however, the technique remains insufficiently studied. The reported IRT data is often contradictory, which may be due to heterogeneity (in terms of severity) of patient groups and disparities between research parameters.",
"title": ""
},
{
"docid": "fb9669d1f3e43d69d5893a9b2d15957f",
"text": "Researchers in the Digital Humanities and journalists need to monitor, collect and analyze fresh online content regarding current events such as the Ebola outbreak or the Ukraine crisis on demand. However, existing focused crawling approaches only consider topical aspects while ignoring temporal aspects and therefore cannot achieve thematically coherent and fresh Web collections. Especially Social Media provide a rich source of fresh content, which is not used by state-of-the-art focused crawlers. In this paper we address the issues of enabling the collection of fresh and relevant Web and Social Web content for a topic of interest through seamless integration of Web and Social Media in a novel integrated focused crawler. The crawler collects Web and Social Media content in a single system and exploits the stream of fresh Social Media content for guiding the crawler.",
"title": ""
},
{
"docid": "4428768a82e5eb08495cbfaf36fb9569",
"text": "—A number of studies have shown that e-learning implementation is not simply a technological solution, but a process of many different factors such as social and behavioural contexts. Yet little is known about the important rule of such factors in technology adoption and use in the context of developing countries such as Lebanon. Therefore, the main objective of our study is to empirically validate an extended Technology Acceptance Model (TAM) (to include Social Norms and Quality of Work Life constructs) in the Lebanese context. A quantitative methodology approach was adopted in this study. To test the hypothesized research model, data were collected from 569 undergraduate and postgraduate students studying in Lebanon via questionnaire. The collected data were analysed using structural equation modeling (SEM) technique based on AMOS methods and in conjunction with multi-group analysis. As hypothesized, the results of the study revealed perceived usefulness (PU), perceived ease of use (PEU), social norms (SN) and Quality of Work life (QWL) to be significant determinants of students' behavioral intention (BI). This provides support for the applicability of the extended TAM in the Lebanese context. Implications to both theory and practice of this study are discussed at the end of the paper.",
"title": ""
}
] |
scidocsrr
|
a746849703daae985e9d1c5a62d6b9d3
|
t-FFD: free-form deformation by using triangular mesh
|
[
{
"docid": "7d741e9073218fa073249e512161748d",
"text": "Free-form deformation (FFD) is a powerful modeling tool, but controlling the shape of an object under complex deformations is often difficult. The interface to FFD in most conventional systems simply represents the underlying mathematics directly; users describe deformations by manipulating control points. The difficulty in controlling shape precisely is largely due to the control points being extraneous to the object; the deformed object does not follow the control points exactly. In addition, the number of degrees of freedom presented to the user can be overwhelming. We present a method that allows a user to control a free-form deformation of an object by manipulating the object directly, leading to better control of the deformation and a more intuitive interface. CR Categories: I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling Curve, Surface, Solid, and Object Representations; I.3.6 [Computer Graphics]: Methodology and Techniques Interaction Techniques. Additional",
"title": ""
}
] |
[
{
"docid": "b5c7b9f1f57d3d79d3fc8a97eef16331",
"text": "This paper presents an end-to-end convolutional neural network (CNN) for 2D-3D exemplar detection. We demonstrate that the ability to adapt the features of natural images to better align with those of CAD rendered views is critical to the success of our technique. We show that the adaptation can be learned by compositing rendered views of textured object models on natural images. Our approach can be naturally incorporated into a CNN detection pipeline and extends the accuracy and speed benefits from recent advances in deep learning to 2D-3D exemplar detection. We applied our method to two tasks: instance detection, where we evaluated on the IKEA dataset [36], and object category detection, where we out-perform Aubry et al. [3] for \"chair\" detection on a subset of the Pascal VOC dataset.",
"title": ""
},
{
"docid": "2ce21d12502577882ced4813603e9a72",
"text": "Positive psychology is the scientific study of positive experiences and positive individual traits, and the institutions that facilitate their development. A field concerned with well-being and optimal functioning, positive psychology aims to broaden the focus of clinical psychology beyond suffering and its direct alleviation. Our proposed conceptual framework parses happiness into three domains: pleasure, engagement, and meaning. For each of these constructs, there are now valid and practical assessment tools appropriate for the clinical setting. Additionally, mounting evidence demonstrates the efficacy and effectiveness of positive interventions aimed at cultivating pleasure, engagement, and meaning. We contend that positive interventions are justifiable in their own right. Positive interventions may also usefully supplement direct attempts to prevent and treat psychopathology and, indeed, may covertly be a central component of good psychotherapy as it is done now.",
"title": ""
},
{
"docid": "b7aea71af6c926344286fbfa214c4718",
"text": "Semantic segmentation is a task that covers most of the perception needs of intelligent vehicles in an unified way. ConvNets excel at this task, as they can be trained end-to-end to accurately classify multiple object categories in an image at the pixel level. However, current approaches normally involve complex architectures that are expensive in terms of computational resources and are not feasible for ITS applications. In this paper, we propose a deep architecture that is able to run in real-time while providing accurate semantic segmentation. The core of our ConvNet is a novel layer that uses residual connections and factorized convolutions in order to remain highly efficient while still retaining remarkable performance. Our network is able to run at 83 FPS in a single Titan X, and at more than 7 FPS in a Jetson TX1 (embedded GPU). A comprehensive set of experiments demonstrates that our system, trained from scratch on the challenging Cityscapes dataset, achieves a classification performance that is among the state of the art, while being orders of magnitude faster to compute than other architectures that achieve top precision. This makes our model an ideal approach for scene understanding in intelligent vehicles applications.",
"title": ""
},
{
"docid": "ac5c015aa485084431b8dba640f294b5",
"text": "In human sentence processing, cognitive load can be defined many ways. This report considers a definition of cognitive load in terms of the total probability of structural options that have been disconfirmed at some point in a sentence: the surprisal of word wi given its prefix w0...i−1 on a phrase-structural language model. These loads can be efficiently calculated using a probabilistic Earley parser (Stolcke, 1995) which is interpreted as generating predictions about reading time on a word-by-word basis. Under grammatical assumptions supported by corpusfrequency data, the operation of Stolcke’s probabilistic Earley parser correctly predicts processing phenomena associated with garden path structural ambiguity and with the subject/object relative asymmetry.",
"title": ""
},
{
"docid": "6bafdd357ad44debeda78d911a69da90",
"text": "We present a framework to tackle combinatorial optimization problems using neural networks and reinforcement learning. We focus on the traveling salesman problem (TSP) and train a recurrent neural network that, given a set of city coordinates, predicts a distribution over different city permutations. Using negative tour length as the reward signal, we optimize the parameters of the recurrent neural network using a policy gradient method. Without much engineering and heuristic designing, Neural Combinatorial Optimization achieves close to optimal results on 2D Euclidean graphs with up to 100 nodes. These results, albeit still quite far from state-of-the-art, give insights into how neural networks can be used as a general tool for tackling combinatorial optimization problems.",
"title": ""
},
{
"docid": "69ad6c10f8a7ae4629ff2aee38da0ddb",
"text": "A new hybrid security algorithm is presented for RSA cryptosystem named as Hybrid RSA. The system works on the concept of using two different keys- a private and a public for decryption and encryption processes. The value of public key (P) and private key (Q) depends on value of M, where M is the product of four prime numbers which increases the factorizing of variable M. moreover, the computation of P and Q involves computation of some more factors which makes it complex. This states that the variable x or M is transferred during encryption and decryption process, where x represents the multiplication of two prime numbers A and B. thus, it provides more secure path for encryption and decryption process. The proposed system is compared with the RSA and enhanced RSA (ERSA) algorithms to measure the key generation time, encryption and decryption time which is proved to be more efficient than RSA and ERSA.",
"title": ""
},
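The Hybrid RSA passage above builds the modulus M from four primes rather than two. As a rough illustration of that idea only (not the paper's exact scheme, and not secure), here is a toy RSA-style key generation and encryption round trip where the modulus is a product of four small primes; the prime values, message, and function names are assumptions chosen for the example.

```python
# Toy sketch of RSA-style key generation with a modulus built from four
# primes. Tiny illustrative values only; not the paper's scheme, not secure.
from math import gcd

def keygen(primes, e=65537):
    m = 1
    phi = 1
    for p in primes:
        m *= p
        phi *= (p - 1)
    assert gcd(e, phi) == 1
    d = pow(e, -1, phi)          # modular inverse (Python 3.8+)
    return (e, m), (d, m)        # public key, private key

def encrypt(msg, pub):
    e, m = pub
    return pow(msg, e, m)

def decrypt(cipher, priv):
    d, m = priv
    return pow(cipher, d, m)

if __name__ == "__main__":
    pub, priv = keygen([10007, 10009, 10037, 10039])
    c = encrypt(42, pub)
    print(decrypt(c, priv))      # prints 42
```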
{
"docid": "4789f548800a38c11f0fa2f91efc95c9",
"text": "Most of the Low Dropout Regulators (LDRs) have limited operation range of load current due to their stability problem. This paper proposes a new frequency compensation scheme for LDR to optimize the regulator performance over a wide load current range. By introducing a tracking zero to cancel out the regulator output pole, the frequency response of the feedback loop becomes load current independent. The open-loop DC gain is boosted up by a low frequency dominant pole, which increases the regulator accuracy. To demonstrate the feasibility of the proposed scheme, a LDR utilizing the new frequency compensation scheme is designed and fabricated using TSMC 0.3511~1 digital CMOS process. Simulation results show that with output current from 0 pA to 100 mA the bandwidth variation is only 2.3 times and the minimum DC gain is 72 dB. Measurement of the dynamic response matches well with simulation.",
"title": ""
},
{
"docid": "d0811a8c8b760b8dadfa9a51df568bd9",
"text": "A strain of the microalga Chlorella pyrenoidosa F-9 in our laboratory showed special characteristics when transferred from autotrophic to heterotrophic culture. In order to elucidate the possible metabolic mechanism, the gene expression profiles of the autonomous organelles in the green alga C. pyrenoidosa under autotrophic and heterotrophic cultivation were compared by suppression subtractive hybridization technology. Two subtracted libraries of autotrophic and heterotrophic C. pyrenoidosa F-9 were constructed, and 160 clones from the heterotrophic library were randomly selected for DNA sequencing. Dot blot hybridization showed that the ratio of positivity was 70.31% from the 768 clones. Five chloroplast genes (ftsH, psbB, rbcL, atpB, and infA) and two mitochondrial genes (cox2 and nad6) were selected to verify their expression levels by real-time quantitative polymerase chain reaction. Results showed that the seven genes were abundantly expressed in the heterotrophic culture. Among the seven genes, the least increment of gene expression was ftsH, which was expressed 1.31-1.85-fold higher under heterotrophy culture than under autotrophy culture, and the highest increment was psbB, which increased 28.07-39.36 times compared with that under autotrophy conditions. The expression levels of the other five genes were about 10 times higher in heterotrophic algae than in autotrophic algae. In inclusion, the chloroplast and mitochondrial genes in C. pyrenoidosa F-9 might be actively involved in heterotrophic metabolism.",
"title": ""
},
{
"docid": "f7c4b71b970b7527cd2650ce1e05ab1b",
"text": "BACKGROUND\nPhysician burnout has reached epidemic levels, as documented in national studies of both physicians in training and practising physicians. The consequences are negative effects on patient care, professionalism, physicians' own care and safety, and the viability of health-care systems. A more complete understanding than at present of the quality and outcomes of the literature on approaches to prevent and reduce burnout is necessary.\n\n\nMETHODS\nIn this systematic review and meta-analysis, we searched MEDLINE, Embase, PsycINFO, Scopus, Web of Science, and the Education Resources Information Center from inception to Jan 15, 2016, for studies of interventions to prevent and reduce physician burnout, including single-arm pre-post comparison studies. We required studies to provide physician-specific burnout data using burnout measures with validity support from commonly accepted sources of evidence. We excluded studies of medical students and non-physician health-care providers. We considered potential eligibility of the abstracts and extracted data from eligible studies using a standardised form. Outcomes were changes in overall burnout, emotional exhaustion score (and high emotional exhaustion), and depersonalisation score (and high depersonalisation). We used random-effects models to calculate pooled mean difference estimates for changes in each outcome.\n\n\nFINDINGS\nWe identified 2617 articles, of which 15 randomised trials including 716 physicians and 37 cohort studies including 2914 physicians met inclusion criteria. Overall burnout decreased from 54% to 44% (difference 10% [95% CI 5-14]; p<0·0001; I2=15%; 14 studies), emotional exhaustion score decreased from 23·82 points to 21·17 points (2·65 points [1·67-3·64]; p<0·0001; I2=82%; 40 studies), and depersonalisation score decreased from 9·05 to 8·41 (0·64 points [0·15-1·14]; p=0·01; I2=58%; 36 studies). High emotional exhaustion decreased from 38% to 24% (14% [11-18]; p<0·0001; I2=0%; 21 studies) and high depersonalisation decreased from 38% to 34% (4% [0-8]; p=0·04; I2=0%; 16 studies).\n\n\nINTERPRETATION\nThe literature indicates that both individual-focused and structural or organisational strategies can result in clinically meaningful reductions in burnout among physicians. Further research is needed to establish which interventions are most effective in specific populations, as well as how individual and organisational solutions might be combined to deliver even greater improvements in physician wellbeing than those achieved with individual solutions.\n\n\nFUNDING\nArnold P Gold Foundation Research Institute.",
"title": ""
},
{
"docid": "274485dd39c0727c99fcc0a07d434b25",
"text": "Fetal mortality rate is considered a good measure of the quality of health care in a country or a medical facility. If we look at the current scenario, we find that we have focused more on child mortality rate than on fetus mortality. Even it is a same situation in developed country. Our aim is to provide technological solution to help decrease the fetal mortality rate. Also if we consider pregnant women, they have to come to hospital 2-3 times a week for their regular checkups. It becomes a problem for working women and women having diabetes or other disease. For these reasons it would be very helpful if they can do this by themselves at home. This will reduce the frequency of their visit to the hospital at same time cause no compromise in the wellbeing of both the mother and the child. The end to end system consists of wearable sensors, built into a fabric belt, that collects and sends vital signs of patients via bluetooth to smart mobile phones for further processing and made available to required personnel allowing efficient monitoring and alerting when attention is required in often challenging and chaotic scenarios.",
"title": ""
},
{
"docid": "b27ab468a885a3d52ec2081be06db2ef",
"text": "The beautification of human photos usually requires professional editing softwares, which are difficult for most users. In this technical demonstration, we propose a deep face beautification framework, which is able to automatically modify the geometrical structure of a face so as to boost the attractiveness. A learning based approach is adopted to capture the underlying relations between the facial shape and the attractiveness via training the Deep Beauty Predictor (DBP). Relying on the pre-trained DBP, we construct the BeAuty SHaper (BASH) to infer the \"flows\" of landmarks towards the maximal aesthetic level. BASH modifies the facial landmarks with the direct guidance of the beauty score estimated by DBP.",
"title": ""
},
{
"docid": "1709f180c56cab295bf9fd9c3e35d4ef",
"text": "Harmonic radar systems provide an effective modality for tracking insect behavior. This letter presents a harmonic radar system proposed to track the migration of the Emerald Ash Borer (EAB). The system offers a unique combination of portability, low power and small tag design. It is comprised of a compact radar unit and a passive RF tag for mounting on the insect. The radar unit transmits a 5.96 GHz signal and detects at the 11.812 GHz band. A prototype of the radar unit was built and tested, and a new small tag was designed for the application. The new tag offers improved harmonic conversion efficiency and much smaller size as compared to previous harmonic radar systems for tracking insects. Unlike RFID detectors whose sensitivity allows detection up to a few meters, the developed radar can detect a tagged insect up to 58 m (190 ft).",
"title": ""
},
{
"docid": "a600a19440b8e6799e0e603cf56ff141",
"text": "In this work, we address the problem of distributed expert finding using chains of social referrals and profile matching with only local information in online social networks. By assuming that users are selfish, rational, and have privately known cost of participating in the referrals, we design a novel truthful efficient mechanism in which an expert-finding query will be relayed by intermediate users. When receiving a referral request, a participant will locally choose among her neighbors some user to relay the request. In our mechanism, several closely coupled methods are carefully designed to improve the performance of distributed search, including, profile matching, social acquaintance prediction, score function for locally choosing relay neighbors, and budget estimation. We conduct extensive experiments on several data sets of online social networks. The extensive study of our mechanism shows that the success rate of our mechanism is about 90 percent in finding closely matched experts using only local search and limited budget, which significantly improves the previously best rate 20 percent. The overall cost of finding an expert by our truthful mechanism is about 20 percent of the untruthful methods, e.g., the method that always selects high-degree neighbors. The median length of social referral chains is 6 using our localized search decision, which surprisingly matches the well-known small-world phenomenon of global social structures.",
"title": ""
},
{
"docid": "fd91f09861da433d27d4db3f7d2a38a6",
"text": "Herbert Simon’s research endeavor aimed to understand the processes that participate in human decision making. However, despite his effort to investigate this question, his work did not have the impact in the “decision making” community that it had in other fields. His rejection of the assumption of perfect rationality, made in mainstream economics, led him to develop the concept of bounded rationality. Simon’s approach also emphasized the limitations of the cognitive system, the change of processes due to expertise, and the direct empirical study of cognitive processes involved in decision making. In this article, we argue that his subsequent research program in problem solving and expertise offered critical tools for studying decision-making processes that took into account his original notion of bounded rationality. Unfortunately, these tools were ignored by the main research paradigms in decision making, such as Tversky and Kahneman’s biased rationality approach (also known as the heuristics and biases approach) and the ecological approach advanced by Gigerenzer and others. We make a proposal of how to integrate Simon’s approach with the main current approaches to decision making. We argue that this would lead to better models of decision making that are more generalizable, have higher ecological validity, include specification of cognitive processes, and provide a better understanding of the interaction between the characteristics of the cognitive system and the contingencies of the environment.",
"title": ""
},
{
"docid": "39eac1617b9b68f68022577951460fb5",
"text": "Web services support software architectures that can evolve dynamically. In particular, here we focus on architectures where services are composed (orchestrated) through a workflow described in the BPEL language. We assume that the resulting composite service refers to external services through assertions that specify their expected functional and non-functional properties. Based on these assertions, the composite service may be verified at design time by checking that it ensures certain relevant properties. Because of the dynamic nature of Web services and the multiple stakeholders involved in their provision, however, the external services may evolve dynamically, and even unexpectedly. They may become inconsistent with respect to the assertions against which the workflow was verified during development. As a consequence, validation of the composition must extend to run time. We introduce an assertion language, called ALBERT, which can be used to specify both functional and non-functional properties. We also describe an environment which supports design-time verification of ALBERT assertions for BPEL workflows via model checking. At run time, the assertions can be turned into checks that a software monitor performs on the composite system to verify that it continues to guarantee its required properties. A TeleAssistance application is provided as a running example to illustrate our validation framework.",
"title": ""
},
{
"docid": "2ecfc909301dcc6241bec2472b4d4135",
"text": "Previous work on text mining has almost exclusively focused on a single stream. However, we often have available multiple text streams indexed by the same set of time points (called coordinated text streams), which offer new opportunities for text mining. For example, when a major event happens, all the news articles published by different agencies in different languages tend to cover the same event for a certain period, exhibiting a correlated bursty topic pattern in all the news article streams. In general, mining correlated bursty topic patterns from coordinated text streams can reveal interesting latent associations or events behind these streams. In this paper, we define and study this novel text mining problem. We propose a general probabilistic algorithm which can effectively discover correlated bursty patterns and their bursty periods across text streams even if the streams have completely different vocabularies (e.g., English vs Chinese). Evaluation of the proposed method on a news data set and a literature data set shows that it can effectively discover quite meaningful topic patterns from both data sets: the patterns discovered from the news data set accurately reveal the major common events covered in the two streams of news articles (in English and Chinese, respectively), while the patterns discovered from two database publication streams match well with the major research paradigm shifts in database research. Since the proposed method is general and does not require the streams to share vocabulary, it can be applied to any coordinated text streams to discover correlated topic patterns that burst in multiple streams in the same period.",
"title": ""
},
{
"docid": "301ce75026839f85bc15100a9a7cc5ca",
"text": "This paper presents a novel visual-inertial integration system for human navigation in free-living environments, where the measurements from wearable inertial and monocular visual sensors are integrated. The preestimated orientation, obtained from magnet, angular rate, and gravity sensors, is used to estimate the translation based on the data from the visual and inertial sensors. This has a significant effect on the performance of the fusion sensing strategy and makes the fusion procedure much easier, because the gravitational acceleration can be correctly removed from the accelerometer measurements before the fusion procedure, where a linear Kalman filter is selected as the fusion estimator. Furthermore, the use of preestimated orientation can help to eliminate erroneous point matches based on the properties of the pure camera translation and thus the computational requirements can be significantly reduced compared with the RANdom SAmple Consensus algorithm. In addition, an adaptive-frame rate single camera is selected to not only avoid motion blur based on the angular velocity and acceleration after compensation, but also to make an effect called visual zero-velocity update for the static motion. Thus, it can recover a more accurate baseline and meanwhile reduce the computational requirements. In particular, an absolute scale factor, which is usually lost in monocular camera tracking, can be obtained by introducing it into the estimator. Simulation and experimental results are presented for different environments with different types of movement and the results from a Pioneer robot are used to demonstrate the accuracy of the proposed method.",
"title": ""
},
{
"docid": "1968573cf98307276bf0f10037aa3623",
"text": "In many imaging applications, the continuous phase information of the measured signal is wrapped to a single period of 2π, resulting in phase ambiguity. In this paper we consider the two-dimensional phase unwrapping problem and propose a Maximum a Posteriori (MAP) framework for estimating the true phase values based on the wrapped phase data. In particular, assuming a joint Gaussian prior on the original phase image, we show that the MAP formulation leads to a binary quadratic minimization problem. The latter can be efficiently solved by semidefinite relaxation (SDR). We compare the performances of our proposed method with the existing L1/L2-norm minimization approaches. The numerical results demonstrate that the SDR approach significantly outperforms the existing phase unwrapping methods.",
"title": ""
},
{
"docid": "b85e9ef3652a99e55414d95bfed9cc0d",
"text": "Regulatory T cells (Tregs) prevail as a specialized cell lineage that has a central role in the dominant control of immunological tolerance and maintenance of immune homeostasis. Thymus-derived Tregs (tTregs) and their peripherally induced counterparts (pTregs) are imprinted with unique Forkhead box protein 3 (Foxp3)-dependent and independent transcriptional and epigenetic characteristics that bestows on them the ability to suppress disparate immunological and non-immunological challenges. Thus, unidirectional commitment and the predominant stability of this regulatory lineage is essential for their unwavering and robust suppressor function and has clinical implications for the use of Tregs as cellular therapy for various immune pathologies. However, recent studies have revealed considerable heterogeneity or plasticity in the Treg lineage, acquisition of alternative effector or hybrid fates, and promotion rather than suppression of inflammation in extreme contexts. In addition, the absolute stability of Tregs under all circumstances has been questioned. Since these observations challenge the safety and efficacy of human Treg therapy, the issue of Treg stability versus plasticity continues to be enthusiastically debated. In this review, we assess our current understanding of the defining features of Foxp3(+) Tregs, the intrinsic and extrinsic cues that guide development and commitment to the Treg lineage, and the phenotypic and functional heterogeneity that shapes the plasticity and stability of this critical regulatory population in inflammatory contexts.",
"title": ""
},
{
"docid": "d7ab8b7604d90e1a3bb6b4c1e54833a0",
"text": "Invisibility devices have captured the human imagination for many years. Recent theories have proposed schemes for cloaking devices using transformation optics and conformal mapping. Metamaterials, with spatially tailored properties, have provided the necessary medium by enabling precise control over the flow of electromagnetic waves. Using metamaterials, the first microwave cloaking has been achieved but the realization of cloaking at optical frequencies, a key step towards achieving actual invisibility, has remained elusive. Here, we report the first experimental demonstration of optical cloaking. The optical 'carpet' cloak is designed using quasi-conformal mapping to conceal an object that is placed under a curved reflecting surface by imitating the reflection of a flat surface. The cloak consists only of isotropic dielectric materials, which enables broadband and low-loss invisibility at a wavelength range of 1,400-1,800 nm.",
"title": ""
}
] |
scidocsrr
|
7cbf7165bed84c7f0692356c3c5964cf
|
Framework for learning agents in quantum environments
|
[
{
"docid": "871386b0aa9f04eeb622617e241fc6f0",
"text": "I show that for any number of oracle lookups up to about π/4 √ N , Grover’s quantum searching algorithm gives the maximal possible probability of finding the desired element. I explain why this is also true for quantum algorithms which use measurements during the computation. I also show that unfortunately quantum searching cannot be parallelized better than by assigning different parts of the search space to independent quantum computers. 1 Quantum searching Imagine we have N cases of which only one fulfills our conditions. E.g. we have a function which gives 1 only for one out of N possible input values and gives 0 otherwise. Often an analysis of the algorithm for calculating the function will allow us to find quickly the input value for which the output is 1. Here we consider the case where we do not know better than to repeatedly calculate the function without looking at the algorithm, e.g. because the function is calculated in a black box subroutine into which we are not allowed to look. In computer science this is called an oracle. Here I consider only oracles which give 1 for exactly one input. Quantum searching for the case with several inputs which give 1 and even with an unknown number of such inputs is treated in [4]. Obviously on a classical computer we have to query the oracle on average N/2 times before we find the answer. Grover [1] has given a quantum algorithm which can solve the problem in about π/4 √ N steps. Bennett et al. [3] have shown that asymptotically no quantum algorithm can solve the problem in less than a number of steps proportional to √ N . Boyer et al. [4] have improved this result to show that e.g. for a 50% success probability no quantum algorithm can do better than only a few percent faster than Grover’s algorithm. I improve ∗Supported by Schweizerischer Nationalfonds and LANL",
"title": ""
},
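The passage above quotes the roughly π/4·√N optimal number of oracle lookups for Grover search. A quick numeric check of that count, using the standard success-probability formula sin²((2k+1)θ) with θ = arcsin(1/√N) for a single marked item, is sketched below; the choice N = 2^20 is an arbitrary example value.

```python
# Numeric check of the ~ (pi/4) * sqrt(N) iteration count quoted above.
# With one marked item out of N, the success probability after k Grover
# iterations is sin^2((2k+1) * theta), where theta = arcsin(1/sqrt(N)).
import math

def grover_success(N, k):
    theta = math.asin(1.0 / math.sqrt(N))
    return math.sin((2 * k + 1) * theta) ** 2

N = 2 ** 20
k_opt = math.floor(math.pi / 4 * math.sqrt(N))
print(k_opt)                      # ~804 iterations
print(grover_success(N, k_opt))   # close to 1
```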
{
"docid": "a56edeae4520c745003d5cd0baae7708",
"text": "A random access memory (RAM) uses n bits to randomly address N=2(n) distinct memory cells. A quantum random access memory (QRAM) uses n qubits to address any quantum superposition of N memory cells. We present an architecture that exponentially reduces the requirements for a memory call: O(logN) switches need be thrown instead of the N used in conventional (classical or quantum) RAM designs. This yields a more robust QRAM algorithm, as it in general requires entanglement among exponentially less gates, and leads to an exponential decrease in the power needed for addressing. A quantum optical implementation is presented.",
"title": ""
}
] |
[
{
"docid": "e8cf458c60dc7b4a8f71df2fabf1558d",
"text": "We propose a vision-based method that localizes a ground vehicle using publicly available satellite imagery as the only prior knowledge of the environment. Our approach takes as input a sequence of ground-level images acquired by the vehicle as it navigates, and outputs an estimate of the vehicle's pose relative to a georeferenced satellite image. We overcome the significant viewpoint and appearance variations between the images through a neural multi-view model that learns location-discriminative embeddings in which ground-level images are matched with their corresponding satellite view of the scene. We use this learned function as an observation model in a filtering framework to maintain a distribution over the vehicle's pose. We evaluate our method on different benchmark datasets and demonstrate its ability localize ground-level images in environments novel relative to training, despite the challenges of significant viewpoint and appearance variations.",
"title": ""
},
{
"docid": "2ef679126681b96cbe439a45c8f94b91",
"text": "Recent research in computational linguistics has developed algorithms which associate matrices with adjectives and verbs, based on the distribution of words in a corpus of text. These matrices are linear operators on a vector space of context words. They are used to construct the meaning of composite expressions from that of the elementary constituents, forming part of a compositional distributional approach to semantics. We propose a Matrix Theory approach to this data, based on permutation symmetry along with Gaussian weights and their perturbations. A simple Gaussian model is tested against word matrices created from a large corpus of text. We characterize the cubic and quartic departures from the model, which we propose, alongside the Gaussian parameters, as signatures for comparison of linguistic corpora. We propose that perturbed Gaussian models with permutation symmetry provide a promising framework for characterizing the nature of universality in the statistical properties of word matrices. The matrix theory framework developed here exploits the view of statistics as zero dimensional perturbative quantum field theory. It perceives language as a physical system realizing a universality class of matrix statistics characterized by permutation symmetry. 1 [email protected] 2 [email protected] 3 [email protected] ar X iv :1 70 3. 10 25 2v 1 [ cs .C L ] 2 8 M ar 2 01 7",
"title": ""
},
{
"docid": "b92252ac701b564f17aa36d411f65ecf",
"text": "Abstract Image segmentation is a primary step in image analysis used to separate the input image into meaningful regions. MRI is an advanced medical imaging technique widely used in detecting brain tumors. Segmentation of Brain MR image is a complex task. Among the many approaches developed for the segmentation of MR images, a popular method is fuzzy C-mean (FCM). In the proposed method, Artificial Bee Colony (ABC) algorithm is used to improve the efficiency of FCM on abnormal brain images.",
"title": ""
},
{
"docid": "56bfeb2dfec7485a7f980a2c809207e4",
"text": "We measure the size of the fiscal multiplier using a model with incomplete markets and rigid prices and wages. Allowing for incomplete markets instead of complete markets—the prevalent assumption in the literature—comes with two advantages. First, the incomplete markets model delivers a realistic distribution of the marginal propensity to consume across the population, whereas all households counterfactually behave according to the permanent income hypothesis if markets are complete. Second, in our model the equilibrium response of prices, output, consumption and employment to fiscal stimulus is uniquely determined for any monetary policy including the zero-lower bound. We find that market incompleteness plays the key role in determining the size of the fiscal multiplier, which is slightly above or below 1 depending on whether spending is tax or deficit financed. The size of fiscal multiplier remains similar in a liquidity trap.",
"title": ""
},
{
"docid": "9fd0049d079919282082a119763f2740",
"text": "The rapid development of Internet has given birth to a new business model: Cloud Computing. This new paradigm has experienced a fantastic rise in recent years. Because of its infancy, it remains a model to be developed. In particular, it must offer the same features of services than traditional systems. The cloud computing is large distributed systems that employ distributed resources to deliver a service to end users by implementing several technologies. Hence providing acceptable response time for end users, presents a major challenge for cloud computing. All components must cooperate to meet this challenge, in particular through load balancing algorithms. This will enhance the availability and will gain the end user confidence. In this paper we try to give an overview of load balancing in the cloud computing by exposing the most important research challenges.",
"title": ""
},
{
"docid": "7e6f0352f8aae04099c1daf006b9cd52",
"text": "In this tutorial, we present the recent work in the database community for handling Big Spatial Data. This topic became very hot due to the recent explosion in the amount of spatial data generated by smartphones, satellites and medical devices, among others. This tutorial goes beyond the use of existing systems as-is (e.g., Hadoop, Spark or Impala), and digs deep into the core components of big systems (e.g., indexing and query processing) to describe how they are designed to handle big spatial data. During this 90-minute tutorial, we review the state-of-the-art work in the area of Big Spatial Data while classifying the existing research efforts according to the implementation approach, underlying architecture, and system components. In addition, we provide case studies of full-fledged systems and applications that handle Big Spatial Data which allows the audience to better comprehend the whole tutorial.",
"title": ""
},
{
"docid": "22c42b88a1aa733ea0c402274d102302",
"text": "We investigated 2 engagement-fostering aspects of teachers’ instructional styles—autonomy support and structure—and hypothesized that students’ engagement would be highest when teachers provided high levels of both. Trained observers rated teachers’ instructional styles and students’ behavioral engagement in 133 public high school classrooms in the Midwest, and 1,584 students in Grades 9 –11 reported their subjective engagement. Correlational and hierarchical linear modeling analyses showed 3 results: (a) Autonomy support and structure were positively correlated, (b) autonomy support and structure both predicted students’ behavioral engagement, and (c) only autonomy support was a unique predictor of students’ self-reported engagement. We discuss, first, how these findings help illuminate the relations between autonomy support and structure as 2 complementary, rather than antagonistic or curvilinear, engagement-fostering aspects of teachers’ instructional styles and, second, the somewhat different results obtained for the behavioral versus self-report measures of students’ classroom engagement.",
"title": ""
},
{
"docid": "22aa310437f0d860da550497728765c8",
"text": "We have used the initiation of pursuit eye movements as a tool to reveal properties of motion processing in the neural pathways that provide inputs to the human pursuit system. Horizontal and vertical eye position were recorded with a magnetic search coil in six normal adults. Stimuli were provided by individual trials of ramp target motion. Analysis was restricted to the first 100 ms of eye movement, which precedes the onset of corrective feedback. By recording the transient response to target motion at speeds the pursuit motor system can achieve, we investigated the visual properties of images that initiate pursuit. We have found effects of varying the retinal location, the direction, the velocity, the intensity, and the size of the stimulus. Eye acceleration in the first 100 ms of pursuit depended on both the direction of target motion and the initial position of the moving target. For horizontal target motion, eye acceleration was highest if the stimulus was close to the center of the visual field and moved toward the vertical meridian. For vertical target motion, eye acceleration was highest when the stimulus moved upward or downward within the lower visual field. The shape of the relationship between eye acceleration and initial target position was similar for target velocities ranging from 1.0 to 45 degrees/s. The initiation of pursuit showed two components that had different visual properties and were expressed early and late in the first 100 ms of pursuit. In the first 20 ms, instantaneous eye acceleration was in the direction of target motion but did not depend on other visual properties of the stimulus. At later times (e.g., 80-100 ms after pursuit initiation), instantaneous eye acceleration was strongly dependent on each property we tested. Targets that started close to and moved toward the position of fixation evoked the highest eye accelerations. For high-intensity targets, eye acceleration increased steadily as target velocity increased. For low-intensity targets, eye acceleration was selective for target velocities of 30-45 degrees/s. The properties of pursuit initiation in humans, including the differences between the early and late components, are remarkably similar to those reported by Lisberger and Westbrook (12) in monkeys. Our data provide evidence that the cell populations responsible for motion processing are similar in humans and monkeys and imply that the functional organization of the visual cortex is similar in the two species.",
"title": ""
},
{
"docid": "300485eefc3020135cdaa31ad36f7462",
"text": "The number of cyber threats is constantly increasing. In 2013, 200,000 malicious tools were identified each day by antivirus vendors. This figure rose to 800,000 per day in 2014 and then to 1.8 million per day in 2016! The bar of 3 million per day will be crossed in 2017. Traditional security tools (mainly signature-based) show their limits and are less and less effective to detect these new cyber threats. Detecting never-seen-before or zero-day malware, including ransomware, efficiently requires a new approach in cyber security management. This requires a move from signature-based detection to behavior-based detection. We have developed a data breach detection system named CDS using Machine Learning techniques which is able to identify zero-day malware by analyzing the network traffic. In this paper, we present the capability of the CDS to detect zero-day ransomware, particularly WannaCry.",
"title": ""
},
{
"docid": "b4e676d4d11039c5c5feb5e549eb364f",
"text": "Abst ract Qualit at ive case st udy met hodology provides t ools f or researchers t o st udy complex phenomena wit hin t heir cont ext s. When t he approach is applied correct ly, it becomes a valuable met hod f or healt h science research t o develop t heory, evaluat e programs, and develop int ervent ions. T he purpose of t his paper is t o guide t he novice researcher in ident if ying t he key element s f or designing and implement ing qualit at ive case st udy research project s. An overview of t he t ypes of case st udy designs is provided along wit h general recommendat ions f or writ ing t he research quest ions, developing proposit ions, det ermining t he “case” under st udy, binding t he case and a discussion of dat a sources and t riangulat ion. T o f acilit at e applicat ion of t hese principles, clear examples of research quest ions, st udy proposit ions and t he dif f erent t ypes of case st udy designs are provided Keywo rds Case St udy and Qualit at ive Met hod Publicat io n Dat e 12-1-2008 Creat ive Co mmo ns License Journal Home About T his Journal Aims & Scope Edit orial Board Policies Open Access",
"title": ""
},
{
"docid": "77a156afb22bbecd37d0db073ef06492",
"text": "Rhonda Farrell University of Fairfax, Vienna, VA ABSTRACT While acknowledging the many benefits that cloud computing solutions bring to the world, it is important to note that recent research and studies of these technologies have identified a myriad of potential governance, risk, and compliance (GRC) issues. While industry clearly acknowledges their existence and seeks to them as much as possible, timing-wise it is still well before the legal framework has been put in place to adequately protect and adequately respond to these new and differing global challenges. This paper seeks to inform the potential cloud adopter, not only of the perceived great technological benefit, but to also bring to light the potential security, privacy, and related GRC issues which will need to be prioritized, managed, and mitigated before full implementation occurs.",
"title": ""
},
{
"docid": "1bcf0caab94fd99e9b7407b10eaddef0",
"text": "Cloud computing and digital forensics are emerging fields of technology. Unlike traditional digital forensics where the target environment can be almost completely isolated, acquired and can be under the investigators control; in cloud environments, the distribution of computation and storage poses unique and complex challenges to the investigators. Recently, the term “cloud forensics” has an increasing presence in the field of digital forensics. In this state-of-the-art review, we included the most recent research efforts that used “cloud forensics” as a keyword and then classify the literature into three dimensions: (1) survey-based, (2) technology-based and (3) forensics-procedural-based. We discuss widely accepted standard bodies and their efforts to address the current trend of cloud forensics. Our aim is not only to reference related work based on the discussed dimensions, but also to analyse them and generate a mind map that will help in identifying research gaps. Finally, we summarize existing digital forensics tools and the available simulation environments that can be used for evidence acquisition, examination and cloud forensics test purposes.",
"title": ""
},
{
"docid": "b68bf9b74f052f9072a81d0fb462cd49",
"text": "Skin diseases have a serious impact on people's life and health. Current research proposes an efficient approach to identify singular type of skin diseases. It is necessary to develop automatic methods in order to increase the accuracy of diagnosis for multitype skin diseases. In this paper, three type skin diseases such as herpes, dermatitis, and psoriasis skin disease could be identified by a new recognition method. Initially, skin images were preprocessed to remove noise and irrelevant background by filtering and transformation. Then the method of grey-level co-occurrence matrix (GLCM) was introduced to segment images of skin disease. The texture and color features of different skin disease images could be obtained accurately. Finally, by using the support vector machine (SVM) classification method, three types of skin diseases were identified. The experimental results demonstrate the effectiveness and feasibility of the proposed method.",
"title": ""
},
{
"docid": "99d17b558e4ecbcb4cb63d90a9ce2b2d",
"text": "PURPOSE\nManitoba Oculotrichoanal (MOTA) syndrome is an autosomal recessive disorder present in First Nations families that is characterized by ocular (cryptophthalmos), facial, and genital anomalies. At the commencement of this study, its genetic basis was undefined.\n\n\nMETHODS\nHomozygosity analysis was employed to map the causative locus using DNA samples from four probands of Cree ancestry. After single nucleotide polymorphism (SNP) genotyping, data were analyzed and exported to PLINK to identify regions identical by descent (IBD) and common to the probands. Candidate genes within and adjacent to the IBD interval were sequenced to identify pathogenic variants, with analyses of potential deletions or duplications undertaken using the B-allele frequency and log(2) ratio of SNP signal intensity.\n\n\nRESULTS\nAlthough no shared IBD region >1 Mb was evident on preliminary analysis, adjusting the criteria to permit the detection of smaller homozygous IBD regions revealed one 330 Kb segment on chromosome 9p22.3 present in all 4 probands. This interval comprising 152 SNPs, lies 16 Kb downstream of FRAS1-related extracellular matrix protein 1 (FREM1), and no copy number variations were detected either in the IBD region or FREM1. Subsequent sequencing of both genes in the IBD region, followed by FREM1, did not reveal any mutations.\n\n\nCONCLUSIONS\nThis study illustrates the utility of studying geographically isolated populations to identify genomic regions responsible for disease through analysis of small numbers of affected individuals. The location of the IBD region 16 kb from FREM1 suggests the phenotype in these patients is attributable to a variant outside of FREM1, potentially in a regulatory element, whose identification may prove tractable to next generation sequencing. In the context of recent identification of FREM1 coding mutations in a proportion of MOTA cases, characterization of such additional variants offers scope both to enhance understanding of FREM1's role in cranio-facial biology and may facilitate genetic counselling in populations with high prevalences of MOTA to reduce the incidence of this disorder.",
"title": ""
},
{
"docid": "8a45e83904913f8e4fbb7c59ff5d056c",
"text": "The present article examines the nature and function of human agency within the conceptual model of triadic reciprocal causation. In analyzing the operation of human agency in this interactional causal structure, social cognitive theory accords a central role to cognitive, vicarious, self-reflective, and self-regulatory processes. The issues addressed concern the psychological mechanisms through which personal agency is exercised, the hierarchical structure of self-regulatory systems, eschewal of the dichotomous construal of self as agent and self as object, and the properties of a nondualistic but nonreductional conception of human agency. The relation of agent causality to the fundamental issues of freedom and determinism is also analyzed.",
"title": ""
},
{
"docid": "e017a4bed5bec5bb212bb82e78d68236",
"text": "Patent claim sentences, despite their legal importance in patent documents, still pose difficulties for state-of-the-art statistical machine translation (SMT) systems owing to their extreme lengths and their special sentence structure. This paper describes a method for improving the translation quality of claim sentences, by taking into account the features specific to the claim sublanguage. Our method overcomes the issue of special sentence structure, by transferring the sublanguage-specific sentence structure (SSSS) from the source language to the target language, using a set of synchronous context-free grammar rules. Our method also overcomes the issue of extreme lengths by taking the sentence components to be the processing unit for SMT. The results of an experiment demonstrate that our SSSS transfer method, used in conjunction with pre-ordering, significantly improves the translation quality in terms of BLEU scores by five points, in both English-to-Japanese and Japanese-to-English directions. The experiment also shows that the SSSS transfer method significantly improves structural appropriateness in the translated sentences in both translation directions, which is indicated by substantial gains over 30 points in RIBES scores.",
"title": ""
},
{
"docid": "792c0ac288242cedad24627df3092a94",
"text": "The popular media have publicized the idea that social networking Web sites (e.g., Facebook) may enrich the interpersonal lives of people who struggle to make social connections. The opportunity that such sites provide for self-disclosure-a necessary component in the development of intimacy--could be especially beneficial for people with low self-esteem, who are normally hesitant to self-disclose and who have difficulty maintaining satisfying relationships. We suspected that posting on Facebook would reduce the perceived riskiness of self-disclosure, thus encouraging people with low self-esteem to express themselves more openly. In three studies, we examined whether such individuals see Facebook as a safe and appealing medium for self-disclosure, and whether their actual Facebook posts enabled them to reap social rewards. We found that although people with low self-esteem considered Facebook an appealing venue for self-disclosure, the low positivity and high negativity of their disclosures elicited undesirable responses from other people.",
"title": ""
},
{
"docid": "a9c120f7d3d71fb8f1d35ded1bce17ea",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use. Please contact the publisher regarding any further use of this work. Publisher contact information may be obtained at http://www.jstor.org/journals/aera.html. Each copy of any part of a JSTOR transmission must contain the same copyright notice that appears on the screen or printed page of such transmission.",
"title": ""
},
{
"docid": "baed3d522bfd5d56401bfac48e8c51a2",
"text": "Mobile malware attempts to evade detection during app analysis by mimicking security-sensitive behaviors of benign apps that provide similar functionality (e.g., sending SMS messages), and suppressing their payload to reduce the chance of being observed (e.g., executing only its payload at night). Since current approaches focus their analyses on the types of security-sensitive resources being accessed (e.g., network), these evasive techniques in malware make differentiating between malicious and benign app behaviors a difficult task during app analysis. We propose that the malicious and benign behaviors within apps can be differentiated based on the contexts that trigger security-sensitive behaviors, i.e., the events and conditions that cause the security-sensitive behaviors to occur. In this work, we introduce AppContext, an approach of static program analysis that extracts the contexts of security-sensitive behaviors to assist app analysis in differentiating between malicious and benign behaviors. We implement a prototype of AppContext and evaluate AppContext on 202 malicious apps from various malware datasets, and 633 benign apps from the Google Play Store. AppContext correctly identifies 192 malicious apps with 87.7% precision and 95% recall. Our evaluation results suggest that the maliciousness of a security-sensitive behavior is more closely related to the intention of the behavior (reflected via contexts) than the type of the security-sensitive resources that the behavior accesses.",
"title": ""
}
] |
scidocsrr
|
6a5783cc4e6a093f505b017eadcfd23b
|
Dissociable roles of prefrontal and anterior cingulate cortices in deception.
|
[
{
"docid": "908716e7683bdc78283600f63bd3a1b0",
"text": "The need for a simply applied quantitative assessment of handedness is discussed and some previous forms reviewed. An inventory of 20 items with a set of instructions and responseand computational-conventions is proposed and the results obtained from a young adult population numbering some 1100 individuals are reported. The separate items are examined from the point of view of sex, cultural and socio-economic factors which might appertain to them and also of their inter-relationship to each other and to the measure computed from them all. Criteria derived from these considerations are then applied to eliminate 10 of the original 20 items and the results recomputed to provide frequency-distribution and cumulative frequency functions and a revised item-analysis. The difference of incidence of handedness between the sexes is discussed.",
"title": ""
}
] |
[
{
"docid": "b59965c405937a096186e41b2a3877c3",
"text": "The culmination of many years of increasing research into the toxicity of tau aggregation in neurodegenerative disease has led to the consensus that soluble, oligomeric forms of tau are likely the most toxic entities in disease. While tauopathies overlap in the presence of tau pathology, each disease has a unique combination of symptoms and pathological features; however, most study into tau has grouped tau oligomers and studied them as a homogenous population. Established evidence from the prion field combined with the most recent tau and amyloidogenic protein research suggests that tau is a prion-like protein, capable of seeding the spread of pathology throughout the brain. Thus, it is likely that tau may also form prion-like strains or diverse conformational structures that may differ by disease and underlie some of the differences in symptoms and pathology in neurodegenerative tauopathies. The development of techniques and new technology for the detection of tau oligomeric strains may, therefore, lead to more efficacious diagnostic and treatment strategies for neurodegenerative disease. [Formula: see text].",
"title": ""
},
{
"docid": "b705b194b79133957662c018ea6b1c7a",
"text": "Skew detection has been an important part of the document recognition system. A lot of techniques already exists and has currently been developing for detection of skew of scanned document images. This paper describes the skew detection and correction of scanned document images written in Assamese language using the horizontal and vertical projection profile analysis and brings out the differences after implementation of both the techniques.",
"title": ""
},
{
"docid": "1813c1cefbb5607660626b6c05c41960",
"text": "First described in 1925, giant condyloma acuminatum also known as Buschke-Löwenstein tumor (BLT) is a benign, slow-growing, locally destructive cauliflower-like lesion usually in the genital region. The disease is usually locally aggressive and destructive with a potential for malignant transformation. The causative organism is human papilloma virus. The most common risk factor is immunosuppression with HIV; however, any other cause of immunodeficiency can be a predisposing factor. We present a case of 33-year-old female patient, a known HIV patient on antiretroviral therapy for ten months. She presented with seven-month history of an abnormal growth in the genitalia that was progressive accompanied with foul smelling yellowish discharge and friable. Surgical excision was performed successfully. Pap smear of the excised tissue was negative. Despite being a rare condition, giant condyloma acuminatum is relatively common in HIV-infected patients.",
"title": ""
},
{
"docid": "0763497a09f54e2d49a03e262dcc7b6e",
"text": "Content-based subscription systems are an emerging alternative to traditional publish-subscribe systems, because they permit more flexible subscriptions along multiple dimensions. In these systems, each subscription is a predicate which may test arbitrary attributes within an event. However, the matching problem for content-based systems — determining for each event the subset of all subscriptions whose predicates match the event — is still an open problem. We present an efficient, scalable solution to the matching problem. Our solution has an expected time complexity that is sub-linear in the number of subscriptions, and it has a space complexity that is linear. Specifically, we prove that for predicates reducible to conjunctions of elementary tests, the expected time to match a random event is no greater than O(N ) where N is the number of subscriptions, and is a closed-form expression that depends on the number and type of attributes (in some cases, 1=2). We present some optimizations to our algorithms that improve the search time. We also present the results of simulations that validate the theoretical bounds and that show acceptable performance levels for tens of thousands of subscriptions. Department of Computer Science, Cornell University, Ithaca, N.Y. 14853-7501, [email protected] IBM T.J. Watson Research Center, Yorktown Heights, N.Y. 10598, fstrom, sturman, [email protected] Department of Computer Science, University of Illinois at Urbana-Champaign, 1304 W. Springfield Ave, Urbana, I.L. 61801, [email protected]",
"title": ""
},
{
"docid": "037fb8eb72b55b8dae1aee107eb6b15c",
"text": "Traditional methods on video summarization are designed to generate summaries for single-view video records, and thus they cannot fully exploit the mutual information in multi-view video records. In this paper, we present a multiview metric learning framework for multi-view video summarization. It combines the advantages of maximum margin clustering with the disagreement minimization criterion. The learning framework thus has the ability to find a metric that best separates the input data, and meanwhile to force the learned metric to maintain underlying intrinsic structure of data points, for example geometric information. Facilitated by such a framework, a systematic solution to the multi-view video summarization problem is developed from the viewpoint of metric learning. The effectiveness of the proposed method is demonstrated by experiments.",
"title": ""
},
{
"docid": "76c31d0f392b81658270805daaff661d",
"text": "One of the major challenges of model-free visual tracking problem has been the difficulty originating from the unpredictable and drastic changes in the appearance of objects we target to track. Existing methods tackle this problem by updating the appearance model on-line in order to adapt to the changes in the appearance. Despite the success of these methods however, inaccurate and erroneous updates of the appearance model result in a tracker drift. In this paper, we introduce a novel visual tracking algorithm based on a template selection strategy constructed by deep reinforcement learning methods. The tracking algorithm utilizes this strategy to choose the best template for tracking a given frame. The template selection strategy is selflearned by utilizing a simple policy gradient method on numerous training episodes randomly generated from a tracking benchmark dataset. Our proposed reinforcement learning framework is generally applicable to other confidence map based tracking algorithms. The experiment shows that our tracking algorithm effectively decides the best template for visual tracking.",
"title": ""
},
{
"docid": "7c974eacb24368a0c5acfeda45d60f64",
"text": "We propose a novel approach for verifying model hypotheses in cluttered and heavily occluded 3D scenes. Instead of verifying one hypothesis at a time, as done by most state-of-the-art 3D object recognition methods, we determine object and pose instances according to a global optimization stage based on a cost function which encompasses geometrical cues. Peculiar to our approach is the inherent ability to detect significantly occluded objects without increasing the amount of false positives, so that the operating point of the object recognition algorithm can nicely move toward a higher recall without sacrificing precision. Our approach outperforms state-of-the-art on a challenging dataset including 35 household models obtained with the Kinect sensor, as well as on the standard 3D object recognition benchmark dataset.",
"title": ""
},
{
"docid": "ac8cef535e5038231cdad324325eaa37",
"text": "There are mainly two types of Emergent Self-Organizing Maps (ESOM) grid structures in use: hexgrid (honeycomb like) and quadgrid (trellis like) maps. In addition to that, the shape of the maps may be square or rectangular. This work investigates the effects of these different map layouts. Hexgrids were found to have no convincing advantage over quadgrids. Rectangular maps, however, are distinctively superior to square maps. Most surprisingly, rectangular maps outperform square maps for isotropic data, i.e. data sets with no particular primary direction.",
"title": ""
},
{
"docid": "921d9dc34f32522200ddcd606d22b6b4",
"text": "The covariancematrix adaptation evolution strategy (CMA-ES) is one of themost powerful evolutionary algorithms for real-valued single-objective optimization. In this paper, we develop a variant of the CMA-ES for multi-objective optimization (MOO). We first introduce a single-objective, elitist CMA-ES using plus-selection and step size control based on a success rule. This algorithm is compared to the standard CMA-ES. The elitist CMA-ES turns out to be slightly faster on unimodal functions, but is more prone to getting stuck in sub-optimal local minima. In the new multi-objective CMAES (MO-CMA-ES) a population of individuals that adapt their search strategy as in the elitist CMA-ES is maintained. These are subject to multi-objective selection. The selection is based on non-dominated sorting using either the crowding-distance or the contributing hypervolume as second sorting criterion. Both the elitist single-objective CMA-ES and the MO-CMA-ES inherit important invariance properties, in particular invariance against rotation of the search space, from the original CMA-ES. The benefits of the new MO-CMA-ES in comparison to the well-known NSGA-II and to NSDE, a multi-objective differential evolution algorithm, are experimentally shown.",
"title": ""
},
{
"docid": "246866da7509b2a8a2bda734a664de9c",
"text": "In this paper we present an approach of procedural game content generation that focuses on a gameplay loops formal language (GLFL). In fact, during an iterative game design process, game designers suggest modifications that often require high development costs. The proposed language and its operational semantic allow reducing the gap between game designers' requirement and game developers' needs, enhancing therefore video games productivity. Using gameplay loops concept for game content generation offers a low cost solution to adjust game challenges, objectives and rewards in video games. A pilot experiment have been conducted to study the impact of this approach on game development.",
"title": ""
},
{
"docid": "1d50d61d6b0abb0d5bec74d613ffe172",
"text": "We propose a novel hardware-accelerated voxelization algorithm for polygonal models. Compared with previous approaches, our algorithm has a major advantage that it guarantees the conservative correctness in voxelization: every voxel intersecting the input model is correctly recognized. This property is crucial for applications like collision detection, occlusion culling and visibility processing. We also present an efficient and robust implementation of the algorithm in the GPU. Experiments show that our algorithm has a lower memory consumption than previous approaches and is more efficient when the volume resolution is high. In addition, our algorithm requires no preprocessing and is suitable for voxelizing deformable models.",
"title": ""
},
{
"docid": "a7e8c3a64f6ba977e142de9b3dae7e57",
"text": "Craniofacial superimposition is a process that aims to identify a person by overlaying a photograph and a model of the skull. This process is usually carried out manually by forensic anthropologists; thus being very time consuming and presenting several difficulties in finding a good fit between the 3D model of the skull and the 2D photo of the face. In this paper we present a fast and automatic procedure to tackle the superimposition problem. The proposed method is based on real-coded genetic algorithms. Synthetic data are used to validate the method. Results on a real case from our Physical Anthropology lab of the University of Granada are also presented.",
"title": ""
},
{
"docid": "220d7b64db1731667e57ed318d2502ce",
"text": "Neutrophils infiltration/activation following wound induction marks the early inflammatory response in wound repair. However, the role of the infiltrated/activated neutrophils in tissue regeneration/proliferation during wound repair is not well understood. Here, we report that infiltrated/activated neutrophils at wound site release pyruvate kinase M2 (PKM2) by its secretive mechanisms during early stages of wound repair. The released extracellular PKM2 facilitates early wound healing by promoting angiogenesis at wound site. Our studies reveal a new and important molecular linker between the early inflammatory response and proliferation phase in tissue repair process.",
"title": ""
},
{
"docid": "1d7ee43299e3a7581d11604f1596aeab",
"text": "We analyze the impact of corruption on bilateral trade, highlighting its dual role in terms of extortion and evasion. Corruption taxes trade, when corrupt customs officials in the importing country extort bribes from exporters (extortion effect); however, with high tariffs, corruption may be trade enhancing when corrupt officials allow exporters to evade tariff barriers (evasion effect). We derive and estimate a corruption-augmented gravity model, where the effect of corruption on trade flows is ambiguous and contingent on tariffs. Empirically, corruption taxes trade in the majority of cases, but in high-tariff environments (covering 5% to 14% of the observations) their marginal effect is trade enhancing.",
"title": ""
},
{
"docid": "da607ab67cb9c1e1d08a70b15f9470d7",
"text": "Network embedding (NE) is playing a critical role in network analysis, due to its ability to represent vertices with efficient low-dimensional embedding vectors. However, existing NE models aim to learn a fixed context-free embedding for each vertex and neglect the diverse roles when interacting with other vertices. In this paper, we assume that one vertex usually shows different aspects when interacting with different neighbor vertices, and should own different embeddings respectively. Therefore, we present ContextAware Network Embedding (CANE), a novel NE model to address this issue. CANE learns context-aware embeddings for vertices with mutual attention mechanism and is expected to model the semantic relationships between vertices more precisely. In experiments, we compare our model with existing NE models on three real-world datasets. Experimental results show that CANE achieves significant improvement than state-of-the-art methods on link prediction and comparable performance on vertex classification. The source code and datasets can be obtained from https://github.com/ thunlp/CANE.",
"title": ""
},
{
"docid": "0962dfe13c1960b345bb0abb480f1520",
"text": "This electronic document presents the application of a novel method of bipedal walking pattern generation assured by “the liquid level model” and the preview control of zero-moment-point (ZMP). In this method, the trajectory of the center of mass (CoM) of the robot is generated assured by the preview controller to maintain the ZMP at the desired location knowing that the robot is modeled as a running liquid level model on a tank. The proposed approach combines the preview control theory with simple model “the liquid level model”, to assure a stable dynamic walking. Simulations results show that the proposed pattern generator guarantee not only to walk dynamically stable but also good performance.",
"title": ""
},
{
"docid": "9415adaa3ec2f7873a23cc2017a2f1ee",
"text": "In this paper we introduce a new unsupervised reinforcement learning method for discovering the set of intrinsic options available to an agent. This set is learned by maximizing the number of different states an agent can reliably reach, as measured by the mutual information between the set of options and option termination states. To this end, we instantiate two policy gradient based algorithms, one that creates an explicit embedding space of options and one that represents options implicitly. The algorithms also provide an explicit measure of empowerment in a given state that can be used by an empowerment maximizing agent. The algorithm scales well with function approximation and we demonstrate the applicability of the algorithm on a range of tasks.",
"title": ""
},
{
"docid": "05540e05370b632f8b8cd165ae7d1d29",
"text": "We describe FreeCam a system capable of generating live free-viewpoint video by simulating the output of a virtual camera moving through a dynamic scene. The FreeCam sensing hardware consists of a small number of static color video cameras and state-of-the-art Kinect depth sensors, and the FreeCam software uses a number of advanced GPU processing and rendering techniques to seamlessly merge the input streams, providing a pleasant user experience. A system such as FreeCam is critical for applications such as telepresence, 3D video-conferencing and interactive 3D TV. FreeCam may also be used to produce multi-view video, which is critical to drive newgeneration autostereoscopic lenticular 3D displays.",
"title": ""
},
{
"docid": "77ce917536f59d5489d0d6f7000c7023",
"text": "In this supplementary document, we present additional results to complement the paper. First, we provide the detailed configurations and parameters of the generator and discriminator in the proposed Generative Adversarial Network. Second, we present the qualitative comparisons with the state-ofthe-art CNN-based optical flow methods. The complete results and source code are publicly available on http://vllab.ucmerced.edu/wlai24/semiFlowGAN.",
"title": ""
}
] |
scidocsrr
|
e1bfbec4d77e0fd9cbeaeadaa36f3267
|
Compressing Convolutional Neural Networks in the Frequency Domain
|
[
{
"docid": "8207f59dab8704d14874417f6548c0a7",
"text": "The fully-connected layers of deep convolutional neural networks typically contain over 90% of the network parameters. Reducing the number of parameters while preserving predictive performance is critically important for training big models in distributed systems and for deployment in embedded devices. In this paper, we introduce a novel Adaptive Fastfood transform to reparameterize the matrix-vector multiplication of fully connected layers. Reparameterizing a fully connected layer with d inputs and n outputs with the Adaptive Fastfood transform reduces the storage and computational costs costs from O(nd) to O(n) and O(n log d) respectively. Using the Adaptive Fastfood transform in convolutional networks results in what we call a deep fried convnet. These convnets are end-to-end trainable, and enable us to attain substantial reductions in the number of parameters without affecting prediction accuracy on the MNIST and ImageNet datasets.",
"title": ""
}
] |
[
{
"docid": "eb2d29417686cc86a45c33694688801f",
"text": "We present a method to incorporate global orientation information from the sun into a visual odometry pipeline using only the existing image stream, where the sun is typically not visible. We leverage recent advances in Bayesian Convolutional Neural Networks to train and implement a sun detection model that infers a three-dimensional sun direction vector from a single RGB image. Crucially, our method also computes a principled uncertainty associated with each prediction, using a Monte Carlo dropout scheme. We incorporate this uncertainty into a sliding window stereo visual odometry pipeline where accurate uncertainty estimates are critical for optimal data fusion. Our Bayesian sun detection model achieves a median error of approximately 12 degrees on the KITTI odometry benchmark training set, and yields improvements of up to 42% in translational ARMSE and 32% in rotational ARMSE compared to standard VO. An open source implementation of our Bayesian CNN sun estimator (Sun-BCNN) using Caffe is available at https://github.com/utiasSTARS/sun-bcnn-vo.",
"title": ""
},
{
"docid": "04b62ed72ddf8f97b9cb8b4e59a279c1",
"text": "This paper aims to explore some of the manifold and changing links that official Pakistani state discourses forged between women and work from the 1940s to the late 2000s. The focus of the analysis is on discursive spaces that have been created for women engaged in non-domestic work. Starting from an interpretation of the existing academic literature, this paper argues that Pakistani women’s non-domestic work has been conceptualised in three major ways: as a contribution to national development, as a danger to the nation, and as non-existent. The paper concludes that although some conceptualisations of work have been more powerful than others and, at specific historical junctures, have become part of concrete state policies, alternative conceptualisations have always existed alongside them. Disclosing the state’s implication in the discursive construction of working women’s identities might contribute to the destabilisation of hegemonic concepts of gendered divisions of labour in Pakistan. DOI: https://doi.org/10.1016/j.wsif.2013.05.007 Posted at the Zurich Open Repository and Archive, University of Zurich ZORA URL: https://doi.org/10.5167/uzh-78605 Accepted Version Originally published at: Grünenfelder, Julia (2013). Discourses of gender identities and gender roles in Pakistan: Women and non-domestic work in political representations. Women’s Studies International Forum, 40:68-77. DOI: https://doi.org/10.1016/j.wsif.2013.05.007",
"title": ""
},
{
"docid": "91811c07f246e979401937aca9b66f7e",
"text": "Extraction of complex head and hand movements along with their constantly changing shapes for recognition of sign language is considered a difficult problem in computer vision. This paper proposes the recognition of Indian sign language gestures using a powerful artificial intelligence tool, convolutional neural networks (CNN). Selfie mode continuous sign language video is the capture method used in this work, where a hearing-impaired person can operate the SLR mobile application independently. Due to non-availability of datasets on mobile selfie sign language, we initiated to create the dataset with five different subjects performing 200 signs in 5 different viewing angles under various background environments. Each sign occupied for 60 frames or images in a video. CNN training is performed with 3 different sample sizes, each consisting of multiple sets of subjects and viewing angles. The remaining 2 samples are used for testing the trained CNN. Different CNN architectures were designed and tested with our selfie sign language data to obtain better accuracy in recognition. We achieved 92.88% recognition rate compared to other classifier models reported on the same dataset.",
"title": ""
},
{
"docid": "e1060ca6a60857a995fb22b6c773ebe1",
"text": "Fast and robust pupil detection is an essential prerequisite for video-based eye-tracking in real-world settings. Several algorithms for image-based pupil detection have been proposed in the past, their applicability, however, is mostly limited to laboratory conditions. In real-world scenarios, automated pupil detection has to face various challenges, such as illumination changes, reflections (on glasses), make-up, non-centered eye recording, and physiological eye characteristics. We propose ElSe, a novel algorithm based on ellipse evaluation of a filtered edge image. We aim at a robust, inexpensive approach that can be integrated in embedded architectures, e.g., driving. The proposed algorithm was evaluated against four state-of-the-art methods on over 93,000 hand-labeled images from which 55,000 are new eye images contributed by this work. On average, the proposed method achieved a 14.53% improvement on the detection rate relative to the best state-of-the-art performer. Algorithm and data sets are available for download: ftp://[email protected] (password:eyedata).",
"title": ""
},
{
"docid": "d1a9ac5a11d1f9fbd9b9ee24a199cb70",
"text": "In this paper, we proposed a new robust twin support vector machine (called R-TWSVM) via second order cone programming formulations for classification, which can deal with data with measurement noise efficiently. Preliminary experiments confirm the robustness of the proposed method and its superiority to the traditional robust SVM in both computation time and classification accuracy. Remarkably, since there are only inner products about inputs in our dual problems, this makes us apply kernel trick directly for nonlinear cases. Simultaneously we does not need to solve the extra inverse of matrices, which is totally different with existing TWSVMs. In addition, we also show that the TWSVMs are the special case of our robust model and simultaneously give a new dual form of TWSVM by degenerating R-TWSVM, which successfully overcomes the existing shortcomings of TWSVM. & 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "c7eca07c70cab1eca77de2e10fc53a72",
"text": "The revolutionary concept of Software Defined Networks (SDNs) potentially provides flexible and wellmanaged next-generation networks. All the hype surrounding the SDNs is predominantly because of its centralized management functionality, the separation of the control plane from the data forwarding plane, and enabling innovation through network programmability. Despite the promising architecture of SDNs, security was not considered as part of the initial design. Moreover, security concerns are potentially augmented considering the logical centralization of network intelligence. Furthermore, the security and dependability of the SDN has largely been a neglected topic and remains an open issue. The paper presents a broad overview of the security implications of each SDN layer/interface. This paper contributes further by devising a contemporary layered/interface taxonomy of the reported security vulnerabilities, attacks, and challenges of SDN. We also highlight and analyze the possible threats on each layer/interface of SDN to help design secure SDNs. Moreover, the ensuing paper contributes by presenting the state-ofthe-art SDNs security solutions. The categorization of solutions is followed by a critical analysis and discussion to devise a comprehensive thematic taxonomy. We advocate the production of secure and dependable SDNs by presenting potential requirements and key enablers. Finally, in an effort to anticipate secure and dependable SDNs, we present the ongoing open security issues, challenges and future research directions. & 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "28d89bf52b1955de36474fc247a381cf",
"text": "Cannabis has been employed medicinally throughout history, but its recent legal prohibition, biochemical complexity and variability, quality control issues, previous dearth of appropriately powered randomised controlled trials, and lack of pertinent education have conspired to leave clinicians in the dark as to how to advise patients pursuing such treatment. With the advent of pharmaceutical cannabis-based medicines (Sativex/nabiximols and Epidiolex), and liberalisation of access in certain nations, this ignorance of cannabis pharmacology and therapeutics has become untenable. In this article, the authors endeavour to present concise data on cannabis pharmacology related to tetrahydrocannabinol (THC), cannabidiol (CBD) et al., methods of administration (smoking, vaporisation, oral), and dosing recommendations. Adverse events of cannabis medicine pertain primarily to THC, whose total daily dose-equivalent should generally be limited to 30mg/day or less, preferably in conjunction with CBD, to avoid psychoactive sequelae and development of tolerance. CBD, in contrast to THC, is less potent, and may require much higher doses for its adjunctive benefits on pain, inflammation, and attenuation of THC-associated anxiety and tachycardia. Dose initiation should commence at modest levels, and titration of any cannabis preparation should be undertaken slowly over a period of as much as two weeks. Suggestions are offered on cannabis-drug interactions, patient monitoring, and standards of care, while special cases for cannabis therapeutics are addressed: epilepsy, cancer palliation and primary treatment, chronic pain, use in the elderly, Parkinson disease, paediatrics, with concomitant opioids, and in relation to driving and hazardous activities.",
"title": ""
},
{
"docid": "8e5573b7ab9789a73d431b666bfb3c8a",
"text": "Automated question answering has been a topic of research and development since the earliest AI applications. Computing power has increased since the first such systems were developed, and the general methodology has changed from the use of hand-encoded knowledge bases about simple domains to the use of text collections as the main knowledge source over more complex domains. Still, many research issues remain. The focus of this article is on the use of restricted domains for automated question answering. The article contains a historical perspective on question answering over restricted domains and an overview of the current methods and applications used in restricted domains. A main characteristic of question answering in restricted domains is the integration of domain-specific information that is either developed for question answering or that has been developed for other purposes. We explore the main methods developed to leverage this domain-specific information.",
"title": ""
},
{
"docid": "d7e2ab4a70dee48770a1ed9ccbeba08f",
"text": "Brezonik, P., K.D. Menken and M. Bauer. 2005. Landsat-based remote sensing of lake water quality characteristics, including chlorophyll and colored dissolved organic matter (CDOM). Lake and Reserv. Manage. 21(4):373-382. Ground-based measurements on 15 Minnesota lakes with wide ranges of optical properties and Landsat TM data from the same lakes were used to evaluate the effect of humic color on satellite-inferred water quality conditions. Color (C440), as measured by absorbance at 440 nm, causes only small biases in estimates of Secchi disk transparency (SDT) from Landsat TM data, except at very high values (> ~ 300 chloroplatinate units, CPU). Similarly, when chlorophyll a (chl a) levels are moderate or high (> 10 μg/L), low-to-moderate levels of humic color have only a small influence on the relationship between SDT and chl a concentration, but it has a pronounced influence at high levels of C440 (e.g., > ~200 CPU). However, deviations from the general chl a-SDT relationship occur at much lower C440 values (~ 60 CPU) when chl a levels are low. Good statistical relationships were found between optical properties of lake water generally associated with algal abundance (SDT, chl a, turbidity) and measured brightness of various Landsat TM bands. The best relationships for chl a (based on R2 and absence of statistical outliers or lakes with large leverage) were combinations of bands 1, 2, or 4 with the band ratio 1:3 (R2 = 0.88). Although TM bands 1-4 individually or as simple ratios were poor predictors of C440, multiple regression analyses between ln(C440) and combinations of bands 1-4 and band ratios yielded several relationships with R2 ≥ 0.70, suggesting that C440 can be estimated with fair reliability from Landsat TM data.",
"title": ""
},
{
"docid": "f38709ee76dd9988b36812a7801f7336",
"text": "BACKGROUND\nMost individuals with mood disorders experience psychiatric and/or medical comorbidity. Available treatment guidelines for major depressive disorder (MDD) and bipolar disorder (BD) have focused on treating mood disorders in the absence of comorbidity. Treating comorbid conditions in patients with mood disorders requires sufficient decision support to inform appropriate treatment.\n\n\nMETHODS\nThe Canadian Network for Mood and Anxiety Treatments (CANMAT) task force sought to prepare evidence- and consensus-based recommendations on treating comorbid conditions in patients with MDD and BD by conducting a systematic and qualitative review of extant data. The relative paucity of studies in this area often required a consensus-based approach to selecting and sequencing treatments.\n\n\nRESULTS\nSeveral principles emerge when managing comorbidity. They include, but are not limited to: establishing the diagnosis, risk assessment, establishing the appropriate setting for treatment, chronic disease management, concurrent or sequential treatment, and measurement-based care.\n\n\nCONCLUSIONS\nEfficacy, effectiveness, and comparative effectiveness research should emphasize treatment and management of conditions comorbid with mood disorders. Clinicians are encouraged to screen and systematically monitor for comorbid conditions in all individuals with mood disorders. The common comorbidity in mood disorders raises fundamental questions about overlapping and discrete pathoetiology.",
"title": ""
},
{
"docid": "7334904bb8b95fbf9668c388d30d4d72",
"text": "Write-optimized data structures like Log-Structured Merge-tree (LSM-tree) and its variants are widely used in key-value storage systems like Big Table and Cassandra. Due to deferral and batching, the LSM-tree based storage systems need background compactions to merge key-value entries and keep them sorted for future queries and scans. Background compactions play a key role on the performance of the LSM-tree based storage systems. Existing studies about the background compaction focus on decreasing the compaction frequency, reducing I/Os or confining compactions on hot data key-ranges. They do not pay much attention to the computation time in background compactions. However, the computation time is no longer negligible, and even the computation takes more than 60% of the total compaction time in storage systems using flash based SSDs. Therefore, an alternative method to speedup the compaction is to make good use of the parallelism of underlying hardware including CPUs and I/O devices. In this paper, we analyze the compaction procedure, recognize the performance bottleneck, and propose the Pipelined Compaction Procedure (PCP) to better utilize the parallelism of CPUs and I/O devices. Theoretical analysis proves that PCP can improve the compaction bandwidth. Furthermore, we implement PCP in real system and conduct extensive experiments. The experimental results show that the pipelined compaction procedure can increase the compaction bandwidth and storage system throughput by 77% and 62% respectively.",
"title": ""
},
{
"docid": "d9356e0a1e207c53301d776b0895bcd3",
"text": "Neurodegenerative diseases are a common cause of morbidity and cognitive impairment in older adults. Most clinicians who care for the elderly are not trained to diagnose these conditions, perhaps other than typical Alzheimer's disease (AD). Each of these disorders has varied epidemiology, clinical symptomatology, laboratory and neuroimaging features, neuropathology, and management. Thus, it is important that clinicians be able to differentiate and diagnose these conditions accurately. This review summarizes and highlights clinical aspects of several of the most commonly encountered neurodegenerative diseases, including AD, frontotemporal dementia (FTD) and its variants, progressive supranuclear palsy (PSP), corticobasal degeneration (CBD), Parkinson's disease (PD), dementia with Lewy bodies (DLB), multiple system atrophy (MSA), and Huntington's disease (HD). For each condition, we provide a brief overview of the epidemiology, defining clinical symptoms and diagnostic criteria, relevant imaging and laboratory features, genetics, pathology, treatments, and differential diagnosis.",
"title": ""
},
{
"docid": "4a31889cf90d39b7c49d02174a425b5b",
"text": "Inter-vehicle communication (IVC) protocols have the potential to increase the safety, efficiency, and convenience of transportation systems involving planes, trains, automobiles, and robots. The applications targeted include peer-to-peer networks for web surfing, coordinated braking, runway incursion prevention, adaptive traffic control, vehicle formations, and many others. The diversity of the applications and their potential communication protocols has challenged a systematic literature survey. We apply a classification technique to IVC applications to provide a taxonomy for detailed study of their communication requirements. The applications are divided into type classes which share common communication organization and performance requirements. IVC protocols are surveyed separately and their fundamental characteristics are revealed. The protocol characteristics are then used to determine the relevance of specific protocols to specific types of IVC applications.",
"title": ""
},
{
"docid": "8774c5a504e2d04e8a49e3625327828a",
"text": "Forest fire prediction constitutes a significant component of forest fire management. It plays a major role in resource allocation, mitigation and recovery efforts. This paper presents a description and analysis of forest fire prediction methods based on artificial intelligence. A novel forest fire risk prediction algorithm, based on support vector machines, is presented. The algorithm depends on previous weather conditions in order to predict the fire hazard level of a day. The implementation of the algorithm using data from Lebanon demonstrated its ability to accurately predict the hazard of fire occurrence.",
"title": ""
},
{
"docid": "ec7c9fa71dcf32a3258ee8712ccb95c1",
"text": "Fuzzy graph is now a very important research area due to its wide application. Fuzzy multigraph and fuzzy planar graphs are two subclasses of fuzzy graph theory. In this paper, we define both of these graphs and studied a lot of properties. A very close association of fuzzy planar graph is fuzzy dual graph. This is also defined and studied several properties. The relation between fuzzy planar graph and fuzzy dual graph is also established.",
"title": ""
},
{
"docid": "e0092f7964604f7adbe9f010bbac4871",
"text": "In the last decade, Web 2.0 services such as blogs, tweets, forums, chats, email etc. have been widely used as communication media, with very good results. Sharing knowledge is an important part of learning and enhancing skills. Furthermore, emotions may affect decisionmaking and individual behavior. Bitcoin, a decentralized electronic currency system, represents a radical change in financial systems, attracting a large number of users and a lot of media attention. In this work, we investigated if the spread of the Bitcoin’s price is related to the volumes of tweets or Web Search media results. We compared trends of price with Google Trends data, volume of tweets and particularly with those that express a positive sentiment. We found significant cross correlation values, especially between Bitcoin price and Google Trends data, arguing our initial idea based on studies about trends in stock and goods market.",
"title": ""
},
{
"docid": "dd4860e8dfe73c56c7bd30863ca626b4",
"text": "Terrain rendering is an important component of many GIS applications and simulators. Most methods rely on heightmap-based terrain which is simple to acquire and handle, but has limited capabilities for modeling features like caves, steep cliffs, or overhangs. In contrast, volumetric terrain models, e.g. based on isosurfaces can represent arbitrary topology. In this paper, we present a fast, practical and GPU-friendly level of detail algorithm for large scale volumetric terrain that is specifically designed for real-time rendering applications. Our algorithm is based on a longest edge bisection (LEB) scheme. The resulting tetrahedral cells are subdivided into four hexahedra, which form the domain for a subsequent isosurface extraction step. The algorithm can be used with arbitrary volumetric models such as signed distance fields, which can be generated from triangle meshes or discrete volume data sets. In contrast to previous methods our algorithm does not require any stitching between detail levels. It generates crack free surfaces with a good triangle quality. Furthermore, we efficiently extract the geometry at runtime and require no preprocessing, which allows us to render infinite procedural content with low memory",
"title": ""
},
{
"docid": "77f795e245cd0c358ad42b11199167e1",
"text": "Object recognition and pedestrian detection are of crucial importance to autonomous driving applications. Deep learning based methods have exhibited very large improvements in accuracy and fast decision in real time applications thanks to CUDA support. In this paper, we propose two Convolutions Neural Networks (CNNs) architectures with different layers. We extract the features obtained from the proposed CNN, CNN in AlexNet architecture, and Bag of visual Words (BOW) approach by using SURF, HOG and k-means. We use linear SVM classifiers for training the features. In the experiments, we carried out object recognition and pedestrian detection tasks using the benchmark the Caltech 101 and the Caltech Pedestrian Detection datasets.",
"title": ""
},
{
"docid": "e757ff7aa63b4fea854641ff97de6fb9",
"text": "It is well known that natural images admit sparse representations by redundant dictionaries of basis functions such as Gabor-like wavelets. However, it is still an open question as to what the next layer of representational units above the layer of wavelets should be. We address this fundamental question by proposing a sparse FRAME (Filters, Random field, And Maximum Entropy) model for representing natural image patterns. Our sparse FRAME model is an inhomogeneous generalization of the original FRAME model. It is a non-stationary Markov random field model that reproduces the observed statistical properties of filter responses at a subset of selected locations, scales and orientations. Each sparse FRAME model is intended to represent an object pattern and can be considered a deformable template. The sparse FRAME model can be written as a shared sparse coding model, which motivates us to propose a two-stage algorithm for learning the model. The first stage selects the subset of wavelets from the dictionary by a shared matching pursuit algorithm. The second stage then estimates the parameters of the model given the selected wavelets. Our experiments show that the sparse FRAME models are capable of representing a wide variety of object patterns in natural images and that the learned models are useful for object classification.",
"title": ""
},
{
"docid": "e520b7a8c9f323c92a7e0fa52f38f16d",
"text": "BACKGROUND\nRecent research has revealed concerning rates of anxiety and depression among university students. Nevertheless, only a small percentage of these students receive treatment from university health services. Universities are thus challenged with instituting preventative programs that address student stress and reduce resultant anxiety and depression.\n\n\nMETHOD\nA systematic review of the literature and meta-analysis was conducted to examine the effectiveness of interventions aimed at reducing stress in university students. Studies were eligible for inclusion if the assignment of study participants to experimental or control groups was by random allocation or parallel cohort design.\n\n\nRESULTS\nRetrieved studies represented a variety of intervention approaches with students in a broad range of programs and disciplines. Twenty-four studies, involving 1431 students were included in the meta-analysis. Cognitive, behavioral and mindfulness interventions were associated with decreased symptoms of anxiety. Secondary outcomes included lower levels of depression and cortisol.\n\n\nLIMITATIONS\nIncluded studies were limited to those published in peer reviewed journals. These studies over-represent interventions with female students in Western countries. Studies on some types of interventions such as psycho-educational and arts based interventions did not have sufficient data for inclusion in the meta-analysis.\n\n\nCONCLUSION\nThis review provides evidence that cognitive, behavioral, and mindfulness interventions are effective in reducing stress in university students. Universities are encouraged to make such programs widely available to students. In addition however, future work should focus on developing stress reduction programs that attract male students and address their needs.",
"title": ""
}
] |
scidocsrr
|
e2fd9849b1664bbdf7f8f9130f94ab8a
|
User Movement Prediction: The Contribution of Machine Learning Techniques
|
[
{
"docid": "e494f926c9b2866d2c74032d200e4d0a",
"text": "This chapter describes a new algorithm for training Support Vector Machines: Sequential Minimal Optimization, or SMO. Training a Support Vector Machine (SVM) requires the solution of a very large quadratic programming (QP) optimization problem. SMO breaks this large QP problem into a series of smallest possible QP problems. These small QP problems are solved analytically, which avoids using a time-consuming numerical QP optimization as an inner loop. The amount of memory required for SMO is linear in the training set size, which allows SMO to handle very large training sets. Because large matrix computation is avoided, SMO scales somewhere between linear and quadratic in the training set size for various test problems, while a standard projected conjugate gradient (PCG) chunking algorithm scales somewhere between linear and cubic in the training set size. SMO's computation time is dominated by SVM evaluation, hence SMO is fastest for linear SVMs and sparse data sets. For the MNIST database, SMO is as fast as PCG chunking; while for the UCI Adult database and linear SVMs, SMO can be more than 1000 times faster than the PCG chunking algorithm.",
"title": ""
},
{
"docid": "ec06587bff3d5c768ab9083bd480a875",
"text": "Wireless sensor networks are an emerging technology for low-cost, unattended monitoring of a wide range of environments, and their importance has been enforced by the recent delivery of the IEEE 802.15.4 standard for the physical and MAC layers and the forthcoming Zigbee standard for the network and application layers. The fast progress of research on energy efficiency, networking, data management and security in wireless sensor networks, and the need to compare with the solutions adopted in the standards motivates the need for a survey on this field.",
"title": ""
}
] |
[
{
"docid": "5483778c0565b3fef8fbc2c4f9769d5d",
"text": "Previous studies of preference for and harmony of color combinations have produced confusing results. For example, some claim that harmony increases with hue similarity, whereas others claim that it decreases. We argue that such confusions are resolved by distinguishing among three types of judgments about color pairs: (1) preference for the pair as a whole, (2) harmony of the pair as a whole, and (3) preference for its figural color when viewed against its colored background. Empirical support for this distinction shows that pair preference and harmony both increase as hue similarity increases, but preference relies more strongly on component color preference and lightness contrast. Although pairs with highly contrastive hues are generally judged to be neither preferable nor harmonious, figural color preference ratings increase as hue contrast with the background increases. The present results thus refine and clarify some of the best-known and most contentious claims of color theorists.",
"title": ""
},
{
"docid": "9f3388eb88e230a9283feb83e4c623e1",
"text": "Entity Linking (EL) is an essential task for semantic text understanding and information extraction. Popular methods separately address the Mention Detection (MD) and Entity Disambiguation (ED) stages of EL, without leveraging their mutual dependency. We here propose the first neural end-to-end EL system that jointly discovers and links entities in a text document. The main idea is to consider all possible spans as potential mentions and learn contextual similarity scores over their entity candidates that are useful for both MD and ED decisions. Key components are context-aware mention embeddings, entity embeddings and a probabilistic mention entity map, without demanding other engineered features. Empirically, we show that our end-to-end method significantly outperforms popular systems on the Gerbil platform when enough training data is available. Conversely, if testing datasets follow different annotation conventions compared to the training set (e.g. queries/ tweets vs news documents), our ED model coupled with a traditional NER system offers the best or second best EL accuracy.",
"title": ""
},
{
"docid": "f92f0a3d46eaf14e478a41f87b8ad369",
"text": "The agricultural productivity of India is gradually declining due to destruction of crops by various natural calamities and the crop rotation process being affected by irregular climate patterns. Also, the interest and efforts put by farmers lessen as they grow old which forces them to sell their agricultural lands, which automatically affects the production of agricultural crops and dairy products. This paper mainly focuses on the ways by which we can protect the crops during an unavoidable natural disaster and implement technology induced smart agro-environment, which can help the farmer manage large fields with less effort. Three common issues faced during agricultural practice are shearing furrows in case of excess rain or flood, manual watering of plants and security against animal grazing. This paper provides a solution for these problems by helping farmer monitor and control various activities through his mobile via GSM and DTMF technology in which data is transmitted from various sensors placed in the agricultural field to the controller and the status of the agricultural parameters are notified to the farmer using which he can take decisions accordingly. The main advantage of this system is that it is semi-automated i.e. the decision is made by the farmer instead of fully automated decision that results in precision agriculture. It also overcomes the existing traditional practices that require high money investment, energy, labour and time.",
"title": ""
},
{
"docid": "89bec90bd6715a3907fba9f0f7655158",
"text": "Long text brings a big challenge to neural network based text matching approaches due to their complicated structures. To tackle the challenge, we propose a knowledge enhanced hybrid neural network (KEHNN) that leverages prior knowledge to identify useful information and filter out noise in long text and performs matching from multiple perspectives. The model fuses prior knowledge into word representations by knowledge gates and establishes three matching channels with words, sequential structures of text given by Gated Recurrent Units (GRUs), and knowledge enhanced representations. The three channels are processed by a convolutional neural network to generate high level features for matching, and the features are synthesized as a matching score by a multilayer perceptron. In this paper, we focus on exploring the use of taxonomy knowledge for text matching. Evaluation results from extensive experiments on public data sets of question answering and conversation show that KEHNN can significantly outperform state-of-the-art matching models and particularly improve matching accuracy on pairs with long text.",
"title": ""
},
{
"docid": "a960d6049c099ec652da81216b3bc173",
"text": "Recent research has illustrated privacy breaches that can be effected on an anonymized dataset by an attacker who has access to auxiliary information about the users. Most of these attack strategies rely on the uniqueness of specific aspects of the users' data - e.g., observing a mobile user at just a few points on the time-location space are sufficient to uniquely identify him/her from an anonymized set of users. In this work, we consider de-anonymization attacks on anonymized summary statistics in the form of histograms. Such summary statistics are useful for many applications that do not need knowledge about exact user behavior. We consider an attacker who has access to an anonymized set of histograms of K users' data and an independent set of data belonging to the same users. Modeling the users' data as i.i.d., we study the composite hypothesis testing problem of identifying the correct matching between the anonymized histograms from the first set and the user data from the second. We propose a Generalized Likelihood Ratio Test as a solution to this problem and show that the solution can be identified using a minimum weight matching algorithm on an K × K complete bipartite weighted graph. We show that a variant of this solution is asymptotically optimal as the data lengths are increased.We apply the algorithm on mobility traces of over 1000 users on EPFL campus collected during two weeks and show that up to 70% of the users can be correctly matched. These results show that anonymized summary statistics of mobility traces themselves contain a significant amount of information that can be used to uniquely identify users by an attacker who has access to auxiliary information about the statistics.",
"title": ""
},
{
"docid": "1d6733d6b017248ef935a833ecfe6f0d",
"text": "Users increasingly rely on crowdsourced information, such as reviews on Yelp and Amazon, and liked posts and ads on Facebook. This has led to a market for blackhat promotion techniques via fake (e.g., Sybil) and compromised accounts, and collusion networks. Existing approaches to detect such behavior relies mostly on supervised (or semi-supervised) learning over known (or hypothesized) attacks. They are unable to detect attacks missed by the operator while labeling, or when the attacker changes strategy. We propose using unsupervised anomaly detection techniques over user behavior to distinguish potentially bad behavior from normal behavior. We present a technique based on Principal Component Analysis (PCA) that models the behavior of normal users accurately and identifies significant deviations from it as anomalous. We experimentally validate that normal user behavior (e.g., categories of Facebook pages liked by a user, rate of like activity, etc.) is contained within a low-dimensional subspace amenable to the PCA technique. We demonstrate the practicality and effectiveness of our approach using extensive ground-truth data from Facebook: we successfully detect diverse attacker strategies—fake, compromised, and colluding Facebook identities—with no a priori labeling while maintaining low false-positive rates. Finally, we apply our approach to detect click-spam in Facebook ads and find that a surprisingly large fraction of clicks are from anomalous users.",
"title": ""
},
{
"docid": "eed515cb3a2a990e67bf76c176c16d29",
"text": "This paper describes the question generation system developed at UPenn for QGSTEC, 2010. The system uses predicate argument structures of sentences along with semantic roles for the question generation task from paragraphs. The semantic role labels are used to identify relevant parts of text before forming questions over them. The generated questions are then ranked to pick final six best questions.",
"title": ""
},
{
"docid": "34523c9ccd5d8c0bec2a84173205be99",
"text": "Deep learning has achieved astonishing results onmany taskswith large amounts of data and generalization within the proximity of training data. For many important real-world applications, these requirements are unfeasible and additional prior knowledge on the task domain is required to overcome the resulting problems. In particular, learning physics models for model-based control requires robust extrapolation from fewer samples – often collected online in real-time – and model errors may lead to drastic damages of the system. Directly incorporating physical insight has enabled us to obtain a novel deep model learning approach that extrapolates well while requiring fewer samples. As a first example, we propose Deep Lagrangian Networks (DeLaN) as a deep network structure upon which Lagrangian Mechanics have been imposed. DeLaN can learn the equations of motion of a mechanical system (i.e., system dynamics) with a deep network efficiently while ensuring physical plausibility. The resulting DeLaN network performs very well at robot tracking control. The proposed method did not only outperform previous model learning approaches at learning speed but exhibits substantially improved and more robust extrapolation to novel trajectories and learns online in real-time.",
"title": ""
},
{
"docid": "9321905fe504f3a1f5c5e63e92f9d5ec",
"text": "The principles of implementation of the control system with sinusoidal PWM inverter voltage frequency scalar and vector control induction motor are reviewed. Comparisons of simple control system with sinusoidal PWM control system and sinusoidal PWM control with an additional third-harmonic signal and gain modulated control signal are carried out. There are shown the maximum amplitude and actual values phase and line inverter output voltage at the maximum amplitude of the control signals. Recommendations on the choice of supply voltage induction motor electric drive with frequency scalar control are presented.",
"title": ""
},
{
"docid": "41b92e3e2941175cf6d80bf809d7bd32",
"text": "Automated citation analysis (ACA) can be important for many applications including author ranking and literature based information retrieval, extraction, summarization and question answering. In this study, we developed a new compositional attention network (CAN) model to integrate local and global attention representations with a hierarchical attention mechanism. Training on a new benchmark corpus we built, our evaluation shows that the CAN model performs consistently well on both citation classification and sentiment analysis tasks.",
"title": ""
},
{
"docid": "876bbee05b7838f4de218b424d895887",
"text": "Although it is commonplace to assume that the type or level of processing during the input of a verbal item determines the representation of that item in memory, which in turn influences later attempts to store, recognize, or recall that item or similar items, it is much less common to assume that the way in which an item is retrieved from memory is also a potent determiner of that item's subsequent representation in memory. Retrieval from memory is often assumed, implicitly or explicitly, as a process analogous to the way in which the contents of a memory location in a computer are read out, that is, as a process that does not, by itself, modify the state of the retrieved item in memory. In my opinion, however, there is ample evidence for a kind of Heisenberg principle with respect to retrieval processes: an item can seldom, if ever, be retrieved from memory without modifying the representation of that item in memory in significant ways. It is both appropriate and productive, I think, to analyze retrieval processes within the same kind of levels-of-processing framework formulated by Craik and Lockhart ( 1972) with respect to input processes; this chapter is an attempt to do so. In the first of the two main sections below, I explore the extent to which negative-recency phenomena in the long-term recall of a list of items is attributable to differences in levels of retrieval during initial recall. In the second section I present some recent results from ex-",
"title": ""
},
{
"docid": "0a50e10df0a8e4a779de9ed9bf81e442",
"text": "This paper presents a novel self-correction method of commutation point for high-speed sensorless brushless dc motors with low inductance and nonideal back electromotive force (EMF) in order to achieve low steady-state loss of magnetically suspended control moment gyro. The commutation point before correction is obtained by detecting the phase of EMF zero-crossing point and then delaying 30 electrical degrees. Since the speed variation is small between adjacent commutation points, the difference of the nonenergized phase's terminal voltage between the beginning and the end of commutation is mainly related to the commutation error. A novel control method based on model-free adaptive control is proposed, and the delay degree is corrected by the controller in real time. Both the simulation and experimental results show that the proposed correction method can achieve ideal commutation effect within the entire operating speed range.",
"title": ""
},
{
"docid": "f59096137378d49c81bcb1de0be832b2",
"text": "Here the transformation related to the fast Fourier strategy mainly used in the field oriented well effective operations of the strategy elated to the scenario of the design oriented fashion in its implementation related to the well efficient strategy of the processing of the signal in the digital domain plays a crucial role in its analysis point of view in well oriented fashion respectively. It can also be applicable for the processing of the images and there is a crucial in its analysis in terms of the pixel wise process takes place in the system in well effective manner respectively. There is a vast number of the applications oriented strategy takes place in the system in w ell effective manner in the system based implementation followed by the well efficient analysis point of view in well stipulated fashion of the transformation related to the fast Fourier strategy plays a crucial role and some of them includes analysis of the signal, Filtering of the sound and also the compression of the data equations of the partial differential strategy plays a major role and the responsibility in its implementation scenario in a well oriented fashion respectively. There is a huge amount of the efficient analysis of the system related to the strategy of the transformation of the fast Fourier environment plays a crucial role and the responsibility for the effective implementation of the DFT in well respective fashion. Here in the present system oriented strategy DFT implementation takes place in a well explicit manner followed by the well effective analysis of the system where domain related to the time based strategy of the decimation plays a crucial role in its implementation aspect in well effective fashion respectively. Experiments have been conducted on the present method where there is a lot of analysis takes place on the large number of the huge datasets in a well oriented fashion with respect to the different environmental strategy and there is an implementation of the system in a well effective manner in terms of the improvement in the performance followed by the outcome of the entire system in well oriented fashion respectively.",
"title": ""
},
{
"docid": "acf86ba9f98825a032cebb0a98db4360",
"text": "Malware is the root cause of many security threats on the Internet. To cope with the thousands of new malware samples that are discovered every day, security companies and analysts rely on automated tools to extract the runtime behavior of malicious programs. Of course, malware authors are aware of these tools and increasingly try to thwart their analysis techniques. To this end, malware code is often equipped with checks that look for evidence of emulated or virtualized analysis environments. When such evidence is found, the malware program behaves differently or crashes, thus showing a different “personality” than on a real system. Recent work has introduced transparent analysis platforms (such as Ether or Cobra) that make it significantly more difficult for malware programs to detect their presence. Others have proposed techniques to identify and bypass checks introduced by malware authors. Both approaches are often successful in exposing the runtime behavior of malware even when the malicious code attempts to thwart analysis efforts. However, these techniques induce significant performance overhead, especially for fine-grained analysis. Unfortunately, this makes them unsuitable for the analysis of current highvolume malware feeds. In this paper, we present a technique that efficiently detects when a malware program behaves differently in an emulated analysis environment and on an uninstrumented reference host. The basic idea is simple: we just compare the runtime behavior of a sample in our analysis system and on a reference machine. However, obtaining a robust and efficient comparison is very difficult. In particular, our approach consists of recording the interactions of the malware with the operating system in one run and using this information to deterministically replay the program in our analysis environment. Our experiments demonstrate that, by using our approach, one can efficiently detect malware samples that use a variety of techniques to identify emulated analysis environments.",
"title": ""
},
{
"docid": "4a9d14c2fd87d8ab64560adf13c6164c",
"text": "Cepstral coefficients derived either through linear prediction (LP) analysis or from filter bank are perhaps the most commonly used features in currently available speech rec ognition systems. In this paper, we propose spectral subband centroids as new features and use them as supplement to cepstral features for speech rec ognition. We show that these features have properties similar to formant frequencies and they are quite robust to noise. Recognition results are reported in the paper justifying the usefulness of these features as supplementary features.",
"title": ""
},
{
"docid": "48dfee242d5daf501c72e14e6b05c3ba",
"text": "One possible alternative to standard in vivo exposure may be virtual reality exposure. Virtual reality integrates real-time computer graphics, body tracking devices, visual displays, and other sensory input devices to immerse a participant in a computer-generated virtual environment. Virtual reality exposure (VRE) is potentially an efficient and cost-effective treatment of anxiety disorders. VRE therapy has been successful in reducing the fear of heights in the first known controlled study of virtual reality in the treatment of a psychological disorder. Outcome was assessed on measures of anxiety, avoidance, attitudes, and distress. Significant group differences were found on all measures such that the VRE group was significantly improved at posttreatment but the control group was unchanged. The efficacy of virtual reality exposure therapy was also supported for the fear of flying in a case study. The potential for virtual reality exposure treatment for these and other disorders is explored.",
"title": ""
},
{
"docid": "137cb8666a1b5465abf8beaf394e3a30",
"text": "Person re-identification (re-ID) has been gaining in popularity in the research community owing to its numerous applications and growing importance in the surveillance industry. Recent methods often employ partial features for person re-ID and offer fine-grained information beneficial for person retrieval. In this paper, we focus on learning improved partial discriminative features using a deep convolutional neural architecture, which includes a pyramid spatial pooling module for efficient person feature representation. Furthermore, we propose a multi-task convolutional network that learns both personal attributes and identities in an end-to-end framework. Our approach incorporates partial features and global features for identity and attribute prediction, respectively. Experiments on several large-scale person re-ID benchmark data sets demonstrate the accuracy of our approach. For example, we report rank-1 accuracies of 85.37% (+3.47 %) and 92.81% (+0.51 %) on the DukeMTMC re-ID and Market-1501 data sets, respectively. The proposed method shows encouraging improvements compared with the state-of-the-art methods.",
"title": ""
},
{
"docid": "b408788cd974438f32c1858cda9ff910",
"text": "Speaking as someone who has personally felt the influence of the “Chomskian Turn”, I believe that one of Chomsky’s most significant contributions to Psychology, or as it is now called, Cognitive Science was to bring back scientific realism. This may strike you as a very odd claim, for one does not usually think of science as needing to be talked into scientific realism. Science is, after all, the study of reality by the most precise instruments of measurement and analysis that humans have developed.",
"title": ""
},
{
"docid": "a4af2c561f340c52629478cac5e691d3",
"text": "The Internet has always been a means of communication between people, but with the technological development and changing requirements and lifestyle, this network has become a tool of communication between things of all types and sizes, and is known as Internet of things (IoT) for this reason.\n One of the most promising applications of IoT technology is the automated irrigation systems. The aim of this paper is to propose a methodology of the implementation of wireless sensor networks as an IoT device to develop a smart irrigation management system powered by solar energy.",
"title": ""
},
{
"docid": "678a4872dfe753bac26bff2b29ac26b0",
"text": "Cyber-physical systems (CPS), such as automotive systems, are starting to include sophisticated machine learning (ML) components. Their correctness, therefore, depends on properties of the inner ML modules. While learning algorithms aim to generalize from examples, they are only as good as the examples provided, and recent efforts have shown that they can produce inconsistent output under small adversarial perturbations. This raises the question: can the output from learning components can lead to a failure of the entire CPS? In this work, we address this question by formulating it as a problem of falsifying signal temporal logic (STL) specifications for CPS with ML components. We propose a compositional falsification framework where a temporal logic falsifier and a machine learning analyzer cooperate with the aim of finding falsifying executions of the considered model. The efficacy of the proposed technique is shown on an automatic emergency braking system model with a perception component based on deep neural networks.",
"title": ""
}
] |
scidocsrr
|
0fc7cf48da43ab10d584b87d8c593354
|
Access control in IoT: Survey & state of the art
|
[
{
"docid": "7e152f2fcd452e67f52b4a5165950f2d",
"text": "This paper describes a framework that allows fine-grained and flexible access control to connected devices with very limited processing power and memory. We propose a set of security and performance requirements for this setting and derive an authorization framework distributing processing costs between constrained devices and less constrained back-end servers while keeping message exchanges with the constrained devices at a minimum. As a proof of concept we present performance results from a prototype implementing the device part of the framework.",
"title": ""
}
] |
[
{
"docid": "d8255047dc2e28707d711f6d6ff19e30",
"text": "This paper discusses the design of a 10 kV and 200 A hybrid dc circuit breaker suitable for the protection of the dc power systems in electric ships. The proposed hybrid dc circuit breaker employs a Thompson coil based ultrafast mechanical switch (MS) with the assistance of two additional solid-state power devices. A low-voltage (80 V) metal–oxide–semiconductor field-effect transistors (MOSFETs)-based commutating switch (CS) is series connected with the MS to realize the zero current turn-OFF of the MS. In this way, the arcing issue with the MS is avoided. A 15 kV SiC emitter turn-OFF thyristor-based main breaker (MB) is parallel connected with the MS and CS branch to interrupt the fault current. A stack of MOVs parallel with the MB are used to clamp the voltage across the hybrid dc circuit breaker during interruption. This paper focuses on the electronic parts of the hybrid dc circuit breaker, and a companion paper will elucidate the principle and operation of the fast acting MS and the overall operation of the hybrid dc circuit breaker. The selection and design of both the high-voltage and low-voltage electronic components in the hybrid dc circuit breaker are presented in this paper. The turn-OFF capability of the MB with and without snubber circuit is experimentally tested, validating its suitability for the hybrid dc circuit breaker application. The CSs’ conduction performances are tested up to 200 A, and its current commutating during fault current interruption is also analyzed. Finally, the hybrid dc circuit breaker demonstrated a fast current interruption within 2 ms at 7 kV and 100 A.",
"title": ""
},
{
"docid": "63a29e42a28698339d7d1f5e1a2fabcc",
"text": "(n) k edges have equal probabilities to be chosen as the next one . We shall 2 study the \"evolution\" of such a random graph if N is increased . In this investigation we endeavour to find what is the \"typical\" structure at a given stage of evolution (i . e . if N is equal, or asymptotically equal, to a given function N(n) of n) . By a \"typical\" structure we mean such a structure the probability of which tends to 1 if n -* + when N = N(n) . If A is such a property that lim Pn,N,(n ) ( A) = 1, we shall say that „almost all\" graphs Gn,N(n) n--possess this property .",
"title": ""
},
{
"docid": "c7daf28d656a9e51e5a738e70beeadcf",
"text": "We present a taxonomy for Information Visualization (IV) that characterizes it in terms of data, task, skill and context, as well as a number of dimensions that relate to the input and output hardware, the software tools, as well as user interactions and human perceptual abil ities. We il lustrate the utilit y of the taxonomy by focusing particularly on the information retrieval task and the importance of taking into account human perceptual capabiliti es and limitations. Although the relevance of Psychology to IV is often recognised, we have seen relatively littl e translation of psychological results and theory to practical IV applications. This paper targets the better development of information visualizations through the introduction of a framework delineating the major factors in interface development. We believe that higher quality visualizations will result from structured developments that take into account these considerations and that the framework will also serve to assist the development of effective evaluation and assessment processes.",
"title": ""
},
{
"docid": "be1965fb5a8c15b07e2b6f9895d383b2",
"text": "Although braided pneumatic actuators are capable of producing phenomenal forces compared to their weight, they have yet to see mainstream use due to their relatively short fatigue lives. By improving manufacturing techniques, actuator lifetime was extended by nearly an order of magnitude. Another concern is that their response times may be too long for control of legged robots. In addition, the frequency response of these actuators was found to be similar to that of human muscle.",
"title": ""
},
{
"docid": "54c8a8669b133e23035d93aabdc01a54",
"text": "The proposed antenna topology is an interesting radiating element, characterized by broadband or multiband capabilities. The exponential and soft/tapered design of the edge transitions and feeding makes it a challenging item to design and tune, leading though to impressive results. The antenna is build on Rogers RO3010 material. The bands in which the antenna works are GPS and Galileo (1.57 GHz), UMTS (1.8–2.17 GHz) and ISM 2.4 GHz (Bluetooth WiFi). The purpose of such an antenna is to be embedded in an Assisted GPS (A-GPS) reference station. Such a device serves as a fix GPS reference distributing the positioning information to mobile device users and delivering at the same time services via GSM network standards or via Wi-Fi / Bluetooth connections.",
"title": ""
},
{
"docid": "e1b536458ddc8603b281bac69e6bd2e8",
"text": "We present highly integrated sensor-actuator-controller units (SAC units), addressing the increasing need for easy to use components in the design of modern high-performance robotic systems. Following strict design principles and an electro-mechanical co-design from the beginning on, our development resulted in highly integrated SAC units. Each SAC unit includes a motor, a gear unit, an IMU, sensors for torque, position and temperature as well as all necessary embedded electronics for control and communication over a high-speed EtherCAT bus. Key design considerations were easy to use interfaces and a robust cabling system. Using slip rings to electrically connect the input and output side, the units allow continuous rotation even when chained along a robotic arm. The experimental validation shows the potential of the new SAC units regarding the design of humanoid robots.",
"title": ""
},
{
"docid": "95452e8b73a19500b1820665d2ad50b5",
"text": "Voltage noise not only detracts from reliability and performance, but has been used to attack system security. Most systems are completely unaware of fluctuations occurring on nanosecond time scales. This paper quantifies the threat to FPGA-based systems and presents a solution approach. Novel measurements of transients on 28nm FPGAs show that extreme activity in the fabric can cause enormous undershoot and overshoot, more than 10× larger than what is allowed by the specification. An existing voltage sensor is evaluated and shown to be insufficient. Lastly, a sensor design using reconfigurable logic is presented; its time-to-digital converter enables sample rates 500× faster than the 28nm Xilinx ADC. This enables quick characterization of transients that would normally go undetected, thereby providing potentially useful data for system optimization and helping to defend against supply voltage attacks.",
"title": ""
},
{
"docid": "2ceedf1be1770938c94892c80ae956e4",
"text": "Although there is interest in the educational potential of online multiplayer games and virtual worlds, there is still little evidence to explain specifically what and how people learn from these environments. This paper addresses this issue by exploring the experiences of couples that play World of Warcraft together. Learning outcomes were identified (involving the management of ludic, social and material resources) along with learning processes, which followed Wenger’s model of participation in Communities of Practice. Comparing this with existing literature suggests that productive comparisons can be drawn with the experiences of distance education students and the social pressures that affect their participation. Introduction Although there is great interest in the potential that computer games have in educational settings (eg, McFarlane, Sparrowhawk & Heald, 2002), and their relevance to learning more generally (eg, Gee, 2003), there has been relatively little in the way of detailed accounts of what is actually learnt when people play (Squire, 2002), and still less that relates such learning to formal education. In this paper, we describe a study that explores how people learn when they play the massively multiplayer online role-playing game (MMORPG), World of Warcraft. Detailed, qualitative research was undertaken with couples to explore their play, adopting a social perspective on learning. The paper concludes with a discussion that relates this to formal curricula and considers the implications for distance learning. British Journal of Educational Technology Vol 40 No 3 2009 444–457 doi:10.1111/j.1467-8535.2009.00948.x © 2009 The Authors. Journal compilation © 2009 Becta. Published by Blackwell Publishing, 9600 Garsington Road, Oxford OX4 2DQ, UK and 350 Main Street, Malden, MA 02148, USA. Background Researchers have long been interested in games and learning. There is, for example, a tradition of work within psychology exploring what makes games motivating, and relating this to learning (eg, Malone & Lepper, 1987). Games have been recently featured in mainstream educational policy (eg, DfES, 2005), and it has been suggested (eg, Gee, 2003) that they provide a model that should inform educational practice more generally. However, research exploring how games can be used in formal education suggests that the potential value of games to support learning is not so easy to realise. McFarlane et al (2002, p. 16), for example, argued that ‘the greatest obstacle to integrating games into the curriculum is the mismatch between the skills and knowledge developed in games, and those recognised explicitly within the school system’. Mitchell and Savill-Smith (2004) noted that although games have been used to support various kinds of learning (eg, recall of content, computer literacy, strategic skills), such uses were often problematic, being complicated by the need to integrate games into existing educational contexts. Furthermore, games specifically designed to be educational were ‘typically disliked’ (p. 44) as well as being expensive to produce. Until recently, research on the use of games in education tended to focus on ‘stand alone’ or single player games. Such games can, to some extent, be assessed in terms of their content coverage or instructional design processes, and evaluated for their ‘fit’ with a given curriculum (eg, Kirriemuir, 2002). 
Gaming, however, is generally a social activity, and this is even more apparent when we move from a consideration of single player games to a focus on multiplayer, online games. Viewing games from a social perspective opens the possibility of understanding learning as a social achievement, not just a process of content acquisition or skills development (Squire, 2002). In this study, we focus on a particular genre of online, multiplayer game: an MMORPG. MMORPGs incorporate structural elements drawn from table-top role-playing games (Dungeons & Dragons being the classic example). Play takes place in an expansive and persistent graphically rendered world. Players form teams and guilds, undertake group missions, meet in banks and auction houses, chat, congregate in virtual cities and engage in different modes of play, which involve various forms of collaboration and competition. As Squire noted (2002), socially situated accounts of actual learning in games (as opposed to what they might, potentially, help people to learn) have been lacking, partly because the topic is so complex. How, indeed, should the ‘game’ be understood—is it limited to the rules, or the player’s interactions with these rules? Does it include other players, and all possible interactions, and extend to out-of-game related activities and associated materials such as fan forums? Such questions have methodological implications, and hint at the ambiguities that educators working with virtual worlds might face (Carr, Oliver & Burn, 2008). Learning in virtual worlds 445 © 2009 The Authors. Journal compilation © 2009 Becta. Work in this area is beginning to emerge, particularly in relation to the learning and mentoring that takes place within player ‘guilds’ and online clans (see Galarneau, 2005; Steinkuehler, 2005). However, it is interesting to note that the research emerging from a digital game studies perspective, including much of the work cited thus far, is rarely utilised by educators researching the pedagogic potentials of virtual worlds such as Second Life. This study is informed by and attempts to speak to both of these communities. Methodology The purpose of this study was to explore how people learn in such virtual worlds in general. It was decided that focusing on a MMORPG such as World of Warcraft would be practical and offer a rich opportunity to study learning. MMORPGs are games; they have rules and goals, and particular forms of progression. Expertise in a virtual world such as Second Life is more dispersed, because the range of activities is that much greater (encompassing building, playing, scripting, creating machinima or socialising, for instance). Each of these activities would involve particular forms of expertise. The ‘curriculum’ proposed by World of Warcraft is more specified. It was important to approach learning practices in this game without divorcing such phenomena from the real-world contexts in which play takes place. In order to study players’ accounts of learning and the links between their play and other aspects of their social lives, we sought participants who would interact with each other both in the context of the game and outside of it. To this end, we recruited couples that play together in the virtual environment of World of Warcraft, while sharing real space. 
This decision was taken to manage the potential complexity of studying social settings: couples were the simplest stable social formation that we could identify who would interact both in the context of the game and outside of this too. Interviews were conducted with five couples. These were theoretically sampled, to maximise diversity in players’ accounts (as with any theoretically sampled study, this means that no claims can be made about prevalence or typicality). Players were recruited through online guilds and real-world social networks. The first two sets of participants were sampled for convenience (two heterosexual couples); the rest were invited to participate in order to broaden this sample (one couple was chosen because they shared a single account, one where a partner had chosen to stop playing and one mother–son pairing). All participants were adults, and conventional ethical procedures to ensure informed consent were followed, as specified in the British Educational Research Association guidelines. The couples were interviewed in the game world at a location of their choosing. The interviews, which were semi-structured, were chat-logged and each lasted 60–90 minutes. The resulting transcripts were split into self-contained units (typically a single statement, or a question and answer, or a short exchange) and each was categorised 446 British Journal of Educational Technology Vol 40 No 3 2009 © 2009 The Authors. Journal compilation © 2009 Becta. thematically. The initial categories were then jointly reviewed in order to consolidate and refine them, cross-checking them with the source transcripts to ensure their relevance and coherence. At this stage, the categories included references to topics such as who started first, self-assessments of competence, forms of help, guilds, affect, domestic space and assets, ‘alts’ (multiple characters) and so on. These were then reviewed to develop a single category that might provide an overview or explanation of the process. It should be noted that although this approach was informed by ‘grounded theory’ processes as described in Glaser and Strauss (1967), it does not share their positivistic stance on the status of the model that has been developed. Instead, it accords more closely with the position taken by Charmaz (2000), who recognises the central role of the researcher in shaping the data collected and making sense of it. What is produced therefore is seen as a socially constructed model, based on personal narratives, rather than an objective account of an independent reality. Reviewing the categories that emerged in this case led to ‘management of resources’ being selected as a general marker of learning. As players moved towards greater competence, they identified and leveraged an increasingly complex array of in-game resources, while also negotiating real-world resources and demands. To consider this framework in greater detail, ‘management of resources’ was subdivided into three categories: ludic (concerning the skills, knowledge and practices of game play), social and material (concerning physical resources such as the embodied setting for play) (see Carr & Oliver, 2008). Using this explanation of learning, the transcripts were re-reviewed in order to ",
"title": ""
},
{
"docid": "62766b08b1666085543b732cf839dec0",
"text": "The research area of evolutionary multiobjective optimization (EMO) is reaching better understandings of the properties and capabilities of EMO algorithms, and accumulating much evidence of their worth in practical scenarios. An urgent emerging issue is that the favoured EMO algorithms scale poorly when problems have \"many\" (e.g. five or more) objectives. One of the chief reasons for this is believed to be that, in many-objective EMO search, populations are likely to be largely composed of nondominated solutions. In turn, this means that the commonly-used algorithms cannot distinguish between these for selective purposes. However, there are methods that can be used validly to rank points in a nondominated set, and may therefore usefully underpin selection in EMO search. Here we discuss and compare several such methods. Our main finding is that simple variants of the often-overlooked \"Average Ranking\" strategy usually outperform other methods tested, covering problems with 5-20 objectives and differing amounts of inter-objective correlation.",
"title": ""
},
{
"docid": "f2c345550dae6b6da01b4ce335173693",
"text": "The key or the scale information of a piece of music provides important clues on its high level musical content, like harmonic and melodic context, which can be useful for music classification, retrieval or further content analysis. Researchers have previously addressed the issue of finding the key for symbolically encoded music (MIDI); however, very little work has been done on key detection for acoustic music. In this paper, we present a method for estimating the root of diatonic scale and the key directly from acoustic signals (waveform) of popular and classical music. We propose a method to extract pitch profile features from the audio signal, which characterizes the tone distribution in the music. The diatonic scale root and key are estimated based on the extracted pitch profile by using a tone clustering algorithm and utilizing the tone structure of keys. Experiments on 72 music pieces have been conducted to evaluate the proposed techniques. The success rate of scale root detection for pop music pieces is above 90%.",
"title": ""
},
{
"docid": "a7f1565d548359c9f19bed304c2fbba6",
"text": "Handwritten character generation is a popular research topic with various applications. Various methods have been proposed in the literatures which are based on methods such as pattern recognition, machine learning, deep learning or others. However, seldom method could generate realistic and natural handwritten characters with a built-in determination mechanism to enhance the quality of generated image and make the observers unable to tell whether they are written by a person. To address these problems, in this paper, we proposed a novel generative adversarial network, multi-scale multi-class generative adversarial network (MSMC-CGAN). It is a neural network based on conditional generative adversarial network (CGAN), and it is designed for realistic multi-scale character generation. MSMC-CGAN combines the global and partial image information as condition, and the condition can also help us to generate multi-class handwritten characters. Our model is designed with unique neural network structures, image features and training method. To validate the performance of our model, we utilized it in Chinese handwriting generation, and an evaluation method called mean opinion score (MOS) was used. The MOS results show that MSMC-CGAN achieved good performance.",
"title": ""
},
{
"docid": "81ea96fd08b41ce6e526d614e9e46a7e",
"text": "BACKGROUND\nChronic alcoholism is known to impair the functioning of episodic and working memory, which may consequently reduce the ability to learn complex novel information. Nevertheless, semantic and cognitive procedural learning have not been properly explored at alcohol treatment entry, despite its potential clinical relevance. The goal of the present study was therefore to determine whether alcoholic patients, immediately after the weaning phase, are cognitively able to acquire complex new knowledge, given their episodic and working memory deficits.\n\n\nMETHODS\nTwenty alcoholic inpatients with episodic memory and working memory deficits at alcohol treatment entry and a control group of 20 healthy subjects underwent a protocol of semantic acquisition and cognitive procedural learning. The semantic learning task consisted of the acquisition of 10 novel concepts, while subjects were administered the Tower of Toronto task to measure cognitive procedural learning.\n\n\nRESULTS\nAnalyses showed that although alcoholic subjects were able to acquire the category and features of the semantic concepts, albeit slowly, they presented impaired label learning. In the control group, executive functions and episodic memory predicted semantic learning in the first and second halves of the protocol, respectively. In addition to the cognitive processes involved in the learning strategies invoked by controls, alcoholic subjects seem to attempt to compensate for their impaired cognitive functions, invoking capacities of short-term passive storage. Regarding cognitive procedural learning, although the patients eventually achieved the same results as the controls, they failed to automate the procedure. Contrary to the control group, the alcoholic groups' learning performance was predicted by controlled cognitive functions throughout the protocol.\n\n\nCONCLUSION\nAt alcohol treatment entry, alcoholic patients with neuropsychological deficits have difficulty acquiring novel semantic and cognitive procedural knowledge. Compared with controls, they seem to use more costly learning strategies, which are nonetheless less efficient. These learning disabilities need to be considered when treatment requiring the acquisition of complex novel information is envisaged.",
"title": ""
},
{
"docid": "1f9940ff3e31267cfeb62b2a7915aba9",
"text": "Infrared vein detection is one of the newest biomedical techniques researched today. Basic principal behind this is, when IR light transmitted on palm it passes through tissue and veins absorbs that light and the vein appears darker than surrounding tissue. This paper presents vein detection system using strong IR light source, webcam, Matlab based image processing algorithm. Using the Strong IR light source consisting of high intensity led and webcam camera we captured transillumination image of palm. Image processing algorithm is used to separate out the veins from palm.",
"title": ""
},
{
"docid": "53518256d6b4f3bb4e8dcf28a35f9284",
"text": "Customers often evaluate products at brick-and-mortar stores to identify their “best fit” product but buy it for a lower price at a competing online retailer. This free-riding behavior by customers is referred to as “showrooming” and we show that this is detrimental to the profits of the brick-and-mortar stores. We first analyze price matching as a short-term strategy to counter showrooming. Since customers purchase from the store at lower than store posted price when they ask for price-matching, one would expect the price matching strategy to be less effective as the fraction of customers who seek the matching increases. However, our results show that with an increase in the fraction of customers who seek price matching, the stores profits initially decrease and then increase. While price-matching could be used even when customers do not exhibit showrooming behavior, we find that it is more effective when customers do showrooming. We then study exclusivity of product assortments as a long-term strategy to counter showrooming. This strategy can be implemented in two different ways. One, by arranging for exclusivity of known brands (e.g. Macy’s has such an arrangement with Tommy Hilfiger), or, two, through creation of store brands at the brick-and-mortar store (T.J.Maxx uses a large number of store brands). Our analysis suggests that implementing exclusivity through store brands is better than exclusivity through known brands when the product category has few digital attributes. However, when customers do not showroom, the known brand strategy dominates the store brand strategy.",
"title": ""
},
{
"docid": "30e0918ec670bdab298f4f5bb59c3612",
"text": "Consider a single hard disk drive (HDD) composed of rotating platters and a single magnetic head. We propose a simple internal coding framework for HDDs that uses coding across drive blocks to reduce average block seek times. In particular, instead of the HDD controller seeking individual blocks, the drive performs coded-seeking: It seeks the closest subset of coded blocks, where a coded block contains partial information from multiple uncoded blocks. Coded-seeking is a tool that relaxes the scheduling of a full traveling salesman problem (TSP) on an HDD into a k-TSP. This may provide opportunities for new scheduling algorithms and to reduce average read times.",
"title": ""
},
{
"docid": "9e669f91dcce29a497c8524fccc1380d",
"text": "Increased serum cholesterol and decreased high-density lipoprotein (HDL) cholesterol level in serum and cerebro-spinal fluid is a risk factor for the development of Alzheimer disease, and also a predictor of cardiovascular events and stroke in epidemiologic studies. Niacin (vitamin B 3 or nicotinic acid) is the most effective medication in current clinical use for increasing HDL cholesterol and it substantially lowers triglycerides and LDL cholesterol. This review provides an update on the role of the increasing HDL cholesterol agent, niacin, as a neuroprotective and neurorestorative agent which promotes angiogenesis and arteriogenesis after stroke and improves neurobehavioral recovery following central nervous system diseases such as stroke, Alzheimer’s disease and multiple sclerosis. The mechanisms underlying the niacin induced neuroprotective and neurorestorative effects after stroke are discussed. The primary focus of this review is on stroke, with shorter discussion on Alzheimer disease and multiple sclerosis.",
"title": ""
},
{
"docid": "dc8180cdc6344f1dc5bfa4dbf048912c",
"text": "Image analysis is a key area in the computer vision domain that has many applications. Genetic Programming (GP) has been successfully applied to this area extensively, with promising results. Highlevel features extracted from methods such as Speeded Up Robust Features (SURF) and Histogram of Oriented Gradients (HoG) are commonly used for object detection with machine learning techniques. However, GP techniques are not often used with these methods, despite being applied extensively to image analysis problems. Combining the training process of GP with the powerful features extracted by SURF or HoG has the potential to improve the performance by generating high-level, domaintailored features. This paper proposes a new GP method that automatically detects di↵erent regions of an image, extracts HoG features from those regions, and simultaneously evolves a classifier for image classification. By extending an existing GP region selection approach to incorporate the HoG algorithm, we present a novel way of using high-level features with GP for image classification. The ability of GP to explore a large search space in an e cient manner allows all stages of the new method to be optimised simultaneously, unlike in existing approaches. The new approach is applied across a range of datasets, with promising results when compared to a variety of well-known machine learning techniques. Some high-performing GP individuals are analysed to give insight into how GP can e↵ectively be used with high-level features for image classification.",
"title": ""
},
{
"docid": "ac2d4f4e6c73c5ab1734bfeae3a7c30a",
"text": "While neural, encoder-decoder models have had significant empirical success in text generation, there remain several unaddressed problems with this style of generation. Encoderdecoder models are largely (a) uninterpretable, and (b) difficult to control in terms of their phrasing or content. This work proposes a neural generation system using a hidden semimarkov model (HSMM) decoder, which learns latent, discrete templates jointly with learning to generate. We show that this model learns useful templates, and that these templates make generation both more interpretable and controllable. Furthermore, we show that this approach scales to real data sets and achieves strong performance nearing that of encoderdecoder text generation models.",
"title": ""
},
{
"docid": "c352aff803967465db59c44801d4368c",
"text": "A voltage reference was developed using a 0.18 μm standard CMOS process technology, which is compatible with high power supply rejection ratio (PSRR) and low power consumption. The proposed reference circuit operating with all transistors biased in subthreshold region, which provide a reference voltage of 256 mV. The temperature coefficient (TC) was 5 ppm/°C at best and 6.6 ppm/°C on average, in a range from 0 to 140 °C. The line sensitivity was 163 ppm/V in a supply voltage range of 0.8 to 3.2 V, and the power supply rejection was 82 dB at 100 Hz. The current consumption is 30 nA at 140 °C. The chip area was 0.0042mm2.",
"title": ""
},
{
"docid": "844c75292441af560ed2d2abc1d175f6",
"text": "Completion rates for massive open online classes (MOOCs) are notoriously low, but learner intent is an important factor. By studying students who drop out despite their intent to complete the MOOC, it may be possible to develop interventions to improve retention and learning outcomes. Previous research into predicting MOOC completion has focused on click-streams, demographics, and sentiment analysis. This study uses natural language processing (NLP) to examine if the language in the discussion forum of an educational data mining MOOC is predictive of successful class completion. The analysis is applied to a subsample of 320 students who completed at least one graded assignment and produced at least 50 words in discussion forums. The findings indicate that the language produced by students can predict with substantial accuracy (67.8 %) whether students complete the MOOC. This predictive power suggests that NLP can help us both to understand student retention in MOOCs and to develop automated signals of student success.",
"title": ""
}
] |
scidocsrr
|
f4fd964e1e14671425741205bc032e95
|
Graded Causation and Defaults
|
[
{
"docid": "a19e10548c395cdd03fdc80bb8c25ce1",
"text": "The need to make default assumptions is frequently encountered in reasoning'about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non.monotonJcity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected. The gods did not reveal, from the beginning, All things to us, but in the course of time Through seeking we may learn and know things better. But as for certain truth, no man has known it, Nor shall he know it, neither of the gods Nor yet of all the things of which I speak. For even if by chance he were to utter The final truth, he would himself not know it: For all is but a woven web of guesses. Xenophanes",
"title": ""
}
] |
[
{
"docid": "ce9421a7f8c1ae3a6b3983d7e0ff66c0",
"text": "Supporting Hebb's 1949 hypothesis of use-induced plasticity of the nervous system, our group found in the 1960s that training or differential experience induced neurochemical changes in cerebral cortex of the rat and regional changes in weight of cortex. Further studies revealed changes in cortical thickness, size of synaptic contacts, number of dendritic spines, and dendritic branching. Similar effects were found whether rats were assigned to differential experience at weaning (25 days of age), as young adults (105 days) or as adults (285 days). Enriched early experience improved performance on several tests of learning. Cerebral results of experience in an enriched environment are similar to results of formal training. Enriched experience and training appear to evoke the same cascade of neurochemical events in causing plastic changes in brain. Sufficiently rich experience may be necessary for full growth of species-specific brain characteristics and behavioral potential. Clayton and Krebs found in 1994 that birds that normally store food have larger hippocampi than related species that do not store. This difference develops only in birds given the opportunity to store and recover food. Research on use-induced plasticity is being applied to promote child development, successful aging, and recovery from brain damage; it is also being applied to benefit animals in laboratories, zoos and farms.",
"title": ""
},
{
"docid": "98d766b3756d1fe6634996fd91169c19",
"text": "Kratom (Mitragyna speciosa) is a widely abused herbal drug preparation in Southeast Asia. It is often consumed as a substitute for heroin, but imposing itself unknown harms and addictive burdens. Mitragynine is the major psychostimulant constituent of kratom that has recently been reported to induce morphine-like behavioural and cognitive effects in rodents. The effects of chronic consumption on non-drug related behaviours are still unclear. In the present study, we investigated the effects of chronic mitragynine treatment on spontaneous activity, reward-related behaviour and cognition in mice in an IntelliCage® system, and compared them with those of morphine and Δ-9-tetrahydrocannabinol (THC). We found that chronic mitragynine treatment significantly potentiated horizontal exploratory activity. It enhanced spontaneous sucrose preference and also its persistence when the preference had aversive consequences. Furthermore, mitragynine impaired place learning and its reversal. Thereby, mitragynine effects closely resembled that of morphine and THC sensitisation. These findings suggest that chronic mitragynine exposure enhances spontaneous locomotor activity and the preference for natural rewards, but impairs learning and memory. These findings confirm pleiotropic effects of mitragynine (kratom) on human lifestyle, but may also support the recognition of the drug's harm potential.",
"title": ""
},
{
"docid": "dbd11235f7b6b515f672b06bb10ebc3d",
"text": "Until recently job seeking has been a tricky, tedious and time consuming process, because people looking for a new position had to collect information from many different sources. Job recommendation systems have been proposed in order to automate and simplify this task, also increasing its effectiveness. However, current approaches rely on scarce manually collected data that often do not completely reveal people skills. Our work aims to find out relationships between jobs and people skills making use of data from LinkedIn users’ public profiles. Semantic associations arise by applying Latent Semantic Analysis (LSA). We use the mined semantics to obtain a hierarchical clustering of job positions and to build a job recommendation system. The outcome proves the effectiveness of our method in recommending job positions. Anyway, we argue that our approach is definitely general, because the extracted semantics could be worthy not only for job recommendation systems but also for recruiting systems. Furthermore, we point out that both the hierarchical clustering and the recommendation system do not require parameters to be tuned.",
"title": ""
},
{
"docid": "8cbfb79df2516bb8a06a5ae9399e3685",
"text": "We consider the problem of approximate set similarity search under Braun-Blanquet similarity <i>B</i>(<i>x</i>, <i>y</i>) = |<i>x</i> â© <i>y</i>| / max(|<i>x</i>|, |<i>y</i>|). The (<i>b</i><sub>1</sub>, <i>b</i><sub>2</sub>)-approximate Braun-Blanquet similarity search problem is to preprocess a collection of sets <i>P</i> such that, given a query set <i>q</i>, if there exists <i>x</i> â <i>P</i> with <i>B</i>(<i>q</i>, <i>x</i>) ⥠<i>b</i><sub>1</sub>, then we can efficiently return <i>x</i>â² â <i>P</i> with <i>B</i>(<i>q</i>, <i>x</i>â²) > <i>b</i><sub>2</sub>. \nWe present a simple data structure that solves this problem with space usage <i>O</i>(<i>n</i><sup>1+Ï</sup>log<i>n</i> + â<sub><i>x</i> â <i>P</i></sub>|<i>x</i>|) and query time <i>O</i>(|<i>q</i>|<i>n</i><sup>Ï</sup> log<i>n</i>) where <i>n</i> = |<i>P</i>| and Ï = log(1/<i>b</i><sub>1</sub>)/log(1/<i>b</i><sub>2</sub>). Making use of existing lower bounds for locality-sensitive hashing by OâDonnell et al. (TOCT 2014) we show that this value of Ï is tight across the parameter space, i.e., for every choice of constants 0 < <i>b</i><sub>2</sub> < <i>b</i><sub>1</sub> < 1. \nIn the case where all sets have the same size our solution strictly improves upon the value of Ï that can be obtained through the use of state-of-the-art data-independent techniques in the Indyk-Motwani locality-sensitive hashing framework (STOC 1998) such as Broderâs MinHash (CCS 1997) for Jaccard similarity and Andoni et al.âs cross-polytope LSH (NIPS 2015) for cosine similarity. Surprisingly, even though our solution is data-independent, for a large part of the parameter space we outperform the currently best data-<em>dependent</em> method by Andoni and Razenshteyn (STOC 2015).",
"title": ""
},
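As an aside to the record above: the Braun-Blanquet similarity and the query exponent ρ quoted in that abstract are simple enough to sketch directly. The following Python snippet is only an illustration of those two formulas under the stated constraint 0 < b2 < b1 < 1; the function names are chosen here for clarity and do not come from the cited paper or from this dataset.

# Illustrative sketch of the quantities defined in the abstract above (not code from the paper).
import math

def braun_blanquet(x, y):
    # B(x, y) = |x ∩ y| / max(|x|, |y|) for two finite sets; defined as 0.0 if either set is empty.
    x, y = set(x), set(y)
    if not x or not y:
        return 0.0
    return len(x & y) / max(len(x), len(y))

def query_exponent(b1, b2):
    # rho = log(1/b1) / log(1/b2), the exponent in the O(n^(1+rho)) space and O(|q| n^rho log n) query bounds.
    assert 0.0 < b2 < b1 < 1.0, "requires 0 < b2 < b1 < 1"
    return math.log(1.0 / b1) / math.log(1.0 / b2)

print(braun_blanquet({1, 2, 3}, {2, 3, 4, 5}))  # 0.5 (overlap of 2 elements, larger set has 4)
print(query_exponent(0.5, 0.25))                # 0.5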
{
"docid": "4c406b80ad6c6ca617177a55d149f325",
"text": "REST Chart is a Petri-Net based XML modeling framework for REST API. This paper presents two important enhancements and extensions to REST Chart modeling - Hyperlink Decoration and Hierarchical REST Chart. In particular, the proposed Hyperlink Decoration decomposes resource connections from resource representation, such that hyperlinks can be defined independently of schemas. This allows a Navigation-First Design by which the important global connections of a REST API can be designed first and reused before the local resource representations are implemented and specified. Hierarchical REST Chart is a powerful mechanism to rapidly decompose and extend a REST API in several dimensions based on Hyperlink Decoration. These new mechanisms can be used to manage the complexities in large scale REST APIs that undergo frequent changes as in some large scale open source development projects. This paper shows that these new capabilities can fit nicely in the REST Chart XML with very minor syntax changes. These enhancements to REST Chart are applied successfully in designing and verifying REST APIs for software-defined-networking (SDN) and Cloud computing.",
"title": ""
},
{
"docid": "6fe413cf75a694217c30a9ef79fab589",
"text": "Zusammenfassung) Biometrics have been used for secure identification and authentication for more than two decades since biometric data is unique, non-transferable, unforgettable, and always with us. Recently, biometrics has pervaded other aspects of security applications that can be listed under the topic of “Biometric Cryptosystems”. Although the security of some of these systems is questionable when they are utilized alone, integration with other technologies such as digital signatures or Identity Based Encryption (IBE) schemes results in cryptographically secure applications of biometrics. It is exactly this field of biometric cryptosystems that we focused in this thesis. In particular, our goal is to design cryptographic protocols for biometrics in the framework of a realistic security model with a security reduction. Our protocols are designed for biometric based encryption, signature and remote authentication. We first analyze the recently introduced biometric remote authentication schemes designed according to the security model of Bringer et al.. In this model, we show that one can improve the database storage cost significantly by designing a new architecture, which is a two-factor authentication protocol. This construction is also secure against the new attacks we present, which disprove the claimed security of remote authentication schemes, in particular the ones requiring a secure sketch. Thus, we introduce a new notion called “Weak-identity Privacy” and propose a new construction by combining cancelable biometrics and distributed remote authentication in order to obtain a highly secure biometric authentication system. We continue our research on biometric remote authentication by analyzing the security issues of multi-factor biometric authentication (MFBA). We formally describe the security model for MFBA that captures simultaneous attacks against these systems and define the notion of user privacy, where the goal of the adversary is to impersonate a client to the server. We design a new protocol by combining bipartite biotokens, homomorphic encryption and zero-knowledge proofs and provide a security reduction to achieve user privacy. The main difference of this MFBA protocol is that the server-side computations are performed in the encrypted domain but without requiring a decryption key for the authentication decision of the server. Thus, leakage of the secret key of any system component does not affect the security of the scheme as opposed to the current biometric systems involving crypto-",
"title": ""
},
{
"docid": "a4d8b3e9f60dfe8adbc95448a9feea2e",
"text": "In this article, I discuss material which is related to the recent proof of Fermat’s Last Theorem: elliptic curves, modular forms, Galois representations and their deformations, Frey’s construction, and the conjectures of Serre and of Taniyama-Shimura.",
"title": ""
},
{
"docid": "e66ce20b22d183d5b1d9aec2cdc1f736",
"text": "Performance tests were carried out for a microchannel printed circuit heat exchanger (PCHE), which was fabricated with micro photo-etching and diffusion bonding technologies. The microchannel PCHE was tested for Reynolds numbers in the range of 100‒850 varying the hot-side inlet temperature between 40 °C–50 °C while keeping the cold-side temperature fixed at 20 °C. It was found that the average heat transfer rate and heat transfer performance of the countercurrrent configuration were 6.8% and 10%‒15% higher, respectively, than those of the parallel flow. The average heat transfer rate, heat transfer performance and pressure drop increased with increasing Reynolds number in all experiments. Increasing inlet temperature did not affect the heat transfer performance while it slightly decreased the pressure drop in the experimental range considered. Empirical correlations have been developed for the heat transfer coefficient and pressure drop factor as functions of the Reynolds number.",
"title": ""
},
{
"docid": "d16053590115de26743945649a682878",
"text": "This chapter addresses various subjects, including some open questions related to energy dissipation, information, and noise, that are relevant for nanoand molecular electronics. The object is to give a brief and coherent presentation of the results of a number of recent studies of ours. 1 Energy Dissipation and Miniaturization It has been observed, in the context of Moore’s law, that the power density dissipation of microprocessors keeps growing with increasing miniaturization [1–4], and quantum computing schemes are not principally different [5, 6] for general-purpose computing applications. However, as we point out in Sect. 2 and seemingly in contrast with the above statements, the fundamental lower limit of energy dissipation of a single-bit-flip event (or switching event) is independent of the size of the system. Therefore, the increasing power dissipation may stem from the following practical facts [1–4]: • A larger number of transistors on the chip, contributing to a higher number of switching events per second; • lower relaxation time constants with smaller elements, allowing higher clock frequency and the resulting increased number of switching events per second; L.B. Kish (✉) ⋅ S.P. Khatri Department of Electrical and Computer Engineering, Texas A&M University, TAMUS 3128, College Station, TX 77843-3128, USA e-mail: [email protected]; [email protected] C.-G. Granqvist ⋅ G.A. Niklasson The Ångström Laboratory, Department of Engineering Sciences, Uppsala University, P.O. Box 534, SE-75121 Uppsala, Sweden F. Peper CiNet, NICT, Osaka University, 1-4 Yamadaoka, Suita, Osaka 565-0871, Japan © Springer International Publishing AG 2017 T. Ogawa (ed.), Molecular Architectonics, Advances in Atom and Single Molecule Machines, DOI 10.1007/978-3-319-57096-9_2 27 • increasing electrical field and current density, because the power supply voltage is not scaled back to the same extent as the device size; and • enhanced leakage current and related excess power dissipation, caused by an exponentially increasing tunneling effect associated with decreased insulator thickness and increased electrical field. It is clearly up to future technology to approach the fundamental limits of energy dissipation as much as possible. It is our goal in this chapter to address some of the basic, yet often controversial, aspects of the fundamental limits for nanoand molecular electronics. Specifically, we deal with the following issues: • The fundamental limit of energy dissipation for writing a bit of information. This energy is always positive and characterized by Brillouin’s negentropy formula and our refinement for longer bit operations [7–10]. • The fundamental limits of energy dissipation for erasing a bit of information [7–12]. This energy can be zero or negative; we also present a simple proof of the non-validity of Landauer’s principle of erasure dissipation [11, 12]. • Thermal noise in the low-temperature and/or high-frequency limit, i.e., in the quantum regime (referred to as “zero-point noise”). It is easy to show that both the quantum theory of the fluctuation–dissipation theorem and Nyquist’s seminal formula are incorrect and dependent on the experimental situation [13, 14], which implies that further studies are needed to clarify the properties of zero-point fluctuations in resistors in electronics-based information processors operating in the quantum limit. 
2 Fundamental Lower Limits of Energy Dissipation for Writing an Information Bit [7–10] Szilard [15] (in 1929, in an incorrect way) and Brillouin [16] (in 1953, correctly) concluded that the minimum energy dissipation H1 due to changing a bit of information in a system at absolute temperature T is given as",
"title": ""
},
{
"docid": "f5b6dba70d19e8327a885c912dac23b6",
"text": "Genital warts affect 1% of the sexually active U.S. population and are commonly seen in primary care. Human papillomavirus types 6 and 11 are responsible for most genital warts. Warts vary from small, flat-topped papules to large, cauliflower-like lesions on the anogenital mucosa and surrounding skin. Diagnosis is clinical, but atypical lesions should be confirmed by histology. Treatments may be applied by patients, or by a clinician in the office. Patient-applied treatments include topical imiquimod, podofilox, and sinecatechins, whereas clinician-applied treatments include podophyllin, bichloroacetic acid, and trichloroacetic acid. Surgical treatments include excision, cryotherapy, and electrosurgery. The quadrivalent human papillomavirus vaccine is active against virus subtypes that cause genital warts in men and women. Additionally, male circumcision may be effective in decreasing the transmission of human immunodeficiency virus, human papillomavirus, and herpes simplex virus.",
"title": ""
},
{
"docid": "fa0f02cde08a3cee4b691788815cb757",
"text": "Control strategies for these contaminants will require a better understanding of how they move around the globe.",
"title": ""
},
{
"docid": "7edea9ca3ec520656c741c04ba7041bf",
"text": "Air pollution in urban cites is caused not only by local emission sources but also significantly by regional atmospheric pollutant transport from surrounding areas. This study is to identify regional atmospheric PM10 transport pathways through an integrated modeling and synoptic pressure pattern analysis approach with a case study in Beijing, northern China. Beijing is a city sensitive to trans-boundary transport of aerosols from its surrounding provinces. The pathway identification was conducted through tracking air masses and analyzing dominant transport patterns leading to air pollution episode in Beijing. Trajectories were calculated using NOAA-HYSPLIT model based on the meteorological field of MM5 outputs in October 2002. A k-means clustering algorithm was applied to group these trajectories into different transport patterns. Monitored PM10 levels during each transport pattern were then examined to evaluate its influence on atmospheric PM10 in Beijing. The southwest transport pathway was identified to be closely associated with the increasing phase of PM10. An integrated MM5-CMAQ modeling systemwas then applied to simulate PM10 concentrations in Beijing and its surrounding provinces for the southwest transport period. It was found that a convergence flow field with high PM10 concentrations frequently appeared between northwest mountain breeze and southwest plain breeze on the lee of the Taihang Mountains. Further analysis indicated that high-pressure systems accompanied with thermal inversion in the boundary layer were the governing synoptic patterns in southwest transport period. Transboundary transport along with the convergence zone induced by mesoscale low pressure system in front of the Taihang Mountains, which was generated by topographical dynamics and thermal effects, proved to be the main cause of high PM10 levels in Beijing. 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "79f7d7dc109a9e8d2e5197de8f2d76e7",
"text": "Goal Oriented Requirements Engineering (GORE) is concerned with the identification of goals of the software according to the need of the stakeholders. In GORE, goals are the need of the stakeholders. These goals are refined and decomposed into sub-goals until the responsibility of the last goals are assigned to some agent or some software system. In literature different methods have been developed based on GORE concepts for the identification of software goals or software requirements like fuzzy attributed goal oriented software requirements analysis (FAGOSRA) method, knowledge acquisition for automated specifications (KAOS), i∗ framework, attributed goal oriented requirements analysis (AGORA) method, etc. In AGORA, decision makers use subjective values during the selection and the prioritization of software requirements. AGORA method can be extended by computing the objective values. These objective values can be obtained by using analytic hierarchy process (AHP). In AGORA, there is no support to check whether the values provided by the decision makers are consistent or not. Therefore, in order to address this issue we proposed a method for the prioritization of software requirements by applying the AHP in goal oriented requirements elicitation method. Finally, we consider an example to explain the proposed method.",
"title": ""
},
{
"docid": "e5ecbd3728e93badd4cfbf5eef6957f9",
"text": "Live-cell imaging has opened an exciting window into the role cellular heterogeneity plays in dynamic, living systems. A major critical challenge for this class of experiments is the problem of image segmentation, or determining which parts of a microscope image correspond to which individual cells. Current approaches require many hours of manual curation and depend on approaches that are difficult to share between labs. They are also unable to robustly segment the cytoplasms of mammalian cells. Here, we show that deep convolutional neural networks, a supervised machine learning method, can solve this challenge for multiple cell types across the domains of life. We demonstrate that this approach can robustly segment fluorescent images of cell nuclei as well as phase images of the cytoplasms of individual bacterial and mammalian cells from phase contrast images without the need for a fluorescent cytoplasmic marker. These networks also enable the simultaneous segmentation and identification of different mammalian cell types grown in co-culture. A quantitative comparison with prior methods demonstrates that convolutional neural networks have improved accuracy and lead to a significant reduction in curation time. We relay our experience in designing and optimizing deep convolutional neural networks for this task and outline several design rules that we found led to robust performance. We conclude that deep convolutional neural networks are an accurate method that require less curation time, are generalizable to a multiplicity of cell types, from bacteria to mammalian cells, and expand live-cell imaging capabilities to include multi-cell type systems.",
"title": ""
},
{
"docid": "7716409441fb8e34013d3e9f58d32476",
"text": "Decentralized partially observable Markov decision processes (Dec-POMDPs) are a powerful tool for modeling multi-agent planning and decision-making under uncertainty. Prevalent Dec-POMDP solution techniques require centralized computation given full knowledge of the underlying model. Multi-agent reinforcement learning (MARL) based approaches have been recently proposed for distributed solution of during learning and policy execution are identical. In some practical scenarios this may not be the case. We propose a novel MARL approach in which agents are allowed to rehearse with information that will not be available during policy execution. The key is for the agents to learn policies that do not explicitly rely on these rehearsal features. We also establish a weak convergence result for our algorithm, RLaR, demonstrating that RLaR converges in probability when certain conditions are met. We show experimentally that incorporating rehearsal features can enhance the learning rate compared to non-rehearsalbased learners, and demonstrate fast, (near) optimal performance on many existing benchmark DecPOMDP problems. We also compare RLaR against an existing approximate Dec-POMDP solver which, like RLaR, does not assume a priori knowledge of the model. While RLaR's policy representation is not as scalable, we show that RLaR produces higher quality policies for most problems and horizons studied. & 2016 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "bf1d099d80d522d9c51e28d04fd8236b",
"text": "With the rise of social media platforms based on the sharing of pictures and videos, the question of how such platforms should be studied arises. Previous research on social media (content) has mainly focused on text (written words) and the rather text-based social media platforms Twitter and Facebook. Drawing on research in the fields of visual, political, and business communication, we introduce a methodological framework to study the fast-growing image-sharing service Instagram. This methodological framework was developed to study political parties’ Instagram accounts and tested by means of a study of Swedish political parties during the 2014 election campaign. In this article, we adapt the framework to also study other types of organizations active on Instagram by focusing on the following main questions: Do organizations only use Instagram to share one-way information, focusing on disseminating information and self-presentation? Or is Instagram used for two-way communication to establish and cultivate organization-public relationships? We introduce and discuss the coding of variables with respect to four clusters: the perception of the posting, image management, integration, and interactivity.",
"title": ""
},
{
"docid": "e27ba8014614830b209dd9bbb4d42c4c",
"text": "One of the challenges in modeling cognitive events from electroencephalogram (EEG) data is finding representations that are invariant to interand intra-subject differences, as well as the inherent noise associated with EEG data collection. Herein, we explore the capabilities of the recent deep neural architectures for modeling cognitive events from EEG data. In this paper, we present recent achievements applying deep learning for EEG signal classification. We investigate the use of feed forward, convolutional, recurrent neural nets, as well as deep belief networks, echo-state networks, reservoir computing, and denoising auto encoder models. We present the application of these architectures for classifying user intent generated through different motor imagery; BCI to control wheelchair and robotic arm; mental load classification; discriminating emotional state; feature dimensionality reduction for EEG data. Many of the models prove to be more accurate and more efficient than current state-of-the-art models.",
"title": ""
},
{
"docid": "6fb8b461530af2c56ec0fac36dd85d3a",
"text": "Psoriatic arthritis is one of the spondyloarthritis. It is a disease of clinical heterogenicity, which may affect peripheral joints, as well as axial spine, with presence of inflammatory lesions in soft tissue, in a form of dactylitis and enthesopathy. Plain radiography remains the basic imaging modality for PsA diagnosis, although early inflammatory changes affecting soft tissue and bone marrow cannot be detected with its use, or the image is indistinctive. Typical radiographic features of PsA occur in an advanced disease, mainly within the synovial joints, but also in fibrocartilaginous joints, such as sacroiliac joints, and additionally in entheses of tendons and ligaments. Moll and Wright classified PsA into 5 subtypes: asymmetric oligoarthritis, symmetric polyarthritis, arthritis mutilans, distal interphalangeal arthritis of the hands and feet and spinal column involvement. In this part of the paper we discuss radiographic features of the disease. The next one will address magnetic resonance imaging and ultrasonography.",
"title": ""
},
{
"docid": "4fa9f9ac4204de1394cd7133254aa046",
"text": "Over the last ten years, face recognition has become a specialized applications area within the field of computer vision. Sophisticated commercial systems have been developed that achieve high recognition rates. Although elaborate, many of these systems include a subspace projection step and a nearest neighbor classifier. The goal of this paper is to rigorously compare two subspace projection techniques within the context of a baseline system on the face recognition task. The first technique is principal component analysis (PCA), a well-known “baseline” for projection techniques. The second technique is independent component analysis (ICA), a newer method that produces spatially localized and statistically independent basis vectors. Testing on the FERET data set (and using standard partitions), we find that, when a proper distance metric is used, PCA significantly outperforms ICA on a human face recognition task. This is contrary to previously",
"title": ""
},
{
"docid": "ee510bbe7c7be6e0fb86a32d9f527be1",
"text": "Internet communications with paths that include satellite link face some peculiar challenges, due to the presence of a long propagation wireless channel. In this paper, we propose a performance enhancing proxy (PEP) solution, called PEPsal, which is, to the best of the authors' knowledge, the first open source TCP splitting solution for the GNU/Linux operating systems. PEPsal improves the performance of a TCP connection over a satellite channel making use of the TCP Hybla, a TCP enhancement for satellite networks developed by the authors. The objective of the paper is to present and evaluate the PEPsal architecture, by comparing it with end to end TCP variants (NewReno, SACK, Hybla), considering both performance and reliability issues. Performance is evaluated by making use of a testbed set up at the University of Bologna, to study advanced transport protocols and architectures for Internet satellite communications",
"title": ""
}
] |
scidocsrr
|
954a411bf58312459ac38b4b9d4d3bf1
|
Foresight: Rapid Data Exploration Through Guideposts
|
[
{
"docid": "299242a092512f0e9419ab6be13f9b93",
"text": "In this paper, we present ForeCache, a general-purpose tool for exploratory browsing of large datasets. ForeCache utilizes a client-server architecture, where the user interacts with a lightweight client-side interface to browse datasets, and the data to be browsed is retrieved from a DBMS running on a back-end server. We assume a detail-on-demand browsing paradigm, and optimize the back-end support for this paradigm by inserting a separate middleware layer in front of the DBMS. To improve response times, the middleware layer fetches data ahead of the user as she explores a dataset.\n We consider two different mechanisms for prefetching: (a) learning what to fetch from the user's recent movements, and (b) using data characteristics (e.g., histograms) to find data similar to what the user has viewed in the past. We incorporate these mechanisms into a single prediction engine that adjusts its prediction strategies over time, based on changes in the user's behavior. We evaluated our prediction engine with a user study, and found that our dynamic prefetching strategy provides: (1) significant improvements in overall latency when compared with non-prefetching systems (430% improvement); and (2) substantial improvements in both prediction accuracy (25% improvement) and latency (88% improvement) relative to existing prefetching techniques.",
"title": ""
},
{
"docid": "6103a365705a6083e40bb0ca27f6ca78",
"text": "Confirmation bias, as the term is typically used in the psychological literature, connotes the seeking or interpreting of evidence in ways that are partial to existing beliefs, expectations, or a hypothesis in hand. The author reviews evidence of such a bias in a variety of guises and gives examples of its operation in several practical contexts. Possible explanations are considered, and the question of its utility or disutility is discussed.",
"title": ""
},
{
"docid": "467c2a106b6fd5166f3c2a44d655e722",
"text": "AutoVis is a data viewer that responds to content – text, relational tables, hierarchies, streams, images – and displays the information appropriately (that is, as an expert would). Its design rests on the grammar of graphics, scagnostics and a modeler based on the logic of statistical analysis. We distinguish an automatic visualization system (AVS) from an automated visualization system. The former automatically makes decisions about what is to be visualized. The latter is a programming system for automating the production of charts, graphs and visualizations. An AVS is designed to provide a first glance at data before modeling and analysis are done. AVS is designed to protect researchers from ignoring missing data, outliers, miscodes and other anomalies that can violate statistical assumptions or otherwise jeopardize the validity of models. The design of this system incorporates several unique features: (1) a spare interface – analysts simply drag a data source into an empty window, (2) a graphics generator that requires no user definitions to produce graphs, (3) a statistical analyzer that protects users from false conclusions, and (4) a pattern recognizer that responds to the aspects (density, shape, trend, and so on) that professional statisticians notice when investigating data sets.",
"title": ""
}
] |
[
{
"docid": "4d1ae6893fa8b19d05da5794a3fb7978",
"text": "This study analyzes the influence of IT governance on IT investment performance. IT investment performance is known to vary widely across firms. Prior studies find that the variations are often due to the lack of investments in complementary organizational capitals. The presence of complementarities between IT and organizational capitals suggests that IT investment decisions should be made at the right organizational level to ensure that both IT and organizational factors are taken into consideration. IT governance, which determines the allocation of IT decision rights within a firm, therefore, plays an important role in IT investment performance. This study tests this proposition by using a sample dataset from Fortune 1000 firms. A key challenge in this study is that the appropriate IT governance mode varies across firms as well as across business units within a firm. We address this challenge by developing an empirical model of IT governance that is based on earlier studies on multiple contingency factors of IT governance. We use the empirical model to predict the appropriate IT governance mode for each business unit within a firm and use the difference between the predicted and observed IT governance mode to derive a measure of a firm’s IT governance misalignment. We find that firms with high IT governance misalignment receive no benefits from their IT investments; whereas firms with low IT governance misalignment obtain two to three times the value from their IT investments compared to firms with average IT governance misalignment. Our results highlight the importance of IT governance in realizing value from IT investments and confirm the validity of using the multiple contingency factor model in assessing IT governance decisions.",
"title": ""
},
{
"docid": "972ef2897c352ad384333dd88588f0e6",
"text": "We describe a vision-based obstacle avoidance system for of f-road mobile robots. The system is trained from end to end to map raw in put images to steering angles. It is trained in supervised mode t predict the steering angles provided by a human driver during training r uns collected in a wide variety of terrains, weather conditions, lighting conditions, and obstacle types. The robot is a 50cm off-road truck, with two f orwardpointing wireless color cameras. A remote computer process es the video and controls the robot via radio. The learning system is a lar ge 6-layer convolutional network whose input is a single left/right pa ir of unprocessed low-resolution images. The robot exhibits an excell ent ability to detect obstacles and navigate around them in real time at spe ed of 2 m/s.",
"title": ""
},
{
"docid": "0f0305afce53933df1153af6a31c09fb",
"text": "In the study of indoor simultaneous localization and mapping (SLAM) problems using a stereo camera, two types of primary features-point and line segments-have been widely used to calculate the pose of the camera. However, many feature-based SLAM systems are not robust when the camera moves sharply or turns too quickly. In this paper, an improved indoor visual SLAM method to better utilize the advantages of point and line segment features and achieve robust results in difficult environments is proposed. First, point and line segment features are automatically extracted and matched to build two kinds of projection models. Subsequently, for the optimization problem of line segment features, we add minimization of angle observation in addition to the traditional re-projection error of endpoints. Finally, our model of motion estimation, which is adaptive to the motion state of the camera, is applied to build a new combinational Hessian matrix and gradient vector for iterated pose estimation. Furthermore, our proposal has been tested on EuRoC MAV datasets and sequence images captured with our stereo camera. The experimental results demonstrate the effectiveness of our improved point-line feature based visual SLAM method in improving localization accuracy when the camera moves with rapid rotation or violent fluctuation.",
"title": ""
},
{
"docid": "9c9e3261c293aedea006becd2177a6d5",
"text": "This paper proposes a motion-focusing method to extract key frames and generate summarization synchronously for surveillance videos. Within each pre-segmented video shot, the proposed method focuses on one constant-speed motion and aligns the video frames by fixing this focused motion into a static situation. According to the relative motion theory, the other objects in the video are moving relatively to the selected kind of motion. This method finally generates a summary image containing all moving objects and embedded with spatial and motional information, together with key frames to provide details corresponding to the regions of interest in the summary image. We apply this method to the lane surveillance system and the results provide us a new way to understand the video efficiently.",
"title": ""
},
{
"docid": "36874bcbbea1563542265cf2c6261ede",
"text": "Given the tremendous growth of online videos, video thumbnail, as the common visualization form of video content, is becoming increasingly important to influence user's browsing and searching experience. However, conventional methods for video thumbnail selection often fail to produce satisfying results as they ignore the side semantic information (e.g., title, description, and query) associated with the video. As a result, the selected thumbnail cannot always represent video semantics and the click-through rate is adversely affected even when the retrieved videos are relevant. In this paper, we have developed a multi-task deep visual-semantic embedding model, which can automatically select query-dependent video thumbnails according to both visual and side information. Different from most existing methods, the proposed approach employs the deep visual-semantic embedding model to directly compute the similarity between the query and video thumbnails by mapping them into a common latent semantic space, where even unseen query-thumbnail pairs can be correctly matched. In particular, we train the embedding model by exploring the large-scale and freely accessible click-through video and image data, as well as employing a multi-task learning strategy to holistically exploit the query-thumbnail relevance from these two highly related datasets. Finally, a thumbnail is selected by fusing both the representative and query relevance scores. The evaluations on 1,000 query-thumbnail dataset labeled by 191 workers in Amazon Mechanical Turk have demonstrated the effectiveness of our proposed method.",
"title": ""
},
{
"docid": "48b78cae830b76b85c5205a9728244be",
"text": "The striking ability of music to elicit emotions assures its prominent status in human culture and every day life. Music is often enjoyed and sought for its ability to induce or convey emotions, which may manifest in anything from a slight variation in mood, to changes in our physical condition and actions. Consequently, research on how we might associate musical pieces with emotions and, more generally, how music brings about an emotional response is attracting ever increasing attention. First, this paper provides a thorough review of studies on the relation of music and emotions from di↵erent disciplines. We then propose new insights to enhance automated music emotion recognition models using recent results from psychology, musicology, a↵ective computing, semantic technologies and music information retrieval.",
"title": ""
},
{
"docid": "709c06739d20fe0a5ba079b21e5ad86d",
"text": "Bug triaging refers to the process of assigning a bug to the most appropriate developer to fix. It becomes more and more difficult and complicated as the size of software and the number of developers increase. In this paper, we propose a new framework for bug triaging, which maps the words in the bug reports (i.e., the term space) to their corresponding topics (i.e., the topic space). We propose a specialized topic modeling algorithm named <italic> multi-feature topic model (MTM)</italic> which extends Latent Dirichlet Allocation (LDA) for bug triaging. <italic>MTM </italic> considers product and component information of bug reports to map the term space to the topic space. Finally, we propose an incremental learning method named <italic>TopicMiner</italic> which considers the topic distribution of a new bug report to assign an appropriate fixer based on the affinity of the fixer to the topics. We pair <italic> TopicMiner</italic> with MTM (<italic>TopicMiner<inline-formula><tex-math notation=\"LaTeX\">$^{MTM}$</tex-math> <alternatives><inline-graphic xlink:href=\"xia-ieq1-2576454.gif\"/></alternatives></inline-formula></italic>). We have evaluated our solution on 5 large bug report datasets including GCC, OpenOffice, Mozilla, Netbeans, and Eclipse containing a total of 227,278 bug reports. We show that <italic>TopicMiner<inline-formula><tex-math notation=\"LaTeX\"> $^{MTM}$</tex-math><alternatives><inline-graphic xlink:href=\"xia-ieq2-2576454.gif\"/></alternatives></inline-formula> </italic> can achieve top-1 and top-5 prediction accuracies of 0.4831-0.6868, and 0.7686-0.9084, respectively. We also compare <italic>TopicMiner<inline-formula><tex-math notation=\"LaTeX\">$^{MTM}$</tex-math><alternatives> <inline-graphic xlink:href=\"xia-ieq3-2576454.gif\"/></alternatives></inline-formula></italic> with Bugzie, LDA-KL, SVM-LDA, LDA-Activity, and Yang et al.'s approach. The results show that <italic>TopicMiner<inline-formula> <tex-math notation=\"LaTeX\">$^{MTM}$</tex-math><alternatives><inline-graphic xlink:href=\"xia-ieq4-2576454.gif\"/> </alternatives></inline-formula></italic> on average improves top-1 and top-5 prediction accuracies of Bugzie by 128.48 and 53.22 percent, LDA-KL by 262.91 and 105.97 percent, SVM-LDA by 205.89 and 110.48 percent, LDA-Activity by 377.60 and 176.32 percent, and Yang et al.'s approach by 59.88 and 13.70 percent, respectively.",
"title": ""
},
{
"docid": "4cc3f3a5e166befe328b6e18bc836e89",
"text": "Virtual human characters are found in a broad range of applications, from movies, games and networked virtual environments to teleconferencing and tutoring applications. Such applications are available on a variety of platforms, from desktop and web to mobile devices. High-quality animation is an essential prerequisite for realistic and believable virtual characters. Though researchers and application developers have ample animation techniques for virtual characters at their disposal, implementation of these techniques into an existing application tends to be a daunting and time-consuming task. In this paper we present visage|SDK, a versatile framework for real-time character animation based on MPEG-4 FBA standard that offers a wide spectrum of features that includes animation playback, lip synchronization and facial motion tracking, while facilitating rapid production of art assets and easy integration with existing graphics engines.",
"title": ""
},
{
"docid": "002fe3efae0fc9f88690369496ce5e7d",
"text": "Experimental evidence suggests that emotions can both speed-up and slow-down the internal clock. Speeding up has been observed for to-be-timed emotional stimuli that have the capacity to sustain attention, whereas slowing down has been observed for to-be-timed neutral stimuli that are presented in the context of emotional distractors. These effects have been explained by mechanisms that involve changes in bodily arousal, attention, or sentience. A review of these mechanisms suggests both merits and difficulties in the explanation of the emotion-timing link. Therefore, a hybrid mechanism involving stimulus-specific sentient representations is proposed as a candidate for mediating emotional influences on time. According to this proposal, emotional events enhance sentient representations, which in turn support temporal estimates. Emotional stimuli with a larger share in ones sentience are then perceived as longer than neutral stimuli with a smaller share.",
"title": ""
},
{
"docid": "782346defc00d03c61fb8f694d612653",
"text": "We present PrologCheck, an automatic tool for propertybased testing of programs in the logic programming language Prolog with randomised test data generation. The tool is inspired by the well known QuickCheck, originally designed for the functional programming language Haskell. It includes features that deal with specific characteristics of Prolog such as its relational nature (as opposed to Haskell) and the absence of a strong type discipline. PrologCheck expressiveness stems from describing properties as Prolog goals. It enables the definition of custom test data generators for random testing tailored for the property to be tested. Further, it allows the use of a predicate specification language that supports types, modes and constraints on the number of successful computations. We evaluate our tool on a number of examples and apply it successfully to debug a Prolog library for AVL search trees.",
"title": ""
},
{
"docid": "ba4df2305d4f292a6ee0f033e58d7a16",
"text": "Reliable and real-time 3D reconstruction and localization functionality is a crucial prerequisite for the navigation of actively controlled capsule endoscopic robots as an emerging, minimally invasive diagnostic and therapeutic technology for use in the gastrointestinal (GI) tract. In this study, we propose a fully dense, non-rigidly deformable, strictly real-time, intraoperative map fusion approach for actively controlled endoscopic capsule robot applications which combines magnetic and vision-based localization, with non-rigid deformations based frame-to-model map fusion. The performance of the proposed method is evaluated using four different ex-vivo porcine stomach models. Across different trajectories of varying speed and complexity, and four different endoscopic cameras, the root mean square surface reconstruction errors vary from 1.58 to 2.17 cm.",
"title": ""
},
{
"docid": "a931f939e2e0c0f2f8940796ee23e957",
"text": "PURPOSE OF REVIEW\nMany patients requiring cardiac arrhythmia device surgery are on chronic oral anticoagulation therapy. The periprocedural management of their anticoagulation presents a dilemma to physicians, particularly in the subset of patients with moderate-to-high risk of arterial thromboembolic events. Physicians have responded by treating patients with bridging anticoagulation while oral anticoagulation is temporarily discontinued. However, there are a number of downsides to bridging anticoagulation around device surgery; there is a substantial risk of significant device pocket hematoma with important clinical sequelae; bridging anticoagulation may lead to more arterial thromboembolic events and bridging anticoagulation is expensive.\n\n\nRECENT FINDINGS\nIn response to these issues, a number of centers have explored the option of performing device surgery without cessation of oral anticoagulation. The observational data suggest a greatly reduced hematoma rate with this strategy. Despite these encouraging results, most physicians are reluctant to move to operating on continued Coumadin in the absence of confirmatory data from a randomized trial.\n\n\nSUMMARY\nWe have designed a prospective, single-blind, randomized, controlled trial to address this clinical question. In the conventional arm, patients will be bridged. In the experimental arm, patients will continue on oral anticoagulation and the primary outcome is clinically significant hematoma. Our study has clinical relevance to at least 70 000 patients per year in North America.",
"title": ""
},
{
"docid": "c196444f2093afc3092f85b8fbb67da5",
"text": "The objective of this paper is to evaluate “human action recognition without human”. Motion representation is frequently discussed in human action recognition. We have examined several sophisticated options, such as dense trajectories (DT) and the two-stream convolutional neural network (CNN). However, some features from the background could be too strong, as shown in some recent studies on human action recognition. Therefore, we considered whether a background sequence alone can classify human actions in current large-scale action datasets (e.g., UCF101). In this paper, we propose a novel concept for human action analysis that is named “human action recognition without human”. An experiment clearly shows the effect of a background sequence for understanding an action label.",
"title": ""
},
{
"docid": "8b45d7f55e7968a203da2eb09c712858",
"text": "The importance of demonstrating the value achieved from IT investments is long established in the Computer Science (CS) and Information Systems (IS) literature. However, emerging technologies such as the ever-changing complex area of cloud computing present new challenges and opportunities for demonstrating how IT investments lead to business value. This paper conducts a multidisciplinary systematic literature review drawing from CS, IS, and Business disciplines to understand the current evidence on the quantification of financial value from cloud computing investments. The study identified 53 articles, which were coded in an analytical framework across six themes (measurement type, costs, benefits, adoption type, actor and service model). Future research directions were presented for each theme. The review highlights the need for multi-disciplinary research which both explores and further develops the conceptualization of value in cloud computing research, and research which investigates how IT value manifests itself across the chain of service provision and in inter-organizational scenarios.",
"title": ""
},
{
"docid": "fd2e7025271565927f43784f0c69c3fb",
"text": "In this paper, we have proposed a fingerprint orientation model based on 2D Fourier expansions (FOMFE) in the phase plane. The FOMFE does not require prior knowledge of singular points (SPs). It is able to describe the overall ridge topology seamlessly, including the SP regions, even for noisy fingerprints. Our statistical experiments on a public database show that the proposed FOMFE can significantly improve the accuracy of fingerprint feature extraction and thus that of fingerprint matching. Moreover, the FOMFE has a low-computational cost and can work very efficiently on large fingerprint databases. The FOMFE provides a comprehensive description for orientation features, which has enabled its beneficial use in feature-related applications such as fingerprint indexing. Unlike most indexing schemes using raw orientation data, we exploit FOMFE model coefficients to generate the feature vector. Our indexing experiments show remarkable results using different fingerprint databases",
"title": ""
},
{
"docid": "4bf253b2349978d17fd9c2400df61d21",
"text": "This paper proposes an architecture for the mapping between syntax and phonology – in particular, that aspect of phonology that determines the linear ordering of words. We propose that linearization is restricted in two key ways. (1) the relative ordering of words is fixed at the end of each phase, or ‘‘Spell-out domain’’; and (2) ordering established in an earlier phase may not be revised or contradicted in a later phase. As a consequence, overt extraction out of a phase P may apply only if the result leaves unchanged the precedence relations established in P. We argue first that this architecture (‘‘cyclic linearization’’) gives us a means of understanding the reasons for successive-cyclic movement. We then turn our attention to more specific predictions of the proposal: in particular, the e¤ects of Holmberg’s Generalization on Scandinavian Object Shift; and also the Inverse Holmberg Effects found in Scandinavian ‘‘Quantifier Movement’’ constructions (Rögnvaldsson (1987); Jónsson (1996); Svenonius (2000)) and in Korean scrambling configurations (Ko (2003, 2004)). The cyclic linearization proposal makes predictions that cross-cut the details of particular syntactic configurations. For example, whether an apparent case of verb fronting results from V-to-C movement or from ‘‘remnant movement’’ of a VP whose complements have been removed by other processes, the verb should still be required to precede its complements after fronting if it preceded them before fronting according to an ordering established at an earlier phase. We argue that ‘‘cross-construction’’ consistency of this sort is in fact found.",
"title": ""
},
{
"docid": "95db9ce9faaf13e8ff8d5888a6737683",
"text": "Measurements of pH, acidity, and alkalinity are commonly used to describe water quality. The three variables are interrelated and can sometimes be confused. The pH of water is an intensity factor, while the acidity and alkalinity of water are capacity factors. More precisely, acidity and alkalinity are defined as a water’s capacity to neutralize strong bases or acids, respectively. The term “acidic” for pH values below 7 does not imply that the water has no alkalinity; likewise, the term “alkaline” for pH values above 7 does not imply that the water has no acidity. Water with a pH value between 4.5 and 8.3 has both total acidity and total alkalinity. The definition of pH, which is based on logarithmic transformation of the hydrogen ion concentration ([H+]), has caused considerable disagreement regarding the appropriate method of describing average pH. The opinion that pH values must be transformed to [H+] values before averaging appears to be based on the concept of mixing solutions of different pH. In practice, however, the averaging of [H+] values will not provide the correct average pH because buffers present in natural waters have a greater effect on final pH than does dilution alone. For nearly all uses of pH in fisheries and aquaculture, pH values may be averaged directly. When pH data sets are transformed to [H+] to estimate average pH, extreme pH values will distort the average pH. Values of pH conform more closely to a normal distribution than do values of [H+], making the pH values more acceptable for use in statistical analysis. Moreover, electrochemical measurements of pH and many biological responses to [H+] are described by the Nernst equation, which states that the measured or observed response is linearly related to 10-fold changes in [H+]. Based on these considerations, pH rather than [H+] is usually the most appropriate variable for use in statistical analysis. *Corresponding author: [email protected] Received November 2, 2010; accepted February 7, 2011 Published online September 27, 2011 Temperature, salinity, hardness, pH, acidity, and alkalinity are fundamental variables that define the quality of water. Although all six variables have precise, unambiguous definitions, the last three variables are often misinterpreted in aquaculture and fisheries studies. In this paper, we explain the concepts of pH, acidity, and alkalinity, and we discuss practical relationships among those variables. We also discuss the concept of pH averaging as an expression of the central tendency of pH measurements. The concept of pH averaging is poorly understood, if not controversial, because many believe that pH values, which are log-transformed numbers, cannot be averaged directly. We argue that direct averaging of pH values is the simplest and most logical approach for most uses and that direct averaging is based on sound practical and statistical principles. THE pH CONCEPT The pH is an index of the hydrogen ion concentration ([H+]) in water. The [H+] affects most chemical and biological processes; thus, pH is an important variable in water quality endeavors. Water temperature probably is the only water quality variable that is measured more commonly than pH. The pH concept has its basis in the ionization of water:",
"title": ""
},
{
"docid": "99bac31f4d0df12cf25f081c96d9a81a",
"text": "Residual networks, which use a residual unit to supplement the identity mappings, enable very deep convolutional architecture to operate well, however, the residual architecture has been proved to be diverse and redundant, which may leads to low-efficient modeling. In this work, we propose a competitive squeeze-excitation (SE) mechanism for the residual network. Re-scaling the value for each channel in this structure will be determined by the residual and identity mappings jointly, and this design enables us to expand the meaning of channel relationship modeling in residual blocks. Modeling of the competition between residual and identity mappings cause the identity flow to control the complement of the residual feature maps for itself. Furthermore, we design a novel inner-imaging competitive SE block to shrink the consumption and re-image the global features of intermediate network structure, by using the inner-imaging mechanism, we can model the channel-wise relations with convolution in spatial. We carry out experiments on the CIFAR, SVHN, and ImageNet datasets, and the proposed method can challenge state-of-the-art results.",
"title": ""
},
{
"docid": "f0846b4e74110ed469704c4a24407cc6",
"text": "Presently, a very large number of public and private data sets are available from local governments. In most cases, they are not semantically interoperable and a huge human effort would be needed to create integrated ontologies and knowledge base for smart city. Smart City ontology is not yet standardized, and a lot of research work is needed to identify models that can easily support the data reconciliation, the management of the complexity, to allow the data reasoning. In this paper, a system for data ingestion and reconciliation of smart cities related aspects as road graph, services available on the roads, traffic sensors etc., is proposed. The system allows managing a big data volume of data coming from a variety of sources considering both static and dynamic data. These data are mapped to a smart-city ontology, called KM4City (Knowledge Model for City), and stored into an RDF-Store where they are available for applications via SPARQL queries to provide new services to the users via specific applications of public administration and enterprises. The paper presents the process adopted to produce the ontology and the big data architecture for the knowledge base feeding on the basis of open and private data, and the mechanisms adopted for the data verification, reconciliation and validation. Some examples about the possible usage of the coherent big data knowledge base produced are also offered and are accessible from the RDF-store and related services. The article also presented the work performed about reconciliation algorithms and their comparative assessment and selection. & 2014 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/3.0/).",
"title": ""
}
] |
scidocsrr
|
61290fc1aa8836e245109969b9aaec02
|
Review of the Impact of Vehicle-to-Grid Technologies on Distribution Systems and Utility Interfaces
|
[
{
"docid": "1cef757143fc21e712f47b29ee72dfe8",
"text": "Large-scale sustainable energy systems will be necessary for substantial reduction of CO2. However, large-scale implementation faces two major problems: (1) we must replace oil in the transportation sector, and (2) since today’s inexpensive and abundant renewable energy resources have fluctuating output, to increase the fraction of electricity from them, we must learn to maintain a balance between demand and supply. Plug-in electric vehicles (EVs) could reduce or eliminate oil for the light vehicle fleet. Adding ‘‘vehicle-to-grid’’ (V2G) technology to EVs can provide storage, matching the time of generation to time of load. Two national energy systems are modelled, one for Denmark, including combined heat and power (CHP) and the other a similarly sized country without CHP (the latter being more typical of other industrialized countries). The model (EnergyPLAN) integrates energy for electricity, transport and heat, includes hourly fluctuations in human needs and the environment (wind resource and weather-driven need for heat). Four types of vehicle fleets are modelled, under levels of wind penetration varying from 0% to 100%. EVs were assumed to have high power (10 kW) connections, which provide important flexibility in time and duration of charging. We find that adding EVs and V2G to these national energy systems allows integration of much higher levels of wind electricity without excess electric production, and also greatly reduces national CO2 emissions. & 2008 Published by Elsevier Ltd. 67",
"title": ""
}
] |
[
{
"docid": "f7b369690fa93420baa7bb43aa75ffec",
"text": "Total Quality Management (TQM) and Kaizena continuous change toward betterment are two fundamental concepts directly dealing with continuous improvement of quality of processes and performance of an organization to achieve positive transformation in mindset and action of employees and management. For clear understanding and to get maximum benefit from both of these concepts, as such it becomes mandatory to precisely differentiate between TQM and Kaizen. TQM features primarily focus on customer’s satisfaction through improvement of quality. It is both a top down and bottom up approach whereas kaizen is processes focused and a bottom up approach of small incremental changes. Implementation of TQM is more costly as compared to Kaizen. Through kaizen, improvements are made using organization’s available resources. For the effective implementation of kaizen, the culture of the organization must be supportive and the result of continuous improvement should be communicated to the whole organization for motivation of all employees and for the success of continuous improvement program in the organization. This paper focuses on analyzing the minute differences between TQM and Kaizen. It also discusses the different tools and techniques under the umbrella of kaizen and TQM Philosophy. This paper will elucidate the differences in both these concepts as far as their inherent characteristics and practical implementations are concerned. In spite of differences in methodology, focus and scale of operation in both the concept, it can be simply concluded that Kaizen is one of the Technique of the T QM for continuous improvement of quality, process and performance of the organization. [Muhammad Saleem, Nawar Khan, Shafqat Hameed, M Abbas Ch. An Analysis of Relationship between Total Quality Management and Kaizen. Life Science Journal. 2012;9(3):31-40] (ISSN:1097-8135). http://www.lifesciencesite.com. 5 Key Worlds: Total Quality Management, Kaizen Technique, Continuous Improvement (CI), Tools & Techniques",
"title": ""
},
{
"docid": "ad2efda03f2657ff73cac8cb992eba8e",
"text": "This paper investigates the effects of grounding the p-type gate-oxide protection layer called bottom p-well (BPW) of a trench-gate SiC-MOSFET on the short-circuit ruggedness of the device. The BPW is grounded by forming ground contacts in various cell layouts, and the layout of the contact cells is found to be a significant factor that determines the short-circuit safe operation area (SCSOA) of a device. By grounding the BPW in an optimized cell layout, an SCSOA of over 10 μs is obtained at room temperature. Further investigation revealed that minimizing the distance between the ground contacts for the BPW is a key to developing a highly-robust, high-performance power device.",
"title": ""
},
{
"docid": "8694f84e4e2bd7da1e678a3b38ccd447",
"text": "This paper describes a general methodology for extracting attribute-value pairs from web pages. It consists of two phases: candidate generation, in which syntactically likely attribute-value pairs are annotated; and candidate filtering, in which semantically improbable annotations are removed. We describe three types of candidate generators and two types of candidate filters, all of which are designed to be massively parallelizable. Our methods can handle 1 billion web pages in less than 6 hours with 1,000 machines. The best generator and filter combination achieves 70% F-measure compared to a hand-annotated corpus.",
"title": ""
},
{
"docid": "bbd64fe2f05e53ca14ad1623fe51cd1c",
"text": "Virtual assistants are the cutting edge of end user interaction, thanks to endless set of capabilities across multiple services. The natural language techniques thus need to be evolved to match the level of power and sophistication that users expect from virtual assistants. In this report we investigate an existing deep learning model for semantic parsing, and we apply it to the problem of converting natural language to trigger-action programs for the Almond virtual assistant. We implement a one layer seq2seq model with attention layer, and experiment with grammar constraints and different RNN cells. We take advantage of its existing dataset and we experiment with different ways to extend the training set. Our parser shows mixed results on the different Almond test sets, performing better than the state of the art on synthetic benchmarks by about 10% but poorer on realistic user data by about 15%. Furthermore, our parser is shown to be extensible to generalization, as well as or better than the current system employed by Almond.",
"title": ""
},
{
"docid": "8955c715c0341057b471eeed90c9c82d",
"text": "The letter presents an exact small-signal discrete-time model for digitally controlled pulsewidth modulated (PWM) dc-dc converters operating in constant frequency continuous conduction mode (CCM) with a single effective A/D sampling instant per switching period. The model, which is based on well-known approaches to discrete-time modeling and the standard Z-transform, takes into account sampling, modulator effects and delays in the control loop, and is well suited for direct digital design of digital compensators. The letter presents general results valid for any CCM converter with leading or trailing edge PWM. Specific examples, including approximate closed-form expressions for control-to-output transfer functions are given for buck and boost converters. The model is verified in simulation using an independent system identification approach.",
"title": ""
},
{
"docid": "5c4f313482543223306be014cff0cc2e",
"text": "Transformer inrush currents are high-magnitude, harmonic rich currents generated when transformer cores are driven into saturation during energization. These currents have undesirable effects, including potential damage or loss-of-life of transformer, protective relay miss operation, and reduced power quality on the system. This paper explores the theoretical explanations of inrush currents and explores different factors that have influences on the shape and magnitude of those inrush currents. PSCAD/EMTDC is used to investigate inrush currents phenomena by modeling a practical power system circuit for single phase transformer",
"title": ""
},
{
"docid": "c7c103a48a80ffee561a120913855758",
"text": "We study parameter estimation in Nonlinear Factor Analysis (NFA) where the generative model is parameterized by a deep neural network. Recent work has focused on learning such models using inference (or recognition) networks; we identify a crucial problem when modeling large, sparse, highdimensional datasets – underfitting. We study the extent of underfitting, highlighting that its severity increases with the sparsity of the data. We propose methods to tackle it via iterative optimization inspired by stochastic variational inference (Hoffman et al. , 2013) and improvements in the sparse data representation used for inference. The proposed techniques drastically improve the ability of these powerful models to fit sparse data, achieving state-of-the-art results on a benchmark textcount dataset and excellent results on the task of top-N recommendation.",
"title": ""
},
{
"docid": "59639429e45dc75e0b8db773d112f994",
"text": "Vector modulators are a key component in phased array antennas and communications systems. The paper describes a novel design methodology for a bi-directional, reflection-type balanced vector modulator using metal-oxide-semiconductor field-effect (MOS) transistors as active loads, which provides an improved constellation quality. The fabricated IC occupies 787 × 1325 μm2 and exhibits a minimum transmission loss of 9 dB and return losses better than 14 dB. As an application example, its use in a 16-QAM modulator is verified.",
"title": ""
},
{
"docid": "256b22fd89c0f7311e043efd2dd142f9",
"text": "Suicide rates are higher in later life than in any other age group. The design of effective suicide prevention strategies hinges on the identification of specific, quantifiable risk factors. Methodological challenges include the lack of systematically applied terminology in suicide and risk factor research, the low base rate of suicide, and its complex, multidetermined nature. Although variables in mental, physical, and social domains have been correlated with completed suicide in older adults, controlled studies are necessary to test hypothesized risk factors. Prospective cohort and retrospective case control studies indicate that affective disorder is a powerful independent risk factor for suicide in elders. Other mental illnesses play less of a role. Physical illness and functional impairment increase risk, but their influence appears to be mediated by depression. Social ties and their disruption are significantly and independently associated with risk for suicide in later life, relationships between which may be moderated by a rigid, anxious, and obsessional personality style. Affective illness is a highly potent risk factor for suicide in later life with clear implications for the design of prevention strategies. Additional research is needed to define more precisely the interactions between emotional, physical, and social factors that determine risk for suicide in the older adult.",
"title": ""
},
{
"docid": "924e10782437c323b8421b156db50584",
"text": "Ontology Learning greatly facilitates the construction of ontologies by the ontology engineer. The notion of ontology learning that we propose here includes a number of complementary disciplines that feed on different types of unstructured and semi-structured data in order to support a semi-automatic, cooperative ontology engineering process. Our ontology learning framework proceeds through ontology import, extraction, pruning, and refinement, giving the ontology engineer a wealth of coordinated tools for ontology modelling. Besides of the general architecture, we show in this paper some exemplary techniques in the ontology learning cycle that we have implemented in our ontology learning environment, KAON Text-To-Onto.",
"title": ""
},
{
"docid": "dd9f6ef9eafdef8b29c566bcea8ded57",
"text": "A recent trend in saliency algorithm development is large-scale benchmarking and algorithm ranking with ground truth provided by datasets of human fixations. In order to accommodate the strong bias humans have toward central fixations, it is common to replace traditional ROC metrics with a shuffled ROC metric which uses randomly sampled fixations from other images in the database as the negative set. However, the shuffled ROC introduces a number of problematic elements, including a fundamental assumption that it is possible to separate visual salience and image spatial arrangement. We argue that it is more informative to directly measure the effect of spatial bias on algorithm performance rather than try to correct for it. To capture and quantify these known sources of bias, we propose a novel metric for measuring saliency algorithm performance: the spatially binned ROC (spROC). This metric provides direct in-sight into the spatial biases of a saliency algorithm without sacrificing the intuitive raw performance evaluation of traditional ROC measurements. By quantitatively measuring the bias in saliency algorithms, researchers will be better equipped to select and optimize the most appropriate algorithm for a given task. We use a baseline measure of inherent algorithm bias to show that Adaptive Whitening Saliency (AWS) [14], Attention by Information Maximization (AIM) [8], and Dynamic Visual Attention (DVA) [20] provide the least spatially biased results, suiting them for tasks in which there is no information about the underlying spatial bias of the stimuli, whereas algorithms such as Graph Based Visual Saliency (GBVS) [18] and Context-Aware Saliency (CAS) [15] have a significant inherent central bias.",
"title": ""
},
{
"docid": "7178130e1a69bb93c4dc6b90b2c98bb2",
"text": "Leprosy is caused by Mycobacterium leprae bacillus and despite recommendation of multidrug therapy by World Health Organisation in 1981 and eradication programme in various countries; disease prevails and new cases added annually. Variable clinical presentation ranges from limited tuberculoid to widespread lepromatous leprosy. The neuritic presentation varies from mononeuropathy to mononeuropathy multiplex. The disease commonly affects the ulnar, radial in upper and common peroneal, posterior tibial in lower extremity. The neuritic leprosy is easily suspected when there is hypoanesthetic skin lesion with thickened and tender nerve. Involvement of uncommon nerve and pure neuritic presentation, a rare form of leprosy in which skin is spared often leads to diagnostic challenge. Biopsy is not needed to initiate treatment but sometimes required to rule our other diseases. We report a rare case of isolated thickening of greater auricular nerve and diagnostic dilemma encountered in the era of evidence-based medicine.",
"title": ""
},
{
"docid": "babe85fa78ea1f4ce46eb0cfd77ae2b8",
"text": "x + a1x + · · ·+ an = 0. On s’interesse surtout à la résolution “par radicaux”, c’est-à-dire à la résolution qui n’utilise que des racines m √ a. Il est bien connu depuis le 16 siècle que l’on peut résoudre par radicaux des équations de degré n ≤ 4. Par contre, selon un résultat célèbre d’Abel, l’équation générale de degré n ≥ 5 n’est pas résoluble par radicaux. L’idée principale de la théorie de Galois est d’associer à chaque équation son groupe de symétrie. Cette construction permet de traduire des propriétés de l’équation (telles que la résolubilité par radicaux) aux propriétés du groupe associé. Le cours ne suivra pas le chemin historique. L’ouvrage [Ti 1, 2] est une référence agréable pour l’histoire du sujet.",
"title": ""
},
{
"docid": "7643347a62e8835b5cc4b1b432f504c1",
"text": "Simulation systems have become an essential component in the development and validation of autonomous driving technologies. The prevailing state-of-the-art approach for simulation is to use game engines or high-fidelity computer graphics (CG) models to create driving scenarios. However, creating CG models and vehicle movements (e.g., the assets for simulation) remains a manual task that can be costly and time-consuming. In addition, the fidelity of CG images still lacks the richness and authenticity of real-world images and using these images for training leads to degraded performance. In this paper we present a novel approach to address these issues: Augmented Autonomous Driving Simulation (AADS). Our formulation augments real-world pictures with a simulated traffic flow to create photo-realistic simulation images and renderings. More specifically, we use LiDAR and cameras to scan street scenes. From the acquired trajectory data, we generate highly plausible traffic flows for cars and pedestrians and compose them into the background. The composite images can be re-synthesized with different viewpoints and sensor models (camera or LiDAR). The resulting images are photo-realistic, fully annotated, and ready for end-to-end training and testing of autonomous driving systems from perception to planning. We explain our system design and validate our algorithms with a number of autonomous driving tasks from detection to segmentation and predictions. Compared to traditional approaches, our method offers unmatched scalability and realism. Scalability is particularly important for AD simulation and we believe the complexity and diversity of the real world cannot be realistically captured in a virtual environment. Our augmented approach combines the flexibility in a virtual environment (e.g., vehicle movements) with the richness of the real world to allow effective simulation of anywhere on earth.",
"title": ""
},
{
"docid": "56e47efe6efdb7819c6a2e87e8fbb56e",
"text": "Recent investigations of Field Programmable Gate Array (FPGA)-based time-to-digital converters (TDCs) have predominantly focused on improving the time resolution of the device. However, the monolithic integration of multi-channel TDCs and the achievement of high measurement throughput remain challenging issues for certain applications. In this paper, the potential of the resources provided by the Kintex-7 Xilinx FPGA is fully explored, and a new design is proposed for the implementation of a high performance multi-channel TDC system on this FPGA. Using the tapped-delay-line wave union TDC architecture, in which a negative pulse is triggered by the hit signal propagating along the carry chain, two time measurements are performed in a single carry chain within one clock cycle. The differential non-linearity and time resolution can be significantly improved by realigning the bins. The on-line calibration and on-line updating of the calibration table reduce the influence of variations of environmental conditions. The logic resources of the 6-input look-up tables in the FPGA are employed for hit signal edge detection and bubble-proof encoding, thereby allowing the TDC system to operate at the maximum allowable clock rate of the FPGA and to achieve the maximum possible measurement throughput. This resource-efficient design, in combination with a modular implementation, makes the integration of multiple channels in one FPGA practicable. Using our design, a 128-channel TDC with a dead time of 1.47 ns, a dynamic range of 360 ns, and a root-mean-square resolution of less than 10 ps was implemented in a single Kintex-7 device.",
"title": ""
},
{
"docid": "397f6c39825a5d8d256e0cc2fbba5d15",
"text": "This paper presents a video-based motion modeling technique for capturing physically realistic human motion from monocular video sequences. We formulate the video-based motion modeling process in an image-based keyframe animation framework. The system first computes camera parameters, human skeletal size, and a small number of 3D key poses from video and then uses 2D image measurements at intermediate frames to automatically calculate the \"in between\" poses. During reconstruction, we leverage Newtonian physics, contact constraints, and 2D image measurements to simultaneously reconstruct full-body poses, joint torques, and contact forces. We have demonstrated the power and effectiveness of our system by generating a wide variety of physically realistic human actions from uncalibrated monocular video sequences such as sports video footage.",
"title": ""
},
{
"docid": "38b5917f30f33c55d3af42022dcb28d7",
"text": "We present a new algorithm that significantly improves the efficiency of exploration for deep Q-learning agents in dialogue systems. Our agents explore via Thompson sampling, drawing Monte Carlo samples from a Bayes-by-Backprop neural network. Our algorithm learns much faster than common exploration strategies such as -greedy, Boltzmann, bootstrapping, and intrinsic-reward-based ones. Additionally, we show that spiking the replay buffer with experiences from just a few successful episodes can make Q-learning feasible when it might otherwise fail.",
"title": ""
},
{
"docid": "2e2a21ca1be2da2d30b1b2a92cd49628",
"text": "A new form of cloud computing, serverless computing, is drawing attention as a new way to design micro-services architectures. In a serverless computing environment, services are developed as service functional units. The function development environment of all serverless computing framework at present is CPU based. In this paper, we propose a GPU-supported serverless computing framework that can deploy services faster than existing serverless computing framework using CPU. Our core approach is to integrate the open source serverless computing framework with NVIDIA-Docker and deploy services based on the GPU support container. We have developed an API that connects the open source framework to the NVIDIA-Docker and commands that enable GPU programming. In our experiments, we measured the performance of the framework in various environments. As a result, developers who want to develop services through the framework can deploy high-performance micro services and developers who want to run deep learning programs without a GPU environment can run code on remote GPUs with little performance degradation.",
"title": ""
},
{
"docid": "4c1c72fde3bbe25f6ff3c873a87b86ba",
"text": "The purpose of this study was to translate the Foot Function Index (FFI) into Italian, to perform a cross-cultural adaptation and to evaluate the psychometric properties of the Italian version of FFI. The Italian FFI was developed according to the recommended forward/backward translation protocol and evaluated in patients with foot and ankle diseases. Feasibility, reliability [intraclass correlation coefficient (ICC)], internal consistency [Cronbach’s alpha (CA)], construct validity (correlation with the SF-36 and a visual analogue scale (VAS) assessing for pain), responsiveness to surgery were assessed. The standardized effect size and standardized response mean were also evaluated. A total of 89 patients were recruited (mean age 51.8 ± 13.9 years, range 21–83). The Italian version of the FFI consisted in 18 items separated into a pain and disability subscales. CA value was 0.95 for both the subscales. The reproducibility was good with an ICC of 0.94 and 0.91 for pain and disability subscales, respectively. A strong correlation was found between the FFI and the scales of the SF-36 and the VAS with related content, particularly in the areas of physical function and pain was observed indicating good construct validity. After surgery, the mean FFI improved from 55.9 ± 24.8 to 32.4 ± 26.3 for the pain subscale and from 48.8 ± 28.8 to 24.9 ± 23.7 for the disability subscale (P < 0.01). The Italian version of the FFI showed satisfactory psychometric properties in Italian patients with foot and ankle diseases. Further testing in different and larger samples is required in order to ensure the validity and reliability of this score.",
"title": ""
}
] |
scidocsrr
|
4da19684f8282cca31c25868fefacab5
|
TripPlanner: Personalized Trip Planning Leveraging Heterogeneous Crowdsourced Digital Footprints
|
[
{
"docid": "71e9bb057e90f754f658c736e4f02b7a",
"text": "When tourists visit a city or region, they cannot visit every point of interest available, as they are constrained in time and budget. Tourist recommender applications help tourists by presenting a personal selection. Providing adequate tour scheduling support for these kinds of applications is a daunting task for the application developer. The objective of this paper is to demonstrate how existing models from the field of Operations Research (OR) fit this scheduling problem, and enable a wide range of tourist trip planning functionalities. Using the Orienteering Problem (OP) and its extensions to model the tourist trip planning problem, allows to deal with a vast number of practical planning problems.",
"title": ""
}
] |
[
{
"docid": "fff21e37244f5c097dc9e8935bb92939",
"text": "For the purpose of enhancing the search ability of the cuckoo search (CS) algorithm, an improved robust approach, called HS/CS, is put forward to address the optimization problems. In HS/CS method, the pitch adjustment operation in harmony search (HS) that can be considered as a mutation operator is added to the process of the cuckoo updating so as to speed up convergence. Several benchmarks are applied to verify the proposed method and it is demonstrated that, in most cases, HS/CS performs better than the standard CS and other comparative methods. The parameters used in HS/CS are also investigated by various simulations.",
"title": ""
},
{
"docid": "387c2b51fcac3c4f822ae337cf2d3f8d",
"text": "This paper directly follows and extends, where a novel method for measurement of extreme impedances is described theoretically. In this paper experiments proving that the method can significantly improve stability of a measurement system are described. Using Agilent PNA E8364A vector network analyzer (VNA) the method is able to measure reflection coefficient with stability improved 36-times in magnitude and 354-times in phase compared to the classical method of reflection coefficient measurement. Further, validity of the error model and related equations stated in are verified by real measurement of SMD resistors (size 0603) in microwave test fixture. Values of the measured SMD resistors range from 12 kOmega up to 330 kOmega. A novel calibration technique using three different resistors as calibration standards is used. The measured values of impedances reasonably agree with assumed values.",
"title": ""
},
{
"docid": "6057638a2a1cfd07ab2e691baf93a468",
"text": "Cybersecurity in smart grids is of critical importance given the heavy reliance of modern societies on electricity and the recent cyberattacks that resulted in blackouts. The evolution of the legacy electric grid to a smarter grid holds great promises but also comes up with an increasesd attack surface. In this article, we review state of the art developments in cybersecurity for smart grids, both from a standardization as well technical perspective. This work shows the important areas of future research for academia, and collaboration with government and industry stakeholders to enhance smart grid cybersecurity and make this new paradigm not only beneficial and valuable but also safe and secure.",
"title": ""
},
{
"docid": "305f877227516eded75819bdf48ab26d",
"text": "Deep generative models have been successfully applied to many applications. However, existing works experience limitations when generating large images (the literature usually generates small images, e.g. 32× 32 or 128× 128). In this paper, we propose a novel scheme, called deep tensor adversarial generative nets (TGAN), that generates large high-quality images by exploring tensor structures. Essentially, the adversarial process of TGAN takes place in a tensor space. First, we impose tensor structures for concise image representation, which is superior in capturing the pixel proximity information and the spatial patterns of elementary objects in images, over the vectorization preprocess in existing works. Secondly, we propose TGAN that integrates deep convolutional generative adversarial networks and tensor super-resolution in a cascading manner, to generate high-quality images from random distributions. More specifically, we design a tensor super-resolution process that consists of tensor dictionary learning and tensor coefficients learning. Finally, on three datasets, the proposed TGAN generates images with more realistic textures, compared with state-of-the-art adversarial autoencoders. The size of the generated images is increased by over 8.5 times, namely 374× 374 in PASCAL2.",
"title": ""
},
{
"docid": "dac5cebcbc14b82f7b8df977bed0c9d8",
"text": "While blockchain services hold great promise to improve many different industries, there are significant cybersecurity concerns which must be addressed. In this paper, we investigate security considerations for an Ethereum blockchain hosting a distributed energy management application. We have simulated a microgrid with ten buildings in the northeast U.S., and results of the transaction distribution and electricity utilization are presented. We also present the effects on energy distribution when one or two smart meters have their identities corrupted. We then propose a new approach to digital identity management that would require smart meters to authenticate with the blockchain ledger and mitigate identity-spoofing attacks. Applications of this approach to defense against port scans and DDoS, attacks are also discussed.",
"title": ""
},
{
"docid": "6e80065ade40ada9efde1f58859498bc",
"text": "Neural networks, as powerful tools for data mining and knowledge engineering, can learn from data to build feature-based classifiers and nonlinear predictive models. Training neural networks involves the optimization of nonconvex objective functions, and usually, the learning process is costly and infeasible for applications associated with data streams. A possible, albeit counterintuitive, alternative is to randomly assign a subset of the networks’ weights so that the resulting optimization task can be formulated as a linear least-squares problem. This methodology can be applied to both feedforward and recurrent networks, and similar techniques can be used to approximate kernel functions. Many experimental results indicate that such randomized models can reach sound performance compared to fully adaptable ones, with a number of favorable benefits, including (1) simplicity of implementation, (2) faster learning with less intervention from human beings, and (3) possibility of leveraging overall linear regression and classification algorithms (e.g., l1 norm minimization for obtaining sparse formulations). This class of neural networks attractive and valuable to the data mining community, particularly for handling large scale data mining in real-time. However, the literature in the field is extremely vast and fragmented, with many results being reintroduced multiple times under different names. This overview aims to provide a self-contained, uniform introduction to the different ways in which randomization can be applied to the design of neural networks and kernel functions. A clear exposition of the basic framework underlying all these approaches helps to clarify innovative lines of research, open problems, and most importantly, foster the exchanges of well-known results throughout different communities. © 2017 John Wiley & Sons, Ltd",
"title": ""
},
{
"docid": "c4df97f3db23c91f0ce02411d2e1e999",
"text": "One important challenge for probabilistic logics is reasoning with very large knowledge bases (KBs) of imperfect information, such as those produced by modern web-scale information extraction systems. One scalability problem shared by many probabilistic logics is that answering queries involves “grounding” the query—i.e., mapping it to a propositional representation—and the size of a “grounding” grows with database size. To address this bottleneck, we present a first-order probabilistic language called ProPPR in which approximate “local groundings” can be constructed in time independent of database size. Technically, ProPPR is an extension to stochastic logic programs that is biased towards short derivations; it is also closely related to an earlier relational learning algorithm called the path ranking algorithm. We show that the problem of constructing proofs for this logic is related to computation of personalized PageRank on a linearized version of the proof space, and based on this connection, we develop a provably-correct approximate grounding scheme, based on the PageRank–Nibble algorithm. Building on this, we develop a fast and easily-parallelized weight-learning algorithm for ProPPR. In our experiments, we show that learning for ProPPR is orders of magnitude faster than learning for Markov logic networks; that allowing mutual recursion (joint learning) in KB inference leads to improvements in performance; and that ProPPR can learn weights for a mutually recursive program with hundreds of clauses defining scores of interrelated predicates over a KB containing one million entities.",
"title": ""
},
{
"docid": "0e2b885774f69342ade2b9ad1bc84835",
"text": "History repeatedly demonstrates that rural communities have unique technological needs. Yet, we know little about how rural communities use modern technologies, so we lack knowledge on how to design for them. To address this gap, our empirical paper investigates behavioral differences between more than 3,000 rural and urban social media users. Using a dataset collected from a broadly popular social network site, we analyze users' profiles, 340,000 online friendships and 200,000 interpersonal messages. Using social capital theory, we predict differences between rural and urban users and find strong evidence supporting our hypotheses. Namely, rural people articulate far fewer friends online, and those friends live much closer to home. Our results also indicate that the groups have substantially different gender distributions and use privacy features differently. We conclude by discussing design implications drawn from our findings; most importantly, designers should reconsider the binary friend-or-not model to allow for incremental trust-building.",
"title": ""
},
{
"docid": "44928aa4c5b294d1b8f24eaab14e9ce7",
"text": "Most exact algorithms for solving partially observable Markov decision processes (POMDPs) are based on a form of dynamic programming in which a piecewise-linear and convex representation of the value function is updated at every iteration to more accurately approximate the true value function. However, the process is computationally expensive, thus limiting the practical application of POMDPs in planning. To address this current limitation, we present a parallel distributed algorithm based on the Restricted Region method proposed by Cassandra, Littman and Zhang [1]. We compare performance of the parallel algorithm against a serial implementation Restricted Region.",
"title": ""
},
{
"docid": "d83031118ea8c9bcdfc6df0d26b87e15",
"text": "Camera-based motion tracking has become a popular enabling technology for gestural human-computer interaction. However, the approach suffers from several limitations, which have been shown to be particularly problematic when employed within musical contexts. This paper presents Leimu, a wrist mount that couples a Leap Motion optical sensor with an inertial measurement unit to combine the benefits of wearable and camera-based motion tracking. Leimu is designed, developed and then evaluated using discourse and statistical analysis methods. Qualitative results indicate that users consider Leimu to be an effective interface for gestural music interaction and the quantitative results demonstrate that the interface offers improved tracking precision over a Leap Motion positioned on a table top.",
"title": ""
},
{
"docid": "8e3bf062119c6de9fa5670ce4b00764b",
"text": "Heating red phosphorus in sealed ampoules in the presence of a Sn/SnI4 catalyst mixture has provided bulk black phosphorus at much lower pressures than those required for allotropic conversion by anvil cells. Herein we report the growth of ultra-long 1D red phosphorus nanowires (>1 mm) selectively onto a wafer substrate from red phosphorus powder and a thin film of red phosphorus in the present of a Sn/SnI4 catalyst. Raman spectra and X-ray diffraction characterization suggested the formation of crystalline red phosphorus nanowires. FET devices constructed with the red phosphorus nanowires displayed a typical I-V curve similar to that of black phosphorus and a similar mobility reaching 300 cm(2) V(-1) s with an Ion /Ioff ratio approaching 10(2) . A significant response to infrared light was observed from the FET device.",
"title": ""
},
{
"docid": "914b38c4a5911a481bf9088f75adef30",
"text": "This paper presents a mixed-integer LP approach to the solution of the long-term transmission expansion planning problem. In general, this problem is large-scale, mixed-integer, nonlinear, and nonconvex. We derive a mixed-integer linear formulation that considers losses and guarantees convergence to optimality using existing optimization software. The proposed model is applied to Garver’s 6-bus system, the IEEE Reliability Test System, and a realistic Brazilian system. Simulation results show the accuracy as well as the efficiency of the proposed solution technique.",
"title": ""
},
{
"docid": "de9ed927d395f78459e84b1c27f9c746",
"text": "JuMP is an open-source modeling language that allows users to express a wide range of optimization problems (linear, mixed-integer, quadratic, conic-quadratic, semidefinite, and nonlinear) in a high-level, algebraic syntax. JuMP takes advantage of advanced features of the Julia programming language to offer unique functionality while achieving performance on par with commercial modeling tools for standard tasks. In this work we will provide benchmarks, present the novel aspects of the implementation, and discuss how JuMP can be extended to new problem classes and composed with state-of-the-art tools for visualization and interactivity.",
"title": ""
},
{
"docid": "2488c17b39dd3904e2f17448a8519817",
"text": "Young healthy participants spontaneously use different strategies in a virtual radial maze, an adaptation of a task typically used with rodents. Functional magnetic resonance imaging confirmed previously that people who used spatial memory strategies showed increased activity in the hippocampus, whereas response strategies were associated with activity in the caudate nucleus. Here, voxel based morphometry was used to identify brain regions covarying with the navigational strategies used by individuals. Results showed that spatial learners had significantly more gray matter in the hippocampus and less gray matter in the caudate nucleus compared with response learners. Furthermore, the gray matter in the hippocampus was negatively correlated to the gray matter in the caudate nucleus, suggesting a competitive interaction between these two brain areas. In a second analysis, the gray matter of regions known to be anatomically connected to the hippocampus, such as the amygdala, parahippocampal, perirhinal, entorhinal and orbitofrontal cortices were shown to covary with gray matter in the hippocampus. Because low gray matter in the hippocampus is a risk factor for Alzheimer's disease, these results have important implications for intervention programs that aim at functional recovery in these brain areas. In addition, these data suggest that spatial strategies may provide protective effects against degeneration of the hippocampus that occurs with normal aging.",
"title": ""
},
{
"docid": "fdc4d23fa336ca122fdfb12818901180",
"text": "Concept of communication systems, which use smart antennas is based on digital signal processing algorithms. Thus, the smart antennas system becomes capable to locate and track signals by the both: users and interferers and dynamically adapts the antenna pattern to enhance the reception in Signal-Of-Interest direction and minimizing interference in Signal-Of-Not-Interest direction. Hence, Space Division Multiple Access system, which uses smart antennas, is being used more often in wireless communications, because it shows improvement in channel capacity and co-channel interference. However, performance of smart antenna system greatly depends on efficiency of digital signal processing algorithms. The algorithm uses the Direction of Arrival (DOA) algorithms to estimate the number of incidents plane waves on the antenna array and their angle of incidence. This paper investigates performance of the DOA algorithms like MUSIC, ESPRIT and ROOT MUSIC on the uniform linear array in the presence of white noise. The simulation results show that MUSIC algorithm is the best. The resolution of the DOA techniques improves as number of snapshots, number of array elements and signalto-noise ratio increases.",
"title": ""
},
{
"docid": "1ab4f605d67dabd3b2815a39b6123aa4",
"text": "This paper examines and provides the theoretical evidence of the feasibility of 60 GHz mmWave in wireless body area networks (WBANs), by analyzing its properties. It has been shown that 60 GHz based communication could better fit WBANs compared to traditional 2.4 GHz based communication because of its compact network coverage, miniaturized devices, superior frequency reuse, multi-gigabyte transmission rate and the therapeutic merits for human health. Since allowing coexistence among the WBANs can enhance the efficiency of the mmWave based WBANs, we formulated the coexistence problem as a non-cooperative distributed power control game. This paper proves the existence of Nash equilibrium (NE) and derives the best response move as a solution. The efficiency of the NE is also improved by modifying the utility function and introducing a pair of pricing factors. Our simulation results indicate that the proposed pricing policy significantly improves the efficiency in terms of Pareto optimality and social optimality.",
"title": ""
},
{
"docid": "e38cbee5c03319d15086e9c39f7f8520",
"text": "In this paper we describe COLIN, a forward-chaining heuristic search planner, capable of reasoning with COntinuous LINear numeric change, in addition to the full temporal semantics of PDDL2.1. Through this work we make two advances to the state-of-the-art in terms of expressive reasoning capabilities of planners: the handling of continuous linear change, and the handling of duration-dependent effects in combination with duration inequalities, both of which require tightly coupled temporal and numeric reasoning during planning. COLIN combines FF-style forward chaining search, with the use of a Linear Program (LP) to check the consistency of the interacting temporal and numeric constraints at each state. The LP is used to compute bounds on the values of variables in each state, reducing the range of actions that need to be considered for application. In addition, we develop an extension of the Temporal Relaxed Planning Graph heuristic of CRIKEY3, to support reasoning directly with continuous change. We extend the range of task variables considered to be suitable candidates for specifying the gradient of the continuous numeric change effected by an action. Finally, we explore the potential for employing mixed integer programming as a tool for optimising the timestamps of the actions in the plan, once a solution has been found. To support this, we further contribute a selection of extended benchmark domains that include continuous numeric effects. We present results for COLIN that demonstrate its scalability on a range of benchmarks, and compare to existing state-of-the-art planners.",
"title": ""
},
{
"docid": "16a8fc39efe95c05a25deba4da6aa806",
"text": "Although effective treatments for obsessive-compulsive disorder (OCD) exist, there are significant barriers to receiving evidence-based care. Mobile health applications (Apps) offer a promising way of overcoming these barriers by increasing access to treatment. The current study investigated the feasibility, acceptability, and preliminary efficacy of LiveOCDFree, an App designed to help OCD patients conduct exposure and response prevention (ERP). Twenty-one participants with mild to moderate symptoms of OCD were enrolled in a 12-week open trial of App-guided self-help ERP. Self-report assessments of OCD, depression, anxiety, and quality of life were completed at baseline, mid-treatment, and post-treatment. App-guided ERP was a feasible and acceptable self-help intervention for individuals with OCD, with high rates of retention and satisfaction. Participants reported significant improvement in OCD and anxiety symptoms pre- to post-treatment. Findings suggest that LiveOCDFree is a feasible and acceptable self-help intervention for OCD. Preliminary efficacy results are encouraging and point to the potential utility of mobile Apps in expanding the reach of existing empirically supported treatments.",
"title": ""
},
{
"docid": "9b4c240bd55523360e92dbed26cb5dc2",
"text": "CBT has been seen as an alternative to the unmanageable population of undergraduate students in Nigerian universities. This notwithstanding, the peculiar nature of some courses hinders its total implementation. This study was conducted to investigate the students’ perception of CBT for undergraduate chemistry courses in University of Ilorin. To this end, it examined the potential for using student feedback in the validation of assessment. A convenience sample of 48 students who had taken test on CBT in chemistry was surveyed and questionnaire was used for data collection. Data analysis demonstrated an auspicious characteristics of the target context for the CBT implementation as majority (95.8%) of students said they were competent with the use of computers and 75% saying their computer anxiety was only mild or low but notwithstanding they have not fully accepted the testing mode with only 29.2% in favour of it, due to the impaired validity of the test administration which they reported as being many erroneous chemical formulas, equations and structures in the test items even though they have nonetheless identified the achieved success the testing has made such as immediate scoring, fastness and transparency in marking. As quality of designed items improves and sufficient time is allotted according to the test difficulty, the test experience will become favourable for students and subsequently CBT will gain its validation in this particular context.",
"title": ""
},
{
"docid": "b53f1a0b71fe5588541195d405b4a104",
"text": "We propose a neural machine-reading model that constructs dynamic knowledge graphs from procedural text. It builds these graphs recurrently for each step of the described procedure, and uses them to track the evolving states of participant entities. We harness and extend a recently proposed machine reading comprehension (MRC) model to query for entity states, since these states are generally communicated in spans of text and MRC models perform well in extracting entity-centric spans. The explicit, structured, and evolving knowledge graph representations that our model constructs can be used in downstream question answering tasks to improve machine comprehension of text, as we demonstrate empirically. On two comprehension tasks from the recently proposed PROPARA dataset (Dalvi et al., 2018), our model achieves state-of-the-art results. We further show that our model is competitive on the RECIPES dataset (Kiddon et al., 2015), suggesting it may be generally applicable. We present some evidence that the model’s knowledge graphs help it to impose commonsense constraints on its predictions.",
"title": ""
}
] |
scidocsrr
|
eb78f2f66c5e7e2e7817d8c15b672e06
|
A deep representation for depth images from synthetic data
|
[
{
"docid": "01534202e7db5d9059651290e1720bf0",
"text": "The objective of this paper is the effective transfer of the Convolutional Neural Network (CNN) feature in image search and classification. Systematically, we study three facts in CNN transfer. 1) We demonstrate the advantage of using images with a properly large size as input to CNN instead of the conventionally resized one. 2) We benchmark the performance of different CNN layers improved by average/max pooling on the feature maps. Our observation suggests that the Conv5 feature yields very competitive accuracy under such pooling step. 3) We find that the simple combination of pooled features extracted across variou s CNN layers is effective in collecting evidences from both low and high level descriptors. Following these good practices, we are capable of improving the state of the art on a number of benchmarks to a large margin.",
"title": ""
}
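The record above describes average/max pooling of CNN feature maps and concatenating the pooled descriptors from several layers. A minimal sketch of that pooling-and-concatenation step, assuming the feature maps have already been extracted; the array shapes and the per-layer L2 normalization are illustrative choices, not details taken from the paper:

```python
import numpy as np

def pool_and_concat(feature_maps, mode="avg"):
    """Pool each (C, H, W) feature map over its spatial dims and
    concatenate the per-layer descriptors into one vector."""
    pooled = []
    for fmap in feature_maps:              # e.g. conv4, conv5 outputs
        if mode == "avg":
            vec = fmap.mean(axis=(1, 2))   # average pooling -> (C,)
        else:
            vec = fmap.max(axis=(1, 2))    # max pooling -> (C,)
        vec = vec / (np.linalg.norm(vec) + 1e-12)  # L2-normalize per layer
        pooled.append(vec)
    return np.concatenate(pooled)          # combined multi-layer descriptor

# Hypothetical conv4/conv5 maps for one image (channels, height, width)
conv4 = np.random.rand(512, 28, 28)
conv5 = np.random.rand(512, 14, 14)
descriptor = pool_and_concat([conv4, conv5], mode="avg")
print(descriptor.shape)  # (1024,)
```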
] |
[
{
"docid": "b716af4916ac0e4a0bf0b040dccd352b",
"text": "Modeling visual attention-particularly stimulus-driven, saliency-based attention-has been a very active research area over the past 25 years. Many different models of attention are now available which, aside from lending theoretical contributions to other fields, have demonstrated successful applications in computer vision, mobile robotics, and cognitive systems. Here we review, from a computational perspective, the basic concepts of attention implemented in these models. We present a taxonomy of nearly 65 models, which provides a critical comparison of approaches, their capabilities, and shortcomings. In particular, 13 criteria derived from behavioral and computational studies are formulated for qualitative comparison of attention models. Furthermore, we address several challenging issues with models, including biological plausibility of the computations, correlation with eye movement datasets, bottom-up and top-down dissociation, and constructing meaningful performance measures. Finally, we highlight current research trends in attention modeling and provide insights for future.",
"title": ""
},
{
"docid": "020970e68281409d378e6682a780f54c",
"text": "Lung Carcinoma is a disease of uncontrolled growth of cancerous cells in the tissues of the lungs. The early detection of lung cancer is the key of its cure. Early diagnosis of the disease saves enormous lives, failing in which may lead to other severe problems causing sudden fatal death. In general, a measure for early stage diagnosis mainly includes X-rays, CT-images, MRI’s, etc. In this system first we would use some techniques that are essential for the task of medical image mining such as Data Preprocessing, Training and testing of samples, Classification using Backpropagation Neural Network which would classify the digital X-ray, CT-images, MRI’s, etc. as normal or abnormal. The normal state is the one that characterizes a healthy patient. The abnormal image will be further considered for the feature analysis. Further for optimized analysis of features Genetic Algorithm will be used that would extract as well as select features on the basis of the fitness of the features extracted. The selected features would be further classified as cancerous or noncancerous for the images classified as abnormal before. Hence this system will help to draw an appropriate decision about a particular patient’s state. Keywords—BackpopagationNeuralNetworks,Classification, Genetic Algorithm, Lung Cancer, Medical Image Mining.",
"title": ""
},
{
"docid": "9ec39badc92094783fcaaa28c2eb2f7a",
"text": "In trying to solve multiobjective optimization problems, many traditional methods scalarize the objective vector into a single objective. In those cases, the obtained solution is highly sensitive to the weight vector used in the scalarization process and demands that the user have knowledge about the underlying problem. Moreover, in solving multiobjective problems, designers may be interested in a set of Pareto-optimal points, instead of a single point. Since genetic algorithms (GAs) work with a population of points, it seems natural to use GAs in multiobjective optimization problems to capture a number of solutions simultaneously. Although a vector evaluated GA (VEGA) has been implemented by Schaffer and has been tried to solve a number of multiobjective problems, the algorithm seems to have bias toward some regions. In this paper, we investigate Goldberg's notion of nondominated sorting in GAs along with a niche and speciation method to find multiple Pareto-optimal points simultaneously. The proof-of-principle results obtained on three problems used by Schaffer and others suggest that the proposed method can be extended to higher dimensional and more difficult multiobjective problems. A number of suggestions for extension and application of the algorithm are also discussed.",
"title": ""
},
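Goldberg's nondominated sorting mentioned in the record above can be illustrated with a simple ranking: repeatedly extract the set of solutions not dominated by any remaining solution. A minimal sketch for minimization objectives; the example objective values are made up, and real NSGA implementations add niching and a more efficient sort:

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in at least one (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated_sort(objectives):
    """Return a list of fronts; front 0 is the current nondominated (Pareto) set."""
    remaining = set(range(len(objectives)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(objectives[j], objectives[i]) for j in remaining if j != i)]
        fronts.append(sorted(front))
        remaining -= set(front)
    return fronts

# Hypothetical two-objective values for five candidate solutions
objs = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0), (5.0, 5.0)]
print(nondominated_sort(objs))  # [[0, 1, 3], [2], [4]]
```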
{
"docid": "344112b4ecf386026fd4c4714f0f3087",
"text": "This paper deals with easy programming methods of dual-arm manipulation tasks for humanoid robots. Hereby a programming by demonstration system is used in order to observe, learn and generalize tasks performed by humans. A classification for dual-arm manipulations is introduced, enabling a segmentation of tasks into adequate subtasks. Further it is shown how the generated programs are mapped on and executed by a humanoid robot.",
"title": ""
},
{
"docid": "f465415bf9cc982b4eb75ee9a02b1468",
"text": "After the demise of the Industrial Age, we currently live in an 'Information Age' fuelled mainly by the Internet, with an ever-increasing medically and dentally literate population. The media has played its role by reporting scientific advances, as well as securitising medical and dental practices. Reality television such as 'Extreme makeovers' has also raised public awareness of body enhancements, with a greater number of people seeking such procedures. To satiate this growing demand, the dental industry has flourished by introducing novel cosmetic products such as bleaching kits, tooth coloured filling materials and a variety of dental ceramics. In addition, one only has to browse through a dental journal to notice innumerable courses and lectures on techniques for providing cosmetic dentistry. The incessant public interest, combined with unrelenting marketing by companies is gradually shifting the balance of dental care from a healing to an enhancement profession. The purpose of this article is to endeavour to answer questions such as, What is aesthetic or cosmetic dentistry? Why do patients seek cosmetic dentistry? Are enhancement procedures a part of dental practice? What, if any, ethical guidelines and constraints apply to elective enhancement procedures? What is the role of the dentist in providing or encouraging this type of 'therapy'? What treatment modalities are available for aesthetic dental treatment?",
"title": ""
},
{
"docid": "db8325925cb9fd1ebdcf7480735f5448",
"text": "A general nonparametric technique is proposed for the analysis of a complex multimodal feature space and to delineate arbitrarily shaped clusters in it. The basic computational module of the technique is an old pattern recognition procedure, the mean shift. We prove for discrete data the convergence of a recursive mean shift procedure to the nearest stationary point of the underlying density function and thus its utility in detecting the modes of the density. The equivalence of the mean shift procedure to the Nadaraya–Watson estimator from kernel regression and the robust M-estimators of location is also established. Algorithms for two low-level vision tasks, discontinuity preserving smoothing and image segmentation are described as applications. In these algorithms the only user set parameter is the resolution of the analysis, and either gray level or color images are accepted as input. Extensive experimental results illustrate their excellent performance.",
"title": ""
},
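The mean shift procedure described in the record above can be sketched in a few lines: starting from a point, repeatedly move it to the kernel-weighted mean of the data until the shift is negligible. A minimal sketch with a Gaussian kernel; the bandwidth, tolerance, and sample data are illustrative assumptions:

```python
import numpy as np

def mean_shift_point(x, data, bandwidth=1.0, tol=1e-5, max_iter=500):
    """Move x toward the nearest density mode via the recursive mean shift update."""
    for _ in range(max_iter):
        d2 = np.sum((data - x) ** 2, axis=1)
        w = np.exp(-d2 / (2.0 * bandwidth ** 2))        # Gaussian kernel weights
        x_new = (w[:, None] * data).sum(axis=0) / w.sum()
        if np.linalg.norm(x_new - x) < tol:             # converged to a stationary point
            return x_new
        x = x_new
    return x

# Two illustrative clusters; each query point should drift to its cluster centre
rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(5, 0.3, (50, 2))])
print(mean_shift_point(np.array([0.5, 0.5]), data))     # near (0, 0)
print(mean_shift_point(np.array([4.5, 4.5]), data))     # near (5, 5)
```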
{
"docid": "ff71838a3f8f44e30dc69ed2f9371bfc",
"text": "The idea that video games or computer-based applications can improve cognitive function has led to a proliferation of programs claiming to \"train the brain.\" However, there is often little scientific basis in the development of commercial training programs, and many research-based programs yield inconsistent or weak results. In this study, we sought to better understand the nature of cognitive abilities tapped by casual video games and thus reflect on their potential as a training tool. A moderately large sample of participants (n=209) played 20 web-based casual games and performed a battery of cognitive tasks. We used cognitive task analysis and multivariate statistical techniques to characterize the relationships between performance metrics. We validated the cognitive abilities measured in the task battery, examined a task analysis-based categorization of the casual games, and then characterized the relationship between game and task performance. We found that games categorized to tap working memory and reasoning were robustly related to performance on working memory and fluid intelligence tasks, with fluid intelligence best predicting scores on working memory and reasoning games. We discuss these results in the context of overlap in cognitive processes engaged by the cognitive tasks and casual games, and within the context of assessing near and far transfer. While this is not a training study, these findings provide a methodology to assess the validity of using certain games as training and assessment devices for specific cognitive abilities, and shed light on the mixed transfer results in the computer-based training literature. Moreover, the results can inform design of a more theoretically-driven and methodologically-sound cognitive training program.",
"title": ""
},
{
"docid": "9760e3676a7df5e185ec35089d06525e",
"text": "This paper examines the sufficiency of existing e-Learning standards for facilitating and supporting the introduction of adaptive techniques in computer-based learning systems. To that end, the main representational and operational requirements of adaptive learning environments are examined and contrasted against current eLearning standards. The motivation behind this preliminary analysis is attainment of: interoperability between adaptive learning systems; reuse of adaptive learning materials; and, the facilitation of adaptively supported, distributed learning activities.",
"title": ""
},
{
"docid": "3ed927f16de87a753fd7c1cc2cce7cef",
"text": "The state-of-the-art in securing mobile software systems are substantially intended to detect and mitigate vulnerabilities in a single app, but fail to identify vulnerabilities that arise due to the interaction of multiple apps, such as collusion attacks and privilege escalation chaining, shown to be quite common in the apps on the market. This paper demonstrates COVERT, a novel approach and accompanying tool-suite that relies on a hybrid static analysis and lightweight formal analysis technique to enable compositional security assessment of complex software. Through static analysis of Android application packages, it extracts relevant security specifications in an analyzable formal specification language, and checks them as a whole for inter-app vulnerabilities. To our knowledge, COVERT is the first formally-precise analysis tool for automated compositional analysis of Android apps. Our study of hundreds of Android apps revealed dozens of inter-app vulnerabilities, many of which were previously unknown. A video highlighting the main features of the tool can be found at: http://youtu.be/bMKk7OW7dGg.",
"title": ""
},
{
"docid": "80c1f7e845e21513fc8eaf644b11bdc5",
"text": "We describe survey results from a representative sample of 1,075 U. S. social network users who use Facebook as their primary network. Our results show a strong association between low engagement and privacy concern. Specifically, users who report concerns around sharing control, comprehension of sharing practices or general Facebook privacy concern, also report consistently less time spent as well as less (self-reported) posting, commenting and \"Like\"ing of content. The limited evidence of other significant differences between engaged users and others suggests that privacy-related concerns may be an important gate to engagement. Indeed, privacy concern and network size are the only malleable attributes that we find to have significant association with engagement. We manually categorize the privacy concerns finding that many are nonspecific and not associated with negative personal experiences. Finally, we identify some education and utility issues associated with low social network activity, suggesting avenues for increasing engagement amongst current users.",
"title": ""
},
{
"docid": "23d1534a9daee5eeefaa1fdc8a5db0aa",
"text": "Obtaining a protein’s 3D structure is crucial to the understanding of its functions and interactions with other proteins. It is critical to accelerate the protein crystallization process with improved accuracy for understanding cancer and designing drugs. Systematic high-throughput approaches in protein crystallization have been widely applied, generating a large number of protein crystallization-trial images. Therefore, an efficient and effective automatic analysis for these images is a top priority. In this paper, we present a novel system, CrystalNet, for automatically labeling outcomes of protein crystallization-trial images. CrystalNet is a deep convolutional neural network that automatically extracts features from X-ray protein crystallization images for classification. We show that (1) CrystalNet can provide real-time labels for crystallization images effectively, requiring approximately 2 seconds to provide labels for all 1536 images of crystallization microassay on each plate; (2) compared with the stateof-the-art classification systems in crystallization image analysis, our technique demonstrates an improvement of 8% in accuracy, and achieve 90.8% accuracy in classification. As a part of the high-throughput pipeline which generates millions of images a year, CrystalNet can lead to a substantial reduction of labor-intensive screening.",
"title": ""
},
{
"docid": "af6c9c39b9d1be54ccc6e2478823df16",
"text": "Mobile security threats have recently emerged because of the fast growth in mobile technologies and the essential role that mobile devices play in our daily lives. For that, and to particularly address threats associated with malware, various techniques are developed in the literature, including ones that utilize static, dynamic, on-device, off-device, and hybrid approaches for identifying, classifying, and defend against mobile threats. Those techniques fail at times, and succeed at other times, while creating a trade-off of performance and operation. In this paper, we contribute to the mobile security defense posture by introducing Andro-AutoPsy, an anti-malware system based on similarity matching of malware-centric and malware creator-centric information. Using Andro-AutoPsy, we detect and classify malware samples into similar subgroups by exploiting the profiles extracted from integrated footprints, which are implicitly equivalent to distinct characteristics. The experimental results demonstrate that Andro-AutoPsy is scalable, performs precisely in detecting and classifying malware with low false positives and false negatives, and is capable of identifying zero-day mobile malware. © 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "e083b5fdf76bab5cdc8fcafc77db23f7",
"text": "Working under a model of privacy in which data remains private even from the statistician, we study the tradeoff between privacy guarantees and the risk of the resulting statistical estimators. We develop private versions of classical information-theoretic bounds, in particular those due to Le Cam, Fano, and Assouad. These inequalities allow for a precise characterization of statistical rates under local privacy constraints and the development of provably (minimax) optimal estimation procedures. We provide a treatment of several canonical families of problems: mean estimation and median estimation, multinomial probability estimation, and nonparametric density estimation. For all of these families, we provide lower and upper bounds that match up to constant factors, and exhibit new (optimal) privacy-preserving mechanisms and computationally efficient estimators that achieve the bounds. Additionally, we present a variety of experimental results for estimation problems involving sensitive data, including salaries, censored blog posts and articles, and drug abuse; these experiments demonstrate the importance of deriving optimal procedures.",
"title": ""
},
{
"docid": "7ec7b4783afb72ff3b182e1375187b11",
"text": "Climate change is predicted to increase the intensity and negative impacts of urban heat events, prompting the need to develop preparedness and adaptation strategies that reduce societal vulnerability to extreme heat. Analysis of societal vulnerability to extreme heat events requires an interdisciplinary approach that includes information about weather and climate, the natural and built environment, social processes and characteristics, interactions with stakeholders, and an assessment of community vulnerability at a local level. In this letter, we explore the relationships between people and places, in the context of urban heat stress, and present a new research framework for a multi-faceted, top-down and bottom-up analysis of local-level vulnerability to extreme heat. This framework aims to better represent societal vulnerability through the integration of quantitative and qualitative data that go beyond aggregate demographic information. We discuss how different elements of the framework help to focus attention and resources on more targeted health interventions, heat hazard mitigation and climate adaptation strategies.",
"title": ""
},
{
"docid": "35a85d6652bd333d93f8112aff83ab83",
"text": "For natural language understanding (NLU) technology to be maximally useful, both practically and as a scientific object of study, it must be general: it must be able to process language in a way that is not exclusively tailored to any one specific task or dataset. In pursuit of this objective, we introduce the General Language Understanding Evaluation benchmark (GLUE), a tool for evaluating and analyzing the performance of models across a diverse range of existing NLU tasks. GLUE is modelagnostic, but it incentivizes sharing knowledge across tasks because certain tasks have very limited training data. We further provide a hand-crafted diagnostic test suite that enables detailed linguistic analysis of NLU models. We evaluate baselines based on current methods for multi-task and transfer learning and find that they do not immediately give substantial improvements over the aggregate performance of training a separate model per task, indicating room for improvement in developing general and robust NLU systems.",
"title": ""
},
{
"docid": "2d8baa9a78e5e20fd20ace55724e2aec",
"text": "To determine the relationship between fatigue and post-activation potentiation, we examined the effects of sub-maximal continuous running on neuromuscular function tests, as well as on the squat jump and counter movement jump in endurance athletes. The height of the squat jump and counter movement jump and the estimate of the fast twitch fiber recruiting capabilities were assessed in seven male middle distance runners before and after 40 min of continuous running at an intensity corresponding to the individual lactate threshold. The same test was then repeated after three weeks of specific aerobic training. Since the three variables were strongly correlated, only the estimate of the fast twitch fiber was considered for the results. The subjects showed a significant improvement in the fast twitch fiber recruitment percentage after the 40 min run. Our data show that submaximal physical exercise determined a change in fast twitch muscle fiber recruitment patterns observed when subjects performed vertical jumps; however, this recruitment capacity was proportional to the subjects' individual fast twitch muscle fiber profiles measured before the 40 min run. The results of the jump tests did not change significantly after the three-week training period. These results suggest that pre-fatigue methods, through sub-maximal exercises, could be used to take advantage of explosive capacity in middle-distance runners.",
"title": ""
},
{
"docid": "5f9da666504ade5b661becfd0a648978",
"text": "cefe.cnrs-mop.fr Under natural selection, individuals tend to adapt to their local environmental conditions, resulting in a pattern of LOCAL ADAPTATION (see Glossary). Local adaptation can occur if the direction of selection changes for an allele among habitats (antagonistic environmental effect), but it might also occur if the intensity of selection at several loci that are maintained as polymorphic by recurrent mutations covaries negatively among habitats. These two possibilities have been clearly identified in the related context of the evolution of senescence but have not have been fully appreciated in empirical and theoretical studies of local adaptation [1,2].",
"title": ""
},
{
"docid": "74770d8f7e0ac066badb9760a6a2b925",
"text": "Memristor-based synaptic network has been widely investigated and applied to neuromorphic computing systems for the fast computation and low design cost. As memristors continue to mature and achieve higher density, bit failures within crossbar arrays can become a critical issue. These can degrade the computation accuracy significantly. In this work, we propose a defect rescuing design to restore the computation accuracy. In our proposed design, significant weights in a specified network are first identified and retraining and remapping algorithms are described. For a two layer neural network with 92.64% classification accuracy on MNIST digit recognition, our evaluation based on real device testing shows that our design can recover almost its full performance when 20% random defects are present.",
"title": ""
},
{
"docid": "ff5c993fd071b31b6f639d1f64ce28b0",
"text": "We show that explicit pragmatic inference aids in correctly generating and following natural language instructions for complex, sequential tasks. Our pragmatics-enabled models reason about why speakers produce certain instructions, and about how listeners will react upon hearing them. Like previous pragmatic models, we use learned base listener and speaker models to build a pragmatic speaker that uses the base listener to simulate the interpretation of candidate descriptions, and a pragmatic listener that reasons counterfactually about alternative descriptions. We extend these models to tasks with sequential structure. Evaluation of language generation and interpretation shows that pragmatic inference improves state-of-the-art listener models (at correctly interpreting human instructions) and speaker models (at producing instructions correctly interpreted by humans) in diverse settings.",
"title": ""
}
] |
scidocsrr
|
700f4f089e4bd53e8c2bcf3e9f6b8e3a
|
Digital Social Norm Enforcement: Online Firestorms in Social Media
|
[
{
"docid": "01b9bf49c88ae37de79b91edeae20437",
"text": "While online, some people self-disclose or act out more frequently or intensely than they would in person. This article explores six factors that interact with each other in creating this online disinhibition effect: dissociative anonymity, invisibility, asynchronicity, solipsistic introjection, dissociative imagination, and minimization of authority. Personality variables also will influence the extent of this disinhibition. Rather than thinking of disinhibition as the revealing of an underlying \"true self,\" we can conceptualize it as a shift to a constellation within self-structure, involving clusters of affect and cognition that differ from the in-person constellation.",
"title": ""
},
{
"docid": "6d52a9877ddf18eb7e43c83000ed4da1",
"text": "Cyberbullying has recently emerged as a new form of bullying and harassment. 360 adolescents (12-20 years), were surveyed to examine the nature and extent of cyberbullying in Swedish schools. Four categories of cyberbullying (by text message, email, phone call and picture/video clip) were examined in relation to age and gender, perceived impact, telling others, and perception of adults becoming aware of such bullying. There was a significant incidence of cyberbullying in lower secondary schools, less in sixth-form colleges. Gender differences were few. The impact of cyberbullying was perceived as highly negative for picture/video clip bullying. Cybervictims most often chose to either tell their friends or no one at all about the cyberbullying, so adults may not be aware of cyberbullying, and (apart from picture/video clip bullying) this is how it was perceived by pupils. Findings are discussed in relation to similarities and differences between cyberbullying and the more traditional forms of bullying.",
"title": ""
}
] |
[
{
"docid": "6059cfa690c2de0a8c883aa741000f3a",
"text": "We study how a viewer can control a television set remotely by hand gestures. We address two fundamental issues of gesture{based human{computer interaction: (1) How can one communicate a rich set of commands without extensive user training and memorization of gestures? (2) How can the computer recognize the commands in a complicated visual environment? Our solution to these problems exploits the visual feedback of the television display. The user uses only one gesture: the open hand, facing the camera. He controls the television by moving his hand. On the display, a hand icon appears which follows the user's hand. The user can then move his own hand to adjust various graphical controls with the hand icon. The open hand presents a characteristic image which the computer can detect and track. We perform a normalized correlation of a template hand to the image to analyze the user's hand. A local orientation representation is used to achieve some robustness to lighting variations. We made a prototype of this system using a computer workstation and a television. The graphical overlays appear on the computer screen, although they could be mixed with the video to appear on the television. The computer controls the television set through serial port commands to an electronically controlled remote control. We describe knowledge we gained from building the prototype.",
"title": ""
},
{
"docid": "a83b6602e0d4a45e3bad60967890c46a",
"text": "In the present work, we tackle the issue of designing, prototyping and testing a general-purpose automated level editor for platform video games. Beside relieving level designers from the burden of repetitive work, Procedural Content Generation can be exploited for optimizing the development process, increasing re-playability, adapting games to specific audiences, and enabling new games mechanics. The tool proposed in this paper is aimed at producing levels that are both playable and fun. At the same time, it should guarantee maximum freedom to the level designer, and suggest corrections functional to the quality of the player experience.",
"title": ""
},
{
"docid": "ba3522be00805402629b4fb4a2c21cc4",
"text": "Successful electronic government requires the successful implementation of technology. This book lays out a framework for understanding a system of decision processes that have been shown to be associated with the successful use of technology. Peter Weill and Jeanne Ross are based at the Center for Information Systems Research at MIT’s Sloan School of Management, which has been doing research on the management of information technology since 1974. Understanding how to make decisions about information technology has been a primary focus of the Center for decades. Weill and Ross’ book is based on two primary studies and a number of related projects. The more recent study is a survey of 256 organizations from the Americas, Europe, and Asia Pacific that was led by Peter Weill between 2001 and 2003. This work also included 32 case studies. The second study is a set of 40 case studies developed by Jeanne Ross between 1999 and 2003 that focused on the relationship between information technology (IT) architecture and business strategy. This work identified governance issues associated with IT and organizational change efforts. Three other projects undertaken by Weill, Ross, and others between 1998 and 2001 also contributed to the material described in the book. Most of this work is available through the CISR Web site, http://mitsloan.mit.edu/cisr/rmain.php. Taken together, these studies represent a substantial body of work on which to base the development of a frameBOOK REVIEW",
"title": ""
},
{
"docid": "490785e55545eda74f3275a0a8b5da73",
"text": "This paper presents a novel iris coding method based on differences of discrete cosine transform (DCT) coefficients of overlapped angular patches from normalized iris images. The feature extraction capabilities of the DCT are optimized on the two largest publicly available iris image data sets, 2,156 images of 308 eyes from the CASIA database and 2,955 images of 150 eyes from the Bath database. On this data, we achieve 100 percent correct recognition rate (CRR) and perfect receiver-operating characteristic (ROC) curves with no registered false accepts or rejects. Individual feature bit and patch position parameters are optimized for matching through a product-of-sum approach to Hamming distance calculation. For verification, a variable threshold is applied to the distance metric and the false acceptance rate (FAR) and false rejection rate (FRR) are recorded. A new worst-case metric is proposed for predicting practical system performance in the absence of matching failures, and the worst case theoretical equal error rate (EER) is predicted to be as low as 2.59 times 10-1 available data sets",
"title": ""
},
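A rough sketch of the kind of coding step described in the record above: take overlapping patches from a normalized iris strip, compute a 1-D DCT per patch, difference the coefficients of adjacent patches, and keep only the signs as feature bits compared by Hamming distance. The patch size, overlap, number of retained coefficients, and random example data are illustrative assumptions rather than the paper's actual parameters:

```python
import numpy as np
from scipy.fft import dct

def iris_code(strip, patch_w=12, step=6, n_coeffs=8):
    """Binary code from signs of differences between DCTs of overlapping patches."""
    coeffs = []
    for start in range(0, strip.shape[1] - patch_w + 1, step):
        patch = strip[:, start:start + patch_w].mean(axis=0)   # average rows of the patch
        coeffs.append(dct(patch, norm="ortho")[:n_coeffs])     # keep low-order coefficients
    diffs = np.diff(np.array(coeffs), axis=0)                  # differences of adjacent patches
    return (diffs > 0).astype(np.uint8).ravel()                # sign bits as the iris code

def hamming_distance(code_a, code_b):
    return np.count_nonzero(code_a != code_b) / code_a.size

# Two random "normalized iris" strips stand in for real images
rng = np.random.default_rng(1)
a, b = rng.random((32, 256)), rng.random((32, 256))
print(hamming_distance(iris_code(a), iris_code(b)))  # ~0.5 for unrelated patterns
```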
{
"docid": "075b05396818b13eff77fdcf46053fa7",
"text": "Link (association) analysis has been used in the criminal justice domain to search large datasets for associations between crime entities in order to facilitate crime investigations. However, link analysis still faces many challenging problems, such as information overload, high search complexity, and heavy reliance on domain knowledge. To address these challenges, this article proposes several techniques for automated, effective, and efficient link analysis. These techniques include the co-occurrence analysis, the shortest path algorithm, and a heuristic approach to identifying associations and determining their importance. We developed a prototype system called CrimeLink Explorer based on the proposed techniques. Results of a user study with 10 crime investigators from the Tucson Police Department showed that our system could help subjects conduct link analysis more efficiently than traditional single-level link analysis tools. Moreover, subjects believed that association paths found based on the heuristic approach were more accurate than those found based solely on the co-occurrence analysis and that the automated link analysis system would be of great help in crime investigations.",
"title": ""
},
{
"docid": "a9709367bc84ececd98f65ed7359f6b0",
"text": "Though many tools are available to help programmers working on change tasks, and several studies have been conducted to understand how programmers comprehend systems, little is known about the specific kinds of questions programmers ask when evolving a code base. To fill this gap we conducted two qualitative studies of programmers performing change tasks to medium to large sized programs. One study involved newcomers working on assigned change tasks to a medium-sized code base. The other study involved industrial programmers working on their own change tasks on code with which they had experience. The focus of our analysis has been on what information a programmer needs to know about a code base while performing a change task and also on howthey go about discovering that information. Based on this analysis we catalog and categorize 44 different kinds of questions asked by our participants. We also describe important context for how those questions were answered by our participants, including their use of tools.",
"title": ""
},
{
"docid": "321049dbe0d9bae5545de3d8d7048e01",
"text": "ShopTalk, a proof-of-concept system designed to assist individuals with visual impairments with finding shelved products in grocery stores, is built on the assumption that simple verbal route directions and layout descriptions can be used to leverage the O&M skills of independent visually impaired travelers to enable them to navigate the store and retrieve shelved products. This paper introduces ShopTalk and summarizes experiments performed in a real-world supermarket.",
"title": ""
},
{
"docid": "df610551aec503acd1a31fb519fdeabe",
"text": "A small form factor, 79 GHz, MIMO radar sensor with 2D angle of arrival estimation capabilities was designed for automotive applications. It offers a 0.05 m distance resolution required to make small minimum distance measurements. The radar dimensions are 42×44×20 mm3 enabling installation in novel side locations. This aspect, combined with a wide field of view, creates a coverage that compliments the near range coverage gaps of existing long and medium range radars. Therefore, this radar supports novel radar applications such as parking aid and can be used to create a 360 degrees safety cocoon around the car.",
"title": ""
},
{
"docid": "f331cb6d4b970829100bfe103a8d8762",
"text": "This paper presents lessons learned from an experiment to reverse engineer a program. A reverse engineering process was used as part of a project to develop an Ada implementation of a Fortran program and upgrade the existing documentation. To accomplish this, design information was extracted from the Fortran source code and entered into a software development environment. The extracted design information was used to implement a new version of the program written in Ada. This experiment revealed issues about recovering design information, such as, separating design details from implementation details, dealing with incomplete or erroneous information, traceability of information between implementation and recovered design, and re-engineering. The reverse engineering process used to recover the design, and the experience gained during the study are reported.",
"title": ""
},
{
"docid": "477769b83e70f1d46062518b1d692664",
"text": "Deep Neural Networks (DNNs) have been demonstrated to perform exceptionally well on most recognition tasks such as image classification and segmentation. However, they have also been shown to be vulnerable to adversarial examples. This phenomenon has recently attracted a lot of attention but it has not been extensively studied on multiple, large-scale datasets and complex tasks such as semantic segmentation which often require more specialised networks with additional components such as CRFs, dilated convolutions, skip-connections and multiscale processing. In this paper, we present what to our knowledge is the first rigorous evaluation of adversarial attacks on modern semantic segmentation models, using two large-scale datasets. We analyse the effect of different network architectures, model capacity and multiscale processing, and show that many observations made on the task of classification do not always transfer to this more complex task. Furthermore, we show how mean-field inference in deep structured models and multiscale processing naturally implement recently proposed adversarial defenses. Our observations will aid future efforts in understanding and defending against adversarial examples. Moreover, in the shorter term, we show which segmentation models should currently be preferred in safety-critical applications due to their inherent robustness.",
"title": ""
},
{
"docid": "61eb4d0961242bd1d1e59d889a84f89d",
"text": "Understanding and forecasting the health of an online community is of great value to its owners and managers who have vested interests in its longevity and success. Nevertheless, the association between community evolution and the behavioural patterns and trends of its members is not clearly understood, which hinders our ability of making accurate predictions of whether a community is flourishing or diminishing. In this paper we use statistical analysis, combined with a semantic model and rules for representing and computing behaviour in online communities. We apply this model on a number of forum communities from Boards.ie to categorise behaviour of community members over time, and report on how different behaviour compositions correlate with positive and negative community growth in these forums.",
"title": ""
},
{
"docid": "1162833be969a71b3d9b837d7e6f4464",
"text": "RaineR WaseR1,2* and Masakazu aono3,4 1Institut für Werkstoffe der Elektrotechnik 2, RWTH Aachen University, 52056 Aachen, Germany 2Institut für Festkörperforschung/CNI—Center of Nanoelectronics for Information Technology, Forschungszentrum Jülich, 52425 Jülich, Germany 3Nanomaterials Laboratories, National Institute for Material Science, 1-1 Namiki, Tsukuba, Ibaraki 305-0044, Japan 4ICORP/Japan Science and Technology Agency, 4-1-8 Honcho, Kawaguchi, Saitama 332-0012, Japan *e-mail: [email protected]",
"title": ""
},
{
"docid": "1dcfd9b82cddb3111df067497febdd8b",
"text": "Studies investigating the prevalence of psychiatric disorders among trans individuals have identified elevated rates of psychopathology. Research has also provided conflicting psychiatric outcomes following gender-confirming medical interventions. This review identifies 38 cross-sectional and longitudinal studies describing prevalence rates of psychiatric disorders and psychiatric outcomes, pre- and post-gender-confirming medical interventions, for people with gender dysphoria. It indicates that, although the levels of psychopathology and psychiatric disorders in trans people attending services at the time of assessment are higher than in the cis population, they do improve following gender-confirming medical intervention, in many cases reaching normative values. The main Axis I psychiatric disorders were found to be depression and anxiety disorder. Other major psychiatric disorders, such as schizophrenia and bipolar disorder, were rare and were no more prevalent than in the general population. There was conflicting evidence regarding gender differences: some studies found higher psychopathology in trans women, while others found no differences between gender groups. Although many studies were methodologically weak, and included people at different stages of transition within the same cohort of patients, overall this review indicates that trans people attending transgender health-care services appear to have a higher risk of psychiatric morbidity (that improves following treatment), and thus confirms the vulnerability of this population.",
"title": ""
},
{
"docid": "b54045769ce80654400706a2489a2968",
"text": "This study aims to develop a methodology for predicting cycle time based on domain knowledge and data mining algorithms given production status including WIP, throughput. The proposed model and derived rules were validated with real data and demonstrated its practical viability for supporting production planning decisions",
"title": ""
},
{
"docid": "70745e8cdf957b1388ab38a485e98e60",
"text": "Network studies of large-scale brain connectivity have begun to reveal attributes that promote the segregation and integration of neural information: communities and hubs. Network communities are sets of regions that are strongly interconnected among each other while connections between members of different communities are less dense. The clustered connectivity of network communities supports functional segregation and specialization. Network hubs link communities to one another and ensure efficient communication and information integration. This review surveys a number of recent reports on network communities and hubs, and their role in integrative processes. An emerging focus is the shifting balance between segregation and integration over time, which manifest in continuously changing patterns of functional interactions between regions, circuits and systems.",
"title": ""
},
{
"docid": "b6d8e6b610eff993dfa93f606623e31d",
"text": "Data journalism designates journalistic work inspired by digital data sources. A particularly popular and active area of data journalism is concerned with fact-checking. The term was born in the journalist community and referred the process of verifying and ensuring the accuracy of published media content; since 2012, however, it has increasingly focused on the analysis of politics, economy, science, and news content shared in any form, but first and foremost on the Web (social and otherwise). These trends have been noticed by computer scientists working in the industry and academia. Thus, a very lively area of digital content management research has taken up these problems and works to propose foundations (models), algorithms, and implement them through concrete tools. Our tutorial: (i) Outlines the current state of affairs in the area of digital (or computational) fact-checking in newsrooms, by journalists, NGO workers, scientists and IT companies; (ii) Shows which areas of digital content management research, in particular those relying on the Web, can be leveraged to help fact-checking, and gives a comprehensive survey of efforts in this area; (iii) Highlights ongoing trends, unsolved problems, and areas where we envision future scientific and practical advances. PVLDB Reference Format: S. Cazalens, J. Leblay, P. Lamarre, I. Manolescu, X. Tannier. Computational Fact Checking: A Content Management Perspective. PVLDB, 11 (12): 2110-2113, 2018. DOI: https://doi.org/10.14778/3229863.3229880 This work is licensed under the Creative Commons AttributionNonCommercial-NoDerivatives 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-nd/4.0/. For any use beyond those covered by this license, obtain permission by emailing [email protected]. Proceedings of the VLDB Endowment, Vol. 11, No. 12 Copyright 2018 VLDB Endowment 2150-8097/18/8. DOI: https://doi.org/10.14778/3229863.3229880 1. OUTLINE In Section 1.1, we provide a short history of journalistic fact-checking and presents its most recent and visible actors, from the media and/or NGO communities. Section 1.2 discusses the scientific content management areas which bring useful tools for computational fact-checking. 1.1 Data journalism and fact-checking While data of some form is a natural ingredient of all reporting, the increasing volumes and complexity of digital data lead to a qualitative jump, where technical skills, and in particular data science skills, are stringently needed in journalistic work. A particularly popular and active area of data journalism is concerned with fact-checking. The term was born in the journalist community; it referred to the task of identifying and checking factual claims present in media content, which dedicated newsroom personnel would then check for factual accuracy. The goal of such checking was to avoid misinformation, to protect the journal reputation and avoid legal actions. Starting around 2012, first in the United States (FactCheck.org), then in Europe, and soon after in all areas of the world, journalists have started to take advantage of modern technologies for processing content, such as text, video, structured and unstructured data, in order to automate, at least partially, the knowledge finding, reasoning, and analysis tasks which had been previously performed completely by humans. Over time, the focus of fact-checking shifted from verifying claims made by media outlets, toward the claims made by politicians and other public figures. 
This trend coincided with the parallel (but distinct) evolution toward asking Government Open Data, that is: the idea that governing bodies should share with the public precise information describing their functioning, so that the people have means to assess the quality of their elected representation. Government Open Data became quickly available, in large volumes, e.g. through data.gov in the US, data.gov.uk in the UK, data.gouv.fr in France etc.; journalists turned out to be the missing link between the newly available data and comprehension by the public. Data journalism thus found http://factcheck.org",
"title": ""
},
{
"docid": "0ae5df7af64f0069d691922d391f3c60",
"text": "With the realization that more research is needed to explore external factors (e.g., pedagogy, parental involvement in the context of K-12 learning) and internal factors (e.g., prior knowledge, motivation) underlying student-centered mobile learning, the present study conceptually and empirically explores how the theories and methodologies of self-regulated learning (SRL) can help us analyze and understand the processes of mobile learning. The empirical data collected from two elementary science classes in Singapore indicates that the analytical SRL model of mobile learning proposed in this study can illuminate the relationships between three aspects of mobile learning: students’ self-reports of psychological processes, patterns of online learning behavior in the mobile learning environment (MLE), and learning achievement. Statistical analyses produce three main findings. First, student motivation in this case can account for whether and to what degree the students can actively engage in mobile learning activities metacognitively, motivationally, and behaviorally. Second, the effect of students’ self-reported motivation on their learning achievement is mediated by their behavioral engagement in a pre-designed activity in the MLE. Third, students’ perception of parental autonomy support is not only associated with their motivation in school learning, but also associated with their actual behaviors in self-regulating their learning. ! 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "a478b6f7accfb227e6ee5a6b35cd7fa1",
"text": "This paper presents the development of an ultra-high-speed permanent magnet synchronous motor (PMSM) that produces output shaft power of 2000 W at 200 000 rpm with around 90% efficiency. Due to the guaranteed open-loop stability over the full operating speed range, the developed motor system is compact and low cost since it can avoid the design complexity of a closed-loop controller. This paper introduces the collaborative design approach of the motor system in order to ensure both performance requirements and stability over the full operating speed range. The actual implementation of the motor system is then discussed. Finally, computer simulation and experimental results are provided to validate the proposed design and its effectiveness",
"title": ""
},
{
"docid": "8375f143ff6b42e36e615a78a362304b",
"text": "The Ball and Beam system is a popular technique for the study of control systems. The system has highly non-linear characteristics and is an excellent tool to represent an unstable system. The control of such a system presents a challenging task. The ball and beam mirrors the real time unstable complex systems such as flight control, on a small laboratory level and provides for developing control algorithms which can be implemented at a higher scale. The objective of this paper is to design and implement cascade PD control of the ball and beam system in LabVIEW using data acquisition board and DAQmx and use the designed control circuit to verify results in real time.",
"title": ""
},
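A minimal sketch of the cascade PD arrangement the record above describes: an outer PD loop on ball position generates a beam-angle setpoint, and an inner PD loop drives the beam toward that angle. The gains, the angle limit, and the example measurements are illustrative assumptions, not the authors' tuned values or their LabVIEW implementation:

```python
class PD:
    def __init__(self, kp, kd):
        self.kp, self.kd, self.prev_err = kp, kd, 0.0

    def step(self, err, dt):
        out = self.kp * err + self.kd * (err - self.prev_err) / dt
        self.prev_err = err
        return out

def cascade_pd(ball_pos, ball_ref, beam_angle, outer, inner, dt, angle_limit=0.3):
    """Outer loop: position error -> desired beam angle. Inner loop: angle error -> motor command."""
    angle_ref = outer.step(ball_ref - ball_pos, dt)
    angle_ref = max(-angle_limit, min(angle_limit, angle_ref))   # saturate the setpoint
    return inner.step(angle_ref - beam_angle, dt)

# Illustrative use with made-up gains and sensor readings
outer, inner = PD(kp=2.0, kd=1.2), PD(kp=15.0, kd=0.8)
command = cascade_pd(ball_pos=0.10, ball_ref=0.0, beam_angle=0.02,
                     outer=outer, inner=inner, dt=0.01)
print(command)
```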
{
"docid": "bbbbe3f926de28d04328f1de9bf39d1a",
"text": "The detection of fraudulent financial statements (FFS) is an important and challenging issue that has served as the impetus for many academic studies over the past three decades. Although nonfinancial ratios are generally acknowledged as the key factor contributing to the FFS of a corporation, they are usually excluded from early detection models. The objective of this study is to increase the accuracy of FFS detection by integrating the rough set theory (RST) and support vector machines (SVM) approaches, while adopting both financial and nonfinancial ratios as predictive variables. The results showed that the proposed hybrid approach (RSTþSVM) has the best classification rate as well as the lowest occurrence of Types I and II errors, and that nonfinancial ratios are indeed valuable information in FFS detection.",
"title": ""
}
] |
scidocsrr
|
9a287355a9527e38a8c78f0e88362339
|
A Tutorial on Network Embeddings
|
[
{
"docid": "da607ab67cb9c1e1d08a70b15f9470d7",
"text": "Network embedding (NE) is playing a critical role in network analysis, due to its ability to represent vertices with efficient low-dimensional embedding vectors. However, existing NE models aim to learn a fixed context-free embedding for each vertex and neglect the diverse roles when interacting with other vertices. In this paper, we assume that one vertex usually shows different aspects when interacting with different neighbor vertices, and should own different embeddings respectively. Therefore, we present ContextAware Network Embedding (CANE), a novel NE model to address this issue. CANE learns context-aware embeddings for vertices with mutual attention mechanism and is expected to model the semantic relationships between vertices more precisely. In experiments, we compare our model with existing NE models on three real-world datasets. Experimental results show that CANE achieves significant improvement than state-of-the-art methods on link prediction and comparable performance on vertex classification. The source code and datasets can be obtained from https://github.com/ thunlp/CANE.",
"title": ""
}
] |
[
{
"docid": "cf6c2d8fac95d95998431fbb31953997",
"text": "Global software development (GSD) is a phenomenon that is receiving considerable interest from companies all over the world. In GSD, stakeholders from different national and organizational cultures are involved in developing software and the many benefits include access to a large labour pool, cost advantage and round-the-clock development. However, GSD is technologically and organizationally complex and presents a variety of challenges to be managed by the software development team. In particular, temporal, geographical and socio-cultural distances impose problems not experienced in traditional systems development. In this paper, we present findings from a case study in which we explore the particular challenges associated with managing GSD. Our study also reveals some of the solutions that are used to deal with these challenges. We do so by empirical investigation at three US based GSD companies operating in Ireland. Based on qualitative interviews we present challenges related to temporal, geographical and socio-cultural distance",
"title": ""
},
{
"docid": "b1f29f32ecc6aa2404cad271427675f2",
"text": "RATIONALE\nAnti-N-methyl-D-aspartate (NMDA) receptor encephalitis is an autoimmune disorder that can be controlled and reversed by immunotherapy. The presentation of NMDA receptor encephalitis varies, but NMDA receptor encephalitis is seldom reported in patients with both bilateral teratomas and preexisting brain injury.\n\n\nPATIENT CONCERNS\nA 28-year-old female with a history of traumatic intracranial hemorrhage presented acute psychosis, seizure, involuntary movement, and conscious disturbance with a fulminant course. Anti-NMDA receptor antibody was identified in both serum and cerebrospinal fluid, confirming the diagnosis of anti-NMDA receptor encephalitis. Bilateral teratomas were also identified during tumor survey. DIAGNOSES:: anti-N-methyl-D-aspartate receptor encephalitis.\n\n\nINTERVENTIONS\nTumor resection and immunotherapy were performed early during the course.\n\n\nOUTCOMES\nThe patient responded well to tumor resection and immunotherapy. Compared with other reports in the literature, her symptoms rapidly improved without further relapse.\n\n\nLESSONS\nThis case report demonstrates that bilateral teratomas may be related to high anybody titers and that the preexisting head injury may be responsible for lowering the threshold of neurological deficits. Early diagnosis and therapy are crucial for a good prognosis in such patients.",
"title": ""
},
{
"docid": "b803d626421c7e7eaf52635c58523e8f",
"text": "Force-directed algorithms are among the most flexible methods for calculating layouts of simple undirected graphs. Also known as spring embedders, such algorithms calculate the layout of a graph using only information contained within the structure of the graph itself, rather than relying on domain-specific knowledge. Graphs drawn with these algorithms tend to be aesthetically pleasing, exhibit symmetries, and tend to produce crossing-free layouts for planar graphs. In this survey we consider several classical algorithms, starting from Tutte’s 1963 barycentric method, and including recent scalable multiscale methods for large and dynamic graphs.",
"title": ""
},
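A bare-bones spring-embedder iteration of the kind surveyed in the record above: nodes repel each other, edges pull their endpoints together, and positions are updated with a cooling step size. The force model and constants loosely follow the common Fruchterman-Reingold style and are illustrative, not taken from the survey:

```python
import numpy as np

def force_directed_layout(n, edges, iters=200, k=1.0, seed=0):
    """Return 2-D positions for n nodes given an edge list."""
    rng = np.random.default_rng(seed)
    pos = rng.random((n, 2))
    for it in range(iters):
        disp = np.zeros_like(pos)
        for i in range(n):                      # pairwise repulsion
            delta = pos[i] - pos
            dist = np.linalg.norm(delta, axis=1) + 1e-9
            disp[i] += (delta / dist[:, None] * (k * k / dist)[:, None]).sum(axis=0)
        for i, j in edges:                      # attraction along edges
            delta = pos[i] - pos[j]
            dist = np.linalg.norm(delta) + 1e-9
            pull = (dist * dist / k) * (delta / dist)
            disp[i] -= pull
            disp[j] += pull
        step = 0.1 * (1.0 - it / iters)         # simple cooling schedule
        length = np.linalg.norm(disp, axis=1, keepdims=True) + 1e-9
        pos += disp / length * np.minimum(length, step)
    return pos

print(force_directed_layout(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # roughly a square
```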
{
"docid": "eabb50988aeb711995ff35833a47770d",
"text": "Although chemistry is by far the largest scientific discipline according to any quantitative measure, it had, until recently, been virtually ignored by professional philosophers of science. They left both a vacuum and a one-sided picture of science tailored to physics. Since the early 1990s, the situation has changed drastically, such that philosophy of chemistry is now one of the most flourishing fields in the philosophy of science, like the philosophy of biology that emerged in the 1970s. This article narrates the development and provides a survey of the main topics and trends.",
"title": ""
},
{
"docid": "62f8eb0e7eafe1c0d857dadc72008684",
"text": "In the current Web 2.0 era, the popularity of Web resources fluctuates ephemerally, based on trends and social interest. As a result, content-based relevance signals are insufficient to meet users' constantly evolving information needs in searching for Web 2.0 items. Incorporating future popularity into ranking is one way to counter this. However, predicting popularity as a third party (as in the case of general search engines) is difficult in practice, due to their limited access to item view histories. To enable popularity prediction externally without excessive crawling, we propose an alternative solution by leveraging user comments, which are more accessible than view counts. Due to the sparsity of comments, traditional solutions that are solely based on view histories do not perform well. To deal with this sparsity, we mine comments to recover additional signal, such as social influence. By modeling comments as a time-aware bipartite graph, we propose a regularization-based ranking algorithm that accounts for temporal, social influence and current popularity factors to predict the future popularity of items. Experimental results on three real-world datasets --- crawled from YouTube, Flickr and Last.fm --- show that our method consistently outperforms competitive baselines in several evaluation tasks.",
"title": ""
},
{
"docid": "a7f0f573b28b1fb82c3cba2d782e7d58",
"text": "This paper presents a meta-analysis of theory and research about writing and writing pedagogy, identifying six discourses – configurations of beliefs and practices in relation to the teaching of writing. It introduces and explains a framework for the analysis of educational data about writing pedagogy inwhich the connections are drawn across viewsof language, viewsofwriting, views of learning towrite,approaches to the teaching of writing, and approaches to the assessment of writing. The framework can be used for identifying discourses of writing in data such as policy documents, teaching and learning materials, recordings of pedagogic practice, interviews and focus groups with teachers and learners, and media coverage of literacy education. The paper also proposes that, while there are tensions and contradictions among these discourses, a comprehensive writing pedagogy might integrate teaching approaches from all six.",
"title": ""
},
{
"docid": "4e4560d1434ee05c30168e49ffc3d94a",
"text": "We present a tree data structure for fast nearest neighbor operations in general <i>n</i>-point metric spaces (where the data set consists of <i>n</i> points). The data structure requires <i>O</i>(<i>n</i>) space <i>regardless</i> of the metric's structure yet maintains all performance properties of a navigating net (Krauthgamer & Lee, 2004b). If the point set has a bounded expansion constant <i>c</i>, which is a measure of the intrinsic dimensionality, as defined in (Karger & Ruhl, 2002), the cover tree data structure can be constructed in <i>O</i> (<i>c</i><sup>6</sup><i>n</i> log <i>n</i>) time. Furthermore, nearest neighbor queries require time only logarithmic in <i>n</i>, in particular <i>O</i> (<i>c</i><sup>12</sup> log <i>n</i>) time. Our experimental results show speedups over the brute force search varying between one and several orders of magnitude on natural machine learning datasets.",
"title": ""
},
{
"docid": "a86840c1c1c6bef15889fd0e62815402",
"text": "The Web offers a corpus of over 100 million tables [6], but the meaning of each table is rarely explicit from the table itself. Header rows exist in few cases and even when they do, the attribute names are typically useless. We describe a system that attempts to recover the semantics of tables by enriching the table with additional annotations. Our annotations facilitate operations such as searching for tables and finding related tables. To recover semantics of tables, we leverage a database of class labels and relationships automatically extracted from the Web. The database of classes and relationships has very wide coverage, but is also noisy. We attach a class label to a column if a sufficient number of the values in the column are identified with that label in the database of class labels, and analogously for binary relationships. We describe a formal model for reasoning about when we have seen sufficient evidence for a label, and show that it performs substantially better than a simple majority scheme. We describe a set of experiments that illustrate the utility of the recovered semantics for table search and show that it performs substantially better than previous approaches. In addition, we characterize what fraction of tables on the Web can be annotated using our approach.",
"title": ""
},
{
"docid": "bd0b0cef8ef780a44ad92258ac705395",
"text": "This chapter introduces some of the theoretical foundations of swarm intelligence. We focus on the design and implementation of the Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO) algorithms for various types of function optimization problems, real world applications and data mining. Results are analyzed, discussed and their potentials are illustrated.",
"title": ""
},
{
"docid": "ddd3f4e9bf77a65c7b183d04905e1b68",
"text": "The immune system is built to defend an organism against both known and new attacks, and functions as an adaptive distributed defense system. Artificial Immune Systems abstract the structure of immune systems to incorporate memory, fault detection and adaptive learning. We propose an immune system based real time intrusion detection system using unsupervised clustering. The model consists of two layers: a probabilistic model based T-cell algorithm which identifies possible attacks, and a decision tree based B-cell model which uses the output from T-cells together with feature information to confirm true attacks. The algorithm is tested on the KDD 99 data, where it achieves a low false alarm rate while maintaining a high detection rate. This is true even in case of novel attacks,which is a significant improvement over other algorithms.",
"title": ""
},
{
"docid": "cb408e52b5e96669e08f70888b11b3e3",
"text": "Centrality is one of the most studied concepts in social network analysis. There is a huge literature regarding centrality measures, as ways to identify the most relevant users in a social network. The challenge is to find measures that can be computed efficiently, and that can be able to classify the users according to relevance criteria as close as possible to reality. We address this problem in the context of the Twitter network, an online social networking service with millions of users and an impressive flow of messages that are published and spread daily by interactions between users. Twitter has different types of users, but the greatest utility lies in finding the most influential ones. The purpose of this article is to collect and classify the different Twitter influence measures that exist so far in literature. These measures are very diverse. Some are based on simple metrics provided by the Twitter API, while others are based on complex mathematical models. Several measures are based on the PageRank algorithm, traditionally used to rank the websites on the Internet. Some others consider the timeline of publication, others the content of the messages, some are focused on specific topics, and others try to make predictions. We consider all these aspects, and some additional ones. Furthermore, we include measures of activity and popularity, the traditional mechanisms to correlate measures, and some important aspects of computational complexity for this particular context.",
"title": ""
},
{
"docid": "f0af49e37fa37cf74c79f6903ae05748",
"text": "We show that early vision can use monocular cues to rapidly complete partially-occluded objects. Visual search for easily-detected fragments becomes difficult when the completed shape is similar to others in the display; conversely, search for fragments that are difficult to detect becomes easy when the completed shape is distinctive. Results indicate that completion occurs via the occlusion-triggered removal of occlusion edges and linking of associated regions. We fail to find evidence for a visible filling-in of contours or surfaces, but do find evidence for a 'functional' filling-in that prevents the constituent fragments from being rapidly accessed. As such, it is only the completed structures--and not the fragments themselves--that serve as the basis for rapid recognition.",
"title": ""
},
{
"docid": "48568865b27e8edb88d4683e702dd4f8",
"text": "This study investigates how individuals process an online product review when an avatar is included to represent the peer reviewer. The researchers predicted that both perceived avatar and textual credibility would have a positive influence on perceptions of source trustworthiness and the data supported this prediction. Expectancy violations theory also predicted that discrepancies between the perceived avatar and textual credibility would produce violations. Violations were statistically captured using a residual analysis. The results of this research ultimately demonstrated that discrepancies in perceived avatar and textual credibility can have a significant impact on perceptions of source trustworthiness. These findings suggest that predicting perceived source trustworthiness in an online consumer review setting goes beyond the linear effects of avatar and textual credibility.",
"title": ""
},
{
"docid": "76849958320dde148b7dadcb6113d9d3",
"text": "Numerous recent approaches attempt to remove image blur due to camera shake, either with one or multiple input images, by explicitly solving an inverse and inherently ill-posed deconvolution problem. If the photographer takes a burst of images, a modality available in virtually all modern digital cameras, we show that it is possible to combine them to get a clean sharp version. This is done without explicitly solving any blur estimation and subsequent inverse problem. The proposed algorithm is strikingly simple: it performs a weighted average in the Fourier domain, with weights depending on the Fourier spectrum magnitude. The method's rationale is that camera shake has a random nature and therefore each image in the burst is generally blurred differently. Experiments with real camera data show that the proposed Fourier Burst Accumulation algorithm achieves state-of-the-art results an order of magnitude faster, with simplicity for on-board implementation on camera phones.",
"title": ""
},
{
"docid": "ac41c57bcb533ab5dabcc733dd69a705",
"text": "In this paper we propose two ways to deal with the imbalanced data classification problem using random forest. One is based on cost sensitive learning, and the other is based on a sampling technique. Performance metrics such as precision and recall, false positive rate and false negative rate, F-measure and weighted accuracy are computed. Both methods are shown to improve the prediction accuracy of the minority class, and have favorable performance compared to the existing algorithms.",
"title": ""
},
{
"docid": "f1ebd840092228e48a3ab996287e7afd",
"text": "Negative emotions are reliably associated with poorer health (e.g., Kiecolt-Glaser, McGuire, Robles, & Glaser, 2002), but only recently has research begun to acknowledge the important role of positive emotions for our physical health (Fredrickson, 2003). We examine the link between dispositional positive affect and one potential biological pathway between positive emotions and health-proinflammatory cytokines, specifically levels of interleukin-6 (IL-6). We hypothesized that greater trait positive affect would be associated with lower levels of IL-6 in a healthy sample. We found support for this hypothesis across two studies. We also explored the relationship between discrete positive emotions and IL-6 levels, finding that awe, measured in two different ways, was the strongest predictor of lower levels of proinflammatory cytokines. These effects held when controlling for relevant personality and health variables. This work suggests a potential biological pathway between positive emotions and health through proinflammatory cytokines.",
"title": ""
},
{
"docid": "27ddea786e06ffe20b4f526875cdd76b",
"text": "It , is generally unrecognized that Sigmund Freud's contribution to the scientific understanding of dreams derived from a radical reorientation to the dream experience. During the nineteenth century, before publication of The Interpretation of Dreams, the presence of dreaming was considered by the scientific community as a manifestation of mental activity during sleep. The state of sleep was given prominence as a factor accounting for the seeming lack of organization and meaning to the dream experience. Thus, the assumed relatively nonpsychological sleep state set the scientific stage for viewing the nature of the dream. Freud radically shifted the context. He recognized-as myth, folklore, and common sense had long understood-that dreams were also linked with the psychology of waking life. This shift in orientation has proved essential for our modern view of dreams and dreaming. Dreams are no longer dismissed as senseless notes hit at random on a piano keyboard by an untrained player. Dreams are now recognized as psychologically significant and meaningful expressions of the life of the dreamer, albeit expressed in disguised and concealed forms. (For a contrasting view, see AcFIIa ION_sYNTHESIS xxroTESis .) Contemporary Dream Research During the past quarter-century, there has been increasing scientific interest in the process of dreaming. A regular sleep-wakefulness cycle has been discovered, and if experimental subjects are awakened during periods of rapid eye movements (REM periods), they will frequently report dreams. In a typical night, four or five dreams occur during REM periods, accompanied by other signs of physiological activation, such as increased respiratory rate, heart rate, and penile and clitoral erection. Dreams usually last for the duration of the eye movements, from about 10 to 25 minutes. Although dreaming usually occurs in such regular cycles ;.dreaming may occur at other times during sleep, as well as during hypnagogic (falling asleep) or hypnopompic .(waking up) states, when REMs are not present. The above findings are discoveries made since the monumental work of Freud reported in The Interpretation of Dreams, and .although of great interest to the study of the mind-body problem, these .findings as yet bear only a peripheral relationship to the central concerns of the psychology of dream formation, the meaning of dream content, the dream as an approach to a deeper understanding of emotional life, and the use of the dream in psychoanalytic treatment .",
"title": ""
},
{
"docid": "444bcff9a7fdcb80041aeb01b8724eed",
"text": "The morphologic anatomy of the liver is described as 2 main and 2 accessory lobes. The more recent functional anatomy of the liver is based on the distribution of the portal pedicles and the location of the hepatic veins. The liver is divided into 4 sectors, some of them composed of 2 segments. In all, there are 8 segments. According to the anatomy, typical hepatectomies (or “réglées”) are those which are performed along anatomical scissurae. The 2 main technical conceptions of typical hepatectomies are those with preliminary vascular control (Lortat-Jacob's technique) and hepatectomies with primary parenchymatous transection (Ton That Tung's technique). A good knowledge of the anatomy of the liver is a prerequisite for anatomical surgery of this organ. L'anatomie morphologique du foie permet d'individualiser 2 lobes principaux et 2 lobes accessoires. L'anatomie fonctionnelle du foie, plus récemment décrite, est fondée sur la distribution des pédicules portaux et sur la localisation des veines sus-hépatiques. Le foie est divisé en 4 secteurs, eux-mÊmes composés en général de 2 segments. Au total, il y a 8 segments. Selon les données anatomiques, les hépatectomies typiques (ou réglées) sont celles qui sont réalisées le long des scissures anatomiques. Les deux conceptions principales des exérèses hépatiques typiques sont, du point de vue technique, les hépatectomies avec contrÔle vasculaire préalable (technique de Lortat-Jacob) et les hépatectomies avec abord transparenchymateux premier (technique de Ton That Tung). Une connaissance approfondie de l'anatomie du foie est une condition préalable à la réalisation d'une chirurgie anatomique de cet organe.",
"title": ""
},
{
"docid": "f734f6059c849c88e5b53d3584bf0a97",
"text": "In three studies (two representative nationwide surveys, N = 1,007, N = 682; and one experimental, N = 76) we explored the effects of exposure to hate speech on outgroup prejudice. Following the General Aggression Model, we suggest that frequent and repetitive exposure to hate speech leads to desensitization to this form of verbal violence and subsequently to lower evaluations of the victims and greater distancing, thus increasing outgroup prejudice. In the first survey study, we found that lower sensitivity to hate speech was a positive mediator of the relationship between frequent exposure to hate speech and outgroup prejudice. In the second study, we obtained a crucial confirmation of these effects. After desensitization training individuals were less sensitive to hate speech and more prejudiced toward hate speech victims than their counterparts in the control condition. In the final study, we replicated several previous effects and additionally found that the effects of exposure to hate speech on prejudice were mediated by a lower sensitivity to hate speech, and not by lower sensitivity to social norms. Altogether, our studies are the first to elucidate the effects of exposure to hate speech on outgroup prejudice.",
"title": ""
},
{
"docid": "d6a0dbdfda18a11e3a39d3f27e915426",
"text": "Concepts embody the knowledge to facilitate our cognitive processes of learning. Mapping short texts to a large set of open domain concepts has gained many successful applications. In this paper, we unify the existing conceptualization methods from a Bayesian perspective, and discuss the three modeling approaches: descriptive, generative, and discriminative models. Motivated by the discussion of their advantages and shortcomings, we develop a generative + descriptive modeling approach. Our model considers term relatedness in the context, and will result in disambiguated conceptualization. We show the results of short text clustering using a news title data set and a Twitter message data set, and demonstrate the effectiveness of the developed approach compared with the state-of-the-art conceptualization and topic modeling approaches.",
"title": ""
}
] |
scidocsrr
|
03be7f6e3f8b9acff302b1a0cf206146
|
Exploring STT-MRAM Based In-Memory Computing Paradigm with Application of Image Edge Extraction
|
[
{
"docid": "d59023b9644186a81c3bb5aa8f4254fd",
"text": "This paper proposes a dual (1R/1W) port spin-orbit torque magnetic random access memory (1R/1W SOT-MRAM) for energy efficient on-chip cache applications. Our proposed dual port memory can alleviate the impact of write latency on system performance by supporting simultaneous read and write accesses. The spin-orbit device leverages the high spin current injection efficiency of spin Hall metal to achieve low critical switching current to program a magnetic tunnel junction. The low write current reduces the write power consumption, and the size of the access transistors, leading to higher integration density. Furthermore, the decoupled read and write current paths of the spin-orbit device improves oxide barrier reliability, because the write current does not flow through the oxide barrier. Device, circuit, and system level co-simulations show that a 1R/1W SOT-MRAM based L2 cache can improve the performance and energy-efficiency of the computing systems compared to SRAM and standard STT-MRAM based L2 caches.",
"title": ""
}
] |
[
{
"docid": "0e83d6d4ba37a6262c464ade8b29f157",
"text": "We propose a novel approach for instance segmentation given an image of homogeneous object cluster (HOC). Our learning approach is one-shot because a single video of an object instance is captured and it requires no human annotation. Our intuition is that images of homogeneous objects can be effectively synthesized based on structure and illumination priors derived from real images. A novel solver is proposed that iteratively maximizes our structured likelihood to generate realistic images of HOC. Illumination transformation scheme is applied to make the real and synthetic images share the same illumination condition. Extensive experiments and comparisons are performed to verify our method. We build a dataset consisting of pixel-level annotated images of HOC. The dataset and code will be published with the paper.",
"title": ""
},
{
"docid": "0a66ced2f77134e7252d63843f59bfed",
"text": "We study the extent to which online social networks can be connected to knowledge bases. The problem is referred to as learning social knowledge graphs. We propose a multi-modal Bayesian embedding model, GenVector, to learn latent topics that generate word embeddings and network embeddings simultaneously. GenVector leverages large-scale unlabeled data with embeddings and represents data of two modalities—i.e., social network users and knowledge concepts—in a shared latent topic space. Experiments on three datasets show that the proposed method clearly outperforms state-of-the-art methods. We then deploy the method on AMiner, an online academic search system to connect with a network of 38,049,189 researchers with a knowledge base with 35,415,011 concepts. Our method significantly decreases the error rate of learning social knowledge graphs in an online A/B test with live users.",
"title": ""
},
{
"docid": "a2238524731bf855a1edb9ad874740a6",
"text": "Lack of trust has been identified as a major obstacle to the adoption of online shopping. However, there is paucity of research that investigates the effectiveness of various trust building mechanisms, especially the interactions amongst these mechanisms. In this study, three trust building mechanisms (i.e., third-party certification, reputation, and return policy) were examined. Scenario survey method was used for data collection. 463 usable questionnaires were collected from respondents with diverse backgrounds. Regression results show that all three trust building mechanisms have significant positive effects on trust in the online vendor. Their effects are not simple ones; the different trust building mechanisms interact with one another to produce an overall effect on the level of trust. These results have both theoretical and practical implications.",
"title": ""
},
{
"docid": "90b56b9168a02e20f42449ffa84f35b6",
"text": "Malignant melanomas of the penis and urethra are rare.1–7 These lesions typically appear in an older age group.1–3,5–8 Presentation is usually late,1,6 with thick primary lesions9 and a high incidence of regional metastatic disease.1,5,8 Little is known about risk factors for or the pathogenesis of this disease. Furthermore, there is lack of consensus as to the extent of treatment that is indicated. The authors present a case of multifocal melanoma of the glans penis that has not been previously described. A review of the literature with emphasis on the pathogenesis and treatment of melanoma of the penis and urethra is discussed. Further efforts are needed to identify those at increased risk so that earlier diagnosis, surgical intervention, and an improved prognosis for those afflicted with this disease will be possible in the future.",
"title": ""
},
{
"docid": "d89c7f2f24679d357032480f09ac1711",
"text": "Psychologists distinguish between extrinsically motivated behavior, which is behavior undertaken to achieve some externally supplied reward, such as a prize, a high grade, or a high-paying job, and intrinsically motivated behavior, which is behavior done for its own sake. Is an analogous distinction meaningful for machine learning systems? Can we say of a machine learning system that it is motivated to learn, and if so, is it possible to provide it with an analog of intrinsic motivation? Despite the fact that a formal distinction between extrinsic and intrinsic motivation is elusive, this chapter argues that the answer to both questions is assuredly “yes” and that the machine learning framework of reinforcement learning is particularly appropriate for bringing learning together with what in animals one would call motivation. Despite the common perception that a reinforcement learning agent’s reward has to be extrinsic because the agent has a distinct input channel for reward signals, reinforcement learning provides a natural framework for incorporating principles of intrinsic motivation.",
"title": ""
},
{
"docid": "fec345f9a3b2b31bd76507607dd713d4",
"text": "E-government is a relatively new branch of study within the Information Systems (IS) field. This paper examines the factors influencing adoption of e-government services by citizens. Factors that have been explored in the extant literature present inadequate understanding of the relationship that exists between ‘adopter characteristics’ and ‘behavioral intention’ to use e-government services. These inadequacies have been identified through a systematic and thorough review of empirical studies that have considered adoption of government to citizen (G2C) electronic services by citizens. This paper critically assesses key factors that influence e-government service adoption; reviews limitations of the research methodologies; discusses the importance of 'citizen characteristics' and 'organizational factors' in adoption of e-government services; and argues for the need to examine e-government service adoption in the developing world.",
"title": ""
},
{
"docid": "0ce767b0e1495da25fead35a939cc6fc",
"text": "We seek to detect visual relations in images of the form of triplets t = (subject, predicate, object), such as ‘person riding dog’, where training examples of the individual entities are available but their combinations are rare or unseen at training. This is an important set-up due to the combinatorial nature of visual relations : collecting sufficient training data for all possible triplets would be very hard. The contributions of this work are three-fold. First, we learn a representation of visual relations that combines (i) individual embeddings for subject, object and predicate together with (ii) a visual phrase embedding that represents the relation triplet. Second, we learn how to transfer visual phrase embeddings from existing training triplets to unseen test triplets using analogies between relations that involve similar objects. Third, we demonstrate the benefits of our approach on two challenging datasets involving rare and unseen relations : on HICO-DET, our model achieves significant improvement over a strong baseline, and we confirm this improvement on retrieval of unseen triplets on the UnRel rare relation dataset.",
"title": ""
},
{
"docid": "007816416b09d8e37cb8d22f96355e9b",
"text": "Irregular or infrequent voiding due to avoiding school toilets can contribute to a number of urinary problems among school children. There is, however, a lack of studies on younger women. The aim of this study was to investigate toileting behavior and the correlation to lower urinary tract symptoms (LUTS) among young women (age 18–25 years). A further aim was to validate the Swedish version of the Toileting Behavior scale (TB scale). Quantitative descriptive design was used with two questionnaires: the International Consultation on Incontinence Questionnaire Female Lower Urinary Tract Symptoms (ICIQ-FLUTS) and the TB scale, together with six background questions. The questionnaires were distributed in November 2014 to 550 women aged 18–25 years randomly selected from the population register in southern Sweden. A total of 173 (33%) women responded. Mean age was 21.6 years (range 18–25). The Swedish version of TB scale showed good construct validity and reliability, similar to the original. Most toileting behavior was significantly correlated with LUTS, which were common, as 34.2% reported urgency and 35.9% urine leakage at least sometimes or more often. LUTS were quite common in this group of young women. Toileting behaviors were also significantly related to urinary tract symptoms. Thus, TB scale was useful in this population, and the translated Swedish version showed good construct validity and reliability.",
"title": ""
},
{
"docid": "3f6cbad208a819fc8fc6a46208197d59",
"text": "The use of visemes as atomic speech units in visual speech analysis and synthesis systems is well-established. Viseme labels are determined using a many-to-one phoneme-to-viseme mapping. However, due to the visual coarticulation effects, an accurate mapping from phonemes to visemes should define a many-to-many mapping scheme. In this research it was found that neither the use of standardized nor speaker-dependent many-to-one viseme labels could satisfy the quality requirements of concatenative visual speech synthesis. Therefore, a novel technique to define a many-to-many phoneme-to-viseme mapping scheme is introduced, which makes use of both treebased and k-means clustering approaches. We show that these many-to-many viseme labels more accurately describe the visual speech information as compared to both phoneme-based and many-toone viseme-based speech labels. In addition, we found that the use of these many-to-many visemes improves the precision of the segment selection phase in concatenative visual speech synthesis using limited speech databases. Furthermore, the resulting synthetic visual speech was both objectively and subjectively found to be of higher quality when the many-to-many visemes are used to describe the speech database as well as the synthesis targets.",
"title": ""
},
{
"docid": "a79d4b0a803564f417236f2450658fe0",
"text": "Dimensionality reduction has attracted increasing attention, because high-dimensional data have arisen naturally in numerous domains in recent years. As one popular dimensionality reduction method, nonnegative matrix factorization (NMF), whose goal is to learn parts-based representations, has been widely studied and applied to various applications. In contrast to the previous approaches, this paper proposes a novel semisupervised NMF learning framework, called robust structured NMF, that learns a robust discriminative representation by leveraging the block-diagonal structure and the <inline-formula> <tex-math notation=\"LaTeX\">$\\ell _{2,p}$ </tex-math></inline-formula>-norm (especially when <inline-formula> <tex-math notation=\"LaTeX\">$0<p\\leq 1$ </tex-math></inline-formula>) loss function. Specifically, the problems of noise and outliers are well addressed by the <inline-formula> <tex-math notation=\"LaTeX\">$\\ell _{2,p}$ </tex-math></inline-formula>-norm (<inline-formula> <tex-math notation=\"LaTeX\">$0<p\\leq 1$ </tex-math></inline-formula>) loss function, while the discriminative representations of both the labeled and unlabeled data are simultaneously learned by explicitly exploring the block-diagonal structure. The proposed problem is formulated as an optimization problem with a well-defined objective function solved by the proposed iterative algorithm. The convergence of the proposed optimization algorithm is analyzed both theoretically and empirically. In addition, we also discuss the relationships between the proposed method and some previous methods. Extensive experiments on both the synthetic and real-world data sets are conducted, and the experimental results demonstrate the effectiveness of the proposed method in comparison to the state-of-the-art methods.",
"title": ""
},
{
"docid": "6c75e0532f637448cdec57bf30e76a4e",
"text": "A wide range of machine learning problems, including astronomical inference about galaxy clusters, natural image scene classification, parametric statistical inference, and predictions of public opinion, can be well-modeled as learning a function on (samples from) distributions. This thesis explores problems in learning such functions via kernel methods. The first challenge is one of computational efficiency when learning from large numbers of distributions: the computation of typicalmethods scales between quadratically and cubically, and so they are not amenable to large datasets. We investigate the approach of approximate embeddings into Euclidean spaces such that inner products in the embedding space approximate kernel values between the source distributions. We present a new embedding for a class of information-theoretic distribution distances, and evaluate it and existing embeddings on several real-world applications. We also propose the integration of these techniques with deep learning models so as to allow the simultaneous extraction of rich representations for inputs with the use of expressive distributional classifiers. In a related problem setting, common to astrophysical observations, autonomous sensing, and electoral polling, we have the following challenge: when observing samples is expensive, but we can choose where we would like to do so, how do we pick where to observe? We propose the development of a method to do so in the distributional learning setting (which has a natural application to astrophysics), as well as giving a method for a closely related problem where we search for instances of patterns by making point observations. Our final challenge is that the choice of kernel is important for getting good practical performance, but how to choose a good kernel for a given problem is not obvious. We propose to adapt recent kernel learning techniques to the distributional setting, allowing the automatic selection of good kernels for the task at hand. Integration with deep networks, as previously mentioned, may also allow for learning the distributional distance itself. Throughout, we combine theoretical results with extensive empirical evaluations to increase our understanding of the methods.",
"title": ""
},
{
"docid": "8c1c9ba389d0e76f1dfafedcb8e3e095",
"text": "Recommender system has become an effective tool for information filtering, which usually provides the most useful items to users by a top-k ranking list. Traditional recommendation techniques such as Nearest Neighbors (NN) and Matrix Factorization (MF) have been widely used in real recommender systems. However, neither approaches can well accomplish recommendation task since that: (1) most NN methods leverage the neighbor's behaviors for prediction, which may suffer the severe data sparsity problem; (2) MF methods are less sensitive to sparsity, but neighbors' influences on latent factors are not fully explored, since the latent factors are often used independently. To overcome the above problems, we propose a new framework for recommender systems, called collaborative factorization. It expresses the user as the combination of his own factors and those of the neighbors', called collaborative latent factors, and a ranking loss is then utilized for optimization. The advantage of our approach is that it can both enjoy the merits of NN and MF methods. In this paper, we take the logistic loss in RankNet and the likelihood loss in ListMLE as examples, and the corresponding collaborative factorization methods are called CoF-Net and CoF-MLE. Our experimental results on three benchmark datasets show that they are more effective than several state-of-the-art recommendation methods.",
"title": ""
},
{
"docid": "a086686928333e06592cd901e8a346bd",
"text": "BACKGROUND\nClosed-loop artificial pancreas device (APD) systems are externally worn medical devices that are being developed to enable people with type 1 diabetes to regulate their blood glucose levels in a more automated way. The innovative concept of this emerging technology is that hands-free, continuous, glycemic control can be achieved by using digital communication technology and advanced computer algorithms.\n\n\nMETHODS\nA horizon scanning review of this field was conducted using online sources of intelligence to identify systems in development. The systems were classified into subtypes according to their level of automation, the hormonal and glycemic control approaches used, and their research setting.\n\n\nRESULTS\nEighteen closed-loop APD systems were identified. All were being tested in clinical trials prior to potential commercialization. Six were being studied in the home setting, 5 in outpatient settings, and 7 in inpatient settings. It is estimated that 2 systems may become commercially available in the EU by the end of 2016, 1 during 2017, and 2 more in 2018.\n\n\nCONCLUSIONS\nThere are around 18 closed-loop APD systems progressing through early stages of clinical development. Only a few of these are currently in phase 3 trials and in settings that replicate real life.",
"title": ""
},
{
"docid": "5ba86ad281d4cce3c59b949810e5430b",
"text": "This paper presents a review of different classification techniques used to recognize human activities from wearable inertial sensor data. Three inertial sensor units were used in this study and were worn by healthy subjects at key points of upper/lower body limbs (chest, right thigh and left ankle). Three main steps describe the activity recognition process: sensors' placement, data pre-processing and data classification. Four supervised classification techniques namely, k-Nearest Neighbor (k-NN), Support Vector Machines (SVM), Gaussian Mixture Models (GMM), and Random Forest (RF) as well as three unsupervised classification techniques namely, k-Means, Gaussian mixture models (GMM) and Hidden Markov Model (HMM), are compared in terms of correct classification rate, F-measure, recall, precision, and specificity. Raw data and extracted features are used separately as inputs of each classifier. The feature selection is performed using a wrapper approach based on the RF algorithm. Based on our experiments, the results obtained show that the k-NN classifier provides the best performance compared to other supervised classification algorithms, whereas the HMM classifier is the one that gives the best results among unsupervised classification algorithms. This comparison highlights which approach gives better performance in both supervised and unsupervised contexts. It should be noted that the obtained results are limited to the context of this study, which concerns the classification of the main daily living human activities using three wearable accelerometers placed at the chest, right shank and left ankle of the subject.",
"title": ""
},
{
"docid": "471471cfc90e7f212dd7bbbee08d714e",
"text": "Every year, a large number of children in the United States enter the foster care system. Many of them are eventually reunited with their biological parents or quickly adopted. A significant number, however, face long-term foster care, and some of these children are eventually adopted by their foster parents. The decision by foster parents to adopt their foster child carries significant economic consequences, including forfeiting foster care payments while also assuming responsibility for medical, legal, and educational expenses, to name a few. Since 1980, U.S. states have begun to offer adoption subsidies to offset some of these expenses, significantly lowering the cost of adopting a child who is in the foster care system. This article presents empirical evidence of the role that these economic incentives play in foster parents’ decision of when, or if, to adopt their foster child. We find that adoption subsidies increase adoptions through two distinct price mechanisms: by lowering the absolute cost of adoption, and by lowering the relative cost of adoption versus long-term foster care.",
"title": ""
},
{
"docid": "b05f2cc1590857e7a50d54f6201c8f82",
"text": "Holograms display a 3D image in high resolution and allow viewers to focus freely as if looking through a virtual window, yet computer generated holography (CGH) hasn't delivered the same visual quality under plane wave illumination and due to heavy computational cost. Light field displays have been popular due to their capability to provide continuous focus cues. However, light field displays must trade off between spatial and angular resolution, and do not model diffraction.\n We present a light field-based CGH rendering pipeline allowing for reproduction of high-definition 3D scenes with continuous depth and support of intra-pupil view-dependent occlusion. Our rendering accurately accounts for diffraction and supports various types of reference illuminations for hologram. We avoid under- and over-sampling and geometric clipping effects seen in previous work. We also demonstrate an implementation of light field rendering plus the Fresnel diffraction integral based CGH calculation which is orders of magnitude faster than the state of the art [Zhang et al. 2015], achieving interactive volumetric 3D graphics.\n To verify our computational results, we build a see-through, near-eye, color CGH display prototype which enables co-modulation of both amplitude and phase. We show that our rendering accurately models the spherical illumination introduced by the eye piece and produces the desired 3D imagery at the designated depth. We also analyze aliasing, theoretical resolution limits, depth of field, and other design trade-offs for near-eye CGH.",
"title": ""
},
{
"docid": "2e42ab12b43022d22b9459cfaea6f436",
"text": "Treemaps provide an interesting solution for representing hierarchical data. However, most studies have mainly focused on layout algorithms and paid limited attention to the interaction with treemaps. This makes it difficult to explore large data sets and to get access to details, especially to those related to the leaves of the trees. We propose the notion of zoomable treemaps (ZTMs), an hybridization between treemaps and zoomable user interfaces that facilitates the navigation in large hierarchical data sets. By providing a consistent set of interaction techniques, ZTMs make it possible for users to browse through very large data sets (e.g., 700,000 nodes dispatched amongst 13 levels). These techniques use the structure of the displayed data to guide the interaction and provide a way to improve interactive navigation in treemaps.",
"title": ""
},
{
"docid": "745cdbb442c73316f691dc20cc696f31",
"text": "Computer-generated texts, whether from Natural Language Generation (NLG) or Machine Translation (MT) systems, are often post-edited by humans before being released to users. The frequency and type of post-edits is a measure of how well the system works, and can be used for evaluation. We describe how we have used post-edit data to evaluate SUMTIME-MOUSAM, an NLG system that produces weather forecasts.",
"title": ""
},
{
"docid": "f489708f15f3e5cdd15f669fb9979488",
"text": "Humans learn to play video games significantly faster than state-of-the-art reinforcement learning (RL) algorithms. Inspired by this, we introduce strategic object oriented reinforcement learning (SOORL) to learn simple dynamics model through automatic model selection and perform efficient planning with strategic exploration. We compare different exploration strategies in a model-based setting in which exact planning is impossible. Additionally, we test our approach on perhaps the hardest Atari game Pitfall! and achieve significantly improved exploration and performance over prior methods.",
"title": ""
},
{
"docid": "7435d1591725bbcd86fe93c607d5683c",
"text": "This study evaluated the role of breast magnetic resonance (MR) imaging in the selective study breast implant integrity. We retrospectively analysed the signs of breast implant rupture observed at breast MR examinations of 157 implants and determined the sensitivity and specificity of the technique in diagnosing implant rupture by comparing MR data with findings at surgical explantation. The linguine and the salad-oil signs were statistically the most significant signs for diagnosing intracapsular rupture; the presence of siliconomas/seromas outside the capsule and/or in the axillary lymph nodes calls for immediate explantation. In agreement with previous reports, we found a close correlation between imaging signs and findings at explantation. Breast MR imaging can be considered the gold standard in the study of breast implants. Scopo del nostro lavoro è stato quello di valutare il ruolo della risonanza magnetica (RM) mammaria nello studio selettivo dell’integrità degli impianti protesici. è stata eseguita una valutazione retrospettiva dei segni di rottura documentati all’esame RM effettuati su 157 protesi mammarie, al fine di stabilire la sensibilità e specificità nella diagnosi di rottura protesica, confrontando tali dati RM con i reperti riscontrati in sala operatoria dopo la rimozione della protesi stessa. Il linguine sign e il salad-oil sign sono risultati i segni statisticamente più significativi nella diagnosi di rottura protesica intracapsulare; la presenza di siliconomi/sieromi extracapsulari e/o nei linfonodi ascellari impone l’immediato intervento chirurgico di rimozione della protesi rotta. I dati ottenuti dimostrano, in accordo con la letteratura, una corrispondenza tra i segni dell’imaging e i reperti chirurgici, confermando il ruolo di gold standard della RM nello studio delle protesi mammarie.",
"title": ""
}
] |
scidocsrr
|
cbcc696e3af1af8899b4958e67ba6741
|
Towards rain detection through use of in-vehicle multipurpose cameras
|
[
{
"docid": "2ee5e5ecd9304066b12771f3349155f8",
"text": "An intelligent wiper speed adjustment system can be found in most middle and upper class cars. A core piece of this gadget is the rain sensor on the windshield. With the upcoming number of cars being equipped with an in-vehicle camera for vision-based applications the call for integrating all sensors in the area of the rearview mirror into one device rises to reduce the number of parts and variants. In this paper, functionality of standard rain sensors and different vision-based approaches are explained and a novel rain sensing concept based on an automotive in-vehicle camera for Driver Assistance Systems (DAS) is developed to enhance applicability. Hereby, the region at the bottom of the field of view (FOV) of the imager is used to detect raindrops, while the upper part of the image is still usable for other vision-based applications. A simple algorithm is set up to keep the additional processing time low and to quantitatively gather the rain intensity. Mechanisms to avoid false activations of the wipers are introduced. First experimental experiences based on real scenarios show promising results.",
"title": ""
}
] |
[
{
"docid": "ea1a56c7bcf4871d1c6f2f9806405827",
"text": "—Prior to the successful use of non-contact photoplethysmography, several engineering issues regarding this monitoring technique must be considered. These issues include ambient light and motion artefacts, the wide dynamic signal range and the effect of direct light source coupling. The latter issue was investigated and preliminary results show that direct coupling can cause attenuation of the detected PPG signal. It is shown that a physical offset can be introduced between the light source and the detector in order to reduce this effect.",
"title": ""
},
{
"docid": "a4b57037235e306034211e07e8500399",
"text": "As wireless devices boom and bandwidth-hungry applications (e.g., video and cloud uploading) get popular, today's wireless local area networks (WLANs) become not only crowded but also stressed at throughput. Multiuser multiple-input-multiple-output (MU-MIMO), an advanced form of MIMO, has gained attention due to its huge potential in improving the performance of WLANs. This paper surveys random access-based medium access control (MAC) protocols for MU-MIMO-enabled WLANs. It first provides background information about the evolution and the fundamental MAC schemes of IEEE 802.11 Standards and Amendments, and then identifies the key requirements of designing MU-MIMO MAC protocols for WLANs. After this, the most representative MU-MIMO MAC proposals in the literature are overviewed by benchmarking their MAC procedures and examining the key components, such as the channel state information acquisition, decoding/precoding, and scheduling schemes. Classifications and discussions on important findings of the surveyed MAC protocols are provided, based on which, the research challenges for designing effective MU-MIMO MAC protocols, as well as the envisaged MAC's role in the future heterogeneous networks, are highlighted.",
"title": ""
},
{
"docid": "f5d25ff18b9a5308fe45a2fe3e8c9ff8",
"text": "Synopsis: Using B cells from patients with chronic lymphocytic leukemia (CLL), Nakahara and colleagues have produced a lamprey monoclonal antibody with CLL idiotope specificity that can be used for early detection of leukemia recurrence. Lamprey antibodies can be generated rapidly and offer a complementary approach to the use of classical Ig-based anti-idiotope antibodies in the monitoring and management of patients with CLL.",
"title": ""
},
{
"docid": "0209132c7623c540c125a222552f33ac",
"text": "This paper reviews the criticism on the 4Ps Marketing Mix framework, the most popular tool of traditional marketing management, and categorizes the main objections of using the model as the foundation of physical marketing. It argues that applying the traditional approach, based on the 4Ps paradigm, is also a poor choice in the case of virtual marketing and identifies two main limitations of the framework in online environments: the drastically diminished role of the Ps and the lack of any strategic elements in the model. Next to identifying the critical factors of the Web marketing, the paper argues that the basis for successful E-Commerce is the full integration of the virtual activities into the company’s physical strategy, marketing plan and organisational processes. The four S elements of the Web-Marketing Mix framework present a sound and functional conceptual basis for designing, developing and commercialising Business-to-Consumer online projects. The model was originally developed for educational purposes and has been tested and refined by means of field projects; two of them are presented as case studies in the paper. 2002 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "cf04d97cf2b0b8d1a8b1f4318c553856",
"text": "We analyze how network effects affect competition in the nascent cryptocurrency market. We do so by examining the changes over time in exchange rate data among cryptocurrencies. Specifically, we look at two aspects: (1) competition among different currencies, and (2) competition among exchanges where those currencies are traded. Our data suggest that the winner-take-all effect is dominant early in the market. During this period, when Bitcoin becomes more valuable against the U.S. dollar, it also becomes more valuable against other cryptocurrencies. This trend is reversed in the later period. The data in the later period are consistent with the use of cryptocurrencies as financial assets (popularized by Bitcoin), and not consistent with “winner-take-all”",
"title": ""
},
{
"docid": "6db6dccccbdcf77068ae4270a1d6b408",
"text": "In many engineering disciplines, abstract models are used to describe systems on a high level of abstraction. On this abstract level, it is often easier to gain insights about that system that is being described. When models of a system change – for example because the system itself has changed – any analyses based on these models have to be invalidated and thus have to be reevaluated again in order for the results to stay meaningful. In many cases, the time to get updated analysis results is critical. However, as most often only small parts of the model change, large parts of this reevaluation could be saved by using previous results but such an incremental execution is barely done in practice as it is non-trivial and error-prone. The approach of implicit incrementalization o ers a solution by deriving an incremental evaluation strategy implicitly from a batch speci cation of the analysis. This works by deducing a dynamic dependency graph that allows to only reevaluate those parts of an analysis that are a ected by a given model change. Thus advantages of an incremental execution can be gained without changes to the code that would potentially degrade its understandability. However, current approaches to implicit incremental computation only support narrow classes of analysis, are restricted to an incremental derivation at instruction level or require an explicit state management. In addition, changes are only propagated sequentially, meanwhile modern multi-core architectures would allow parallel change propagation. Even with such improvements, it is unclear whether incremental execution in fact brings advantages as changes may easily cause butter y e ects, making a reuse of previous analysis results pointless (i.e. ine cient). This thesis deals with the problems of implicit incremental model analyses by proposing multiple approaches that mostly can be combined. Further, the",
"title": ""
},
{
"docid": "554d0255aef7ffac9e923da5d93b97e3",
"text": "In this demo paper, we present a text simplification approach that is directed at improving the performance of state-of-the-art Open Relation Extraction (RE) systems. As syntactically complex sentences often pose a challenge for current Open RE approaches, we have developed a simplification framework that performs a pre-processing step by taking a single sentence as input and using a set of syntactic-based transformation rules to create a textual input that is easier to process for subsequently applied Open RE systems.",
"title": ""
},
{
"docid": "c393e229c735648e8469fe81014634a4",
"text": "Multivariate time series data are becoming increasingly common in numerous real world applications, e.g., power plant monitoring, health care, wearable devices, automobile, etc. As a result, multivariate time series retrieval, i.e., given the current multivariate time series segment, how to obtain its relevant time series segments in the historical data (or in the database), attracts significant amount of interest in many fields. Building such a system, however, is challenging since it requires a compact representation of the raw time series which can explicitly encode the temporal dynamics as well as the correlations (interactions) between different pairs of time series (sensors). Furthermore, it requires query efficiency and expects a returned ranking list with high precision on the top. Despite the fact that various approaches have been developed, few of them can jointly resolve these two challenges. To cope with this issue, in this paper we propose a Deep r-th root of Rank Supervised Joint Binary Embedding (Deep r-RSJBE) to perform multivariate time series retrieval. Given a raw multivariate time series segment, we employ Long Short-Term Memory (LSTM) units to encode the temporal dynamics and utilize Convolutional Neural Networks (CNNs) to encode the correlations (interactions) between different pairs of time series (sensors). Subsequently, a joint binary embedding is pursued to incorporate both the temporal dynamics and the correlations. Finally, we develop a novel r-th root ranking loss to optimize the precision at the top of a Hamming distance ranking list. Thoroughly empirical studies based upon three publicly available time series datasets demonstrate the effectiveness and the efficiency of Deep r-RSJBE.",
"title": ""
},
{
"docid": "d25a34b3208ee28f9cdcddb9adf46eb4",
"text": "1 Umeå University, Department of Computing Science, SE-901 87 Umeå, Sweden, {jubo,thomasj,marie}@cs.umu.se Abstract The transition to object-oriented programming is more than just a matter of programming language. Traditional syllabi fail to teach students the “big picture” and students have difficulties taking advantage of objectoriented concepts. In this paper we present a holistic approach to a CS1 course in Java favouring general objectoriented concepts over the syntactical details of the language. We present goals for designing such a course and a case study showing interesting results.",
"title": ""
},
{
"docid": "e5e2d26950e0a75014ffdbeabf55668e",
"text": "Agriculture is the most important sector that influences the economy of India. It contributes to 18% of India's Gross Domestic Product (GDP) and gives employment to 50% of the population of India. People of India are practicing Agriculture for years but the results are never satisfying due to various factors that affect the crop yield. To fulfill the needs of around 1.2 billion people, it is very important to have a good yield of crops. Due to factors like soil type, precipitation, seed quality, lack of technical facilities etc the crop yield is directly influenced. Hence, new technologies are necessary for satisfying the growing need and farmers must work smartly by opting new technologies rather than going for trivial methods. This paper focuses on implementing crop yield prediction system by using Data Mining techniques by doing analysis on agriculture dataset. Different classifiers are used namely J48, LWL, LAD Tree and IBK for prediction and then the performance of each is compared using WEKA tool. For evaluating performance Accuracy is used as one of the factors. The classifiers are further compared with the values of Root Mean Squared Error (RMSE), Mean Absolute Error (MAE) and Relative Absolute Error (RAE). Lesser the value of error, more accurate the algorithm will work. The result is based on comparison among the classifiers.",
"title": ""
},
{
"docid": "471eca6664d0ae8f6cdfb848bc910592",
"text": "Taxonomic relation identification aims to recognize the ‘is-a’ relation between two terms. Previous works on identifying taxonomic relations are mostly based on statistical and linguistic approaches, but the accuracy of these approaches is far from satisfactory. In this paper, we propose a novel supervised learning approach for identifying taxonomic relations using term embeddings. For this purpose, we first design a dynamic weighting neural network to learn term embeddings based on not only the hypernym and hyponym terms, but also the contextual information between them. We then apply such embeddings as features to identify taxonomic relations using a supervised method. The experimental results show that our proposed approach significantly outperforms other state-of-the-art methods by 9% to 13% in terms of accuracy for both general and specific domain datasets.",
"title": ""
},
{
"docid": "476aa14f6b71af480e8ab4747849d7e3",
"text": "The present study explored the relationship between risky cybersecurity behaviours, attitudes towards cybersecurity in a business environment, Internet addiction, and impulsivity. 538 participants in part-time or full-time employment in the UK completed an online questionnaire, with responses from 515 being used in the data analysis. The survey included an attitude towards cybercrime and cybersecurity in business scale, a measure of impulsivity, Internet addiction and a 'risky' cybersecurity behaviours scale. The results demonstrated that Internet addiction was a significant predictor for risky cybersecurity behaviours. A positive attitude towards cybersecurity in business was negatively related to risky cybersecurity behaviours. Finally, the measure of impulsivity revealed that both attentional and motor impulsivity were both significant positive predictors of risky cybersecurity behaviours, with non-planning being a significant negative predictor. The results present a further step in understanding the individual differences that may govern good cybersecurity practices, highlighting the need to focus directly on more effective training and awareness mechanisms.",
"title": ""
},
{
"docid": "b3bab7639acde03cbe12253ebc6eba31",
"text": "Autism spectrum disorder (ASD) is a wide-ranging collection of developmental diseases with varying symptoms and degrees of disability. Currently, ASD is diagnosed mainly with psychometric tools, often unable to provide an early and reliable diagnosis. Recently, biochemical methods are being explored as a means to meet the latter need. For example, an increased predisposition to ASD has been associated with abnormalities of metabolites in folate-dependent one carbon metabolism (FOCM) and transsulfuration (TS). Multiple metabolites in the FOCM/TS pathways have been measured, and statistical analysis tools employed to identify certain metabolites that are closely related to ASD. The prime difficulty in such biochemical studies comes from (i) inefficient determination of which metabolites are most important and (ii) understanding how these metabolites are collectively related to ASD. This paper presents a new method based on scores produced in Support Vector Machine (SVM) modeling combined with High Dimensional Model Representation (HDMR) sensitivity analysis. The new method effectively and efficiently identifies the key causative metabolites in FOCM/TS pathways, ranks their importance, and discovers their independent and correlative action patterns upon ASD. Such information is valuable not only for providing a foundation for a pathological interpretation but also for potentially providing an early, reliable diagnosis ideally leading to a subsequent comprehensive treatment of ASD. With only tens of SVM model runs, the new method can identify the combinations of the most important metabolites in the FOCM/TS pathways that lead to ASD. Previous efforts to find these metabolites required hundreds of thousands of model runs with the same data.",
"title": ""
},
{
"docid": "c227f76c42ae34af11193e3ecb224ecb",
"text": "Antibiotics and antibiotic resistance determinants, natural molecules closely related to bacterial physiology and consistent with an ancient origin, are not only present in antibiotic-producing bacteria. Throughput sequencing technologies have revealed an unexpected reservoir of antibiotic resistance in the environment. These data suggest that co-evolution between antibiotic and antibiotic resistance genes has occurred since the beginning of time. This evolutionary race has probably been slow because of highly regulated processes and low antibiotic concentrations. Therefore to understand this global problem, a new variable must be introduced, that the antibiotic resistance is a natural event, inherent to life. However, the industrial production of natural and synthetic antibiotics has dramatically accelerated this race, selecting some of the many resistance genes present in nature and contributing to their diversification. One of the best models available to understand the biological impact of selection and diversification are β-lactamases. They constitute the most widespread mechanism of resistance, at least among pathogenic bacteria, with more than 1000 enzymes identified in the literature. In the last years, there has been growing concern about the description, spread, and diversification of β-lactamases with carbapenemase activity and AmpC-type in plasmids. Phylogenies of these enzymes help the understanding of the evolutionary forces driving their selection. Moreover, understanding the adaptive potential of β-lactamases contribute to exploration the evolutionary antagonists trajectories through the design of more efficient synthetic molecules. In this review, we attempt to analyze the antibiotic resistance problem from intrinsic and environmental resistomes to the adaptive potential of resistance genes and the driving forces involved in their diversification, in order to provide a global perspective of the resistance problem.",
"title": ""
},
{
"docid": "6a33013c19dc59d8871e217461d479e9",
"text": "Cancer tissues in histopathology images exhibit abnormal patterns; it is of great clinical importance to label a histopathology image as having cancerous regions or not and perform the corresponding image segmentation. However, the detailed annotation of cancer cells is often an ambiguous and challenging task. In this paper, we propose a new learning method, multiple clustered instance learning (MCIL), to classify, segment and cluster cancer cells in colon histopathology images. The proposed MCIL method simultaneously performs image-level classification (cancer vs. non-cancer image), pixel-level segmentation (cancer vs. non-cancer tissue), and patch-level clustering (cancer subclasses). We embed the clustering concept into the multiple instance learning (MIL) setting and derive a principled solution to perform the above three tasks in an integrated framework. Experimental results demonstrate the efficiency and effectiveness of MCIL in analyzing colon cancers.",
"title": ""
},
{
"docid": "4faacdbc093ac8dea97403355f95e504",
"text": "What frameworks and architectures are necessary to create a vision system for AGI? In this paper, we propose a formal model that states the task of perception within AGI. We show the role of discriminative and generative models in achieving efficient and general solution of this task, thus specifying the task in more detail. We discuss some existing generative and discriminative models and demonstrate their insufficiency for our purposes. Finally, we discuss some architectural dilemmas and open questions.",
"title": ""
},
{
"docid": "72e4984c05e6b68b606775bbf4ce3b33",
"text": "This paper defines a generative probabilistic model of parse trees, which we call PCFG-LA. This model is an extension of PCFG in which non-terminal symbols are augmented with latent variables. Finegrained CFG rules are automatically induced from a parsed corpus by training a PCFG-LA model using an EM-algorithm. Because exact parsing with a PCFG-LA is NP-hard, several approximations are described and empirically compared. In experiments using the Penn WSJ corpus, our automatically trained model gave a performance of 86.6% (F , sentences 40 words), which is comparable to that of an unlexicalized PCFG parser created using extensive manual feature selection.",
"title": ""
},
{
"docid": "3a011bdec6531de3f0f9718f35591e52",
"text": "Since Markowitz (1952) formulated the portfolio selection problem, many researchers have developed models aggregating simultaneously several conflicting attributes such as: the return on investment, risk and liquidity. The portfolio manager generally seeks the best combination of stocks/assets that meets his/ her investment objectives. The Goal Programming (GP) model is widely applied to finance and portfolio management. The aim of this paper is to present the different variants of the GP model that have been applied to the financial portfolio selection problem from the 1970s to nowadays. 2013 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "ec93b4c61694916dd494e9376102726b",
"text": "In 1969 Barlow introduced the phrase economy of impulses to express the tendency for successive neural systems to use lower and lower levels of cell firings to produce equivalent encodings. From this viewpoint, the ultimate economy of impulses is a neural code of minimal redundancy. The hypothesis motivating our research is that energy expenditures, e.g., the metabolic cost of recovering from an action potential relative to the cost of inactivity, should also be factored into the economy of impulses. In fact, coding schemes with the largest representational capacity are not, in general, optimal when energy expenditures are taken into account. We show that for both binary and analog neurons, increased energy expenditure per neuron implies a decrease in average firing rate if energy efficient information transmission is to be maintained.",
"title": ""
},
{
"docid": "94d2c88b11c79e2f4bf9fdc3ed8e1861",
"text": "The advent of pulsed power technology in the 1960s has enabled the development of very high peak power sources of electromagnetic radiation in the microwave and millimeter wave bands of the electromagnetic spectrum. Such sources have applications in plasma physics, particle acceleration techniques, fusion energy research, high-power radars, and communications, to name just a few. This article describes recent ongoing activity in this field in both Russia and the United States. The overview of research in Russia focuses on high-power microwave (HPM) sources that are powered using SINUS accelerators, which were developed at the Institute of High Current Electronics. The overview of research in the United States focuses more broadly on recent accomplishments of a multidisciplinary university research initiative on HPM sources, which also involved close interactions with Department of Defense laboratories and industry. HPM sources described in this article have generated peak powers exceeding several gigawatts in pulse durations typically on the order of 100 ns in frequencies ranging from about 1 GHz to many tens of gigahertz.",
"title": ""
}
] |
scidocsrr
|
503aa97c0359cae1435c4884ddb807f3
|
An agile strategy for implementing CMMI project management practices in software organizations
|
[
{
"docid": "0cf1f63fd39c8c74465fad866958dac6",
"text": "Software development organizations that have been employing capability maturity models, such as SW-CMM or CMMI for improving their processes are now increasingly interested in the possibility of adopting agile development methods. In the context of project management, what can we say about Scrum’s alignment with CMMI? The aim of our paper is to present the mapping between CMMI and the agile method Scrum, showing major gaps between them and identifying how organizations are adopting complementary practices in their projects to make these two approaches more compliant. This is useful for organizations that have a plan-driven process based on the CMMI model and are planning to improve the agility of processes or to help organizations to define a new project management framework based on both CMMI and Scrum practices.",
"title": ""
}
] |
[
{
"docid": "b3003a6ae429ecccb257ab26af548790",
"text": "This paper presents a high-accuracy local positioning system (LPS) for an autonomous robotic greens mower. The LPS uses a sensor tower mounted on top of the robot and four active beacons surrounding a target area. The proposed LPS concurrently determines robot location using a lateration technique and calculates orientation using angle measurements. To perform localization, the sensor tower emits an ultrasonic pulse that is received by the beacons. The time of arrival is measured by each beacon and transmitted back to the sensor tower. To determine the robot's orientation, the sensor tower has a circular receiver array that detects infrared signals emitted by each beacon. Using the direction and strength of the received infrared signals, the relative angles to each beacon are obtained and the robot orientation can be determined. Experimental data show that the LPS achieves a position accuracy of 3.1 cm RMS, and an orientation accuracy of 0.23° RMS. Several prototype robotic mowers utilizing the proposed LPS have been deployed for field testing, and the mowing results are comparable to an experienced professional human worker.",
"title": ""
},
{
"docid": "cde0e673d8446037c6f1d8b301a68093",
"text": "Continuous innovation is a key ingredient in maintaining a competitive advantage in the current dynamic and demanding marketplace. It requires an organization to regularly update and create knowledge for the current generation, and reuse it later for the next generation of a product. In this regard, an integrated dynamic knowledge model is targeted to structurally define a practical knowledge creation process in the product development domain. This model primarily consists of three distinct elements; SECI (socialization–externalization–combination–internalization) modes, ‘ba’, and knowledge assets. The model involves tacit knowledge and explicit knowledge interplay in ‘ba’ to generate new knowledge during the four SECI modes and update the knowledge assets. It is believed that lean tools and methods can also promote learning and knowledge creation. Therefore, a set of ten lean tools and methods is proposed in order to support and improve the efficiency of the knowledge creation process. The approach first establishes a framework to create knowledge in the product development environment, and then systematically demonstrates how these ten lean tools and methods conceptually fit into and play a significant role in that process. Following this, each of them is analyzed and appropriately positioned in a SECI mode depending on best fit. The merits of each tool/method are discussed from the perspective of selecting the individual mode. The managerial implication is that correct and quick knowledge creation can result in faster development and improved quality of products. © 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "7f415c10d8c57a9c3d202f7a36b8071a",
"text": "Previous researches on Text-level discourse parsing mainly made use of constituency structure to parse the whole document into one discourse tree. In this paper, we present the limitations of constituency based discourse parsing and first propose to use dependency structure to directly represent the relations between elementary discourse units (EDUs). The state-of-the-art dependency parsing techniques, the Eisner algorithm and maximum spanning tree (MST) algorithm, are adopted to parse an optimal discourse dependency tree based on the arcfactored model and the large-margin learning techniques. Experiments show that our discourse dependency parsers achieve a competitive performance on text-level discourse parsing.",
"title": ""
},
{
"docid": "3182542aa5b500780bb8847178b8ec8d",
"text": "The United States is a diverse country with constantly changing demographics. The noticeable shift in demographics is even more phenomenal among the school-aged population. The increase of ethnic-minority student presence is largely credited to the national growth of the Hispanic population, which exceeded the growth of all other ethnic minority group students in public schools. Scholars have pondered over strategies to assist teachers in teaching about diversity (multiculturalism, racism, etc.) as well as interacting with the diversity found within their classrooms in order to ameliorate the effects of cultural discontinuity. One area that has developed in multicultural education literature is culturally relevant pedagogy (CRP). CRP maintains that teachers need to be non-judgmental and inclusive of the cultural backgrounds of their students in order to be effective facilitators of learning in the classroom. The plethora of literature on CRP, however, has not been presented as a testable theoretical model nor has it been systematically viewed through the lens of critical race theory (CRT). By examining the evolution of CRP among some of the leading scholars, the authors broaden this work through a CRT infusion which includes race and indeed racism as normal parts of American society that have been integrated into the educational system and the systematic aspects of school relationships. Their purpose is to infuse the tenets of CRT into an overview of the literature that supports a conceptual framework for understanding and studying culturally relevant pedagogy. They present a conceptual framework of culturally relevant pedagogy that is grounded in over a quarter of a century of research scholarship. By synthesizing the literature into the five areas and infusing it with the tenets of CRT, the authors have developed a collection of principles that represents culturally relevant pedagogy. (Contains 1 figure and 1 note.) culturally relevant pedagogy | teacher education | student-teacher relationships |",
"title": ""
},
{
"docid": "60dd1689962a702e72660b33de1f2a17",
"text": "A grammar formalism called GHRG based on CHR is proposed analogously to the way Definite Clause Grammars are defined and implemented on top of Prolog. A CHRG executes as a robust bottom-up parser with an inherent treatment of ambiguity. The rules of a CHRG may refer to grammar symbols on either side of a sequence to be matched and this provides a powerful way to let parsing and attribute evaluation depend on linguistic context; examples show disambiguation of simple and ambiguous context-free rules and a handling of coordination in natural language. CHRGs may have rules to produce and consume arbitrary hypothesis and as an important application is shown an implementation of Assumption Grammars.",
"title": ""
},
{
"docid": "086ae308f849990e927a510f00da1b98",
"text": "This demonstration paper presents TouchCORE, a multi-touch enabled software design modelling tool aimed at developing scalable and reusable software design models following the concerndriven software development paradigm. After a quick review of concern-orientation, this paper primarily focusses on the new features that were added to TouchCORE since the last demonstration at Modularity 2014 (were the tool was still called TouchRAM). TouchCORE now provides full support for concern-orientation. This includes support for feature model editing and different modes for feature model and impact model visualization and assessment to best assist the concern designers as well as the concern users. To help the modeller understand the interactions between concerns, TouchCORE now also collects tracing information when concerns are reused and stores that information with the woven models. This makes it possible to visualize from which concern(s) a model element in the woven model has originated.",
"title": ""
},
{
"docid": "47de44e54c000911684a804c10497e39",
"text": "Electric storage units constitute a key element in the emerging smart grid system. In this paper, the interactions and energy trading decisions of a number of geographically distributed storage units are studied using a novel framework based on game theory. In particular, a noncooperative game is formulated between storage units, such as plug-in hybrid electric vehicles, or an array of batteries that are trading their stored energy. Here, each storage unit's owner can decide on the maximum amount of energy to sell in a local market so as to maximize a utility that reflects the tradeoff between the revenues from energy trading and the accompanying costs. Then in this energy exchange market between the storage units and the smart grid elements, the price at which energy is traded is determined via an auction mechanism. The game is shown to admit at least one Nash equilibrium and a novel algorithm that is guaranteed to reach such an equilibrium point is proposed. Simulation results show that the proposed approach yields significant performance improvements, in terms of the average utility per storage unit, reaching up to 130.2% compared to a conventional greedy approach.",
"title": ""
},
{
"docid": "4cde522275c034a8025c75d144a74634",
"text": "Novel sentence detection aims at identifying novel information from an incoming stream of sentences. Our research applies named entity recognition (NER) and part-of-speech (POS) tagging on sentence-level novelty detection and proposes a mixed method to utilize these two techniques. Furthermore, we discuss the performance when setting different history sentence sets. Experimental results of different approaches on TREC'04 Novelty Track show that our new combined method outperforms some other novelty detection methods in terms of precision and recall. The experimental observations of each approach are also discussed.",
"title": ""
},
{
"docid": "d07ebefd02d5e7e732a5570aa6a7dec8",
"text": "Starting from the principle and strategy of changing the diameter of walking wheel, applying ANSYS \"C FEMBS \"C SIMPACK to construct model of flexible connectors with geometric stiffness. Based on the theory of continuous colLision, we introduce mobile marker to define the nonLinear contact between the wheel's caster and ground. The rigid-flexible simulation analysis was carried out to wheel body system which consisting of six flexible spring connectors, to get the dynamic response in the mode of changing the diameter in situ and to get the minimum size of torque needing for changing diameter. The simulation's results provides theoretical basis for subsequent prototype development, and can be used as the references for further experiment's studying.",
"title": ""
},
{
"docid": "577f373477f6b8a8bee6a694dab6d3c9",
"text": "The YouTube-8M video classification challenge requires teams to classify 0.7 million videos into one or more of 4,716 classes. In this Kaggle competition, we placed in the top 3% out of 650 participants using released video and audio features . Beyond that, we extend the original competition by including text information in the classification, making this a truly multi-modal approach with vision, audio and text. The newly introduced text data is termed as YouTube-8M-Text. We present a classification framework for the joint use of text, visual and audio features, and conduct an extensive set of experiments to quantify the benefit that this additional mode brings. The inclusion of text yields state-of-the-art results, e.g. 86.7% GAP on the YouTube-8M-Text validation dataset.",
"title": ""
},
{
"docid": "6974bf94292b51fc4efd699c28c90003",
"text": "We just released an Open Source receiver that is able to decode IEEE 802.11a/g/p Orthogonal Frequency Division Multiplexing (OFDM) frames in software. This is the first Software Defined Radio (SDR) based OFDM receiver supporting channel bandwidths up to 20MHz that is not relying on additional FPGA code. Our receiver comprises all layers from the physical up to decoding the MAC packet and extracting the payload of IEEE 802.11a/g/p frames. In our demonstration, visitors can interact live with the receiver while it is decoding frames that are sent over the air. The impact of moving the antennas and changing the settings are displayed live in time and frequency domain. Furthermore, the decoded frames are fed to Wireshark where the WiFi traffic can be further investigated. It is possible to access and visualize the data in every decoding step from the raw samples, the autocorrelation used for frame detection, the subcarriers before and after equalization, up to the decoded MAC packets. The receiver is completely Open Source and represents one step towards experimental research with SDR.",
"title": ""
},
{
"docid": "41f3817ccac0bad1adbfd15d78056de5",
"text": "Seeking to reframe computational thinking as computational participation.",
"title": ""
},
{
"docid": "e315a7e8e83c4130f9a53dec21598ae6",
"text": "Modern techniques for data analysis and machine learning are so called kernel methods. The most famous and successful one is represented by the support vector machine (SVM) for classification or regression tasks. Further examples are kernel principal component analysis for feature extraction or other linear classifiers like the kernel perceptron. The fundamental ingredient in these methods is the choice of a kernel function, which computes a similarity measure between two input objects. For good generalization abilities of a learning algorithm it is indispensable to incorporate problem-specific a-priori knowledge into the learning process. The kernel function is an important element for this. This thesis focusses on a certain kind of a-priori knowledge namely transformation knowledge. This comprises explicit knowledge of pattern variations that do not or only slightly change the pattern’s inherent meaning e.g. rigid movements of 2D/3D objects or transformations like slight stretching, shifting, rotation of characters in optical character recognition etc. Several methods for incorporating such knowledge in kernel functions are presented and investigated. 1. Invariant distance substitution kernels (IDS-kernels): In many practical questions the transformations are implicitly captured by sophisticated distance measures between objects. Examples are nonlinear deformation models between images. Here an explicit parameterization would require an arbitrary number of parameters. Such distances can be incorporated in distanceand inner-product-based kernels. 2. Tangent distance kernels (TD-kernels): Specific instances of IDS-kernels are investigated in more detail as these can be efficiently computed. We assume differentiable transformations of the patterns. Given such knowledge, one can construct linear approximations of the transformation manifolds and use these efficiently for kernel construction by suitable distance functions. 3. Transformation integration kernels (TI-kernels): The technique of integration over transformation groups for feature extraction can be extended to kernel functions and more general group, non-group, discrete or continuous transformations in a suitable way. Theoretically, these approaches differ in the way the transformations are represented and in the adjustability of the transformation extent. More fundamentally, kernels from category 3 turn out to be positive definite, kernels of types 1 and 2 are not positive definite, which is generally required for being usable in kernel methods. This is the",
"title": ""
},
{
"docid": "c61e25e5896ff588764639b6a4c18d2e",
"text": "Social media is continually emerging as a platform of information exchange around health challenges. We study mental health discourse on the popular social media: reddit. Building on findings about health information seeking and sharing practices in online forums, and social media like Twitter, we address three research challenges. First, we present a characterization of self-disclosure in mental illness communities on reddit. We observe individuals discussing a variety of concerns ranging from the daily grind to specific queries about diagnosis and treatment. Second, we build a statistical model to examine the factors that drive social support on mental health reddit communities. We also develop language models to characterize mental health social support, which are observed to bear emotional, informational, instrumental, and prescriptive information. Finally, we study disinhibition in the light of the dissociative anonymity that reddit’s throwaway accounts provide. Apart from promoting open conversations, such anonymity surprisingly is found to gather feedback that is more involving and emotionally engaging. Our findings reveal, for the first time, the kind of unique information needs that a social media like reddit might be fulfilling when it comes to a stigmatic illness. They also expand our understanding of the role of the social web in behavioral therapy.",
"title": ""
},
{
"docid": "8976eea8c39d9cb9dea21c42bae8ebea",
"text": "Continuously monitoring schizophrenia patients’ psychiatric symptoms is crucial for in-time intervention and treatment adjustment. The Brief Psychiatric Rating Scale (BPRS) is a survey administered by clinicians to evaluate symptom severity in schizophrenia. The CrossCheck symptom prediction system is capable of tracking schizophrenia symptoms based on BPRS using passive sensing from mobile phones. We present results from an ongoing randomized control trial, where passive sensing data, self-reports, and clinician administered 7-item BPRS surveys are collected from 36 outpatients with schizophrenia recently discharged from hospital over a period ranging from 2-12 months. We show that our system can predict a symptom scale score based on a 7-item BPRS within ±1.45 error on average using automatically tracked behavioral features from phones (e.g., mobility, conversation, activity, smartphone usage, the ambient acoustic environment) and user supplied self-reports. Importantly, we show our system is also capable of predicting an individual BPRS score within ±1.59 error purely based on passive sensing from phones without any self-reported information from outpatients. Finally, we discuss how well our predictive system reflects symptoms experienced by patients by reviewing a number of case studies.",
"title": ""
},
{
"docid": "df63f01ed7b35b9e4e5638305d1aa87c",
"text": "Most prior work on information extraction has focused on extracting information from text in digital documents. However, often, the most important information being reported in an article is presented in tabular form in a digital document. If the data reported in tables can be extracted and stored in a database, the data can be queried and joined with other data using database management systems. In order to prepare the data source for table search, accurately detecting the table boundary plays a crucial role for the later table structure decomposition. Table boundary detection and content extraction is a challenging problem because tabular formats are not standardized across all documents. In this paper, we propose a simple but effective preprocessing method to improve the table boundary detection performance by considering the sparse-line property of table rows. Our method easily simplifies the table boundary detection problem into the sparse line analysis problem with much less noise. We design eight line label types and apply two machine learning techniques, Conditional Random Field (CRF) and Support Vector Machines (SVM), on the table boundary detection field. The experimental results not only compare the performances between the machine learning methods and the heuristics-based method, but also demonstrate the effectiveness of the sparse line analysis in the table boundary detection.",
"title": ""
},
{
"docid": "f2fd1bee7b2770bbf808d8902f4964b4",
"text": "Antimicrobial and antiquorum sensing (AQS) activities of fourteen ethanolic extracts of different parts of eight plants were screened against four Gram-positive, five Gram-negative bacteria and four fungi. Depending on the plant part extract used and the test microorganism, variable activities were recorded at 3 mg per disc. Among the Grampositive bacteria tested, for example, activities of Laurus nobilis bark extract ranged between a 9.5 mm inhibition zone against Bacillus subtilis up to a 25 mm one against methicillin resistant Staphylococcus aureus. Staphylococcus aureus and Aspergillus fumigatus were the most susceptible among bacteria and fungi tested towards other plant parts. Of interest is the tangible antifungal activity of a Tecoma capensis flower extract, which is reported for the first time. However, minimum inhibitory concentrations (MIC's) for both bacteria and fungi were relatively high (0.5-3.0 mg). As for antiquorum sensing activity against Chromobacterium violaceum, superior activity (>17 mm QS inhibition) was associated with Sonchus oleraceus and Laurus nobilis extracts and weak to good activity (8-17 mm) was recorded for other plants. In conclusion, results indicate the potential of these plant extracts in treating microbial infections through cell growth inhibition or quorum sensing antagonism, which is reported for the first time, thus validating their medicinal use.",
"title": ""
},
{
"docid": "daf14df75f870cf9e22eb51828c89e95",
"text": "The authors present a new model of free recall on the basis of M. W. Howard and M. J. Kahana's temporal context model and M. Usher and J. L. McClelland's leaky-accumulator decision model. In this model, contextual drift gives rise to both short-term and long-term recency effects, and contextual retrieval gives rise to short-term and long-term contiguity effects. Recall decisions are controlled by a race between competitive leaky accumulators. The model captures the dynamics of immediate, delayed, and continual distractor free recall, demonstrating that dissociations between short- and long-term recency can naturally arise from a model in which an internal contextual state is used as the sole cue for retrieval across time scales.",
"title": ""
},
{
"docid": "ed39af901c58a8289229550084bc9508",
"text": "Digital elevation maps are simple yet powerful representations of complex 3-D environments. These maps can be built and updated using various sensors and sensorial data processing algorithms. This paper describes a novel approach for modeling the dynamic 3-D driving environment, the particle-based dynamic elevation map, each cell in this map having, in addition to height, a probability distribution of speed in order to correctly describe moving obstacles. The dynamic elevation map is represented by a population of particles, each particle having a position, a height, and a speed. Particles move from one cell to another based on their speed vectors, and they are created, multiplied, or destroyed using an importance resampling mechanism. The importance resampling mechanism is driven by the measurement data provided by a stereovision sensor. The proposed model is highly descriptive for the driving environment, as it can easily provide an estimation of the height, speed, and occupancy of each cell in the grid. The system was proven robust and accurate in real driving scenarios, by comparison with ground truth data.",
"title": ""
},
{
"docid": "8a20feb22ce8797fa77b5d160919789c",
"text": "We proposed the concept of hardware software co-simulation for image processing using Xilinx system generator. Recent advances in synthesis tools for SIMULINK suggest a feasible high-level approach to algorithm implementation for embedded DSP systems. An efficient FPGA based hardware design for enhancement of color and grey scale images in image and video processing. The top model – based visual development process of SIMULINK facilitates host side simulation and validation, as well as synthesis of target specific code, furthermore, legacy code written in MATLAB or ANCI C can be reuse in custom blocks. However, the code generated for DSP platforms is often not very efficient. We are implemented the Image processing applications on FPGA it can be easily design.",
"title": ""
}
] |
scidocsrr
|
8d0f890590d41d3e24f7463ed329ccad
|
Blockchain-Based Database to Ensure Data Integrity in Cloud Computing Environments
|
[
{
"docid": "016a07d2ddb55149708409c4c62c67e3",
"text": "Cloud computing has emerged as a computational paradigm and an alternative to the conventional computing with the aim of providing reliable, resilient infrastructure, and with high quality of services for cloud users in both academic and business environments. However, the outsourced data in the cloud and the computation results are not always trustworthy because of the lack of physical possession and control over the data for data owners as a result of using to virtualization, replication and migration techniques. Since that the security protection the threats to outsourced data have become a very challenging and potentially formidable task in cloud computing, many researchers have focused on ameliorating this problem and enabling public auditability for cloud data storage security using remote data auditing (RDA) techniques. This paper presents a comprehensive survey on the remote data storage auditing in single cloud server domain and presents taxonomy of RDA approaches. The objective of this paper is to highlight issues and challenges to current RDA protocols in the cloud and the mobile cloud computing. We discuss the thematic taxonomy of RDA based on significant parameters such as security requirements, security metrics, security level, auditing mode, and update mode. The state-of-the-art RDA approaches that have not received much coverage in the literature are also critically analyzed and classified into three groups of provable data possession, proof of retrievability, and proof of ownership to present a taxonomy. It also investigates similarities and differences in such framework and discusses open research issues as the future directions in RDA research. & 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "9f6e103a331ab52b303a12779d0d5ef6",
"text": "Cryptocurrencies, based on and led by Bitcoin, have shown promise as infrastructure for pseudonymous online payments, cheap remittance, trustless digital asset exchange, and smart contracts. However, Bitcoin-derived blockchain protocols have inherent scalability limits that trade-off between throughput and latency and withhold the realization of this potential. This paper presents Bitcoin-NG, a new blockchain protocol designed to scale. Based on Bitcoin’s blockchain protocol, Bitcoin-NG is Byzantine fault tolerant, is robust to extreme churn, and shares the same trust model obviating qualitative changes to the ecosystem. In addition to Bitcoin-NG, we introduce several novel metrics of interest in quantifying the security and efficiency of Bitcoin-like blockchain protocols. We implement Bitcoin-NG and perform large-scale experiments at 15% the size of the operational Bitcoin system, using unchanged clients of both protocols. These experiments demonstrate that Bitcoin-NG scales optimally, with bandwidth limited only by the capacity of the individual nodes and latency limited only by the propagation time of the network.",
"title": ""
},
{
"docid": "b76e466d4b446760bf3fd5d70e2edc1b",
"text": "Cloud computing has emerged as a long-dreamt vision of the utility computing paradigm that provides reliable and resilient infrastructure for users to remotely store data and use on-demand applications and services. Currently, many individuals and organizations mitigate the burden of local data storage and reduce the maintenance cost by outsourcing data to the cloud. However, the outsourced data is not always trustworthy due to the loss of physical control and possession over the data. As a result, many scholars have concentrated on relieving the security threats of the outsourced data by designing the Remote Data Auditing (RDA) technique as a new concept to enable public auditability for the stored data in the cloud. The RDA is a useful technique to check the reliability and integrity of data outsourced to a single or distributed servers. This is because all of the RDA techniques for single cloud servers are unable to support data recovery; such techniques are complemented with redundant storage mechanisms. The article also reviews techniques of remote data auditing more comprehensively in the domain of the distributed clouds in conjunction with the presentation of classifying ongoing developments within this specified area. The thematic taxonomy of the distributed storage auditing is presented based on significant parameters, such as scheme nature, security pattern, objective functions, auditing mode, update mode, cryptography model, and dynamic data structure. The more recent remote auditing approaches, which have not gained considerable attention in distributed cloud environments, are also critically analyzed and further categorized into three different classes, namely, replication based, erasure coding based, and network coding based, to present a taxonomy. This survey also aims to investigate similarities and differences of such a framework on the basis of the thematic taxonomy to diagnose significant and explore major outstanding issues.",
"title": ""
}
] |
[
{
"docid": "8fcb30825553e58ff66fd85ded10111e",
"text": "Most ecological processes now show responses to anthropogenic climate change. In terrestrial, freshwater, and marine ecosystems, species are changing genetically, physiologically, morphologically, and phenologically and are shifting their distributions, which affects food webs and results in new interactions. Disruptions scale from the gene to the ecosystem and have documented consequences for people, including unpredictable fisheries and crop yields, loss of genetic diversity in wild crop varieties, and increasing impacts of pests and diseases. In addition to the more easily observed changes, such as shifts in flowering phenology, we argue that many hidden dynamics, such as genetic changes, are also taking place. Understanding shifts in ecological processes can guide human adaptation strategies. In addition to reducing greenhouse gases, climate action and policy must therefore focus equally on strategies that safeguard biodiversity and ecosystems.",
"title": ""
},
{
"docid": "6c04e51492224fa3dc2c5bbf6608266b",
"text": "In many applications, one can obtain descriptions about the same objects or events from a variety of sources. As a result, this will inevitably lead to data or information conflicts. One important problem is to identify the true information (i.e., the truths) among conflicting sources of data. It is intuitive to trust reliable sources more when deriving the truths, but it is usually unknown which one is more reliable a priori. Moreover, each source possesses a variety of properties with different data types. An accurate estimation of source reliability has to be made by modeling multiple properties in a unified model. Existing conflict resolution work either does not conduct source reliability estimation, or models multiple properties separately. In this paper, we propose to resolve conflicts among multiple sources of heterogeneous data types. We model the problem using an optimization framework where truths and source reliability are defined as two sets of unknown variables. The objective is to minimize the overall weighted deviation between the truths and the multi-source observations where each source is weighted by its reliability. Different loss functions can be incorporated into this framework to recognize the characteristics of various data types, and efficient computation approaches are developed. Experiments on real-world weather, stock and flight data as well as simulated multi-source data demonstrate the necessity of jointly modeling different data types in the proposed framework.",
"title": ""
},
{
"docid": "644de61e0da130aafcd65691a8e1f47a",
"text": "We report on the first implementation of a single photon avalanche diode (SPAD) in 130 nm complementary metal-oxide-semiconductor (CMOS) technology. The SPAD is fabricated as p+/n-well junction with octagonal shape. A guard ring of p-well around the p+ anode is used to prevent premature discharge. To investigate the dynamics of the new device, both active and passive quenching methods have been used. Single photon detection is achieved by sensing the avalanche using a fast comparator. The SPAD exhibits a maximum photon detection probability of 41% and a typical dark count rate of 100 kHz at room temperature. Thanks to its timing resolution of 144 ps full-width at half-maximum (FWHM), the SPAD has several uses in disparate disciplines, including medical imaging, 3D vision, biophotonics, low-light illumination imaging, etc.",
"title": ""
},
{
"docid": "8f70026ff59ed1ae54ab5b6dadd2a3da",
"text": "Exoskeleton suit is a kind of human-machine robot, which combines the humans intelligence with the powerful energy of mechanism. It can help people to carry heavy load, walking on kinds of terrains and have a broadly apply area. Though many exoskeleton suits has been developed, there need many complex sensors between the pilot and the exoskeleton system, which decrease the comfort of the pilot. Sensitivity amplification control (SAC) is a method applied in exoskeleton system without any sensors between the pilot and the exoskeleton. In this paper simulation research was made to verify the feasibility of SAC include a simple 1-dof model and a swing phase model of 3-dof. A PID controller was taken to describe the human-machine interface model. Simulation results show the human only need to exert a scale-down version torque compared with the actuator and decrease the power consumes of the pilot.",
"title": ""
},
{
"docid": "5407b8e976d7e6e1d7aa1e00c278a400",
"text": "In his paper a 7T SRAM cell operating well in low voltages is presented. Suitable read operation structure is provided by controlling the drain induced barrier lowering (DIBL) effect and body-source voltage in the hold `1' state. The read-operation structure of the proposed cell utilizes the single transistor which leads to a larger write margin. The simulation results at 90nm TSMC CMOS demonstrate the outperforms of the proposed SRAM cell in terms of power dissipation, write margin, sensitivity to process variations as compared with the other most efficient low-voltage SRAM cells.",
"title": ""
},
{
"docid": "73e27f751c8027bac694f2e876d4d910",
"text": "The numerous and diverse applications of the Internet of Things (IoT) have the potential to change all areas of daily life of individuals, businesses, and society as a whole. The vision of a pervasive IoT spans a wide range of application domains and addresses the enabling technologies needed to meet the performance requirements of various IoT applications. In order to accomplish this vision, this paper aims to provide an analysis of literature in order to propose a new classification of IoT applications, specify and prioritize performance requirements of such IoT application classes, and give an insight into state-of-the-art technologies used to meet these requirements, all from telco’s perspective. A deep and comprehensive understanding of the scope and classification of IoT applications is an essential precondition for determining their performance requirements with the overall goal of defining the enabling technologies towards fifth generation (5G) networks, while avoiding over-specification and high costs. Given the fact that this paper presents an overview of current research for the given topic, it also targets the research community and other stakeholders interested in this contemporary and attractive field for the purpose of recognizing research gaps and recommending new research directions.",
"title": ""
},
{
"docid": "ca4100a8c305c064ea8716702859f11b",
"text": "It is widely believed, in the areas of optics, image analysis, and visual perception, that the Hilbert transform does not extend naturally and isotropically beyond one dimension. In some areas of image analysis, this belief has restricted the application of the analytic signal concept to multiple dimensions. We show that, contrary to this view, there is a natural, isotropic, and elegant extension. We develop a novel two-dimensional transform in terms of two multiplicative operators: a spiral phase spectral (Fourier) operator and an orientational phase spatial operator. Combining the two operators results in a meaningful two-dimensional quadrature (or Hilbert) transform. The new transform is applied to the problem of closed fringe pattern demodulation in two dimensions, resulting in a direct solution. The new transform has connections with the Riesz transform of classical harmonic analysis. We consider these connections, as well as others such as the propagation of optical phase singularities and the reconstruction of geomagnetic fields.",
"title": ""
},
{
"docid": "f250e8879618f73d5e23676a96f02e81",
"text": "Brain oscillatory activity is associated with different cognitive processes and plays a critical role in meditation. In this study, we investigated the temporal dynamics of oscillatory changes during Sahaj Samadhi meditation (a concentrative form of meditation that is part of Sudarshan Kriya yoga). EEG was recorded during Sudarshan Kriya yoga meditation for meditators and relaxation for controls. Spectral and coherence analysis was performed for the whole duration as well as specific blocks extracted from the initial, middle, and end portions of Sahaj Samadhi meditation or relaxation. The generation of distinct meditative states of consciousness was marked by distinct changes in spectral powers especially enhanced theta band activity during deep meditation in the frontal areas. Meditators also exhibited increased theta coherence compared to controls. The emergence of the slow frequency waves in the attention-related frontal regions provides strong support to the existing claims of frontal theta in producing meditative states along with trait effects in attentional processing. Interestingly, increased frontal theta activity was accompanied reduced activity (deactivation) in parietal–occipital areas signifying reduction in processing associated with self, space and, time.",
"title": ""
},
{
"docid": "a036dd162a23c5d24125d3270e22aaf7",
"text": "1 Problem Description This work is focused on the relationship between the news articles (breaking news) and stock prices. The student will design and develop methods to analyze how and when the news articles influence the stock market. News articles about Norwegian oil related companies and stock prices from \" BW Offshore Limited \" (BWO), \" DNO International \" (DNO), \" Frontline \" (FRO), \" Petroleum Geo-Services \" (PGS), \" Seadrill \" (SDRL), \" Sevan Marine \" (SEVAN), \" Siem Offshore \" (SIOFF), \" Statoil \" (STL) and \" TGS-NOPEC Geophysical Company \" (TGS) will be crawled, preprocessed and the important features in the text will be extracted to effectively represent the news in a form that allows the application of computational techniques. This data will then be used to train text sense classifiers. A prototype system that employs such classifiers will be developed to support the trader in taking sell/buy decisions. Methods will be developed for automaticall sense-labeling of news that are informed by the correlation between the changes in the stock prices and the breaking news. Performance of the prototype decision support system will be compared with a chosen baseline method for trade-related decision making. Abstract This thesis investigates the prediction of possible stock price changes immediately after news article publications. This is done by automatic analysis of these news articles. Some background information about financial trading theory and text mining is given in addition to an overview of earlier related research in the field of automatic news article analyzes with the purpose of predicting future stock prices. In this thesis a system is designed and implemented to predict stock price trends for the time immediately after the publication of news articles. This system consists mainly of four components. The first component gathers news articles and stock prices automatically from internet. The second component prepares the news articles by sending them to some document preprocessing steps and finding relevant features before they are sent to a document representation process. The third component categorizes the news articles into predefined categories, and finally the fourth component applies appropriate trading strategies depending on the category of the news article. This system requires a labeled data set to train the categorization component. This data set is labeled automatically on the basis of the price trends directly after the news article publication. An additional label refining step using clustering is added in an …",
"title": ""
},
{
"docid": "db7bc8bbfd7dd778b2900973f2cfc18d",
"text": "In this paper, the self-calibration of micromechanical acceleration sensors is considered, specifically, based solely on user-generated movement data without the support of laboratory equipment or external sources. The autocalibration algorithm itself uses the fact that under static conditions, the squared norm of the measured sensor signal should match the magnitude of the gravity vector. The resulting nonlinear optimization problem is solved using robust statistical linearization instead of the common analytical linearization for computing bias and scale factors of the accelerometer. To control the forgetting rate of the calibration algorithm, artificial process noise models are developed and compared with conventional ones. The calibration methodology is tested using arbitrarily captured acceleration profiles of the human daily routine and shows that the developed algorithm can significantly reject any misconfiguration of the acceleration sensor.",
"title": ""
},
{
"docid": "f3a838d6298c8ae127e548ba62e872eb",
"text": "Plasmodium falciparum resistance to artemisinins, the most potent and fastest acting anti-malarials, threatens malaria elimination strategies. Artemisinin resistance is due to mutation of the PfK13 propeller domain and involves an unconventional mechanism based on a quiescence state leading to parasite recrudescence as soon as drug pressure is removed. The enhanced P. falciparum quiescence capacity of artemisinin-resistant parasites results from an increased ability to manage oxidative damage and an altered cell cycle gene regulation within a complex network involving the unfolded protein response, the PI3K/PI3P/AKT pathway, the PfPK4/eIF2α cascade and yet unidentified transcription factor(s), with minimal energetic requirements and fatty acid metabolism maintained in the mitochondrion and apicoplast. The detailed study of these mechanisms offers a way forward for identifying future intervention targets to fend off established artemisinin resistance.",
"title": ""
},
{
"docid": "20563a2f75e074fe2a62a5681167bc01",
"text": "The introduction of a new generation of attractive touch screen-based devices raises many basic usability questions whose answers may influence future design and market direction. With a set of current mobile devices, we conducted three experiments focusing on one of the most basic interaction actions on touch screens: the operation of soft buttons. Issues investigated in this set of experiments include: a comparison of soft button and hard button performance; the impact of audio and vibrato-tactile feedback; the impact of different types of touch sensors on use, behavior, and performance; a quantitative comparison of finger and stylus operation; and an assessment of the impact of soft button sizes below the traditional 22 mm recommendation as well as below finger width.",
"title": ""
},
{
"docid": "cbc2b592efc227a5c6308edfbca51bd6",
"text": "The rapidly growing presence of Internet of Things (IoT) devices is becoming a continuously alluring playground for malicious actors who try to harness their vast numbers and diverse locations. One of their primary goals is to assemble botnets that can serve their nefarious purposes, ranging from Denial of Service (DoS) to spam and advertisement fraud. The most recent example that highlights the severity of the problem is the Mirai family of malware, which is accountable for a plethora of massive DDoS attacks of unprecedented volume and diversity. The aim of this paper is to offer a comprehensive state-of-the-art review of the IoT botnet landscape and the underlying reasons of its success with a particular focus on Mirai and major similar worms. To this end, we provide extensive details on the internal workings of IoT malware, examine their interrelationships, and elaborate on the possible strategies for defending against them.",
"title": ""
},
{
"docid": "5d6bd34fb5fdb44950ec5d98e77219c3",
"text": "This paper describes an experimental setup and results of user tests focusing on the perception of temporal characteristics of vibration of a mobile device. The experiment consisted of six vibration stimuli of different length. We asked the subjects to score the subjective perception level in a five point Lickert scale. The results suggest that the optimal duration of the control signal should be between 50 and 200 ms in this specific case. Longer durations were perceived as being irritating.",
"title": ""
},
{
"docid": "8d350cc11997b6a0dc96c9fef2b1919f",
"text": "Task-parameterized models of movements aims at automatically adapting movements to new situations encountered by a robot. The task parameters can for example take the form of positions of objects in the environment, or landmark points that the robot should pass through. This tutorial aims at reviewing existing approaches for task-adaptive motion encoding. It then narrows down the scope to the special case of task parameters that take the form of frames of reference, coordinate systems, or basis functions, which are most commonly encountered in service robotics. Each section of the paper is accompanied with source codes designed as simple didactic examples implemented in Matlab with a full compatibility with GNU Octave, closely following the notation and equations of the article. It also presents ongoing work and further challenges that remain to be addressed, with examples provided in simulation and on a real robot (transfer of manipulation behaviors to the Baxter bimanual robot). The repository for the accompanying source codes is available at http://www.idiap.ch/software/pbdlib/.",
"title": ""
},
{
"docid": "6442c9e4eb9034abf90fcd697c32a343",
"text": "With the increasing popularity and demand for mobile applications, there has been a significant increase in the number of mobile application development projects. Highly volatile requirements of mobile applications require adaptive software development methods. The Agile approach is seen as a natural fit for mobile application and there is a need to explore various Agile methodologies for the development of mobile applications. This paper evaluates how adopting various Agile approaches improves the development of mobile applications and if they can be used in order to provide more tailor-made process improvements within an organization. A survey related to mobile application development process improvement was developed. The use of various Agile approaches for success in mobile application development were evaluated by determining the significance of the most used Agile engineering paradigms such as XP, Scrum, and Lean. The findings of the study show that these Agile methods have the potential to help deliver enhanced speed and quality for mobile application development.",
"title": ""
},
{
"docid": "c6f173f75917ee0632a934103ca7566c",
"text": "Mersenne Twister (MT) is a widely-used fast pseudorandom number generator (PRNG) with a long period of 2 − 1, designed 10 years ago based on 32-bit operations. In this decade, CPUs for personal computers have acquired new features, such as Single Instruction Multiple Data (SIMD) operations (i.e., 128bit operations) and multi-stage pipelines. Here we propose a 128-bit based PRNG, named SIMD-oriented Fast Mersenne Twister (SFMT), which is analogous to MT but making full use of these features. Its recursion fits pipeline processing better than MT, and it is roughly twice as fast as optimised MT using SIMD operations. Moreover, the dimension of equidistribution of SFMT is better than MT. We also introduce a block-generation function, which fills an array of 32-bit integers in one call. It speeds up the generation by a factor of two. A speed comparison with other modern generators, such as multiplicative recursive generators, shows an advantage of SFMT. The implemented C-codes are downloadable from http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/SFMT/index.html.",
"title": ""
},
{
"docid": "77b4cb00c3a72fdeefa99aa504f492d8",
"text": "This article considers a short survey of basic methods of social networks analysis, which are used for detecting cyber threats. The main types of social network threats are presented. Basic methods of graph theory and data mining, that deals with social networks analysis are described. Typical security tasks of social network analysis, such as community detection in network, detection of leaders in communities, detection experts in networks, clustering text information and others are considered.",
"title": ""
},
{
"docid": "3669d58dc1bed1d83e5d0d6747771f0e",
"text": "To cite: He A, Kwatra SG, Kim N, et al. BMJ Case Rep Published online: [please include Day Month Year] doi:10.1136/bcr-2016214761 DESCRIPTION A 26-year-old woman with a reported history of tinea versicolour presented for persistent hypopigmentation on her bilateral forearms. Detailed examination revealed multiple small (5–10 mm), irregularly shaped white macules on the extensor surfaces of the bilateral forearms overlying slightly erythaematous skin. The surrounding erythaematous skin blanched with pressure and with elevation of the upper extremities the white macules were no longer visible (figures 1 and 2). A clinical diagnosis of Bier spots was made based on the patient’s characteristic clinical features. Bier spots are completely asymptomatic and are often found on the extensor surfaces of the upper and lower extremities, although they are sometimes generalised. They are a benign physiological vascular anomaly, arising either from cutaneous vessels responding to venous hypertension or from small vessel vasoconstriction leading to tissue hypoxia. 3 Our patient had neither personal nor family history of vascular disease. Bier spots are easily diagnosed by a classic sign on physical examination: the pale macules disappear with pressure applied on the surrounding skin or by elevating the affected limbs (figure 2). However, Bier spots can be easily confused with a variety of other disorders associated with hypopigmented macules. The differential diagnosis includes vitiligo, postinflammatory hypopigmentation and tinea versicolour, which was a prior diagnosis in this case. Bier spots are often idiopathic and regress spontaneously, although there are reports of Bier spots heralding systemic diseases, such as scleroderma renal crisis, mixed cryoglobulinaemia or lymphoma. Since most Bier spots are idiopathic and transient, no treatment is required.",
"title": ""
},
{
"docid": "f331337a19cff2cf29e89a87d7ab234f",
"text": "This paper presents an investigation of lexical chaining (Morris and Hirst, 1991) for measuring discourse coherence quality in test-taker essays. We hypothesize that attributes of lexical chains, as well as interactions between lexical chains and explicit discourse elements, can be harnessed for representing coherence. Our experiments reveal that performance achieved by our new lexical chain features is better than that of previous discourse features used for this task, and that the best system performance is achieved when combining lexical chaining features with complementary discourse features, such as those provided by a discourse parser based on rhetorical structure theory, and features that reflect errors in grammar, word usage, and mechanics.",
"title": ""
}
] |
scidocsrr
|
4f70a710f54f5b340055d06c8d703ee6
|
Influence of immediate post-extraction socket irrigation on development of alveolar osteitis after mandibular third molar removal: a prospective split-mouth study, preliminary report
|
[
{
"docid": "accbfd3c4caade25329a2a5743559320",
"text": "PURPOSE\nThe purpose of this investigation was to assess the frequency of complications of third molar surgery, both intraoperatively and postoperatively, specifically for patients 25 years of age or older.\n\n\nMATERIALS AND METHODS\nThis prospective study evaluated 3,760 patients, 25 years of age or older, who were to undergo third molar surgery by oral and maxillofacial surgeons practicing in the United States. The predictor variables were categorized as demographic (age, gender), American Society of Anesthesiologists classification, chronic conditions and medical risk factors, and preoperative description of third molars (present or absent, type of impaction, abnormalities or association with pathology). Outcome variables were intraoperative and postoperative complications, as well as quality of life issues (days of work missed or normal activity curtailed). Frequencies for data collected were tabulated.\n\n\nRESULTS\nThe sample was provided by 63 surgeons, and was composed of 3,760 patients with 9,845 third molars who were 25 years of age or older, of which 8,333 third molars were removed. Alveolar osteitis was the most frequently encountered postoperative problem (0.2% to 12.7%). Postoperative inferior alveolar nerve anesthesia/paresthesia occurred with a frequency of 1.1% to 1.7%, while lingual nerve anesthesia/paresthesia was calculated as 0.3%. All other complications also occurred with a frequency of less than 1%.\n\n\nCONCLUSION\nThe findings of this study indicate that third molar surgery in patients 25 years of age or older is associated with minimal morbidity, a low incidence of postoperative complications, and minimal impact on the patients quality of life.",
"title": ""
}
] |
[
{
"docid": "23676a52e1ed03d7b5c751a9986a7206",
"text": "Considering the increasingly complex media landscape and diversity of use, it is important to establish a common ground for identifying and describing the variety of ways in which people use new media technologies. Characterising the nature of media-user behaviour and distinctive user types is challenging and the literature offers little guidance in this regard. Hence, the present research aims to classify diverse user behaviours into meaningful categories of user types, according to the frequency of use, variety of use and content preferences. To reach a common framework, a review of the relevant research was conducted. An overview and meta-analysis of the literature (22 studies) regarding user typology was established and analysed with reference to (1) method, (2) theory, (3) media platform, (4) context and year, and (5) user types. Based on this examination, a unified Media-User Typology (MUT) is suggested. This initial MUT goes beyond the current research literature, by unifying all the existing and various user type models. A common MUT model can help the Human–Computer Interaction community to better understand both the typical users and the diversification of media-usage patterns more qualitatively. Developers of media systems can match the users’ preferences more precisely based on an MUT, in addition to identifying the target groups in the developing process. Finally, an MUT will allow a more nuanced approach when investigating the association between media usage and social implications such as the digital divide. 2010 Elsevier Ltd. All rights reserved. 1 Difficulties in understanding media-usage behaviour have also arisen because of",
"title": ""
},
{
"docid": "c02f98ba21ed80995e810c77a6def394",
"text": "Forensic and Security Laboratory School of Computer Engineering, Nanyang Technological University, Block N4, Nanyang Avenue, Singapore 639798 Biometrics Research Centre Department of Computing, The Hong Kong Polytechnic University Kowloon, Hong Kong Pattern Analysis and Machine Intelligence Research Group Department of Electrical and Computer Engineering University of Waterloo, 200 University Avenue West, Ontario, Canada",
"title": ""
},
{
"docid": "85b1fe5c3d6d68791345d32eda99055b",
"text": "Surgery and other invasive therapies are complex interventions, the assessment of which is challenged by factors that depend on operator, team, and setting, such as learning curves, quality variations, and perception of equipoise. We propose recommendations for the assessment of surgery based on a five-stage description of the surgical development process. We also encourage the widespread use of prospective databases and registries. Reports of new techniques should be registered as a professional duty, anonymously if necessary when outcomes are adverse. Case series studies should be replaced by prospective development studies for early technical modifications and by prospective research databases for later pre-trial evaluation. Protocols for these studies should be registered publicly. Statistical process control techniques can be useful in both early and late assessment. Randomised trials should be used whenever possible to investigate efficacy, but adequate pre-trial data are essential to allow power calculations, clarify the definition and indications of the intervention, and develop quality measures. Difficulties in doing randomised clinical trials should be addressed by measures to evaluate learning curves and alleviate equipoise problems. Alternative prospective designs, such as interrupted time series studies, should be used when randomised trials are not feasible. Established procedures should be monitored with prospective databases to analyse outcome variations and to identify late and rare events. Achievement of improved design, conduct, and reporting of surgical research will need concerted action by editors, funders of health care and research, regulatory bodies, and professional societies.",
"title": ""
},
{
"docid": "d961378b22aae8d793b38c40b66318de",
"text": "Socio-economic hardships put children in an underprivileged position. This systematic review was conducted to identify factors linked to underachievement of disadvantaged pupils in school science and maths. What could be done as evidence-based practice to make the lives of these young people better? The protocol from preferred reporting items for systematic reviews and meta-analyses (PRISMA) was followed. Major electronic educational databases were searched. Papers meeting pre-defined selection criteria were identified. Studies included were mainly large-scale evaluations with a clearly defined comparator group and robust research design. All studies used a measure of disadvantage such as lower SES, language barrier, ethnic minority or temporary immigrant status and an outcome measure like attainment in standardised national tests. A majority of papers capable of answering the research question were correlational studies. The review reports findings from 771 studies published from 2005 to 2014 in English language. Thirtyfour studies were synthesised. Results suggest major factors linking deprivation to underachievement can be thematically categorised into a lack of positive environment and support. Recommendations from the research reports are discussed. Subjects: Behavioral Sciences; Education; International & Comparative Education; Social Sciences",
"title": ""
},
{
"docid": "9e4da48d0fa4c7ff9566f30b73da3dc3",
"text": "Yang Song; Robert van Boeschoten University of Amsterdam Plantage Muidergracht 12, 1018 TV Amsterdam, the Netherlands [email protected]; [email protected] Abstract: Crowdfunding has been used as one of the effective ways for entrepreneurs to raise funding especially in creative industries. Individuals as well as organizations are paying more attentions to the emergence of new crowdfunding platforms. In the Netherlands, the government is also trying to help artists access financial resources through crowdfunding platforms. This research aims at discovering the success factors for crowdfunding projects from both founders’ and funders’ perspective. We designed our own website for founders and funders to observe crowdfunding behaviors. We linked our self-designed website to Google analytics in order to collect our data. Our research will contribute to crowdfunding success factors and provide practical recommendations for practitioners and researchers.",
"title": ""
},
{
"docid": "9779c9f4f15d9977a20592cabb777059",
"text": "Expert search or recommendation involves the retrieval of people (experts) in response to a query and on occasion, a given set of constraints. In this paper, we address expert recommendation in academic domains that are different from web and intranet environments studied in TREC. We propose and study graph-based models for expertise retrieval with the objective of enabling search using either a topic (e.g. \"Information Extraction\") or a name (e.g. \"Bruce Croft\"). We show that graph-based ranking schemes despite being \"generic\" perform on par with expert ranking models specific to topic-based and name-based querying.",
"title": ""
},
{
"docid": "df6c7f13814178d7b34703757899d6b1",
"text": "Regression testing of natural language systems is problematic for two main reasons: component input and output is complex, and system behaviour is context-dependent. We have developed a generic approach which solves both of these issues. We describe our regression tool, CONTEST, which supports context-dependent testing of dialogue system components, and discuss the regression test sets we developed, designed to effectively isolate components from changes and problems earlier in the pipeline. We believe that the same approach can be used in regression testing for other dialogue systems, as well as in testing any complex NLP system containing multiple components.",
"title": ""
},
{
"docid": "7b5331b0e6ad693fc97f5f3b543bf00c",
"text": "Relational learning deals with data that are characterized by relational structures. An important task is collective classification, which is to jointly classify networked objects. While it holds a great promise to produce a better accuracy than noncollective classifiers, collective classification is computational challenging and has not leveraged on the recent breakthroughs of deep learning. We present Column Network (CLN), a novel deep learning model for collective classification in multirelational domains. CLN has many desirable theoretical properties: (i) it encodes multi-relations between any two instances; (ii) it is deep and compact, allowing complex functions to be approximated at the network level with a small set of free parameters; (iii) local and relational features are learned simultaneously; (iv) longrange, higher-order dependencies between instances are supported naturally; and (v) crucially, learning and inference are efficient, linear in the size of the network and the number of relations. We evaluate CLN on multiple real-world applications: (a) delay prediction in software projects, (b) PubMed Diabetes publication classification and (c) film genre classification. In all applications, CLN demonstrates a higher accuracy than state-of-the-art rivals.",
"title": ""
},
{
"docid": "ea6eecdaed8e76c28071ad1d9c1c39f9",
"text": "When it comes to taking the public transportation, time and patience are of essence. In other words, many people using public transport buses have experienced time loss because of waiting at the bus stops. In this paper, we proposed smart bus tracking system that any passenger with a smart phone or mobile device with the QR (Quick Response) code reader can scan QR codes placed at bus stops to view estimated bus arrival times, buses' current locations, and bus routes on a map. Anyone can access these maps and have the option to sign up to receive free alerts about expected bus arrival times for the interested buses and related routes via SMS and e-mails. We used C4.5 (a statistical classifier) algorithm for the estimation of bus arrival times to minimize the passengers waiting time. GPS (Global Positioning System) and Google Maps are used for navigation and display services, respectively.",
"title": ""
},
{
"docid": "35a85d6652bd333d93f8112aff83ab83",
"text": "For natural language understanding (NLU) technology to be maximally useful, both practically and as a scientific object of study, it must be general: it must be able to process language in a way that is not exclusively tailored to any one specific task or dataset. In pursuit of this objective, we introduce the General Language Understanding Evaluation benchmark (GLUE), a tool for evaluating and analyzing the performance of models across a diverse range of existing NLU tasks. GLUE is modelagnostic, but it incentivizes sharing knowledge across tasks because certain tasks have very limited training data. We further provide a hand-crafted diagnostic test suite that enables detailed linguistic analysis of NLU models. We evaluate baselines based on current methods for multi-task and transfer learning and find that they do not immediately give substantial improvements over the aggregate performance of training a separate model per task, indicating room for improvement in developing general and robust NLU systems.",
"title": ""
},
{
"docid": "587f6e73ca6653860cda66238d2ba146",
"text": "Cable-suspended robots are structurally similar to parallel actuated robots but with the fundamental difference that cables can only pull the end-effector but not push it. From a scientific point of view, this feature makes feedback control of cable-suspended robots more challenging than their counterpart parallel-actuated robots. In the case with redundant cables, feedback control laws can be designed to make all tensions positive while attaining desired control performance. This paper presents approaches to design positive tension controllers for cable suspended robots with redundant cables. Their effectiveness is demonstrated through simulations and experiments on a three degree-of-freedom cable suspended robots.",
"title": ""
},
{
"docid": "0829cf1fb1654525627fdc61d1814196",
"text": "The selection of indexing terms for representing documents is a key decision that limits how effective subsequent retrieval can be. Often stemming algorithms are used to normalize surface forms, and thereby address the problem of not finding documents that contain words related to query terms through infectional or derivational morphology. However, rule-based stemmers are not available for every language and it is unclear which methods for coping with morphology are most effective. In this paper we investigate an assortment of techniques for representing text and compare these approaches using data sets in eighteen languages and five different writing systems.\n We find character n-gram tokenization to be highly effective. In half of the languages examined n-grams outperform unnormalized words by more than 25%; in highly infective languages relative improvements over 50% are obtained. In languages with less morphological richness the choice of tokenization is not as critical and rule-based stemming can be an attractive option, if available. We also conducted an experiment to uncover the source of n-gram power and a causal relationship between the morphological complexity of a language and n-gram effectiveness was demonstrated.",
"title": ""
},
{
"docid": "bc7c5ab8ec28e9a5917fc94b776b468a",
"text": "Reasonable house price prediction is a meaningful task, and the house clustering is an important process in the prediction. In this paper, we propose the method of Multi-Scale Affinity Propagation(MSAP) aggregating the house appropriately by the landmark and the facility. Then in each cluster, using Linear Regression model with Normal Noise(LRNN) predicts the reasonable price, which is verified by the increasing number of the renting reviews. Experiments show that the precision of the reasonable price prediction improved greatly via the method of MSAP.",
"title": ""
},
{
"docid": "592ceee67b3f8b3e8333cb104f56bd2f",
"text": "The goal of this paper is to study the team formation of multiple UAVs and UGVs for collaborative surveillance and crowd control under uncertain scenarios (e.g. crowd splitting). A comprehensive and coherent dynamic data driven adaptive multi-scale simulation (DDDAMS) framework is adopted, with the focus on simulation-based planning and control strategies related to the surveillance problem considered in this paper. To enable the team formation of multiple UAVs and UGVs, a two stage approach involving 1) crowd clustering and 2) UAV/UGV team assignment is proposed during the system operations by considering the geometry of the crowd clusters and solving a multi-objective optimization problem. For the experiment, an integrated testbed has been developed based on agent-based hardware-in-the-loop simulation involving seamless communications among simulated and real vehicles. Preliminary results indicate the effectiveness and efficiency of the proposed approach for the team formation of multiple UAVs and UGVs.",
"title": ""
},
{
"docid": "aa69409c1bddc7693ba2ed36206ac767",
"text": "Popularity of data-driven software engineering has led to an increasing demand on the infrastructures to support efficient execution of tasks that require deeper source code analysis. While task optimization and parallelization are the adopted solutions, other research directions are less explored. We present collective program analysis (CPA), a technique for scaling large scale source code analyses, especially those that make use of control and data flow analysis, by leveraging analysis specific similarity. Analysis specific similarity is about, whether two or more programs can be considered similar for a given analysis. The key idea of collective program analysis is to cluster programs based on analysis specific similarity, such that running the analysis on one candidate in each cluster is sufficient to produce the result for others. For determining analysis specific similarity and clustering analysis-equivalent programs, we use a sparse representation and a canonical labeling scheme. Our evaluation shows that for a variety of source code analyses on a large dataset of programs, substantial reduction in the analysis time can be achieved; on average a 69% reduction when compared to a baseline and on average a 36% reduction when compared to a prior technique. We also found that a large amount of analysis-equivalent programs exists in large datasets.",
"title": ""
},
{
"docid": "8f570416ceecf87310b7780ec935d814",
"text": "BACKGROUND\nInguinal lymph node involvement is an important prognostic factor in penile cancer. Inguinal lymph node dissection allows staging and treatment of inguinal nodal disease. However, it causes morbidity and is associated with complications, such as lymphocele, skin loss and infection. Video Endoscopic Inguinal Lymphadenectomy (VEIL) is an endoscopic procedure, and it seems to be a new and attractive approach duplicating the standard open procedure with less morbidity. We present here a critical perioperative assessment with points of technique.\n\n\nMETHODS\nTen patients with moderate to high grade penile carcinoma with clinically negative inguinal lymph nodes were subjected to elective VEIL. VEIL was done in standard surgical steps. Perioperative parameters were assessed that is - duration of the surgery, lymph-related complications, time until drain removal, lymph node yield, surgical emphysema and histopathological positivity of lymph nodes.\n\n\nRESULTS\nOperative time for VEIL was 120 to 180 minutes. Lymph node yield was 7 to 12 lymph nodes. No skin related complications were seen with VEIL. Lymph related complications, that is, lymphocele, were seen in only two patients. The suction drain was removed after four to eight days (mean 5.1). Overall morbidity was 20% with VEIL.\n\n\nCONCLUSION\nIn our early experience, VEIL was a safe and feasible technique in patients with penile carcinoma with non palpable inguinal lymph nodes. It allows the removal of inguinal lymph nodes within the same limits as in conventional surgical dissection and potentially reduces surgical morbidity.",
"title": ""
},
{
"docid": "f1ae820d7e067dabfda5efc1229762d8",
"text": "Data from 574 participants were used to assess perceptions of message, site, and sponsor credibility across four genres of websites; to explore the extent and effects of verifying web-based information; and to measure the relative influence of sponsor familiarity and site attributes on perceived credibility.The results show that perceptions of credibility differed, such that news organization websites were rated highest and personal websites lowest, in terms of message, sponsor, and overall site credibility, with e-commerce and special interest sites rated between these, for the most part.The results also indicated that credibility assessments appear to be primarily due to website attributes (e.g. design features, depth of content, site complexity) rather than to familiarity with website sponsors. Finally, there was a negative relationship between self-reported and observed information verification behavior and a positive relationship between self-reported verification and internet/web experience. The findings are used to inform the theoretical development of perceived web credibility. 319 new media & society Copyright © 2007 SAGE Publications Los Angeles, London, New Delhi and Singapore Vol9(2):319–342 [DOI: 10.1177/1461444807075015] ARTICLE 319-342 NMS-075015.qxd 9/3/07 11:54 AM Page 319 © 2007 SAGE Publications. All rights reserved. Not for commercial use or unauthorized distribution. at Universiteit van Amsterdam SAGE on April 25, 2007 http://nms.sagepub.com Downloaded from",
"title": ""
},
{
"docid": "18ce27c1840596779805efaeec18f3ed",
"text": "Accurate inversion of land surface geo/biophysical variables from remote sensing data for earth observation applications is an essential and challenging topic for the global change research. Land surface temperature (LST) is one of the key parameters in the physics of earth surface processes from local to global scales. The importance of LST is being increasingly recognized and there is a strong interest in developing methodologies to measure LST from the space. Landsat 8 Thermal Infrared Sensor (TIRS) is the newest thermal infrared sensor for the Landsat project, providing two adjacent thermal bands, which has a great benefit for the LST inversion. In this paper, we compared three different approaches for LST inversion from TIRS, including the radiative transfer equation-based method, the split-window algorithm and the single channel method. Four selected energy balance monitoring sites from the Surface Radiation Budget Network (SURFRAD) were used for validation, combining with the MODIS 8 day emissivity product. For the investigated sites and scenes, results show that the LST inverted from the radiative transfer equation-based method using band 10 has the highest accuracy with RMSE lower than 1 K, while the SW algorithm has moderate accuracy and the SC method has the lowest accuracy. OPEN ACCESS Remote Sens. 2014, 6 9830",
"title": ""
},
{
"docid": "ba66e377db4ef2b3c626a0a2f19da8c3",
"text": "A challenging aspect of scene text recognition is to handle text with distortions or irregular layout. In particular, perspective text and curved text are common in natural scenes and are difficult to recognize. In this work, we introduce ASTER, an end-to-end neural network model that comprises a rectification network and a recognition network. The rectification network adaptively transforms an input image into a new one, rectifying the text in it. It is powered by a flexible Thin-Plate Spline transformation which handles a variety of text irregularities and is trained without human annotations. The recognition network is an attentional sequence-to-sequence model that predicts a character sequence directly from the rectified image. The whole model is trained end to end, requiring only images and their groundtruth text. Through extensive experiments, we verify the effectiveness of the rectification and demonstrate the state-of-the-art recognition performance of ASTER. Furthermore, we demonstrate that ASTER is a powerful component in end-to-end recognition systems, for its ability to enhance the detector.",
"title": ""
}
] |
scidocsrr
|
cd29f37bb07b52331b86fb689077b87f
|
What is the right way to represent document images?
|
[
{
"docid": "dc424d2dc407e504d962c557325f035e",
"text": "Document image classification is an important step in Office Automation, Digital Libraries, and other document image analysis applications. There is great diversity in document image classifiers: they differ in the problems they solve, in the use of training data to construct class models, and in the choice of document features and classification algorithms. We survey this diverse literature using three components: the problem statement, the classifier architecture, and performance evaluation. This brings to light important issues in designing a document classifier, including the definition of document classes, the choice of document features and feature representation, and the choice of classification algorithm and learning mechanism. We emphasize techniques that classify single-page typeset document images without using OCR results. Developing a general, adaptable, high-performance classifier is challenging due to the great variety of documents, the diverse criteria used to define document classes, and the ambiguity that arises due to ill-defined or fuzzy document classes.",
"title": ""
},
{
"docid": "8ccb6c767704bc8aee424d17cf13d1e3",
"text": "In this paper, we present a page classification application in a banking workflow. The proposed architecture represents administrative document images by merging visual and textual descriptions. The visual description is based on a hierarchical representation of the pixel intensity distribution. The textual description uses latent semantic analysis to represent document content as a mixture of topics. Several off-the-shelf classifiers and different strategies for combining visual and textual cues have been evaluated. A final step uses an $$n$$ n -gram model of the page stream allowing a finer-grained classification of pages. The proposed method has been tested in a real large-scale environment and we report results on a dataset of 70,000 pages.",
"title": ""
}
] |
[
{
"docid": "bc49930fa967b93ed1e39b3a45237652",
"text": "In gene expression data, a bicluster is a subset of the genes exhibiting consistent patterns over a subset of the conditions. We propose a new method to detect significant biclusters in large expression datasets. Our approach is graph theoretic coupled with statistical modelling of the data. Under plausible assumptions, our algorithm is polynomial and is guaranteed to find the most significant biclusters. We tested our method on a collection of yeast expression profiles and on a human cancer dataset. Cross validation results show high specificity in assigning function to genes based on their biclusters, and we are able to annotate in this way 196 uncharacterized yeast genes. We also demonstrate how the biclusters lead to detecting new concrete biological associations. In cancer data we are able to detect and relate finer tissue types than was previously possible. We also show that the method outperforms the biclustering algorithm of Cheng and Church (2000).",
"title": ""
},
{
"docid": "721a64c9a5523ba836318edcdb8de021",
"text": "Highly-produced audio stories often include musical scores that reflect the emotions of the speech. Yet, creating effective musical scores requires deep expertise in sound production and is time-consuming even for experts. We present a system and algorithm for re-sequencing music tracks to generate emotionally relevant music scores for audio stories. The user provides a speech track and music tracks and our system gathers emotion labels on the speech through hand-labeling, crowdsourcing, and automatic methods. We develop a constraint-based dynamic programming algorithm that uses these emotion labels to generate emotionally relevant musical scores. We demonstrate the effectiveness of our algorithm by generating 20 musical scores for audio stories and showing that crowd workers rank their overall quality significantly higher than stories without music.",
"title": ""
},
{
"docid": "39c2c3e7f955425cd9aaad1951d13483",
"text": "This paper proposes a novel nature-inspired algorithm called Multi-Verse Optimizer (MVO). The main inspirations of this algorithm are based on three concepts in cosmology: white hole, black hole, and wormhole. The mathematical models of these three concepts are developed to perform exploration, exploitation, and local search, respectively. The MVO algorithm is first benchmarked on 19 challenging test problems. It is then applied to five real engineering problems to further confirm its performance. To validate the results, MVO is compared with four well-known algorithms: Grey Wolf Optimizer, Particle Swarm Optimization, Genetic Algorithm, and Gravitational Search Algorithm. The results prove that the proposed algorithm is able to provide very competitive results and outperforms the best algorithms in the literature on the majority of the test beds. The results of the real case studies also demonstrate the potential of MVO in solving real problems with unknown search spaces. Note that the source codes of the proposed MVO algorithm are publicly available at http://www.alimirjalili.com/MVO.html .",
"title": ""
},
{
"docid": "f1dd866b1cdd79716f2bbc969c77132a",
"text": "Fiber optic sensor technology offers the possibility of sensing different parameters like strain, temperature, pressure in harsh environment and remote locations. these kinds of sensors modulates some features of the light wave in an optical fiber such an intensity and phase or use optical fiber as a medium for transmitting the measurement information. The advantages of fiber optic sensors in contrast to conventional electrical ones make them popular in different applications and now a day they consider as a key component in improving industrial processes, quality control systems, medical diagnostics, and preventing and controlling general process abnormalities. This paper is an introduction to fiber optic sensor technology and some of the applications that make this branch of optic technology, which is still in its early infancy, an interesting field. Keywords—Fiber optic sensors, distributed sensors, sensor application, crack sensor.",
"title": ""
},
{
"docid": "6fb006066fa1a25ae348037aa1ee7be3",
"text": "Reducing redundancy in data representation leads to decreased data storage requirements and lower costs for data communication.",
"title": ""
},
{
"docid": "755f2d11ad9653806f26e5ae7beaf49b",
"text": "Deep Neural Networks (DNNs) have shown remarkable success in pattern recognition tasks. However, parallelizing DNN training across computers has been difficult. We present the Deep Stacking Network (DSN), which overcomes the problem of parallelizing learning algorithms for deep architectures. The DSN provides a method of stacking simple processing modules in buiding deep architectures, with a convex learning problem in each module. Additional fine tuning further improves the DSN, while introducing minor non-convexity. Full learning in the DSN is batch-mode, making it amenable to parallel training over many machines and thus be scalable over the potentially huge size of the training data. Experimental results on both the MNIST (image) and TIMIT (speech) classification tasks demonstrate that the DSN learning algorithm developed in this work is not only parallelizable in implementation but it also attains higher classification accuracy than the DNN.",
"title": ""
},
{
"docid": "332acb4b9ad2b278ff2af20399cf85e7",
"text": "The Character recognition is one of the most important areas in the field of pattern recognition. Recently Indian Handwritten character recognition is getting much more attention and researchers are contributing a lot in this field. But Malayalam, a South Indian language has very less works in this area and needs further attention. Malayalam OCR is a complex task owing to the various character scripts available and more importantly the difference in ways in which the characters are written. The dimensions are never the same and may be never mapped on to a square grid unlike English characters. Selection of a feature extraction method is the most important factor in achieving high recognition performance in character recognition systems. Different feature extraction methods are designed for different representation of characters. As an important component of pattern recognition, feature extraction has been paid close attention by many scholars, and currently has become one of the research hot spots in the field of pattern recognition. This article gives a general discussion of feature extraction techniques used in handwritten character recognition of other Indian languages and some of them are implemented for Malayalam handwritten characters.",
"title": ""
},
{
"docid": "a38cf37fc60e1322e391680037ff6d4e",
"text": "Robot-aided gait training is an emerging clinical tool for gait rehabilitation of neurological patients. This paper deals with a novel method of offering gait assistance, using an impedance controlled exoskeleton (LOPES). The provided assistance is based on a recent finding that, in the control of walking, different modules can be discerned that are associated with different subtasks. In this study, a Virtual Model Controller (VMC) for supporting one of these subtasks, namely the foot clearance, is presented and evaluated. The developed VMC provides virtual support at the ankle, to increase foot clearance. Therefore, we first developed a new method to derive reference trajectories of the ankle position. These trajectories consist of splines between key events, which are dependent on walking speed and body height. Subsequently, the VMC was evaluated in twelve healthy subjects and six chronic stroke survivors. The impedance levels, of the support, were altered between trials to investigate whether the controller allowed gradual and selective support. Additionally, an adaptive algorithm was tested, that automatically shaped the amount of support to the subjects’ needs. Catch trials were introduced to determine whether the subjects tended to rely on the support. We also assessed the additional value of providing visual feedback. With the VMC, the step height could be selectively and gradually influenced. The adaptive algorithm clearly shaped the support level to the specific needs of every stroke survivor. The provided support did not result in reliance on the support for both groups. All healthy subjects and most patients were able to utilize the visual feedback to increase their active participation. The presented approach can provide selective control on one of the essential subtasks of walking. This module is the first in a set of modules to control all subtasks. This enables the therapist to focus the support on the subtasks that are impaired, and leave the other subtasks up to the patient, encouraging him to participate more actively in the training. Additionally, the speed-dependent reference patterns provide the therapist with the tools to easily adapt the treadmill speed to the capabilities and progress of the patient.",
"title": ""
},
{
"docid": "594110683d0d38ba7fd7345a8c24fa81",
"text": "Wayfinding in the public transportation infrastructure takes place on traffic networks. These consist of lines that are interconnected at nodes. The network is the basis for routing decisions; it is usually presented in maps and through digital interfaces. But to the traveller, the stops and stations that make up the nodes are at least as important as the network, for it is there that the complexity of the system is experienced. These observations suggest that there are two cognitively different environments involved, which we will refer to as network space and scene space. Network space consists of the public transport network. Scene space consists of the environment at the nodes of the public transport system, through which travellers enter and leave the system and in which they change means of transport. We explore properties of the two types of spaces and how they interact to assist wayfinding. We also show how they can be modelled: for network space, graphs can be used; for scene space we propose a novel model based on cognitive schemata and",
"title": ""
},
{
"docid": "7bd539fecbfec5db45b0f2b52cec23a7",
"text": "In this paper, we consider the restoration of images with signal-dependent noise. The filter is noise smoothing and adapts to local changes in image statistics based on a nonstationary mean, nonstationary variance (NMNV) image model. For images degraded by a class of uncorrelated, signal-dependent noise without blur, the adaptive noise smoothing filter becomes a point processor and is similar to Lee's local statistics algorithm [16]. The filter is able to adapt itself to the nonstationary local image statistics in the presence of different types of signal-dependent noise. For multiplicative noise, the adaptive noise smoothing filter is a systematic derivation of Lee's algorithm with some extensions that allow different estimators for the local image variance. The advantage of the derivation is its easy extension to deal with various types of signal-dependent noise. Film-grain and Poisson signal-dependent restoration problems are also considered as examples. All the nonstationary image statistical parameters needed for the filter can be estimated from the noisy image and no a priori information about the original image is required.",
"title": ""
},
{
"docid": "0ce57a66924192a50728fb67023e0ed2",
"text": "Most studies on TCP over multi-hop wireless ad hoc networks have only addressed the issue of performance degradation due to temporarily broken routes, which results in TCP inability to distinguish between losses due to link failures or congestion. This problem tends to become more serious as network mobility increases. In this work, we tackle the equally important capture problem to which there has been little or no solution, and is present mostly in static and low mobility multihop wireless networks. This is a result of the interplay between the MAC layer and TCP backoff policies, which causes nodes to unfairly capture the wireless shared medium, hence preventing neighboring nodes to access the channel. This has been shown to have major negative effects on TCP performance comparable to the impact of mobility. We propose a novel algorithm, called COPAS (COntention-based PAth Selection), which incorporates two mechanisms to enhance TCP performance by avoiding capture conditions. First, it uses disjoint forward (sender to receiver for TCP data) and reverse (receiver to sender for TCP ACKs) paths in order to minimize the conflicts of TCP data and ACK packets. Second, COPAS employs a dynamic contentionbalancing scheme where it continuously monitors and changes forward and reverse paths according to the level of MAC layer contention, hence minimizing the likelihood of capture. Through extensive simulation, COPAS is shown to improve TCP throughput by up to 90% while keeping routing overhead low.",
"title": ""
},
{
"docid": "cbb6c80bc986b8b1e1ed3e70abb86a79",
"text": "CD44 is a cell surface adhesion receptor that is highly expressed in many cancers and regulates metastasis via recruitment of CD44 to the cell surface. Its interaction with appropriate extracellular matrix ligands promotes the migration and invasion processes involved in metastases. It was originally identified as a receptor for hyaluronan or hyaluronic acid and later to several other ligands including, osteopontin (OPN), collagens, and matrix metalloproteinases. CD44 has also been identified as a marker for stem cells of several types. Beside standard CD44 (sCD44), variant (vCD44) isoforms of CD44 have been shown to be created by alternate splicing of the mRNA in several cancer. Addition of new exons into the extracellular domain near the transmembrane of sCD44 increases the tendency for expressing larger size vCD44 isoforms. Expression of certain vCD44 isoforms was linked with progression and metastasis of cancer cells as well as patient prognosis. The expression of CD44 isoforms can be correlated with tumor subtypes and be a marker of cancer stem cells. CD44 cleavage, shedding, and elevated levels of soluble CD44 in the serum of patients is a marker of tumor burden and metastasis in several cancers including colon and gastric cancer. Recent observations have shown that CD44 intracellular domain (CD44-ICD) is related to the metastatic potential of breast cancer cells. However, the underlying mechanisms need further elucidation.",
"title": ""
},
{
"docid": "b27f43bf472e44cf393d21781c3341cd",
"text": "A massive hybrid array consists of multiple analog subarrays, with each subarray having its digital processing chain. It offers the potential advantage of balancing cost and performance for massive arrays and therefore serves as an attractive solution for future millimeter-wave (mm- Wave) cellular communications. On one hand, using beamforming analog subarrays such as phased arrays, the hybrid configuration can effectively collect or distribute signal energy in sparse mm-Wave channels. On the other hand, multiple digital chains in the configuration provide multiplexing capability and more beamforming flexibility to the system. In this article, we discuss several important issues and the state-of-the-art development for mm-Wave hybrid arrays, such as channel modeling, capacity characterization, applications of various smart antenna techniques for single-user and multiuser communications, and practical hardware design. We investigate how the hybrid array architecture and special mm-Wave channel property can be exploited to design suboptimal but practical massive antenna array schemes. We also compare two main types of hybrid arrays, interleaved and localized arrays, and recommend that the localized array is a better option in terms of overall performance and hardware feasibility.",
"title": ""
},
{
"docid": "2613ec5a77cfe296f7d16340ce133c27",
"text": "Learned feature representations and sub-phoneme posterio r from Deep Neural Networks (DNNs) have been used separately to produce significant performance gains for speaker and language recognition tasks. In this work we show how these gains are possible using a single DNN for both speaker and language recognition. The unified DNN approach is shown to yield substantial performance improvements on the the 2013 Domain Adaptation Challenge speaker recognition task (55% reduction in EER for the out-of-domain condition) and on the NIST 2011 Language Recognition Evaluation (48% reduction in EER for the 30s test condition).",
"title": ""
},
{
"docid": "11644dafde30ee5608167c04cb1f511c",
"text": "Dynamic Adaptive Streaming over HTTP (DASH) enables the video player to adapt the bitrate of the video while streaming to ensure playback without interruptions even with varying throughput. A DASH server hosts multiple representations of the same video, each of which is broken down into small segments of fixed playback duration. The video bitrate adaptation is purely driven by the player at the endhost. Typically, the player employs an Adaptive Bitrate (ABR) algorithm, that determines the most appropriate representation for the next segment to be downloaded, based on the current network conditions and user preferences. The aim of an ABR algorithm is to dynamically manage the Quality of Experience (QoE) of the user during the playback. ABR algorithms manage the QoE by maximizing the bitrate while at the same time trying to minimize the other QoE metrics: playback start time, duration and number of buffering events, and the number of bitrate switching events. Typically, the ABR algorithms manage the QoE by using the measured network throughput and buffer occupancy to adapt the playback bitrate. However, due to the video encoding schemes employed, the sizes of the individual segments may vary significantly. For low bandwidth networks, fluctuation in the segment sizes results in inaccurate estimation the expected segment fetch times, thereby resulting in inaccurate estimation of the optimum bitrate. In this paper we demonstrate how the Segment-Aware Rate Adaptation (SARA) algorithm, that considers the measured throughput, buffer occupancy, and the variation in segment sizes helps in better management of the users' QoE in a DASH system. By comparing with a typical throughput-based and buffer-based adaptation algorithm under varying network conditions, we demonstrate that SARA manages the QoE better, especially in a low bandwidth network. We also developed AStream, an open-source Python-based emulated DASH-video player that was used to evaluate three different ABR algorithms and measure the QoE metrics with each of them.",
"title": ""
},
{
"docid": "06525bcc03586c8d319f5d6f1d95b852",
"text": "Many different automatic color correction approaches have been proposed by different research communities in the past decade. However, these approaches are seldom compared, so their relative performance and applicability are unclear. For multi-view image and video stitching applications, an ideal color correction approach should be effective at transferring the color palette of the source image to the target image, and meanwhile be able to extend the transferred color from the overlapped area to the full target image without creating visual artifacts. In this paper we evaluate the performance of color correction approaches for automatic multi-view image and video stitching. We consider nine color correction algorithms from the literature applied to 40 synthetic image pairs and 30 real mosaic image pairs selected from different applications. Experimental results show that both parametric and non-parametric approaches have members that are effective at transferring colors, while parametric approaches are generally better than non-parametric approaches in extendability.",
"title": ""
},
{
"docid": "924146534d348e7a44970b1d78c97e9c",
"text": "Little is known of the extent to which heterosexual couples are satisfied with their current frequency of sex and the degree to which this predicts overall sexual and relationship satisfaction. A population-based survey of 4,290 men and 4,366 women was conducted among Australians aged 16 to 64 years from a range of sociodemographic backgrounds, of whom 3,240 men and 3,304 women were in regular heterosexual relationships. Only 46% of men and 58% of women were satisfied with their current frequency of sex. Dissatisfied men were overwhelmingly likely to desire sex more frequently; among dissatisfied women, only two thirds wanted sex more frequently. Age was a significant factor but only for men, with those aged 35-44 years tending to be least satisfied. Men and women who were dissatisfied with their frequency of sex were also more likely to express overall lower sexual and relationship satisfaction. The authors' findings not only highlight desired frequency of sex as a major factor in satisfaction, but also reveal important gender and other sociodemographic differences that need to be taken into account by researchers and therapists seeking to understand and improve sexual and relationship satisfaction among heterosexual couples. Other issues such as length of time spent having sex and practices engaged in may also be relevant, particularly for women.",
"title": ""
},
{
"docid": "a0358cfc6166fbd45d35cbb346c56b7a",
"text": "a Pontificia Universidad Católica de Valparaíso, Av. Brasil 2950, Valparaíso, Chile b Universidad Autónoma de Chile, Av. Pedro de Valdivia 641, Santiago, Chile c Universidad Finis Terrae, Av. Pedro de Valdivia 1509, Santiago, Chile d CNRS, LINA, University of Nantes, 2 rue de la Houssinière, Nantes, France e Escuela de Ingeniería Industrial, Universidad Diego Portales, Manuel Rodríguez Sur 415, Santiago, Chile",
"title": ""
},
{
"docid": "815098e9ed06dfa5335f0c2c595f4059",
"text": "Effectively managing risk is an essential element of successful project management. It is imperative that project management team consider all possible risks to establish corrective actions in the right time. So far, several techniques have been proposed for project risk analysis. Failure Mode and Effect Analysis (FMEA) is recognized as one of the most useful techniques in this field. The main goal is identifying all failure modes within a system, assessing their impact, and planning for corrective actions. In traditional FMEA, the risk priorities of failure modes are determined by using Risk Priority Numbers (RPN), which can be obtained by multiplying the scores of risk factors like occurrence (O), severity (S), and detection (D). This technique has some limitations, though in this paper, Fuzzy logic and Analytical Hierarchy Process (AHP) are used to address the limitations of traditional FMEA. Linguistic variables, expressed in fuzzy numbers, are used to assess the ratings of risk factors O, S, and D. Each factor consists of seven membership functions and on the whole there are 343 rules for fuzzy system. The analytic hierarchy process (AHP) is applied to determine the relative weightings of risk impacts on time, cost, quality and safety. A case study is presented to validate the concept. The feedbacks are showing the advantages of the proposed approach in project risk management.",
"title": ""
},
{
"docid": "a769b8f56d699b3f6eca54aeeb314f84",
"text": "Assistive mobile robots that autonomously manipulate objects within everyday settings have the potential to improve the lives of the elderly, injured, and disabled. Within this paper, we present the most recent version of the assistive mobile manipulator EL-E with a focus on the subsystem that enables the robot to retrieve objects from and deliver objects to flat surfaces. Once provided with a 3D location via brief illumination with a laser pointer, the robot autonomously approaches the location and then either grasps the nearest object or places an object. We describe our implementation in detail, while highlighting design principles and themes, including the use of specialized behaviors, task-relevant features, and low-dimensional representations. We also present evaluations of EL-E’s performance relative to common forms of variation. We tested EL-E’s ability to approach and grasp objects from the 25 object categories that were ranked most important for robotic retrieval by motor-impaired patients from the Emory ALS Center. Although reliability varied, EL-E succeeded at least once with objects from 21 out of 25 of these categories. EL-E also approached and grasped a cordless telephone on 12 different surfaces including floors, tables, and counter tops with 100% success. The same test using a vitamin pill (ca. 15mm ×5mm ×5mm) resulted in 58% success.",
"title": ""
}
] |
scidocsrr
|
ef40484cb8399d22d793fb4cb714570b
|
Competition in the Cryptocurrency Market
|
[
{
"docid": "f6fc0992624fd3b3e0ce7cc7fc411154",
"text": "Digital currencies are a globally spreading phenomenon that is frequently and also prominently addressed by media, venture capitalists, financial and governmental institutions alike. As exchange prices for Bitcoin have reached multiple peaks within 2013, we pose a prevailing and yet academically unaddressed question: What are users' intentions when changing their domestic into a digital currency? In particular, this paper aims at giving empirical insights on whether users’ interest regarding digital currencies is driven by its appeal as an asset or as a currency. Based on our evaluation, we find strong indications that especially uninformed users approaching digital currencies are not primarily interested in an alternative transaction system but seek to participate in an alternative investment vehicle.",
"title": ""
},
{
"docid": "165aa4bad30a95866be4aff878fbd2cf",
"text": "This paper reviews some recent developments in digital currency, focusing on platform-sponsored currencies such as Facebook Credits. In a model of platform management, we find that it will not likely be profitable for such currencies to expand to become fully convertible competitors to state-sponsored currencies. JEL Classification: D42, E4, L51 Bank Classification: bank notes, economic models, payment clearing and settlement systems * Rotman School of Management, University of Toronto and NBER (Gans) and Bank of Canada (Halaburda). The views here are those of the authors and no responsibility for them should be attributed to the Bank of Canada. We thank participants at the NBER Economics of Digitization Conference, Warren Weber and Glen Weyl for helpful comments on an earlier draft of this paper. Please send any comments to [email protected].",
"title": ""
}
] |
[
{
"docid": "bbb91ddd9df0d5f38b8c1317a8e84f60",
"text": "Poisson regression model is widely used in software quality modeling. W h e n the response variable of a data set includes a large number of zeros, Poisson regression model will underestimate the probability of zeros. A zero-inflated model changes the mean structure of the pure Poisson model. The predictive quality is therefore improved. I n this paper, we examine a full-scale industrial software system and develop two models, Poisson regression and zero-inflated Poisson regression. To our knowledge, this is the first study that introduces the zero-inflated Poisson regression model in software reliability. Comparing the predictive qualities of the two competing models, we conclude that for this system, the zero-inflated Poisson regression model is more appropriate in theory and practice.",
"title": ""
},
{
"docid": "7d197033396c7a55593da79a5a70fa96",
"text": "1. Introduction Fundamental questions about weighting (Fig 1) seem to be ~ most common during the analysis of survey data and I encounter them almost every week. Yet we \"lack a single, reasonably comprehensive, introductory explanation of the process of weighting\" [Sharot 1986], readily available to and usable by survey practitioners, who are looking for simple guidance, and this paper aims to meet some of that need. Some partial treatments have appeared in the survey literature [e.g., Kish 1965], but the topic seldom appears even in the indexes. However, we can expect growing interest, as witnessed by six publications since 1987 listed in the references.",
"title": ""
},
{
"docid": "4690d2b1dbde438329644b3e76b6427f",
"text": "In this work, we investigate how illuminant estimation can be performed exploiting the color statistics extracted from the faces automatically detected in the image. The proposed method is based on two observations: first, skin colors tend to form a cluster in the color space, making it a cue to estimate the illuminant in the scene; second, many photographic images are portraits or contain people. The proposed method has been tested on a public dataset of images in RAW format, using both a manual and a real face detector. Experimental results demonstrate the effectiveness of our approach. The proposed method can be directly used in many digital still camera processing pipelines with an embedded face detector working on gray level images.",
"title": ""
},
{
"docid": "0c9a76222f885b95f965211e555e16cd",
"text": "In this paper we address the following question: “Can we approximately sample from a Bayesian posterior distribution if we are only allowed to touch a small mini-batch of data-items for every sample we generate?”. An algorithm based on the Langevin equation with stochastic gradients (SGLD) was previously proposed to solve this, but its mixing rate was slow. By leveraging the Bayesian Central Limit Theorem, we extend the SGLD algorithm so that at high mixing rates it will sample from a normal approximation of the posterior, while for slow mixing rates it will mimic the behavior of SGLD with a pre-conditioner matrix. As a bonus, the proposed algorithm is reminiscent of Fisher scoring (with stochastic gradients) and as such an efficient optimizer during burn-in.",
"title": ""
},
{
"docid": "6eda7075de9d47851b2b5be026af7d84",
"text": "Maintaining consistent styles across glyphs is an arduous task in typeface design. In this work we introduce FlexyFont, a flexible tool for synthesizing a complete typeface that has a consistent style with a given small set of glyphs. Motivated by a key fact that typeface designers often maintain a library of glyph parts to achieve a consistent typeface, we intend to learn part consistency between glyphs of different characters across typefaces. We take a part assembling approach by firstly decomposing the given glyphs into semantic parts and then assembling them according to learned sets of transferring rules to reconstruct the missing glyphs. To maintain style consistency, we represent the style of a font as a vector of pairwise part similarities. By learning a distribution over these feature vectors, we are able to predict the style of a novel typeface given only a few examples. We utilize a popular machine learning method as well as retrieval-based methods to quantitatively assess the performance of our feature vector, resulting in favorable results. We also present an intuitive interface that allows users to interactively create novel typefaces with ease. The synthesized fonts can be directly used in real-world design.",
"title": ""
},
{
"docid": "2f471c24ccb38e70627eba6383c003e0",
"text": "We present an algorithm that enables casual 3D photography. Given a set of input photos captured with a hand-held cell phone or DSLR camera, our algorithm reconstructs a 3D photo, a central panoramic, textured, normal mapped, multi-layered geometric mesh representation. 3D photos can be stored compactly and are optimized for being rendered from viewpoints that are near the capture viewpoints. They can be rendered using a standard rasterization pipeline to produce perspective views with motion parallax. When viewed in VR, 3D photos provide geometrically consistent views for both eyes. Our geometric representation also allows interacting with the scene using 3D geometry-aware effects, such as adding new objects to the scene and artistic lighting effects.\n Our 3D photo reconstruction algorithm starts with a standard structure from motion and multi-view stereo reconstruction of the scene. The dense stereo reconstruction is made robust to the imperfect capture conditions using a novel near envelope cost volume prior that discards erroneous near depth hypotheses. We propose a novel parallax-tolerant stitching algorithm that warps the depth maps into the central panorama and stitches two color-and-depth panoramas for the front and back scene surfaces. The two panoramas are fused into a single non-redundant, well-connected geometric mesh. We provide videos demonstrating users interactively viewing and manipulating our 3D photos.",
"title": ""
},
{
"docid": "21a2347f9bb5b5638d63239b37c9d0e6",
"text": "This paper presents new circuits for realizing both current-mode and voltage-mode proportional-integralderivative (PID), proportional-derivative (PD) and proportional-integral (PI) controllers employing secondgeneration current conveyors (CCIIs) as active elements. All of the proposed PID, PI and PD controllers have grounded passive elements and adjustable parameters. The controllers employ reduced number of active and passive components with respect to the traditional op-amp-based PID, PI and PD controllers. A closed loop control system using the proposed PID controller is designed and simulated with SPICE.",
"title": ""
},
{
"docid": "5297929e65e662360d8ff262e877b08a",
"text": "Frontal electroencephalographic (EEG) alpha asymmetry is widely researched in studies of emotion, motivation, and psychopathology, yet it is a metric that has been quantified and analyzed using diverse procedures, and diversity in procedures muddles cross-study interpretation. The aim of this article is to provide an updated tutorial for EEG alpha asymmetry recording, processing, analysis, and interpretation, with an eye towards improving consistency of results across studies. First, a brief background in alpha asymmetry findings is provided. Then, some guidelines for recording, processing, and analyzing alpha asymmetry are presented with an emphasis on the creation of asymmetry scores, referencing choices, and artifact removal. Processing steps are explained in detail, and references to MATLAB-based toolboxes that are helpful for creating and investigating alpha asymmetry are noted. Then, conceptual challenges and interpretative issues are reviewed, including a discussion of alpha asymmetry as a mediator/moderator of emotion and psychopathology. Finally, the effects of two automated component-based artifact correction algorithms-MARA and ADJUST-on frontal alpha asymmetry are evaluated.",
"title": ""
},
{
"docid": "dea3bce3f636c87fad95f255aceec858",
"text": "In recent work, conditional Markov chain models (CMM) have been used to extract information from semi-structured text (one example is the Conditional Random Field [10]). Applications range from finding the author and title in research papers to finding the phone number and street address in a web page. The CMM framework combines a priori knowledge encoded as features with a set of labeled training data to learn an efficient extraction process. We will show that similar problems can be solved more effectively by learning a discriminative context free grammar from training data. The grammar has several distinct advantages: long range, even global, constraints can be used to disambiguate entity labels; training data is used more efficiently; and a set of new more powerful features can be introduced. The grammar based approach also results in semantic information (encoded in the form of a parse tree) which could be used for IR applications like question answering. The specific problem we consider is of extracting personal contact, or address, information from unstructured sources such as documents and emails. While linear-chain CMMs perform reasonably well on this task, we show that a statistical parsing approach results in a 50% reduction in error rate. This system also has the advantage of being interactive, similar to the system described in [9]. In cases where there are multiple errors, a single user correction can be propagated to correct multiple errors automatically. Using a discriminatively trained grammar, 93.71% of all tokens are labeled correctly (compared to 88.43% for a CMM) and 72.87% of records have all tokens labeled correctly (compared to 45.29% for the CMM).",
"title": ""
},
{
"docid": "046ae00fa67181dff54e170e48a9bacf",
"text": "For the evaluation of grasp quality, different measures have been proposed that are based on wrench spaces. Almost all of them have drawbacks that derive from the non-uniformity of the wrench space, composed of force and torque dimensions. Moreover, many of these approaches are computationally expensive. We address the problem of choosing a proper task wrench space to overcome the problems of the non-uniform wrench space and show how to integrate it in a well-known, high precision and extremely fast computable grasp quality measure.",
"title": ""
},
{
"docid": "00bf4f81944c1e98e58b891ace95797e",
"text": "Sparse methods for supervised learning aim at finding good linear predictors from as few variables as possible, i.e., with small cardinality of their supports. This combinatorial selection problem is often turned into a convex optimization problem by replacing the cardinality function by its convex envelope (tightest convex lower bound), in this case the l1-norm. In this paper, we investigate more general set-functions than the cardinality, that may incorporate prior knowledge or structural constraints which are common in many applications: namely, we show that for nondecreasing submodular set-functions, the corresponding convex envelope can be obtained from its Lovász extension, a common tool in submodular analysis. This defines a family of polyhedral norms, for which we provide generic algorithmic tools (subgradients and proximal operators) and theoretical results (conditions for support recovery or high-dimensional inference). By selecting specific submodular functions, we can give a new interpretation to known norms, such as those based on rank-statistics or grouped norms with potentially overlapping groups; we also define new norms, in particular ones that can be used as non-factorial priors for supervised learning.",
"title": ""
},
{
"docid": "5e9d63bfc3b4a66e0ead79a2d883adfe",
"text": "Cloud computing is becoming a major trend for delivering and accessing infrastructure on demand via the network. Meanwhile, the usage of FPGAs (Field Programmable Gate Arrays) for computation acceleration has made significant inroads into multiple application domains due to their ability to achieve high throughput and predictable latency, while providing programmability, low power consumption and time-to-value. Many types of workloads, e.g. databases, big data analytics, and high performance computing, can be and have been accelerated by FPGAs. As more and more workloads are being deployed in the cloud, it is appropriate to consider how to make FPGAs and their capabilities available in the cloud. However, such integration is non-trivial due to issues related to FPGA resource abstraction and sharing, compatibility with applications and accelerator logics, and security, among others. In this paper, a general framework for integrating FPGAs into the cloud is proposed and a prototype of the framework is implemented based on OpenStack, Linux-KVM and Xilinx FPGAs. The prototype enables isolation between multiple processes in multiple VMs, precise quantitative acceleration resource allocation, and priority-based workload scheduling. Experimental results demonstrate the effectiveness of this prototype, an acceptable overhead, and good scalability when hosting multiple VMs and processes.",
"title": ""
},
{
"docid": "a95f77c59a06b2d101584babc74896fb",
"text": "Magnetic wall and ceiling climbing robots have been proposed in many industrial applications where robots must move over ferromagnetic material surfaces. The magnetic circuit design with magnetic attractive force calculation of permanent magnetic wheel plays an important role which significantly affects the system reliability, payload ability and power consumption of the robot. In this paper, a flexible wall and ceiling climbing robot with six permanent magnetic wheels is proposed to climb along the vertical wall and overhead ceiling of steel cargo containers as part of an illegal contraband inspection system. The permanent magnetic wheels are designed to apply to the wall and ceiling climbing robot, whilst finite element method is employed to estimate the permanent magnetic wheels with various wheel rims. The distributions of magnetic flux lines and magnetic attractive forces are compared on both plane and corner scenarios so that the robot can adaptively travel through the convex and concave surfaces of the cargo container. Optimisation of wheel rims is presented to achieve the equivalent magnetic adhesive forces along with the estimation of magnetic ring dimensions in the axial and radial directions. Finally, the practical issues correlated with the applications of the techniques are discussed and the conclusions are drawn with further improvement and prototyping.",
"title": ""
},
{
"docid": "45cee79008d25916e8f605cd85dd7f3a",
"text": "In exploring the emotional climate of long-term marriages, this study used an observational coding system to identify specific emotional behaviors expressed by middle-aged and older spouses during discussions of a marital problem. One hundred and fifty-six couples differing in age and marital satisfaction were studied. Emotional behaviors expressed by couples differed as a function of age, gender, and marital satisfaction. In older couples, the resolution of conflict was less emotionally negative and more affectionate than in middle-aged marriages. Differences between husbands and wives and between happy and unhappy marriages were also found. Wives were more affectively negative than husbands, whereas husbands were more defensive than wives, and unhappy marriages involved greater exchange of negative affect than happy marriages.",
"title": ""
},
{
"docid": "fbe1e6b899b1a2e9d53d25e3fa70bd86",
"text": "Previous empirical studies examining the relationship between IT capability and accountingbased measures of firm performance report mixed results. We argue that extant research (1) has relied on aggregate overall measures of the firm’s IT capability, ignoring the specific type and nature of IT capability; and (2) has not fully considered important contextual (environmental) conditions that influence the IT capability-firm performance relationship. Drawing on the resource-based view (RBV), we advance a contingency perspective and propose that IT capabilities’ impact on firm resources is contingent on the “fit” between the type of IT capability/resource a firm possesses and the demands of the environment (industry) in which it competes. Specifically, using publicly available rankings as proxies for two types of IT capabilities (internally-focused and externally-focused capabilities), we empirically examines the degree to which three industry characteristics (dynamism, munificence, and complexity) influence the impact of each type of IT capability on measures of financial performance. After controlling for prior performance, the findings provide general support for the posited contingency model of IT impact. The implications of these findings on practice and research are discussed.",
"title": ""
},
{
"docid": "3ced47ece49eeec3edc5d720df9bb864",
"text": "Complex space systems typically provide the operator a means to understand the current state of system components. The operator often has to manually determine whether the system is able to perform a given set of high level objectives based on this information. The operations team needs a way for the system to quantify its capability to successfully complete a mission objective and convey that information in a clear, concise way. A mission-level space cyber situational awareness tool suite integrates the data into a complete picture to display the current state of the mission. The Johns Hopkins University Applied Physics Laboratory developed the Spyder tool suite for such a purpose. The Spyder space cyber situation awareness tool suite allows operators to understand the current state of their systems, allows them to determine whether their mission objectives can be completed given the current state, and provides insight into any anomalies in the system. Spacecraft telemetry, spacecraft position, ground system data, ground computer hardware, ground computer software processes, network connections, and network data flows are all combined into a system model service that serves the data to various display tools. Spyder monitors network connections, port scanning, and data exfiltration to determine if there is a cyber attack. The Spyder Tool Suite provides multiple ways of understanding what is going on in a system. Operators can see the logical and physical relationships between system components to better understand interdependencies and drill down to see exactly where problems are occurring. They can quickly determine the state of mission-level capabilities. The space system network can be analyzed to find unexpected traffic. Spyder bridges the gap between infrastructure and mission and provides situational awareness at the mission level.",
"title": ""
},
{
"docid": "b952967acb2eaa9c780bffe211d11fa0",
"text": "Cryptographic message authentication is a growing need for FPGA-based embedded systems. In this paper a customized FPGA implementation of a GHASH function that is used in AES-GCM, a widely-used message authentication protocol, is described. The implementation limits GHASH logic utilization by specializing the hardware implementation on a per-key basis. The implemented module can generate a 128bit message authentication code in both pipelined and unpipelined versions. The pipelined GHASH version achieves an authentication throughput of more than 14 Gbit/s on a Spartan-3 FPGA and 292 Gbit/s on a Virtex-6 device. To promote adoption in the field, the complete source code for this work has been made publically-available.",
"title": ""
},
{
"docid": "5cc666e8390b0d3cefaee2d55ad7ee38",
"text": "The thermal environment surrounding preterm neonates in closed incubators is regulated via air temperature control mode. At present, these control modes do not take account of all the thermal parameters involved in a pattern of incubator such as the thermal parameters of preterm neonates (birth weight < 1000 grams). The objective of this work is to design and validate a generalized predictive control (GPC) that takes into account the closed incubator model as well as the newborn premature model. Then, we implemented this control law on a DRAGER neonatal incubator with and without newborn using microcontroller card. Methods: The design of the predictive control law is based on a prediction model. The developed model allows us to take into account all the thermal exchanges (radioactive, conductive, convective and evaporative) and the various interactions between the environment of the incubator and the premature newborn. Results: The predictive control law and the simulation model developed in Matlab/Simulink environment make it possible to evaluate the quality of the mode of control of the air temperature to which newborn must be raised. The results of the simulation and implementation of the air temperature inside the incubator (with newborn and without newborn) prove the feasibility and effectiveness of the proposed GPC controller compared with a proportional–integral–derivative controller (PID controller). Keywords—Incubator; neonatal; model; temperature; Arduino; GPC",
"title": ""
},
{
"docid": "7b36abede1967f89b79975883074a34d",
"text": "In this paper, we introduce a generalized value iteration network (GVIN), which is an end-to-end neural network planning module. GVIN emulates the value iteration algorithm by using a novel graph convolution operator, which enables GVIN to learn and plan on irregular spatial graphs. We propose three novel differentiable kernels as graph convolution operators and show that the embedding-based kernel achieves the best performance. Furthermore, we present episodic Q-learning, an improvement upon traditional n-step Q-learning that stabilizes training for VIN and GVIN. Lastly, we evaluate GVIN on planning problems in 2D mazes, irregular graphs, and realworld street networks, showing that GVIN generalizes well for both arbitrary graphs and unseen graphs of larger scale and outperforms a naive generalization of VIN (discretizing a spatial graph into a 2D image).",
"title": ""
},
{
"docid": "8014e07969adad7e6db3bb222afaf7d2",
"text": "Scratch is a visual programming environment that is widely used by young people. We investigated if Scratch can be used to teach concepts of computer science. We developed new learning materials for middle-school students that were designed according to the constructionist philosophy of Scratch and evaluated them in two schools. The classes were normal classes, not extracurricular activities whose participants are self-selected. Questionnaires and a test were constructed based upon a novel combination of the Revised Bloom Taxonomy and the SOLO taxonomy. These quantitative instruments were augmented with a qualitative analysis of observations within the classes. The results showed that in general students could successfully learn important concepts of computer science, although there were some problems with initialization, variables and concurrency; these problems can be overcome by modifications to the teaching process.",
"title": ""
}
] |
scidocsrr
|
124e4bf43f120613c8532b111157ea96
|
Encrypted accelerated least squares regression
|
[
{
"docid": "4e0e6ca2f4e145c17743c42944da4cc8",
"text": "We demonstrate that, by using a recently proposed leveled homomorphic encryption scheme, it is possible to delegate the execution of a machine learning algorithm to a computing service while retaining confidentiality of the training and test data. Since the computational complexity of the homomorphic encryption scheme depends primarily on the number of levels of multiplications to be carried out on the encrypted data, we define a new class of machine learning algorithms in which the algorithm’s predictions, viewed as functions of the input data, can be expressed as polynomials of bounded degree. We propose confidential algorithms for binary classification based on polynomial approximations to least-squares solutions obtained by a small number of gradient descent steps. We present experimental validation of the confidential machine learning pipeline and discuss the trade-offs regarding computational complexity, prediction accuracy and cryptographic security.",
"title": ""
},
{
"docid": "ef444570c043be67453317e26600972f",
"text": "In multiple regression it is shown that parameter estimates based on minimum residual sum of squares have a high probability of being unsatisfactory, if not incorrect, if the prediction vectors are not orthogonal. Proposed is an estimation procedure based on adding small positive quantities to the diagonal of X’X. Introduced is the ridge trace, a method for showing in two dimensions the effects of nonorthogonality. It is then shown how to augment X’X to obtain biased estimates with smaller mean square error.",
"title": ""
}
] |
[
{
"docid": "7432009332e13ebc473c9157505cb59c",
"text": "The use of future contextual information is typically shown to be helpful for acoustic modeling. However, for the recurrent neural network (RNN), it’s not so easy to model the future temporal context effectively, meanwhile keep lower model latency. In this paper, we attempt to design a RNN acoustic model that being capable of utilizing the future context effectively and directly, with the model latency and computation cost as low as possible. The proposed model is based on the minimal gated recurrent unit (mGRU) with an input projection layer inserted in it. Two context modules, temporal encoding and temporal convolution, are specifically designed for this architecture to model the future context. Experimental results on the Switchboard task and an internal Mandarin ASR task show that, the proposed model performs much better than long short-term memory (LSTM) and mGRU models, whereas enables online decoding with a maximum latency of 170 ms. This model even outperforms a very strong baseline, TDNN-LSTM, with smaller model latency and almost half less parameters.",
"title": ""
},
{
"docid": "4eca3018852fd3107cb76d1d95f76a0a",
"text": "Within the past decade, empirical evidence has emerged supporting the use of Acceptance and Commitment Therapy (ACT) targeting shame and self-stigma. Little is known about the role of self-compassion in ACT, but evidence from other approaches indicates that self-compassion is a promising means of reducing shame and self-criticism. The ACT processes of defusion, acceptance, present moment, values, committed action, and self-as-context are to some degree inherently self-compassionate. However, it is not yet known whether the self-compassion inherent in the ACT approach explains ACT’s effectiveness in reducing shame and stigma, and/or whether focused self-compassion work may improve ACT outcomes for highly self-critical, shame-prone people. We discuss how ACT for shame and stigma may be enhanced by existing approaches specifically targeting self-compassion.",
"title": ""
},
{
"docid": "8ef1592544071c485d82c0848d02a2d0",
"text": "Auditory beat stimulation may be a promising new tool for the manipulation of cognitive processes and the modulation of mood states. Here, we aim to review the literature examining the most current applications of auditory beat stimulation and its targets. We give a brief overview of research on auditory steady-state responses and its relationship to auditory beat stimulation (ABS). We have summarized relevant studies investigating the neurophysiological changes related to ABS and how they impact upon the design of appropriate stimulation protocols. Focusing on binaural-beat stimulation, we then discuss the role of monaural- and binaural-beat frequencies in cognition and mood states, in addition to their efficacy in targeting disease symptoms. We aim to highlight important points concerning stimulation parameters and try to address why there are often contradictory findings with regard to the outcomes of ABS.",
"title": ""
},
{
"docid": "9f9302cf8560b65bed7688f5339a865c",
"text": "Understanding short texts is crucial to many applications, but challenges abound. First, short texts do not always observe the syntax of a written language. As a result, traditional natural language processing tools, ranging from part-of-speech tagging to dependency parsing, cannot be easily applied. Second, short texts usually do not contain sufficient statistical signals to support many state-of-the-art approaches for text mining such as topic modeling. Third, short texts are more ambiguous and noisy, and are generated in an enormous volume, which further increases the difficulty to handle them. We argue that semantic knowledge is required in order to better understand short texts. In this work, we build a prototype system for short text understanding which exploits semantic knowledge provided by a well-known knowledgebase and automatically harvested from a web corpus. Our knowledge-intensive approaches disrupt traditional methods for tasks such as text segmentation, part-of-speech tagging, and concept labeling, in the sense that we focus on semantics in all these tasks. We conduct a comprehensive performance evaluation on real-life data. The results show that semantic knowledge is indispensable for short text understanding, and our knowledge-intensive approaches are both effective and efficient in discovering semantics of short texts.",
"title": ""
},
{
"docid": "aace50c8446403a9f72b24bce1e88c30",
"text": "This paper presents a model-driven approach to the development of web applications based on the Ubiquitous Web Application (UWA) design framework, the Model-View-Controller (MVC) architectural pattern and the JavaServer Faces technology. The approach combines a complete and robust methodology for the user-centered conceptual design of web applications with the MVC metaphor, which improves separation of business logic and data presentation. The proposed approach, by carrying the advantages of ModelDriven Development (MDD) and user-centered design, produces Web applications which are of high quality from the user's point of view and easier to maintain and evolve.",
"title": ""
},
{
"docid": "e5b8368f13bf0f5e1969910d1ef81ac4",
"text": "BACKGROUND\nIn girls who present with vaginal trauma, sexual abuse is often the primary diagnosis. The differential diagnosis must include patterns and the mechanism of injury that differentiate accidental injuries from inflicted trauma.\n\n\nCASE\nA 7-year-old prepubertal girl presented to the emergency department with genital bleeding after a serious accidental impaling injury from inline skating. After rapid abduction of the legs and a fall onto the blade of an inline skate this child incurred an impaling genital injury consistent with an accidental mechanism. The dramatic genital injuries when repaired healed with almost imperceptible residual evidence of previous trauma.\n\n\nSUMMARY AND CONCLUSION\nTo our knowledge, this case report represents the first in the medical literature of an impaling vaginal trauma from an inline skate and describes its clinical and surgical management.",
"title": ""
},
{
"docid": "d55aae728991060ed4ba1f9a6b59e2fe",
"text": "Evolutionary algorithms have become robust tool in data processing and modeling of dynamic, complex and non-linear processes due to their flexible mathematical structure to yield optimal results even with imprecise, ambiguity and noise at its input. The study investigates evolutionary algorithms for solving Sudoku task. Various hybrids are presented here as veritable algorithm for computing dynamic and discrete states in multipoint search in CSPs optimization with application areas to include image and video analysis, communication and network design/reconstruction, control, OS resource allocation and scheduling, multiprocessor load balancing, parallel processing, medicine, finance, security and military, fault diagnosis/recovery, cloud and clustering computing to mention a few. Solution space representation and fitness functions (as common to all algorithms) were discussed. For support and confidence model adopted π1=0.2 and π2=0.8 respectively yields better convergence rates – as other suggested value combinations led to either a slower or non-convergence. CGA found an optimal solution in 32 seconds after 188 iterations in 25runs; while GSAGA found its optimal solution in 18seconds after 402 iterations with a fitness progression achieved in 25runs and consequently, GASA found an optimal solution 2.112seconds after 391 iterations with fitness progression after 25runs respectively.",
"title": ""
},
{
"docid": "063287a98a5a45bc8e38f8f8c193990e",
"text": "This paper investigates the relationship between the contextual factors related to the firm’s decision-maker and the process of international strategic decision-making. The analysis has been conducted focusing on small and medium-sized enterprises (SME). Data for the research came from 111 usable responses to a survey on a sample of SME decision-makers in international field. The results of regression analysis indicate that the context variables, both internal and external, exerted more influence on international strategic decision making process than the decision-maker personality characteristics. DOI: 10.4018/ijabe.2013040101 2 International Journal of Applied Behavioral Economics, 2(2), 1-22, April-June 2013 Copyright © 2013, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited. The purpose of this paper is to reverse this trend and to explore the different dimensions of SMEs’ strategic decision-making process in international decisions and, within these dimensions, we want to understand if are related to the decision-maker characteristics and also to broader contextual factors characteristics. The paper is organized as follows. In the second section the concepts of strategic decision-making process and factors influencing international SDMP are approached. Next, the research methodology, findings analysis and discussion will be presented. Finally, conclusions, limitations of the study and suggestions for future research are explored. THEORETICAL BACKGROUND Strategic Decision-Making Process The process of making strategic decisions has emerged as one of the most important themes of strategy research over the last two decades (Papadakis, 2006; Papadakis & Barwise, 2002). According to Harrison (1996), the SMDP can be defined as a combination of the concepts of strategic gap and management decision making process, with the former “determined by comparing the organization’s inherent capabilities with the opportunities and threats in its external environment”, while the latter is composed by a set of decision-making functions logically connected, that begins with the setting of managerial objective, followed by the search for information to develop a set of alternatives, that are consecutively compared and evaluated, and selected. Afterward, the selected alternative is implemented and, finally, it is subjected to follow-up and control. Other authors (Fredrickson, 1984; Mintzberg, Raisinghani, & Theoret, 1976) developed several models of strategic decision-making process since 1970, mainly based on the number of stages (Nooraie, 2008; Nutt, 2008). Although different researches investigated SDMP with specific reference to either small firms (Brouthers, et al., 1998; Gibcus, Vermeulen, & Jong, 2009; Huang, 2009; Jocumsen, 2004), or internationalization process (Aharoni, Tihanyi, & Connelly, 2011; Dimitratos, et al., 2011; Nielsen & Nielsen, 2011), there is a lack of studies that examine the SDMP in both perspectives. In this study we decided to mainly follow the SDMP defined by Harrison (1996) adapted to the international arena and particularly referred to market development decisions. Thus, for the definition of objectives (first phase) we refer to those in international field, for search for information, development and comparison of alternatives related to foreign markets (second phase) we refer to the systematic International Market Selection (IMS), and to the Entry Mode Selection (EMS) methodologies. 
For the implementation of the selected alternative (third phase) we mainly mean the entering in a particular foreign market with a specific entry mode, and finally, for follow-up and control (fourth phase) we refer to the control and evaluation of international activities. Dimensions of the Strategic Decision-Making Process Several authors attempted to implement a set of dimensions in approaching strategic process characteristics, and the most adopted are: • Rationality; • Formalization; • Hierarchical Decentralization and lateral communication; • Political Behavior.",
"title": ""
},
{
"docid": "ceb725186e5312601091157769c07b5f",
"text": "Much of the focus in the design of deep neural networks has been on improving accuracy, leading to more powerful yet highly complex network architectures that are difficult to deploy in practical scenarios, particularly on edge devices such as mobile and other consumer devices, given their high computational and memory requirements. As a result, there has been a recent interest in the design of quantitative metrics for evaluating deep neural networks that accounts for more than just model accuracy as the sole indicator of network performance. In this study, we continue the conversation towards universal metrics for evaluating the performance of deep neural networks for practical usage. In particular, we propose a new balanced metric called NetScore, which is designed specifically to provide a quantitative assessment of the balance between accuracy, computational complexity, and network architecture complexity of a deep neural network. In what is one of the largest comparative analysis between deep neural networks in literature, the NetScore metric, the top-1 accuracy metric, and the popular information density metric were compared across a diverse set of 50 different deep convolutional neural networks for image classification on the ImageNet Large Scale Visual Recognition Challenge (ILSVRC 2012) dataset. The evaluation results across these three metrics for this diverse set of networks are presented in this study to act as a reference guide for practitioners in the field. The proposed NetScore metric, along with the other tested metrics, are by no means perfect, but the hope is to push the conversation towards better universal metrics for evaluating deep neural networks for use in practical scenarios to help guide practitioners in model design.",
"title": ""
},
{
"docid": "a91a57326a2d961e24d13b844a3556cf",
"text": "This paper describes an interactive and adaptive streaming architecture that exploits temporal concatenation of H.264/AVC video bit-streams to dynamically adapt to both user commands and network conditions. The architecture has been designed to improve the viewing experience when accessing video content through individual and potentially bandwidth constrained connections. On the one hand, the user commands typically gives the client the opportunity to select interactively a preferred version among the multiple video clips that are made available to render the scene, e.g. using different view angles, or zoomed-in and slowmotion factors. On the other hand, the adaptation to the network bandwidth ensures effective management of the client buffer, which appears to be fundamental to reduce the client-server interaction latency, while maximizing video quality and preventing buffer underflow. In addition to user interaction and network adaptation, the deployment of fully autonomous infrastructures for interactive content distribution also requires the development of automatic versioning methods. Hence, the paper also surveys a number of approaches proposed for this purpose in surveillance and sport event contexts. Both objective metrics and subjective experiments are exploited to assess our system.",
"title": ""
},
{
"docid": "1d1eeb2f5a16fd8e1deed16a5839505b",
"text": "Searchable symmetric encryption (SSE) is a widely popular cryptographic technique that supports the search functionality over encrypted data on the cloud. Despite the usefulness, however, most of existing SSE schemes leak the search pattern, from which an adversary is able to tell whether two queries are for the same keyword. In recent years, it has been shown that the search pattern leakage can be exploited to launch attacks to compromise the confidentiality of the client’s queried keywords. In this paper, we present a new SSE scheme which enables the client to search encrypted cloud data without disclosing the search pattern. Our scheme uniquely bridges together the advanced cryptographic techniques of chameleon hashing and indistinguishability obfuscation. In our scheme, the secure search tokens for plaintext keywords are generated in a randomized manner, so it is infeasible to tell whether the underlying plaintext keywords are the same given two secure search tokens. In this way, our scheme well avoids using deterministic secure search tokens, which is the root cause of the search pattern leakage. We provide rigorous security proofs to justify the security strengths of our scheme. In addition, we also conduct extensive experiments to demonstrate the performance. Although our scheme for the time being is not immediately applicable due to the current inefficiency of indistinguishability obfuscation, we are aware that research endeavors on making indistinguishability obfuscation practical is actively ongoing and the practical efficiency improvement of indistinguishability obfuscation will directly lead to the applicability of our scheme. Our paper is a new attempt that pushes forward the research on SSE with concealed search pattern.",
"title": ""
},
{
"docid": "53c0564d82737d51ca9b7ea96a624be4",
"text": "In part 1 of this article, an occupational therapy model of practice for children with attention deficit hyperactivity disorder (ADHD) was described (Chu and Reynolds 2007). It addressed some specific areas of human functioning related to children with ADHD in order to guide the practice of occupational therapy. The model provides an approach to identifying and communicating occupational performance difficulties in relation to the interaction between the child, the environment and the demands of the task. A family-centred occupational therapy assessment and treatment package based on the model was outlined. The delivery of the package was underpinned by the principles of the family-centred care approach. Part 2 of this two-part article reports on a multicentre study, which was designed to evaluate the effectiveness and acceptability of the proposed assessment and treatment package and thereby to offer some validation of the delineation model. It is important to note that no treatment has yet been proved to ‘cure’ the condition of ADHD or to produce any enduring effects in affected children once the treatment is withdrawn. So far, the only empirically validated treatments for children with ADHD with substantial research evidence are psychostimulant medication, behavioural and educational management, and combined medication and behavioural management (DuPaul and Barkley 1993, A family-centred occupational therapy assessment and treatment package for children with attention deficit hyperactivity disorder (ADHD) was evaluated. The package involves a multidimensional evaluation and a multifaceted intervention, which are aimed at achieving a goodness-of-fit between the child, the task demands and the environment in which the child carries out the task. The package lasts for 3 months, with 12 weekly contacts with the child, parents and teacher. A multicentre study was carried out, with 20 occupational therapists participating. Following a 3-day training course, they implemented the package and supplied the data that they had collected from 20 children. The outcomes were assessed using the ADHD Rating Scales, pre-intervention and post-intervention. The results showed behavioural improvement in the majority of the children. The Measure of Processes of Care – 20-item version (MPOC-20) provided data on the parents’ perceptions of the family-centredness of the package and also showed positive ratings. The results offer some support for the package and the guiding model of practice, but caution should be exercised in generalising the results because of the small sample size, lack of randomisation, absence of a control group and potential experimenter effects from the research therapists. A larger-scale randomised controlled trial should be carried out to evaluate the efficacy of an improved package.",
"title": ""
},
{
"docid": "176386fd6f456d818d7ebf81f65d5030",
"text": "Event-driven architecture is gaining momentum in research and application areas as it promises enhanced responsiveness and asynchronous communication. The combination of event-driven and service-oriented architectural paradigms and web service technologies provide a viable possibility to achieve these promises. This paper outlines an architectural design and accompanying implementation technologies for its realization as a web services-based event-driven SOA.",
"title": ""
},
{
"docid": "ad2d21232d8a9af42ea7339574739eb3",
"text": "Majority of CNN architecture design is aimed at achieving high accuracy in public benchmarks by increasing the complexity. Typically, they are over-specified by a large margin and can be optimized by a factor of 10-100x with only a small reduction in accuracy. In spite of the increase in computational power of embedded systems, these networks are still not suitable for embedded deployment. There is a large need to optimize for hardware and reduce the size of the network by orders of magnitude for computer vision applications. This has led to a growing community which is focused on designing efficient networks. However, CNN architectures are evolving rapidly and efficient architectures seem to lag behind. There is also a gap in understanding the hardware architecture details and incorporating it into the network design. The motivation of this paper is to systematically summarize efficient design techniques and provide guidelines for an application developer. We also perform a case study by benchmarking various semantic segmentation algorithms for autonomous driving.",
"title": ""
},
{
"docid": "cae2b62afbecedc995612ed3a710e9d9",
"text": "Computational Grids, emerging as an infrastructure for next generation computing, enable the sharing, selection, and aggregation of geographically distributed resources for solving large-scale problems in science, engineering, and commerce. As the resources in the Grid are heterogeneous and geographically distributed with varying availability and a variety of usage and cost policies for diverse users at different times and, priorities as well as goals that vary with time. The management of resources and application scheduling in such a large and distributed environment is a complex task. This thesis proposes a distributed computational economy as an effective metaphor for the management of resources and application scheduling. It proposes an architectural framework that supports resource trading and quality of services based scheduling. It enables the regulation of supply and demand for resources and provides an incentive for resource owners for participating in the Grid and motives the users to trade-off between the deadline, budget, and the required level of quality of service. The thesis demonstrates the capability of economicbased systems for peer-to-peer distributed computing by developing users’ quality-of-service requirements driven scheduling strategies and algorithms. It demonstrates their effectiveness by performing scheduling experiments on the World-Wide Grid for solving parameter sweep applications.",
"title": ""
},
{
"docid": "fe842f2857bf3a60166c8f52e769585a",
"text": "We study the problem of explaining a rich class of behavioral properties of deep neural networks. Distinctively, our influence-directed explanations approach this problem by peering inside the network to identify neurons with high influence on a quantity and distribution of interest, using an axiomatically-justified influence measure, and then providing an interpretation for the concepts these neurons represent. We evaluate our approach by demonstrating a number of its unique capabilities on convolutional neural networks trained on ImageNet. Our evaluation demonstrates that influence-directed explanations (1) identify influential concepts that generalize across instances, (2) can be used to extract the “essence” of what the network learned about a class, and (3) isolate individual features the network uses to make decisions and distinguish related classes.",
"title": ""
},
{
"docid": "43bfbebda8dcb788057e1c98b7fccea6",
"text": "Der Beitrag stellt mit Quasar Enterprise einen durchgängigen, serviceorientierten Ansatz zur Gestaltung großer Anwendungslandschaften vor. Er verwendet ein Architektur-Framework zur Strukturierung der methodischen Schritte und führt ein Domänenmodell zur Präzisierung der Begrifflichkeiten und Entwicklungsartefakte ein. Die dargestellten methodischen Bausteine und Richtlinien beruhen auf langjährigen Erfahrungen in der industriellen Softwareentwicklung. 1 Motivation und Hintergrund sd&m beschäftigt sich seit seiner Gründung vor 25 Jahren mit dem Bau von individuellen Anwendungssystemen. Als konsolidierte Grundlage der Arbeit in diesem Bereich wurde Quasar (Quality Software Architecture) entwickelt – die sd&m StandardArchitektur für betriebliche Informationssysteme [Si04]. Quasar dient sd&m als Referenz für seine Disziplin des Baus einzelner Anwendungen. Seit einigen Jahren beschäftigt sich sd&m im Auftrag seiner Kunden mehr und mehr mit Fragestellungen auf der Ebene ganzer Anwendungslandschaften. Das Spektrum reicht von IT-Beratung zur Unternehmensarchitektur, über die Systemintegration querschnittlicher technischer, aber auch dedizierter fachlicher COTS-Produkte bis hin zum Bau einzelner großer Anwendungssysteme auf eine Art und Weise, dass eine perfekte Passung in eine moderne Anwendungslandschaft gegeben ist. Zur Abdeckung dieses breiten Spektrums an Aufgaben wurde eine neue Disziplin zur Gestaltung von Anwendungslandschaften benötigt. sd&m entwickelte hierzu eine neue Referenz – Quasar Enterprise – ein Quasar auf Unternehmensebene.",
"title": ""
},
{
"docid": "d35c176cfe5c8296862513c26f0fdffa",
"text": "Vertical scar mammaplasty, first described by Lötsch in 1923 and Dartigues in 1924 for mastopexy, was extended later to breast reduction by Arié in 1957. It was otherwise lost to surgical history until Lassus began experimenting with it in 1964. It then was extended by Marchac and de Olarte, finally to be popularized by Lejour. Despite initial skepticism, vertical reduction mammaplasty is becoming increasingly popular in recent years because it best incorporates the two concepts of minimal scarring and a satisfactory breast shape. At the moment, vertical scar techniques seem to be more popular in Europe than in the United States. A recent survey, however, has demonstrated that even in the United States, it has surpassed the rate of inverted T-scar breast reductions. The technique, however, is not without major drawbacks, such as long vertical scars extending below the inframammary crease and excessive skin gathering and “dog-ear” at the lower end of the scar that may require long periods for resolution, causing extreme distress to patients and surgeons alike. Efforts are being made to minimize these complications and make the procedure more user-friendly either by modifying it or by replacing it with an alternative that retains the same advantages. Although conceptually opposed to the standard vertical design, the circumvertical modification probably is the most important maneuver for shortening vertical scars. Residual dog-ears often are excised, resulting in a short transverse scar (inverted T- or L-scar). The authors describe limited subdermal undermining of the skin at the inferior edge of the vertical incisions with liposculpture of the inframammary crease, avoiding scar extension altogether. Simplified circumvertical drawing that uses the familiar Wise pattern also is described.",
"title": ""
},
{
"docid": "19863150313643b977f72452bb5a8a69",
"text": "Important research effort has been devoted to the topic of optimal planning of distribution systems. However, in general it has been mostly referred to the design of the primary network, with very modest considerations to the effect of the secondary network in the planning and future operation of the complete grid. Relatively little attention has been paid to the optimization of the secondary grid and to its effect on the optimality of the design of the complete electrical system, although the investment and operation costs of the secondary grid represent an important portion of the total costs. Appropriate design procedures have been proposed separately for both the primary and the secondary grid; however, in general, both planning problems have been presented and treated as different-almost isolated-problems, setting aside with this approximation some important factors that couple both problems, such as the fact that they may share the right of way, use the same poles, etc., among other factors that strongly affect the calculation of the investment costs. The main purpose of this work is the development and initial testing of a model for the optimal planning of a distribution system that includes both the primary and the secondary grids, so that a single optimization problem is stated for the design of the integral primary-secondary distribution system that overcomes these simplifications. The mathematical model incorporates the variables that define both the primary as well as the secondary planning problems and consists of a mixed integer-linear programming problem that may be solved by means of any suitable algorithm. Results are presented of the application of the proposed integral design procedure using conventional mixed integer-linear programming techniques to a real case of a residential primary-secondary distribution system consisting of 75 electrical nodes.",
"title": ""
},
{
"docid": "96029f6daa55fff7a76ab9bd48ebe7b9",
"text": "According to the principle of compositionality, the meaning of a sentence is computed from the meaning of its parts and the way they are syntactically combined. In practice, however, the syntactic structure is computed by automatic parsers which are far-from-perfect and not tuned to the specifics of the task. Current recursive neural network (RNN) approaches for computing sentence meaning therefore run into a number of practical difficulties, including the need to carefully select a parser appropriate for the task, deciding how and to what extent syntactic context modifies the semantic composition function, as well as on how to transform parse trees to conform to the branching settings (typically, binary branching) of the RNN. This paper introduces a new model, the Forest Convolutional Network, that avoids all of these challenges, by taking a parse forest as input, rather than a single tree, and by allowing arbitrary branching factors. We report improvements over the state-of-the-art in sentiment analysis and question classification.",
"title": ""
}
] |
scidocsrr
|
936f998587b76ff8b57021398cccb750
|
How Software Project Risk Affects Project Performance: An Investigation of the Dimensions of Risk and an Exploratory Model
|
[
{
"docid": "4506bc1be6e7b42abc34d79dc426688a",
"text": "The growing interest in Structured Equation Modeling (SEM) techniques and recognition of their importance in IS research suggests the need to compare and contrast different types of SEM techniques so that research designs can be selected appropriately. After assessing the extent to which these techniques are currently being used in IS research, the article presents a running example which analyzes the same dataset via three very different statistical techniques. It then compares two classes of SEM: covariance-based SEM and partial-least-squaresbased SEM. Finally, the article discusses linear regression models and offers guidelines as to when SEM techniques and when regression techniques should be used. The article concludes with heuristics and rule of thumb thresholds to guide practice, and a discussion of the extent to which practice is in accord with these guidelines.",
"title": ""
},
{
"docid": "02b6bcef39a21b14ce327f3dc9671fef",
"text": "We've all heard tales of multimillion dollar mistakes that somehow ran off course. Are software projects that risky or do managers need to take a fresh approach when preparing for such critical expeditions? Software projects are notoriously difficult to manage and too many of them end in failure. In 1995, annual U.S. spending on software projects reached approximately $250 billion and encompassed an estimated 175,000 projects [6]. Despite the costs involved, press reports suggest that project failures are occurring with alarming frequency. In 1995, U.S companies alone spent an estimated $59 billion in cost overruns on IS projects and another $81 billion on canceled software projects [6]. One explanation for the high failure rate is that managers are not taking prudent measures to assess and manage the risks involved in these projects. is Advocates of software project risk management claim that by countering these threats to success, the incidence of failure can be reduced [4, 5]. Before we can develop meaningful risk management strategies, however, we must identify these risks. Furthermore, the relative importance of these risks needs to be established, along with some understanding as to why certain risks are perceived to be more important than others. This is necessary so that managerial attention can be focused on the areas that constitute the greatest threats. Finally, identified risks must be classified in a way that suggests meaningful risk mitigation strategies. Here, we report the results of a Delphi study in which experienced software project managers identified and ranked the most important risks. The study led not only to the identification of risk factors and their relative importance, but also to novel insights into why project managers might view certain risks as being more important than others. Based on these insights, we introduce a framework for classifying software project risks and discuss appropriate strategies for managing each type of risk. Since the 1970s, both academics and practitioners have written about risks associated with managing software projects [1, 2, 4, 5, 7, 8]. Unfortunately , much of what has been written on risk is based either on anecdotal evidence or on studies limited to a narrow portion of the development process. Moreover, no systematic attempts have been made to identify software project risks by tapping the opinions of those who actually have experience in managing such projects. With a few exceptions [3, 8], there has been little attempt to understand the …",
"title": ""
}
] |
[
{
"docid": "24167db00908c65558e8034d94dfb8da",
"text": "Due to the wide variety of devices used in computer network systems, cybersecurity plays a major role in securing and improving the performance of the network or system. Although cybersecurity has received a large amount of global interest in recent years, it remains an open research space. Current security solutions in network-based cyberspace provide an open door to attackers by communicating first before authentication, thereby leaving a black hole for an attacker to enter the system before authentication. This article provides an overview of cyberthreats, traditional security solutions, and the advanced security model to overcome current security drawbacks.",
"title": ""
},
{
"docid": "fedbeb9d39ce91c96d93e05b5856f09e",
"text": "Devices for continuous glucose monitoring (CGM) are currently a major focus of research in the area of diabetes management. It is envisioned that such devices will have the ability to alert a diabetes patient (or the parent or medical care giver of a diabetes patient) of impending hypoglycemic/hyperglycemic events and thereby enable the patient to avoid extreme hypoglycemic/hyperglycemic excursions as well as minimize deviations outside the normal glucose range, thus preventing both life-threatening events and the debilitating complications associated with diabetes. It is anticipated that CGM devices will utilize constant feedback of analytical information from a glucose sensor to activate an insulin delivery pump, thereby ultimately realizing the concept of an artificial pancreas. Depending on whether the CGM device penetrates/breaks the skin and/or the sample is measured extracorporeally, these devices can be categorized as totally invasive, minimally invasive, and noninvasive. In addition, CGM devices are further classified according to the transduction mechanisms used for glucose sensing (i.e., electrochemical, optical, and piezoelectric). However, at present, most of these technologies are plagued by a variety of issues that affect their accuracy and long-term performance. This article presents a critical comparison of existing CGM technologies, highlighting critical issues of device accuracy, foreign body response, calibration, and miniaturization. An outlook on future developments with an emphasis on long-term reliability and performance is also presented.",
"title": ""
},
{
"docid": "0d43f72f92a73b648edd2dc3d1f0d141",
"text": "While egocentric video is becoming increasingly popular, browsing it is very difficult. In this paper we present a compact 3D Convolutional Neural Network (CNN) architecture for long-term activity recognition in egocentric videos. Recognizing long-term activities enables us to temporally segment (index) long and unstructured egocentric videos. Existing methods for this task are based on hand tuned features derived from visible objects, location of hands, as well as optical flow. Given a sparse optical flow volume as input, our CNN classifies the camera wearer's activity. We obtain classification accuracy of 89%, which outperforms the current state-of-the-art by 19%. Additional evaluation is performed on an extended egocentric video dataset, classifying twice the amount of categories than current state-of-the-art. Furthermore, our CNN is able to recognize whether a video is egocentric or not with 99.2% accuracy, up by 24% from current state-of-the-art. To better understand what the network actually learns, we propose a novel visualization of CNN kernels as flow fields.",
"title": ""
},
{
"docid": "d40a1b72029bdc8e00737ef84fdf5681",
"text": "— Ability of deep networks to extract high level features and of recurrent networks to perform time-series inference have been studied. In view of universality of one hidden layer network at approximating functions under weak constraints, the benefit of multiple layers is to enlarge the space of dynamical systems approximated or, given the space, reduce the number of units required for a certain error. Traditionally shallow networks with manually engineered features are used, back-propagation extent is limited to one and attempt to choose a large number of hidden units to satisfy the Markov condition is made. In case of Markov models, it has been shown that many systems need to be modeled as higher order. In the present work, we present deep recurrent networks with longer back-propagation through time extent as a solution to modeling systems that are high order and to predicting ahead. We study epileptic seizure suppression electro-stimulator. Extraction of manually engineered complex features and prediction employing them has not allowed small low-power implementations as, to avoid possibility of surgery, extraction of any features that may be required has to be included. In this solution, a recurrent neural network performs both feature extraction and prediction. We prove analytically that adding hidden layers or increasing backpropagation extent increases the rate of decrease of approximation error. A Dynamic Programming (DP) training procedure employing matrix operations is derived. DP and use of matrix operations makes the procedure efficient particularly when using data-parallel computing. The simulation studies show the geometry of the parameter space, that the network learns the temporal structure, that parameters converge while model output displays same dynamic behavior as the system and greater than .99 Average Detection Rate on all real seizure data tried.",
"title": ""
},
{
"docid": "196ddcefb2c3fcb6edd5e8d108f7e219",
"text": "This paper may be considered as a practical reference for those who wish to add (now sufficiently matured) Agent Based modeling to their analysis toolkit and may or may not have some System Dynamics or Discrete Event modeling background. We focus on systems that contain large numbers of active objects (people, business units, animals, vehicles, or even things like projects, stocks, products, etc. that have timing, event ordering or other kind of individual behavior associated with them). We compare the three major paradigms in simulation modeling: System Dynamics, Discrete Event and Agent Based Modeling with respect to how they approach such systems. We show in detail how an Agent Based model can be built from an existing System Dynamics or a Discrete Event model and then show how easily it can be further enhanced to capture much more complicated behavior, dependencies and interactions thus providing for deeper insight in the system being modeled. Commonly understood examples are used throughout the paper; all models are specified in the visual language supported by AnyLogic tool. We view and present Agent Based modeling not as a substitution to older modeling paradigms but as a useful add-on that can be efficiently combined with System Dynamics and Discrete Event modeling. Several multi-paradigm model architectures are suggested.",
"title": ""
},
{
"docid": "b0cc7d5313acaa47eb9cba9e830fa9af",
"text": "Data-driven intelligent transportation systems utilize data resources generated within intelligent systems to improve the performance of transportation systems and provide convenient and reliable services. Traffic data refer to datasets generated and collected on moving vehicles and objects. Data visualization is an efficient means to represent distributions and structures of datasets and reveal hidden patterns in the data. This paper introduces the basic concept and pipeline of traffic data visualization, provides an overview of related data processing techniques, and summarizes existing methods for depicting the temporal, spatial, numerical, and categorical properties of traffic data.",
"title": ""
},
{
"docid": "f4b270b09649ba05dd22d681a2e3e3b7",
"text": "Advanced analytical techniques are gaining popularity in addressing complex classification type decision problems in many fields including healthcare and medicine. In this exemplary study, using digitized signal data, we developed predictive models employing three machine learning methods to diagnose an asthma patient based solely on the sounds acquired from the chest of the patient in a clinical laboratory. Although, the performances varied slightly, ensemble models (i.e., Random Forest and AdaBoost combined with Random Forest) achieved about 90% accuracy on predicting asthma patients, compared to artificial neural networks models that achieved about 80% predictive accuracy. Our results show that noninvasive, computerized lung sound analysis that rely on low-cost microphones and an embedded real-time microprocessor system would help physicians to make faster and better diagnostic decisions, especially in situations where x-ray and CT-scans are not reachable or not available. This study is a testament to the improving capabilities of analytic techniques in support of better decision making, especially in situations constraint by limited resources.",
"title": ""
},
{
"docid": "b75f793f4feac0b658437026d98a1e8b",
"text": "From a certain (admittedly narrow) perspective, one of the annoying features of natural language is the ubiquitous syntactic ambiguity. For a computational model intended to assign syntactic descriptions to natural language text, this seem like a design defect. In general, when context and lexical content are taken into account, such syntactic ambiguity can be resolved: sentences used in context show, for the most part, little ambiguity. But the grammar provides many alternative analyses, and gives little guidance about resolving the ambiguity. Prepositional phrase attachment is the canonical case of structural ambiguity, as in the time worn example,",
"title": ""
},
{
"docid": "d63543712b2bebfbd0ded148225bb289",
"text": "This paper surveys recent literature in the area of Neural Network, Data Mining, Hidden Markov Model and Neuro-Fuzzy system used to predict the stock market fluctuation. Neural Networks and Neuro-Fuzzy systems are identified to be the leading machine learning techniques in stock market index prediction area. The Traditional techniques are not cover all the possible relation of the stock price fluctuations. There are new approaches to known in-depth of an analysis of stock price variations. NN and Markov Model can be used exclusively in the finance markets and forecasting of stock price. In this paper, we propose a forecasting method to provide better an accuracy rather traditional method. Forecasting stock return is an important financial subject that has attracted researchers’ attention for many years. It involves an assumption that fundamental information publicly available in the past has some predictive relationships to the future stock returns.",
"title": ""
},
{
"docid": "41261cf72d8ee3bca4b05978b07c1c4f",
"text": "The association of Sturge-Weber syndrome with naevus of Ota is an infrequently reported phenomenon and there are only four previously described cases in the literature. In this paper we briefly review the literature regarding the coexistence of vascular and pigmentary naevi and present an additional patient with the association of the Sturge-Weber syndrome and naevus of Ota.",
"title": ""
},
{
"docid": "f741eb8ca9fb9798fb89674a0e045de9",
"text": "We investigate the issue of model uncertainty in cross-country growth regressions using Bayesian Model Averaging (BMA). We find that the posterior probability is very spread among many models suggesting the superiority of BMA over choosing any single model. Out-of-sample predictive results support this claim. In contrast with Levine and Renelt (1992), our results broadly support the more “optimistic” conclusion of Sala-i-Martin (1997b), namely that some variables are important regressors for explaining cross-country growth patterns. However, care should be taken in the methodology employed. The approach proposed here is firmly grounded in statistical theory and immediately leads to posterior and predictive inference.",
"title": ""
},
{
"docid": "516f4b7bea87fad16b774a7f037efaec",
"text": "BACKGROUND\nOperating rooms (ORs) are resource-intense and costly hospital units. Maximizing OR efficiency is essential to maintaining an economically viable institution. OR efficiency projects often focus on a limited number of ORs or cases. Efforts across an entire OR suite have not been reported. Lean and Six Sigma methodologies were developed in the manufacturing industry to increase efficiency by eliminating non-value-added steps. We applied Lean and Six Sigma methodologies across an entire surgical suite to improve efficiency.\n\n\nSTUDY DESIGN\nA multidisciplinary surgical process improvement team constructed a value stream map of the entire surgical process from the decision for surgery to discharge. Each process step was analyzed in 3 domains, ie, personnel, information processed, and time. Multidisciplinary teams addressed 5 work streams to increase value at each step: minimizing volume variation; streamlining the preoperative process; reducing nonoperative time; eliminating redundant information; and promoting employee engagement. Process improvements were implemented sequentially in surgical specialties. Key performance metrics were collected before and after implementation.\n\n\nRESULTS\nAcross 3 surgical specialties, process redesign resulted in substantial improvements in on-time starts and reduction in number of cases past 5 pm. Substantial gains were achieved in nonoperative time, staff overtime, and ORs saved. These changes resulted in substantial increases in margin/OR/day.\n\n\nCONCLUSIONS\nUse of Lean and Six Sigma methodologies increased OR efficiency and financial performance across an entire operating suite. Process mapping, leadership support, staff engagement, and sharing performance metrics are keys to enhancing OR efficiency. The performance gains were substantial, sustainable, positive financially, and transferrable to other specialties.",
"title": ""
},
{
"docid": "9098d40a9e16a1bd1ed0a9edd96f3258",
"text": "The filter bank multicarrier with offset quadrature amplitude modulation (FBMC/OQAM) is being studied by many researchers as a key enabler for the fifth-generation air interface. In this paper, a hybrid peak-to-average power ratio (PAPR) reduction scheme is proposed for FBMC/OQAM signals by utilizing multi data block partial transmit sequence (PTS) and tone reservation (TR). In the hybrid PTS-TR scheme, the data blocks signal is divided into several segments, and the number of data blocks in each segment is determined by the overlapping factor. In each segment, we select the optimal data block to transmit and jointly consider the adjacent overlapped data block to achieve minimum signal power. Then, the peak reduction tones are utilized to cancel the peaks of the segment FBMC/OQAM signals. Simulation results and analysis show that the proposed hybrid PTS-TR scheme could provide better PAPR reduction than conventional PTS and TR schemes in FBMC/OQAM systems. Furthermore, we propose another multi data block hybrid PTS-TR scheme by exploiting the adjacent multi overlapped data blocks, called as the multi hybrid (M-hybrid) scheme. Simulation results show that the M-hybrid scheme can achieve about 0.2-dB PAPR performance better than the hybrid PTS-TR scheme.",
"title": ""
},
{
"docid": "be3e02812e35000b39e4608afc61f229",
"text": "The growing use of control access systems based on face recognition shed light over the need for even more accurate systems to detect face spoofing attacks. In this paper, an extensive analysis on face spoofing detection works published in the last decade is presented. The analyzed works are categorized by their fundamental parts, i.e., descriptors and classifiers. This structured survey also brings a comparative performance analysis of the works considering the most important public data sets in the field. The methodology followed in this work is particularly relevant to observe temporal evolution of the field, trends in the existing approaches, Corresponding author: Luciano Oliveira, tel. +55 71 3283-9472 Email addresses: [email protected] (Luiz Souza), [email protected] (Luciano Oliveira), [email protected] (Mauricio Pamplona), [email protected] (Joao Papa) to discuss still opened issues, and to propose new perspectives for the future of face spoofing detection.",
"title": ""
},
{
"docid": "91c937ddfcf7aa0957e1c9a997149f87",
"text": "Generative adversarial training can be generally understood as minimizing certain moment matching loss defined by a set of discriminator functions, typically neural networks. The discriminator set should be large enough to be able to uniquely identify the true distribution (discriminative), and also be small enough to go beyond memorizing samples (generalizable). In this paper, we show that a discriminator set is guaranteed to be discriminative whenever its linear span is dense in the set of bounded continuous functions. This is a very mild condition satisfied even by neural networks with a single neuron. Further, we develop generalization bounds between the learned distribution and true distribution under different evaluation metrics. When evaluated with neural distance, our bounds show that generalization is guaranteed as long as the discriminator set is small enough, regardless of the size of the generator or hypothesis set. When evaluated with KL divergence, our bound provides an explanation on the counter-intuitive behaviors of testing likelihood in GAN training. Our analysis sheds lights on understanding the practical performance of GANs.",
"title": ""
},
{
"docid": "976064ba00f4eb2020199f264d29dae2",
"text": "Social network analysis is a large and growing body of research on the measurement and analysis of relational structure. Here, we review the fundamental concepts of network analysis, as well as a range of methods currently used in the field. Issues pertaining to data collection, analysis of single networks, network comparison, and analysis of individual-level covariates are discussed, and a number of suggestions are made for avoiding common pitfalls in the application of network methods to substantive questions.",
"title": ""
},
{
"docid": "6b5bde39af1260effa0587d8c6afa418",
"text": "This survey highlights the major issues concerning privacy and security in online social networks. Firstly, we discuss research that aims to protect user data from the various attack vantage points including other users, advertisers, third party application developers, and the online social network provider itself. Next we cover social network inference of user attributes, locating hubs, and link prediction. Because online social networks are so saturated with sensitive information, network inference plays a major privacy role. As a response to the issues brought forth by client-server architectures, distributed social networks are discussed. We then cover the challenges that providers face in maintaining the proper operation of an online social network including minimizing spam messages, and reducing the number of sybil accounts. Finally, we present research in anonymizing social network data. This area is of particular interest in order to continue research in this field both in academia and in industry.",
"title": ""
},
{
"docid": "01b147cb417ceedf40dadcb3ee31a1b2",
"text": "BACKGROUND\nPurposeful and timely rounding is a best practice intervention to routinely meet patient care needs, ensure patient safety, decrease the occurrence of patient preventable events, and proactively address problems before they occur. The Institute for Healthcare Improvement (IHI) endorsed hourly rounding as the best way to reduce call lights and fall injuries, and increase both quality of care and patient satisfaction. Nurse knowledge regarding purposeful rounding and infrastructure supporting timeliness are essential components for consistency with this patient centred practice.\n\n\nOBJECTIVES\nThe project aimed to improve patient satisfaction and safety through implementation of purposeful and timely nursing rounds. Goals for patient satisfaction scores and fall volume were set. Specific objectives were to determine current compliance with evidence-based criteria related to rounding times and protocols, improve best practice knowledge among staff nurses, and increase compliance with these criteria.\n\n\nMETHODS\nFor the objectives of this project the Joanna Briggs Institute's Practical Application of Clinical Evidence System and Getting Research into Practice audit tool were used. Direct observation of staff nurses on a medical surgical unit in the United States was employed to assess timeliness and utilization of a protocol when rounding. Interventions were developed in response to baseline audit results. A follow-up audit was conducted to determine compliance with the same criteria. For the project aims, pre- and post-intervention unit-level data related to nursing-sensitive elements of patient satisfaction and safety were compared.\n\n\nRESULTS\nRounding frequency at specified intervals during awake and sleeping hours nearly doubled. Use of a rounding protocol increased substantially to 64% compliance from zero. Three elements of patient satisfaction had substantive rate increases but the hospital's goals were not reached. Nurse communication and pain management scores increased modestly (5% and 11%, respectively). Responsiveness of hospital staff increased moderately (15%) with a significant sub-element increase in toileting (41%). Patient falls decreased by 50%.\n\n\nCONCLUSIONS\nNurses have the ability to improve patient satisfaction and patient safety outcomes by utilizing nursing round interventions which serve to improve patient communication and staff responsiveness. Having a supportive infrastructure and an organized approach, encompassing all levels of staff, to meet patient needs during their hospital stay was a key factor for success. Hard-wiring of new practices related to workflow takes time as staff embrace change and understand how best practice interventions significantly improve patient outcomes.",
"title": ""
},
{
"docid": "ec681bc427c66adfad79008840ea9b60",
"text": "With the rapid development of the Computer Science and Technology, It has become a major problem for the users that how to quickly find useful or needed information. Text categorization can help people to solve this question. The feature selection method has become one of the most critical techniques in the field of the text automatic categorization. A new method of the text feature selection based on Information Gain and Genetic Algorithm is proposed in this paper. This method chooses the feature based on information gain with the frequency of items. Meanwhile, for the information filtering systems, this method has been improved fitness function to fully consider the characteristics of weight, text and vector similarity dimension, etc. The experiment has proved that the method can reduce the dimension of text vector and improve the precision of text classification.",
"title": ""
},
{
"docid": "1733a6f167e7e13bc816b7fc546e19e3",
"text": "As many other machine learning driven medical image analysis tasks, skin image analysis suffers from a chronic lack of labeled data and skewed class distributions, which poses problems for the training of robust and well-generalizing models. The ability to synthesize realistic looking images of skin lesions could act as a reliever for the aforementioned problems. Generative Adversarial Networks (GANs) have been successfully used to synthesize realistically looking medical images, however limited to low resolution, whereas machine learning models for challenging tasks such as skin lesion segmentation or classification benefit from much higher resolution data. In this work, we successfully synthesize realistically looking images of skin lesions with GANs at such high resolution. Therefore, we utilize the concept of progressive growing, which we both quantitatively and qualitatively compare to other GAN architectures such as the DCGAN and the LAPGAN. Our results show that with the help of progressive growing, we can synthesize highly realistic dermoscopic images of skin lesions that even expert dermatologists find hard to distinguish from real ones.",
"title": ""
}
] |
scidocsrr
|
7096fe493a51cd3c4c428cb55b83ffbf
|
Oligarchic Control of Business-to-Business Blockchains
|
[
{
"docid": "668953b5f6fbfc440bb6f3a91ee7d06b",
"text": "Proof of Work (PoW) powered blockchains currently account for more than 90% of the total market capitalization of existing digital cryptocurrencies. Although the security provisions of Bitcoin have been thoroughly analysed, the security guarantees of variant (forked) PoW blockchains (which were instantiated with different parameters) have not received much attention in the literature. This opens the question whether existing security analysis of Bitcoin's PoW applies to other implementations which have been instantiated with different consensus and/or network parameters.\n In this paper, we introduce a novel quantitative framework to analyse the security and performance implications of various consensus and network parameters of PoW blockchains. Based on our framework, we devise optimal adversarial strategies for double-spending and selfish mining while taking into account real world constraints such as network propagation, different block sizes, block generation intervals, information propagation mechanism, and the impact of eclipse attacks. Our framework therefore allows us to capture existing PoW-based deployments as well as PoW blockchain variants that are instantiated with different parameters, and to objectively compare the tradeoffs between their performance and security provisions.",
"title": ""
},
{
"docid": "4fc67f5a4616db0906b943d7f13c856d",
"text": "Overview. A blockchain is best understood in the model of state-machine replication [8], where a service maintains some state and clients invoke operations that transform the state and generate outputs. A blockchain emulates a “trusted” computing service through a distributed protocol, run by nodes connected over the Internet. The service represents or creates an asset, in which all nodes have some stake. The nodes share the common goal of running the service but do not necessarily trust each other for more. In a “permissionless” blockchain such as the one underlying the Bitcoin cryptocurrency, anyone can operate a node and participate through spending CPU cycles and demonstrating a “proof-of-work.” On the other hand, blockchains in the “permissioned” model control who participates in validation and in the protocol; these nodes typically have established identities and form a consortium. A report of Swanson compares the two models [9].",
"title": ""
}
] |
[
{
"docid": "7747ea744400418a9003f8bd0990fe71",
"text": "0747-5632/$ see front matter 2009 Elsevier Ltd. A doi:10.1016/j.chb.2009.06.001 * Tel.: +82 02 74",
"title": ""
},
{
"docid": "0c7891f543b79f8b196504fbf81493ba",
"text": "Twenty-five years of consumer socialization research have yielded an impressive set of findings. The purpose of our article is to review these findings and assess what we know about children’s development as consumers. Our focus is on the developmental sequence characterizing the growth of consumer knowledge, skills, and values as children mature throughout childhood and adolescence. In doing so, we present a conceptual framework for understanding consumer socialization as a series of stages, with transitions between stages occurring as children grow older and mature in cognitive and social terms. We then review empirical findings illustrating these stages, including children’s knowledge of products, brands, advertising, shopping, pricing, decision-making strategies, parental influence strategies, and consumption motives and values. Based on the evidence reviewed, implications are drawn for future theoretical and empirical development in the field of consumer socialization.",
"title": ""
},
{
"docid": "3bf9e696755c939308efbcca363d4f49",
"text": "Robotic navigation requires that the robotic platform have an idea of its location and orientation within the environment. This localization is known as pose estimation, and has been a much researched topic. There are currently two main categories of pose estimation techniques: pose from hardware, and pose from video (PfV). Hardware pose estimation utilizes specialized hardware such as Global Positioning Systems (GPS) and Inertial Navigation Systems (INS) to estimate the position and orientation of the platform at the specified times. PfV systems use video cameras to estimate the pose of the system by calculating the inter-frame motion of the camera from features present in the images. These pose estimation systems are readily integrated, and can be used to augment and/or supplant each other according to the needs of the application. Both pose from video and hardware pose estimation have their uses, but each also has its degenerate cases in which they fail to provide reliable data. Hardware solutions can provide extremely accurate data, but are usually quite pricey and can be restrictive in their environments of operation. Pose from video solutions can be implemented with low-cost off-the-shelf components, but the accuracy of the PfV results can be degraded by noisy imagery, ambiguity in the feature matching process, and moving objects. This paper attempts to evaluate the cost/benefit comparison between pose from video and hardware pose estimation experimentally, and to provide a guide as to which systems should be used under certain scenarios.",
"title": ""
},
{
"docid": "6b3e0cd49c05c43abd7c8d0b6db093b0",
"text": "We present a new image reconstruction method that replaces the projector in a projected gradient descent (PGD) with a convolutional neural network (CNN). Recently, CNNs trained as image-to-image regressors have been successfully used to solve inverse problems in imaging. However, unlike existing iterative image reconstruction algorithms, these CNN-based approaches usually lack a feedback mechanism to enforce that the reconstructed image is consistent with the measurements. We propose a relaxed version of PGD wherein gradient descent enforces measurement consistency, while a CNN recursively projects the solution closer to the space of desired reconstruction images. We show that this algorithm is guaranteed to converge and, under certain conditions, converges to a local minimum of a non-convex inverse problem. Finally, we propose a simple scheme to train the CNN to act like a projector. Our experiments on sparse-view computed-tomography reconstruction show an improvement over total variation-based regularization, dictionary learning, and a state-of-the-art deep learning-based direct reconstruction technique.",
"title": ""
},
{
"docid": "a999bf3da879dde7fc2acb8794861daf",
"text": "Most OECD Member countries have sought to renew their systems and structures of public management in the last 10-15 years. Some started earlier than others and the emphasis will vary among Member countries according to their historic traditions and institutions. There is no single best model of public management, but what stands out most clearly is the extent to which countries have pursued and are pursuing broadly common approaches to public management reform. This is most probably because countries have been responding to essentially similar pressures to reform.",
"title": ""
},
{
"docid": "88b89521775ba2d8570944a54e516d0f",
"text": "The idea that the purely phenomenological knowledge that we can extract by analyzing large amounts of data can be useful in healthcare seems to contradict the desire of VPH researchers to build detailed mechanistic models for individual patients. But in practice no model is ever entirely phenomenological or entirely mechanistic. We propose in this position paper that big data analytics can be successfully combined with VPH technologies to produce robust and effective in silico medicine solutions. In order to do this, big data technologies must be further developed to cope with some specific requirements that emerge from this application. Such requirements are: working with sensitive data; analytics of complex and heterogeneous data spaces, including nontextual information; distributed data management under security and performance constraints; specialized analytics to integrate bioinformatics and systems biology information with clinical observations at tissue, organ and organisms scales; and specialized analytics to define the “physiological envelope” during the daily life of each patient. These domain-specific requirements suggest a need for targeted funding, in which big data technologies for in silico medicine becomes the research priority.",
"title": ""
},
{
"docid": "8a9cf6b4d7d6d2be1d407ef41ceb23e5",
"text": "A highly discriminative and computationally efficient descriptor is needed in many computer vision applications involving human action recognition. This paper proposes a hand-crafted skeleton-based descriptor for human action recognition. It is constructed from five fixed size covariance matrices calculated using strongly related joints coordinates over five body parts (spine, left/ right arms, and left/ right legs). Since covariance matrices are symmetric, the lower/ upper triangular parts of these matrices are concatenated to generate an efficient descriptor. It achieves a saving from 78.26 % to 80.35 % in storage space and from 75 % to 90 % in processing time (depending on the dataset) relative to techniques adopting a covariance descriptor based on all the skeleton joints. To show the effectiveness of the proposed method, its performance is evaluated on five public datasets: MSR-Action3D, MSRC-12 Kinect Gesture, UTKinect-Action, Florence3D-Action, and NTU RGB+D. The obtained recognition rates on all datasets outperform many existing methods and compete with the current state of the art techniques.",
"title": ""
},
{
"docid": "40d46bc75d11b6d4139cb7a1267ac234",
"text": "10 Abstract This paper introduces the third generation of Pleated Pneumatic Artificial Muscles (PPAM), which has been developed to simplify the production over the first and second prototype. This type of artificial muscle was developed to overcome dry friction and material deformation, which is present in the widely used McKibben muscle. The essence of the PPAM is its pleated membrane structure which enables the 15 muscle to work at low pressures and at large contractions. In order to validate the new PPAM generation, it has been compared with the mathematical model and the previous generation. The new production process and the use of new materials introduce improvements such as 55% reduction in the actuator’s weight, a higher reliability, a 75% reduction in the production time and PPAMs can now be produced in all sizes from 4 to 50 cm. This opens the possibility to commercialize this type of muscles 20 so others can implement it. Furthermore, a comparison with experiments between PPAM and Festo McKibben muscles is discussed. Small PPAMs present similar force ranges and larger contractions than commercially available McKibben-like muscles. The use of series arrangements of PPAMs allows for large strokes and relatively small diameters at the same time and, since PPAM 3.0 is much more lightweight than the commong McKibben models made by Festo, it presents better force-to-mass and energy 25 to mass ratios than Festo models. 2012 Taylor & Francis and The Robotics Society of Japan",
"title": ""
},
{
"docid": "e995ed011dedd9e543f07a4af78e27bb",
"text": "Over the last years, computer networks have evolved into highly dynamic and interconnected environments, involving multiple heterogeneous devices and providing a myriad of services on top of them. This complex landscape has made it extremely difficult for security administrators to keep accurate and be effective in protecting their systems against cyber threats. In this paper, we describe our vision and scientific posture on how artificial intelligence techniques and a smart use of security knowledge may assist system administrators in better defending their networks. To that end, we put forward a research roadmap involving three complimentary axes, namely, (I) the use of FCA-based mechanisms for managing configuration vulnerabilities, (II) the exploitation of knowledge representation techniques for automated security reasoning, and (III) the design of a cyber threat intelligence mechanism as a CKDD process. Then, we describe a machine-assisted process for cyber threat analysis which provides a holistic perspective of how these three research axes are integrated together.",
"title": ""
},
{
"docid": "631d2c75377517fed1864e3a47ae873e",
"text": "Choi, Wiemer-Hastings, and Moore (2001) proposed to use Latent Semantic Analysis (LSA) to extract semantic knowledge from corpora in order to improve the accuracy of a text segmentation algorithm. By comparing the accuracy of the very same algorithm, depending on whether or not it takes into account complementary semantic knowledge, they were able to show the benefit derived from such knowledge. In their experiments, semantic knowledge was, however, acquired from a corpus containing the texts to be segmented in the test phase. If this hyper-specificity of the LSA corpus explains the largest part of the benefit, one may wonder if it is possible to use LSA to acquire generic semantic knowledge that can be used to segment new texts. The two experiments reported here show that the presence of the test materials in the LSA corpus has an important effect, but also that the generic semantic knowledge derived from large corpora clearly improves the segmentation accuracy.",
"title": ""
},
{
"docid": "77c8dc928492524cbf665422bbcce60d",
"text": "Full terms and conditions of use: http://pubsonline.informs.org/page/terms-and-conditions This article may be used only for the purposes of research, teaching, and/or private study. Commercial use or systematic downloading (by robots or other automatic processes) is prohibited without explicit Publisher approval, unless otherwise noted. For more information, contact [email protected]. The Publisher does not warrant or guarantee the article’s accuracy, completeness, merchantability, fitness for a particular purpose, or non-infringement. Descriptions of, or references to, products or publications, or inclusion of an advertisement in this article, neither constitutes nor implies a guarantee, endorsement, or support of claims made of that product, publication, or service. Copyright © 2016, INFORMS",
"title": ""
},
{
"docid": "2cebd9275e30da41a97f6d77207cc793",
"text": "Cyber-physical systems, such as mobile robots, must respond adaptively to dynamic operating conditions. Effective operation of these systems requires that sensing and actuation tasks are performed in a timely manner. Additionally, execution of mission specific tasks such as imaging a room must be balanced against the need to perform more general tasks such as obstacle avoidance. This problem has been addressed by maintaining relative utilization of shared resources among tasks near a user-specified target level. Producing optimal scheduling strategies requires complete prior knowledge of task behavior, which is unlikely to be available in practice. Instead, suitable scheduling strategies must be learned online through interaction with the system. We consider the sample complexity of reinforcement learning in this domain, and demonstrate that while the problem state space is countably infinite, we may leverage the problem’s structure to guarantee efficient learning.",
"title": ""
},
{
"docid": "259339e228c4b569f3813d3f3c7c832f",
"text": "BACKGROUND\nPrevention and management of work-related stress and related mental problems is a great challenge. Mobile applications are a promising way to integrate prevention strategies into the everyday lives of citizens.\n\n\nOBJECTIVE\nThe objectives of this study was to study the usage, acceptance, and usefulness of a mobile mental wellness training application among working-age individuals, and to derive preliminary design implications for mobile apps for stress management.\n\n\nMETHODS\nOiva, a mobile app based on acceptance and commitment therapy (ACT), was designed to support active learning of skills related to mental wellness through brief ACT-based exercises in the daily life. A one-month field study with 15 working-age participants was organized to study the usage, acceptance, and usefulness of Oiva. The usage of Oiva was studied based on the usage log files of the application. Changes in wellness were measured by three validated questionnaires on stress, satisfaction with life (SWLS), and psychological flexibility (AAQ-II) at the beginning and at end of the study and by user experience questionnaires after one week's and one month's use. In-depth user experience interviews were conducted after one month's use to study the acceptance and user experiences of Oiva.\n\n\nRESULTS\nOiva was used actively throughout the study. The average number of usage sessions was 16.8 (SD 2.4) and the total usage time per participant was 3 hours 12 minutes (SD 99 minutes). Significant pre-post improvements were obtained in stress ratings (mean 3.1 SD 0.2 vs mean 2.5 SD 0.1, P=.003) and satisfaction with life scores (mean 23.1 SD 1.3 vs mean 25.9 SD 0.8, P=.02), but not in psychological flexibility. Oiva was perceived easy to use, acceptable, and useful by the participants. A randomized controlled trial is ongoing to evaluate the effectiveness of Oiva on working-age individuals with stress problems.\n\n\nCONCLUSIONS\nA feasibility study of Oiva mobile mental wellness training app showed good acceptability, usefulness, and engagement among the working-age participants, and provided increased understanding on the essential features of mobile apps for stress management. Five design implications were derived based on the qualitative findings: (1) provide exercises for everyday life, (2) find proper place and time for challenging content, (3) focus on self-improvement and learning instead of external rewards, (4) guide gently but do not restrict choice, and (5) provide an easy and flexible tool for self-reflection.",
"title": ""
},
{
"docid": "b65ca87f617d8ddf451a4d9dab470d17",
"text": "Artificial neural network is one of the intelligent methods in Artificial Intelligence. There are many decisions of different tasks using neural network approach. The forecasting problems are high challenge and researchers use different methods to solve them. The financial tasks related to forecasting, classification and management using artificial neural network are considered. The technology and methods for prediction of financial data as well as the developed system for forecasting of financial markets via neural network are described in the paper. The designed architecture of a neural network using four different technical indicators is presented. The developed neural network is used for forecasting movement of stock prices one day ahead and consists of an input layer, one hidden layer and an output layer. The training method is a training algorithm with back propagation of the error. The main advantage of the developed system is self-determination of the optimal topology of neural network, due to which it becomes flexible and more precise. The proposed system with neural network is universal and can be applied to various financial instruments using only basic technical indicators as input data. Key-Words: neural networks, forecasting, training algorithm, financial indicators, backpropagation",
"title": ""
},
{
"docid": "fa82b75a3244ef2407c2d14c8a3a5918",
"text": "Popular sites like Houzz, Pinterest, and LikeThatDecor, have communities of users helping each other answer questions about products in images. In this paper we learn an embedding for visual search in interior design. Our embedding contains two different domains of product images: products cropped from internet scenes, and products in their iconic form. With such a multi-domain embedding, we demonstrate several applications of visual search including identifying products in scenes and finding stylistically similar products. To obtain the embedding, we train a convolutional neural network on pairs of images. We explore several training architectures including re-purposing object classifiers, using siamese networks, and using multitask learning. We evaluate our search quantitatively and qualitatively and demonstrate high quality results for search across multiple visual domains, enabling new applications in interior design.",
"title": ""
},
{
"docid": "69a6cfb649c3ccb22f7a4467f24520f3",
"text": "We propose a two-stage neural model to tackle question generation from documents. First, our model estimates the probability that word sequences in a document are ones that a human would pick when selecting candidate answers by training a neural key-phrase extractor on the answers in a question-answering corpus. Predicted key phrases then act as target answers and condition a sequence-tosequence question-generation model with a copy mechanism. Empirically, our keyphrase extraction model significantly outperforms an entity-tagging baseline and existing rule-based approaches. We further demonstrate that our question generation system formulates fluent, answerable questions from key phrases. This twostage system could be used to augment or generate reading comprehension datasets, which may be leveraged to improve machine reading systems or in educational settings.",
"title": ""
},
{
"docid": "0a05cfa04d520fcf1db6c4aafb9b65b6",
"text": "Motor learning can be defined as changing performance so as to optimize some function of the task, such as accuracy. The measure of accuracy that is optimized is called a loss function and specifies how the CNS rates the relative success or cost of a particular movement outcome. Models of pointing in sensorimotor control and learning usually assume a quadratic loss function in which the mean squared error is minimized. Here we develop a technique for measuring the loss associated with errors. Subjects were required to perform a task while we experimentally controlled the skewness of the distribution of errors they experienced. Based on the change in the subjects' average performance, we infer the loss function. We show that people use a loss function in which the cost increases approximately quadratically with error for small errors and significantly less than quadratically for large errors. The system is thus robust to outliers. This suggests that models of sensorimotor control and learning that have assumed minimizing squared error are a good approximation but tend to penalize large errors excessively.",
"title": ""
},
{
"docid": "c83db87d7ac59e1faf75b408953e1324",
"text": "PURPOSE\nThis project was conducted to obtain information about reading problems of adults with traumatic brain injury (TBI) with mild-to-moderate cognitive impairments and to investigate how these readers respond to reading comprehension strategy prompts integrated into digital versions of text.\n\n\nMETHOD\nParticipants from 2 groups, adults with TBI (n = 15) and matched controls (n = 15), read 4 different 500-word expository science passages linked to either a strategy prompt condition or a no-strategy prompt condition. The participants' reading comprehension was evaluated using sentence verification and free recall tasks.\n\n\nRESULTS\nThe TBI and control groups exhibited significant differences on 2 of the 5 reading comprehension measures: paraphrase statements on a sentence verification task and communication units on a free recall task. Unexpected group differences were noted on the participants' prerequisite reading skills. For the within-group comparison, participants showed significantly higher reading comprehension scores on 2 free recall measures: words per communication unit and type-token ratio. There were no significant interactions.\n\n\nCONCLUSION\nThe results help to elucidate the nature of reading comprehension in adults with TBI with mild-to-moderate cognitive impairments and endorse further evaluation of reading comprehension strategies as a potential intervention option for these individuals. Future research is needed to better understand how individual differences influence a person's reading and response to intervention.",
"title": ""
},
{
"docid": "1ffc6db796b8e8a03165676c1bc48145",
"text": "UMAP (Uniform Manifold Approximation and Projection) is a novel manifold learning technique for dimension reduction. UMAP is constructed from a theoretical framework based in Riemannian geometry and algebraic topology. e result is a practical scalable algorithm that applies to real world data. e UMAP algorithm is competitive with t-SNE for visualization quality, and arguably preserves more of the global structure with superior run time performance. Furthermore, UMAP has no computational restrictions on embedding dimension, making it viable as a general purpose dimension reduction technique for machine learning.",
"title": ""
},
{
"docid": "3ce021aa52dac518e1437d397c63bf68",
"text": "Malaria is a common and sometimes fatal disease caused by infection with Plasmodium parasites. Cerebral malaria (CM) is a most severe complication of infection with Plasmodium falciparum parasites which features a complex immunopathology that includes a prominent neuroinflammation. The experimental mouse model of cerebral malaria (ECM) induced by infection with Plasmodium berghei ANKA has been used abundantly to study the role of single genes, proteins and pathways in the pathogenesis of CM, including a possible contribution to neuroinflammation. In this review, we discuss the Plasmodium berghei ANKA infection model to study human CM, and we provide a summary of all host genetic effects (mapped loci, single genes) whose role in CM pathogenesis has been assessed in this model. Taken together, the reviewed studies document the many aspects of the immune system that are required for pathological inflammation in ECM, but also identify novel avenues for potential therapeutic intervention in CM and in diseases which feature neuroinflammation.",
"title": ""
}
] |
scidocsrr
|
8c58ca781d2b58f59f1cc48311396108
|
Scalable Kernel TCP Design and Implementation for Short-Lived Connections
|
[
{
"docid": "baa59c53346e16f4c55b6fef20f19a89",
"text": "Incoming and outgoing processing for a given TCP connection often execute on different cores: an incoming packet is typically processed on the core that receives the interrupt, while outgoing data processing occurs on the core running the relevant user code. As a result, accesses to read/write connection state (such as TCP control blocks) often involve cache invalidations and data movement between cores' caches. These can take hundreds of processor cycles, enough to significantly reduce performance.\n We present a new design, called Affinity-Accept, that causes all processing for a given TCP connection to occur on the same core. Affinity-Accept arranges for the network interface to determine the core on which application processing for each new connection occurs, in a lightweight way; it adjusts the card's choices only in response to imbalances in CPU scheduling. Measurements show that for the Apache web server serving static files on a 48-core AMD system, Affinity-Accept reduces time spent in the TCP stack by 30% and improves overall throughput by 24%.",
"title": ""
},
{
"docid": "b7222f86da6f1e44bd1dca88eb59dc4b",
"text": "A virtualized system includes a new layer of software, the virtual machine monitor. The VMM's principal role is to arbitrate accesses to the underlying physical host platform's resources so that multiple operating systems (which are guests of the VMM) can share them. The VMM presents to each guest OS a set of virtual platform interfaces that constitute a virtual machine (VM). Once confined to specialized, proprietary, high-end server and mainframe systems, virtualization is now becoming more broadly available and is supported in off-the-shelf systems based on Intel architecture (IA) hardware. This development is due in part to the steady performance improvements of IA-based systems, which mitigates traditional virtualization performance overheads. Intel virtualization technology provides hardware support for processor virtualization, enabling simplifications of virtual machine monitor software. Resulting VMMs can support a wider range of legacy and future operating systems while maintaining high performance.",
"title": ""
},
{
"docid": "bb8604e0446fd1d3b01f426a8aa8c7e5",
"text": "Commodity computer systems contain more and more processor cores and exhibit increasingly diverse architectural tradeoffs, including memory hierarchies, interconnects, instruction sets and variants, and IO configurations. Previous high-performance computing systems have scaled in specific cases, but the dynamic nature of modern client and server workloads, coupled with the impossibility of statically optimizing an OS for all workloads and hardware variants pose serious challenges for operating system structures.\n We argue that the challenge of future multicore hardware is best met by embracing the networked nature of the machine, rethinking OS architecture using ideas from distributed systems. We investigate a new OS structure, the multikernel, that treats the machine as a network of independent cores, assumes no inter-core sharing at the lowest level, and moves traditional OS functionality to a distributed system of processes that communicate via message-passing.\n We have implemented a multikernel OS to show that the approach is promising, and we describe how traditional scalability problems for operating systems (such as memory management) can be effectively recast using messages and can exploit insights from distributed systems and networking. An evaluation of our prototype on multicore systems shows that, even on present-day machines, the performance of a multikernel is comparable with a conventional OS, and can scale better to support future hardware.",
"title": ""
}
] |
[
{
"docid": "faf25bfda6d078195b15f5a36a32673a",
"text": "In high performance VLSI circuits, the power consumption is mainly related to signal transition, charging and discharging of parasitic capacitance in transistor during switching activity. Adiabatic switching is a reversible logic to conserve energy instead of dissipating power reuses it. In this paper, low power multipliers and compressor are designed using adiabatic logic. Compressors are the basic components in many applications like partial product summation in multipliers. The Vedic multiplier is designed using the compressor and the power result is analysed. The designs are implemented and the power results are obtained using TANNER EDA 12.0 tool. This paper presents a novel scheme for analysis of low power multipliers using adiabatic logic in inverter and in the compressor. The scheme is optimized for low power as well as high speed implementation over reported scheme. K e y w o r d s : A d i a b a t i c l o g i c , C o m p r e s s o r , M u l t i p l i e r s .",
"title": ""
},
{
"docid": "4a518f4cdb34f7cff1d75975b207afe4",
"text": "In this paper, the design and measurement results of a highly efficient 1-Watt broadband class J SiGe power amplifier (PA) at 700 MHz are reported. Comparisons between a class J PA and a traditional class AB/B PA have been made, first through theoretical analysis in terms of load network, efficiency and bandwidth behavior, and secondly by bench measurement data. A single-ended power cell is designed and fabricated in the 0.35 μm IBM 5PAe SiGe BiCMOS technology with through-wafer-vias (TWVs). Watt-level output power with greater than 50% efficiency is achieved on bench across a wide bandwidth of 500 MHz to 900 MHz for the class J PA (i.e., >;57% bandwidth at the center frequency of 700 MHz). Psat of 30.9 dBm with 62% collector efficiency (CE) at 700 MHz is measured while the highest efficiency of 68.9% occurs at 650 MHz using a 4.2 V supply. Load network of this class J PA is realized with lumped passive components on a FR4 printed circuit board (PCB). A narrow-band class AB PA counterpart is also designed and fabricated for comparison. The data suggests that the broadband class J SiGe PA can be promising for future multi-band wireless applications.",
"title": ""
},
{
"docid": "e1ba35e1558540c1b99abf1e05e927fc",
"text": "Device-to-device (D2D) communication underlaying cellular networks brings significant benefits to resource utilization, improving user's throughput and extending battery life of user equipments. However, the allocation of radio resources and power to D2D communication needs elaborate coordination, as D2D communication causes interference to cellular networks. In this paper, we propose a novel joint radio resource and power allocation scheme to improve the performance of the system in the uplink period. Energy efficiency is considered as our optimization objective since devices are handheld equipments with limited battery life. We formulate the the allocation problem as a reverse iterative combinatorial auction game. In the auction, radio resources occupied by cellular users are considered as bidders competing for D2D packages and their corresponding transmit power. We propose an algorithm to solve the allocation problem as an auction game. We also perform numerical simulations to prove the efficacy of the proposed algorithm.",
"title": ""
},
{
"docid": "ed97b6815085d2664c6548abcf68a767",
"text": "Good mental health literacy in young people and their key helpers may lead to better outcomes for those with mental disorders, either by facilitating early help-seeking by young people themselves, or by helping adults to identify early signs of mental disorders and seek help on their behalf. Few interventions to improve mental health literacy of young people and their helpers have been evaluated, and even fewer have been well evaluated. There are four categories of interventions to improve mental health literacy: whole-of-community campaigns; community campaigns aimed at a youth audience; school-based interventions teaching help-seeking skills, mental health literacy, or resilience; and programs training individuals to better intervene in a mental health crisis. The effectiveness of future interventions could be enhanced by using specific health promotion models to guide their development.",
"title": ""
},
{
"docid": "12fa7a50132468598cf20ac79f51b540",
"text": "As medical organizations modernize their operations, they are increasingly adopting electronic health records (EHRs) and deploying new health information technology systems that create, gather, and manage their information. As a result, the amount of data available to clinicians, administrators, and researchers in the healthcare system continues to grow at an unprecedented rate. However, despite the substantial evidence showing the benefits of EHR adoption, e-prescriptions, and other components of health information exchanges, healthcare providers often report only modest improvements in their ability to make better decisions by using more comprehensive clinical information. The large volume of clinical data now being captured for each patient poses many challenges to (a) clinicians trying to combine data from different disparate systems and make sense of the patient’s condition within the context of the patient’s medical history, (b) administrators trying to make decisions grounded in data, (c) researchers trying to understand differences in population outcomes, and (d) patients trying to make use of their own medical data. In fact, despite the many hopes that access to more information would lead to more informed decisions, access to comprehensive and large-scale clinical data resources has instead made some analytical processes even more difficult. Visual analytics is an emerging discipline that has shown significant promise in addressing many of these information overload challenges. Visual analytics is the science of analytical reasoning facilitated by advanced interactive visual interfaces. In order to facilitate reasoning over, and interpretation of, complex data, visual analytics techniques combine concepts from data mining, machine learning, human computing interaction, and human cognition. As the volume of healthrelated data continues to grow at unprecedented rates and new information systems are deployed to those already overrun with too much data, there is a need for exploring how visual analytics methods can be used to avoid information overload. Information overload is the problem that arises when individuals try to analyze a number of variables that surpass the limits of human cognition. Information overload often leads to users ignoring, overlooking, or misinterpreting crucial information. The information overload problem is widespread in the healthcare domain and can result in incorrect interpretations of data, wrong diagnoses, and missed warning signs of impending changes to patient conditions. The multi-modal and heterogeneous properties of EHR data together with the frequency of redundant, irrelevant, and subjective measures pose significant challenges to users trying to synthesize the information and obtain actionable insights. Yet despite these challenges, the promise of big data in healthcare remains. There is a critical need to support research and pilot projects to study effective ways of using visual analytics to support the analysis of large amounts of medical data. Currently new interactive interfaces are being developed to unlock the value of large-scale clinical databases for a wide variety of different tasks. For instance, visual analytics could help provide clinicians with more effective ways to combine the longitudinal clinical data with the patient-generated health data to better understand patient progression. Patients could be supported in understanding personalized wellness plans and comparing their health measurements against similar patients. 
Researchers could use visual analytics tools to help perform population-based analysis and obtain insights from large amounts of clinical data. Hospital administrators could use visual analytics to better understand the productivity of an organization, gaps in care, outcomes measurements, and patient satisfaction. Visual analytics systems—by combining advanced interactive visualization methods with statistical inference and correlation models—have the potential to support intuitive analysis for all of these user populations while masking the underlying complexity of the data. This special focus issue of JAMIA is dedicated to new research, applications, case studies, and approaches that use visual analytics to support the analysis of complex clinical data.",
"title": ""
},
{
"docid": "dc8143e1aee228db14347dc1094a7df6",
"text": "In this paper, we propose a novel large-scale, context-aware recommender system that provides accurate recommendations, scalability to a large number of diverse users and items, differential services, and does not suffer from “cold start” problems. Our proposed recommendation system relies on a novel algorithm which learns online the item preferences of users based on their click behavior, and constructs online item-cluster trees. The recommendations are then made by choosing an item-cluster level and then selecting an item within that cluster as a recommendation for the user. This approach is able to significantly improve the learning speed when the number of users and items is large, while still providing high recommendation accuracy. Each time a user arrives at the website, the system makes a recommendation based on the estimations of item payoffs by exploiting past context arrivals in a neighborhood of the current user's context. It exploits the similarity of contexts to learn how to make better recommendations even when the number and diversity of users and items is large. This also addresses the cold start problem by using the information gained from similar users and items to make recommendations for new users and items. We theoretically prove that the proposed algorithm for item recommendations converges to the optimal item recommendations in the long-run. We also bound the probability of making a suboptimal item recommendation for each user arriving to the system while the system is learning. Experimental results show that our approach outperforms the state-of-the-art algorithms by over 20 percent in terms of click through rates.",
"title": ""
},
{
"docid": "339bfb7f54ce8202de1a4079097a6f8d",
"text": "This article reviews research from published studies on the association between nutrition among school-aged children and their performance in school and on tests of cognitive functioning. Each reviewed article is accompanied by a brief description of its research methodology and outcomes. Articles are separated into 4 categories: food insufficiency, iron deficiency and supplementation, deficiency and supplementation of micronutrients, and the importance of breakfast. Research shows that children with iron deficiencies sufficient to cause anemia are at a disadvantage academically. Their cognitive performance seems to improve with iron therapy. A similar association and improvement with therapy is not found with either zinc or iodine deficiency, according to the reviewed articles. There is no evidence that population-wide vitamin and mineral supplementation will lead to improved academic performance. Food insufficiency is a serious problem affecting children’s ability to learn, but its relevance to US populations needs to be better understood. Research indicates that school breakfast programs seem to improve attendance rates and decrease tardiness. Among severely undernourished populations, school breakfast programs seem to improve academic performance and cognitive functioning. (J Sch Health. 2005;75(6):199-213) Parents, educators, and health professionals have long touted the association between what our children eat and their school performance. Evidence for this correlation is not always apparent, and biases on both sides of the argument sometimes override data when this topic is discussed. Understanding existing evidence linking students’ dietary intake and their ability to learn is a logical first step in developing school food service programs, policies, and curricula on nutrition and in guiding parents of school-aged children. The National Coordinating Committee on School Health and Safety (NCCSHS) comprises representatives of several federal departments and nongovernmental organizations working to develop and enhance coordinated school health programs. The NCCSHS has undertaken a project to enhance awareness of evidence linking child health and school performance and identifying gaps in our knowledge. NCCSHS has conducted a search of peerreviewed, published research reporting on the relationship between students’ health and academic performance. In addition to nutrition, NCCSHS has sponsored research reviews of the association between academic performance and asthma, diabetes, sickle cell anemia, sleep, obesity, and physical activity. SELECTION OF ARTICLES Articles meeting the following specific characteristics were selected. (1) Subjects were school-aged children (5 to 18 years), (2) article was published after 1980 in a peerreviewed journal, and (3) findings included at least 1 of the following outcome measures: school attendance, academic achievement, a measure of cognitive ability (such as general intelligence, memory), and attention. Students’ level of attention was only acceptable as an outcome measure for purposes of inclusion in this review, if attention was measured objectively in the school environment. Studies of the impact of nutritional intake in children prior to school age were not included. Studies were identified using MedLine and similar Internet-based searches. If a full article could not be retrieved, but a detailed abstract was available, the research was included. 
Outcomes other than academic achievement, attendance, and cognitive ability, although considered major by the authors, may not be described at all or are only briefly alluded to in the tables of research descriptions.",
"title": ""
},
{
"docid": "fc6382579f90ffbc2e54498ad2034d3b",
"text": "Features extracted by deep networks have been popular in many visual search tasks. This article studies deep network structures and training schemes for mobile visual search. The goal is to learn an effective yet portable feature representation that is suitable for bridging the domain gap between mobile user photos and (mostly) professionally taken product images while keeping the computational cost acceptable for mobile-based applications. The technical contributions are twofold. First, we propose an alternative of the contrastive loss popularly used for training deep Siamese networks, namely robust contrastive loss, where we relax the penalty on some positive and negative pairs to alleviate overfitting. Second, a simple multitask fine-tuning scheme is leveraged to train the network, which not only utilizes knowledge from the provided training photo pairs but also harnesses additional information from the large ImageNet dataset to regularize the fine-tuning process. Extensive experiments on challenging real-world datasets demonstrate that both the robust contrastive loss and the multitask fine-tuning scheme are effective, leading to very promising results with a time cost suitable for mobile product search scenarios.",
"title": ""
},
{
"docid": "b49925f5380f695ccc3f9a150030051c",
"text": "Understanding the behaviour of algorithms is a key element of computer science. However, this learning objective is not always easy to achieve, as the behaviour of some algorithms is complicated or not readily observable, or affected by the values of their input parameters. To assist students in learning the multilevel feedback queue scheduling algorithm (MLFQ), we designed and developed an interactive visualization tool, Marble MLFQ, that illustrates how the algorithm works under various conditions. The tool is intended to supplement course material and instructions in an undergraduate operating systems course. The main features of Marble MLFQ are threefold: (1) It animates the steps of the scheduling algorithm graphically to allow users to observe its behaviour; (2) It provides a series of lessons to help users understand various aspects of the algorithm; and (3) It enables users to customize input values to the algorithm to support exploratory learning.",
"title": ""
},
{
"docid": "4f1070b988605290c1588918a716cef2",
"text": "The aim of this paper was to predict the static bending modulus of elasticity (MOES) and modulus of rupture (MOR) of Scots pine (Pinus sylvestris L.) wood using three nondestructive techniques. The mean values of the dynamic modulus of elasticity based on flexural vibration (MOEF), longitudinal vibration (MOELV), and indirect ultrasonic (MOEUS) were 13.8, 22.3, and 30.9 % higher than the static modulus of elasticity (MOES), respectively. The reduction of this difference, taking into account the shear deflection effect in the output values for static bending modulus of elasticity, was also discussed in this study. The three dynamic moduli of elasticity correlated well with the static MOES and MOR; correlation coefficients ranged between 0.68 and 0.96. The correlation coefficients between the dynamic moduli and MOES were higher than those between the dynamic moduli and MOR. The highest correlation between the dynamic moduli and static bending properties was obtained by the flexural vibration technique in comparison with longitudinal vibration and indirect ultrasonic techniques. Results showed that there was no obvious relationship between the density and the acoustic wave velocity that was obtained from the longitudinal vibration and ultrasonic techniques.",
"title": ""
},
{
"docid": "d19a77b3835b7b43acf57da377b11cb4",
"text": "Given the importance of relation or event extraction from biomedical research publications to support knowledge capture and synthesis, and the strong dependency of approaches to this information extraction task on syntactic information, it is valuable to understand which approaches to syntactic processing of biomedical text have the highest performance. We perform an empirical study comparing state-of-the-art traditional feature-based and neural network-based models for two core natural language processing tasks of part-of-speech (POS) tagging and dependency parsing on two benchmark biomedical corpora, GENIA and CRAFT. To the best of our knowledge, there is no recent work making such comparisons in the biomedical context; specifically no detailed analysis of neural models on this data is available. Experimental results show that in general, the neural models outperform the feature-based models on two benchmark biomedical corpora GENIA and CRAFT. We also perform a task-oriented evaluation to investigate the influences of these models in a downstream application on biomedical event extraction, and show that better intrinsic parsing performance does not always imply better extrinsic event extraction performance. We have presented a detailed empirical study comparing traditional feature-based and neural network-based models for POS tagging and dependency parsing in the biomedical context, and also investigated the influence of parser selection for a biomedical event extraction downstream task. We make the retrained models available at https://github.com/datquocnguyen/BioPosDep.",
"title": ""
},
{
"docid": "69f95ac2ca7b32677151de88b9d95d4c",
"text": "Gunaratna, Kalpa. PhD, Department of Computer Science and Engineering, Wright State University, 2017. Semantics-based Summarization of Entities in Knowledge Graphs. The processing of structured and semi-structured content on the Web has been gaining attention with the rapid progress in the Linking Open Data project and the development of commercial knowledge graphs. Knowledge graphs capture domain-specific or encyclopedic knowledge in the form of a data layer and add rich and explicit semantics on top of the data layer to infer additional knowledge. The data layer of a knowledge graph represents entities and their descriptions. The semantic layer on top of the data layer is called the schema (ontology), where relationships of the entity descriptions, their classes, and the hierarchy of the relationships and classes are defined. Today, there exist large knowledge graphs in the research community (e.g., encyclopedic datasets like DBpedia and Yago) and corporate world (e.g., Google knowledge graph) that encapsulate a large amount of knowledge for human and machine consumption. Typically, they consist of millions of entities and billions of facts describing these entities. While it is good to have this much knowledge available on the Web for consumption, it leads to information overload, and hence proper summarization (and presentation) techniques need to be explored. In this dissertation, we focus on creating both comprehensive and concise entity summaries at: (i) the single entity level and (ii) the multiple entity level. To summarize a single entity, we propose a novel approach called FACeted Entity Summarization (FACES) that considers importance, which is computed by combining popularity and uniqueness, and diversity of facts getting selected for the summary. We first conceptually group facts using semantic expansion and hierarchical incremental clustering techniques and form facets (i.e., groupings) that go beyond syntactic similarity. Then we rank both the facts and facets using Information Retrieval (IR) ranking techniques to pick the",
"title": ""
},
{
"docid": "7f110e4769b996de13afe63962bcf2d2",
"text": "Versu is a text-based simulationist interactive drama. Because it uses autonomous agents, the drama is highly replayable: you can play the same story from multiple perspectives, or assign different characters to the various roles. The architecture relies on the notion of a social practice to achieve coordination between the independent autonomous agents. A social practice describes a recurring social situation, and is a successor to the Schankian script. Social practices are implemented as reactive joint plans, providing affordances to the agents who participate in them. The practices never control the agents directly; they merely provide suggestions. It is always the individual agent who decides what to do, using utility-based reactive action selection.",
"title": ""
},
{
"docid": "ecdeb5b8665661c55d91b782dd8fb3a7",
"text": "We present a classifier-based parser that produces constituent trees in linear time. The parser uses a basic bottom-up shiftreduce algorithm, but employs a classifier to determine parser actions instead of a grammar. This can be seen as an extension of the deterministic dependency parser of Nivre and Scholz (2004) to full constituent parsing. We show that, with an appropriate feature set used in classification, a very simple one-path greedy parser can perform at the same level of accuracy as more complex parsers. We evaluate our parser on section 23 of the WSJ section of the Penn Treebank, and obtain precision and recall of 87.54% and 87.61%, respectively.",
"title": ""
},
{
"docid": "0c842ef34f1924e899e408309f306640",
"text": "A single-tube 5' nuclease multiplex PCR assay was developed on the ABI 7700 Sequence Detection System (TaqMan) for the detection of Neisseria meningitidis, Haemophilus influenzae, and Streptococcus pneumoniae from clinical samples of cerebrospinal fluid (CSF), plasma, serum, and whole blood. Capsular transport (ctrA), capsulation (bexA), and pneumolysin (ply) gene targets specific for N. meningitidis, H. influenzae, and S. pneumoniae, respectively, were selected. Using sequence-specific fluorescent-dye-labeled probes and continuous real-time monitoring, accumulation of amplified product was measured. Sensitivity was assessed using clinical samples (CSF, serum, plasma, and whole blood) from culture-confirmed cases for the three organisms. The respective sensitivities (as percentages) for N. meningitidis, H. influenzae, and S. pneumoniae were 88.4, 100, and 91.8. The primer sets were 100% specific for the selected culture isolates. The ctrA primers amplified meningococcal serogroups A, B, C, 29E, W135, X, Y, and Z; the ply primers amplified pneumococcal serotypes 1, 2, 3, 4, 5, 6, 7, 8, 9, 10A, 11A, 12, 14, 15B, 17F, 18C, 19, 20, 22, 23, 24, 31, and 33; and the bexA primers amplified H. influenzae types b and c. Coamplification of two target genes without a loss of sensitivity was demonstrated. The multiplex assay was then used to test a large number (n = 4,113) of culture-negative samples for the three pathogens. Cases of meningococcal, H. influenzae, and pneumococcal disease that had not previously been confirmed by culture were identified with this assay. The ctrA primer set used in the multiplex PCR was found to be more sensitive (P < 0.0001) than the ctrA primers that had been used for meningococcal PCR testing at that time.",
"title": ""
},
{
"docid": "d87eeaac97b868b83e52f0154ff56071",
"text": "This paper presents a new algorithm, termed <italic>truncated amplitude flow</italic> (TAF), to recover an unknown vector <inline-formula> <tex-math notation=\"LaTeX\">$ {x}$ </tex-math></inline-formula> from a system of quadratic equations of the form <inline-formula> <tex-math notation=\"LaTeX\">$y_{i}=|\\langle {a}_{i}, {x}\\rangle |^{2}$ </tex-math></inline-formula>, where <inline-formula> <tex-math notation=\"LaTeX\">$ {a}_{i}$ </tex-math></inline-formula>’s are given random measurement vectors. This problem is known to be <italic>NP-hard</italic> in general. We prove that as soon as the number of equations is on the order of the number of unknowns, TAF recovers the solution exactly (up to a global unimodular constant) with high probability and complexity growing linearly with both the number of unknowns and the number of equations. Our TAF approach adapts the <italic>amplitude-based</italic> empirical loss function and proceeds in two stages. In the first stage, we introduce an <italic>orthogonality-promoting</italic> initialization that can be obtained with a few power iterations. Stage two refines the initial estimate by successive updates of scalable <italic>truncated generalized gradient iterations</italic>, which are able to handle the rather challenging nonconvex and nonsmooth amplitude-based objective function. In particular, when vectors <inline-formula> <tex-math notation=\"LaTeX\">$ {x}$ </tex-math></inline-formula> and <inline-formula> <tex-math notation=\"LaTeX\">${a}_{i}$ </tex-math></inline-formula>’s are real valued, our gradient truncation rule provably eliminates erroneously estimated signs with high probability to markedly improve upon its untruncated version. Numerical tests using synthetic data and real images demonstrate that our initialization returns more accurate and robust estimates relative to spectral initializations. Furthermore, even under the same initialization, the proposed amplitude-based refinement outperforms existing Wirtinger flow variants, corroborating the superior performance of TAF over state-of-the-art algorithms.",
"title": ""
},
{
"docid": "74497cbcf698a821e755b93ba5d8bb7a",
"text": "The integration of different learning and adaptation techniques to overcome individual limitations and to achieve synergetic effects through the hybridization or fusion of these techniques has, in recent years, contributed to a large number of new intelligent system designs. Computational intelligence is an innovative framework for constructing intelligent hybrid architectures involving Neural Networks (NN), Fuzzy Inference Systems (FIS), Probabilistic Reasoning (PR) and derivative free optimization techniques such as Evolutionary Computation (EC). Most of these hybridization approaches, however, follow an ad hoc design methodology, justified by success in certain application domains. Due to the lack of a common framework it often remains difficult to compare the various hybrid systems conceptually and to evaluate their performance comparatively. This chapter introduces the different generic architectures for integrating intelligent systems. The designing aspects and perspectives of different hybrid archirectures like NN-FIS, EC-FIS, EC-NN, FIS-PR and NN-FIS-EC systems are presented. Some conclusions are also provided towards the end.",
"title": ""
},
{
"docid": "4c8ac629f8a7faaa315e4e4441eb630c",
"text": "This article reviews the cognitive therapy of depression. The psychotherapy based on this theory consists of behavioral and verbal techniques to change cognitions, beliefs, and errors in logic in the patient's thinking. A few of the various techniques are described and a case example is provided. Finally, the outcome studies testing the efficacy of this approach are reviewed.",
"title": ""
},
{
"docid": "da56b994c91051847a05a5ffb69c78f0",
"text": "We define CWS, a non-preemptive scheduling policy for workloads with correlated job sizes. CWS tackles the scheduling problem by inferring the expected sizes of upcoming jobs based on the structure of correlations and on the outcome of past scheduling decisions. Size prediction is achieved using a class of Hidden Markov Models (HMM) with continuous observation densities that describe job sizes. We show how the forward-backward algorithm of HMMs applies effectively in scheduling applications and how it can be used to derive closed-form expressions for size prediction. This is particularly simple to implement in the case of observation densities that are phase-type (PH-type) distributed, where existing fitting methods for Markovian point processes may also simplify the parameterization of the HMM workload model.\n Based on the job size predictions, CWS emulates size-based policies which favor short jobs, with accuracy depending mainly on the HMM used to parametrize the scheduling algorithm. Extensive simulation and analysis illustrate that CWS is competitive with policies that assume exact information about the workload.",
"title": ""
},
{
"docid": "638265455e769ee474106f26fceb6c19",
"text": "This paper considers a novel implementation scheme for fixed priority (FP) uniprocessor scheduling of mixed criticality systems. The scheme requires that jobs have their execution times monitored. If system behavior inconsistent with lower criticality levels is detected during run-time via such monitoring, (i) tasks of lower criticalities are discarded (this is already done by current FP mixed-criticality scheduling algorithms); and (ii) the priorities of the remaining tasks may be re-ordered. Evaluations illustrate the benefits of this scheme.",
"title": ""
}
] |
scidocsrr
|
65beaaa72aadb30d96cefc6d19e4b84c
|
The Truth and Nothing But the Truth: Multimodal Analysis for Deception Detection
|
[
{
"docid": "ff56bae298b25accf6cd8c2710160bad",
"text": "An important difference between traditional AI systems and human intelligence is the human ability to harness commonsense knowledge gleaned from a lifetime of learning and experience to make informed decisions. This allows humans to adapt easily to novel situations where AI fails catastrophically due to a lack of situation-specific rules and generalization capabilities. Commonsense knowledge also provides background information that enables humans to successfully operate in social situations where such knowledge is typically assumed. Since commonsense consists of information that humans take for granted, gathering it is an extremely difficult task. Previous versions of SenticNet were focused on collecting this kind of knowledge for sentiment analysis but they were heavily limited by their inability to generalize. SenticNet 4 overcomes such limitations by leveraging on conceptual primitives automatically generated by means of hierarchical clustering and dimensionality reduction.",
"title": ""
}
] |
[
{
"docid": "0837ca7bd6e28bb732cfdd300ccecbca",
"text": "In our previous research we have made literature analysis and discovered possible mind map application areas. We have pointed out why currently developed software and methods are not adequate and why we are developing a new one. We have defined system architecture and functionality that our software would have. After that, we proceeded with text-mining algorithm development and testing after which we have concluded with our plans for further research. In this paper we will give basic notions about previously published article and present our custom developed software for automatic mind map generation. This software will be tested. Generated mind maps will be critically analyzed. The paper will be concluded with research summary and possible further research and software improvement.",
"title": ""
},
{
"docid": "2fbfe1fa8cda571a931b700cbb18f46e",
"text": "A low-noise front-end and its controller are proposed for capacitive touch screen panels. The proposed front-end circuit based on a ΔΣ ADC uses differential sensing and integration scheme to maximize the input dynamic range. In addition, supply and internal reference voltage noise are effectively removed in the sensed touch signal. Furthermore, the demodulation process in front of the ΔΣ ADC provides the maximized oversampling ratio (OSR) so that the scan rate can be increased at the targeted resolution. The proposed IC is implemented in a mixed-mode 0.18-μm CMOS process. The measurement is performed on a bar-patterned 4.3-inch touch screen panel with 12 driving lines and 8 sensing channels. The report rate is 100 Hz, and SNR and spatial jitter are 54 dB and 0.11 mm, respectively. The chip area is 3 × 3 mm2 and total power consumption is 2.9 mW with 1.8-V and 3.3-V supply.",
"title": ""
},
{
"docid": "e82c0826863ccd9cd647725fc00a2137",
"text": "Particle Markov chain Monte Carlo (PMCMC) is a systematic way of combining the two main tools used for Monte Carlo statistical inference: sequential Monte Carlo (SMC) and Markov chain Monte Carlo (MCMC). We present a new PMCMC algorithm that we refer to as particle Gibbs with ancestor sampling (PGAS). PGAS provides the data analyst with an off-the-shelf class of Markov kernels that can be used to simulate, for instance, the typically high-dimensional and highly autocorrelated state trajectory in a state-space model. The ancestor sampling procedure enables fast mixing of the PGAS kernel even when using seemingly few particles in the underlying SMC sampler. This is important as it can significantly reduce the computational burden that is typically associated with using SMC. PGAS is conceptually similar to the existing PG with backward simulation (PGBS) procedure. Instead of using separate forward and backward sweeps as in PGBS, however, we achieve the same effect in a single forward sweep. This makes PGAS well suited for addressing inference problems not only in state-space models, but also in models with more complex dependencies, such as non-Markovian, Bayesian nonparametric, and general probabilistic graphical models.",
"title": ""
},
{
"docid": "629648968e2b378f46fa19ae6a343e70",
"text": "BACKGROUND\nAustralia was one of the first countries to introduce a publicly funded national human papillomavirus (HPV) vaccination program that commenced in April 2007, using the quadrivalent HPV vaccine targeting 12- to 13-year-old girls on an ongoing basis. Two-year catch-up programs were offered to 14- to 17- year-old girls in schools and 18- to 26-year-old women in community-based settings. We present data from the school-based program on population-level vaccine effectiveness against cervical abnormalities in Victoria, Australia.\n\n\nMETHODS\nData for women age-eligible for the HPV vaccination program were linked between the Victorian Cervical Cytology Registry and the National HPV Vaccination Program Register to create a cohort of screening women who were either vaccinated or unvaccinated. Entry into the cohort was 1 April 2007 or at first Pap test for women not already screening. Vaccine effectiveness (VE) and hazard ratios (HR) for cervical abnormalities by vaccination status between 1 April 2007 and 31 December 2011 were calculated using proportional hazards regression.\n\n\nRESULTS\nThe study included 14,085 unvaccinated and 24,871 vaccinated women attending screening who were eligible for vaccination at school, 85.0% of whom had received three doses. Detection rates of histologically confirmed high-grade (HG) cervical abnormalities and high-grade cytology (HGC) were significantly lower for vaccinated women (any dose) (HG 4.8 per 1,000 person-years, HGC 11.9 per 1,000 person-years) compared with unvaccinated women (HG 6.4 per 1,000 person-years, HGC 15.3 per 1,000 person-years) HR 0.72 (95% CI 0.58 to 0.91) and HR 0.75 (95% CI 0.65 to 0.87), respectively. The HR for low-grade (LG) cytological abnormalities was 0.76 (95% CI 0.72 to 0.80). VE adjusted a priori for age at first screening, socioeconomic status and remoteness index, for women who were completely vaccinated, was greatest for CIN3+/AIS at 47.5% (95% CI 22.7 to 64.4) and 36.4% (95% CI 9.8 to 55.1) for women who received any dose of vaccine, and was negatively associated with age. For women who received only one or two doses of vaccine, HRs for HG histology were not significantly different from 1.0, although the number of outcomes was small.\n\n\nCONCLUSION\nA population-based HPV vaccination program in schools significantly reduced cervical abnormalities for vaccinated women within five years of implementation, with the greatest vaccine effectiveness observed for the youngest women.",
"title": ""
},
{
"docid": "cfea41d4bc6580c91ee27201360f8e17",
"text": "It is common sense that cloud-native applications (CNA) are intentionally designed for the cloud. Although this understanding can be broadly used it does not guide and explain what a cloud-native application exactly is. The term ”cloud-native” was used quite frequently in birthday times of cloud computing (2006) which seems somehow obvious nowadays. But the term disappeared almost completely. Suddenly and in the last years the term is used again more and more frequently and shows increasing momentum. This paper summarizes the outcomes of a systematic mapping study analyzing research papers covering ”cloud-native” topics, research questions and engineering methodologies. We summarize research focuses and trends dealing with cloud-native application engineering approaches. Furthermore, we provide a definition for the term ”cloud-native application” which takes all findings, insights of analyzed publications and already existing and well-defined terminology into account.",
"title": ""
},
{
"docid": "5300e9938a545895c8b97fe6c9d06aa5",
"text": "Background subtraction is a common computer vision task. We analyze the usual pixel-level approach. We develop an efficient adaptive algorithm using Gaussian mixture probability density. Recursive equations are used to constantly update the parameters and but also to simultaneously select the appropriate number of components for each pixel.",
"title": ""
},
{
"docid": "77906a8aebb33860423077ac66dc6552",
"text": "If a physical object has a smooth or piecewise smooth boundary, its images obtained by cameras in varying positions undergo smooth apparent deformations. These deformations are locally well approximated by affine transforms of the image plane. In consequence the solid object recognition problem has often been led back to the computation of affine invariant image local features. Such invariant features could be obtained by normalization methods, but no fully affine normalization method exists for the time being. Even scale invariance is dealt with rigorously only by the scaleinvariant feature transform (SIFT) method. By simulating zooms out and normalizing translation and rotation, SIFT is invariant to four out of the six parameters of an affine transform. The method proposed in this paper, affine-SIFT (ASIFT), simulates all image views obtainable by varying the two camera axis orientation parameters, namely, the latitude and the longitude angles, left over by the SIFT method. Then it covers the other four parameters by using the SIFT method itself. The resulting method will be mathematically proved to be fully affine invariant. Against any prognosis, simulating all views depending on the two camera orientation parameters is feasible with no dramatic computational load. A two-resolution scheme further reduces the ASIFT complexity to about twice that of SIFT. A new notion, the transition tilt, measuring the amount of distortion from one view to another, is introduced. While an absolute tilt from a frontal to a slanted view exceeding 6 is rare, much higher transition tilts are common when two slanted views of an object are compared (see Figure 1). The attainable transition tilt is measured for each affine image comparison method. The new method permits one to reliably identify features that have undergone transition tilts of large magnitude, up to 36 and higher. This fact is substantiated by many experiments which show that ASIFT significantly outperforms the state-of-the-art methods SIFT, maximally stable extremal region (MSER), Harris-affine, and Hessian-affine.",
"title": ""
},
{
"docid": "94b482fefc9e8e61fe4614245ff03287",
"text": "In this paper, a general-purpose fuzzy controller for dc–dc converters is investigated. Based on a qualitative description of the system to be controlled, fuzzy controllers are capable of good performances, even for those systems where linear control techniques fail, e.g., when a mathematical description is not available or is in the presence of wide parameter variations. The presented approach is general and can be applied to any dc–dc converter topologies. Controller implementation is relatively simple and can guarantee a small-signal response as fast and stable as other standard regulators and an improved large-signal response. Simulation results of Buck-Boost and Sepic converters show control potentialities.",
"title": ""
},
{
"docid": "e64320b71675f2a059a50fd9479d2056",
"text": "Extreme sports (ES) are usually pursued in remote locations with little or no access to medical care with the athlete competing against oneself or the forces of nature. They involve high speed, height, real or perceived danger, a high level of physical exertion, spectacular stunts, and heightened risk element or death.Popularity for such sports has increased exponentially over the past two decades with dedicated TV channels, Internet sites, high-rating competitions, and high-profile sponsors drawing more participants.Recent data suggest that the risk and severity of injury in some ES is unexpectedly high. Medical personnel treating the ES athlete need to be aware there are numerous differences which must be appreciated between the common traditional sports and this newly developing area. These relate to the temperament of the athletes themselves, the particular epidemiology of injury, the initial management following injury, treatment decisions, and rehabilitation.The management of the injured extreme sports athlete is a challenge to surgeons and sports physicians. Appropriate safety gear is essential for protection from severe or fatal injuries as the margins for error in these sports are small.The purpose of this review is to provide an epidemiologic overview of common injuries affecting the extreme athletes through a focus on a few of the most popular and exciting extreme sports.",
"title": ""
},
{
"docid": "ffef3f247f0821eee02b8d8795ddb21c",
"text": "A broadband polarization reconfigurable rectenna is proposed, which can operate in three polarization modes. The receiving antenna of the rectenna is a polarization reconfigurable planar monopole antenna. By installing switches on the feeding network, the antenna can switch to receive electromagnetic (EM) waves with different polarizations, including linear polarization (LP), right-hand and left-hand circular polarizations (RHCP/LHCP). To achieve stable conversion efficiency of the rectenna (nr) in all the modes within a wide frequency band, a tunable matching network is inserted between the rectifying circuit and the antenna. The measured nr changes from 23.8% to 31.9% in the LP mode within 5.1-5.8 GHz and from 22.7% to 24.5% in the CP modes over 5.8-6 GHz. Compared to rectennas with conventional broadband matching network, the proposed rectenna exhibits more stable conversion efficiency.",
"title": ""
},
{
"docid": "6fc86c662db76c22e708c5091af6a0da",
"text": "Liver hemangiomas are the most common benign liver tumors and are usually incidental findings. Liver hemangiomas are readily demonstrated by abdominal ultrasonography, computed tomography or magnetic resonance imaging. Giant liver hemangiomas are defined by a diameter larger than 5 cm. In patients with a giant liver hemangioma, observation is justified in the absence of symptoms. Surgical resection is indicated in patients with abdominal (mechanical) complaints or complications, or when diagnosis remains inconclusive. Enucleation is the preferred surgical method, according to existing literature and our own experience. Spontaneous or traumatic rupture of a giant hepatic hemangioma is rare, however, the mortality rate is high (36-39%). An uncommon complication of a giant hemangioma is disseminated intravascular coagulation (Kasabach-Merritt syndrome); intervention is then required. Herein, the authors provide a literature update of the current evidence concerning the management of giant hepatic hemangiomas. In addition, the authors assessed treatment strategies and outcomes in a series of patients with giant liver hemangiomas managed in our department.",
"title": ""
},
{
"docid": "1e3729164ecb6b74dbe5c9019bff7ae4",
"text": "Serverless or functions as a service runtimes have shown significant benefits to efficiency and cost for event-driven cloud applications. Although serverless runtimes are limited to applications requiring lightweight computation and memory, such as machine learning prediction and inference, they have shown improvements on these applications beyond other cloud runtimes. Training deep learning can be both compute and memory intensive. We investigate the use of serverless runtimes while leveraging data parallelism for large models, show the challenges and limitations due to the tightly coupled nature of such models, and propose modifications to the underlying runtime implementations that would mitigate them. For hyperparameter optimization of smaller deep learning models, we show that serverless runtimes can provide significant benefit.",
"title": ""
},
{
"docid": "57167d5bf02e9c76057daa83d3f803c5",
"text": "When alcohol is consumed, the alcoholic beverages first pass through the various segments of the gastrointestinal (GI) tract. Accordingly, alcohol may interfere with the structure as well as the function of GI-tract segments. For example, alcohol can impair the function of the muscles separating the esophagus from the stomach, thereby favoring the occurrence of heartburn. Alcohol-induced damage to the mucosal lining of the esophagus also increases the risk of esophageal cancer. In the stomach, alcohol interferes with gastric acid secretion and with the activity of the muscles surrounding the stomach. Similarly, alcohol may impair the muscle movement in the small and large intestines, contributing to the diarrhea frequently observed in alcoholics. Moreover, alcohol inhibits the absorption of nutrients in the small intestine and increases the transport of toxins across the intestinal walls, effects that may contribute to the development of alcohol-related damage to the liver and other organs.",
"title": ""
},
{
"docid": "e4236031c7d165a48a37171c47de1c38",
"text": "We present a discrete event simulation model reproducing the adoption of Radio Frequency Identification (RFID) technology for the optimal management of common logistics processes of a Fast Moving Consumer Goods (FMCG) warehouse. In this study, simulation is exploited as a powerful tool to replicate both the reengineered RFID logistics processes and the flows of Electronic Product Code (EPC) data generated by such processes. Moreover, a complex tool has been developed to analyze data resulting from the simulation runs, thus addressing the issue of how the flows of EPC data generated by RFID technology can be exploited to provide value-added information for optimally managing the logistics processes. Specifically, an EPCIS compliant Data Warehouse has been designed to act as EPCIS Repository and store EPC data resulting from simulation. Starting from EPC data, properly designed tools, referred to as Business Intelligence Modules, provide value-added information for processes optimization. Due to the newness of RFID adoption in the logistics context and to the lack of real case examples that can be examined, we believe that both the model and the data management system developed can be very useful to understand the practical implications of the technology and related information flow, as well as to show how to leverage EPC data for process management. Results of the study can provide a proof-of-concept to substantiate the adoption of RFID technology in the FMCG industry.",
"title": ""
},
{
"docid": "1a1467aa70bbcc97e01a6ec25899bb17",
"text": "Despite numerous studies to reduce the power consumption of the display-related components of mobile devices, previous works have led to a deterioration in user experience due to compromised graphic quality. In this paper, we propose an effective scheme to reduce the energy consumption of the display subsystems of mobile devices without compromising user experience. In preliminary experiments, we noticed that mobile devices typically perform redundant display updates even if the display content does not change. Based on this observation, we first propose a metric called the content rate, which is defined as the number of meaningful frame changes in a second. Our scheme then estimates an optimal refresh rate based on the content rate in order to eliminate redundant display updates. Also proposed is the flicker compensation technique, which prevents the flickering problem caused by the reduced refresh rate. Extensive experiments conducted on the latest smartphones demonstrated that our system effectively reduces the overall power consumption of mobile devices by 35 percent while simultaneously maintaining satisfactory display quality.",
"title": ""
},
{
"docid": "ff9b5d96b762b2baacf4bf19348c614b",
"text": "Drought stress is a major factor in reduce growth, development and production of plants. Stress was applied with polyethylene glycol (PEG) 6000 and water potentials were: zero (control), -0.15 (PEG 10%), -0.49 (PEG 20%), -1.03 (PEG 30%) and -1.76 (PEG40%) MPa. The solutes accumulation of two maize (Zea mays L.) cultivars -704 and 301were determined after drought stress. In our experiments, a higher amount of soluble sugars and a lower amount of starch were found under stress. Soluble sugars concentration increased (from 1.18 to 1.90 times) in roots and shoots of both varieties when the studied varieties were subjected to drought stress, but starch content were significantly (p<0.05) decreased (from 16 to 84%) in both varieties. This suggests that sugars play an important role in Osmotic Adjustment (OA) in maize. The free proline level also increased (from 1.56 to 3.13 times) in response to drought stress and the increase in 704 var. was higher than 301 var. It seems to proline may play a role in minimizing the damage caused by dehydration. Increase of proline content in shoots was higher than roots, but increase of soluble sugar content and decrease of starch content in roots was higher than shoots.",
"title": ""
},
{
"docid": "7539af35786fba888fa3a7cafa5db0b0",
"text": "Multi-view stereo algorithms typically rely on same-exposure images as inputs due to the brightness constancy assumption. While state-of-the-art depth results are excellent, they do not produce high-dynamic range textures required for high-quality view reconstruction. In this paper, we propose a technique that adapts multi-view stereo for different exposure inputs to simultaneously recover reliable dense depth and high dynamic range textures. In our technique, we use an exposure-invariant similarity statistic to establish correspondences, through which we robustly extract the camera radiometric response function and the image exposures. This enables us to then convert all images to radiance space and selectively use the radiance data for dense depth and high dynamic range texture recovery. We show results for synthetic and real scenes.",
"title": ""
},
{
"docid": "45f2599c6a256b55ee466c258ba93f48",
"text": "Functional turnover of transcription factor binding sites (TFBSs), such as whole-motif loss or gain, are common events during genome evolution. Conventional probabilistic phylogenetic shadowing methods model the evolution of genomes only at nucleotide level, and lack the ability to capture the evolutionary dynamics of functional turnover of aligned sequence entities. As a result, comparative genomic search of non-conserved motifs across evolutionarily related taxa remains a difficult challenge, especially in higher eukaryotes, where the cis-regulatory regions containing motifs can be long and divergent; existing methods rely heavily on specialized pattern-driven heuristic search or sampling algorithms, which can be difficult to generalize and hard to interpret based on phylogenetic principles. We propose a new method: Conditional Shadowing via Multi-resolution Evolutionary Trees, or CSMET, which uses a context-dependent probabilistic graphical model that allows aligned sites from different taxa in a multiple alignment to be modeled by either a background or an appropriate motif phylogeny conditioning on the functional specifications of each taxon. The functional specifications themselves are the output of a phylogeny which models the evolution not of individual nucleotides, but of the overall functionality (e.g., functional retention or loss) of the aligned sequence segments over lineages. Combining this method with a hidden Markov model that autocorrelates evolutionary rates on successive sites in the genome, CSMET offers a principled way to take into consideration lineage-specific evolution of TFBSs during motif detection, and a readily computable analytical form of the posterior distribution of motifs under TFBS turnover. On both simulated and real Drosophila cis-regulatory modules, CSMET outperforms other state-of-the-art comparative genomic motif finders.",
"title": ""
},
{
"docid": "2b53b125dc8c79322aabb083a9c991e4",
"text": "Geographical location is vital to geospatial applications like local search and event detection. In this paper, we investigate and improve on the task of text-based geolocation prediction of Twitter users. Previous studies on this topic have typically assumed that geographical references (e.g., gazetteer terms, dialectal words) in a text are indicative of its author’s location. However, these references are often buried in informal, ungrammatical, and multilingual data, and are therefore non-trivial to identify and exploit. We present an integrated geolocation prediction framework and investigate what factors impact on prediction accuracy. First, we evaluate a range of feature selection methods to obtain “location indicative words”. We then evaluate the impact of nongeotagged tweets, language, and user-declared metadata on geolocation prediction. In addition, we evaluate the impact of temporal variance on model generalisation, and discuss how users differ in terms of their geolocatability. We achieve state-of-the-art results for the text-based Twitter user geolocation task, and also provide the most extensive exploration of the task to date. Our findings provide valuable insights into the design of robust, practical text-based geolocation prediction systems.",
"title": ""
},
{
"docid": "14b6ff85d404302af45cf608137879c7",
"text": "In this paper, an automatic multi-organ segmentation based on multi-boost learning and statistical shape model search was proposed. First, simple but robust Multi-Boost Classifier was trained to hierarchically locate and pre-segment multiple organs. To ensure the generalization ability of the classifier relative location information between organs, organ and whole body is exploited. Left lung and right lung are first localized and pre-segmented, then liver and spleen are detected upon its location in whole body and its relative location to lungs, kidney is finally detected upon the features of relative location to liver and left lung. Second, shape and appearance models are constructed for model fitting. The final refinement delineation is performed by best point searching guided by appearance profile classifier and is constrained with multi-boost classified probabilities, intensity and gradient features. The method was tested on 30 unseen CT and 30 unseen enhanced CT (CTce) datasets from ISBI 2015 VISCERAL challenge. The results demonstrated that the multi-boost learning can be used to locate multi-organ robustly and segment lung and kidney accurately. The liver and spleen segmentation based on statistical shape searching has shown good performance too. Copyright c © by the paper’s authors. Copying permitted only for private and academic purposes. In: O. Goksel (ed.): Proceedings of the VISCERAL Anatomy Grand Challenge at the 2015 IEEE International Symposium on Biomedical Imaging (ISBI), New York, NY, Apr 16, 2015 published at http://ceur-ws.org",
"title": ""
}
] |
scidocsrr
|
4e1af4fe8608454cfeadc2805bc52569
|
A Neural Probabilistic Model for Context Based Citation Recommendation
|
[
{
"docid": "908baa7a1004a372f1e8e42f037e0501",
"text": "Scientists depend on literature search to find prior work that is relevant to their research ideas. We introduce a retrieval model for literature search that incorporates a wide variety of factors important to researchers, and learns the weights of each of these factors by observing citation patterns. We introduce features like topical similarity and author behavioral patterns, and combine these with features from related work like citation count and recency of publication. We present an iterative process for learning weights for these features that alternates between retrieving articles with the current retrieval model, and updating model weights by training a supervised classifier on these articles. We propose a new task for evaluating the resulting retrieval models, where the retrieval system takes only an abstract as its input and must produce as output the list of references at the end of the abstract's article. We evaluate our model on a collection of journal, conference and workshop articles from the ACL Anthology Reference Corpus. Our model achieves a mean average precision of 28.7, a 12.8 point improvement over a term similarity baseline, and a significant improvement both over models using only features from related work and over models without our iterative learning.",
"title": ""
},
{
"docid": "062c970a14ac0715ccf96cee464a4fec",
"text": "A goal of statistical language modeling is to learn the joint probability function of sequences of words in a language. This is intrinsically difficult because of the curse of dimensionality: a word sequence on which the model will be tested is likely to be different from all the word sequences seen during training. Traditional but very successful approaches based on n-grams obtain generalization by concatenating very short overlapping sequences seen in the training set. We propose to fight the curse of dimensionality by learning a distributed representation for words which allows each training sentence to inform the model about an exponential number of semantically neighboring sentences. The model learns simultaneously (1) a distributed representation for each word along with (2) the probability function for word sequences, expressed in terms of these representations. Generalization is obtained because a sequence of words that has never been seen before gets high probability if it is made of words that are similar (in the sense of having a nearby representation) to words forming an already seen sentence. Training such large models (with millions of parameters) within a reasonable time is itself a significant challenge. We report on experiments using neural networks for the probability function, showing on two text corpora that the proposed approach significantly improves on state-of-the-art n-gram models, and that the proposed approach allows to take advantage of longer contexts.",
"title": ""
}
] |
[
{
"docid": "d36a69538293e384d64c905c678f4944",
"text": "Many studies have investigated factors that affect susceptibility to false memories. However, few have investigated the role of sleep deprivation in the formation of false memories, despite overwhelming evidence that sleep deprivation impairs cognitive function. We examined the relationship between self-reported sleep duration and false memories and the effect of 24 hr of total sleep deprivation on susceptibility to false memories. We found that under certain conditions, sleep deprivation can increase the risk of developing false memories. Specifically, sleep deprivation increased false memories in a misinformation task when participants were sleep deprived during event encoding, but did not have a significant effect when the deprivation occurred after event encoding. These experiments are the first to investigate the effect of sleep deprivation on susceptibility to false memories, which can have dire consequences.",
"title": ""
},
{
"docid": "e0c83197770752c9fdfe5e51edcd3d46",
"text": "In the last decade, it has become obvious that Alzheimer's disease (AD) is closely linked to changes in lipids or lipid metabolism. One of the main pathological hallmarks of AD is amyloid-β (Aβ) deposition. Aβ is derived from sequential proteolytic processing of the amyloid precursor protein (APP). Interestingly, both, the APP and all APP secretases are transmembrane proteins that cleave APP close to and in the lipid bilayer. Moreover, apoE4 has been identified as the most prevalent genetic risk factor for AD. ApoE is the main lipoprotein in the brain, which has an abundant role in the transport of lipids and brain lipid metabolism. Several lipidomic approaches revealed changes in the lipid levels of cerebrospinal fluid or in post mortem AD brains. Here, we review the impact of apoE and lipids in AD, focusing on the major brain lipid classes, sphingomyelin, plasmalogens, gangliosides, sulfatides, DHA, and EPA, as well as on lipid signaling molecules, like ceramide and sphingosine-1-phosphate. As nutritional approaches showed limited beneficial effects in clinical studies, the opportunities of combining different supplements in multi-nutritional approaches are discussed and summarized.",
"title": ""
},
{
"docid": "3d89b509ab12e41eb54b7b6800e5c785",
"text": "We have constructed a new “Who-did-What” dataset of over 200,000 fill-in-the-gap (cloze) multiple choice reading comprehension problems constructed from the LDC English Gigaword newswire corpus. The WDW dataset has a variety of novel features. First, in contrast with the CNN and Daily Mail datasets (Hermann et al., 2015) we avoid using article summaries for question formation. Instead, each problem is formed from two independent articles — an article given as the passage to be read and a separate article on the same events used to form the question. Second, we avoid anonymization — each choice is a person named entity. Third, the problems have been filtered to remove a fraction that are easily solved by simple baselines, while remaining 84% solvable by humans. We report performance benchmarks of standard systems and propose the WDW dataset as a challenge task for the community.1",
"title": ""
},
{
"docid": "d7bb22eefbff0a472d3e394c61788be2",
"text": "Crowd evacuation of a building has been studied over the last decades. In this paper, seven methodological approaches for crowd evacuation have been identified. These approaches include cellular automata models, lattice gas models, social force models, fluid-dynamic models, agent-based models, game theoretic models, and approaches based on experiments with animals. According to available literatures, we discuss the advantages and disadvantages of these approaches, and conclude that a variety of different kinds of approaches should be combined to study crowd evacuation. Psychological and physiological elements affecting individual and collective behaviors should be also incorporated into the evacuation models. & 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "982d7d2d65cddba4fa7dac3c2c920790",
"text": "In this paper, we present our multichannel neural architecture for recognizing emerging named entity in social media messages, which we applied in the Novel and Emerging Named Entity Recognition shared task at the EMNLP 2017 Workshop on Noisy User-generated Text (W-NUT). We propose a novel approach, which incorporates comprehensive word representations with multichannel information and Conditional Random Fields (CRF) into a traditional Bidirectional Long Short-Term Memory (BiLSTM) neural network without using any additional hand-crafted features such as gazetteers. In comparison with other systems participating in the shared task, our system won the 3rd place in terms of the average of two evaluation metrics.",
"title": ""
},
{
"docid": "fbddd20271cf134e15b33e7d6201c374",
"text": "Authors and publishers who wish their publications to be considered for review in Computational Linguistics should send a copy to the book review editor, Graeme Hirst, Department of Computer Science, University of Toronto, Toronto, Canada M5S 3G4. All relevant books received will be listed, but not all can be reviewed. Technical reports (other than dissertations) will not be listed or reviewed. Authors should be aware that some publishers will not send books for review (even when instructed to do so); authors wishing to enquire as to whether their book has been received for review may contact the book review editor.",
"title": ""
},
{
"docid": "e7eae4ab0859d66acbf435f2430a63a1",
"text": "Voice recognition technology-enabled devices possess extraordinary growth potential, yet some research indicates that organizations and consumers are resisting their adoption. This study investigates the implementation of a voice recognition device in the United States Navy. Grounded in the social psychology and information systems literature, the researchers adapted instruments and developed a tool to explain technology adoption in this environment. Using factor analysis and structural equation modeling, analysis of data from the 270 participants explained almost 90% of the variance in the model. This research adapts the technology acceptance model by adding elements of the theory of planned behavior, providing researchers and practitioners with a valuable instrument to predict technology adoption.",
"title": ""
},
{
"docid": "5481f319296c007412e62129d2ec5943",
"text": "We propose a new family of optimization criteria for variational auto-encoding models, generalizing the standard evidence lower bound. We provide conditions under which they recover the data distribution and learn latent features, and formally show that common issues such as blurry samples and uninformative latent features arise when these conditions are not met. Based on these new insights, we propose a new sequential VAE model that can generate sharp samples on the LSUN image dataset based on pixel-wise reconstruction loss, and propose an optimization criterion that encourages unsupervised learning of informative latent features.",
"title": ""
},
{
"docid": "db0c7a200d76230740e027c2966b066c",
"text": "BACKGROUND\nPromotion and provision of low-cost technologies that enable improved water, sanitation, and hygiene (WASH) practices are seen as viable solutions for reducing high rates of morbidity and mortality due to enteric illnesses in low-income countries. A number of theoretical models, explanatory frameworks, and decision-making models have emerged which attempt to guide behaviour change interventions related to WASH. The design and evaluation of such interventions would benefit from a synthesis of this body of theory informing WASH behaviour change and maintenance.\n\n\nMETHODS\nWe completed a systematic review of existing models and frameworks through a search of related articles available in PubMed and in the grey literature. Information on the organization of behavioural determinants was extracted from the references that fulfilled the selection criteria and synthesized. Results from this synthesis were combined with other relevant literature, and from feedback through concurrent formative and pilot research conducted in the context of two cluster-randomized trials on the efficacy of WASH behaviour change interventions to inform the development of a framework to guide the development and evaluation of WASH interventions: the Integrated Behavioural Model for Water, Sanitation, and Hygiene (IBM-WASH).\n\n\nRESULTS\nWe identified 15 WASH-specific theoretical models, behaviour change frameworks, or programmatic models, of which 9 addressed our review questions. Existing models under-represented the potential role of technology in influencing behavioural outcomes, focused on individual-level behavioural determinants, and had largely ignored the role of the physical and natural environment. IBM-WASH attempts to correct this by acknowledging three dimensions (Contextual Factors, Psychosocial Factors, and Technology Factors) that operate on five-levels (structural, community, household, individual, and habitual).\n\n\nCONCLUSIONS\nA number of WASH-specific models and frameworks exist, yet with some limitations. The IBM-WASH model aims to provide both a conceptual and practical tool for improving our understanding and evaluation of the multi-level multi-dimensional factors that influence water, sanitation, and hygiene practices in infrastructure-constrained settings. We outline future applications of our proposed model as well as future research priorities needed to advance our understanding of the sustained adoption of water, sanitation, and hygiene technologies and practices.",
"title": ""
},
{
"docid": "d90407926b8dc5454902875d66b2404b",
"text": "In many machine learning tasks it is desirable that a model's prediction transforms in an equivariant way under transformations of its input. Convolutional neural networks (CNNs) implement translational equivariance by construction; for other transformations, however, they are compelled to learn the proper mapping. In this work, we develop Steerable Filter CNNs (SFCNNs) which achieve joint equivariance under translations and rotations by design. The proposed architecture employs steerable filters to efficiently compute orientation dependent responses for many orientations without suffering interpolation artifacts from filter rotation. We utilize group convolutions which guarantee an equivariant mapping. In addition, we generalize He's weight initialization scheme to filters which are defined as a linear combination of a system of atomic filters. Numerical experiments show a substantial enhancement of the sample complexity with a growing number of sampled filter orientations and confirm that the network generalizes learned patterns over orientations. The proposed approach achieves state-of-the-art on the rotated MNIST benchmark and on the ISBI 2012 2D EM segmentation challenge.",
"title": ""
},
{
"docid": "95903410bc39b26e44f6ea80ad85e182",
"text": "We propose distributed deep neural networks (DDNNs) over distributed computing hierarchies, consisting of the cloud, the edge (fog) and end devices. While being able to accommodate inference of a deep neural network (DNN) in the cloud, a DDNN also allows fast and localized inference using shallow portions of the neural network at the edge and end devices. When supported by a scalable distributed computing hierarchy, a DDNN can scale up in neural network size and scale out in geographical span. Due to its distributed nature, DDNNs enhance sensor fusion, system fault tolerance and data privacy for DNN applications. In implementing a DDNN, we map sections of a DNN onto a distributed computing hierarchy. By jointly training these sections, we minimize communication and resource usage for devices and maximize usefulness of extracted features which are utilized in the cloud. The resulting system has built-in support for automatic sensor fusion and fault tolerance. As a proof of concept, we show a DDNN can exploit geographical diversity of sensors to improve object recognition accuracy and reduce communication cost. In our experiment, compared with the traditional method of offloading raw sensor data to be processed in the cloud, DDNN locally processes most sensor data on end devices while achieving high accuracy and is able to reduce the communication cost by a factor of over 20x.",
"title": ""
},
{
"docid": "38693524e69d494b95c311840d599c93",
"text": "To avoid a sarcastic message being understood in its unintended literal meaning, in microtexts such as messages on Twitter.com sarcasm is often explicitly marked with the hashtag ‘#sarcasm’. We collected a training corpus of about 78 thousand Dutch tweets with this hashtag. Assuming that the human labeling is correct (annotation of a sample indicates that about 85% of these tweets are indeed sarcastic), we train a machine learning classifier on the harvested examples, and apply it to a test set of a day’s stream of 3.3 million Dutch tweets. Of the 135 explicitly marked tweets on this day, we detect 101 (75%) when we remove the hashtag. We annotate the top of the ranked list of tweets most likely to be sarcastic that do not have the explicit hashtag. 30% of the top-250 ranked tweets are indeed sarcastic. Analysis shows that sarcasm is often signalled by hyperbole, using intensifiers and exclamations; in contrast, non-hyperbolic sarcastic messages often receive an explicit marker. We hypothesize that explicit markers such as hashtags are the digital extralinguistic equivalent of nonverbal expressions that people employ in live interaction when conveying sarcasm.",
"title": ""
},
{
"docid": "14839c18d1029270174e9f94d122edd5",
"text": "Nested event structures are a common occurrence in both open domain and domain specific extraction tasks, e.g., a “crime” event can cause a “investigation” event, which can lead to an “arrest” event. However, most current approaches address event extraction with highly local models that extract each event and argument independently. We propose a simple approach for the extraction of such structures by taking the tree of event-argument relations and using it directly as the representation in a reranking dependency parser. This provides a simple framework that captures global properties of both nested and flat event structures. We explore a rich feature space that models both the events to be parsed and context from the original supporting text. Our approach obtains competitive results in the extraction of biomedical events from the BioNLP’09 shared task with a F1 score of 53.5% in development and 48.6% in testing.",
"title": ""
},
{
"docid": "d087b127025074c48477b964c9c2483a",
"text": "In this letter, a 77-GHz transmitter (TX) with a 12.8-GHz phase-locked-loop (PLL) and a $\\times$ 6 frequency multiplier is presented for a FMCW radar sensor in a 65-nm CMOS process. To realize the low-phase-noise TX, a voltage controlled oscillator (VCO) with an excellent phase noise performance at a lower fundamental frequency (12.8 GHz) is designed and scaled up ( $\\times$ 6) for the desired target frequency (77 GHz). The measured FMCW modulation range with an external triangular chirp signal (1-ms sweep time) is 601 MHz. The output power and the total DC power consumption of the TX are 8.9 dBm and 116.7 mW, respectively. Here, a good phase noise level of -91.16 dBc/Hz at a 1-MHz offset frequency from a 76.81-GHz carrier is achieved.",
"title": ""
},
{
"docid": "97f8b8ee60e3f03e64833a16aaf5e743",
"text": "OBJECTIVE\nA pilot randomized controlled trial (RCT) of the effectiveness of occupational therapy using a sensory integration approach (OT-SI) was conducted with children who had sensory modulation disorders (SMDs). This study evaluated the effectiveness of three treatment groups. In addition, sample size estimates for a large scale, multisite RCT were calculated.\n\n\nMETHOD\nTwenty-four children with SMD were randomly assigned to one of three treatment conditions; OT-SI, Activity Protocol, and No Treatment. Pretest and posttest measures of behavior, sensory and adaptive functioning, and physiology were administered.\n\n\nRESULTS\nThe OT-SI group, compared to the other two groups, made significant gains on goal attainment scaling and on the Attention subtest and the Cognitive/Social composite of the Leiter International Performance Scale-Revised. Compared to the control groups, OT-SI improvement trends on the Short Sensory Profile, Child Behavior Checklist, and electrodermal reactivity were in the hypothesized direction.\n\n\nCONCLUSION\nFindings suggest that OT-SI may be effective in ameliorating difficulties of children with SMD.",
"title": ""
},
{
"docid": "ca468aa680c29fb00f55e9d851676200",
"text": "The class of problems involving the random generation of combinatorial structures from a uniform distribution is considered. Uniform generation problems are, in computational difficulty, intermediate between classical existence and counting problems. It is shown that exactly uniform generation of 'efficiently verifiable' combinatorial structures is reducible to approximate counting (and hence, is within the third level of the polynomial hierarchy). Natural combinatorial problems are presented which exhibit complexity gaps between their existence and generation, and between their generation and counting versions. It is further shown that for self-reducible problems, almost uniform generation and randomized approximate counting are inter-reducible, and hence, of similar complexity. CR Categories. F.I.1, F.1.3, G.2.1, G.3",
"title": ""
},
{
"docid": "37a8fe29046ec94d54e62f202a961129",
"text": "Detection of salient image regions is useful for applications like image segmentation, adaptive compression, and region-based image retrieval. In this paper we present a novel method to determine salient regions in images using low-level features of luminance and color. The method is fast, easy to implement and generates high quality saliency maps of the same size and resolution as the input image. We demonstrate the use of the algorithm in the segmentation of semantically meaningful whole objects from digital images.",
"title": ""
},
{
"docid": "542d17b1f1437420a003895f9ca16406",
"text": "This paper discusses the Correntropy Induced Metric (CIM) based Growing Neural Gas (GNG) architecture. CIM is a kernel method based similarity measurement from the information theoretic learning perspective, which quantifies the similarity between probability distributions of input and reference vectors. We apply CIM to find a maximum error region and node insert criterion, instead of euclidean distance based function in original GNG. Furthermore, we introduce the two types of Gaussian kernel bandwidth adaptation methods for CIM. The simulation experiments in terms of the affect of kernel bandwidth σ in CIM, the self-organizing ability, and the quantitative comparison show that proposed model has the superior abilities than original GNG.",
"title": ""
},
{
"docid": "0d750d31bcd0a998bd944910e707830c",
"text": "In this paper we focus on estimating the post-click engagement on native ads by predicting the dwell time on the corresponding ad landing pages. To infer relationships between features of the ads and dwell time we resort to the application of survival analysis techniques, which allow us to estimate the distribution of the length of time that the user will spend on the ad. This information is then integrated into the ad ranking function with the goal of promoting the rank of ads that are likely to be clicked and consumed by users (dwell time greater than a given threshold). The online evaluation over live tra c shows that considering post-click engagement has a consistent positive e↵ect on both CTR, decreases the number of bounces and increases the average dwell time, hence leading to a better user post-click experience.",
"title": ""
},
{
"docid": "3bcb57af56157f974f1acac7a5c09d95",
"text": "During the past 70+ years of research and development in the domain of Artificial Intelligence (AI) we observe three principal, historical waves: embryonic, embedded and embodied AI. As the first two waves have demonstrated huge potential to seed new technologies and provide tangible business results, we describe likely developments of embodied AI in the next 25-35 years. We postulate that the famous Turing Test was a noble goal for AI scientists, making key, historical inroads - while we believe that Biological Systems Intelligence and the Insect/Swarm Intelligence analogy/mimicry, though largely disregarded, represents the key to further developments. We describe briefly the key lines of past and ongoing research, and outline likely future developments in this remarkable field.",
"title": ""
}
] |
scidocsrr
|
b56abe7d62498573653d23a0b4ebea92
|
Multi-DOF Counterbalance Mechanism for a Service Robot Arm
|
[
{
"docid": "dbc09474868212acf3b29e49a6facbce",
"text": "In this paper, we propose a sophisticated design of human symbiotic robots that provide physical supports to the elderly such as attendant care with high-power and kitchen supports with dexterity while securing contact safety even if physical contact occurs with them. First of all, we made clear functional requirements for such a new generation robot, amounting to fifteen items to consolidate five significant functions such as “safety”, “friendliness”, “dexterity”, “high-power” and “mobility”. In addition, we set task scenes in daily life where support by robot is useful for old women living alone, in order to deduce specifications for the robot. Based on them, we successfully developed a new generation of human symbiotic robot, TWENDY-ONE that has a head, trunk, dual arms with a compact passive mechanism, anthropomorphic dual hands with mechanical softness in joints and skins and an omni-wheeled vehicle. Evaluation experiments focusing on attendant care and kitchen supports using TWENDY-ONE indicate that this new robot will be extremely useful to enhance quality of life for the elderly in the near future where human and robot co-exist.",
"title": ""
}
] |
[
{
"docid": "4487f3713062ef734ceab5c7f9ccc6e3",
"text": "In the analysis of machine learning models, it is often convenient to assume that the parameters are IID. This assumption is not satisfied when the parameters are updated through training processes such as SGD. A relaxation of the IID condition is a probabilistic symmetry known as exchangeability. We show the sense in which the weights in MLPs are exchangeable. This yields the result that in certain instances, the layer-wise kernel of fully-connected layers remains approximately constant during training. We identify a sharp change in the macroscopic behavior of networks as the covariance between weights changes from zero.",
"title": ""
},
{
"docid": "5fc76164af859604c5c2543bce017094",
"text": "We train and validate a semi-supervised, multi-task LSTM on 57,675 person-weeks of data from off-the-shelf wearable heart rate sensors, showing high accuracy at detecting multiple medical conditions, including diabetes (0.8451), high cholesterol (0.7441), high blood pressure (0.8086), and sleep apnea (0.8298). We compare two semi-supervised training methods, semi-supervised sequence learning and heuristic pretraining, and show they outperform hand-engineered biomarkers from the medical literature. We believe our work suggests a new approach to patient risk stratification based on cardiovascular risk scores derived from popular wearables such as Fitbit, Apple Watch, or Android Wear.",
"title": ""
},
{
"docid": "db31a8887bfc1b24c2d2c2177d4ef519",
"text": "The equilibrium microstructure of a fluid may only be described exactly in terms of a complete set of n-body atomic distribution functions, where n is 1, 2, 3 , . . . , N, and N is the total number of particles in the system. The higher order functions, i. e. n > 2, are complex and practically inaccessible but con siderable qualitative information can already be derived from studies of the mean radial occupation function n(r) defined as the average number of atoms in a sphere of radius r centred on a particular atom. The function for a perfect gas of non-inter acting particles is",
"title": ""
},
{
"docid": "c43de372dac79cf922f560450545e5b3",
"text": "Unsupervised learning and supervised learning are key research topics in deep learning. However, as high-capacity supervised neural networks trained with a large amount of labels have achieved remarkable success in many computer vision tasks, the availability of large-scale labeled images reduced the significance of unsupervised learning. Inspired by the recent trend toward revisiting the importance of unsupervised learning, we investigate joint supervised and unsupervised learning in a large-scale setting by augmenting existing neural networks with decoding pathways for reconstruction. First, we demonstrate that the intermediate activations of pretrained large-scale classification networks preserve almost all the information of input images except a portion of local spatial details. Then, by end-to-end training of the entire augmented architecture with the reconstructive objective, we show improvement of the network performance for supervised tasks. We evaluate several variants of autoencoders, including the recently proposed “what-where\" autoencoder that uses the encoder pooling switches, to study the importance of the architecture design. Taking the 16-layer VGGNet trained under the ImageNet ILSVRC 2012 protocol as a strong baseline for image classification, our methods improve the validation-set accuracy by a noticeable margin.",
"title": ""
},
{
"docid": "c4f706ff9ceb514e101641a816ba7662",
"text": "Open set recognition problems exist in many domains. For example in security, new malware classes emerge regularly; therefore malware classication systems need to identify instances from unknown classes in addition to discriminating between known classes. In this paper we present a neural network based representation for addressing the open set recognition problem. In this representation instances from the same class are close to each other while instances from dierent classes are further apart, resulting in statistically signicant improvement when compared to other approaches on three datasets from two dierent domains.",
"title": ""
},
{
"docid": "b27224825bb28b9b8d0eea37f8900d42",
"text": "The use of Convolutional Neural Networks (CNN) in natural im age classification systems has produced very impressive results. Combined wit h the inherent nature of medical images that make them ideal for deep-learning, fu rther application of such systems to medical image classification holds much prom ise. However, the usefulness and potential impact of such a system can be compl etely negated if it does not reach a target accuracy. In this paper, we present a s tudy on determining the optimum size of the training data set necessary to achiev e igh classification accuracy with low variance in medical image classification s ystems. The CNN was applied to classify axial Computed Tomography (CT) imag es into six anatomical classes. We trained the CNN using six different sizes of training data set ( 5, 10, 20, 50, 100, and200) and then tested the resulting system with a total of 6000 CT images. All images were acquired from the Massachusetts G eneral Hospital (MGH) Picture Archiving and Communication System (PACS). U sing this data, we employ the learning curve approach to predict classificat ion ccuracy at a given training sample size. Our research will present a general me thodology for determining the training data set size necessary to achieve a cert in target classification accuracy that can be easily applied to other problems within such systems.",
"title": ""
},
{
"docid": "5e240ad1d257a90c0ca414ce8e7e0949",
"text": "Improving Cloud Security using Secure Enclaves by Jethro Gideon Beekman Doctor of Philosophy in Engineering – Electrical Engineering and Computer Sciences University of California, Berkeley Professor David Wagner, Chair Internet services can provide a wealth of functionality, yet their usage raises privacy, security and integrity concerns for users. This is caused by a lack of guarantees about what is happening on the server side. As a worst case scenario, the service might be subjected to an insider attack. This dissertation describes the unalterable secure service concept for trustworthy cloud computing. Secure services are a powerful abstraction that enables viewing the cloud as a true extension of local computing resources. Secure services combine the security benefits one gets locally with the manageability and availability of the distributed cloud. Secure services are implemented using secure enclaves. Remote attestation of the server is used to obtain guarantees about the programming of the service. This dissertation addresses concerns related to using secure enclaves such as providing data freshness and distributing identity information. Certificate Transparency is augmented to distribute information about which services exist and what they do. All combined, this creates a platform that allows legacy clients to obtain security guarantees about Internet services.",
"title": ""
},
{
"docid": "82af5212b43e8dfe6d54582de621d96c",
"text": "The use of multiple radar configurations can overcome some of the geometrical limitations that exist when obtaining radar images of a target using inverse synthetic aperture radar (ISAR) techniques. It is shown here how a particular bistatic configuration can produce three view angles and three ISAR images simultaneously. A new ISAR signal model is proposed and the applicability of employing existing monostatic ISAR techniques to bistatic configurations is analytically demonstrated. An analysis of the distortion introduced by the bistatic geometry to the ISAR image point spread function (PSF) is then carried out and the limits of the applicability of ISAR techniques (without the introduction of additional signal processing) are found and discussed. Simulations and proof of concept experimental data are also provided that support the theory.",
"title": ""
},
{
"docid": "d8f6f4bef57e26e9d2dc3684ea07a2f4",
"text": "Alzheimer's disease is a progressive neurodegenerative disease that typically manifests clinically as an isolated amnestic deficit that progresses to a characteristic dementia syndrome. Advances in neuroimaging research have enabled mapping of diverse molecular, functional, and structural aspects of Alzheimer's disease pathology in ever increasing temporal and regional detail. Accumulating evidence suggests that distinct types of imaging abnormalities related to Alzheimer's disease follow a consistent trajectory during pathogenesis of the disease, and that the first changes can be detected years before the disease manifests clinically. These findings have fuelled clinical interest in the use of specific imaging markers for Alzheimer's disease to predict future development of dementia in patients who are at risk. The potential clinical usefulness of single or multimodal imaging markers is being investigated in selected patient samples from clinical expert centres, but additional research is needed before these promising imaging markers can be successfully translated from research into clinical practice in routine care.",
"title": ""
},
{
"docid": "f2d8ee741a61b1f950508ac57b2aa379",
"text": "The concentrations of cellulose chemical markers, in oil, are influenced by various parameters due to the partition between the oil and the cellulose insulation. One major parameter is the oil temperature which is a function of the transformer load, ambient temperature and the type of cooling. To accurately follow the chemical markers concentration trends during all the transformer life, it is crucial to normalize the concentrations at a specific temperature. In this paper, we propose equations for the normalization of methanol, ethanol and 2-furfural at 20 °C. The proposed equations have been validated on some real power transformers.",
"title": ""
},
{
"docid": "b85112d759d9facedacb3935ce2d0de5",
"text": "Internet is one of the primary sources of Big Data. Rise of the social networking platforms are creating enormous amount of data in every second where human emotions are constantly expressed in real-time. The sentiment behind each post, comments, likes can be found using opinion mining. It is possible to determine business values from these objects and events if sentiment analysis is done on the huge amount of data. Here, we have chosen FOODBANK which is a very popular Facebook group in Bangladesh; to analyze sentiment of the data to find out their market values.",
"title": ""
},
{
"docid": "977efac2809f4dc455e1289ef54008b0",
"text": "A novel 3-D NAND flash memory device, VSAT (Vertical-Stacked-Array-Transistor), has successfully been achieved. The VSAT was realized through a cost-effective and straightforward process called PIPE (planarized-Integration-on-the-same-plane). The VSAT combined with PIPE forms a unique 3-D vertical integration method that may be exploited for ultra-high-density Flash memory chip and solid-state-drive (SSD) applications. The off-current level in the polysilicon-channel transistor dramatically decreases by five orders of magnitude by using an ultra-thin body of 20 nm thick and a double-gate-in-series structure. In addition, hydrogen annealing improves the subthreshold swing and the mobility of the polysilicon-channel transistor.",
"title": ""
},
{
"docid": "3a092c071129e2ffced1800f2b4d519c",
"text": "Human actions captured in video sequences are threedimensional signals characterizing visual appearance and motion dynamics. To learn action patterns, existing methods adopt Convolutional and/or Recurrent Neural Networks (CNNs and RNNs). CNN based methods are effective in learning spatial appearances, but are limited in modeling long-term motion dynamics. RNNs, especially Long Short- Term Memory (LSTM), are able to learn temporal motion dynamics. However, naively applying RNNs to video sequences in a convolutional manner implicitly assumes that motions in videos are stationary across different spatial locations. This assumption is valid for short-term motions but invalid when the duration of the motion is long.,,In this work, we propose Lattice-LSTM (L2STM), which extends LSTM by learning independent hidden state transitions of memory cells for individual spatial locations. This method effectively enhances the ability to model dynamics across time and addresses the non-stationary issue of long-term motion dynamics without significantly increasing the model complexity. Additionally, we introduce a novel multi-modal training procedure for training our network. Unlike traditional two-stream architectures which use RGB and optical flow information as input, our two-stream model leverages both modalities to jointly train both input gates and both forget gates in the network rather than treating the two streams as separate entities with no information about the other. We apply this end-to-end system to benchmark datasets (UCF-101 and HMDB-51) of human action recognition. Experiments show that on both datasets, our proposed method outperforms all existing ones that are based on LSTM and/or CNNs of similar model complexities.",
"title": ""
},
{
"docid": "ee1e2400ed5c944826747a8e616b18c1",
"text": "Metastasis remains the greatest challenge in the clinical management of cancer. Cell motility is a fundamental and ancient cellular behaviour that contributes to metastasis and is conserved in simple organisms. In this Review, we evaluate insights relevant to human cancer that are derived from the study of cell motility in non-mammalian model organisms. Dictyostelium discoideum, Caenorhabditis elegans, Drosophila melanogaster and Danio rerio permit direct observation of cells moving in complex native environments and lend themselves to large-scale genetic and pharmacological screening. We highlight insights derived from each of these organisms, including the detailed signalling network that governs chemotaxis towards chemokines; a novel mechanism of basement membrane invasion; the positive role of E-cadherin in collective direction-sensing; the identification and optimization of kinase inhibitors for metastatic thyroid cancer on the basis of work in flies; and the value of zebrafish for live imaging, especially of vascular remodelling and interactions between tumour cells and host tissues. While the motility of tumour cells and certain host cells promotes metastatic spread, the motility of tumour-reactive T cells likely increases their antitumour effects. Therefore, it is important to elucidate the mechanisms underlying all types of cell motility, with the ultimate goal of identifying combination therapies that will increase the motility of beneficial cells and block the spread of harmful cells.",
"title": ""
},
{
"docid": "812c1713c1405c4925c6c6057624465b",
"text": "Fuel cell hybrid tramway has gained increasing attention recently and energy management strategy (EMS) is one of its key technologies. A hybrid tramway power system consisting of proton exchange membrane fuel cell (PEMFC) and battery is designed in the MATLAB /SIMULINK software as basic for the energy management strategy research. An equivalent consumption minimization strategy (ECMS) for hybrid tramway is proposed and embedded into the aforementioned hybrid model. In order to evaluate the proposed energy management, a real tramway driving cycle is adopted to simulate in RT-LAB platform. The simulation results prove the effectiveness of the proposed EMS.",
"title": ""
},
{
"docid": "cfd60f60a0a0bcc16ede57c7cee4fd23",
"text": "A compact planar multiband four-unit multiple-input multiple-output (MIMO) antenna system with high isolation is developed. At VSWR ≤ 2.75, the proposed MIMO antenna operates in the frequency range of LTE Band-1, 2, 3, 7, 40 and WLAN 2.4 GHz band. A T-strip and dumbbell shaped slots are studied to mitigate mutual coupling effects. The measured worst case isolation is better that 15.3 dB and envelope correlation coefficient is less than 0.01. The received signals satisfy the equal power gain condition and radiation patterns confirm the pattern diversity to combat multipath fading effects. At 29 dB SNR, the achieved MIMO channel capacity is about 22.2 b/s/Hz. These results infer that the proposed MIMO antenna is an attractive candidate for 4G-LTE mobile phone applications.",
"title": ""
},
{
"docid": "93bc26aa1a020f178692f40f4542b691",
"text": "The \"Fast Fourier Transform\" has now been widely known for about a year. During that time it has had a major effect on several areas of computing, the most striking example being techniques of numerical convolution, which have been completely revolutionized. What exactly is the \"Fast Fourier Transform\"?",
"title": ""
},
{
"docid": "9988f6dc4a2241e2a9025fd7b76ef4ee",
"text": "In this paper we present a multimodal approach for the recognition of eight emotions that integrates information from facial expressions, body movement and gestures and speech. We trained and tested a model with a Bayesian classifier, using a multimodal corpus with eight emotions and ten subjects. First individual classifiers were trained for each modality. Then data were fused at the feature level and the decision level. Fusing multimodal data increased very much the recognition rates in comparison with the unimodal systems: the multimodal approach gave an improvement of more than 10% with respect to the most successful unimodal system. Further, the fusion performed at the feature level showed better results than the one performed at the decision level.",
"title": ""
}
] |
scidocsrr
|
1806f96192a12df943c552df15ea61e0
|
Wearable and Implantable Wireless Sensor Network Solutions for Healthcare Monitoring
|
[
{
"docid": "f921555856d856eef308af6e987c1fbb",
"text": "Wireless Body Area Networks (WBANs) provide efficient communication solutions to the ubiquitous healthcare systems. Health monitoring, telemedicine, military, interactive entertainment, and portable audio/video systems are some of the applications where WBANs can be used. The miniaturized sensors together with advance micro-electro-mechanical systems (MEMS) technology create a WBAN that continuously monitors the health condition of a patient. This paper presents a comprehensive discussion on the applications of WBANs in smart healthcare systems. We highlight a number of projects that enable WBANs to provide unobtrusive long-term healthcare monitoring with real-time updates to the health center. In addition, we list many potential medical applications of a WBAN including epileptic seizure warning, glucose monitoring, and cancer detection.",
"title": ""
}
] |
[
{
"docid": "e766e5a45936c53767898c591e6126f8",
"text": "Video completion is a computer vision technique to recover the missing values in video sequences by filling the unknown regions with the known information. In recent research, tensor completion, a generalization of matrix completion for higher order data, emerges as a new solution to estimate the missing information in video with the assumption that the video frames are homogenous and correlated. However, each video clip often stores the heterogeneous episodes and the correlations among all video frames are not high. Thus, the regular tenor completion methods are not suitable to recover the video missing values in practical applications. To solve this problem, we propose a novel spatiallytemporally consistent tensor completion method for recovering the video missing data. Instead of minimizing the average of the trace norms of all matrices unfolded along each mode of a tensor data, we introduce a new smoothness regularization along video time direction to utilize the temporal information between consecutive video frames. Meanwhile, we also minimize the trace norm of each individual video frame to employ the spatial correlations among pixels. Different to previous tensor completion approaches, our new method can keep the spatio-temporal consistency in video and do not assume the global correlation in video frames. Thus, the proposed method can be applied to the general and practical video completion applications. Our method shows promising results in all evaluations on both 3D biomedical image sequence and video benchmark data sets. Video completion is the process of filling in missing pixels or replacing undesirable pixels in a video. The missing values in a video can be caused by many situations, e.g., the natural noise in video capture equipment, the occlusion from the obstacles in environment, segmenting or removing interested objects from videos. Video completion is of great importance to many applications such as video repairing and editing, movie post-production (e.g., remove unwanted objects), etc. Missing information recovery in images is called inpaint∗To whom all correspondence should be addressed. This work was partially supported by US NSF IIS-1117965, IIS-1302675, IIS-1344152. Copyright c © 2014, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. ing, which is usually accomplished by inferring or guessing the missing information from the surrounding regions, i.e. the spatial information. Video completion can be considered as an extension of 2D image inpainting to 3D. Video completion uses the information from the past and the future frames to fill the pixels in the missing region, i.e. the spatiotemporal information, which has been getting increasing attention in recent years. In computer vision, an important application area of artificial intelligence, there are many video completion algorithms. The most representative approaches include video inpainting, analogous to image inpainting (Bertalmio, Bertozzi, and Sapiro 2001), motion layer video completion, which splits the video sequence into different motion layers and completes each motion layer separately (Shiratori et al. 2006), space-time video completion, which is based on texture synthesis and is good but slow (Wexler, Shechtman, and Irani 2004), and video repairing, which repairs static background with motion layers and repairs moving foreground using model alignment (Jia et al. 2004). 
Many video completion methods are less effective because the video is often treated as a set of independent 2D images. Although the temporal independence assumption simplifies the problem, losing temporal consistency in recovered pixels leads to unsatisfactory performance. On the other hand, temporal information can improve the video completion results (Wexler, Shechtman, and Irani 2004; Matsushita et al. 2005), but to exploit it the computational speeds of most methods are significantly reduced. Thus, how to efficiently and effectively utilize both spatial and temporal information is a challenging problem in video completion. In the most recent work, Liu et al. (Liu et al. 2013) estimated the missing data in video via tensor completion, which was generalized from matrix completion methods. In these methods, the rank or rank approximation (trace norm) is used, as a powerful tool, to capture the global information. The tensor completion method (Liu et al. 2013) minimizes the trace norm of a tensor, i.e. the average of the trace norms of all matrices unfolded along each mode. Thus, it assumes the video frames are highly correlated in the temporal direction. If the video records homogeneous episodes and all frames describe similar information, this assumption is unproblematic. However, one video clip usually includes multiple different episodes, and the frames from different episodes are not highly correlated.",
"title": ""
},
{
"docid": "a2e2e49ba695f81eed05abaa9333b4f2",
"text": "This paper presents an automatic lesion segmentation method based on similarities between multichannel patches. A patch database is built using training images for which the label maps are known. For each patch in the testing image, k similar patches are retrieved from the database. The matching labels for these k patches are then combined to produce an initial segmentation map for the test case. Finally an iterative patch-based label refinement process based on the initial segmentation map is performed to ensure the spatial consistency of the detected lesions. The method was evaluated in experiments on multiple sclerosis (MS) lesion segmentation in magnetic resonance images (MRI) of the brain. An evaluation was done for each image in the MICCAI 2008 MS lesion segmentation challenge. Results are shown to compete with the state of the art in the challenge. We conclude that the proposed algorithm for segmentation of lesions provides a promising new approach for local segmentation and global detection in medical images.",
"title": ""
},
{
"docid": "6dfe8b18e3d825b2ecfa8e6b353bbb99",
"text": "In the last decade tremendous effort has been put in the study of the Apollonian circle packings. Given the great variety of mathematics it exhibits, this topic has attracted experts from different fields: number theory, homogeneous dynamics, expander graphs, group theory, to name a few. The principle investigator (PI) contributed to this program in his PhD studies. The scenery along the way formed the horizon of the PI at his early mathematical career. After his PhD studies, the PI has successfully applied tools and ideas from Apollonian circle packings to the studies of topics from various fields, and will continue this endeavor in his proposed research. The proposed problems are roughly divided into three categories: number theory, expander graphs, geometry. Each of which will be discussed in depth in later sections. Since Apollonian circle packing provides main inspirations for this proposal, let’s briefly review how it comes up and what has been done. We start with four mutually circles, with one circle bounding the other three. We can repeatedly inscribe more and more circles into curvilinear triangular gaps as illustrated in Figure 1, and we call the resultant set an Apollonian circle packing, which consists of infinitely many circles.",
"title": ""
},
{
"docid": "3f8860bc21f26b81b066f4c75b9390e1",
"text": "Adaptive filter algorithms are extensively use in active control applications and the availability of low cost powerful digital signal processor (DSP) platforms has opened the way for new applications and further research opportunities in e.g. the active control area. The field of active control demands a solid exposure to practical systems and DSP platforms for a comprehensive understanding of the theory involved. Traditional laboratory experiments prove to be insufficient to fulfill these demands and need to be complemented with more flexible and economic remotely controlled laboratories. The purpose of this thesis project is to implement a number of different adaptive control algorithms in the recently developed remotely controlled Virtual Instrument Systems in Reality (VISIR) ANC/DSP remote laboratory at Blekinge Institute of Technology and to evaluate the performance of these algorithms in the remote laboratory. In this thesis, performance of different filtered-x versions adaptive algorithms (NLMS, LLMS, RLS and FuRLMS) has been evaluated in a remote Laboratory. The adaptive algorithms were implemented remotely on a Texas Instrument DSP TMS320C6713 in an ANC system to attenuate low frequency noise which ranges from 0-200 Hz in a circular ventilation duct using single channel feed forward control. Results show that the remote lab can handle complex and advanced control algorithms. These algorithms were tested and it was found that remote lab works effectively and the achieved attenuation level for the algorithms used on the duct system is comparable to similar applications.",
"title": ""
},
{
"docid": "94485a72ab9392be5398322e651e553a",
"text": "The current study, integrating relevant concepts derived from self-regulatory focus, prospect/involvement and knowledge structure theories, proposes a conceptual framework that depicts how the message framing strategy of advertising may have an impact on the persuasiveness of brand marketing. As empirical examination of the framework shows, the consumer characteristics of self-construal, consumer involvement, and product knowledge are three key antecedents of the persuasiveness that message framing generates at the dimensions of advertising attitude, brand attitude, and purchase intention. Besides, significant interaction exists among these three variables. Implications of the research findings, both for academics and practitioners, are discussed.",
"title": ""
},
{
"docid": "206dc1a4a27b603360888d414e0b5cf6",
"text": "Standard deep reinforcement learning methods such as Deep Q-Networks (DQN) for multiple tasks (domains) face scalability problems due to large search spaces. This paper proposes a three-stage method for multi-domain dialogue policy learning-termed NDQN, and applies it to an information-seeking spoken dialogue system in the domains of restaurants and hotels. In this method, the first stage does multi-policy learning via a network of DQN agents; the second makes use of compact state representations by compressing raw inputs; and the third stage applies a pre-training phase for bootstraping the behaviour of agents in the network. Experimental results comparing DQN (baseline) versus NDQN (proposed) using simulations report that the proposed method exhibits better scalability and is promising for optimising the behaviour of multi-domain dialogue systems. An additional evaluation reports that the NDQN agents outperformed a K-Nearest Neighbour baseline in task success and dialogue length, yielding more efficient and successful dialogues.",
"title": ""
},
{
"docid": "f7ec4acfd6c4916f3fec0dfa26db558c",
"text": "In the real-world online social networks, users tend to form different social communities. Due to its extensive applications, community detection in online social networks has been a hot research topic in recent years. In this chapter, we will focus on introducing the social community detection problem in online social networks. To be more specific, we will take the hard community detection problem as an example to introduce the existing models proposed for conventional (one single) homogeneous social network, and the recent broad learning based (multiple aligned) heterogeneous social networks respectively. Key Word: Community Detection; Social Media; Aligned Heterogeneous Networks; Broad Learning",
"title": ""
},
{
"docid": "ac168ff92c464cb90a9a4ca0eb5bfa5c",
"text": "Path computing is a new paradigm that generalizes the edge computing vision into a multi-tier cloud architecture deployed over the geographic span of the network. Path computing supports scalable and localized processing by providing storage and computation along a succession of datacenters of increasing sizes, positioned between the client device and the traditional wide-area cloud data-center. CloudPath is a platform that implements the path computing paradigm. CloudPath consists of an execution environment that enables the dynamic installation of light-weight stateless event handlers, and a distributed eventual consistent storage system that replicates application data on-demand. CloudPath handlers are small, allowing them to be rapidly instantiated on demand on any server that runs the CloudPath execution framework. In turn, CloudPath automatically migrates application data across the multiple datacenter tiers to optimize access latency and reduce bandwidth consumption.",
"title": ""
},
{
"docid": "69b631f179ea3c521f1dde75be537279",
"text": "A conceptually simple but effective noise smoothing algorithm is described. This filter is motivated by the sigma probability of the Gaussian distribution, and it smooths the image noise by averaging only those neighborhood pixels which have the intensities within a fixed sigma range of the center pixel. Consequently, image edges are preserved, and subtle details and thin tines such as roads are retained. The characteristics of this smoothing algorithm are analyzed and compared with several other known filtering algorithms by their ability to retain subtle details, preserving edge shapes, sharpening ramp edges, etc. The comparison also indicates that the sigma filter is the most computationally efficient filter among those evaluated. The filter can be easily extended into several forms which can be used in contrast enhancement, image segmentation, and smoothing signal-dependent noisy images. Several test images 128 X 128 and 256 X 256 pixels in size are used to substantiate its characteristics. The algorithm can be easily extended to 3-D image smoothing.",
"title": ""
},
{
"docid": "cc10051c413cfb6f87d0759100bc5182",
"text": "Social Media Hate Speech has continued to grow both locally and globally due to the increase of Online Social Media web forums like Facebook, Twitter and blogging. This has been propelled even further by smartphones and mobile data penetration locally. Global and Local terrorism has posed a vital question for technologists to investigate, prosecute, predict and prevent Social Media Hate Speech. This study provides a social media digital forensics tool through the design, development and implementation of a software application. The study will develop an application using Linux Apache MySQL PHP and Python. The application will use Scrapy Python page ranking algorithm to perform web crawling and the data will be placed in a MySQL database for data mining. The application used Agile Software development methodology with twenty websites being the subject of interest. The websites will be the sample size to demonstrate how the application",
"title": ""
},
{
"docid": "f87e64901ede5cc11dbb14f59cd95e80",
"text": "This paper presents a methodology to develop a dimensional data warehouse by integrating all three development approaches such as supply-driven, goal-driven and demand-driven. By having the combination of all three approaches, the final design will ensure that user requirements, company interest and existing source of data are included in the model. We proposed an automatic system using ontology as the knowledge domain. Starting from operational ER-D (Entity Relationship-Diagram), the selection of facts table, verification of terms and consistency checking will utilize domain ontology. The model will also be verified against user and company requirements. Any discrepancy in the final design requires designer and user intervention. The proposed methodology is supported by a prototype using a business data warehouse example.",
"title": ""
},
{
"docid": "ed9e53f132eada9ceb1f943cce00f20a",
"text": "With the proliferation of e-commerce websites and the ubiquitousness of smart phones, cross-domain image retrieval using images taken by smart phones as queries to search products on e-commerce websites is emerging as a popular application. One challenge of this task is to locate the attention of both the query and database images. In particular, database images, e.g. of fashion products, on e-commerce websites are typically displayed with other accessories, and the images taken by users contain noisy background and large variations in orientation and lighting. Consequently, their attention is difficult to locate. In this paper, we exploit the rich tag information available on the e-commerce websites to locate the attention of database images. For query images, we use each candidate image in the database as the context to locate the query attention. Novel deep convolutional neural network architectures, namely TagYNet and CtxYNet, are proposed to learn the attention weights and then extract effective representations of the images. Experimental results on public datasets confirm that our approaches have significant improvement over the existing methods in terms of the retrieval accuracy and efficiency.",
"title": ""
},
{
"docid": "cfe5d769b9d479dccd543f8a4d23fcf9",
"text": "This paper aims to describe the role of advanced sensing systems in the electric grid of the future. In detail, the project, development, and experimental validation of a smart power meter are described in the following. The authors provide an outline of the potentialities of the sensing systems and IoT to monitor efficiently the energy flow among nodes of an electric network. The described power meter uses the metrics proposed in the IEEE Standard 1459–2010 to analyze and process voltage and current signals. Information concerning the power consumption and power quality could allow the power grid to route efficiently the energy by means of more suitable decision criteria. The new scenario has changed the way to exchange energy in the grid. Now, energy flow must be able to change its direction according to needs. Energy cannot be now routed by considering just only the criterion based on the simple shortening of transmission path. So, even energy coming from a far node should be preferred, if it has higher quality standards. In this view, the proposed smart power meter intends to support the smart power grid to monitor electricity among different nodes in an efficient and effective way.",
"title": ""
},
{
"docid": "8c5726817049b2f5a77f6c1ba32b1254",
"text": "A memory leak occurs when a program allocates a block of memory, but does not release it after its last use. In case such a block is still referenced by one or more reachable pointers at the end of the execution, fixing the leak is often quite simple as long as it is known where the block was allocated. If, however, all references to the block are overwritten or lost during the program’s execution, only knowing the allocation site is not enough in most cases. This paper describes an approach based on dynamic instrumentation and garbage collection techniques, which enables us to also inform the user about where the last reference to a lost memory block was created and where it was lost, without the need for recompilation or relinking.",
"title": ""
},
{
"docid": "d21ce518c0186c15f93348bb43273655",
"text": "On the basis of current evidence regarding human papillomavirus (HPV) and cancer, this chapter provides estimates of the global burden of HPV-related cancers, and the proportion that are actually \"caused\" by infection with HPV types, and therefore potentially preventable. We also present trends in incidence and mortality of these cancers in the past, and consider their likely future evolution.",
"title": ""
},
{
"docid": "ea9bafe86af4418fa51abe27a2c2180b",
"text": "In this work, we propose a novel phenomenological model of the EEG signal based on the dynamics of a coupled Duffing-van der Pol oscillator network. An optimization scheme is adopted to match data generated from the model with clinically obtained EEG data from subjects under resting eyes-open (EO) and eyes-closed (EC) conditions. It is shown that a coupled system of two Duffing-van der Pol oscillators with optimized parameters yields signals with characteristics that match those of the EEG in both the EO and EC cases. The results, which are reinforced using statistical analysis, show that the EEG recordings under EC and EO resting conditions are clearly distinct realizations of the same underlying model occurring due to parameter variations with qualitatively different nonlinear dynamic characteristics. In addition, the interplay between noise and nonlinearity is addressed and it is shown that, for appropriately chosen values of noise intensity in the model, very good agreement exists between the model output and the EEG in terms of the power spectrum as well as Shannon entropy. In summary, the results establish that an appropriately tuned stochastic coupled nonlinear oscillator network such as the Duffing-van der Pol system could provide a useful framework for modeling and analysis of the EEG signal. In turn, design of algorithms based on the framework has the potential to positively impact the development of novel diagnostic strategies for brain injuries and disorders. © 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
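A small sketch of two coupled Duffing-van der Pol oscillators of the kind the passage above builds on; the diffusive coupling term and all parameter values are assumptions for illustration, not the optimized values or the noise model used in the study.

import numpy as np
from scipy.integrate import solve_ivp

def coupled_dvdp(t, state, mu=1.0, alpha=1.0, beta=1.0, k=0.5):
    # Each oscillator follows x'' - mu*(1 - x**2)*x' + alpha*x + beta*x**3 = coupling,
    # with a simple diffusive coupling k*(other - self) linking the two units.
    x, dx, y, dy = state
    ddx = mu * (1 - x**2) * dx - alpha * x - beta * x**3 + k * (y - x)
    ddy = mu * (1 - y**2) * dy - alpha * y - beta * y**3 + k * (x - y)
    return [dx, ddx, dy, ddy]

solution = solve_ivp(coupled_dvdp, (0.0, 100.0), [0.1, 0.0, -0.2, 0.0], max_step=0.01)
signal = solution.y[0]  # the x-trajectory plays the role of the model-generated signal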
{
"docid": "a7d957e619fa7b3fbca8073be818fc94",
"text": "The dielectric properties of epoxy nanocomposites with insulating nano-fillers, viz., TiO2, ZnO and AI2O3 were investigated at low filler concentrations by weight. Epoxy nanocomposite samples with a good dispersion of nanoparticles in the epoxy matrix were prepared and experiments were performed to measure the dielectric permittivity and tan delta (400 Hz-1 MHz), dc volume resistivity and ac dielectric strength. At very low nanoparticle loadings, results demonstrate some interesting dielectric behaviors for nanocomposites and some of the electrical properties are found to be unique and advantageous for use in several existing and potential electrical systems. The nanocomposite dielectric properties are analyzed in detail with respect to different experimental parameters like frequency (for permittivity/tan delta), filler size, filler concentration and filler permittivity. In addition, epoxy microcomposites for the same systems were synthesized and their dielectric properties were compared to the results already obtained for nanocomposites. The interesting dielectric characteristics for epoxy based nanodielectric systems are attributed to the large volume fraction of interfaces in the bulk of the material and the ensuing interactions between the charged nanoparticle surface and the epoxy chains.",
"title": ""
},
{
"docid": "5cb18c0ac81c6ead1892c699d43224b4",
"text": "We discuss algorithms for performing canonical correlation analysis. In canonical correlation analysis we try to find correlations between two data sets. The canonical correlation coefficients can be calculated directly from the two data sets or from (reduced) representations such as the covariance matrices. The algorithms for both representations are based on singular value decomposition. The methods described here have been implemented in the speech analysis program PRAAT (Boersma & Weenink, 1996), and some examples will be demonstated for formant frequency and formant level data from 50 male Dutch speakers as were reported by Pols et al. (1973).",
"title": ""
},
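One standard SVD-based recipe for computing canonical correlation coefficients directly from two data sets, in the spirit of the passage above; this QR-plus-SVD formulation is an assumption of convenience and is not necessarily the exact algorithm implemented in PRAAT.

import numpy as np

def canonical_correlations(X, Y):
    # Column-center both data sets (rows are observations, columns are variables).
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    # Orthonormal bases for the two column spaces; the singular values of
    # Qx^T Qy are then the canonical correlation coefficients.
    Qx, _ = np.linalg.qr(Xc)
    Qy, _ = np.linalg.qr(Yc)
    singular_values = np.linalg.svd(Qx.T @ Qy, compute_uv=False)
    return np.clip(singular_values, 0.0, 1.0)

# Example with random data: the coefficients lie between 0 and 1.
rng = np.random.default_rng(0)
print(canonical_correlations(rng.normal(size=(50, 3)), rng.normal(size=(50, 4))))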
{
"docid": "ec9f13212368d59ff737a0e87939ccd2",
"text": "Abstract words refer to things that can not be seen, heard, felt, smelled, or tasted as opposed to concrete words. Among other applications, the degree of abstractness has been shown to be a useful information for metaphor detection. Our contribution to this topic are as follows: i) we compare supervised techniques to learn and extend abstractness ratings for huge vocabularies ii) we learn and investigate norms for multi-word units by propagating abstractness to verb-noun pairs which lead to better metaphor detection, iii) we overcome the limitation of learning a single rating per word and show that multisense abstractness ratings are potentially useful for metaphor detection. Finally, with this paper we publish automatically created abstractness norms for 3 million English words and multi-words as wellwords refer to things that can not be seen, heard, felt, smelled, or tasted as opposed to concrete words. Among other applications, the degree of abstractness has been shown to be a useful information for metaphor detection. Our contribution to this topic are as follows: i) we compare supervised techniques to learn and extend abstractness ratings for huge vocabularies ii) we learn and investigate norms for multi-word units by propagating abstractness to verb-noun pairs which lead to better metaphor detection, iii) we overcome the limitation of learning a single rating per word and show that multisense abstractness ratings are potentially useful for metaphor detection. Finally, with this paper we publish automatically created abstractness norms for 3 million English words and multi-words as well as automatically created sense-specific abstractness ratings.",
"title": ""
}
] |
scidocsrr
|
593de3db50578fd348bc5de06dd68ba5
|
Automotive power generation and control
|
[
{
"docid": "d2b545b4f9c0e7323760632c65206480",
"text": "This brief presents a quantitative analysis of the operating characteristics of three-phase diode bridge rectifiers with ac-side reactance and constant-voltage loads. We focus on the case where the ac-side currents vary continuously (continuous ac-side conduction mode). This operating mode is of particular importance in alternators and generators, for example. Simple approximate expressions are derived for the line and output current characteristics as well as the input power factor. Expressions describing the necessary operating conditions for continuous ac-side conduction are also developed. The derived analytical expressions are applied to practical examples and both simulations and experimental results are utilized to validate the analytical results. It is shown that the derived expressions are far more accurate than calculations based on traditional constant-current models.",
"title": ""
}
] |
[
{
"docid": "507cddc2df8ab2775395efb8387dad93",
"text": "A novel band-reject element for the design of inline waveguide pseudoelliptic band-reject filters is introduced. The element consists of an offset partial-height post in a rectangular waveguide in which the dominant TE10 mode is propagating. The location of the attenuation pole is primarily determined by the height of the post that generates it. The element allows the implementation of weak, as well as strong coupling coefficients that are encountered in asymmetric band-reject responses with broad stopbands. The coupling strength is controlled by the offset of the post with respect to the center of the main waveguide. The posts are separated by uniform sections of the main waveguide. An equivalent low-pass circuit based on the extracted pole technique is first used in a preliminary design. An improved equivalent low-pass circuit that includes a more accurate equivalent circuit of the band-reject element is then introduced. A synthesis method of the enhanced network is also presented. Filters based on the introduced element are designed, fabricated, and tested. Good agreement between measured and simulated results is achieved",
"title": ""
},
{
"docid": "7cc5c8250ad7ffaa8983d00b398c6ea9",
"text": "Decisions are powerfully affected by anticipated regret, and people anticipate feeling more regret when they lose by a narrow margin than when they lose by a wide margin. But research suggests that people are remarkably good at avoiding self-blame, and hence they may be better at avoiding regret than they realize. Four studies measured people's anticipations and experiences of regret and self-blame. In Study 1, students overestimated how much more regret they would feel when they \"nearly won\" than when they \"clearly lost\" a contest. In Studies 2, 3a, and 3b, subway riders overestimated how much more regret and self-blame they would feel if they \"nearly caught\" their trains than if they \"clearly missed\" their trains. These results suggest that people are less susceptible to regret than they imagine, and that decision makers who pay to avoid future regrets may be buying emotional insurance that they do not actually need.",
"title": ""
},
{
"docid": "5169d59af7f5cae888a998f891d99b18",
"text": "Reviewing 60 studies on natural gaze behavior in sports, it becomes clear that, over the last 40 years, the use of eye-tracking devices has considerably increased. Specifically, this review reveals the large variance of methods applied, analyses performed, and measures derived within the field. The results of sub-sample analyses suggest that sports-related eye-tracking research strives, on the one hand, for ecologically valid test settings (i.e., viewing conditions and response modes), while on the other, for experimental control along with high measurement accuracy (i.e., controlled test conditions with high-frequency eye-trackers linked to algorithmic analyses). To meet both demands, some promising compromises of methodological solutions have been proposed-in particular, the integration of robust mobile eye-trackers in motion-capture systems. However, as the fundamental trade-off between laboratory and field research cannot be solved by technological means, researchers need to carefully weigh the arguments for one or the other approach by accounting for the respective consequences. Nevertheless, for future research on dynamic gaze behavior in sports, further development of the current mobile eye-tracking methodology seems highly advisable to allow for the acquisition and algorithmic analyses of larger amounts of gaze-data and further, to increase the explanatory power of the derived results.",
"title": ""
},
{
"docid": "bf180a4ed173ef81c91594a2ee651c8c",
"text": "Recent emergence of low-cost and easy-operating depth cameras has reinvigorated the research in skeleton-based human action recognition. However, most existing approaches overlook the intrinsic interdependencies between skeleton joints and action classes, thus suffering from unsatisfactory recognition performance. In this paper, a novel latent max-margin multitask learning model is proposed for 3-D action recognition. Specifically, we exploit skelets as the mid-level granularity of joints to describe actions. We then apply the learning model to capture the correlations between the latent skelets and action classes each of which accounts for a task. By leveraging structured sparsity inducing regularization, the common information belonging to the same class can be discovered from the latent skelets, while the private information across different classes can also be preserved. The proposed model is evaluated on three challenging action data sets captured by depth cameras. Experimental results show that our model consistently achieves superior performance over recent state-of-the-art approaches.",
"title": ""
},
{
"docid": "2a8f464e709dcae4e34f73654aefe31f",
"text": "LTE 4G cellular networks are gradually being adopted by all major operators in the world and are expected to rule the cellular landscape at least for the current decade. They will also form the starting point for further progress beyond the current generation of mobile cellular networks to chalk a path towards fifth generation mobile networks. The lack of open cellular ecosystem has limited applied research in this field within the boundaries of vendor and operator R&D groups. Furthermore, several new approaches and technologies are being considered as potential elements making up such a future mobile network, including cloudification of radio network, radio network programability and APIs following SDN principles, native support of machine-type communication, and massive MIMO. Research on these technologies requires realistic and flexible experimentation platforms that offer a wide range of experimentation modes from real-world experimentation to controlled and scalable evaluations while at the same time retaining backward compatibility with current generation systems.\n In this work, we present OpenAirInterface (OAI) as a suitably flexible platform towards open LTE ecosystem and playground [1]. We will demonstrate an example of the use of OAI to deploy a low-cost open LTE network using commodity hardware with standard LTE-compatible devices. We also show the reconfigurability features of the platform.",
"title": ""
},
{
"docid": "2ae80b030c82bf97bcf3662386cb2ec8",
"text": "A system model and its corresponding inversion for synthetic aperture radar (SAR) imaging are presented. The system model incorporates the spherical nature of a radar's radiation pattern at far field. The inverse method based on this model performs a spatial Fourier transform (Doppler processing) on the recorded signals with respect to the available coordinates of a translational radar (SAR) or target (inverse SAR). It is shown that the transformed data provide samples of the spatial Fourier transform of the target's reflectivity function. The inverse method can be modified to incorporate deviations of the radar's motion from its prescribed straight line path. The effects of finite aperture on resolution, reconstruction, and sampling constraints for the imaging problem are discussed.",
"title": ""
},
{
"docid": "d03a86459dd461dcfac842ae55ae4ebb",
"text": "Convolutional networks are the de-facto standard for analyzing spatio-temporal data such as images, videos, and 3D shapes. Whilst some of this data is naturally dense (e.g., photos), many other data sources are inherently sparse. Examples include 3D point clouds that were obtained using a LiDAR scanner or RGB-D camera. Standard \"dense\" implementations of convolutional networks are very inefficient when applied on such sparse data. We introduce new sparse convolutional operations that are designed to process spatially-sparse data more efficiently, and use them to develop spatially-sparse convolutional networks. We demonstrate the strong performance of the resulting models, called submanifold sparse convolutional networks (SS-CNs), on two tasks involving semantic segmentation of 3D point clouds. In particular, our models outperform all prior state-of-the-art on the test set of a recent semantic segmentation competition.",
"title": ""
},
{
"docid": "e610893c12836cf6019fa37c888e1666",
"text": "A new type of uncertainty relation is presented, concerning the information-bearing properti a discrete quantum system. A natural link is then revealed between basic quantum theory a linear error correcting codes of classical information theory. A subset of the known codes is desc having properties which are important for error correction in quantum communication. It is shown a pair of states which are, in a certain sense, “macroscopically different,” can form a superposit which the interference phase between the two parts is measurable. This provides a highly sta “Schrödinger cat” state. [S0031-9007(96)00779-X]",
"title": ""
},
{
"docid": "467637b1f55d4673d0ddd5322a130979",
"text": "In this paper, we propose a novel deep convolutional neural network (CNN)-based algorithm for solving ill-posed inverse problems. Regularized iterative algorithms have emerged as the standard approach to ill-posed inverse problems in the past few decades. These methods produce excellent results, but can be challenging to deploy in practice due to factors including the high computational cost of the forward and adjoint operators and the difficulty of hyperparameter selection. The starting point of this paper is the observation that unrolled iterative methods have the form of a CNN (filtering followed by pointwise non-linearity) when the normal operator (<inline-formula> <tex-math notation=\"LaTeX\">$H^{*}H$ </tex-math></inline-formula>, where <inline-formula> <tex-math notation=\"LaTeX\">$H^{*}$ </tex-math></inline-formula> is the adjoint of the forward imaging operator, <inline-formula> <tex-math notation=\"LaTeX\">$H$ </tex-math></inline-formula>) of the forward model is a convolution. Based on this observation, we propose using direct inversion followed by a CNN to solve normal-convolutional inverse problems. The direct inversion encapsulates the physical model of the system, but leads to artifacts when the problem is ill posed; the CNN combines multiresolution decomposition and residual learning in order to learn to remove these artifacts while preserving image structure. We demonstrate the performance of the proposed network in sparse-view reconstruction (down to 50 views) on parallel beam X-ray computed tomography in synthetic phantoms as well as in real experimental sinograms. The proposed network outperforms total variation-regularized iterative reconstruction for the more realistic phantoms and requires less than a second to reconstruct a <inline-formula> <tex-math notation=\"LaTeX\">$512\\times 512$ </tex-math></inline-formula> image on the GPU.",
"title": ""
},
{
"docid": "64a77ec55d5b0a729206d9af6d5c7094",
"text": "In this paper, we propose an Internet of Things (IoT) virtualization framework to support connected objects sensor event processing and reasoning by providing a semantic overlay of underlying IoT cloud. The framework uses the sensor-as-aservice notion to expose IoT cloud's connected objects functional aspects in the form of web services. The framework uses an adapter oriented approach to address the issue of connectivity with various types of sensor nodes. We employ semantic enhanced access polices to ensure that only authorized parties can access the IoT framework services, which result in enhancing overall security of the proposed framework. Furthermore, the use of event-driven service oriented architecture (e-SOA) paradigm assists the framework to leverage the monitoring process by dynamically sensing and responding to different connected objects sensor events. We present our design principles, implementations, and demonstrate the development of IoT application with reasoning capability by using a green school motorcycle (GSMC) case study. Our exploration shows that amalgamation of e-SOA, semantic web technologies and virtualization paves the way to address the connectivity, security and monitoring issues of IoT domain.",
"title": ""
},
{
"docid": "717e11d1a112557abdc4160afe75ce16",
"text": "Various types of lipids and their metabolic products associated with the biological membrane play a crucial role in signal transduction, modulation, and activation of receptors and as precursors of bioactive lipid mediators. Dysfunction in the lipid homeostasis in the brain could be a risk factor for the many types of neurodegenerative disorders, including Alzheimer’s disease, Huntington’s disease, Parkinson’s disease, and amyotrophic lateral sclerosis. These neurodegenerative disorders are marked by extensive neuronal apoptosis, gliosis, and alteration in the differentiation, proliferation, and development of neurons. Sphingomyelin, a constituent of plasma membrane, as well as its primary metabolite ceramide acts as a potential lipid second messenger molecule linked with the modulation of various cellular signaling pathways. Excessive production of reactive oxygen species associated with enhanced oxidative stress has been implicated with these molecules and involved in the regulation of a variety of different neurodegenerative and neuroinflammatory disorders. Studies have shown that alterations in the levels of plasma lipid/cholesterol concentration may result to neurodegenerative diseases. Alteration in the levels of inflammatory cytokines and mediators in the brain has also been found to be implicated in the pathophysiology of neurodegenerative diseases. Although several mechanisms involved in neuronal apoptosis have been described, the molecular mechanisms underlying the correlation between lipid metabolism and the neurological deficits are not clearly understood. In the present review, an attempt has been made to provide detailed information about the association of lipids in neurodegeneration especially in Alzheimer’s disease.",
"title": ""
},
{
"docid": "4107fe17e6834f96a954e13cbb920f78",
"text": "Non-orthogonal multiple access (NOMA) can support more users than OMA techniques using the same wireless resources, which is expected to support massive connectivity for Internet of Things in 5G. Furthermore, in order to reduce the transmission latency and signaling overhead, grant-free transmission is highly expected in the uplink NOMA systems, where user activity has to be detected. In this letter, by exploiting the temporal correlation of active user sets, we propose a dynamic compressive sensing (DCS)-based multi-user detection (MUD) to realize both user activity and data detection in several continuous time slots. In particular, as the temporal correlation of the active user sets between adjacent time slots exists, we can use the estimated active user set in the current time slot as the prior information to estimate the active user set in the next time slot. Simulation results show that the proposed DCS-based MUD can achieve much better performance than that of the conventional CS-based MUD in NOMA systems.",
"title": ""
},
{
"docid": "20173b723d2ed8cf17970ef119c11571",
"text": "In recent years, there have been amazing advances in deep learning methods for machine reading. In machine reading, the machine reader has to extract the answer from the given ground truth paragraph. Recently, the stateof-the-art machine reading models achieve human level performance in SQuAD which is a reading comprehension-style question answering (QA) task. The success of machine reading has inspired researchers to combine information retrieval with machine reading to tackle open-domain QA. However, these systems perform poorly compared to reading comprehension-style QA because it is difficult to retrieve the pieces of paragraphs that contain the answer to the question. In this study, we propose two neural network rankers that assign scores to different passages based on their likelihood of containing the answer to a given question. Additionally, we analyze the relative importance of semantic similarity and word level relevance matching in open-domain QA.",
"title": ""
},
{
"docid": "0c8d6441b5756d94cd4c3a0376f94fdc",
"text": "Electronic word of mouth (eWOM) has been an important factor influencing consumer purchase decisions. Using the ABC model of attitude, this study proposes a model to explain how eWOM affects online discussion forums. Specifically, we propose that platform (Web site reputation and source credibility) and customer (obtaining buying-related information and social orientation through information) factors influence purchase intentions via perceived positive eWOM review credibility, as well as product and Web site attitudes in an online community context. A total of 353 online discussion forum users in an online community (Fashion Guide) in Taiwan were recruited, and structural equation modeling (SEM) was used to test the research hypotheses. The results indicate that Web site reputation, source credibility, obtaining buying-related information, and social orientation through information positively influence perceived positive eWOM review credibility. In turn, perceived positive eWOM review credibility directly influences purchase intentions and also indirectly influences purchase intentions via product and Web site attitudes. Finally, we discuss the theoretical and managerial implications of the findings.",
"title": ""
},
{
"docid": "aa3178c1b4d7ae8f9e3e97fabea3d6a1",
"text": "This study continues landmark research, by Katz in 1984 and Hartland and Londoner in 1997, on characteristics of effective teaching by nurse anesthesia clinical instructors. Based on the literature review, there is a highlighted gap in research evaluating current teaching characteristics of clinical nurse anesthesia instructors that are valuable and effective from an instructor's and student's point of view. This study used a descriptive, quantitative research approach to assess (1) the importance of 24 characteristics (22 effective clinical teaching characteristics identified by Katz, and 2 items added for this study) of student registered nurse anesthetists (SRNAs) and clinical preceptors, who are Certified Registered Nurse Anesthetists, and (2) the congruence between the student and preceptor perceptions. A Likert-scale survey was used to assess the importance of each characteristic. The study was conducted at a large Midwestern hospital. The findings of this study did not support the results found by Hartland and Londoner based on the Friedman 2-way analysis. The rankings of the 24 characteristics by the students and the clinical preceptors in the current research were not significantly congruent based on the Kendall coefficient analysis. The results can help clinical preceptors increase their teaching effectiveness and generate effective learning environments for SRNAs.",
"title": ""
},
{
"docid": "d3fbf7429dff6f68ec06014467b0217a",
"text": "This paper presents a hierarchical framework for detecting local and global anomalies via hierarchical feature representation and Gaussian process regression (GPR) which is fully non-parametric and robust to the noisy training data, and supports sparse features. While most research on anomaly detection has focused more on detecting local anomalies, we are more interested in global anomalies that involve multiple normal events interacting in an unusual manner, such as car accidents. To simultaneously detect local and global anomalies, we cast the extraction of normal interactions from the training videos as a problem of finding the frequent geometric relations of the nearby sparse spatio-temporal interest points (STIPs). A codebook of interaction templates is then constructed and modeled using the GPR, based on which a novel inference method for computing the likelihood of an observed interaction is also developed. Thereafter, these local likelihood scores are integrated into globally consistent anomaly masks, from which anomalies can be succinctly identified. To the best of our knowledge, it is the first time GPR is employed to model the relationship of the nearby STIPs for anomaly detection. Simulations based on four widespread datasets show that the new method outperforms the main state-of-the-art methods with lower computational burden.",
"title": ""
},
{
"docid": "5bd713c468f48313e42b399f441bb709",
"text": "Nowadays, malware is affecting not only PCs but also mobile devices, which became pervasive in everyday life. Mobile devices can access and store personal information (e.g., location, photos, and messages) and thus are appealing to malware authors. One of the most promising approach to analyze malware is by monitoring its execution in a sandbox (i.e., via dynamic analysis). In particular, most malware sandboxing solutions for Android rely on an emulator, rather than a real device. This motivates malware authors to include runtime checks in order to detect whether the malware is running in a virtualized environment. In that case, the malicious app does not trigger the malicious payload. The presence of differences between real devices and Android emulators started an arms race between security researchers and malware authors, where the former want to hide these differences and the latter try to seek them out. In this paper we present Mirage, a malware sandbox architecture for Android focused on dynamic analysis evasion attacks. We designed the components of Mirage to be extensible via software modules, in order to build specific countermeasures against such attacks. To the best of our knowledge, Mirage is the first modular sandbox architecture that is robust against sandbox detection techniques. As a representative case study, we present a proof of concept implementation of Mirage with a module that tackles evasion attacks based on sensors API return values.",
"title": ""
},
{
"docid": "d5be665f8ce9fb442c87da6dd4baa6a6",
"text": "In this paper we propose a novel kernel sparse representation classification (SRC) framework and utilize the local binary pattern (LBP) descriptor in this framework for robust face recognition. First we develop a kernel coordinate descent (KCD) algorithm for 11 minimization in the kernel space, which is based on the covariance update technique. Then we extract LBP descriptors from each image and apply two types of kernels (χ2 distance based and Hamming distance based) with the proposed KCD algorithm under the SRC framework for face recognition. Experiments on both the Extended Yale B and the PIE face databases show that the proposed method is more robust against noise, occlusion, and illumination variations, even with small number of training samples.",
"title": ""
},
{
"docid": "83a4a89d3819009d61123a146b38d0e9",
"text": "OBJECTIVE\nBehçet's disease (BD) is a chronic, relapsing, inflammatory vascular disease with no pathognomonic test. Low sensitivity of the currently applied International Study Group (ISG) clinical diagnostic criteria led to their reassessment.\n\n\nMETHODS\nAn International Team for the Revision of the International Criteria for BD (from 27 countries) submitted data from 2556 clinically diagnosed BD patients and 1163 controls with BD-mimicking diseases or presenting at least one major BD sign. These were randomly divided into training and validation sets. Logistic regression, 'leave-one-country-out' cross-validation and clinical judgement were employed to develop new International Criteria for BD (ICBD) with the training data. Existing and new criteria were tested for their performance in the validation set.\n\n\nRESULTS\nFor the ICBD, ocular lesions, oral aphthosis and genital aphthosis are each assigned 2 points, while skin lesions, central nervous system involvement and vascular manifestations 1 point each. The pathergy test, when used, was assigned 1 point. A patient scoring ≥4 points is classified as having BD. In the training set, 93.9% sensitivity and 92.1% specificity were assessed compared with 81.2% sensitivity and 95.9% specificity for the ISG criteria. In the validation set, ICBD demonstrated an unbiased estimate of sensitivity of 94.8% (95% CI: 93.4-95.9%), considerably higher than that of the ISG criteria (85.0%). Specificity (90.5%, 95% CI: 87.9-92.8%) was lower than that of the ISG-criteria (96.0%), yet still reasonably high. For countries with at least 90%-of-cases and controls having a pathergy test, adding 1 point for pathergy test increased the estimate of sensitivity from 95.5% to 98.5%, while barely reducing specificity from 92.1% to 91.6%.\n\n\nCONCLUSION\nThe new proposed criteria derived from multinational data exhibits much improved sensitivity over the ISG criteria while maintaining reasonable specificity. It is proposed that the ICBD criteria to be adopted both as a guide for diagnosis and classification of BD.",
"title": ""
},
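A minimal sketch of the ICBD point scheme as described in the passage above; the item weights and the threshold of 4 points come from the passage, while the dictionary keys and function names are just illustrative shorthand.

ICBD_POINTS = {
    "ocular_lesions": 2,
    "oral_aphthosis": 2,
    "genital_aphthosis": 2,
    "skin_lesions": 1,
    "cns_involvement": 1,
    "vascular_manifestations": 1,
    "pathergy_test": 1,  # scored only when the pathergy test is actually used
}

def icbd_score(findings):
    # findings: an iterable of item names observed in a patient.
    return sum(ICBD_POINTS.get(item, 0) for item in findings)

def classifies_as_bd(findings):
    # A patient scoring 4 points or more is classified as having BD.
    return icbd_score(findings) >= 4

# Example: oral plus genital aphthosis alone already reach the threshold.
print(classifies_as_bd({"oral_aphthosis", "genital_aphthosis"}))  # True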
{
"docid": "2bb36d78294b15000b78acd7a0831762",
"text": "This study aimed to verify whether achieving a dist inctive academic performance is unlikely for students at high risk of smartphone addiction. Additionally, it verified whether this phenomenon was equally applicable to male and femal e students. After implementing systematic random sampling, 293 university students participated by completing an online survey questionnaire posted on the university’s stu dent information system. The survey questionnaire collected demographic information and responses to the Smartphone Addiction Scale-Short Version (SAS-SV) items. The results sho wed that male and female university students were equally susceptible to smartphone add iction. Additionally, male and female university students were equal in achieving cumulat ive GPAs with distinction or higher within the same levels of smartphone addiction. Fur thermore, undergraduate students who were at a high risk of smartphone addiction were le ss likely to achieve cumulative GPAs of distinction or higher.",
"title": ""
}
] |
scidocsrr
|
f1030390f40dc904d8ab89d57572128c
|
Which Are the Best Features for Automatic Verb Classification
|
[
{
"docid": "ef1f5eaa9c6f38bbe791e512a7d89dab",
"text": "Lexical-semantic verb classifications have proved useful in supporting various natural language processing (NLP) tasks. The largest and the most widely deployed classification in English is Levin’s (1993) taxonomy of verbs and their classes. While this resource is attractive in being extensive enough for some NLP use, it is not comprehensive. In this paper, we present a substantial extension to Levin’s taxonomy which incorporates 57 novel classes for verbs not covered (comprehensively) by Levin. We also introduce 106 novel diathesis alternations, created as a side product of constructing the new classes. We demonstrate the utility of our novel classes by using them to support automatic subcategorization acquisition and show that the resulting extended classification has extensive coverage over the English verb lexicon.",
"title": ""
},
{
"docid": "69fabbf2e0cc50dbcf28de6cc174159d",
"text": "This paper presents an automatic word sense disambiguation (WSD) system that uses Part-of-Speech (POS) tags along with word classes as the discrete features. Word Classes are derived from the Word Class Assigner using the Word Exchange Algorithm from statistical language processing. Naïve-Bayes classifier is employed from Weka in both the training and testing phases to perform the supervised learning on the standard Senseval-3 data set. Experiments were performing using 10-fold cross-validation on the training set and the training and testing data for training the model and evaluating it. In both experiments, the features will either used separately or combined together to produce the accuracies. Results indicate that word class features did not provide any discrimination for word sense disambiguation. POS tag features produced a small improvement over the baseline. The combination of both word class and POS tag features did not increase the accuracy results. Overall, further study is likely needed to possibly improve the implementation of the word class features in the system.",
"title": ""
}
] |
[
{
"docid": "38bc206d9caac1d2dbe767d7e39b7aa0",
"text": "We discuss the idea that addictions can be treated by changing the mechanisms involved in self-control with or without regard to intention. The core clinical symptoms of addiction include an enhanced incentive for drug taking (craving), impaired self-control (impulsivity and compulsivity), negative mood, and increased stress re-activity. Symptoms related to impaired self-control involve reduced activity in control networks including anterior cingulate (ACC), adjacent prefrontal cortex (mPFC), and striatum. Behavioral training such as mindfulness meditation can increase the function of control networks and may be a promising approach for the treatment of addiction, even among those without intention to quit.",
"title": ""
},
{
"docid": "cc3d0d9676ad19f71b4a630148c4211f",
"text": "OBJECTIVES\nPrevious studies have revealed that memory performance is diminished in chronic pain patients. Few studies, however, have assessed multiple components of memory in a single sample. It is currently also unknown whether attentional problems, which are commonly observed in chronic pain, mediate the decline in memory. Finally, previous studies have focused on middle-aged adults, and a possible detrimental effect of aging on memory performance in chronic pain patients has been commonly disregarded. This study, therefore, aimed at describing the pattern of semantic, working, and visual and verbal episodic memory performance in participants with chronic pain, while testing for possible contributions of attention and age to task performance.\n\n\nMETHODS\nThirty-four participants with chronic pain and 32 pain-free participants completed tests of episodic, semantic, and working memory to assess memory performance and a test of attention.\n\n\nRESULTS\nParticipants with chronic pain performed worse on tests of working memory and verbal episodic memory. A decline in attention explained some, but not all, group differences in memory performance. Finally, no additional effect of age on the diminished task performance in participants with chronic pain was observed.\n\n\nDISCUSSION\nTaken together, the results indicate that chronic pain significantly affects memory performance. Part of this effect may be caused by underlying attentional dysfunction, although this could not fully explain the observed memory decline. An increase in age in combination with the presence of chronic pain did not additionally affect memory performance.",
"title": ""
},
{
"docid": "6fb006066fa1a25ae348037aa1ee7be3",
"text": "Reducing redundancy in data representation leads to decreased data storage requirements and lower costs for data communication.",
"title": ""
},
{
"docid": "01638567bf915e26bf9398132ca27264",
"text": "Uncontrolled bleeding from the cystic artery and its branches is a serious problem that may increase the risk of intraoperative lesions to vital vascular and biliary structures. On laparoscopic visualization anatomic relations are seen differently than during conventional surgery, so proper knowledge of the hepatobiliary triangle anatomic structures under the conditions of laparoscopic visualization is required. We present an original classification of the anatomic variations of the cystic artery into two main groups based on our experience with 200 laparoscopic cholecystectomies, with due consideration of the known anatomicotopographic relations. Group I designates a cystic artery situated within the hepatobiliary triangle on laparoscopic visualization. This group included three types: (1) normally lying cystic artery, found in 147 (73.5%) patients; (2) most common cystic artery variation, manifesting as its doubling, present in 31 (15.5%) patients; and (3) the cystic artery originating from the aberrant right hepatic artery, observed in 11 (5.5%) patients. Group II designates a cystic artery that could not be found within the hepatobiliary triangle on laparoscopic dissection. This group included two types of variation: (1) cystic artery originating from the gastroduodenal artery, found in nine (4.5%) patients; and (2) cystic artery originating from the left hepatic artery, recorded in two (1%) patients.",
"title": ""
},
{
"docid": "b5b91947716e3594e3ddbb300ea80d36",
"text": "In this paper, a novel drive method, which is different from the traditional motor drive techniques, for high-speed brushless DC (BLDC) motor is proposed and verified by a series of experiments. It is well known that the BLDC motor can be driven by either pulse-width modulation (PWM) techniques with a constant dc-link voltage or pulse-amplitude modulation (PAM) techniques with an adjustable dc-link voltage. However, to our best knowledge, there is rare study providing a proper drive method for a high-speed BLDC motor with a large power over a wide speed range. Therefore, the detailed theoretical analysis comparison of the PWM control and the PAM control for high-speed BLDC motor is first given. Then, a conclusion that the PAM control is superior to the PWM control at high speed is obtained because of decreasing the commutation delay and high-frequency harmonic wave. Meanwhile, a new high-speed BLDC motor drive method based on the hybrid approach combining PWM and PAM is proposed. Finally, the feasibility and effectiveness of the performance analysis comparison and the new drive method are verified by several experiments.",
"title": ""
},
{
"docid": "350cda71dae32245b45d96b5fdd37731",
"text": "In this work, we focus on cyclic codes over the ring F2+uF2+vF2+uvF2, which is not a finite chain ring. We use ideas from group rings and works of AbuAlrub et al. in (Des Codes Crypt 42:273–287, 2007) to characterize the ring (F2 + uF2 + vF2 + uvF2)/(x − 1) and cyclic codes of odd length. Some good binary codes are obtained as the images of cyclic codes over F2+uF2+vF2+uvF2 under two Gray maps that are defined. We also characterize the binary images of cyclic codes over F2 + uF2 + vF2 + uvF2 in general.",
"title": ""
},
{
"docid": "2805fdd4cd97931497b6c42263a20534",
"text": "The well-established Modulation Transfer Function (MTF) is an imaging performance parameter that is well suited to describing certain sources of detail loss, such as optical focus and motion blur. As performance standards have developed for digital imaging systems, the MTF concept has been adapted and applied as the spatial frequency response (SFR). The international standard for measuring digital camera resolution, ISO 12233, was adopted over a decade ago. Since then the slanted edge-gradient analysis method on which it was based has been improved and applied beyond digital camera evaluation. Practitioners have modified minor elements of the standard method to suit specific system characteristics, unique measurement needs, or computational shortcomings in the original method. Some of these adaptations have been documented and benchmarked, but a number have not. In this paper we describe several of these modifications, and how they have improved the reliability of the resulting system evaluations. We also review several ways the method has been adapted and applied beyond camera resolution.",
"title": ""
},
{
"docid": "c716e7dc1c0e770001bcb57eab871968",
"text": "We present a new method to visualize from an ensemble of flow fields the statistical properties of streamlines passing through a selected location. We use principal component analysis to transform the set of streamlines into a low-dimensional Euclidean space. In this space the streamlines are clustered into major trends, and each cluster is in turn approximated by a multivariate Gaussian distribution. This yields a probabilistic mixture model for the streamline distribution, from which confidence regions can be derived in which the streamlines are most likely to reside. This is achieved by transforming the Gaussian random distributions from the low-dimensional Euclidean space into a streamline distribution that follows the statistical model, and by visualizing confidence regions in this distribution via iso-contours. We further make use of the principal component representation to introduce a new concept of streamline-median, based on existing median concepts in multidimensional Euclidean spaces. We demonstrate the potential of our method in a number of real-world examples, and we compare our results to alternative clustering approaches for particle trajectories as well as curve boxplots.",
"title": ""
},
{
"docid": "3cda92028692a25411d74e5a002740ac",
"text": "Protecting sensitive information from unauthorized disclosure is a major concern of every organization. As an organization’s employees need to access such information in order to carry out their daily work, data leakage detection is both an essential and challenging task. Whether caused by malicious intent or an inadvertent mistake, data loss can result in significant damage to the organization. Fingerprinting is a content-based method used for detecting data leakage. In fingerprinting, signatures of known confidential content are extracted and matched with outgoing content in order to detect leakage of sensitive content. Existing fingerprinting methods, however, suffer from two major limitations. First, fingerprinting can be bypassed by rephrasing (or minor modification) of the confidential content, and second, usually the whole content of document is fingerprinted (including non-confidential parts), resulting in false alarms. In this paper we propose an extension to the fingerprinting approach that is based on sorted k-skip-n-grams. The proposed method is able to produce a fingerprint of the core confidential content which ignores non-relevant (non-confidential) sections. In addition, the proposed fingerprint method is more robust to rephrasing and can also be used to detect a previously unseen confidential document and therefore provide better detection of intentional leakage incidents.",
"title": ""
},
{
"docid": "d1e062c5c91e93a29b9cd1a015d5e135",
"text": "Experimental acoustic cell separation methods have been widely used to perform separation for different types of blood cells. However, numerical simulation of acoustic cell separation has not gained enough attention and needs further investigation since by using numerical methods, it is possible to optimize different parameters involved in the design of an acoustic device and calculate particle trajectories in a simple and low cost manner before spending time and effort for fabricating these devices. In this study, we present a comprehensive finite element-based simulation of acoustic separation of platelets, red blood cells and white blood cells, using standing surface acoustic waves (SSAWs). A microfluidic channel with three inlets, including the middle inlet for sheath flow and two symmetrical tilted angle inlets for the cells were used to drive the cells through the channel. Two interdigital transducers were also considered in this device and by implementing an alternating voltage to the transducers, an acoustic field was created which can exert the acoustic radiation force to the cells. Since this force is dependent to the size of the cells, the cells are pushed towards the midline of the channel with different path lines. Particle trajectories for different cells were obtained and compared with a theoretical equation. Two types of separations were observed as a result of varying the amplitude of the acoustic field. In the first mode of separation, white blood cells were sorted out through the middle outlet and in the second mode of separation, platelets were sorted out through the side outlets. Depending on the clinical needs and by using the studied microfluidic device, each of these modes can be applied to separate the desired cells.",
"title": ""
},
{
"docid": "f64c7c6d068b0e2f9500d3b1e2d79178",
"text": "The proposed protocol is for a systematic review and meta-analysis on the effects of whole-grains (WG) on non-communicable diseases such as type 2 diabetes, cardiovascular disease, hypertension and obesity. The primary objectives is to explore the mechanisms of WG intake on multiple biomarkers of NCDs such as fasting glucose, fasting insulin and many others. The secondary objective will look at the dose-response relationship between these various mechanisms. The protocol outlines the motive and scope for the review, and methodology including the risk of bias, statistical analysis, screening and study criteria.",
"title": ""
},
{
"docid": "60af8669ea0acb73e8edcd90abf0ce3e",
"text": "The physical mechanism of seed germination and its inhibition by abscisic acid (ABA) in Brassica napus L. was investigated, using volumetric growth (= water uptake) rate (dV/dt), water conductance (L), cell wall extensibility coefficient (m), osmotic pressure ( product operator(i)), water potential (Psi(i)), turgor pressure (P), and minimum turgor for cell expansion (Y) of the intact embryo as experimental parameters. dV/dt, product operator(i), and Psi(i) were measured directly, while m, P, and Y were derived by calculation. Based on the general equation of hydraulic cell growth [dV/dt = Lm/(L + m) (Delta product operator - Y), where Delta product operator = product operator(i) - product operator of the external medium], the terms (Lm/(L + m) and product operator(i) - Y were defined as growth coefficient (k(G)) and growth potential (GP), respectively. Both k(G) and GP were estimated from curves relating dV/dt (steady state) to product operator of osmotic test solutions (polyethylene glycol 6000).During the imbibition phase (0-12 hours after sowing), k(G) remains very small while GP approaches a stable level of about 10 bar. During the subsequent growth phase of the embryo, k(G) increases about 10-fold. ABA, added before the onset of the growth phase, prevents the rise of k(G) and lowers GP. These effects are rapidly abolished when germination is induced by removal of ABA. Neither L (as judged from the kinetics of osmotic water efflux) nor the amount of extractable solutes are affected by these changes. product operator(i) and Psi(i) remain at a high level in the ABA-treated seed but drop upon induction of germination, and this adds up to a large decrease of P, indicating that water uptake of the germinating embryo is controlled by cell wall loosening rather than by changes of product operator(i) or L. ABA inhibits water uptake by preventing cell wall loosening. By calculating Y and m from the growth equation, it is further shown that cell wall loosening during germination comprises both a decrease of Y from about 10 to 0 bar and an at least 10-fold increase of m. ABA-mediated embryo dormancy is caused by a reversible inhibition of both of these changes in cell wall stability.",
"title": ""
},
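A worked numerical sketch of the hydraulic growth equation quoted in the passage above, dV/dt = [Lm/(L + m)]*(Delta pi - Y) with Delta pi = pi(i) - pi(ext); the parameter values used below are illustrative assumptions, not measurements from the study.

def growth_rate(L, m, pi_i, pi_ext, Y):
    # k_G = L*m/(L + m) is the growth coefficient; GP = pi_i - Y is the growth potential.
    k_G = L * m / (L + m)
    delta_pi = pi_i - pi_ext
    return k_G * (delta_pi - Y)

# Illustrative values (pressures in bar): growth slows as the minimum turgor Y rises
# toward the osmotic pressure difference, and stops once delta_pi - Y reaches zero.
print(growth_rate(L=1.0, m=0.5, pi_i=12.0, pi_ext=2.0, Y=0.0))   # positive growth rate
print(growth_rate(L=1.0, m=0.5, pi_i=12.0, pi_ext=2.0, Y=10.0))  # zero growth rate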
{
"docid": "ec69b95261fc19183a43c0e102f39016",
"text": "The selection of a surgical approach for the treatment of tibia plateau fractures is an important decision. Approximately 7% of all tibia plateau fractures affect the posterolateral corner. Displaced posterolateral tibia plateau fractures require anatomic articular reduction and buttress plate fixation on the posterior aspect. These aims are difficult to reach through a lateral or anterolateral approach. The standard posterolateral approach with fibula osteotomy and release of the posterolateral corner is a traumatic procedure, which includes the risk of fragment denudation. Isolated posterior approaches do not allow sufficient visual control of fracture reduction, especially if the fracture is complex. Therefore, the aim of this work was to present a surgical approach for posterolateral tibial plateau fractures that both protects the soft tissue and allows for good visual control of fracture reduction. The approach involves a lateral arthrotomy for visualizing the joint surface and a posterolateral approach for the fracture reduction and plate fixation, which are both achieved through one posterolateral skin incision. Using this approach, we achieved reduction of the articular surface and stable fixation in six of seven patients at the final follow-up visit. No complications and no loss of reduction were observed. Additionally, the new posterolateral approach permits direct visual exposure and facilitates the application of a buttress plate. Our approach does not require fibular osteotomy, and fragments of the posterolateral corner do not have to be detached from the soft tissue network.",
"title": ""
},
{
"docid": "58042f8c83e5cc4aa41e136bb4e0dc1f",
"text": "In this paper, we propose wire-free integrated sensors that monitor pulse wave velocity (PWV) and respiration, both non-electrical vital signs, by using an all-electrical method. The key techniques that we employ to obtain all-electrical and wire-free measurement are bio-impedance (BI) and analog-modulated body-channel communication (BCC), respectively. For PWV, time difference between ECG signal from the heart and BI signal from the wrist is measured. To remove wires and avoid sampling rate mismatch between ECG and BI sensors, ECG signal is sent to the BI sensor via analog BCC without any sampling. For respiration measurement, BI sensor is located at the abdomen to detect volume change during inhalation and exhalation. A prototype chip fabricated in 0.11 μm CMOS process consists of ECG, BI sensor and BCC transceiver. Measurement results show that heart rate and PWV are both within their normal physiological range. The chip consumes 1.28 mW at 1.2 V supply while occupying 5 mm×2.5 mm of area.",
"title": ""
},
{
"docid": "b3ebbff355dfc23b4dfbab3bc3012980",
"text": "Research with young children has shown that, like adults, they focus selectively on the aspects of an actor's behavior that are relevant to his or her underlying intentions. The current studies used the visual habituation paradigm to ask whether infants would similarly attend to those aspects of an action that are related to the actor's goals. Infants saw an actor reach for and grasp one of two toys sitting side by side on a curtained stage. After habituation, the positions of the toys were switched and babies saw test events in which there was a change in either the path of motion taken by the actor's arm or the object that was grasped by the actor. In the first study, 9-month-old infants looked longer when the actor grasped a new toy than when she moved through a new path. Nine-month-olds who saw an inanimate object of approximately the same dimensions as the actor's arm touch the toy did not show this pattern in test. In the second study, 5-month-old infants showed similar, though weaker, patterns. A third study provided evidence that the findings for the events involving a person were not due to perceptual changes in the objects caused by occlusion by the hand. A fourth study replicated the 9 month results for a human grasp at 6 months, and revealed that these effects did not emerge when infants saw an inanimate object with digits that moved to grasp the toy. Taken together, these findings indicate that young infants distinguish in their reasoning about human action and object motion, and that by 6 months infants encode the actions of other people in ways that are consistent with more mature understandings of goal-directed action.",
"title": ""
},
{
"docid": "197f5af02ea53b1dd32167780c4126ed",
"text": "A new technique for summarization is presented here for summarizing articles known as text summarization using neural network and rhetorical structure theory. A neural network is trained to learn the relevant characteristics of sentences by using back propagation technique to train the neural network which will be used in the summary of the article. After training neural network is then modified to feature fusion and pruning the relevant characteristics apparent in summary sentences. Finally, the modified neural network is used to summarize articles and combining it with the rhetorical structure theory to form final summary of an article.",
"title": ""
},
{
"docid": "7bce92a72a19aef0079651c805883eb5",
"text": "Highly realistic virtual human models are rapidly becoming commonplace in computer graphics. These models, often represented by complex shape and requiring labor-intensive process, challenge the problem of automatic modeling. This paper studies the problem and solutions to automatic modeling of animatable virtual humans. Methods for capturing the shape of real people, parameterization techniques for modeling static shape (the variety of human body shapes) and dynamic shape (how the body shape changes as it moves) of virtual humans are classified, summarized and compared. Finally, methods for clothed virtual humans are reviewed.",
"title": ""
},
{
"docid": "88cb8c2f7f4fd5cdc95cc8e48faa3cb7",
"text": "Prediction or prognostication is at the core of modern evidence-based medicine. Prediction of overall mortality and cardiovascular disease can be improved by a systematic evaluation of measurements from large-scale epidemiological studies or by using nested sampling designs to discover new markers from omics technologies. In study I, we investigated if prediction measures such as calibration, discrimination and reclassification could be calculated within traditional sampling designs and which of these designs were the most efficient. We found that is possible to calculate prediction measures by using a proper weighting system and that a stratified casecohort design is a reasonable choice both in terms of efficiency and simplicity. In study II, we investigated the clinical utility of several genetic scores for incident coronary heart disease. We found that genetic information could be of clinical value in improving the allocation of patients to correct risk strata and that the assessment of a genetic risk score among intermediate risk subjects could help to prevent about one coronary heart disease event every 318 people screened. In study III, we explored the association between circulating metabolites and incident coronary heart disease. We found four new metabolites associated with coronary heart disease independently of established cardiovascular risk factors and with evidence of clinical utility. By using genetic information we determined a potential causal effect on coronary heart disease of one of these novel metabolites. In study IV, we compared a large number of demographics, health and lifestyle measurements for association with all-cause and cause-specific mortality. By ranking measurements in terms of their predictive abilities we could provide new insights about their relative importance, as well as reveal some unexpected associations. Moreover we developed and validated a prediction score for five-year mortality with good discrimination ability and calibrated it for the entire UK population. In conclusion, we applied a translational approach spanning from the discovery of novel biomarkers to their evaluation in terms of clinical utility. We combined this effort with methodological improvements aimed to expand prediction measures in settings that were not previously explored. We identified promising novel metabolomics markers for cardiovascular disease and supported the potential clinical utility of a genetic score in primary prevention. Our results might fuel future studies aimed to implement these findings in clinical practice.",
"title": ""
},
{
"docid": "934680e03cfaccd2426ee8e8e311ef06",
"text": "Photocatalytic water splitting using particulate semiconductors is a potentially scalable and economically feasible technology for converting solar energy into hydrogen. Z-scheme systems based on two-step photoexcitation of a hydrogen evolution photocatalyst (HEP) and an oxygen evolution photocatalyst (OEP) are suited to harvesting of sunlight because semiconductors with either water reduction or oxidation activity can be applied to the water splitting reaction. However, it is challenging to achieve efficient transfer of electrons between HEP and OEP particles. Here, we present photocatalyst sheets based on La- and Rh-codoped SrTiO3 (SrTiO3:La, Rh; ref. ) and Mo-doped BiVO4 (BiVO4:Mo) powders embedded into a gold (Au) layer. Enhancement of the electron relay by annealing and suppression of undesirable reactions through surface modification allow pure water (pH 6.8) splitting with a solar-to-hydrogen energy conversion efficiency of 1.1% and an apparent quantum yield of over 30% at 419 nm. The photocatalyst sheet design enables efficient and scalable water splitting using particulate semiconductors.",
"title": ""
},
{
"docid": "ac044ce167d7296675ddfa1f9387c75d",
"text": "Over the years, many millimeter-wave circulator techniques have been presented, such as nonradiative dielectric and fin-line circulators. Although excellent results have been demonstrated in the literature, their proliferation in commercial devices has been hindered by complex assembly cost. This paper presents a study of substrate-integrated millimeter-wave degree-2 circulators. Although the substrate integrated-circuits technique may be applied to virtually any planar transmission medium, the one adopted in this paper is the substrate integrated waveguide (SIW). Two design configurations are possible: a planar one that is suitable for thin substrate materials and a turnstile one for thicker substrate materials. The turnstile circulator is ideal for systems where the conductor losses associated with the thin SIW cannot be tolerated. The design methodology adopted in this paper is to characterize the complex gyrator circuit as a preamble to design. This is done via a commercial finite-element package",
"title": ""
}
] |
scidocsrr
|
31d12f6f3af91826a57eb83ddb829ae9
|
Linking Cybersecurity Knowledge: Cybersecurity Information Discovery Mechanism
|
[
{
"docid": "e913a4d2206be999f0278d48caa4708a",
"text": "Widespread deployment of the Internet enabled building of an emerging IT delivery model, i.e., cloud computing. Albeit cloud computing-based services have rapidly developed, their security aspects are still at the initial stage of development. In order to preserve cybersecurity in cloud computing, cybersecurity information that will be exchanged within it needs to be identified and discussed. For this purpose, we propose an ontological approach to cybersecurity in cloud computing. We build an ontology for cybersecurity operational information based on actual cybersecurity operations mainly focused on non-cloud computing. In order to discuss necessary cybersecurity information in cloud computing, we apply the ontology to cloud computing. Through the discussion, we identify essential changes in cloud computing such as data-asset decoupling and clarify the cybersecurity information required by the changes such as data provenance and resource dependency information.",
"title": ""
}
] |
[
{
"docid": "42c2e599dbbb00784e2a6837ebd17ade",
"text": "Several real-world classification problems are example-dependent cost-sensitive in nature, where the costs due to misclassification vary between examples. However, standard classification methods do not take these costs into account, and assume a constant cost of misclassification errors. State-of-the-art example-dependent cost-sensitive techniques only introduce the cost to the algorithm, either before or after training, therefore, leaving opportunities to investigate the potential impact of algorithms that take into account the real financial example-dependent costs during an algorithm training. In this paper, we propose an example-dependent cost-sensitive decision tree algorithm, by incorporating the different example-dependent costs into a new cost-based impurity measure and a new cost-based pruning criteria. Then, using three different databases, from three real-world applications: credit card fraud detection, credit scoring and direct marketing, we evaluate the proposed method. The results show that the proposed algorithm is the best performing method for all databases. Furthermore, when compared against a standard decision tree, our method builds significantly smaller trees in only a fifth of the time, while having a superior performance measured by cost savings, leading to a method that not only has more business-oriented results, but also a method that creates simpler models that are easier to analyze. 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "28cfe864acc8c40eb8759261273cf3bb",
"text": "Mobile-edge computing (MEC) has recently emerged as a promising paradigm to liberate mobile devices from increasingly intensive computation workloads, as well as to improve the quality of computation experience. In this paper, we investigate the tradeoff between two critical but conflicting objectives in multi-user MEC systems, namely, the power consumption of mobile devices and the execution delay of computation tasks. A power consumption minimization problem with task buffer stability constraints is formulated to investigate the tradeoff, and an online algorithm that decides the local execution and computation offloading policy is developed based on Lyapunov optimization. Specifically, at each time slot, the optimal frequencies of the local CPUs are obtained in closed forms, while the optimal transmit power and bandwidth allocation for computation offloading are determined with the Gauss-Seidel method. Performance analysis is conducted for the proposed algorithm, which indicates that the power consumption and execution delay obeys an $\\left[O\\left(1\\slash V\\right),O\\left(V\\right)\\right]$ tradeoff with $V$ as a control parameter. Simulation results are provided to validate the theoretical analysis and demonstrate the impacts of various parameters to the system performance.",
"title": ""
},
{
"docid": "b85330c2d0816abe6f28fd300e5f9b75",
"text": "This paper presents a novel dual polarized planar aperture antenna using the low-temperature cofired ceramics technology to realize a novel antenna-in-package for a 60-GHz CMOS differential transceiver chip. Planar aperture antenna technology ensures high gain and wide bandwidth. Differential feeding is adopted to be compatible with the chip. Dual polarization makes the antenna function as a pair of single polarized antennas but occupies much less area. The antenna is ±45° dual polarized, and each polarization acts as either a transmitting (TX) or receiving (RX) antenna. This improves the signal-to-noise ratio of the wireless channel in a point-to-point communication, because the TX/RX polarization of one antenna is naturally copolarized with the RX/TX polarization of the other antenna. A prototype of the proposed antenna is designed, fabricated, and measured, whose size is 12 mm × 12 mm × 1.128 mm (2.4λ0 × 2.4λ0 × 0.226λ0). The measurement shows that the -10 dB impedance bandwidth covers the entire 60 GHz unlicensed band (57-64 GHz) for both polarizations. Within the bandwidth, the isolation between the ports of the two polarizations is better than 26 dB, and the gain is higher than 10 dBi with a peak of around 12 dBi for both polarizations.",
"title": ""
},
{
"docid": "c5d0d79bc6a0b58cf09c5d8eb0dc2ecf",
"text": "FRAME SEMANTICS is a research program in empirical semantics which emphasizes the continuities between language and experience, and provides a framework for presenting the results of that research. A FRAME is any system of concepts related in such a way that to understand any one concept it is necessary to understand the entire system; introducing any one concept results in all of them becoming available. In Frame Semantics, a word represents a category of experience; part of the research endeavor is the uncovering of reasons a speech community has for creating the category represented by the word and including that reason in the description of the meaning of the word.",
"title": ""
},
{
"docid": "cee3c61474bf14158d4abf0c794a9c2a",
"text": "This course will focus on describing techniques for handling datasets larger than main memory in scientific visualization and computer graphics. Recently, several external memory techniques have been developed for a wide variety of graphics and visualization problems, including surface simplification, volume rendering, isosurface generation, ray tracing, surface reconstruction, and so on. This work has had significant impact given that in recent years there has been a rapid increase in the raw size of datasets. Several technological trends are contributing to this, such as the development of high-resolution 3D scanners, and the need to visualize ASCI-size (Accelerated Strategic Computing Initiative) datasets. Another important push for this kind of technology is the growing speed gap between main memory and caches, such a gap penalizes algorithms which do not optimize for coherence of access. Because of these reasons, much research in computer graphics focuses on developing out-of-core (and often cache-friendly) techniques. This course reviews fundamental issues, current problems, and unresolved solutions, and presents an in-depth study of external memory algorithms developed in recent years. Its goal is to provide students and graphics researchers and professionals with an effective knowledge of current techniques, as well as the foundation to develop novel techniques on their own. Schedule (tentative) 5 min Introduction to the course Silva 45 min Overview of external memory algorithms Chiang 40 min Out-of-core scientific visualization Silva",
"title": ""
},
{
"docid": "024c5cd20c5764f29f62a1f35288eef2",
"text": "This paper presents a low-loss and high Tx-to-Rx isolation single-pole double-throw (SPDT) millimeter-wave switch for true time delay applications. The switch is designed based on matching-network and double-shunt transistors with quarter-wavelength transmission lines. The insertion loss and isolation characteristics of the switches are analyzed revealing that optimization of the transistor size with a matching-network switch on the receiver side and a double-shunt switch on the transmitter side can enhance the isolation performance with low loss. Implemented in 90-nm CMOS, the switch achieves a measured insertion loss and Tx-to-Rx isolation of 1.9 and 39 dB at 60 GHz, respectively. The input 1-dB gain compression point is 10 dBm at 60 GHz, and the return loss of the SPDT switch ports is greater than 10 dB at 48-67 GHz.",
"title": ""
},
{
"docid": "2cd7bbaf04f773c2248ad5e76cb5bf5d",
"text": "This paper presents a unified framework for Behavior Trees (BTs), a plan representation and execution tool. The available literature lacks the consistency and mathematical rigor required for robotic and control applications. Therefore, we approach this problem in two steps: first, reviewing the most popular BT literature exposing the aforementioned issues; second, describing our unified BT framework along with equivalence notions between BTs and Controlled Hybrid Dynamical Systems (CHDSs). This paper improves on the existing state of the art as it describes BTs in a more accurate and compact way, while providing insight about their actual representation capabilities. Lastly, we demonstrate the applicability of our framework to real systems scheduling open-loop actions in a grasping mission that involves a NAO robot and our BT library.",
"title": ""
},
{
"docid": "2259232b86607e964393c884340efe79",
"text": "Dynamic task allocation is an essential requirement for multi-robot systems functioning in unknown dynamic environments. It allows robots to change their behavior in response to environmental changes or actions of other robots in order to improve overall system performance. Emergent coordination algorithms for task allocation that use only local sensing and no direct communication between robots are attractive because they are robust and scalable. However, a lack of formal analysis tools makes emergent coordination algorithms difficult to design. In this paper we present a mathematical model of a general dynamic task allocation mechanism. Robots using this mechanism have to choose between two types of task, and the goal is to achieve a desired task division in the absence of explicit communication and global knowledge. Robots estimate the state of the environment from repeated local observations and decide which task to choose based on these observations. We model the robots and observations as stochastic processes and study the dynamics of individual robots and the collective behavior. We analyze the effect that the number of observations and the choice of decision functions have on the performance of the system. We validate the mathematical models on a multi-foraging scenario in a multi-robot system. We find that the model’s predictions agree very closely with experimental results from sensor-based simulations.",
"title": ""
},
{
"docid": "52c0c6d1deacdca44df5000b2b437c78",
"text": "This paper adopts a Bayesian approach to simultaneously learn both an optimal nonlinear classifier and a subset of predictor variables (or features) that are most relevant to the classification task. The approach uses heavy-tailed priors to promote sparsity in the utilization of both basis functions and features; these priors act as regularizers for the likelihood function that rewards good classification on the training data. We derive an expectation- maximization (EM) algorithm to efficiently compute a maximum a posteriori (MAP) point estimate of the various parameters. The algorithm is an extension of recent state-of-the-art sparse Bayesian classifiers, which in turn can be seen as Bayesian counterparts of support vector machines. Experimental comparisons using kernel classifiers demonstrate both parsimonious feature selection and excellent classification accuracy on a range of synthetic and benchmark data sets.",
"title": ""
},
{
"docid": "e5016e84bdbd016e880f12bfdfd99cb5",
"text": "The subject of this paper is a method which suppresses systematic errors of resolvers and optical encoders with sinusoidal line signals. The proposed method does not require any additional hardware and the computational efforts are minimal. Since this method does not cause any time delay, the dynamic of the speed control is not affected. By means of this new scheme, dynamic and smooth running characteristics of drive systems are improved considerably.",
"title": ""
},
{
"docid": "3ad19b3710faeda90db45e2f7cebebe8",
"text": "Motion planning is a fundamental problem in robotics. It comes in a variety of forms, but the simplest version is as follows. We are given a robot system B, which may consist of several rigid objects attached to each other through various joints, hinges, and links, or moving independently, and a 2D or 3D environment V cluttered with obstacles. We assume that the shape and location of the obstacles and the shape of B are known to the planning system. Given an initial placement Z1 and a final placement Z2 of B, we wish to determine whether there exists a collisionavoiding motion of B from Z1 to Z2, and, if so, to plan such a motion. In this simplified and purely geometric setup, we ignore issues such as incomplete information, nonholonomic constraints, control issues related to inaccuracies in sensing and motion, nonstationary obstacles, optimality of the planned motion, and so on. Since the early 1980s, motion planning has been an intensive area of study in robotics and computational geometry. In this chapter we will focus on algorithmic motion planning, emphasizing theoretical algorithmic analysis of the problem and seeking worst-case asymptotic bounds, and only mention briefly practical heuristic approaches to the problem. The majority of this chapter is devoted to the simplified version of motion planning, as stated above. Section 51.1 presents general techniques and lower bounds. Section 51.2 considers efficient solutions to a variety of specific moving systems with a small number of degrees of freedom. These efficient solutions exploit various sophisticated methods in computational and combinatorial geometry related to arrangements of curves and surfaces (Chapter 30). Section 51.3 then briefly discusses various extensions of the motion planning problem such as computing optimal paths with respect to various quality measures, computing the path of a tethered robot, incorporating uncertainty, moving obstacles, and more.",
"title": ""
},
{
"docid": "1f3f352c7584fb6ec1924ca3621fb1fb",
"text": "The National Firearms Forensic Intelligence Database (NFFID (c) Crown Copyright 2003-2008) was developed by The Forensic Science Service (FSS) as an investigative tool for collating and comparing information from items submitted to the FSS to provide intelligence reports for the police and relevant government agencies. The purpose of these intelligence reports was to highlight current firearm and ammunition trends and their distribution within the country. This study reviews all the trends that have been highlighted by NFFID between September 2003 and September 2008. A total of 8887 guns of all types have been submitted to the FSS over the last 5 years, where an average of 21% of annual submissions are converted weapons. The makes, models, and modes of conversion of these weapons are described in detail. The number of trends identified by NFFID shows that this has been a valuable tool in the analysis of firearms-related crime.",
"title": ""
},
{
"docid": "ac0a6e663caa3cb8cdcb1a144561e624",
"text": "A two-stage process is performed by human operator for cleaning windows. The first being the application of cleaning fluid, which is usually achieved by using a wetted applicator. The aim of this task being to cover the whole window area in the shortest possible time. This depends on two parameters: the size of the applicator and the path which the applicator travels without significantly overlapping previously wetted area. The second is the removal of cleaning fluid by a squeegee blade without spillage on to other areas of the facade or previously cleaned areas of glass. This is particularly difficult for example if the window is located on the roof of a building and cleaning is performed from inside by the human window cleaner.",
"title": ""
},
{
"docid": "3b32ade20fbdd7474ee10fc10d80d90a",
"text": "We report the modulation performance of micro-light-emitting diode arrays with peak emission ranging from 370 to 520 nm, and emitter diameters ranging from 14 to 84 μm. Bandwidths in excess of 400 MHz and error-free data transmission up to 1.1Gbit/s is shown. These devices are shown integrated with electronic drivers, allowing convenient control of individual array emitters. Transmission using such a device is shown at 512 Mbit/s.",
"title": ""
},
{
"docid": "6d61da17db5c16611409356bd79006c4",
"text": "We examine empirical evidence for religious prosociality, the hypothesis that religions facilitate costly behaviors that benefit other people. Although sociological surveys reveal an association between self-reports of religiosity and prosociality, experiments measuring religiosity and actual prosocial behavior suggest that this association emerges primarily in contexts where reputational concerns are heightened. Experimentally induced religious thoughts reduce rates of cheating and increase altruistic behavior among anonymous strangers. Experiments demonstrate an association between apparent profession of religious devotion and greater trust. Cross-cultural evidence suggests an association between the cultural presence of morally concerned deities and large group size in humans. We synthesize converging evidence from various fields for religious prosociality, address its specific boundary conditions, and point to unresolved questions and novel predictions.",
"title": ""
},
{
"docid": "55895dab9cc43c20aac200876da5722e",
"text": "We show the equivalence of two stateof-the-art models for link prediction/ knowledge graph completion: Nickel et al’s holographic embeddings and Trouillon et al.’s complex embeddings. We first consider a spectral version of the holographic embeddings, exploiting the frequency domain in the Fourier transform for efficient computation. The analysis of the resulting model reveals that it can be viewed as an instance of the complex embeddings with a certain constraint imposed on the initial vectors upon training. Conversely, any set of complex embeddings can be converted to a set of equivalent holographic embeddings.",
"title": ""
},
{
"docid": "a579a45a917999f48846a29cd09a92f4",
"text": "Over the last fifty years, the “Big Five” model of personality traits has become a standard in psychology, and research has systematically documented correlations between a wide range of linguistic variables and the Big Five traits. A distinct line of research has explored methods for automatically generating language that varies along personality dimensions. We present PERSONAGE (PERSONAlity GEnerator), the first highly parametrizable language generator for extraversion, an important aspect of personality. We evaluate two personality generation methods: (1) direct generation with particular parameter settings suggested by the psychology literature; and (2) overgeneration and selection using statistical models trained from judge’s ratings. Results show that both methods reliably generate utterances that vary along the extraversion dimension, according to human judges.",
"title": ""
},
{
"docid": "fe116849575dd91759a6c1ef7ed239f3",
"text": "We have recently seen many successful applications of recurrent neural networks (RNNs) on electronic medical records (EMRs), which contain histories of patients' diagnoses, medications, and other various events, in order to predict the current and future states of patients. Despite the strong performance of RNNs, it is often challenging for users to understand why the model makes a particular prediction. Such black-box nature of RNNs can impede its wide adoption in clinical practice. Furthermore, we have no established methods to interactively leverage users' domain expertise and prior knowledge as inputs for steering the model. Therefore, our design study aims to provide a visual analytics solution to increase interpretability and interactivity of RNNs via a joint effort of medical experts, artificial intelligence scientists, and visual analytics researchers. Following the iterative design process between the experts, we design, implement, and evaluate a visual analytics tool called RetainVis, which couples a newly improved, interpretable, and interactive RNN-based model called RetainEX and visualizations for users' exploration of EMR data in the context of prediction tasks. Our study shows the effective use of RetainVis for gaining insights into how individual medical codes contribute to making risk predictions, using EMRs of patients with heart failure and cataract symptoms. Our study also demonstrates how we made substantial changes to the state-of-the-art RNN model called RETAIN in order to make use of temporal information and increase interactivity. This study will provide a useful guideline for researchers that aim to design an interpretable and interactive visual analytics tool for RNNs.",
"title": ""
},
{
"docid": "0765510720f450736135efd797097450",
"text": "In this paper we discuss the re-orientation of human-computer interaction as an aesthetic field. We argue that mainstream approaches lack of general openness and ability to assess experience aspects of interaction, but that this can indeed be remedied. We introduce the concept of interface criticism as a way to turn the conceptual re-orientation into handles for practical design, and we present and discuss an interface criticism guide.",
"title": ""
},
{
"docid": "f33e96f81e63510f0a5e34609a390c2d",
"text": "Authentication based on passwords is used largely in applications for computer security and privacy. However, human actions such as choosing bad passwords and inputting passwords in an insecure way are regarded as “the weakest link” in the authentication chain. Rather than arbitrary alphanumeric strings, users tend to choose passwords either short or meaningful for easy memorization. With web applications and mobile apps piling up, people can access these applications anytime and anywhere with various devices. This evolution brings great convenience but also increases the probability of exposing passwords to shoulder surfing attacks. Attackers can observe directly or use external recording devices to collect users’ credentials. To overcome this problem, we proposed a novel authentication system PassMatrix, based on graphical passwords to resist shoulder surfing attacks. With a one-time valid login indicator and circulative horizontal and vertical bars covering the entire scope of pass-images, PassMatrix offers no hint for attackers to figure out or narrow down the password even they conduct multiple camera-based attacks. We also implemented a PassMatrix prototype on Android and carried out real user experiments to evaluate its memorability and usability. From the experimental result, the proposed system achieves better resistance to shoulder surfing attacks while maintaining usability.",
"title": ""
}
] |
scidocsrr
|
716fea4cbfe4446d6ae7a354264986be
|
Extracting Opinions, Opinion Holders, And Topics Expressed In Online News Media Text
|
[
{
"docid": "03b3d8220753570a6b2f21916fe4f423",
"text": "Recent systems have been developed for sentiment classification, opinion recogni tion, and opinion analysis (e.g., detect ing polarity and strength). We pursue an other aspect of opinion analysis: identi fying the sources of opinions, emotions, and sentiments. We view this problem as an information extraction task and adopt a hybrid approach that combines Con ditional Random Fields (Lafferty et al., 2001) and a variation of AutoSlog (Riloff, 1996a). While CRFs model source iden tification as a sequence tagging task, Au toSlog learns extraction patterns. Our re sults show that the combination of these two methods performs better than either one alone. The resulting system identifies opinion sources with precision and recall using a head noun matching measure, and precision and recall using an overlap measure.",
"title": ""
}
] |
[
{
"docid": "2ec0db3840965993e857b75bd87a43b7",
"text": "Light field cameras capture full spatio-angular information of the light field, and enable many novel photographic and scientific applications. It is often stated that there is a fundamental trade-off between spatial and angular resolution, but there has been limited understanding of this trade-off theoretically or numerically. Moreover, it is very difficult to evaluate the design of a light field camera because a new design is usually reported with its prototype and rendering algorithm, both of which affect resolution.\n In this article, we develop a light transport framework for understanding the fundamental limits of light field camera resolution. We first derive the prefiltering model of lenslet-based light field cameras. The main novelty of our model is in considering the full space-angle sensitivity profile of the photosensor—in particular, real pixels have nonuniform angular sensitivity, responding more to light along the optical axis rather than at grazing angles. We show that the full sensor profile plays an important role in defining the performance of a light field camera. The proposed method can model all existing lenslet-based light field cameras and allows to compare them in a unified way in simulation, independent of the practical differences between particular prototypes. We further extend our framework to analyze the performance of two rendering methods: the simple projection-based method and the inverse light transport process. We validate our framework with both flatland simulation and real data from the Lytro light field camera.",
"title": ""
},
{
"docid": "97a1d44956f339a678da4c7a32b63bf6",
"text": "As a first step towards agents learning to communicate about their visual environment, we propose a system that, given visual representations of a referent (CAT) and a context (SOFA), identifies their discriminative attributes, i.e., properties that distinguish them (has_tail). Moreover, although supervision is only provided in terms of discriminativeness of attributes for pairs, the model learns to assign plausible attributes to specific objects (SOFA-has_cushion). Finally, we present a preliminary experiment confirming the referential success of the predicted discriminative attributes.",
"title": ""
},
{
"docid": "f5a4d05c8b8c42cdca540794000afad5",
"text": "Design thinking (DT) is regarded as a system of three overlapping spaces—viability, desirability, and feasibility—where innovation increases when all three perspectives are addressed. Understanding how innovation within teams can be supported by DT methods and tools captivates the interest of business communities. This paper aims to examine how DT methods and tools foster innovation in teams. A case study approach, based on two workshops, examined three DT methods with a software tool. The findings support the use of DT methods and tools as a way of incubating ideas and creating innovative solutions within teams when team collaboration and software limitations are balanced. The paper proposes guidelines for utilizing DT methods and tools in innovation",
"title": ""
},
{
"docid": "bed3e58bc8e69242e6e00c7d13dabb93",
"text": "The convergence of online learning algorithms is analyzed using the tools of the stochastic approximation theory, and proved under very weak conditions. A general framework for online learning algorithms is first presented. This framework encompasses the most common online learning algorithms in use today, as illustrated by several examples. The stochastic approximation theory then provides general results describing the convergence of all these learning algorithms at once. Revised version, May 2018.",
"title": ""
},
{
"docid": "c3650b4a82790147a7ab911ce8b0c424",
"text": "OBJECTIVES\nTo demonstrate through a clinical case the systemic effetcss and complications that can arise after an acute gastric dilatation caused by an eating binge.\n\n\nCLINICAL CASE\nA young woman diagnosed of bulimia nervosa presents to the emergency room after a massive food intake. She shows important abdominal distention and refers inability to self-induce vomit. A few hours later she commences to show signs of hemodynamic instability and oliguria. A CT scan is performed; it shows bilateral renal infarctions due to compression of the abdominal aorta and some of its visceral branches.\n\n\nINTERVENTIONS\nThe evaluation procedures included quantification of the gastric volume by CT. A decompression gastrostomy was performed; it allowed the evacuation of a large amount of gastric content and restored blood supply to the abdomen, which improved renal perfusion.\n\n\nCONCLUSIONS\nCT is a basic diagnostic tool that not only allows us to quantify the degree of acute gastric dilatation but can also evaluate the integrity of the adjacent organs which may be suffering compression hypoperfusion.",
"title": ""
},
{
"docid": "d27d17176181b09a74c9c8115bc6a66e",
"text": "In this chapter, we provide definitions of Business Intelligence (BI) and outline the development of BI over time, particularly carving out current questions of BI. Different scenarios of BI applications are considered and business perspectives and views of BI on the business process are identified. Further, the goals and tasks of BI are discussed from a management and analysis point of view and a method format for BI applications is proposed. This format also gives an outline of the book’s contents. Finally, examples from different domain areas are introduced which are used for demonstration in later chapters of the book. 1.1 Definition of Business Intelligence If one looks for a definition of the term Business Intelligence (BI) one will find the first reference already in 1958 in a paper of H.P. Luhn (cf. [14]). Starting from the definition of the terms “Intelligence” as “the ability to apprehend the interrelationships of presented facts in such a way as to guide action towards a desired goal” and “Business” as “a collection of activities carried on for whatever purpose, be it science, technology, commerce, industry, law, government, defense, et cetera”, he specifies a business intelligence system as “[an] automatic system [that] is being developed to disseminate information to the various sections of any industrial, scientific or government organization.” The main task of Luhn’s system was automatic abstracting of documents and delivering this information to appropriate so-called action points. This definition did not come into effect for 30 years, and in 1989Howard Dresner coined the term Business Intelligence (BI) again. He introduced it as an umbrella term for a set of concepts and methods to improve business decision making, using systems based on facts. Many similar definitions have been given since. In Negash [18], important aspects of BI are emphasized by stating that “. . . business intelligence systems provide actionable information delivered at the right time, at the right location, and in the right form to assist decision makers.” Today one can find many different definitions which show that at the top level the intention of BI has not changed so much. For example, in [20] BI is defined as “an integrated, company-specific, IT-based total approach for managerial decision © Springer-Verlag Berlin Heidelberg 2015 W. Grossmann, S. Rinderle-Ma, Fundamentals of Business Intelligence, Data-Centric Systems and Applications, DOI 10.1007/978-3-662-46531-8_1 1",
"title": ""
},
{
"docid": "da695403ee969f71ea01a4b16477556f",
"text": "Data augmentation is a widely used technique in many machine learning tasks, such as image classification, to virtually enlarge the training dataset size and avoid overfitting. Traditional data augmentation techniques for image classification tasks create new samples from the original training data by, for example, flipping, distorting, adding a small amount of noise to, or cropping a patch from an original image. In this paper, we introduce a simple but surprisingly effective data augmentation technique for image classification tasks. With our technique, named SamplePairing, we synthesize a new sample from one image by overlaying another image randomly chosen from the training data (i.e., taking an average of two images for each pixel). By using two images randomly selected from the training set, we can generate N new samples from N training samples. This simple data augmentation technique significantly improved classification accuracy for all the tested datasets; for example, the top-1 error rate was reduced from 33.5% to 29.0% for the ILSVRC 2012 dataset with GoogLeNet and from 8.22% to 6.93% in the CIFAR-10 dataset. We also show that our SamplePairing technique largely improved accuracy when the number of samples in the training set was very small. Therefore, our technique is more valuable for tasks with a limited amount of training data, such as medical imaging tasks.",
"title": ""
},
{
"docid": "0d0d11c1e340e67939cfba0cde4783ed",
"text": "Recent research effort in poem composition has focused on the use of automatic language generation to produce a polished poem. A less explored question is how effectively a computer can serve as an interactive assistant to a poet. For this purpose, we built a web application that combines rich linguistic knowledge from classical Chinese philology with statistical natural language processing techniques. The application assists users in composing a ‘couplet’—a pair of lines in a traditional Chinese poem—by making suggestions for the next and corresponding characters. A couplet must meet a complicated set of requirements on phonology, syntax, and parallelism, which are challenging for an amateur poet to master. The application checks conformance to these requirements and makes suggestions for characters based on lexical, syntactic, and semantic properties. A distinguishing feature of the application is its extensive use of linguistic knowledge, enabling it to inform users of specific phonological principles in detail, and to explicitly model semantic parallelism, an essential characteristic of Chinese poetry. We evaluate the quality of poems composed solely with characters suggested by the application, and the coverage of its character suggestions. .................................................................................................................................................................................",
"title": ""
},
{
"docid": "4d3ba5824551b06c861fc51a6cae41a5",
"text": "This paper shows a gate driver design for 1.7 kV SiC MOSFET module as well a Rogowski-coil based current sensor for effective short circuit protection. The design begins with the power architecture selection for better common-mode noise immunity as the driver is subjected to high dv/dt due to the very high switching speed of the SiC MOSFET modules. The selection of the most appropriate gate driver IC is made to ensure the best performance and full functionalities of the driver, followed by the circuitry designs of paralleled external current booster, Soft Turn-Off, and Miller Clamp. In addition to desaturation, a high bandwidth PCB-based Rogowski current sensor is proposed to serve as a more effective method for the short circuit protection for the high-cost SiC MOSFET modules.",
"title": ""
},
{
"docid": "41f7d66c6e2c593eb7bda22c72a7c048",
"text": "Artificial neural networks are algorithms that can be used to perform nonlinear statistical modeling and provide a new alternative to logistic regression, the most commonly used method for developing predictive models for dichotomous outcomes in medicine. Neural networks offer a number of advantages, including requiring less formal statistical training, ability to implicitly detect complex nonlinear relationships between dependent and independent variables, ability to detect all possible interactions between predictor variables, and the availability of multiple training algorithms. Disadvantages include its \"black box\" nature, greater computational burden, proneness to overfitting, and the empirical nature of model development. An overview of the features of neural networks and logistic regression is presented, and the advantages and disadvantages of using this modeling technique are discussed.",
"title": ""
},
{
"docid": "98269ed4d72abecb6112c35e831fc727",
"text": "The goal of this article is to place the role that social media plays in collective action within a more general theoretical structure, using the events of the Arab Spring as a case study. The article presents two broad theoretical principles. The first is that one cannot understand the role of social media in collective action without first taking into account the political environment in which they operate. The second principle states that a significant increase in the use of the new media is much more likely to follow a significant amount of protest activity than to precede it. The study examines these two principles using political, media, and protest data from twenty Arab countries and the Palestinian Authority. The findings provide strong support for the validity of the claims.",
"title": ""
},
{
"docid": "333fd7802029f38bda35cd2077e7de59",
"text": "Human shape estimation is an important task for video editing, animation and fashion industry. Predicting 3D human body shape from natural images, however, is highly challenging due to factors such as variation in human bodies, clothing and viewpoint. Prior methods addressing this problem typically attempt to fit parametric body models with certain priors on pose and shape. In this work we argue for an alternative representation and propose BodyNet, a neural network for direct inference of volumetric body shape from a single image. BodyNet is an end-to-end trainable network that benefits from (i) a volumetric 3D loss, (ii) a multi-view re-projection loss, and (iii) intermediate supervision of 2D pose, 2D body part segmentation, and 3D pose. Each of them results in performance improvement as demonstrated by our experiments. To evaluate the method, we fit the SMPL model to our network output and show state-of-the-art results on the SURREAL and Unite the People datasets, outperforming recent approaches. Besides achieving state-of-the-art performance, our method also enables volumetric bodypart segmentation.",
"title": ""
},
{
"docid": "2cb0c74e57dea6fead692d35f8a8fac6",
"text": "Matching local image descriptors is a key step in many computer vision applications. For more than a decade, hand-crafted descriptors such as SIFT have been used for this task. Recently, multiple new descriptors learned from data have been proposed and shown to improve on SIFT in terms of discriminative power. This paper is dedicated to an extensive experimental evaluation of learned local features to establish a single evaluation protocol that ensures comparable results. In terms of matching performance, we evaluate the different descriptors regarding standard criteria. However, considering matching performance in isolation only provides an incomplete measure of a descriptors quality. For example, finding additional correct matches between similar images does not necessarily lead to a better performance when trying to match images under extreme viewpoint or illumination changes. Besides pure descriptor matching, we thus also evaluate the different descriptors in the context of image-based reconstruction. This enables us to study the descriptor performance on a set of more practical criteria including image retrieval, the ability to register images under strong viewpoint and illumination changes, and the accuracy and completeness of the reconstructed cameras and scenes. To facilitate future research, the full evaluation pipeline is made publicly available.",
"title": ""
},
{
"docid": "d208033e210816d7a9454749080587d9",
"text": "Graph classification is a problem with practical applications in many different domains. Most of the existing methods take the entire graph into account when calculating graph features. In a graphlet-based approach, for instance, the entire graph is processed to get the total count of different graphlets or subgraphs. In the real-world, however, graphs can be both large and noisy with discriminative patterns confined to certain regions in the graph only. In this work, we study the problem of attentional processing for graph classification. The use of attention allows us to focus on small but informative parts of the graph, avoiding noise in the rest of the graph. We present a novel RNN model, called the Graph Attention Model (GAM), that processes only a portion of the graph by adaptively selecting a sequence of “interesting” nodes. The model is equipped with an external memory component which allows it to integrate information gathered from different parts of the graph. We demonstrate the effectiveness of the model through various experiments.",
"title": ""
},
{
"docid": "d18faf207a0dbccc030e5dcc202949ab",
"text": "This manuscript conducts a comparison on modern object detection systems in their ability to detect multiple maritime vessel classes. Three highly scoring algorithms from the Pascal VOC Challenge, Histogram of Oriented Gradients by Dalal and Triggs, Exemplar-SVM by Malisiewicz, and Latent-SVM with Deformable Part Models by Felzenszwalb, were compared to determine performance of recognition within a specific category rather than the general classes from the original challenge. In all cases, the histogram of oriented edges was used as the feature set and support vector machines were used for classification. A summary and comparison of the learning algorithms is presented and a new image corpus of maritime vessels was collected. Precision-recall results show improved recognition performance is achieved when accounting for vessel pose. In particular, the deformable part model has the best performance when considering the various components of a maritime vessel.",
"title": ""
},
{
"docid": "4b69831f2736ae08049be81e05dd4046",
"text": "One of the most important aspects in playing the piano is using the appropriate fingers to facilitate movement and transitions. The fingering arrangement depends to a ce rtain extent on the size of the musician’s hand. We hav e developed an automatic fingering system that, given a sequence of pitches, suggests which fingers should be used. The output can be personalized to agree with t he limitations of the user’s hand. We also consider this system to be the base of a more complex future system: a score reduction system that will reduce orchestra scor e to piano scores. This paper describes: • “Vertical cost” model: the stretch induced by a given hand position. • “Horizontal cost” model: transition between two hand positions. • A system that computes low-cost fingering for a given piece of music. • A machine learning technique used to learn the appropriate parameters in the models.",
"title": ""
},
{
"docid": "9dfcba284d0bf3320d893d4379042225",
"text": "Botnet is a hybrid of previous threats integrated with a command and control system and hundreds of millions of computers are infected. Although botnets are widespread development, the research and solutions for botnets are not mature. In this paper, we present an overview of research on botnets. We discuss in detail the botnet and related research including infection mechanism, botnet malicious behavior, command and control models, communication protocols, botnet detection, and botnet defense. We also present a simple case study of IRC-based SpyBot.",
"title": ""
},
{
"docid": "7d0fb12fce0ef052684a8664a3f5c543",
"text": "In this paper, we consider a finite-horizon Markov decision process (MDP) for which the objective at each stage is to minimize a quantile-based risk measure (QBRM) of the sequence of future costs; we call the overall objective a dynamic quantile-based risk measure (DQBRM). In particular, we consider optimizing dynamic risk measures where the one-step risk measures are QBRMs, a class of risk measures that includes the popular value at risk (VaR) and the conditional value at risk (CVaR). Although there is considerable theoretical development of risk-averse MDPs in the literature, the computational challenges have not been explored as thoroughly. We propose datadriven and simulation-based approximate dynamic programming (ADP) algorithms to solve the risk-averse sequential decision problem. We address the issue of inefficient sampling for risk applications in simulated settings and present a procedure, based on importance sampling, to direct samples toward the “risky region” as the ADP algorithm progresses. Finally, we show numerical results of our algorithms in the context of an application involving risk-averse bidding for energy storage.",
"title": ""
},
{
"docid": "328aad76b94b34bf49719b98ae391cfe",
"text": "We discuss methods for statistically analyzing the output from stochastic discrete-event or Monte Carlo simulations. Terminating and steady-state simulations are considered.",
"title": ""
},
{
"docid": "fb0e9f6f58051b9209388f81e1d018ff",
"text": "Because many databases contain or can be embellished with structural information, a method for identifying interesting and repetitive substructures is an essential component to discovering knowledge in such databases. This paper describes the SUBDUE system, which uses the minimum description length (MDL) principle to discover substructures that compress the database and represent structural concepts in the data. By replacing previously-discovered substructures in the data, multiple passes of SUBDUE produce a hierarchical description of the structural regularities in the data. Inclusion of background knowledgeguides SUBDUE toward appropriate substructures for a particular domain or discovery goal, and the use of an inexact graph match allows a controlled amount of deviations in the instance of a substructure concept. We describe the application of SUBDUE to a variety of domains. We also discuss approaches to combining SUBDUE with non-structural discovery systems.",
"title": ""
}
] |
scidocsrr
|
456ce2d909fc268a151fa6967cbfaa11
|
Hierarchical Spatio-Temporal Pattern Discovery and Predictive Modeling
|
[
{
"docid": "55032007199b5126480d432b1c45db4a",
"text": "Concern about national security has increased after the 26/11 Mumbai attack. In this paper we look at the use of missing value and clustering algorithm for a data mining approach to help predict the crimes patterns and fast up the process of solving crime. We will concentrate on MV algorithm and Apriori algorithm with some enhancements to aid in the process of filling the missing value and identification of crime patterns. We applied these techniques to real crime data. We also use semisupervised learning technique in this paper for knowledge discovery from the crime records and to help increase the predictive accuracy. General Terms Crime data mining, MV Algorithm, Apriori Algorithm",
"title": ""
},
{
"docid": "cc1ae8daa1c1c4ee2b3b4a65ef48b6f5",
"text": "The use of entropy as a distance measure has several benefits. Amongst other things it provides a consistent approach to handling of symbolic attributes, real valued attributes and missing values. The approach of taking all possible transformation paths is discussed. We describe K*, an instance-based learner which uses such a measure, and results are presented which compare favourably with several machine learning algorithms.",
"title": ""
}
] |
[
{
"docid": "c64dd1051c5b6892df08813e38285843",
"text": "Diabetes has emerged as a major healthcare problem in India. Today Approximately 8.3 % of global adult population is suffering from Diabetes. India is one of the most diabetic populated country in the world. Today the technologies available in the market are invasive methods. Since invasive methods cause pain, time consuming, expensive and there is a potential risk of infectious diseases like Hepatitis & HIV spreading and continuous monitoring is therefore not possible. Now a days there is a tremendous increase in the use of electrical and electronic equipment in the medical field for clinical and research purposes. Thus biomedical equipment’s have a greater role in solving medical problems and enhance quality of life. Hence there is a great demand to have a reliable, instantaneous, cost effective and comfortable measurement system for the detection of blood glucose concentration. Non-invasive blood glucose measurement device is one such which can be used for continuous monitoring of glucose levels in human body.",
"title": ""
},
{
"docid": "023514bca28bf91e74ebcf8e473b4573",
"text": "As a result of technological advances on robotic systems, electronic sensors, and communication techniques, the production of unmanned aerial vehicle (UAV) systems has become possible. Their easy installation and flexibility led these UAV systems to be used widely in both the military and civilian applications. Note that the capability of one UAV is however limited. Nowadays, a multi-UAV system is of special interest due to the ability of its associate UAV members either to coordinate simultaneous coverage of large areas or to cooperate to achieve common goals / targets. This kind of cooperation / coordination requires reliable communication network with a proper network model to ensure the exchange of both control and data packets among UAVs. Such network models should provide all-time connectivity to avoid the dangerous failures or unintended consequences. Thus, the multi-UAV system relies on communication to operate. In this paper, current literature about multi-UAV system regarding its concepts and challenges is presented. Also, both the merits and drawbacks of the available networking architectures and models in a multi-UAV system are presented. Flying Ad Hoc Network (FANET) is moreover considered as a sophisticated type of wireless ad hoc network among UAVs, which solved the communication problems into other network models. Along with the FANET unique features, challenges and open issues are also discussed.",
"title": ""
},
{
"docid": "3284431912c05706fe61dfc56e2a38a5",
"text": "In recent years social media have become indispensable tools for information dissemination, operating in tandem with traditional media outlets such as newspapers, and it has become critical to understand the interaction between the new and old sources of news. Although social media as well as traditional media have attracted attention from several research communities, most of the prior work has been limited to a single medium. In addition temporal analysis of these sources can provide an understanding of how information spreads and evolves. Modeling temporal dynamics while considering multiple sources is a challenging research problem. In this paper we address the problem of modeling text streams from two news sources - Twitter and Yahoo! News. Our analysis addresses both their individual properties (including temporal dynamics) and their inter-relationships. This work extends standard topic models by allowing each text stream to have both local topics and shared topics. For temporal modeling we associate each topic with a time-dependent function that characterizes its popularity over time. By integrating the two models, we effectively model the temporal dynamics of multiple correlated text streams in a unified framework. We evaluate our model on a large-scale dataset, consisting of text streams from both Twitter and news feeds from Yahoo! News. Besides overcoming the limitations of existing models, we show that our work achieves better perplexity on unseen data and identifies more coherent topics. We also provide analysis of finding real-world events from the topics obtained by our model.",
"title": ""
},
{
"docid": "4cfe999fa7b2594327b6109084f0164f",
"text": "A large number of post-transcriptional modifications of transfer RNAs (tRNAs) have been described in prokaryotes and eukaryotes. They are known to influence their stability, turnover, and chemical/physical properties. A specific subset of tRNAs contains a thiolated uridine residue at the wobble position to improve the codon-anticodon interaction and translational accuracy. The proteins involved in tRNA thiolation are reminiscent of prokaryotic sulfur transfer reactions and of the ubiquitylation process in eukaryotes. In plants, some of the proteins involved in this process have been identified and show a high degree of homology to their non-plant equivalents. For other proteins, the identification of the plant homologs is much less clear, due to the low conservation in protein sequence. This manuscript describes the identification of CTU2, the second CYTOPLASMIC THIOURIDYLASE protein of Arabidopsis thaliana. CTU2 is essential for tRNA thiolation and interacts with ROL5, the previously identified CTU1 homolog of Arabidopsis. CTU2 is ubiquitously expressed, yet its activity seems to be particularly important in root tissue. A ctu2 knock-out mutant shows an alteration in root development. The analysis of CTU2 adds a new component to the so far characterized protein network involved in tRNA thiolation in Arabidopsis. CTU2 is essential for tRNA thiolation as a ctu2 mutant fails to perform this tRNA modification. The identified Arabidopsis CTU2 is the first CTU2-type protein from plants to be experimentally verified, which is important considering the limited conservation of these proteins between plant and non-plant species. Based on the Arabidopsis protein sequence, CTU2-type proteins of other plant species can now be readily identified.",
"title": ""
},
{
"docid": "ef96b4d9cac097af65fdfbb61d0fc847",
"text": "Altering image’s color is one of the most common tasks in image processing. However, most of existing methods are aimed to perform global color transfer. This usually means that the whole image is affected. But in many cases colors of only a part of an image needs changing, so it is important that the rest of the image remains unmodified. In this article we offer a fast and simple interactive algorithm based on local color statistics that allows altering color of only a part of an image, preserving image’s details and natural look.",
"title": ""
},
{
"docid": "28c0afcde94ba0fcf39678cba0b5999a",
"text": "To describe the aponeurotic expansion of the supraspinatus tendon with anatomic correlations and determine its prevalence in a series of patients imaged with MRI. In the first part of this HIPAA-compliant and IRB-approved study, we retrospectively reviewed 150 consecutive MRI studies of the shoulder obtained on a 1.5-T system. The aponeurotic expansion at the level of the bicipital groove was classified as: not visualized (type 0), flat-shaped (type 1), oval-shaped and less than 50 % the size of the adjacent long head of the biceps section (type 2A), or oval-shaped and more than 50 % the size of the adjacent long head of the biceps section (type 2B). In the second part of this study, we examined both shoulders of 25 cadavers with ultrasound. When aponeurotic expansion was seen at US, a dissection was performed to characterize its origin and termination. An aponeurotic expansion of the supraspinatus located anterior and lateral to the long head of the biceps in its groove was clearly demonstrated in 49 % of the shoulders with MRI. According to our classification, its shape was type 1 in 35 %, type 2A in 10 % and type 2B in 4 %. This structure was also identified in 28 of 50 cadaveric shoulders with ultrasound and confirmed at dissection in 10 cadavers (20 shoulders). This structure originated from the most anterior and superficial aspect of the supraspinatus tendon and inserted distally on the pectoralis major tendon. The aponeurotic expansion of the supraspinatus tendon can be identified with MRI or ultrasound in about half of the shoulders. It courses anteriorly and laterally to the long head of the biceps tendon, outside its synovial sheath.",
"title": ""
},
{
"docid": "2ffc4bb9de1fe6759b6c1d441c4d8854",
"text": "One of the long-standing tasks in computer vision is to use a single 2-D view of an object in order to produce its 3-D shape. Recovering the lost dimension in this process has been the goal of classic shape-from-X methods, but often the assumptions made in those works are quite limiting to be useful for general 3-D objects. This problem has been recently addressed with deep learning methods containing a 2-D (convolution) encoder followed by a 3-D (deconvolution) decoder. These methods have been reasonably successful, but memory and run time constraints impose a strong limitation in terms of the resolution of the reconstructed 3-D shapes. In particular, state-of-the-art methods are able to reconstruct 3-D shapes represented by volumes of at most 323 voxels using state-of-the-art desktop computers. In this work, we present a scalable 2-D single view to 3-D volume reconstruction deep learning method, where the 3-D (deconvolution) decoder is replaced by a simple inverse discrete cosine transform (IDCT) decoder. Our simpler architecture has an order of magnitude faster inference when reconstructing 3-D volumes compared to the convolution-deconvolutional model, an exponentially smaller memory complexity while training and testing, and a sub-linear run-time training complexity with respect to the output volume size. We show on benchmark datasets that our method can produce high-resolution reconstructions with state of the art accuracy.",
"title": ""
},
{
"docid": "dbc11b8d76eb527444ead3b2168aa2c2",
"text": "In this work, we present a novel approach to ontology reasoning that is based on deep learning rather than logic-based formal reasoning. To this end, we introduce a new model for statistical relational learning that is built upon deep recursive neural networks, and give experimental evidence that it can easily compete with, or even outperform, existing logic-based reasoners on the task of ontology reasoning. More precisely, we compared our implemented system with one of the best logic-based ontology reasoners at present, RDFox, on a number of large standard benchmark datasets, and found that our system attained high reasoning quality, while being up to two orders of magnitude faster.",
"title": ""
},
{
"docid": "6ef6cbb60da56bfd53ae945480908d3c",
"text": "OBJECTIVE\nIn multidisciplinary prenatal diagnosis centers, the search for a tetrasomy 12p mosaic is requested following the discovery of a diaphragmatic hernia in the antenatal period. Thus, the series of Pallister Killian syndromes (PKS: OMIM 601803) probably overestimate the prevalence of diaphragmatic hernia in this syndrome to the detriment of other morphological abnormalities.\n\n\nMETHODS\nA multicenter retrospective study was conducted with search for assistance from members of the French society for Fetal Pathology. For each identified case, we collected all antenatal and postnatal data. Antenatal data were compared with data from the clinicopathological examination to assess the adequacy of sonographic signs of PKS. A review of the literature on antenatal morphological anomalies in case of PKS completed the study.\n\n\nRESULTS\nTen cases were referred to us: 7 had cytogenetic confirmation and 6 had ultrasound screening. In the prenatal as well as post mortem period, the most common sign is facial dysmorphism (5 cases/6). A malformation of limbs is reported in half of the cases (3 out of 6). Ultrasound examination detected craniofacial dysmorphism in 5 cases out of 6. We found 1 case of left diaphragmatic hernia. Our results are in agreement with the malformation spectrum described in the literature.\n\n\nCONCLUSION\nSome malformation associations could evoke a SPK without classical diaphragmatic hernia.",
"title": ""
},
{
"docid": "ddae0422527c45e37f9a5b204cb0580f",
"text": "Several studies have reported high efficacy and safety of artemisinin-based combination therapy (ACT) mostly under strict supervision of drug intake and limited to children less than 5 years of age. Patients over 5 years of age are usually not involved in such studies. Thus, the findings do not fully reflect the reality in the field. This study aimed to assess the effectiveness and safety of ACT in routine treatment of uncomplicated malaria among patients of all age groups in Nanoro, Burkina Faso. A randomized open label trial comparing artesunate–amodiaquine (ASAQ) and artemether–lumefantrine (AL) was carried out from September 2010 to October 2012 at two primary health centres (Nanoro and Nazoanga) of Nanoro health district. A total of 680 patients were randomized to receive either ASAQ or AL without any distinction by age. Drug intake was not supervised as pertains in routine practice in the field. Patients or their parents/guardians were advised on the time and mode of administration for the 3 days treatment unobserved at home. Follow-up visits were performed on days 3, 7, 14, 21, and 28 to evaluate clinical and parasitological resolution of their malaria episode as well as adverse events. PCR genotyping of merozoite surface proteins 1 and 2 (msp-1, msp-2) was used to differentiate recrudescence and new infection. By day 28, the PCR corrected adequate clinical and parasitological response was 84.1 and 77.8 % respectively for ASAQ and AL. The cure rate was higher in older patients than in children under 5 years old. The risk of re-infection by day 28 was higher in AL treated patients compared with those receiving ASAQ (p < 0.00001). Both AL and ASAQ treatments were well tolerated. This study shows a lowering of the efficacy when drug intake is not directly supervised. This is worrying as both rates are lower than the critical threshold of 90 % required by the WHO to recommend the use of an anti-malarial drug in a treatment policy. Trial registration: NCT01232530",
"title": ""
},
{
"docid": "82e282703eeed354d2e5dc39992b779c",
"text": "Prediction of natural disasters and their consequences is difficult due to the uncertainties and complexity of multiple related factors. This article explores the use of domain knowledge and spatial data to construct a Bayesian network (BN) that facilitates the integration of multiple factors and quantification of uncertainties within a consistent system for assessment of catastrophic risk. A BN is chosen due to its advantages such as merging multiple source data and domain knowledge in a consistent system, learning from the data set, inference with missing data, and support of decision making. A key advantage of our methodology is the combination of domain knowledge and learning from the data to construct a robust network. To improve the assessment, we employ spatial data analysis and data mining to extend the training data set, select risk factors, and fine-tune the network. Another major advantage of our methodology is the integration of an optimal discretizer, informative feature selector, learners, search strategies for local topologies, and Bayesian model averaging. These techniques all contribute to a robust prediction of risk probability of natural disasters. In the flood disaster's study, our methodology achieved a better probability of detection of high risk, a better precision, and a better ROC area compared with other methods, using both cross-validation and prediction of catastrophic risk based on historic data. Our results suggest that BN is a good alternative for risk assessment and as a decision tool in the management of catastrophic risk.",
"title": ""
},
{
"docid": "6ce860678cbee5db5940cd1bb161525e",
"text": "We propose a novel method for the multi-view reconstruction problem. Surfaces which do not have direct support in the input 3D point cloud and hence need not be photo-consistent but represent real parts of the scene (e.g. low-textured walls, windows, cars) are important for achieving complete reconstructions. We augmented the existing Labatut CGF 2009 method with the ability to cope with these difficult surfaces just by changing the t-edge weights in the construction of surfaces by a minimal s-t cut. Our method uses Visual-Hull to reconstruct the difficult surfaces which are not sampled densely enough by the input 3D point cloud. We demonstrate importance of these surfaces on several real-world data sets. We compare our improvement to our implementation of the Labatut CGF 2009 method and show that our method can considerably better reconstruct difficult surfaces while preserving thin structures and details in the same quality and computational time.",
"title": ""
},
{
"docid": "db98068f4c69b2389c9ff1bc0ade4e6f",
"text": "We infiltrate the ASIC development chain by inserting a small denial-of-service (DoS) hardware Trojan at the fabrication design phase into an existing VLSI circuit, thereby simulating an adversary at a semiconductor foundry. Both the genuine and the altered ASICs have been fabricated using a 180 nm CMOS process. The Trojan circuit adds an overhead of only 0.5% to the original design. In order to detect the hardware Trojan, we perform side-channel analyses and apply IC-fingerprinting techniques using templates, principal component analysis (PCA), and support vector machines (SVMs). As a result, we were able to successfully identify and classify all infected ASICs from non-infected ones. To the best of our knowledge, this is the first hardware Trojan manufactured as an ASIC and has successfully been analyzed using side channels.",
"title": ""
},
{
"docid": "a44ad77cec2b25cb1c42cb0e9e491e39",
"text": "We present a new and novel continuum robot, built from contracting pneumatic muscles. The robot has a continuous compliant backbone, achieved via three independently controlled serially connected three degree of freedom sections, for a total of nine degrees of freedom. We detail the design, construction, and initial testing of the robot. The use of contracting muscles, in contrast to previous comparable designs featuring expanding muscles, is well-suited to use of the robot as an active hook in dynamic manipulation tasks. We describe experiments using the robot in this novel manipulation mode.",
"title": ""
},
{
"docid": "55aea20148423bdb7296addac847d636",
"text": "This paper describes an underwater sensor network with dual communication and support for sensing and mobility. The nodes in the system are connected acoustically for broadcast communication using an acoustic modem we developed. For higher point to point communication speed the nodes are networked optically using custom built optical modems. We describe the hardware details of the underwater sensor node and the communication and networking protocols. Finally, we present and discuss the results from experiments with this system.",
"title": ""
},
{
"docid": "cacef3b17bafadd25cf9a49e826ee066",
"text": "Road accidents are frequent and many cause casualties. Fast handling can minimize the number of deaths from traffic accidents. In addition to victims of traffic accidents, there are also patients who need emergency handling of the disease he suffered. One of the first help that can be given to the victim or patient is to use an ambulance equipped with medical personnel and equipment needed. The availability of ambulance and accurate information about victims and road conditions can help the first aid process for victims or patients. Supportive treatment can be done to deal with patients by determining the best route (nearest and fastest) to the nearest hospital. The best route can be known by utilizing the collaboration between the Dijkstra algorithm and the Floyd-warshall algorithm. This application applies Dijkstra's algorithm to determine the fastest travel time to the nearest hospital. The Floyd-warshall algorithm is implemented to determine the closest distance to the hospital. Data on some nearby hospitals will be collected by the system using Dijkstra's algorithm and then the system will calculate the fastest distance based on the last traffic condition using the Floyd-warshall algorithm to determine the best route to the nearest hospital recommended by the system. This application is built with the aim of providing support for the first handling process to the victim or the emergency patient by giving the ambulance calling report and determining the best route to the nearest hospital.",
"title": ""
},
{
"docid": "0dc9f8f65efd02f16fea77d910fd73c7",
"text": "The visual system is the most studied sensory pathway, which is partly because visual stimuli have rather intuitive properties. There are reasons to think that the underlying principle ruling coding, however, is the same for vision and any other type of sensory signal, namely the code has to satisfy some notion of optimality--understood as minimum redundancy or as maximum transmitted information. Given the huge variability of natural stimuli, it would seem that attaining an optimal code is almost impossible; however, regularities and symmetries in the stimuli can be used to simplify the task: symmetries allow predicting one part of a stimulus from another, that is, they imply a structured type of redundancy. Optimal coding can only be achieved once the intrinsic symmetries of natural scenes are understood and used to the best performance of the neural encoder. In this paper, we review the concepts of optimal coding and discuss the known redundancies and symmetries that visual scenes have. We discuss in depth the only approach which implements the three of them known so far: translational invariance, scale invariance and multiscaling. Not surprisingly, the resulting code possesses features observed in real visual systems in mammals.",
"title": ""
},
{
"docid": "b7d181503afa8bcb36b2428fcf3655bc",
"text": "Since the IEEE 1609/WAVE standards were published, much research has continued on validation and optimization. However, precise simulation models of these standards are lacking recently, especially within the ns-3 network simulator. In this paper, we present the ns-3 implementation details of the IEEE 1609.4 and IEEE 802.11p standards which are key elements of the WAVE MAC layer. Moreover we discuss some implementation issues and describe our solutions. Lastly, we also analyze and evaluate the performance of the WAVE MAC layer with the implemented model. Our simulation results show that multiple channel operation specified in the WAVE standards could impact vehicular wireless communication differently, depending on the different scenarios, and the results should be considered carefully during the development of VANET applications.",
"title": ""
},
{
"docid": "bd6115cbcf62434f38ca4b43480b7c5a",
"text": "Most existing person re-identification methods focus on finding similarities between persons between pairs of cameras (camera pairwise re-identification) without explicitly maintaining consistency of the results across the network. This may lead to infeasible associations when results from different camera pairs are combined. In this paper, we propose a network consistent re-identification (NCR) framework, which is formulated as an optimization problem that not only maintains consistency in re-identification results across the network, but also improves the camera pairwise re-identification performance between all the individual camera pairs. This can be solved as a binary integer programing problem, leading to a globally optimal solution. We also extend the proposed approach to the more general case where all persons may not be present in every camera. Using two benchmark datasets, we validate our approach and compare against state-of-the-art methods.",
"title": ""
},
{
"docid": "e56e6fd8620ab8c76abc73c379d1fdd5",
"text": "Article history: Received 7 August 2015 Received in revised form 26 January 2016 Accepted 1 April 2016 Available online 7 April 2016 The emergence of social commerce has brought substantial changes to both businesses and consumers. Hence, understanding consumer behavior in the context of social commerce has become critical for companies that aim to better influence consumers and harness the power of their social ties. Given that research on this issue is new and largely fragmented, it will be theoretically important to evaluate what has been studied and derive meaningful insights through a structured review of the literature. In this study, we conduct a systematic review of social commerce studies to explicate how consumers behave on social networking sites. We classify these studies, discuss noteworthy theories, and identify important research methods. More importantly, we draw upon the stimulus–organism–response model and the five-stage consumer decision-making process to propose an integrative framework for understanding consumer behavior in this context. We believe that this framework can provide a useful basis for future social commerce research. © 2016 Elsevier B.V. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
a38492ed7d3a6ca0d75054765f346f6f
|
Personalized Prognostic Models for Oncology: A Machine Learning Approach
|
[
{
"docid": "a88c0d45ca7859c050e5e76379f171e6",
"text": "Cancer and other chronic diseases have constituted (and will do so at an increasing pace) a significant portion of healthcare costs in the United States in recent years. Although prior research has shown that diagnostic and treatment recommendations might be altered based on the severity of comorbidities, chronic diseases are still being investigated in isolation from one another in most cases. To illustrate the significance of concurrent chronic diseases in the course of treatment, this study uses SEER’s cancer data to create two comorbid data sets: one for breast and female genital cancers and another for prostate and urinal cancers. Several popular machine learning techniques are then applied to the resultant data sets to build predictive models. Comparison of the results shows that having more information about comorbid conditions of patients can improve models’ predictive power, which in turn, can help practitioners make better diagnostic and treatment decisions. Therefore, proper identification, recording, and use of patients’ comorbidity status can potentially lower treatment costs and ease the healthcare related economic challenges.",
"title": ""
}
] |
[
{
"docid": "30dffba83b24e835a083774aa91e6c59",
"text": "Wikipedia is one of the most popular sites on the Web, with millions of users relying on it to satisfy a broad range of information needs every day. Although it is crucial to understand what exactly these needs are in order to be able to meet them, little is currently known about why users visit Wikipedia. The goal of this paper is to fill this gap by combining a survey of Wikipedia readers with a log-based analysis of user activity. Based on an initial series of user surveys, we build a taxonomy of Wikipedia use cases along several dimensions, capturing users’ motivations to visit Wikipedia, the depth of knowledge they are seeking, and their knowledge of the topic of interest prior to visiting Wikipedia. Then, we quantify the prevalence of these use cases via a large-scale user survey conducted on live Wikipedia with almost 30,000 responses. Our analyses highlight the variety of factors driving users to Wikipedia, such as current events, media coverage of a topic, personal curiosity, work or school assignments, or boredom. Finally, we match survey responses to the respondents’ digital traces in Wikipedia’s server logs, enabling the discovery of behavioral patterns associated with specific use cases. For instance, we observe long and fast-paced page sequences across topics for users who are bored or exploring randomly, whereas those using Wikipedia for work or school spend more time on individual articles focused on topics such as science. Our findings advance our understanding of reader motivations and behavior on Wikipedia and can have implications for developers aiming to improve Wikipedia’s user experience, editors striving to cater to their readers’ needs, third-party services (such as search engines) providing access to Wikipedia content, and researchers aiming to build tools such as recommendation engines.",
"title": ""
},
{
"docid": "3aa4fd13689907ae236bd66c8a7ed8c8",
"text": "Biomedical named entity recognition(BNER) is a crucial initial step of information extraction in biomedical domain. The task is typically modeled as a sequence labeling problem. Various machine learning algorithms, such as Conditional Random Fields (CRFs), have been successfully used for this task. However, these state-of-the-art BNER systems largely depend on hand-crafted features. We present a recurrent neural network (RNN) framework based on word embeddings and character representation. On top of the neural network architecture, we use a CRF layer to jointly decode labels for the whole sentence. In our approach, contextual information from both directions and long-range dependencies in the sequence, which is useful for this task, can be well modeled by bidirectional variation and long short-term memory (LSTM) unit, respectively. Although our models use word embeddings and character embeddings as the only features, the bidirectional LSTM-RNN (BLSTM-RNN) model achieves state-of-the-art performance — 86.55% F1 on BioCreative II gene mention (GM) corpus and 73.79% F1 on JNLPBA 2004 corpus. Our neural network architecture can be successfully used for BNER without any manual feature engineering. Experimental results show that domain-specific pre-trained word embeddings and character-level representation can improve the performance of the LSTM-RNN models. On the GM corpus, we achieve comparable performance compared with other systems using complex hand-crafted features. Considering the JNLPBA corpus, our model achieves the best results, outperforming the previously top performing systems. The source code of our method is freely available under GPL at https://github.com/lvchen1989/BNER .",
"title": ""
},
{
"docid": "eb6572344dbaf8e209388f888fba1c10",
"text": "[Purpose] The present study was performed to evaluate the changes in the scapular alignment, pressure pain threshold and pain in subjects with scapular downward rotation after 4 weeks of wall slide exercise or sling slide exercise. [Subjects and Methods] Twenty-two subjects with scapular downward rotation participated in this study. The alignment of the scapula was measured using radiographic analysis (X-ray). Pain and pressure pain threshold were assessed using visual analogue scale and digital algometer. Patients were assessed before and after a 4 weeks of exercise. [Results] In the within-group comparison, the wall slide exercise group showed significant differences in the resting scapular alignment, pressure pain threshold, and pain after four weeks. The between-group comparison showed that there were significant differences between the wall slide group and the sling slide group after four weeks. [Conclusion] The results of this study found that the wall slide exercise may be effective at reducing pain and improving scapular alignment in subjects with scapular downward rotation.",
"title": ""
},
{
"docid": "955c7d91d4463fc50feb93320b7c370c",
"text": "The major problem in the use of the Web is that of searching for relevant information that meets the expectations of a user. This problem increases every day and especially with the emergence of web 2.0 or social web. Our paper, therefore, ignores the disadvantage of social web and operates it to rich user profile.",
"title": ""
},
{
"docid": "96d6173f58e36039577c8e94329861b2",
"text": "Reverse Turing tests, or CAPTCHAs, have become an ubiquitous defense used to protect open Web resources from being exploited at scale. An effective CAPTCHA resists existing mechanistic software solving, yet can be solved with high probability by a human being. In response, a robust solving ecosystem has emerged, reselling both automated solving technology and realtime human labor to bypass these protections. Thus, CAPTCHAs can increasingly be understood and evaluated in purely economic terms; the market price of a solution vs the monetizable value of the asset being protected. We examine the market-side of this question in depth, analyzing the behavior and dynamics of CAPTCHA-solving service providers, their price performance, and the underlying labor markets driving this economy.",
"title": ""
},
{
"docid": "1cbf55610014ef23e4015c07f5846619",
"text": "Variation of the system parameters and external disturbances always happen in the CNC servo system. With a traditional PID controller, it will cause large overshoot or poor stability. In this paper, a fuzzy-PID controller is proposed in order to improve the performance of the servo system. The proposed controller incorporates the advantages of PID control which can eliminate the steady-state error, and the advantages of fuzzy logic such as simple design, no need of an accurate mathematical model and some adaptability to nonlinearity and time-variation. The fuzzy-PID controller accepts the error (e) and error change(ec) as inputs ,while the parameters kp, ki, kd as outputs. Control rules of the controller are established based on experience so that self-regulation of the values of PID parameters is achieved. A simulation model of position servo system is constructed in Matlab/Simulink module based on a high-speed milling machine researched in our institute. By comparing the traditional PID controller and the fuzzy-PID controller, the simulation results show that the system has stronger robustness and disturbance rejection capability with the latter controller which can meet the performance requirements of the CNC position servo system better",
"title": ""
},
{
"docid": "e146a0534b5a81ac6f332332056ae58c",
"text": "Paraphrase identification is an important topic in artificial intelligence and this task is often tackled as sequence alignment and matching. Traditional alignment methods take advantage of attention mechanism, which is a soft-max weighting technique. Weighting technique could pick out the most similar/dissimilar parts, but is weak in modeling the aligned unmatched parts, which are the crucial evidence to identify paraphrase. In this paper, we empower neural architecture with Hungarian algorithm to extract the aligned unmatched parts. Specifically, first, our model applies BiLSTM to parse the input sentences into hidden representations. Then, Hungarian layer leverages the hidden representations to extract the aligned unmatched parts. Last, we apply cosine similarity to metric the aligned unmatched parts for a final discrimination. Extensive experiments show that our model outperforms other baselines, substantially and significantly.",
"title": ""
},
{
"docid": "cd35c6e2763b634d23de1903a3261c59",
"text": "We investigate the Belousov-Zhabotinsky (BZ) reaction in an attempt to establish a basis for computation using chemical oscillators coupled via inhibition. The system consists of BZ droplets suspended in oil. Interdrop coupling is governed by the non-polar communicator of inhibition, Br2. We consider a linear arrangement of three droplets to be a NOR gate, where the center droplet is the output and the other two are inputs. Oxidation spikes in the inputs, which we define to be TRUE, cause a delay in the next spike of the output, which we read to be FALSE. Conversely, when the inputs do not spike (FALSE) there is no delay in the output (TRUE), thus producing the behavior of a NOR gate. We are able to reliably produce NOR gates with this behavior in microfluidic experiment.",
"title": ""
},
{
"docid": "35ac15f19cefd103f984519e046e407c",
"text": "This paper presents a highly sensitive sensor for crack detection in metallic surfaces. The sensor is inspired by complementary split-ring resonators which have dimensions much smaller than the excitation’s wavelength. The entire sensor is etched in the ground plane of a microstrip line and fabricated using printed circuit board technology. Compared to available microwave techniques, the sensor introduced here has key advantages including high sensitivity, increased dynamic range, spatial resolution, design simplicity, selectivity, and scalability. Experimental measurements showed that a surface crack having 200-<inline-formula> <tex-math notation=\"LaTeX\">$\\mu \\text{m}$ </tex-math></inline-formula> width and 2-mm depth gives a shift in the resonance frequency of 1.5 GHz. This resonance frequency shift exceeds what can be achieved using other sensors operating in the low GHz frequency regime by a significant margin. In addition, using numerical simulation, we showed that the new sensor is able to resolve a 10-<inline-formula> <tex-math notation=\"LaTeX\">$\\mu \\text{m}$ </tex-math></inline-formula>-wide crack (equivalent to <inline-formula> <tex-math notation=\"LaTeX\">$\\lambda $ </tex-math></inline-formula>/4000) with 180-MHz shift in the resonance frequency.",
"title": ""
},
{
"docid": "bde1d85da7f1ac9c9c30b0fed448aac6",
"text": "We survey temporal description logics that are based on standard temporal logics such as LTL and CTL. In particular, we concentrate on the computational complexity of the satisfiability problem and algorithms for deciding it.",
"title": ""
},
{
"docid": "1b790d2a5b9d8f6a911efee43ee2a9d2",
"text": "Content Centric Networking (CCN) represents an important change in the current operation of the Internet, prioritizing content over the communication between end nodes. Routers play an essential role in CCN, since they receive the requests for a given content and provide content caching for the most popular ones. They have their own forwarding strategies and caching policies for the most popular contents. Despite the number of works on this field, experimental evaluation of different forwarding algorithms and caching policies yet demands a huge effort in routers programming. In this paper we propose SDCCN, a SDN approach to CCN that provides programmable forwarding strategy and caching policies. SDCCN allows fast prototyping and experimentation in CCN. Proofs of concept were performed to demonstrate the programmability of the cache replacement algorithms and the Strategy Layer. Experimental results, obtained through implementation in the Mininet environment, are presented and evaluated.",
"title": ""
},
{
"docid": "9bf26d0e444ab8332ac55ce87d1b7797",
"text": "Toll like receptors (TLR)s have a central role in regulating innate immunity and in the last decade studies have begun to reveal their significance in potentiating autoimmune diseases such as rheumatoid arthritis (RA). Earlier investigations have highlighted the importance of TLR2 and TLR4 function in RA pathogenesis. In this review, we discuss the newer data that indicate roles for TLR5 and TLR7 in RA and its preclinical models. We evaluate the pathogenicity of TLRs in RA myeloid cells, synovial tissue fibroblasts, T cells, osteoclast progenitor cells and endothelial cells. These observations establish that ligation of TLRs can transform RA myeloid cells into M1 macrophages and that the inflammatory factors secreted from M1 and RA synovial tissue fibroblasts participate in TH-17 cell development. From the investigations conducted in RA preclinical models, we conclude that TLR-mediated inflammation can result in osteoclastic bone erosion by interconnecting the myeloid and TH-17 cell response to joint vascularization. In light of emerging unique aspects of TLR function, we summarize the novel approaches that are being tested to impair TLR activation in RA patients.",
"title": ""
},
{
"docid": "2afb992058eb720ff0baf4216e3a22c2",
"text": "In most cases authors are permitted to post their version of the article (e.g. in Word or Tex form) to their personal website or institutional repository. Authors requiring further information regarding Elsevier's archiving and manuscript policies are encouraged to visit: Summary. — A longitudinal anthropological study of cotton farming in Warangal District of Andhra Pradesh, India, compares a group of villages before and after adoption of Bt cotton. It distinguishes \" field-level \" and \" farm-level \" impacts. During this five-year period yields rose by 18% overall, with greater increases among poor farmers with the least access to information. Insecticide sprayings dropped by 55%, although predation by non-target pests was rising. However shifting from the field to the historically-situated context of the farm recasts insect attacks as a symptom of larger problems in agricultural decision-making. Bt cotton's opponents have failed to recognize real benefits at the field level, while its backers have failed to recognize systemic problems that Bt cotton may exacerbate.",
"title": ""
},
{
"docid": "929f294583267ca8cb8616e803687f1e",
"text": "Recent systems for natural language understanding are strong at overcoming linguistic variability for lookup style reasoning. Yet, their accuracy drops dramatically as the number of reasoning steps increases. We present the first formal framework to study such empirical observations, addressing the ambiguity, redundancy, incompleteness, and inaccuracy that the use of language introduces when representing a hidden conceptual space. Our formal model uses two interrelated spaces: a conceptual meaning space that is unambiguous and complete but hidden, and a linguistic symbol space that captures a noisy grounding of the meaning space in the symbols or words of a language. We apply this framework to study the connectivity problem in undirected graphs---a core reasoning problem that forms the basis for more complex multi-hop reasoning. We show that it is indeed possible to construct a high-quality algorithm for detecting connectivity in the (latent) meaning graph, based on an observed noisy symbol graph, as long as the noise is below our quantified noise level and only a few hops are needed. On the other hand, we also prove an impossibility result: if a query requires a large number (specifically, logarithmic in the size of the meaning graph) of hops, no reasoning system operating over the symbol graph is likely to recover any useful property of the meaning graph. This highlights a fundamental barrier for a class of reasoning problems and systems, and suggests the need to limit the distance between the two spaces, rather than investing in multi-hop reasoning with\"many\"hops.",
"title": ""
},
{
"docid": "f5c04016ea72c94437cb5baeb556b01d",
"text": "This paper reports the design of a three pass stemmer STHREE for Malayalam. The language is rich in morphological variations but poor in linguistic computational resources. The system returns the meaningful root word of the input word in 97% of the cases when tested with 1040 words. This is a significant improvement over the reported accuracy of SILPA system, the only known stemmer for Malayalam, with the same test data sets.",
"title": ""
},
{
"docid": "427028ef819df3851e37734e5d198424",
"text": "The code that provides solutions to key software requirements, such as security and fault-tolerance, tends to be spread throughout (or cross-cut) the program modules that implement the “primary functionality” of a software system. Aspect-oriented programming is an emerging programming paradigm that supports implementing such cross-cutting requirements into named program units called “aspects”. To construct a system as an aspect-oriented program (AOP), one develops code for primary functionality in traditional modules and code for cross-cutting functionality in aspect modules. Compiling and running an AOP requires that the aspect code be “woven” into the code. Although aspect-oriented programming supports the separation of concerns into named program units, explicit and implicit dependencies of both aspects and traditional modules will result in systems with new testing challenges, which include new sources for program faults. This paper introduces a candidate fault model, along with associated testing criteria, for AOPs based on interactions that are unique to AOPs. The paper also identifies key issues relevant to the systematic testing of AOPs.",
"title": ""
},
{
"docid": "5de517f8ccdbf12228ca334173ecf797",
"text": "This paper describes the Chinese handwriting recognition competition held at the 12th International Conference on Document Analysis and Recognition (ICDAR 2013). This third competition in the series again used the CASIAHWDB/OLHWDB databases as the training set, and all the submitted systems were evaluated on closed datasets to report character-level correct rates. This year, 10 groups submitted 27 systems for five tasks: classification on extracted features, online/offline isolated character recognition, online/offline handwritten text recognition. The best results (correct rates) are 93.89% for classification on extracted features, 94.77% for offline character recognition, 97.39% for online character recognition, 88.76% for offline text recognition, and 95.03% for online text recognition, respectively. In addition to the test results, we also provide short descriptions of the recognition methods and brief discussions on the results. Keywords—Chinese handwriting recognition competition; isolated character recongition; handwritten text recognition; offline; online; CASIA-HWDB/OLHWDB database.",
"title": ""
},
{
"docid": "924dbc783bf8743a28c2cd4563d50de9",
"text": "This paper studies the off-policy evaluation problem, where one aims to estimate the value of a target policy based on a sample of observations collected by another policy. We first consider the multi-armed bandit case, establish a minimax risk lower bound, and analyze the risk of two standard estimators. It is shown, and verified in simulation, that one is minimax optimal up to a constant, while another can be arbitrarily worse, despite its empirical success and popularity. The results are applied to related problems in contextual bandits and fixed-horizon Markov decision processes, and are also related to semi-supervised learning.",
"title": ""
},
{
"docid": "27ed0ab08b10935d12b59b6d24bed3f1",
"text": "A major stumbling block to progress in understanding basic human interactions, such as getting out of bed or opening a refrigerator, is lack of good training data. Most past efforts have gathered this data explicitly: starting with a laundry list of action labels, and then querying search engines for videos tagged with each label. In this work, we do the reverse and search implicitly: we start with a large collection of interaction-rich video data and then annotate and analyze it. We use Internet Lifestyle Vlogs as the source of surprisingly large and diverse interaction data. We show that by collecting the data first, we are able to achieve greater scale and far greater diversity in terms of actions and actors. Additionally, our data exposes biases built into common explicitly gathered data. We make sense of our data by analyzing the central component of interaction - hands. We benchmark two tasks: identifying semantic object contact at the video level and non-semantic contact state at the frame level. We additionally demonstrate future prediction of hands.",
"title": ""
},
{
"docid": "fe3a2ef6ffc3e667f73b19f01c14d15a",
"text": "The study of socio-technical systems has been revolutionized by the unprecedented amount of digital records that are constantly being produced by human activities such as accessing Internet services, using mobile devices, and consuming energy and knowledge. In this paper, we describe the richest open multi-source dataset ever released on two geographical areas. The dataset is composed of telecommunications, weather, news, social networks and electricity data from the city of Milan and the Province of Trentino. The unique multi-source composition of the dataset makes it an ideal testbed for methodologies and approaches aimed at tackling a wide range of problems including energy consumption, mobility planning, tourist and migrant flows, urban structures and interactions, event detection, urban well-being and many others.",
"title": ""
}
] |
scidocsrr
|
c2e0f5a2362d741cd300ba72025cf93b
|
Automatic detection of cyberbullying in social media text
|
[
{
"docid": "c447e34a5048c7fe2d731aaa77b87dd3",
"text": "Bullying, in both physical and cyber worlds, has been recognized as a serious health issue among adolescents. Given its significance, scholars are charged with identifying factors that influence bullying involvement in a timely fashion. However, previous social studies of bullying are handicapped by data scarcity. The standard psychological science approach to studying bullying is to conduct personal surveys in schools. The sample size is typically in the hundreds, and these surveys are often collected only once. On the other hand, the few computational studies narrowly restrict themselves to cyberbullying, which accounts for only a small fraction of all bullying episodes.",
"title": ""
},
{
"docid": "f91a507a9cb7bdee2e8c3c86924ced8d",
"text": "a r t i c l e i n f o It is often stated that bullying is a \" group process \" , and many researchers and policymakers share the belief that interventions against bullying should be targeted at the peer-group level rather than at individual bullies and victims. There is less insight into what in the group level should be changed and how, as the group processes taking place at the level of the peer clusters or school classes have not been much elaborated. This paper reviews the literature on the group involvement in bullying, thus providing insight into the individuals' motives for participation in bullying, the persistence of bullying, and the adjustment of victims across different peer contexts. Interventions targeting the peer group are briefly discussed and future directions for research on peer processes in bullying are suggested. Bullying is a subtype of aggressive behavior, in which an individual or a group of individuals repeatedly attacks, humiliates, and/or excludes a relatively powerless person. The majority of studies on the topic have been conducted in schools, focusing on bullying among the concept of bullying is used to refer to peer-to-peer bullying among school-aged children and youth, when not otherwise mentioned. It is known that a sizable minority of primary and secondary school students is involved in peer-to-peer bullying either as perpetrators or victims — or as both, being both bullied themselves and harassing others. In WHO's Health Behavior in School-Aged Children survey (HBSC, see Craig & Harel, 2004), the average prevalence of victims across the 35 countries involved was 11%, whereas bullies represented another 11%. Children who report both bullying others and being bullied by others (so-called bully–victims) were not identified in the HBSC study, but other studies have shown that approximately 4–6% of the children can be classified as bully–victims (Haynie et al., 2001; Nansel et al., 2001). Bullying constitutes a serious risk for the psychosocial and academic adjustment of both victims",
"title": ""
},
{
"docid": "f6df133663ab4342222d95a20cd09996",
"text": "Web 2.0 has led to the development and evolution of web-based communities and applications. These communities provide places for information sharing and collaboration. They also open the door for inappropriate online activities, such as harassment, in which some users post messages in a virtual community that are intentionally offensive to other members of the community. It is a new and challenging task to detect online harassment; currently few systems attempt to solve this problem. In this paper, we use a supervised learning approach for detecting harassment. Our technique employs content features, sentiment features, and contextual features of documents. The experimental results described herein show that our method achieves significant improvements over several baselines, including Term FrequencyInverse Document Frequency (TFIDF) approaches. Identification of online harassment is feasible when TFIDF is supplemented with sentiment and contextual feature attributes.",
"title": ""
}
] |
[
{
"docid": "eb85cffda3aec56b77ae016ac6f73011",
"text": "This paper proposes a low-complexity word-level deep convolutional neural network (CNN) architecture for text categorization that can efficiently represent longrange associations in text. In the literature, several deep and complex neural networks have been proposed for this task, assuming availability of relatively large amounts of training data. However, the associated computational complexity increases as the networks go deeper, which poses serious challenges in practical applications. Moreover, it was shown recently that shallow word-level CNNs are more accurate and much faster than the state-of-the-art very deep nets such as character-level CNNs even in the setting of large training data. Motivated by these findings, we carefully studied deepening of word-level CNNs to capture global representations of text, and found a simple network architecture with which the best accuracy can be obtained by increasing the network depth without increasing computational cost by much. We call it deep pyramid CNN. The proposed model with 15 weight layers outperforms the previous best models on six benchmark datasets for sentiment classification and topic categorization.",
"title": ""
},
{
"docid": "69d42340c09303b69eafb19de7170159",
"text": "Based on an example of translational motion, this report shows how to model and initialize the Kalman Filter. Basic rules about physical motion are introduced to point out, that the well-known laws of physical motion are a mere approximation. Hence, motion of non-constant velocity or acceleration is modelled by additional use of white noise. Special attention is drawn to the matrix initialization for use in the Kalman Filter, as, in general, papers and books do not give any hint on this; thus inducing the impression that initializing is not important and may be arbitrary. For unknown matrices many users of the Kalman Filter choose the unity matrix. Sometimes it works, sometimes it does not. In order to close this gap, initialization is shown on the example of human interactive motion. In contrast to measuring instruments with documented measurement errors in manuals, the errors generated by vision-based sensoring must be estimated carefully. Of course, the described methods may be adapted to other circumstances.",
"title": ""
},
{
"docid": "d50550fe203ffe135ef90dd0b20cd975",
"text": "The problem of automatically matching composite sketches to facial photographs is addressed in this paper. Previous research on sketch recognition focused on matching sketches drawn by professional artists who either looked directly at the subjects (viewed sketches) or used a verbal description of the subject's appearance as provided by an eyewitness (forensic sketches). Unlike sketches hand drawn by artists, composite sketches are synthesized using one of the several facial composite software systems available to law enforcement agencies. We propose a component-based representation (CBR) approach to measure the similarity between a composite sketch and mugshot photograph. Specifically, we first automatically detect facial landmarks in composite sketches and face photos using an active shape model (ASM). Features are then extracted for each facial component using multiscale local binary patterns (MLBPs), and per component similarity is calculated. Finally, the similarity scores obtained from individual facial components are fused together, yielding a similarity score between a composite sketch and a face photo. Matching performance is further improved by filtering the large gallery of mugshot images using gender information. Experimental results on matching 123 composite sketches against two galleries with 10,123 and 1,316 mugshots show that the proposed method achieves promising performance (rank-100 accuracies of 77.2% and 89.4%, respectively) compared to a leading commercial face recognition system (rank-100 accuracies of 22.8% and 52.0%) and densely sampled MLBP on holistic faces (rank-100 accuracies of 27.6% and 10.6%). We believe our prototype system will be of great value to law enforcement agencies in apprehending suspects in a timely fashion.",
"title": ""
},
{
"docid": "db252efe7bde6cc0d58e337f8ad04271",
"text": "Social skills training is a well-established method to decrease human anxiety and discomfort in social interaction, and acquire social skills. In this paper, we attempt to automate the process of social skills training by developing a dialogue system named \"automated social skills trainer,\" which provides social skills training through human-computer interaction. The system includes a virtual avatar that recognizes user speech and language information and gives feedback to users to improve their social skills. Its design is based on conventional social skills training performed by human participants, including defining target skills, modeling, role-play, feedback, reinforcement, and homework. An experimental evaluation measuring the relationship between social skill and speech and language features shows that these features have a relationship with autistic traits. Additional experiments measuring the effect of performing social skills training with the proposed application show that most participants improve their skill by using the system for 50 minutes.",
"title": ""
},
{
"docid": "66451aa5a41ec7f9246d749c0983fa60",
"text": "A new method for automatically acquiring case frame patterns from large corpora is proposed. In particular, the problem of generalizing values of a case frame slot for a verb is viewed as that of estimating a conditional probability distribution over a partition of words, and a new generalization method based on the Minimum Description Length (MDL) principle is proposed. In order to assist with efficiency, the proposed method makes use of an existing thesaurus and restricts its attention to those partitions that are present as \"cuts\" in the thesaurus tree, thus reducing the generalization problem to that of estimating a \"tree cut model\" of the thesaurus tree. An efficient algorithm is given, which provably obtains the optimal tree cut model for the given frequency data of a case slot, in the sense of MDL. Case frame patterns obtained by the method were used to resolve PP-attachment ambiguity. Experimental results indicate that the proposed method improves upon or is at least comparable with existing methods.",
"title": ""
},
{
"docid": "c9acadfba9aa66ef6e7f4bc1d86943f6",
"text": "We propose a new saliency detection model by combining global information from frequency domain analysis and local information from spatial domain analysis. In the frequency domain analysis, instead of modeling salient regions, we model the nonsalient regions using global information; these so-called repeating patterns that are not distinctive in the scene are suppressed by using spectrum smoothing. In spatial domain analysis, we enhance those regions that are more informative by using a center-surround mechanism similar to that found in the visual cortex. Finally, the outputs from these two channels are combined to produce the saliency map. We demonstrate that the proposed model has the ability to highlight both small and large salient regions in cluttered scenes and to inhibit repeating objects. Experimental results also show that the proposed model outperforms existing algorithms in predicting objects regions where human pay more attention.",
"title": ""
},
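The global, frequency-domain channel described above (suppressing repeating patterns by smoothing the amplitude spectrum) is closely related to the classic spectral-residual recipe. The sketch below implements that related recipe with NumPy/SciPy as a rough illustration; it is not the paper's exact model, and the filter sizes are assumptions.

```python
# Sketch of frequency-domain saliency: smooth the log-amplitude spectrum and keep
# only the residual, so repeating (non-distinctive) patterns are suppressed.
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

def spectral_saliency(gray):
    """gray: 2-D float array in [0, 1]; returns a saliency map of the same shape."""
    F = np.fft.fft2(gray)
    log_amp = np.log(np.abs(F) + 1e-8)
    phase = np.angle(F)
    residual = log_amp - uniform_filter(log_amp, size=3)   # suppress repeating patterns
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return gaussian_filter(sal, sigma=3)                    # smooth the map for display
```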
{
"docid": "20ac5cea816906d595a65915680575f2",
"text": "A combination of distributed computation, positive feedback and constructive greedy heuristic is proposed as a new approach to stochastic optimization and problem solving. Positive feedback accounts for rapid discovery of very good solutions, distributed computation avoids premature convergence, and greedy heuristic helps the procedure to find acceptable solutions in the early stages of the search process. An application of the proposed methodology to the classical travelling salesman problem shows that the system can rapidly provide very good, if not optimal, solutions. We report on many simulation results and discuss the working of the algorithm. Some hints about how this approach can be applied to a variety of optimization problems are also given.",
"title": ""
},
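The passage above describes the ant-system combination of pheromone-based positive feedback, distributed tour construction, and a greedy visibility heuristic. A minimal sketch for the symmetric TSP, with illustrative parameter values rather than the paper's, could look like this:

```python
# Minimal Ant System sketch for the symmetric TSP: pheromone (positive feedback),
# distributed tour construction by many ants, and a greedy 1/distance heuristic.
import numpy as np

def ant_system(dist, n_ants=20, n_iters=100, alpha=1.0, beta=2.0, rho=0.5, Q=1.0, seed=0):
    rng = np.random.default_rng(seed)
    n = len(dist)
    tau = np.ones((n, n))                      # pheromone trails
    eta = 1.0 / (dist + np.eye(n))             # heuristic visibility (avoid div by zero)
    best_tour, best_len = None, np.inf
    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            tour = [rng.integers(n)]
            unvisited = set(range(n)) - {tour[0]}
            while unvisited:
                i = tour[-1]
                cand = np.array(sorted(unvisited))
                w = (tau[i, cand] ** alpha) * (eta[i, cand] ** beta)
                tour.append(rng.choice(cand, p=w / w.sum()))
                unvisited.remove(tour[-1])
            length = sum(dist[tour[k], tour[(k + 1) % n]] for k in range(n))
            tours.append((tour, length))
            if length < best_len:
                best_tour, best_len = tour, length
        tau *= (1.0 - rho)                     # evaporation
        for tour, length in tours:             # deposit proportional to tour quality
            for k in range(n):
                a, b = tour[k], tour[(k + 1) % n]
                tau[a, b] += Q / length
                tau[b, a] += Q / length
    return best_tour, best_len
```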
{
"docid": "b829049a8abf47f8f13595ca54eaa009",
"text": "This paper describes a face recognition-based people tracking and re-identification system for RGB-D camera networks. The system tracks people and learns their faces online to keep track of their identities even if they move out from the camera's field of view once. For robust people re-identification, the system exploits the combination of a deep neural network- based face representation and a Bayesian inference-based face classification method. The system also provides a predefined people identification capability: it associates the online learned faces with predefined people face images and names to know the people's whereabouts, thus, allowing a rich human-system interaction. Through experiments, we validate the re-identification and the predefined people identification capabilities of the system and show an example of the integration of the system with a mobile robot. The overall system is built as a Robot Operating System (ROS) module. As a result, it simplifies the integration with the many existing robotic systems and algorithms which use such middleware. The code of this work has been released as open-source in order to provide a baseline for the future publications in this field.",
"title": ""
},
{
"docid": "101554958aedffeaa26e429fca84e661",
"text": "Many healthcare reforms are to digitalize and integrate healthcare information systems. However, the disparity of business benefits in having an integrated healthcare information system (IHIS) varies with organizational fit factors. Critical success factors (CSFs) exist for hospitals to implement an IHIS successfully. This study investigated the relationship between the organizational fit and the system success. In addition, we examined the moderating effect of five CSFs -information systems adjustment, business process adjustment, organizational resistance, top management support, and the capability of key team members – in an IHIS implementation. Fifty-three hospitals that have successfully undertaken IHIS projects participated in this study. We used regression analysis to assess the relationships. The findings of this study provide a roadmap for hospitals to capitalize on the organizational fit and the five critical success factors in order to implement successful IHIS projects. Shin-Yuan Hung, Charlie Chen, Kuan-Hsiuang Wang (2014) \"Critical Success Factors For The Implementation Of Integrated Healthcare Information Systems Projects: An Organizational Fit Perspective\" Communication of the Association for Information Systems volume 34 Article 39 Version of record Available @ www.aisel.aisnet.org",
"title": ""
},
{
"docid": "fdd4c5fc773aa001da927ab3776559ae",
"text": "We treated a 65-year-old Japanese man with a giant penile lymphedema due to chronic penile strangulation with a rubber band. He was referred to our hospital with progressive penile swelling that had developed over a period of 2 years from chronic use of a rubber band placed around the penile base for prevention of urinary incontinence. Under a diagnosis of giant penile lymphedema, we performed resection of abnormal penile skin weighing 4.8 kg, followed by a penile plasty procedure. To the best of our knowledge, this is only the seventh report of such a case worldwide, with the present giant penile lymphedema the most reported.",
"title": ""
},
{
"docid": "624806aa09127fbca2e01c9d52b5764a",
"text": "Over the last few years, increased interest has arisen with respect to age-related tasks in the Computer Vision community. As a result, several \"in-the-wild\" databases annotated with respect to the age attribute became available in the literature. Nevertheless, one major drawback of these databases is that they are semi-automatically collected and annotated and thus they contain noisy labels. Therefore, the algorithms that are evaluated in such databases are prone to noisy estimates. In order to overcome such drawbacks, we present in this paper the first, to the best of knowledge, manually collected \"in-the-wild\" age database, dubbed AgeDB, containing images annotated with accurate to the year, noise-free labels. As demonstrated by a series of experiments utilizing state-of-the-art algorithms, this unique property renders AgeDB suitable when performing experiments on age-invariant face verification, age estimation and face age progression \"in-the-wild\".",
"title": ""
},
{
"docid": "2acb16f1e67f141220dc05b90ac23385",
"text": "By combining patch-clamp methods with two-photon microscopy, it is possible to target recordings to specific classes of neurons in vivo. Here we describe methods for imaging and recording from the soma and dendrites of neurons identified using genetically encoded probes such as green fluorescent protein (GFP) or functional indicators such as Oregon Green BAPTA-1. Two-photon targeted patching can also be adapted for use with wild-type brains by perfusing the extracellular space with a membrane-impermeable dye to visualize the cells by their negative image and target them for electrical recordings, a technique termed \"shadowpatching.\" We discuss how these approaches can be adapted for single-cell electroporation to manipulate specific cells genetically. These approaches thus permit the recording and manipulation of rare genetically, morphologically, and functionally distinct subsets of neurons in the intact nervous system.",
"title": ""
},
{
"docid": "bbedbe2d901f63e3f163ea0f24a2e2d7",
"text": "a r t i c l e i n f o a b s t r a c t The leader trait perspective is perhaps the most venerable intellectual tradition in leadership research. Despite its early prominence in leadership research, it quickly fell out of favor among leadership scholars. Thus, despite recent empirical support for the perspective, conceptual work in the area lags behind other theoretical perspectives. Accordingly, the present review attempts to place the leader trait perspective in the context of supporting intellectual traditions, including evolutionary psychology and behavioral genetics. We present a conceptual model that considers the source of leader traits, mediators and moderators of their effects on leader emergence and leadership effectiveness, and distinguish between perceived and actual leadership effectiveness. We consider both the positive and negative effects of specific \" bright side \" personality traits: the Big Five traits, core self-evaluations, intelligence, and charisma. We also consider the positive and negative effects of \" dark side \" leader traits: Narcissism, hubris, dominance, and Machiavellianism. If one sought to find singular conditions that existed across species, one might find few universals. One universal that does exist, at least those species that have brains and nervous systems, is leadership. From insects to reptiles to mammals, leadership exists as surely as collective activity exists. There is the queen bee, and there is the alpha male. Though the centrality of leadership may vary by species (it seems more important to mammals than, say, to avians and reptiles), it is fair to surmise that whenever there is social activity, a social structure develops, and one (perhaps the) defining characteristic of that structure is the emergence of a leader or leaders. The universality of leadership, however, does not deny the importance of individual differences — indeed the emergence of leadership itself is proof of individual differences. Moreover, even casual observation of animal (including human) collective behavior shows the existence of a leader. Among a herd of 100 cattle or a pride of 20 lions, one is able to detect a leadership structure (especially at times of eating, mating, and attack). One quickly wonders: What has caused this leadership structure to emerge? Why has one animal (the alpha) emerged to lead the collective? And how does this leadership cause this collective to flourish — or founder? Given these questions, it is of no surprise that the earliest conceptions of leadership focused on individual …",
"title": ""
},
{
"docid": "d906d31f32ad89a843645cad98eab700",
"text": "Deep Learning has led to a dramatic leap in SuperResolution (SR) performance in the past few years. However, being supervised, these SR methods are restricted to specific training data, where the acquisition of the low-resolution (LR) images from their high-resolution (HR) counterparts is predetermined (e.g., bicubic downscaling), without any distracting artifacts (e.g., sensor noise, image compression, non-ideal PSF, etc). Real LR images, however, rarely obey these restrictions, resulting in poor SR results by SotA (State of the Art) methods. In this paper we introduce \"Zero-Shot\" SR, which exploits the power of Deep Learning, but does not rely on prior training. We exploit the internal recurrence of information inside a single image, and train a small image-specific CNN at test time, on examples extracted solely from the input image itself. As such, it can adapt itself to different settings per image. This allows to perform SR of real old photos, noisy images, biological data, and other images where the acquisition process is unknown or non-ideal. On such images, our method outperforms SotA CNN-based SR methods, as well as previous unsupervised SR methods. To the best of our knowledge, this is the first unsupervised CNN-based SR method.",
"title": ""
},
{
"docid": "5d2c1095a34ee582f490f4b0392a3da0",
"text": "We study the problem of online learning to re-rank, where users provide feedback to improve the quality of displayed lists. Learning to rank has been traditionally studied in two settings. In the offline setting, rankers are typically learned from relevance labels of judges. These approaches have become the industry standard. However, they lack exploration, and thus are limited by the information content of offline data. In the online setting, an algorithm can propose a list and learn from the feedback on it in a sequential fashion. Bandit algorithms developed for this setting actively experiment, and in this way overcome the biases of offline data. But they also tend to ignore offline data, which results in a high initial cost of exploration. We propose BubbleRank, a bandit algorithm for re-ranking that combines the strengths of both settings. The algorithm starts with an initial base list and improves it gradually by swapping higher-ranked less attractive items for lower-ranked more attractive items. We prove an upper bound on the n-step regret of BubbleRank that degrades gracefully with the quality of the initial base list. Our theoretical findings are supported by extensive numerical experiments on a large real-world click dataset.",
"title": ""
},
{
"docid": "6442c9e4eb9034abf90fcd697c32a343",
"text": "With the increasing popularity and demand for mobile applications, there has been a significant increase in the number of mobile application development projects. Highly volatile requirements of mobile applications require adaptive software development methods. The Agile approach is seen as a natural fit for mobile application and there is a need to explore various Agile methodologies for the development of mobile applications. This paper evaluates how adopting various Agile approaches improves the development of mobile applications and if they can be used in order to provide more tailor-made process improvements within an organization. A survey related to mobile application development process improvement was developed. The use of various Agile approaches for success in mobile application development were evaluated by determining the significance of the most used Agile engineering paradigms such as XP, Scrum, and Lean. The findings of the study show that these Agile methods have the potential to help deliver enhanced speed and quality for mobile application development.",
"title": ""
},
{
"docid": "13fc420d1fa63445c29c4107734e2943",
"text": "As technology advances, more and more devices have Internet access. This gives rise to the Internet of Things. With all these new devices connected to the Internet, cybercriminals are undoubtedly trying to take advantage of these devices, especially when they have poor protection. These botnets will have a large amount of processing power in the near future. This paper will elaborate on how much processing power these IoT botnets can gain and to what extend cryptocurrencies will be influenced by it. This will be done through a literature study which is validated through an experiment.",
"title": ""
},
{
"docid": "74b163a2c2f149dce9850c6ff5d7f1f6",
"text": "The vast majority of cutaneous canine nonepitheliotropic lymphomas are of T cell origin. Nonepithelial Bcell lymphomas are extremely rare. The present case report describes a 10-year-old male Golden retriever that was presented with slowly progressive nodular skin lesions on the trunk and limbs. Histopathology of skin biopsies revealed small periadnexal dermal nodules composed of rather pleomorphic round cells with round or contorted nuclei. The diagnosis of nonepitheliotropic cutaneous B-cell lymphoma was based on histopathological morphology and case follow-up, and was supported immunohistochemically by CD79a positivity.",
"title": ""
},
{
"docid": "0cae8939c57ff3713d7321102c80816e",
"text": "In this paper, we propose using 3D Convolutional Neural Networks for large scale user-independent continuous gesture recognition. We have trained an end-to-end deep network for continuous gesture recognition (jointly learning both the feature representation and the classifier). The network performs three-dimensional (i.e. space-time) convolutions to extract features related to both the appearance and motion from volumes of color frames. Space-time invariance of the extracted features is encoded via pooling layers. The earlier stages of the network are partially initialized using the work of Tran et al. before being adapted to the task of gesture recognition. An earlier version of the proposed method, which was trained for 11,250 iterations, was submitted to ChaLearn 2016 Continuous Gesture Recognition Challenge and ranked 2nd with the Mean Jaccard Index Score of 0.269235. When the proposed method was further trained for 28,750 iterations, it achieved state-of-the-art performance on the same dataset, yielding a 0.314779 Mean Jaccard Index Score.",
"title": ""
},
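A space-time (3D) convolutional network of the kind described above can be sketched in a few lines of PyTorch. The layer sizes, clip length and number of gesture classes below are illustrative assumptions, not the network used in the paper:

```python
# Minimal PyTorch sketch of a 3D (space-time) CNN over stacks of color frames.
import torch
import torch.nn as nn

class Tiny3DConvNet(nn.Module):
    def __init__(self, n_classes=20):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 32, kernel_size=3, padding=1),   # space-time convolution
            nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=(1, 2, 2)),          # pool space only at first
            nn.Conv3d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=2),                  # pool space and time
            nn.AdaptiveAvgPool3d(1),                      # global space-time pooling
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, clips):                 # clips: (batch, 3, frames, H, W)
        x = self.features(clips).flatten(1)
        return self.classifier(x)

# Example: score a batch of two 16-frame 112x112 RGB clips.
logits = Tiny3DConvNet()(torch.randn(2, 3, 16, 112, 112))
```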
{
"docid": "e44f67fec39390f215b5267c892d1a26",
"text": "Primary progressive aphasia (PPA) may be the onset of several neurodegenerative diseases. This study evaluates a cohort of patients with PPA to assess their progression to different clinical syndromes, associated factors that modulate this progression, and patterns of cerebral metabolism linked to different clinical evolutionary forms. Thirty-five patients meeting PPA criteria underwent a clinical and neuroimaging 18F-Fluorodeoxyglucose PET evaluation. Survival analysis was performed using time from clinical onset to the development of a non-language symptom or deficit (PPA-plus). Cerebral metabolism was analyzed using Statistical Parametric Mapping. Patients classified into three PPA variants evolved to atypical parkinsonism, behavioral disorder and motor neuron disease in the agrammatic variant; to behavioral disorder in the semantic; and to memory impairment in the logopenic. Median time from the onset of symptoms to PPA-plus was 36 months (31–40, 95 % confidence interval). Right laterality, and years of education were associated to a lower risk of progression, while logopenic variant to a higher risk. Different regions of hypometabolism were identified in agrammatic PPA with parkinsonism, motor neuron disease and logopenic PPA-plus. Clinical course of PPA differs according to each variant. Left anterior temporal and frontal medial hypometabolism in agrammatic variant is linked to motor neuron disease and atypical parkinsonism, respectively. PPA variant, laterality and education may be associated to the risk of progression. These results suggest the possibility that clinical and imaging data could help to predict the clinical course of PPA.",
"title": ""
}
] |
scidocsrr
|
1dc241d5a52b7bd7f17e80dddac7fa45
|
Quantum statistical mechanics over function fields
|
[
{
"docid": "19d2e8cfa7787a139ca8117a0522b044",
"text": "We give here a comprehensive treatment of the mathematical theory of per-turbative renormalization (in the minimal subtraction scheme with dimensional regularization), in the framework of the Riemann–Hilbert correspondence and motivic Galois theory. We give a detailed overview of the work of Connes– Kreimer [31], [32]. We also cover some background material on affine group schemes, Tannakian categories, the Riemann–Hilbert problem in the regular singular and irregular case, and a brief introduction to motives and motivic Ga-lois theory. We then give a complete account of our results on renormalization and motivic Galois theory announced in [35]. Our main goal is to show how the divergences of quantum field theory, which may at first appear as the undesired effect of a mathematically ill-formulated theory, in fact reveal the presence of a very rich deeper mathematical structure, which manifests itself through the action of a hidden \" cosmic Galois group \" 1 , which is of an arithmetic nature, related to motivic Galois theory. Historically, perturbative renormalization has always appeared as one of the most elaborate recipes created by modern physics, capable of producing numerical quantities of great physical relevance out of a priori meaningless mathematical expressions. In this respect, it is fascinating for mathematicians and physicists alike. The depth of its origin in quantum field theory and the precision with which it is confirmed by experiments undoubtedly make it into one of the jewels of modern theoretical physics. For a mathematician in quest of \" meaning \" rather than heavy formalism, the attempts to cast the perturbative renormalization technique in a conceptual framework were so far falling short of accounting for the main computational aspects, used for instance in QED. These have to do with the subtleties involved in the subtraction of infinities in the evaluation of Feynman graphs and do not fall under the range of \" asymptotically free theories \" for which constructive quantum field theory can provide a mathematically satisfactory formulation., where the conceptual meaning of the detailed computational devices used in perturbative renormalization is analysed. Their work shows that the recursive procedure used by physicists is in fact identical to a mathematical method of extraction of finite values known as the Birkhoff decomposition, applied to a loop γ(z) with values in a complex pro-unipotent Lie group G.",
"title": ""
}
] |
[
{
"docid": "f7d535f9a5eeae77defe41318d642403",
"text": "On-line learning in domains where the target concept depends on some hidden context poses serious problems. A changing context can induce changes in the target concepts, producing what is known as concept drift. We describe a family of learning algorithms that flexibly react to concept drift and can take advantage of situations where contexts reappear. The general approach underlying all these algorithms consists of (1) keeping only a window of currently trusted examples and hypotheses; (2) storing concept descriptions and re-using them when a previous context re-appears; and (3) controlling both of these functions by a heuristic that constantly monitors the system's behavior. The paper reports on experiments that test the systems' performance under various conditions such as different levels of noise and different extent and rate of concept drift.",
"title": ""
},
{
"docid": "91c0bd1c3faabc260277c407b7c6af59",
"text": "In this paper, we consider the Direct Perception approach for autonomous driving. Previous efforts in this field focused more on feature extraction of the road markings and other vehicles in the scene rather than on the autonomous driving algorithm and its performance under realistic assumptions. Our main contribution in this paper is introducing a new, more robust, and more realistic Direct Perception framework and corresponding algorithm for autonomous driving. First, we compare the top 3 Convolutional Neural Networks (CNN) models in the feature extraction competitions and test their performance for autonomous driving. The experimental results showed that GoogLeNet performs the best in this application. Subsequently, we propose a deep learning based algorithm for autonomous driving, and we refer to our algorithm as GoogLenet for Autonomous Driving (GLAD). Unlike previous efforts, GLAD makes no unrealistic assumptions about the autonomous vehicle or its surroundings, and it uses only five affordance parameters to control the vehicle as compared to the 14 parameters used by prior efforts. Our simulation results show that the proposed GLAD algorithm outperforms previous Direct Perception algorithms both on empty roads and while driving with other surrounding vehicles.",
"title": ""
},
{
"docid": "c08e9731b9a1135b7fb52548c5c6f77e",
"text": "Many geometry processing applications, such as morphing, shape blending, transfer of texture or material properties, and fitting template meshes to scan data, require a bijective mapping between two or more models. This mapping, or cross-parameterization, typically needs to preserve the shape and features of the parameterized models, mapping legs to legs, ears to ears, and so on. Most of the applications also require the models to be represented by compatible meshes, i.e. meshes with identical connectivity, based on the cross-parameterization. In this paper we introduce novel methods for shape preserving cross-parameterization and compatible remeshing. Our cross-parameterization method computes a low-distortion bijective mapping between models that satisfies user prescribed constraints. Using this mapping, the remeshing algorithm preserves the user-defined feature vertex correspondence and the shape correlation between the models. The remeshing algorithm generates output meshes with significantly fewer elements compared to previous techniques, while accurately approximating the input geometry. As demonstrated by the examples, the compatible meshes we construct are ideally suitable for morphing and other geometry processing applications.",
"title": ""
},
{
"docid": "f11a88cad05210e26940e79700b0ca11",
"text": "Agile software development methods provide great flexibility to adapt to changing requirements and rapidly market products. Sri Lankan software organizations too are embracing these methods to develop software products. Being an iterative an incremental software engineering methodology, agile philosophy promotes working software over comprehensive documentation and heavily relies on continuous customer collaboration throughout the life cycle of the product. Hence characteristics of the people involved with the project and their working environment plays an important role in the success of an agile project compared to any other software engineering methodology. This study investigated the factors that lead to the success of a project that adopts agile methodology in Sri Lanka. An online questionnaire was used to collect data to identify people and organizational factors that lead to project success. The sample consisted of Sri Lankan software professionals with several years of industry experience in developing projects using agile methods. According to the statistical data analysis, customer satisfaction, customer commitment, team size, corporate culture, technical competency, decision time, customer commitment and training and learning have a influence on the success of the project.",
"title": ""
},
{
"docid": "9c5711c68c7a9c7a4a8fc4d9dbcf145d",
"text": "Approximate set membership data structures (ASMDSs) are ubiquitous in computing. They trade a tunable, often small, error rate ( ) for large space savings. The canonical ASMDS is the Bloom filter, which supports lookups and insertions but not deletions in its simplest form. Cuckoo filters (CFs), a recently proposed class of ASMDSs, add deletion support and often use fewer bits per item for equal . This work introduces the Morton filter (MF), a novel ASMDS that introduces several key improvements to CFs. Like CFs, MFs support lookups, insertions, and deletions, but improve their respective throughputs by 1.3× to 2.5×, 0.9× to 15.5×, and 1.3× to 1.6×. MFs achieve these improvements by (1) introducing a compressed format that permits a logically sparse filter to be stored compactly in memory, (2) leveraging succinct embedded metadata to prune unnecessary memory accesses, and (3) heavily biasing insertions to use a single hash function. With these optimizations, lookups, insertions, and deletions often only require accessing a single hardware cache line from the filter. These improvements are not at a loss in space efficiency, as MFs typically use comparable to slightly less space than CFs for the same . PVLDB Reference Format: Alex D. Breslow and Nuwan S. Jayasena. Morton Filters: Faster, Space-Efficient Cuckoo Filters via Biasing, Compression, and Decoupled Logical Sparsity. PVLDB, 11(9): 1041-1055, 2018. DOI: https://doi.org/10.14778/3213880.3213884",
"title": ""
},
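As background for the Morton filter passage above, the sketch below shows the plain cuckoo filter it improves on: partial-key cuckoo hashing with small buckets, supporting insert, lookup and delete. The bucket count (which should be a power of two), bucket size and the 8-bit fingerprint are illustrative assumptions, not the paper's parameters.

```python
# Minimal cuckoo filter sketch (insert / lookup / delete) via partial-key cuckoo hashing.
import random

class CuckooFilter:
    def __init__(self, n_buckets=1024, bucket_size=4, max_kicks=500):
        # n_buckets should be a power of two so the XOR/modulo index duality holds.
        self.n = n_buckets
        self.bucket_size = bucket_size
        self.max_kicks = max_kicks
        self.buckets = [[] for _ in range(n_buckets)]

    def _fingerprint(self, item):
        return (hash(("fp", item)) & 0xFF) or 1          # 8-bit, never zero

    def _indices(self, item, fp):
        i1 = hash(item) % self.n
        i2 = (i1 ^ hash(("alt", fp))) % self.n           # partial-key alternate bucket
        return i1, i2

    def insert(self, item):
        fp = self._fingerprint(item)
        i1, i2 = self._indices(item, fp)
        for i in (i1, i2):
            if len(self.buckets[i]) < self.bucket_size:
                self.buckets[i].append(fp)
                return True
        i = random.choice((i1, i2))                      # evict and relocate
        for _ in range(self.max_kicks):
            j = random.randrange(len(self.buckets[i]))
            fp, self.buckets[i][j] = self.buckets[i][j], fp
            i = (i ^ hash(("alt", fp))) % self.n
            if len(self.buckets[i]) < self.bucket_size:
                self.buckets[i].append(fp)
                return True
        return False                                     # table considered full

    def lookup(self, item):
        fp = self._fingerprint(item)
        i1, i2 = self._indices(item, fp)
        return fp in self.buckets[i1] or fp in self.buckets[i2]

    def delete(self, item):
        fp = self._fingerprint(item)
        for i in self._indices(item, fp):
            if fp in self.buckets[i]:
                self.buckets[i].remove(fp)
                return True
        return False
```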
{
"docid": "bf2fbbfca758af3be4c6e84fb56ddf26",
"text": "Classification is important problem in data mining. Given a data set, classifier generates meaningful description for each class. Decision trees are most effective and widely used classification methods. There are several algorithms for induction of decision trees. These trees are first induced and then prune subtrees with subsequent pruning phase to improve accuracy and prevent overfitting. In this paper, various pruning methods are discussed with their features and also effectiveness of pruning is evaluated. Accuracy is measured for diabetes and glass dataset with various pruning factors. The experiments are shown for this two datasets for measuring accuracy and size of the tree.",
"title": ""
},
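One concrete way to reproduce the kind of accuracy-versus-pruning-factor measurement described above is scikit-learn's cost-complexity pruning (one pruning method among the several the paper discusses). The sketch below assumes a generic feature matrix X and label vector y standing in for the diabetes or glass data:

```python
# Sketch: sweep the cost-complexity pruning factor and record tree size and test accuracy.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

def pruning_curve(X, y, seed=0):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=seed)
    path = DecisionTreeClassifier(random_state=seed).cost_complexity_pruning_path(X_tr, y_tr)
    results = []
    for alpha in np.unique(np.clip(path.ccp_alphas, 0, None)):
        tree = DecisionTreeClassifier(random_state=seed, ccp_alpha=alpha).fit(X_tr, y_tr)
        results.append((alpha, tree.tree_.node_count, tree.score(X_te, y_te)))
    return results   # (pruning factor, size of the tree, test accuracy) triples
```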
{
"docid": "6f45b4858c33d88472c131f379fd3edf",
"text": "Shadow maps are the current technique for generating high quality real-time dynamic shadows. This article gives a ‘practical’ introduction to shadow mapping (or projection mapping) with numerous simple examples and source listings. We emphasis some of the typical limitations and common pitfalls when implementing shadow mapping for the first time and how the reader can overcome these problems using uncomplicated debugging techniques. A scene without shadowing is life-less and flat objects seem decoupled. While different graphical techniques add a unique effect to the scene, shadows are crucial and when not present create a strange and mood-less aura.",
"title": ""
},
{
"docid": "beb7509b59f1bac8083ce5fbddb247e5",
"text": "Congestion in the Industrial, Scientific, and Medical (ISM) frequency band limits the expansion of the IEEE 802.11 Wireless Local Area Network (WLAN). Recently, due to the ‘digital switchover’ from analog to digital TV (DTV) broadcasting, a sizeable amount of bandwidth have been freed up in the conventional TV bands, resulting in the availability of TV white space (TVWS). The IEEE 802.11af is a standard for the WLAN technology that operates at the TVWS spectrum. TVWS operation must not cause harmful interference to the incumbent DTV service. This paper provides a method of computing the keep-out distance required between an IEEE 802.11af device and the DTV service contour, in order to keep the interference to a harmless level. The ITU-R P.1411-7 propagation model is used in the calculation. Four different DTV services are considered: Advanced Television Systems Committee (ATSC), Digital Video Broadcasting — Terrestrial (DVB-T), Integrated Services Digital Broadcasting — Terrestrial (ISDB-T), and Digital Terrestrial Multimedia Broadcasting (DTMB). The calculation results reveal that under many circumstances, allocating keep-out distance of 1 to 2.5 km is sufficient for the protection of DTV service.",
"title": ""
},
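The keep-out-distance computation described above inverts a propagation model to find the separation at which interference falls below a protection threshold. The sketch below uses a generic log-distance path-loss model as a stand-in for ITU-R P.1411-7 (which the paper actually uses), and every numeric value is an illustrative assumption rather than a figure from the paper:

```python
# Sketch: invert a log-distance path-loss model to get a keep-out distance.
def required_keepout_distance(tx_power_dbm, max_interference_dbm,
                              pl_d0_db=40.0, d0_m=1.0, path_loss_exponent=3.5):
    """Smallest distance (m) at which received interference drops below the limit."""
    # Received power: P_rx = P_tx - PL(d), with PL(d) = PL(d0) + 10 n log10(d / d0).
    required_loss_db = tx_power_dbm - max_interference_dbm
    return d0_m * 10 ** ((required_loss_db - pl_d0_db) / (10.0 * path_loss_exponent))

# Example: a 20 dBm white-space device and a -120 dBm interference limit at the
# DTV service contour (hypothetical numbers).
d = required_keepout_distance(20.0, -120.0)
print(f"keep-out distance ≈ {d / 1000:.2f} km")
```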
{
"docid": "d92b7ee3739843c2649d0f3f1e0ee5b2",
"text": "In this short note we observe that the Peikert-Vaikuntanathan-Waters (PVW) method of packing many plaintext elements in a single Regev-type ciphertext, can be used for performing SIMD homomorphic operations on packed ciphertext. This provides an alternative to the Smart-Vercauteren (SV) ciphertextpacking technique that relies on polynomial-CRT. While the SV technique is only applicable to schemes that rely on ring-LWE (or other hardness assumptions in ideal lattices), the PVW method can be used also for cryptosystems whose security is based on standard LWE (or more broadly on the hardness of “General-LWE”). Although using the PVW method with LWE-based schemes leads to worse asymptotic efficiency than using the SV technique with ring-LWE schemes, the simplicity of this method may still offer some practical advantages. Also, the two techniques can be used in tandem with “general-LWE” schemes, suggesting yet another tradeoff that can be optimized for different settings. Acknowledgments The first author is sponsored by DARPA under agreement number FA8750-11-C-0096. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of DARPA or the U.S. Government. The second and third authors are sponsored by DARPA and ONR under agreement number N00014-11C-0390. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of DARPA, or the U.S. Government. Distribution Statement “A” (Approved for Public Release, Distribution Unlimited).",
"title": ""
},
{
"docid": "5a73be1c8c24958779272a1190a3df20",
"text": "We study how contract element extraction can be automated. We provide a labeled dataset with gold contract element annotations, along with an unlabeled dataset of contracts that can be used to pre-train word embeddings. Both datasets are provided in an encoded form to bypass privacy issues. We describe and experimentally compare several contract element extraction methods that use manually written rules and linear classifiers (logistic regression, SVMs) with hand-crafted features, word embeddings, and part-of-speech tag embeddings. The best results are obtained by a hybrid method that combines machine learning (with hand-crafted features and embeddings) and manually written post-processing rules.",
"title": ""
},
{
"docid": "e4a22b34510b28d1235fc987b97a8607",
"text": "Many regions of the globe are experiencing rapid urban growth, the location and intensity of which can have negative effects on ecological and social systems. In some locales, planners and policy makers have used urban growth boundaries to direct the location and intensity of development; however the empirical evidence for the efficacy of such policies is mixed. Monitoring the location of urban growth is an essential first step in understanding how the system has changed over time. In addition, if regulations purporting to direct urban growth to specific locales are present, it is important to evaluate if the desired pattern (or change in pattern) has been observed. In this paper, we document land cover and change across six dates (1986, 1991, 1995, 1999, 2002, and 2007) for six counties in the Central Puget Sound, Washington State, USA. We explore patterns of change by three different spatial partitions (the region, each county, 2000 U.S. Census Tracks), and with respect to urban growth boundaries implemented in the late 1990’s as part of the state’s Growth Management Act. Urban land cover increased from 8 to 19% of the study area between 1986 and 2007, while lowland deciduous and mixed forests decreased from 21 to 13% and grass and agriculture decreased from 11 to 8%. Land in urban classes outside of the urban growth boundaries increased more rapidly (by area and percentage of new urban land cover) than land within the urban growth boundaries, suggesting that the intended effect of the Growth Management Act to direct growth to within the urban growth boundaries may not have been accomplished by 2007. Urban sprawl, as estimated by the area of land per capita, increased overall within the region, with the more rural counties within commuting distance to cities having the highest rate of increase observed. Land cover data is increasingly available and can be used to rapidly evaluate urban development patterns over large areas. Such data are important inputs for policy makers, urban planners, and modelers alike to manage and plan for future population, land use, and land cover changes.",
"title": ""
},
{
"docid": "f1925c66ed41aa50838d115b235349f0",
"text": "Recent research has revealed that the output of Deep Neural Networks (DNN) can be easily altered by adding relatively small perturbations to the input vector. In this paper, we analyze an attack in an extremely limited scenario where only one pixel can be modified. For that we propose a novel method for generating one-pixel adversarial perturbations based on differential evolution. It requires less adversarial information and can fool more types of networks. The results show that 68.36% of the natural images in CIFAR10 test dataset and 41.22% of the ImageNet (ILSVRC 2012) validation images can be perturbed to at least one target class by modifying just one pixel with 73.22% and 5.52% confidence on average. Thus, the proposed attack explores a different take on adversarial machine learning in an extreme limited scenario, showing that current DNNs are also vulnerable to such low dimension attacks.",
"title": ""
},
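A one-pixel perturbation search of the kind described above can be sketched with SciPy's differential evolution. Here predict_proba is a hypothetical wrapper around some image classifier returning class probabilities; it is not part of the paper's code, and the population settings are illustrative:

```python
# Sketch: search for a single (x, y, r, g, b) perturbation with differential evolution.
import numpy as np
from scipy.optimize import differential_evolution

def one_pixel_attack(image, target_class, predict_proba, maxiter=75, popsize=20):
    """image: (H, W, 3) float array in [0, 1]; returns the perturbed copy found."""
    h, w, _ = image.shape
    bounds = [(0, h - 1), (0, w - 1), (0, 1), (0, 1), (0, 1)]   # row, col, r, g, b

    def apply(z):
        x, y, r, g, b = z
        adv = image.copy()
        adv[int(round(x)), int(round(y))] = (r, g, b)
        return adv

    # Maximize the target-class probability == minimize its negative.
    objective = lambda z: -predict_proba(apply(z))[target_class]
    result = differential_evolution(objective, bounds, maxiter=maxiter,
                                    popsize=popsize, tol=1e-6, seed=0)
    return apply(result.x)
```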
{
"docid": "d657085072f829db812a2735d0e7f41c",
"text": "Recently, increasing attention has been drawn to training semantic segmentation models using synthetic data and computer-generated annotation. However, domain gap remains a major barrier and prevents models learned from synthetic data from generalizing well to real-world applications. In this work, we take the advantage of additional geometric information from synthetic data, a powerful yet largely neglected cue, to bridge the domain gap. Such geometric information can be generated easily from synthetic data, and is proven to be closely coupled with semantic information. With the geometric information, we propose a model to reduce domain shift on two levels: on the input level, we augment the traditional image translation network with the additional geometric information to translate synthetic images into realistic styles; on the output level, we build a task network which simultaneously performs depth estimation and semantic segmentation on the synthetic data. Meanwhile, we encourage the network to preserve the correlation between depth and semantics by adversarial training on the output space. We then validate our method on two pairs of synthetic to real dataset: Virtual KITTI→KITTI, and SYNTHIA→Cityscapes, where we achieve a significant performance gain compared to the non-adaptive baseline and methods without using geometric information. This demonstrates the usefulness of geometric information from synthetic data for cross-domain semantic segmentation.",
"title": ""
},
{
"docid": "cfeb97c3be1c697fb500d54aa43af0e1",
"text": "The development of accurate and robust palmprint verification algorithms is a critical issue in automatic palmprint authentication systems. Among various palmprint verification approaches, the orientation based coding methods, such as competitive code (CompCode), palmprint orientation code (POC) and robust line orientation code (RLOC), are state-of-the-art ones. They extract and code the locally dominant orientation as features and could match the input palmprint in real-time and with high accuracy. However, using only one dominant orientation to represent a local region may lose some valuable information because there are cross lines in the palmprint. In this paper, we propose a novel feature extraction algorithm, namely binary orientation co-occurrence vector (BOCV), to represent multiple orientations for a local region. The BOCV can better describe the local orientation features and it is more robust to image rotation. Our experimental results on the public palmprint database show that the proposed BOCV outperforms the CompCode, POC and RLOC by reducing the equal error rate (EER) significantly. 2009 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "885a51f55d5dfaad7a0ee0c56a64ada3",
"text": "This paper presents a new method, Minimax Tree Optimization (MMTO), to learn a heuristic evaluation function of a practical alpha-beta search program. The evaluation function may be a linear or non-linear combination of weighted features, and the weights are the parameters to be optimized. To control the search results so that the move decisions agree with the game records of human experts, a well-modeled objective function to be minimized is designed. Moreover, a numerical iterative method is used to find local minima of the objective function, and more than forty million parameters are adjusted by using a small number of hyper parameters. This method was applied to shogi, a major variant of chess in which the evaluation function must handle a larger state space than in chess. Experimental results show that the large-scale optimization of the evaluation function improves the playing strength of shogi programs, and the new method performs significantly better than other methods. Implementation of the new method in our shogi program Bonanza made substantial contributions to the program’s first-place finish in the 2013 World Computer Shogi Championship. Additionally, we present preliminary evidence of broader applicability of our method to other two-player games such as chess.",
"title": ""
},
{
"docid": "86d58f4196ceb48e29cb143e6a157c22",
"text": "In this paper, we challenge a form of paragraph-to-question generation task. We propose a question generation system which can generate a set of comprehensive questions from a body of text. Besides the tree kernel functions to assess the grammatically of the generated questions, our goal is to rank them by using community-based question answering systems to calculate the importance of the generated questions. The main assumption behind our work is that each body of text is related to a topic of interest and it has a comprehensive information about the topic.",
"title": ""
},
{
"docid": "e9f9a7c506221bacf966808f54c4f056",
"text": "Reconfigurable antennas, with the ability to radiate more than one pattern at different frequencies and polarizations, are necessary in modern telecommunication systems. The requirements for increased functionality (e.g., direction finding, beam steering, radar, control, and command) within a confined volume place a greater burden on today's transmitting and receiving systems. Reconfigurable antennas are a solution to this problem. This paper discusses the different reconfigurable components that can be used in an antenna to modify its structure and function. These reconfiguration techniques are either based on the integration of radio-frequency microelectromechanical systems (RF-MEMS), PIN diodes, varactors, photoconductive elements, or on the physical alteration of the antenna radiating structure, or on the use of smart materials such as ferrites and liquid crystals. Various activation mechanisms that can be used in each different reconfigurable implementation to achieve optimum performance are presented and discussed. Several examples of reconfigurable antennas for both terrestrial and space applications are highlighted, such as cognitive radio, multiple-input-multiple-output (MIMO) systems, and satellite communication.",
"title": ""
},
{
"docid": "b127e63ac45c81ce9fa9aa6240ce5154",
"text": "This paper examines the use of social learning platforms in conjunction with the emergent pedagogy of the `flipped classroom'. In particular the attributes of the social learning platform “Edmodo” is considered alongside the changes in the way in which online learning environments are being implemented, especially within British education. Some observations are made regarding the use and usefulness of these platforms along with a consideration of the increasingly decentralized nature of education in the United Kingdom.",
"title": ""
},
{
"docid": "c77c6ea404d9d834ef1be5a1d7222e66",
"text": "We introduce the concepts of regular and totally regular bipolar fuzzy graphs. We prove necessary and sufficient condition under which regular bipolar fuzzy graph and totally bipolar fuzzy graph are equivalent. We introduce the notion of bipolar fuzzy line graphs and present some of their properties. We state a necessary and sufficient condition for a bipolar fuzzy graph to be isomorphic to its corresponding bipolar fuzzy line graph. We examine when an isomorphism between two bipolar fuzzy graphs follows from an isomorphism of their corresponding bipolar fuzzy line graphs.",
"title": ""
},
{
"docid": "61c73842d25b54f24ff974b439d55c64",
"text": "Many electrical vehicles have been developed recently, and one of them is the vehicle type with the self-balancing capability. Portability also one of issue related to the development of electric vehicles. This paper presents one wheeled self-balancing electric vehicle namely PENS-Wheel. Since it only consists of one motor as its actuator, it becomes more portable than any other self-balancing vehicle types. This paper discusses on the implementation of Kalman filter for filtering the tilt sensor used by the self-balancing controller, mechanical design, and fabrication of the vehicle. The vehicle is designed based on the principle of the inverted pendulum by utilizing motor's torque on the wheel to maintain its upright position. The sensor system uses IMU which combine accelerometer and gyroscope data to get the accurate pitch angle of the vehicle. The paper presents the effects of Kalman filter parameters including noise variance of the accelerometer, noise variance of the gyroscope, and the measurement noise to the response of the sensor output. Finally, we present the result of the proposed filter and compare it with proprietary filter algorithm from InvenSense, Inc. running on Digital Motion Processor (DMP) inside the MPU6050 chip. The result of the filter algorithm implemented in the vehicle shows that it is capable in delivering comparable performance with the proprietary one.",
"title": ""
}
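The tilt estimation described above fuses a gyroscope rate with an accelerometer-derived angle. A minimal sketch of the standard two-state (angle plus gyro bias) Kalman filter follows; the noise variances are illustrative, not the values tuned for PENS-Wheel, and accel_angle would come from something like math.atan2(acc_x, acc_z):

```python
# Sketch of a 2-state Kalman filter (tilt angle + gyro bias) for IMU pitch estimation.
import numpy as np

class TiltKalman:
    def __init__(self, q_angle=0.001, q_bias=0.003, r_measure=0.03):
        self.x = np.zeros(2)                     # [angle (rad), gyro bias (rad/s)]
        self.P = np.eye(2)                       # state covariance
        self.Q = np.diag([q_angle, q_bias])      # process noise
        self.R = r_measure                       # accelerometer angle noise

    def update(self, gyro_rate, accel_angle, dt):
        # Predict with the gyro, treating its bias as part of the state.
        F = np.array([[1.0, -dt], [0.0, 1.0]])
        self.x = F @ self.x + np.array([dt * gyro_rate, 0.0])
        self.P = F @ self.P @ F.T + self.Q * dt
        # Correct with the accelerometer-derived angle.
        H = np.array([1.0, 0.0])
        S = H @ self.P @ H + self.R
        K = (self.P @ H) / S
        self.x = self.x + K * (accel_angle - H @ self.x)
        self.P = (np.eye(2) - np.outer(K, H)) @ self.P
        return self.x[0]                         # filtered pitch angle
```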
] |
scidocsrr
|
218c89117ca7b9dd1e88e1922bae6c11
|
Quadrotor Helicopter Trajectory Tracking Control
|
[
{
"docid": "5c732e1b9ded9ce11a347b82683fb039",
"text": "This paper presents the design of an embedded-control architecture for a four-rotor unmanned air vehicle (UAV) to perform autonomous hover flight. A non-linear control law based on nested saturations technique is presented that stabilizes the state of the aircraft around the origin. The control law was implemented in a microcontroller to stabilize the aircraft in real time. In order to obtain experimental results we have built a low-cost on-board system which drives the aircraft in position and orientation. The nonlinear controller has been successfully tested experimentally",
"title": ""
}
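The nested-saturations technique cited above yields bounded controls for chains of integrators. A common textbook instance, sketched below, stabilizes a single normalized double-integrator axis (the building block a hover controller applies axis by axis); the saturation levels are illustrative and this is not the paper's full attitude/position law:

```python
# Sketch: bounded nested-saturation control of a double integrator x1' = x2, x2' = u.
import numpy as np

def sat(s, limit):
    return np.clip(s, -limit, limit)

def nested_saturation_control(x1, x2, outer=1.0, inner=0.5):
    """Bounded control u = -sat(x2 + sat(x1 + x2)); requires outer >= 2 * inner."""
    return -sat(x2 + sat(x1 + x2, inner), outer)

# Quick Euler simulation from a large initial error to check convergence toward hover.
x = np.array([5.0, -2.0])
dt = 0.01
for _ in range(5000):
    u = nested_saturation_control(*x)
    x = x + dt * np.array([x[1], u])
print(x)   # both states should be close to zero
```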
] |
[
{
"docid": "78253b77b78c8e2b57b56e4d87c908ab",
"text": "OBJECTIVES\nThis study examines living arrangements of older adults across 43 developing countries and compares patterns by gender, world regions, and macro-level indicators of socioeconomic development.\n\n\nMETHODS\nData are from Demographic and Health Surveys. The country is the unit of analysis. Indicators include household size, headship, relationship to head, and coresidence with spouse, children, and others. Unweighted regional averages and ordinary least-squares regressions determine whether variations exist.\n\n\nRESULTS\nAverage household sizes are large, but a substantially greater proportion of older adults live alone than do individuals in other age groups. Females are more likely than males to live alone and are less likely to live with a spouse or head of a household. Heading a household and living in a large household and with young children is more prevalent in Africa than elsewhere. Coresidence with adult children is most common in Asia and least in Africa. Coresidence is more frequent with sons than with daughters in both Asia and Africa, but not in Latin America. As a country's level of schooling rises, most living arrangement indicators change with families becoming more nuclear. Urbanization and gross national product have no significant effects.\n\n\nDISCUSSION\nAlthough living arrangements differ across world regions and genders, within-region variations exist and are explained in part by associations between countrywide levels of education and household structure. These associations may be caused by a variety of intermediating factors, such as migration of children and preferences for privacy.",
"title": ""
},
{
"docid": "b6f05fcc1face0dcf4981e6578b0330e",
"text": "The importance of accurate and timely information describing the nature and extent of land resources and changes over time is increasing, especially in rapidly growing metropolitan areas. We have developed a methodology to map and monitor land cover change using multitemporal Landsat Thematic Mapper (TM) data in the seven-county Twin Cities Metropolitan Area of Minnesota for 1986, 1991, 1998, and 2002. The overall seven-class classification accuracies averaged 94% for the four years. The overall accuracy of land cover change maps, generated from post-classification change detection methods and evaluated using several approaches, ranged from 80% to 90%. The maps showed that between 1986 and 2002 the amount of urban or developed land increased from 23.7% to 32.8% of the total area, while rural cover types of agriculture, forest and wetland decreased from 69.6% to 60.5%. The results quantify the land cover change patterns in the metropolitan area and demonstrate the potential of multitemporal Landsat data to provide an accurate, economical means to map and analyze changes in land cover over time that can be used as inputs to land management and policy decisions. D 2005 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "c0df4f379a3b54c4e6fa9855b1b6d372",
"text": "We present a novel optimization-based retraction algorithm to improve the performance of sample-based planners in narrow passages for 3D rigid robots. The retraction step is formulated as an optimization problem using an appropriate distance metric in the configuration space. Our algorithm computes samples near the boundary of C-obstacle using local contact analysis and uses those samples to improve the performance of RRT planners in narrow passages. We analyze the performance of our planner using Voronoi diagrams and show that the tree can grow closely towards any randomly generated sample. Our algorithm is general and applicable to all polygonal models. In practice, we observe significant speedups over prior RRT planners on challenging scenarios with narrow passages.",
"title": ""
},
{
"docid": "1c7131fcb031497b2c1487f9b25d8d4e",
"text": "Biases in information processing undoubtedly play an important role in the maintenance of emotion and emotional disorders. In an attentional cueing paradigm, threat words and angry faces had no advantage over positive or neutral words (or faces) in attracting attention to their own location, even for people who were highly state-anxious. In contrast, the presence of threatening cues (words and faces) had a strong impact on the disengagement of attention. When a threat cue was presented and a target subsequently presented in another location, high state-anxious individuals took longer to detect the target relative to when either a positive or a neutral cue was presented. It is concluded that threat-related stimuli affect attentional dwell time and the disengage component of attention, leaving the question of whether threat stimuli affect the shift component of attention open to debate.",
"title": ""
},
{
"docid": "5ff345f050ec14b02c749c41887d592d",
"text": "Testing multithreaded code is hard and expensive. Each multithreaded unit test creates two or more threads, each executing one or more methods on shared objects of the class under test. Such unit tests can be generated at random, but basic generation produces tests that are either slow or do not trigger concurrency bugs. Worse, such tests have many false alarms, which require human effort to filter out. We present BALLERINA, a novel technique for automatic generation of efficient multithreaded random tests that effectively trigger concurrency bugs. BALLERINA makes tests efficient by having only two threads, each executing a single, randomly selected method. BALLERINA increases chances that such a simple parallel code finds bugs by appending it to more complex, randomly generated sequential code. We also propose a clustering technique to reduce the manual effort in inspecting failures of automatically generated multithreaded tests. We evaluate BALLERINA on 14 real-world bugs from 6 popular codebases: Groovy, Java JDK, jFreeChart, Log4j, Lucene, and Pool. The experiments show that tests generated by BALLERINA can find bugs on average 2X-10X faster than various configurations of basic random generation, and our clustering technique reduces the number of inspected failures on average 4X-8X. Using BALLERINA, we found three previously unknown bugs in Apache Pool and Log4j, one of which was already confirmed and fixed.",
"title": ""
},
{
"docid": "c59e0968b2d4dc314e52c116b21c3659",
"text": "This document aims to clarify frequent questions on using the Accord.NET Framework to perform statistical analyses. Here, we reproduce all steps of the famous Lindsay's Tutorial on Principal Component Analysis, in an attempt to give the reader a complete hands-on overview on the framework's basics while also discussing some of the results and sources of divergence between the results generated by Accord.NET and by other software packages.",
"title": ""
},
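The PCA steps that the tutorial walks through (mean-centering, covariance, eigendecomposition, projection) reduce to a few lines of NumPy. This is a generic sketch rather than the Accord.NET code discussed in the note:

```python
# Minimal covariance-eigendecomposition PCA sketch.
import numpy as np

def pca(data, n_components=2):
    """data: (n_samples, n_features); returns (scores, components, explained_variance)."""
    centered = data - data.mean(axis=0)                    # subtract the mean
    cov = np.cov(centered, rowvar=False)                   # feature covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)                 # symmetric eigendecomposition
    order = np.argsort(eigvals)[::-1]                      # sort by decreasing variance
    components = eigvecs[:, order[:n_components]]
    scores = centered @ components                         # project onto principal axes
    return scores, components, eigvals[order[:n_components]]
```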
{
"docid": "9bb970d7a6c4f1c0f566cca6bc26750c",
"text": "The science goals of the SKA specify a field of view which is far greater than what current radio telescopes provide. Two possible feed architectures for reflector antennas are clusters of horns or phased-array feeds. This memo compares these two alternatives and finds that the beams produced by horn clusters fall short of fully sampling the sky and require interleaved pointings, whereas phased-array feeds can provide complete sampling with a single pointing. Thus for a given focal-plane area horn clusters incur an equivalent system temperature penalty of ∼ 2× or more. The situation is worse for wide-band feeds since the spacing of the beams is constant while the beamwidth is inversely proportional to frequency, increasing the number of pointings for a fully-sampled map at the high-end of an operating band. These disadvantages, along with adaptive beamforming capabilities, provide a strong argument for the development of phased-array technology for wide-field and wide-band feeds.",
"title": ""
},
{
"docid": "4c861a25442ed6c177853626382b3aa8",
"text": "In this paper we present a user study evaluating the benefits of geometrically correct user-perspective rendering using an Augmented Reality (AR) magic lens. In simulation we compared a user-perspective magic lens against the common device-perspective magic lens on both phone-sized and tablet-sized displays. Our results indicate that a tablet-sized display allows for significantly faster performance of a selection task and that a user-perspective lens has benefits over a device-perspective lens for a selection task. Based on these promising results, we created a proof-of-concept prototype, engineered with current off-the-shelf devices and software. To our knowledge, this is the first geometrically correct user-perspective magic lens.",
"title": ""
},
{
"docid": "95d5229599fcf91b7ea302aa5dafee2a",
"text": "The more the telecom services marketing paradigm evolves, the more important it becomes to retain high value customers. Traditional customer segmentation methods based on experience or ARPU (Average Revenue per User) consider neither customers’ future revenue nor the cost of servicing customers of different types. Therefore, it is very difficult to effectively identify high-value customers. In this paper, we propose a novel customer segmentation method based on customer lifecycle, which includes five decision models, i.e. current value, historic value, prediction of long-term value, credit and loyalty. Due to the difficulty of quantitative computation of long-term value, credit and loyalty, a decision tree method is used to extract important parameters related to long-term value, credit and loyalty. Then a judgments matrix formulated on the basis of characteristics of data and the experience of business experts is presented. Finally a simple and practical customer value evaluation system is built. This model is applied to telecom operators in a province in China and good accuracy is achieved. 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "c1ddefd126c6d338c4cd9238e9067435",
"text": "Tensor networks are efficient representations of high-dimensional tensors which have been very successful for physics and mathematics applications. We demonstrate how algorithms for optimizing such networks can be adapted to supervised learning tasks by using matrix product states (tensor trains) to parameterize models for classifying images. For the MNIST data set we obtain less than 1% test set classification error. We discuss how the tensor network form imparts additional structure to the learned model and suggest a possible generative interpretation.",
"title": ""
},
{
"docid": "accebc4ebc062f9676977b375e0c4f32",
"text": "Microtask crowdsourcing organizes complex work into workflows, decomposing large tasks into small, relatively independent microtasks. Applied to software development, this model might increase participation in open source software development by lowering the barriers to contribu-tion and dramatically decrease time to market by increasing the parallelism in development work. To explore this idea, we have developed an approach to decomposing programming work into microtasks. Work is coordinated through tracking changes to a graph of artifacts, generating appropriate microtasks and propagating change notifications to artifacts with dependencies. We have implemented our approach in CrowdCode, a cloud IDE for crowd development. To evaluate the feasibility of microtask programming, we performed a small study and found that a small crowd of 12 workers was able to successfully write 480 lines of code and 61 unit tests in 14.25 person-hours of time.",
"title": ""
},
{
"docid": "a9f8f3946dd963066006f19a251eef7c",
"text": "Three-dimensional virtual worlds are an emerging medium currently being used in both traditional classrooms and for distance education. Three-dimensional (3D) virtual worlds are a combination of desk-top interactive Virtual Reality within a chat environment. This analysis provides an overview of Active Worlds Educational Universe and Adobe Atmosphere and the pedagogical affordances and constraints of the inscription tools, discourse tools, experiential tools, and resource tools of each application. The purpose of this review is to discuss the implications of using each application for educational initiatives by exploring how the various design features of each may support and enhance the design of interactive learning environments.",
"title": ""
},
{
"docid": "2b1adb51eafbcd50675513bc67e42140",
"text": "This text reviews the generic aspects of the central nervous system evolutionary development, emphasizing the developmental features of the brain structures related with behavior and with the cognitive functions that finally characterized the human being. Over the limbic structures that with the advent of mammals were developed on the top of the primitive nervous system of their ancestrals, the ultimate cortical development with neurons arranged in layers constituted the structural base for an enhanced sensory discrimination, for more complex motor activities, and for the development of cognitive and intellectual functions that finally characterized the human being. The knowledge of the central nervous system phylogeny allow us particularly to infer possible correlations between the brain structures that were developed along phylogeny and the behavior of their related beings. In this direction, without discussing its conceptual aspects, this review ends with a discussion about the central nervous system evolutionary development and the emergence of consciousness, in the light of its most recent contributions.",
"title": ""
},
{
"docid": "7e3cdead80a1d17b064b67ddacd5d8c1",
"text": "BACKGROUND\nThe aim of the study was to evaluate the relationship between depression and Internet addiction among adolescents.\n\n\nSAMPLING AND METHOD\nA total of 452 Korean adolescents were studied. First, they were evaluated for their severity of Internet addiction with consideration of their behavioral characteristics and their primary purpose for computer use. Second, we investigated correlations between Internet addiction and depression, alcohol dependence and obsessive-compulsive symptoms. Third, the relationship between Internet addiction and biogenetic temperament as assessed by the Temperament and Character Inventory was evaluated.\n\n\nRESULTS\nInternet addiction was significantly associated with depressive symptoms and obsessive-compulsive symptoms. Regarding biogenetic temperament and character patterns, high harm avoidance, low self-directedness, low cooperativeness and high self-transcendence were correlated with Internet addiction. In multivariate analysis, among clinical symptoms depression was most closely related to Internet addiction, even after controlling for differences in biogenetic temperament.\n\n\nCONCLUSIONS\nThis study reveals a significant association between Internet addiction and depressive symptoms in adolescents. This association is supported by temperament profiles of the Internet addiction group. The data suggest the necessity of the evaluation of the potential underlying depression in the treatment of Internet-addicted adolescents.",
"title": ""
},
{
"docid": "b678ca4c649a2e69637b84c3e35f88f5",
"text": "Induced expression of the Flock House virus in the soma of C. elegans results in the RNAi-dependent production of virus-derived, small-interfering RNAs (viRNAs), which in turn silence the viral genome. We show here that the viRNA-mediated viral silencing effect is transmitted in a non-Mendelian manner to many ensuing generations. We show that the viral silencing agents, viRNAs, are transgenerationally transmitted in a template-independent manner and work in trans to silence viral genomes present in animals that are deficient in producing their own viRNAs. These results provide evidence for the transgenerational inheritance of an acquired trait, induced by the exposure of animals to a specific, biologically relevant physiological challenge. The ability to inherit such extragenic information may provide adaptive benefits to an animal.",
"title": ""
},
{
"docid": "d3156738608e92d69b5ec7a5fa91af18",
"text": "Carotid intima-media thickness (CIMT) has been shown to predict cardiovascular (CV) risk in multiple large studies. Careful evaluation of CIMT studies reveals discrepancies in the comprehensiveness with which CIMT is assessed-the number of carotid segments evaluated (common carotid artery [CCA], internal carotid artery [ICA], or the carotid bulb), the type of measurements made (mean or maximum of single measurements, mean of the mean, or mean of the maximum for multiple measurements), the number of imaging angles used, whether plaques were included in the intima-media thickness (IMT) measurement, the report of adjusted or unadjusted models, risk association versus risk prediction, and the arbitrary cutoff points for CIMT and for plaque to predict risk. Measuring the far wall of the CCA was shown to be the least variable method for assessing IMT. However, meta-analyses suggest that CCA-IMT alone only minimally improves predictive power beyond traditional risk factors, whereas inclusion of the carotid bulb and ICA-IMT improves prediction of both cardiac risk and stroke risk. Carotid plaque appears to be a more powerful predictor of CV risk compared with CIMT alone. Quantitative measures of plaques such as plaque number, plaque thickness, plaque area, and 3-dimensional assessment of plaque volume appear to be progressively more sensitive in predicting CV risk than mere assessment of plaque presence. Limited data show that plaque characteristics including plaque vascularity may improve CV disease risk stratification further. IMT measurement at the CCA, carotid bulb, and ICA that allows inclusion of plaque in the IMT measurement or CCA-IMT measurement along with plaque assessment in all carotid segments is emerging as the focus of carotid artery ultrasound imaging for CV risk prediction.",
"title": ""
},
{
"docid": "06413e71fbbe809ee2ffbdb31dc8fe59",
"text": "This paper takes a critical look at the features used in the semantic role tagging literature and show that the information in the input, generally a syntactic parse tree, has yet to be fully exploited. We propose an additional set of features and our experiments show that these features lead to fairly significant improvements in the tasks we performed. We further show that different features are needed for different subtasks. Finally, we show that by using a Maximum Entropy classifier and fewer features, we achieved results comparable with the best previously reported results obtained with SVM models. We believe this is a clear indication that developing features that capture the right kind of information is crucial to advancing the stateof-the-art in semantic analysis.",
"title": ""
},
{
"docid": "d80fc668073878c476bdf3997b108978",
"text": "Automotive information services utilizing vehicle data are rapidly expanding. However, there is currently no data centric software architecture that takes into account the scale and complexity of data involving numerous sensors. To address this issue, the authors have developed an in-vehicle datastream management system for automotive embedded systems (eDSMS) as data centric software architecture. Providing the data stream functionalities to drivers and passengers are highly beneficial. This paper describes a vehicle embedded data stream processing platform for Android devices. The platform enables flexible query processing with a dataflow query language and extensible operator functions in the query language on the platform. The platform employs architecture independent of data stream schema in in-vehicle eDSMS to facilitate smoother Android application program development. This paper presents specifications and design of the query language and APIs of the platform, evaluate it, and discuss the results. Keywords—Android, automotive, data stream management system",
"title": ""
},
{
"docid": "1db6ecf2059b749f0ad640f9c53b1826",
"text": "U.S. hotel brands and international hotel brands headquartered in the United States have increasingly evolved away from being hotel operating companies to being brand management and franchise administration organizations. This trend has allowed for the accelerated growth and development of many major hotel brands, and the increasing growth of franchised hotels. There are numerous strategic implications related to this trend. This study seeks to analyze some of these strategic implications by evaluating longitudinal data regarding the performance of major hotel brands in the marketplace, both in terms of guest satisfaction and revenue indicators. Specifically, the authors test whether guest satisfaction at various U.S. and international brands influences both brand occupancy percentage and average daily room rate 3 years later. In addition, the authors investigate whether the percentage of franchised hotel properties influences both guest satisfaction and occupancy 3 years later. Also, they test whether overall brand size has a positive or detrimental effect on future hotel occupancy. Finally, whether the change in guest satisfaction for hotel brands effects the change in average daily rate during the same 3-year period is tested.",
"title": ""
},
{
"docid": "c467edcb0c490034776ba2dc2cde9d9e",
"text": "BACKGROUND\nPostoperative complications of blepharoplasty range from cutaneous changes to vision-threatening emergencies. Some of these can be prevented with careful preoperative evaluation and surgical technique. When complications arise, their significance can be diminished by appropriate management. This article addresses blepharoplasty complications based on the typical postoperative timeframe when they are encountered.\n\n\nMETHODS\nThe authors conducted a review article of major blepharoplasty complications and their treatment.\n\n\nRESULTS\nComplications within the first postoperative week include corneal abrasions and vision-threatening retrobulbar hemorrhage; the intermediate period (weeks 1 through 6) addresses upper and lower eyelid malpositions, strabismus, corneal exposure, and epiphora; and late complications (>6 weeks) include changes in eyelid height and contour along with asymmetries, scarring, and persistent edema.\n\n\nCONCLUSIONS\nA thorough knowledge of potential complications of blepharoplasty surgery is necessary for the practicing aesthetic surgeon. Within this article, current concepts and relevant treatment strategies are reviewed with the use of the most recent and/or appropriate peer-reviewed literature available.",
"title": ""
}
] |
scidocsrr
|
db54145f9c2868e71344d248df9765f3
|
Feature-Rich Unsupervised Word Alignment Models
|
[
{
"docid": "8acd410ff0757423d09928093e7e8f63",
"text": "We present a simple log-linear reparameterization of IBM Model 2 that overcomes problems arising from Model 1’s strong assumptions and Model 2’s overparameterization. Efficient inference, likelihood evaluation, and parameter estimation algorithms are provided. Training the model is consistently ten times faster than Model 4. On three large-scale translation tasks, systems built using our alignment model outperform IBM Model 4. An open-source implementation of the alignment model described in this paper is available from http://github.com/clab/fast align .",
"title": ""
}
] |
[
{
"docid": "2c49c7c3694358d9e3ee6101f5f2ffe5",
"text": "We present a system that approximates the answer to complex ad-hoc queries in big-data clusters by injecting samplers on-the-fly and without requiring pre-existing samples. Improvements can be substantial when big-data queries take multiple passes over data and when samplers execute early in the query plan. We present a new, universe, sampler which is able to sample multiple join inputs. By incorporating samplers natively into a cost-based query optimizer, we automatically generate plans with appropriate samplers at appropriate locations. We devise an accuracy analysis method using which we ensure that query plans with samplers will not miss groups and that aggregate values are within a small ratio of their true value. An implementation on a cluster with tens of thousands of machines shows that queries in the TPC-DS benchmark use a median of 2X fewer resources. In contrast, approaches that construct input samples even when given 10X the size of the input to store samples improve only 22% of the queries, i.e., a median speed up of 0X.",
"title": ""
},
{
"docid": "10d203d3aab332d3e8775993097544be",
"text": "Web cookies are used widely by publishers and 3rd parties to track users and their behaviors. Despite the ubiquitous use of cookies, there is little prior work on their characteristics such as standard attributes, placement policies, and the knowledge that can be amassed via 3rd party cookies. In this paper, we present an empirical study of web cookie characteristics, placement practices and information transmission. To conduct this study, we implemented a lightweight web crawler that tracks and stores the cookies as it navigates to websites. We use this crawler to collect over 3.2M cookies from the two crawls, separated by 18 months, of the top 100K Alexa web sites. We report on the general cookie characteristics and add context via a cookie category index and website genre labels. We consider privacy implications by examining specific cookie attributes and placement behavior of 3rd party cookies. We find that 3rd party cookies outnumber 1st party cookies by a factor of two, and we illuminate the connection between domain genres and cookie attributes. We find that less than 1% of the entities that place cookies can aggregate information across 75% of web sites. Finally, we consider the issue of information transmission and aggregation by domains via 3rd party cookies. We develop a mathematical framework to quantify user information leakage for a broad class of users, and present findings using real world domains. In particular, we demonstrate the interplay between a domain’s footprint across the Internet and the browsing behavior of users, which has significant impact on information transmission.",
"title": ""
},
{
"docid": "b7944edc9e6704cbf59489f112f46c11",
"text": "The basic paradigm of asset pricing is in vibrant f lux. The purely rational approach is being subsumed by a broader approach based upon the psychology of investors. In this approach, security expected returns are determined by both risk and misvaluation. This survey sketches a framework for understanding decision biases, evaluates the a priori arguments and the capital market evidence bearing on the importance of investor psychology for security prices, and reviews recent models. The best plan is . . . to profit by the folly of others. — Pliny the Elder, from John Bartlett, comp. Familiar Quotations, 9th ed. 1901. IN THE MUDDLED DAYS BEFORE THE RISE of modern finance, some otherwisereputable economists, such as Adam Smith, Irving Fisher, John Maynard Keynes, and Harry Markowitz, thought that individual psychology affects prices.1 What if the creators of asset-pricing theory had followed this thread? Picture a school of sociologists at the University of Chicago proposing the Deficient Markets Hypothesis: that prices inaccurately ref lect all available information. A brilliant Stanford psychologist, call him Bill Blunte, invents the Deranged Anticipation and Perception Model ~or DAPM!, in which proxies for market misvaluation are used to predict security returns. Imagine the euphoria when researchers discovered that these mispricing proxies ~such * Hirshleifer is from the Fisher College of Business, The Ohio State University. This survey was written for presentation at the American Finance Association Annual Meetings in New Orleans, January, 2001. I especially thank the editor, George Constantinides, for valuable comments and suggestions. I also thank Franklin Allen, the discussant, Nicholas Barberis, Robert Bloomfield, Michael Brennan, Markus Brunnermeier, Joshua Coval, Kent Daniel, Ming Dong, Jack Hirshleifer, Harrison Hong, Soeren Hvidkjaer, Ravi Jagannathan, Narasimhan Jegadeesh, Andrew Karolyi, Charles Lee, Seongyeon Lim, Deborah Lucas, Rajnish Mehra, Norbert Schwarz, Jayanta Sen, Tyler Shumway, René Stulz, Avanidhar Subrahmanyam, Siew Hong Teoh, Sheridan Titman, Yue Wang, Ivo Welch, and participants of the Dice Finance Seminar at Ohio State University for very helpful discussions and comments. 1 Smith analyzed how the “overweening conceit” of mankind caused labor to be underpriced in more enterprising pursuits. Young workers do not arbitrage away pay differentials because they are prone to overestimate their ability to succeed. Fisher wrote a book on money illusion; in The Theory of Interest ~~1930!, pp. 493–494! he argued that nominal interest rates systematically fail to adjust sufficiently for inf lation, and explained savings behavior in relation to self-control, foresight, and habits. Keynes ~1936! famously commented on animal spirits in stock markets. Markowitz ~1952! proposed that people focus on gains and losses relative to reference points, and that this helps explain the pricing of insurance and lotteries. THE JOURNAL OF FINANCE • VOL. LVI, NO. 4 • AUGUST 2001",
"title": ""
},
{
"docid": "437b448b27cbc77969664d73895d93f2",
"text": "In this manuscript, we study the problem of detecting coordinated free text campaigns in large-scale social media. These campaigns—ranging from coordinated spam messages to promotional and advertising campaigns to political astro-turfing—are growing in significance and reach with the commensurate rise in massive-scale social systems. Specifically, we propose and evaluate a content-driven framework for effectively linking free text posts with common “talking points” and extracting campaigns from large-scale social media. Three of the salient features of the campaign extraction framework are: (i) first, we investigate graph mining techniques for isolating coherent campaigns from large message-based graphs; (ii) second, we conduct a comprehensive comparative study of text-based message correlation in message and user levels; and (iii) finally, we analyze temporal behaviors of various campaign types. Through an experimental study over millions of Twitter messages we identify five major types of campaigns—namely Spam, Promotion, Template, News, and Celebrity campaigns—and we show how these campaigns may be extracted with high precision and recall.",
"title": ""
},
{
"docid": "754c7cd279c8f3c1a309071b8445d6fa",
"text": "We present a framework for describing insiders and their actions based on the organization, the environment, the system, and the individual. Using several real examples of unwelcome insider action (hard drive removal, stolen intellectual property, tax fraud, and proliferation of e-mail responses), we show how the taxonomy helps in understanding how each situation arose and could have been addressed. The differentiation among types of threats suggests how effective responses to insider threats might be shaped, what choices exist for each type of threat, and the implications of each. Future work will consider appropriate strategies to address each type of insider threat in terms of detection, prevention, mitigation, remediation, and punishment.",
"title": ""
},
{
"docid": "146c58e49221a9e8f8dbcdc149737924",
"text": "Gesture recognition is to recognizing meaningful expressions of motion by a human, involving the hands, arms, face, head, and/or body. Hand Gestures have greater importance in designing an intelligent and efficient human–computer interface. The applications of gesture recognition are manifold, ranging from sign language through medical rehabilitation to virtual reality. In this paper a survey on various recent gesture recognition approaches is provided with particular emphasis on hand gestures. A review of static hand posture methods are explained with different tools and algorithms applied on gesture recognition system, including connectionist models, hidden Markov model, and fuzzy clustering. Challenges and future research directions are also highlighted.",
"title": ""
},
{
"docid": "dffb89c39f11934567f98a31a0ef157c",
"text": "We present a new method for semantic role labeling in which arguments and semantic roles are jointly embedded in a shared vector space for a given predicate. These embeddings belong to a neural network, whose output represents the potential functions of a graphical model designed for the SRL task. We consider both local and structured learning methods and obtain strong results on standard PropBank and FrameNet corpora with a straightforward product-of-experts model. We further show how the model can learn jointly from PropBank and FrameNet annotations to obtain additional improvements on the smaller FrameNet dataset.",
"title": ""
},
{
"docid": "34208fafbb3009a1bb463e3d8d983e61",
"text": "A large and growing number of web pages display contextual advertising based on keywords automatically extracted from the text of the page, and this is a substantial source of revenue supporting the web today. Despite the importance of this area, little formal, published research exists. We describe a system that learns how to extract keywords from web pages for advertisement targeting. The system uses a number of features, such as term frequency of each potential keyword, inverse document frequency, presence in meta-data, and how often the term occurs in search query logs. The system is trained with a set of example pages that have been hand-labeled with \"relevant\" keywords. Based on this training, it can then extract new keywords from previously unseen pages. Accuracy is substantially better than several baseline systems.",
"title": ""
},
{
"docid": "09a6f724e5b2150a39f89ee1132a33e9",
"text": "This paper concerns a deep learning approach to relevance ranking in information retrieval (IR). Existing deep IR models such as DSSM and CDSSM directly apply neural networks to generate ranking scores, without explicit understandings of the relevance. According to the human judgement process, a relevance label is generated by the following three steps: 1) relevant locations are detected; 2) local relevances are determined; 3) local relevances are aggregated to output the relevance label. In this paper we propose a new deep learning architecture, namely DeepRank, to simulate the above human judgment process. Firstly, a detection strategy is designed to extract the relevant contexts. Then, a measure network is applied to determine the local relevances by utilizing a convolutional neural network (CNN) or two-dimensional gated recurrent units (2D-GRU). Finally, an aggregation network with sequential integration and term gating mechanism is used to produce a global relevance score. DeepRank well captures important IR characteristics, including exact/semantic matching signals, proximity heuristics, query term importance, and diverse relevance requirement. Experiments on both benchmark LETOR dataset and a large scale clickthrough data show that DeepRank can significantly outperform learning to ranking methods, and existing deep learning methods.",
"title": ""
},
{
"docid": "552baf04d696492b0951be2bb84f5900",
"text": "We examined whether reduced perceptual specialization underlies atypical perception in autism spectrum disorder (ASD) testing classifications of stimuli that differ either along integral dimensions (prototypical integral dimensions of value and chroma), or along separable dimensions (prototypical separable dimensions of value and size). Current models of the perception of individuals with an ASD would suggest that on these tasks, individuals with ASD would be as, or more, likely to process dimensions as separable, regardless of whether they represented separable or integrated dimensions. In contrast, reduced specialization would propose that individuals with ASD would respond in a more integral manner to stimuli that differ along separable dimensions, and at the same time, respond in a more separable manner to stimuli that differ along integral dimensions. A group of nineteen adults diagnosed with high functioning ASD and seventeen typically developing participants of similar age and IQ, were tested on speeded and restricted classifications tasks. Consistent with the reduced specialization account, results show that individuals with ASD do not always respond more analytically than typically developed (TD) observers: Dimensions identified as integral for TD individuals evoke less integral responding in individuals with ASD, while those identified as separable evoke less analytic responding. These results suggest that perceptual representations are more broadly tuned and more flexibly represented in ASD. Autism Res 2017, 10: 1510-1522. © 2017 International Society for Autism Research, Wiley Periodicals, Inc.",
"title": ""
},
{
"docid": "706acd04d939c795979fddba98ffed30",
"text": "a Information Systems and Quantitative Sciences Area, Rawls College of Business Administration, Texas Tech University, Lubbock, TX 79409, United States b Department of Management Information Systems, School of Business and Management, American University of Sharjah, Sharjah, United Arab Emirates c Department of Decision and Information Sciences, C.T. Bauer College of Business, University of Houston, Houston, TX 77204, United States",
"title": ""
},
{
"docid": "93ec9adabca7fac208a68d277040c254",
"text": "UNLABELLED\nWe developed cyNeo4j, a Cytoscape App to link Cytoscape and Neo4j databases to utilize the performance and storage capacities Neo4j offers. We implemented a Neo4j NetworkAnalyzer, ForceAtlas2 layout and Cypher component to demonstrate the possibilities a distributed setup of Cytoscape and Neo4j have.\n\n\nAVAILABILITY AND IMPLEMENTATION\nThe app is available from the Cytoscape App Store at http://apps.cytoscape.org/apps/cyneo4j, the Neo4j plugins at www.github.com/gsummer/cyneo4j-parent and the community and commercial editions of Neo4j can be found at http://www.neo4j.com.\n\n\nCONTACT\[email protected].",
"title": ""
},
{
"docid": "8f54213e38130e9b80ff786103cfbf9b",
"text": "Falling, and the fear of falling, is a serious health problem among the elderly. It often results in physical and mental injuries that have the potential to severely reduce their mobility, independence and overall quality of life. Nevertheless, the consequences of a fall can be largely diminished by providing fast assistance. These facts have lead to the development of several automatic fall detection systems. Recently, many researches have focused particularly on smartphone-based applications. In this paper, we study the capacity of smartphone built-in sensors to differentiate fall events from activities of daily living. We explore, in particular, the information provided by the accelerometer, magnetometer and gyroscope sensors. A collection of features is analyzed and the efficiency of different sensor output combinations is tested using experimental data. Based on these results, a new, simple, and reliable algorithm for fall detection is proposed. The proposed method is a threshold-based algorithm and is designed to require a low battery power consumption. The evaluation of the performance of the algorithm in collected data indicates 100 % for sensitivity and 93 % for specificity. Furthermore, evaluation conducted on a public dataset, for comparison with other existing smartphone-based fall detection algorithms, shows the high potential of the proposed method.",
"title": ""
},
{
"docid": "01a636d56a324f8bb8367b8fc73c8687",
"text": "Formal risk analysis and management in software engineering is still an emerging part of project management. We provide a brief introduction to the concepts of risk management for software development projects, and then an overview of a new risk management framework. Risk management for software projects is intended to minimize the chances of unexpected events, or more specifically to keep all possible outcomes under tight management control. Risk management is also concerned with making judgments about how risk events are to be treated, valued, compared and combined. The ProRisk management framework is intended to account for a number of the key risk management principles required for managing the process of software development. It also provides a support environment to operationalize these management tasks.",
"title": ""
},
{
"docid": "d0cdbd1137e9dca85d61b3d90789d030",
"text": "In this paper, we present a methodology for recognizing seatedpostures using data from pressure sensors installed on a chair.Information about seated postures could be used to help avoidadverse effects of sitting for long periods of time or to predictseated activities for a human-computer interface. Our system designdisplays accurate near-real-time classification performance on datafrom subjects on which the posture recognition system was nottrained by using a set of carefully designed, subject-invariantsignal features. By using a near-optimal sensor placement strategy,we keep the number of required sensors low thereby reducing costand computational complexity. We evaluated the performance of ourtechnology using a series of empirical methods including (1)cross-validation (classification accuracy of 87% for ten posturesusing data from 31 sensors), and (2) a physical deployment of oursystem (78% classification accuracy using data from 19sensors).",
"title": ""
},
{
"docid": "eb90d55afac27ff7d1e43c04002a3478",
"text": "BACKGROUND\nThe detection and molecular characterization of circulating tumor cells (CTCs) are one of the most active areas of translational cancer research, with >400 clinical studies having included CTCs as a biomarker. The aims of research on CTCs include (a) estimation of the risk for metastatic relapse or metastatic progression (prognostic information), (b) stratification and real-time monitoring of therapies, (c) identification of therapeutic targets and resistance mechanisms, and (d) understanding metastasis development in cancer patients.\n\n\nCONTENT\nThis review focuses on the technologies used for the enrichment and detection of CTCs. We outline and discuss the current technologies that are based on exploiting the physical and biological properties of CTCs. A number of innovative technologies to improve methods for CTC detection have recently been developed, including CTC microchips, filtration devices, quantitative reverse-transcription PCR assays, and automated microscopy systems. Molecular-characterization studies have indicated, however, that CTCs are very heterogeneous, a finding that underscores the need for multiplex approaches to capture all of the relevant CTC subsets. We therefore emphasize the current challenges of increasing the yield and detection of CTCs that have undergone an epithelial-mesenchymal transition. Increasing assay analytical sensitivity may lead, however, to a decrease in analytical specificity (e.g., through the detection of circulating normal epithelial cells).\n\n\nSUMMARY\nA considerable number of promising CTC-detection techniques have been developed in recent years. The analytical specificity and clinical utility of these methods must be demonstrated in large prospective multicenter studies to reach the high level of evidence required for their introduction into clinical practice.",
"title": ""
},
{
"docid": "72b5a8ab7fc92d6adea3d401ae864243",
"text": "Based Heart Pulse Detector N. M. Z. Hashim*, N. A. Ali*, A. Salleh*3, A. S. Ja’afar*4, N. A. Z. Abidin* * Faculty of Electronics & Computer Engineering, Universiti Teknikal Malaysia Melaka, Hang Tuah Jaya, 76100 Durian Tunggal, Melaka, Malaysia *[email protected], *[email protected], *[email protected], *[email protected], *[email protected] Abstract— The development of heart pulse instruments rapidly fast in market since 21 century. However, the heart pulse detector is expensive due to the complicated system and it is used widely only in hospitals and clinics. The project is targeting to develop a significant photosensor to the medical fields that is easy to use and monitor their health by the user everywhere. The other target is to develop a comfortable instrument, reliable, accurate result to develop of heart pulse using low cost photosensors. This project involved both hardware and software with related to signal processing, mathematical, computational, formalisms, modeling techniques for transforming, transmitting and also for analog or digital signal. This project also used Peripheral Interface Controller (PIC) 16F877A microcontroller as the main function to control other elements. Result showed this project functioned smoothly and successfully with overall objectives were achieved. Apart from that, this project give good services for people to monitor their heart condition form time to time. In the future, wireless connection e.g. Global System for Mobile Communications (GSM) and Zigbee would be developed to make the system more reliable to the current world. Furthermore, the system should be compatible to various environments such as Android based OS so that it can be controlled away from the original location. KeywordColour Wavelength, Heart Rate, Photosensor, PIC 16F877A Microcontroller, Sensor",
"title": ""
},
{
"docid": "227fa1a36ba6b664e37e8c93e133dfd0",
"text": "The notion of complex number is intimately related to the Fundamental Theorem of Algebra and is therefore at the very foundation of mathematical analysis. The development of complex algebra, however, has been far from straightforward.1 The human idea of ‘number’ has evolved together with human society. The natural numbers (1, 2, . . . ∈ N) are straightforward to accept, and they have been used for counting in many cultures, irrespective of the actual base of the number system used. At a later stage, for sharing, people introduced fractions in order to answer a simple problem such as ‘if we catch U fish, I will have two parts 5 U and you will have three parts 3 5 U of the whole catch’. The acceptance of negative numbers and zero has been motivated by the emergence of economy, for dealing with profit and loss. It is rather impressive that ancient civilisations were aware of the need for irrational numbers such as √ 2 in the case of the Babylonians [77] and π in the case of the ancient Greeks.2 The concept of a new ‘number’ often came from the need to solve a specific practical problem. For instance, in the above example of sharing U number of fish caught, we need to solve for 2U = 5 and hence to introduce fractions, whereas to solve x2 = 2 (diagonal of a square) irrational numbers needed to be introduced. Complex numbers came from the necessity to solve equations such as x2 = −1.",
"title": ""
},
{
"docid": "fe52b7bff0974115a0e326813604997b",
"text": "Deep learning is a model of machine learning loosely based on our brain. Artificial neural network has been around since the 1950s, but recent advances in hardware like graphical processing units (GPU), software like cuDNN, TensorFlow, Torch, Caffe, Theano, Deeplearning4j, etc. and new training methods have made training artificial neural networks fast and easy. In this paper, we are comparing some of the deep learning frameworks on the basis of parameters like modeling capability, interfaces available, platforms supported, parallelizing techniques supported, availability of pre-trained models, community support and documentation quality.",
"title": ""
},
{
"docid": "417ec8f2867323551c0767aace4ff4ad",
"text": "FOR SPEECH ENHANCEMENT ALGORITHMS John H.L. Hansen and Bryan L. Pellom Robust Speech Processing Laboratory Duke University, Box 90291, Durham, NC 27708-0291 http://www.ee.duke.edu/Research/Speech [email protected] [email protected] ABSTRACT Much progress has been made in speech enhancement algorithm formulation in recent years. However, while researchers in the speech coding and recognition communities have standard criteria for algorithm performance comparison, similar standards do not exist for researchers in speech enhancement. This paper discusses the necessary ingredients for an e ective speech enhancement evaluation. We propose that researchers use the evaluation core test set of TIMIT (192 sentences), with a set of noise les, and a combination of objective measures and subjective testing for broad and ne phone-level quality assessment. Evaluation results include overall objective speech quality measure scores, measure histograms, and phoneme class and individual phone scores. The reported results are meant to illustrate speci c ways of detailing quality assessment for an enhancement algorithm.",
"title": ""
}
] |
scidocsrr
|
e445a6b262d6fd72eaf4be86a519a9b8
|
Software Test Data Generation using Ant Colony Optimization
|
[
{
"docid": "5a0e5596f77d036852621c1f15788ee2",
"text": "The use of metaheuristic search techniques for the automatic generation of test data has been a burgeoning interest for many researchers in recent years. Previous attempts to automate the test generation process have been limited, having been constrained by the size and complexity of software, and the basic fact that in general, test data generation is an undecidable problem. Metaheuristic search techniques offer much promise in regard to these problems. Metaheuristic search techniques are highlevel frameworks, which utilise heuristics to seek solutions for combinatorial problems at a reasonable computational cost. To date, metaheuristic search techniques have been applied to automate test data generation for structural and functional testing; the testing of grey-box properties, for example safety constraints; and also non-functional properties, such as worst-case execution time. This paper surveys some of the work undertaken in this field, discussing possible new future directions of research for each of its different individual areas.",
"title": ""
}
] |
[
{
"docid": "822e37a65bc226c2de9ed323d4ecdaa9",
"text": "Rainfall is one of the major source of freshwater for all the organism around the world. Rainfall prediction model provides the information regarding various climatological variables on the amount of rainfall. In recent days, Deep Learning enabled the self-learning data labels which allows to create a data-driven model for a time series dataset. It allows to make the anomaly/change detection from the time series data and also predicts the future event's data with respect to the events occurred in the past. This paper deals with obtaining models of the rainfall precipitation by using Deep Learning Architectures (LSTM and ConvNet) and determining the better architecture with RMSE of LSTM as 2.55 and RMSE of ConvNet as 2.44 claiming that for any time series dataset, Deep Learning models will be effective and efficient for the modellers.",
"title": ""
},
{
"docid": "3465c3bc8f538246be5d7f8c8d1292c2",
"text": "The minimal depth of a maximal subtree is a dimensionless order statistic measuring the predictiveness of a variable in a survival tree. We derive the distribution of the minimal depth and use it for high-dimensional variable selection using random survival forests. In big p and small n problems (where p is the dimension and n is the sample size), the distribution of the minimal depth reveals a “ceiling effect” in which a tree simply cannot be grown deep enough to properly identify predictive variables. Motivated by this limitation, we develop a new regularized algorithm, termed RSF-Variable Hunting. This algorithm exploits maximal subtrees for effective variable selection under such scenarios. Several applications are presented demonstrating the methodology, including the problem of gene selection using microarray data. In this work we focus only on survival settings, although our methodology also applies to other random forests applications, including regression and classification settings. All examples presented here use the R-software package randomSurvivalForest.",
"title": ""
},
{
"docid": "fc0327de912ec8ef6ca33467d34bcd9e",
"text": "In this paper, a progressive fingerprint image compression (for storage or transmission) using edge detection scheme is adopted. The image is decomposed into two components. The first component is the primary component, which contains the edges, the other component is the secondary component, which contains the textures and the features. In this paper, a general grasp for the image is reconstructed in the first stage at a bit rate of 0.0223 bpp for Sample (1) and 0.0245 bpp for Sample (2) image. The quality of the reconstructed images is competitive to the 0.75 bpp target bit set by FBI standard. Also, the compression ratio and the image quality of this algorithm is competitive to other existing methods given in the literature [6]-[9]. The compression ratio for our algorithm is about 45:1 (0.180 bpp).",
"title": ""
},
{
"docid": "484da9ea27df49d6a8a5ff5af884c433",
"text": "In this paper, we argue that any effort to understand the state of the Information Systems field has to view IS research as a series of normative choices and value judgments about the ends of research. To assist a systematic questioning of the various ends of IS research, we propose a pragmatic framework that explores the choices IS researchers make around theories and methodologies, ethical methods of conduct, desirable outcomes, and the long-term impact of the research beyond a single site and topic area. We illustrate our framework by considering and questioning the explicit and implicit choices of topics, design and execution, and the representation of knowledge in experimental research—research often considered to be largely beyond value judgments and power relations. We conclude with the implications of our pragmatic framework by proposing practical questions for all IS researchers to consider in making choices about relevant topics, design and execution, and representation of findings in their research.",
"title": ""
},
{
"docid": "e4cefd3932ea07682e4eef336dda278b",
"text": "Rubinstein-Taybi syndrome (RSTS) is a developmental disorder characterized by a typical face and distal limbs abnormalities, intellectual disability, and a vast number of other features. Two genes are known to cause RSTS, CREBBP in 60% and EP300 in 8-10% of clinically diagnosed cases. Both paralogs act in chromatin remodeling and encode for transcriptional co-activators interacting with >400 proteins. Up to now 26 individuals with an EP300 mutation have been published. Here, we describe the phenotype and genotype of 42 unpublished RSTS patients carrying EP300 mutations and intragenic deletions and offer an update on another 10 patients. We compare the data to 308 individuals with CREBBP mutations. We demonstrate that EP300 mutations cause a phenotype that typically resembles the classical RSTS phenotype due to CREBBP mutations to a great extent, although most facial signs are less marked with the exception of a low-hanging columella. The limb anomalies are more similar to those in CREBBP mutated individuals except for angulation of thumbs and halluces which is very uncommon in EP300 mutated individuals. The intellectual disability is variable but typically less marked whereas the microcephaly is more common. All types of mutations occur but truncating mutations and small rearrangements are most common (86%). Missense mutations in the HAT domain are associated with a classical RSTS phenotype but otherwise no genotype-phenotype correlation is detected. Pre-eclampsia occurs in 12/52 mothers of EP300 mutated individuals versus in 2/59 mothers of CREBBP mutated individuals, making pregnancy with an EP300 mutated fetus the strongest known predictor for pre-eclampsia. © 2016 Wiley Periodicals, Inc.",
"title": ""
},
{
"docid": "c35eb92007a41be4b4011a5b83b05642",
"text": "Soil bacteria are very important in biogeochemical cycles and have been used for crop production for decades. Plant–bacterial interactions in the rhizosphere are the determinants of plant health and soil fertility. Free-living soil bacteria beneficial to plant growth, usually referred to as plant growth promoting rhizobacteria (PGPR), are capable of promoting plant growth by colonizing the plant root. PGPR are also termed plant health promoting rhizobacteria (PHPR) or nodule promoting rhizobacteria (NPR). These are associated with the rhizosphere, which is an important soil ecological environment for plant–microbe interactions. Symbiotic nitrogen-fixing bacteria include the cyanobacteria of the genera Rhizobium, Bradyrhizobium, Azorhizobium, Allorhizobium, Sinorhizobium and Mesorhizobium. Free-living nitrogen-fixing bacteria or associative nitrogen fixers, for example bacteria belonging to the species Azospirillum, Enterobacter, Klebsiella and Pseudomonas, have been shown to attach to the root and efficiently colonize root surfaces. PGPR have the potential to contribute to sustainable plant growth promotion. Generally, PGPR function in three different ways: synthesizing particular compounds for the plants, facilitating the uptake of certain nutrients from the soil, and lessening or preventing the plants from diseases. Plant growth promotion and development can be facilitated both directly and indirectly. Indirect plant growth promotion includes the prevention of the deleterious effects of phytopathogenic organisms. This can be achieved by the production of siderophores, i.e. small metal-binding molecules. Biological control of soil-borne plant pathogens and the synthesis of antibiotics have also been reported in several bacterial species. Another mechanism by which PGPR can inhibit phytopathogens is the production of hydrogen cyanide (HCN) and/or fungal cell wall degrading enzymes, e.g., chitinase and ß-1,3-glucanase. Direct plant growth promotion includes symbiotic and non-symbiotic PGPR which function through production of plant hormones such as auxins, cytokinins, gibberellins, ethylene and abscisic acid. Production of indole-3-ethanol or indole-3-acetic acid (IAA), the compounds belonging to auxins, have been reported for several bacterial genera. Some PGPR function as a sink for 1-aminocyclopropane-1-carboxylate (ACC), the immediate precursor of ethylene in higher plants, by hydrolyzing it into α-ketobutyrate and ammonia, and in this way promote root growth by lowering indigenous ethylene levels in the micro-rhizo environment. PGPR also help in solubilization of mineral phosphates and other nutrients, enhance resistance to stress, stabilize soil aggregates, and improve soil structure and organic matter content. PGPR retain more soil organic N, and other nutrients in the plant–soil system, thus reducing the need for fertilizer N and P and enhancing release of the nutrients.",
"title": ""
},
{
"docid": "1c72c4edd063a91e098da7cf2143d267",
"text": "/ n this chapter, we consider modesty and its importance. We begin by defining modesty, proceed to argue that being modest is hard work, and then lay out some reasons why this is so. Next, we make the case that modesty correlates with, and may even cause, several desirable outcomes—intrapersonal, interpersonal, and group. We conclude by attempting to reconcile the discrepancies between two empirical literatures, one suggesting that modesty entails social and mental health benefits, the other suggesting that self-enhancement does.",
"title": ""
},
{
"docid": "2c704a11e212b90520e92adf85696674",
"text": "The authors in this study examined the function and public reception of critical tweeting in online campaigns of four nationalist populist politicians during major national election campaigns. Using a mix of qualitative coding and case study inductive methods, we analyzed the tweets of Narendra Modi, Nigel Farage, Donald Trump, and Geert Wilders before the 2014 Indian general elections, the 2016 UK Brexit referendum, the 2016 US presidential election, and the 2017 Dutch general election, respectively. Our data show that Trump is a consistent outlier in terms of using critical language on Twitter when compared to Wilders, Farage, and Modi, but that all four leaders show significant investment in various forms of antagonistic messaging including personal insults, sarcasm, and labeling, and that these are rewarded online by higher retweet rates. Building on the work of Murray Edelman and his notion of a political spectacle, we examined Twitter as a performative space for critical rhetoric within the frame of nationalist politics. We found that cultural and political differences among the four settings also impact how each politician employs these tactics. Our work proposes that studies of social media spaces need to bring normative questions into traditional notions of collaboration. As we show here, political actors may benefit from in-group coalescence around antagonistic messaging, which while serving as a call to arms for online collaboration for those ideologically aligned, may on a societal level lead to greater polarization.",
"title": ""
},
{
"docid": "feeb51ad0c491c86a6018e92e728c3f0",
"text": "This paper discusses why traditional reinforcement learning methods, and algorithms applied to those models, result in poor performance in situated domains characterized by multiple goals, noisy state, and inconsistent reinforcement. We propose a methodology for designing reinforcement functions that take advantage of implicit domain knowledge in order to accelerate learning in such domains. The methodology involves the use of heterogeneous reinforcement functions and progress estimators, and applies to learning in domains with a single agent or with multiple agents. The methodology is experimentally validated on a group of mobile robots learning a foraging task.",
"title": ""
},
{
"docid": "d06393c467e19b0827eea5f86bbf4e98",
"text": "This paper presents the results of a systematic review of existing literature on the integration of agile software development with user-centered design approaches. It shows that a common process model underlies such approaches and discusses which artifacts are used to support the collaboration between designers and developers.",
"title": ""
},
{
"docid": "2a7002f1c3bf4460ca535966698c12b9",
"text": "In recent years considerable research efforts have been devoted to compression techniques of convolutional neural networks (CNNs). Many works so far have focused on CNN connection pruning methods which produce sparse parameter tensors in convolutional or fully-connected layers. It has been demonstrated in several studies that even simple methods can effectively eliminate connections of a CNN. However, since these methods make parameter tensors just sparser but no smaller, the compression may not transfer directly to acceleration without support from specially designed hardware. In this paper, we propose an iterative approach named Auto-balanced Filter Pruning, where we pre-train the network in an innovative auto-balanced way to transfer the representational capacity of its convolutional layers to a fraction of the filters, prune the redundant ones, then re-train it to restore the accuracy. In this way, a smaller version of the original network is learned and the floating-point operations (FLOPs) are reduced. By applying this method on several common CNNs, we show that a large portion of the filters can be discarded without obvious accuracy drop, leading to significant reduction of computational burdens. Concretely, we reduce the inference cost of LeNet-5 on MNIST, VGG-16 and ResNet-56 on CIFAR-10 by 95.1%, 79.7% and 60.9%, respectively.",
"title": ""
},
{
"docid": "6b3db3006f8314559bbbe41620466c6e",
"text": "Segmentation of anatomical structures in medical images is often based on a voxel/pixel classification approach. Deep learning systems, such as convolutional neural networks (CNNs), can infer a hierarchical representation of images that fosters categorization. We propose a novel system for voxel classification integrating three 2D CNNs, which have a one-to-one association with the xy, yz and zx planes of 3D image, respectively. We applied our method to the segmentation of tibial cartilage in low field knee MRI scans and tested it on 114 unseen scans. Although our method uses only 2D features at a single scale, it performs better than a state-of-the-art method using 3D multi-scale features. In the latter approach, the features and the classifier have been carefully adapted to the problem at hand. That we were able to get better results by a deep learning architecture that autonomously learns the features from the images is the main insight of this study.",
"title": ""
},
{
"docid": "5da804fa4c1474e27a1c91fcf5682e20",
"text": "We present an overview of Candide, a system for automatic translat ion of French text to English text. Candide uses methods of information theory and statistics to develop a probabili ty model of the translation process. This model, which is made to accord as closely as possible with a large body of French and English sentence pairs, is then used to generate English translations of previously unseen French sentences. This paper provides a tutorial in these methods, discussions of the training and operation of the system, and a summary of test results. 1. I n t r o d u c t i o n Candide is an experimental computer program, now in its fifth year of development at IBM, for translation of French text to Enghsh text. Our goal is to perform fuRy-automatic, high-quality text totext translation. However, because we are still far from achieving this goal, the program can be used in both fully-automatic and translator 's-assistant modes. Our approach is founded upon the statistical analysis of language. Our chief tools axe the source-channel model of communication, parametric probabili ty models of language and translation, and an assortment of numerical algorithms for training such models from examples. This paper presents elementary expositions of each of these ideas, and explains how they have been assembled to produce Caadide. In Section 2 we introduce the necessary ideas from information theory and statistics. The reader is assumed to know elementary probabili ty theory at the level of [1]. In Sections 3 and 4 we discuss our language and translation models. In Section 5 we describe the operation of Candide as it translates a French document. In Section 6 we present results of our internal evaluations and the AB.PA Machine Translation Project evaluations. Section 7 is a summary and conclusion. 2 . Stat is t ical Trans la t ion Consider the problem of translating French text to English text. Given a French sentence f , we imagine that it was originally rendered as an equivalent Enghsh sentence e. To obtain the French, the Enghsh was t ransmit ted over a noisy communication channel, which has the curious property that English sentences sent into it emerge as their French translations. The central assumption of Candide's design is that the characteristics of this channel can be determined experimentally, and expressed mathematically. *Current address: Renaissance Technologies, Stony Brook, NY ~ English-to-French I f e Channel \" _[ French-to-English -] Decoder 6 Figure 1: The Source-Channel Formalism of Translation. Here f is the French text to be translated, e is the putat ive original English rendering, and 6 is the English translation. This formalism can be exploited to yield French-to-English translations as follows. Let us write P r (e I f ) for the probability that e was the original English rendering of the French f. Given a French sentence f, the problem of automatic translation reduces to finding the English sentence tha t maximizes P.r(e I f) . That is, we seek 6 = argmsx e Pr (e I f) . By virtue of Bayes' Theorem, we have = argmax Pr(e If ) = argmax Pr(f I e)Pr(e) (1) e e The term P r ( f l e ) models the probabili ty that f emerges from the channel when e is its input. We call this function the translation model; its domain is all pairs (f, e) of French and English word-strings. The term Pr (e ) models the a priori probability that e was supp led as the channel input. We call this function the language model. 
Each of these fac tors the translation model and the language model independent ly produces a score for a candidate English translat ion e. The translation model ensures that the words of e express the ideas of f, and the language model ensures that e is a grammatical sentence. Candide sehcts as its translat ion the e that maximizes their product. This discussion begs two impor tant questions. First , where do the models P r ( f [ e) and Pr (e ) come from? Second, even if we can get our hands on them, how can we search the set of all English strings to find 6? These questions are addressed in the next two sections. 2.1. P robab i l i ty Models We begin with a brief detour into probabili ty theory. A probability model is a mathematical formula that purports to express the chance of some observation. A parametric model is a probability model with adjustable parameters, which can be changed to make the model bet ter match some body of data. Let us write c for a body of da ta to be modeled, and 0 for a vector of parameters. The quanti ty Prs (c ) , computed according to some formula involving c and 0, is called the hkelihood 157 [Human Language Technology, Plainsboro, 1994]",
"title": ""
},
{
"docid": "b13d4d5253a116153778d0f343bf76d7",
"text": "OBJECTIVES\nThe purpose of this study was to investigate the effect of dynamic soft tissue mobilisation (STM) on hamstring flexibility in healthy male subjects.\n\n\nMETHODS\nForty five males volunteered to participate in a randomised, controlled single blind design study. Volunteers were randomised to either control, classic STM, or dynamic STM intervention. The control group was positioned prone for 5 min. The classic STM group received standard STM techniques performed in a neutral prone position for 5 min. The dynamic STM group received all elements of classic STM followed by distal to proximal longitudinal strokes performed during passive, active, and eccentric loading of the hamstring. Only specific areas of tissue tightness were treated during the dynamic phase. Hamstring flexibility was quantified as hip flexion angle (HFA) which was the difference between the total range of straight leg raise and the range of pelvic rotation. Pre- and post-testing was conducted for the subjects in each group. A one-way ANCOVA followed by pairwise post-hoc comparisons was used to determine whether change in HFA differed between groups. The alpha level was set at 0.05.\n\n\nRESULTS\nIncrease in hamstring flexibility was significantly greater in the dynamic STM group than either the control or classic STM groups with mean (standard deviation) increase in degrees in the HFA measures of 4.7 (4.8), -0.04 (4.8), and 1.3 (3.8), respectively.\n\n\nCONCLUSIONS\nDynamic soft tissue mobilisation (STM) significantly increased hamstring flexibility in healthy male subjects.",
"title": ""
},
{
"docid": "0e7da1ef24306eea2e8f1193301458fe",
"text": "We consider the problem of object figure-ground segmentation when the object categories are not available during training (i.e. zero-shot). During training, we learn standard segmentation models for a handful of object categories (called “source objects”) using existing semantic segmentation datasets. During testing, we are given images of objects (called “target objects”) that are unseen during training. Our goal is to segment the target objects from the background. Our method learns to transfer the knowledge from the source objects to the target objects. Our experimental results demonstrate the effectiveness of our approach.",
"title": ""
},
{
"docid": "b8334d21af0d511b13dcaf27b6916dc5",
"text": "Almost all of today’s knowledge is stored in databases and thus can only be accessed with the help of domain specific query languages, strongly limiting the number of people which can access the data. In this work, we demonstrate an end-to-end trainable question answering (QA) system that allows a user to query an external NoSQL database by using natural language. A major challenge of such a system is the non-differentiability of database operations which we overcome by applying policy-based reinforcement learning. We evaluate our approach on Facebook’s bAbI Movie Dialog dataset and achieve a competitive score of 84.2% compared to several benchmark models. We conclude that our approach excels with regard to real-world scenarios where knowledge resides in external databases and intermediate labels are too costly to gather for non-end-to-end trainable QA systems.",
"title": ""
},
{
"docid": "b8e2bd6e7a852f3995813397920ababf",
"text": "ion of the a-proton from acetyl-CoA by Asp375, creating the enolate form. It was long believed that this was converted into an enol (e.g., by proton transfer from His274), but several computer modeling studies (in particular using high-level QM/MM methods) indicated that the enolate is the true transient intermediate species in the enzyme reaction (e.g., Mulholland et al. 2000; Van der Kamp et al. 2010). The enolate form is stabilized by electrostatic interactions in the enzyme active site, which include conventional hydrogen bonds from His274 and a conserved water molecule to the enolate oxygen (Fig. 2); no “lowbarrier hydrogen bonds” are involved (Mulholland et al. 2000). When the enolate intermediate is formed, the carbonyl carbon of oxaloacetate can undergo a nucleophilic attack. Citryl-CoA is formed as an intermediate, which requires proton donation. Initially, it was suggested that His320 donated the proton, but high-level QM/MM studies indicate that donation by Arg329 is most likely (Van der Kamp et al. 2008) (Fig. 2). This unusual role of an arginine as proton donor probably prevents overstabilization of the citryl-CoA intermediate and may trigger opening of the enzyme active site (vide supra), which is likely to be important for hydrolysis. Through its involvement in catalysis, Arg329 thereby is proposed to provide a mechanism for coupling condensation and hydrolysis in citrate synthase, and for coupling the chemical and conformational changes during the catalytic cycle (Van der Kamp et al. 2008). Citryl-CoA subsequently undergoes hydrolysis to form citrate and CoA-SH. Asp375 is implicated to play a role in this step, but the precise mechanism for this step is as yet unknown. The breaking of the thioester-linkage is energetically very favorable, which helps to drive the reaction in the forward direction, making it possible for the citric acid cycle to turn over, even with the typically low concentration of oxaloacetate in vivo (Voet and Voet 2011).",
"title": ""
},
{
"docid": "02a276b26400fe37804298601b16bc13",
"text": "Over the years, different meanings have been associated with the word consistency in the distributed systems community. While in the ’80s “consistency” typically meant strong consistency, later defined also as linearizability, in recent years, with the advent of highly available and scalable systems, the notion of “consistency” has been at the same time both weakened and blurred.\n In this article, we aim to fill the void in the literature by providing a structured and comprehensive overview of different consistency notions that appeared in distributed systems, and in particular storage systems research, in the last four decades. We overview more than 50 different consistency notions, ranging from linearizability to eventual and weak consistency, defining precisely many of these, in particular where the previous definitions were ambiguous. We further provide a partial order among different consistency predicates, ordering them by their semantic “strength,” which we believe will be useful in future research. Finally, we map the consistency semantics to different practical systems and research prototypes.\n The scope of this article is restricted to non-transactional semantics, that is, those that apply to single storage object operations. As such, our article complements the existing surveys done in the context of transactional, database consistency semantics.",
"title": ""
},
{
"docid": "a9a22c9c57e9ba8c3deefbea689258d5",
"text": "Functional neuroimaging studies have shown that romantic love and maternal love are mediated by regions specific to each, as well as overlapping regions in the brain's reward system. Nothing is known yet regarding the neural underpinnings of unconditional love. The main goal of this functional magnetic resonance imaging study was to identify the brain regions supporting this form of love. Participants were scanned during a control condition and an experimental condition. In the control condition, participants were instructed to simply look at a series of pictures depicting individuals with intellectual disabilities. In the experimental condition, participants were instructed to feel unconditional love towards the individuals depicted in a series of similar pictures. Significant loci of activation were found, in the experimental condition compared with the control condition, in the middle insula, superior parietal lobule, right periaqueductal gray, right globus pallidus (medial), right caudate nucleus (dorsal head), left ventral tegmental area and left rostro-dorsal anterior cingulate cortex. These results suggest that unconditional love is mediated by a distinct neural network relative to that mediating other emotions. This network contains cerebral structures known to be involved in romantic love or maternal love. Some of these structures represent key components of the brain's reward system.",
"title": ""
},
{
"docid": "e0a08bac6769382c3168922bdee1939d",
"text": "This paper presents the state of art research progress on multilingual multi-document summarization. Our method utilizes hLDA (hierarchical Latent Dirichlet Allocation) algorithm to model the documents firstly. A new feature is proposed from the hLDA modeling results, which can reflect semantic information to some extent. Then it combines this new feature with different other features to perform sentence scoring. According to the results of sentence score, it extracts candidate summary sentences from the documents to generate a summary. We have also attempted to verify the effectiveness and robustness of the new feature through experiments. After the comparison with other summarization methods, our method reveals better performance in some respects.",
"title": ""
}
] |
scidocsrr
|
83f108f5bfb5b010755739fa5b05a995
|
Continuum robots for space applications based on layer-jamming scales with stiffening capability
|
[
{
"docid": "42ed573d8e3fbbb9e178c6cfceccc996",
"text": "We introduce a new method for synthesizing kinematic relationships for a general class of continuous backbone, or continuum , robots. The resulting kinematics enable real-time task and shape control by relating workspace (Cartesian) coordinates to actuator inputs, such as tendon lengths or pneumatic pressures, via robot shape coordinates. This novel approach, which carefully considers physical manipulator constraints, avoids artifacts of simplifying assumptions associated with previous approaches, such as the need to fit the resulting solutions to the physical robot. It is applicable to a wide class of existing continuum robots and models extension, as well as bending, of individual sections. In addition, this approach produces correct results for orientation, in contrast to some previously published approaches. Results of real-time implementations on two types of spatial multisection continuum manipulators are reported.",
"title": ""
},
{
"docid": "be1ac4321c710c325ed4ad5dae927b6c",
"text": "Current work at NASA's Johnson Space Center is focusing on the identification and design of novel robotic archetypes to fill roles complimentary to current space robots during in-space assembly and maintenance tasks. Tendril, NASA's latest robot designed for minimally invasive inspection, is one system born of this effort. Inspired by the biology of snakes, tentacles, and climbing plants, the Tendril robot is a long slender manipulator that can extend deep into crevasses and under thermal blankets to inspect areas largely inaccessible by conventional means. The design of the Tendril, with its multiple bending segments and 1 cm diameter, also serves as an initial step in exploring the whole body control known to continuum robots coupled with the small scale and dexterity found in medical and commercial minimally invasive devices. An overview of Tendril's design is presented along with preliminary results from testing that seeks to improve Tendril's performance through an iterative design process",
"title": ""
},
{
"docid": "be749e59367ee1033477bb88503032cf",
"text": "This paper describes the results of field trials and associated testing of the OctArm series of multi-section continuous backbone \"continuum\" robots. This novel series of manipulators has recently (Spring 2005) undergone a series of trials including open-air and in-water field tests. Outcomes of the trials, in which the manipulators demonstrated the ability for adaptive and novel manipulation in challenging environments, are described. Implications for the deployment of continuum robots in a variety of applications are discussed",
"title": ""
},
{
"docid": "8bb465b2ec1f751b235992a79c6f7bf1",
"text": "Continuum robotics has rapidly become a rich and diverse area of research, with many designs and applications demonstrated. Despite this diversity in form and purpose, there exists remarkable similarity in the fundamental simplified kinematic models that have been applied to continuum robots. However, this can easily be obscured, especially to a newcomer to the field, by the different applications, coordinate frame choices, and analytical formalisms employed. In this paper we review several modeling approaches in a common frame and notational convention, illustrating that for piecewise constant curvature, they produce identical results. This discussion elucidates what has been articulated in different ways by a number of researchers in the past several years, namely that constant-curvature kinematics can be considered as consisting of two separate submappings: one that is general and applies to all continuum robots, and another that is robot-specific. These mappings are then developed both for the singlesection and for the multi-section case. Similarly, we discuss the decomposition of differential kinematics (the robot’s Jacobian) into robot-specific and robot-independent portions. The paper concludes with a perspective on several of the themes of current research that are shaping the future of continuum robotics.",
"title": ""
}
] |
[
{
"docid": "8954672b2e2b6351abfde0747fd5d61c",
"text": "Sentiment Analysis (SA), an application of Natural Language processing (NLP), has been witnessed a blooming interest over the past decade. It is also known as opinion mining, mood extraction and emotion analysis. The basic in opinion mining is classifying the polarity of text in terms of positive (good), negative (bad) or neutral (surprise). Mood Extraction automates the decision making performed by human. It is the important aspect for capturing public opinion about product preferences, marketing campaigns, political movements, social events and company strategies. In addition to sentiment analysis for English and other European languages, this task is applied on various Indian languages like Bengali, Hindi, Telugu and Malayalam. This paper describes the survey on main approaches for performing sentiment extraction.",
"title": ""
},
{
"docid": "5188032027c67f0e91ed0681d4a871b4",
"text": "This paper defines an advanced methodology for modeling applications based on Data Mining methods that represents a logical framework for development of Data Mining applications. Methodology suggested here for Data Mining modeling process has been applied and tested through Data Mining applications for predicting Prepaid users churn in the telecom industry. The main emphasis of this paper is defining of a successful model for prediction of potential Prepaid churners, in which the most important part is to identify the very set of input variables that are high enough to make the prediction model precise and reliable. Several models have been created and compared on the basis of different Data Mining methods and algorithms (neural networks, decision trees, logistic regression). For the modeling examples we used WEKA analysis tool.",
"title": ""
},
{
"docid": "8ac9d212ee98c8dea54ead0bdd43052d",
"text": "This paper discusses two analytical methods used in estimating the equivalent thermal conductivity of impregnated electrical windings constructed with Litz wire. Both methods are based on a double-homogenisation approach consecutively employing the individual winding conductors and wire bundles. The first method is suitable for Litz wire with round-profiled enamel-coated conductors and round-shaped bundles; whereas the second method is tailored for compacted Litz wires with conductors and/or bundles having square or rectangular profiles. The work conducted herein expands upon established methods for cylindrical conductor forms [1], and develops an equivalent lumped-parameter thermal network for rectangular forms. This network derives analytical formulae which represents the winding's equivalent thermal conductivity and directly accounts for any thermal anisotropy. The estimates of equivalent thermal conductivity from theoretical, analytical and finite element (FE) methods have been supplemented with experimental data using impregnated winding samples and are shown to have good correlation.",
"title": ""
},
{
"docid": "175890538c681d55dfce51918c8a1909",
"text": "We recently reported that the brain showed greater responsiveness to some cognitive demands following total sleep deprivation (TSD). Specifically, verbal learning led to increased cerebral activation following TSD while arithmetic resulted in decreased activation. Here we report data from a divided attention task that combined verbal learning and arithmetic. Thirteen normal control subjects performed the task while undergoing functional magnetic resonance imaging (FMRI) scans after a normal night of sleep and following 35 h TSD. Behaviourally, subjects showed only modest impairments following TSD. With respect to cerebral activation, the results showed (a) increased activation in the prefrontal cortex and parietal lobes, particularly in the right hemisphere, following TSD, (b) activation in left inferior frontal gyrus correlated with increased subjective sleepiness after TSD, and (c) activation in bilateral parietal lobes correlated with the extent of intact memory performance after TSD. Many of the brain regions showing a greater response after TSD compared with normal sleep are thought to be involved in control of attention. These data imply that the divided attention task required more attentional resources (specifically, performance monitoring and sustained attention) following TSD than after normal sleep. Other neuroimaging results may relate to the verbal learning and/or arithmetic demands of the task. This is the first study to examine divided attention performance after TSD with neuroimaging and supports our previous suggestion that the brain may be more plastic during cognitive performance following TSD than previously thought.",
"title": ""
},
{
"docid": "89d283980d5a6d95d56a675f89ea823c",
"text": "Desynchronization between the master clock in the brain, which is entrained by (day) light, and peripheral organ clocks, which are mainly entrained by food intake, may have negative effects on energy metabolism. Bile acid metabolism follows a clear day/night rhythm. We investigated whether in rats on a normal chow diet the daily rhythm of plasma bile acids and hepatic expression of bile acid metabolic genes is controlled by the light/dark cycle or the feeding/fasting rhythm. In addition, we investigated the effects of high caloric diets and time-restricted feeding on daily rhythms of plasma bile acids and hepatic genes involved in bile acid synthesis. In experiment 1 male Wistar rats were fed according to three different feeding paradigms: food was available ad libitum for 24 h (ad lib) or time-restricted for 10 h during the dark period (dark fed) or 10 h during the light period (light fed). To allow further metabolic phenotyping, we manipulated dietary macronutrient intake by providing rats with a chow diet, a free choice high-fat-high-sugar diet or a free choice high-fat (HF) diet. In experiment 2 rats were fed a normal chow diet, but food was either available in a 6-meals-a-day (6M) scheme or ad lib. During both experiments, we measured plasma bile acid levels and hepatic mRNA expression of genes involved in bile acid metabolism at eight different time points during 24 h. Time-restricted feeding enhanced the daily rhythm in plasma bile acid concentrations. Plasma bile acid concentrations are highest during fasting and dropped during the period of food intake with all diets. An HF-containing diet changed bile acid pool composition, but not the daily rhythmicity of plasma bile acid levels. Daily rhythms of hepatic Cyp7a1 and Cyp8b1 mRNA expression followed the hepatic molecular clock, whereas for Shp expression food intake was leading. Combining an HF diet with feeding in the light/inactive period annulled CYp7a1 and Cyp8b1 gene expression rhythms, whilst keeping that of Shp intact. In conclusion, plasma bile acids and key genes in bile acid biosynthesis are entrained by food intake as well as the hepatic molecular clock. Eating during the inactivity period induced changes in the plasma bile acid pool composition similar to those induced by HF feeding.",
"title": ""
},
{
"docid": "85b169515b4e4b86117abcdd83f002ea",
"text": "While Bitcoin (Peer-to-Peer Electronic Cash) [Nak]solved the double spend problem and provided work withtimestamps on a public ledger, it has not to date extendedthe functionality of a blockchain beyond a transparent andpublic payment system. Satoshi Nakamoto's original referenceclient had a decentralized marketplace service which was latertaken out due to a lack of resources [Deva]. We continued withNakamoto's vision by creating a set of commercial-grade ser-vices supporting a wide variety of business use cases, includinga fully developed blockchain-based decentralized marketplace,secure data storage and transfer, and unique user aliases thatlink the owner to all services controlled by that alias.",
"title": ""
},
{
"docid": "3fc94de55342ff7560ed0c13a18e526c",
"text": "Linear optics with photon counting is a prominent candidate for practical quantum computing. The protocol by Knill, Laflamme, and Milburn 2001, Nature London 409, 46 explicitly demonstrates that efficient scalable quantum computing with single photons, linear optical elements, and projective measurements is possible. Subsequently, several improvements on this protocol have started to bridge the gap between theoretical scalability and practical implementation. The original theory and its improvements are reviewed, and a few examples of experimental two-qubit gates are given. The use of realistic components, the errors they induce in the computation, and how these errors can be corrected is discussed.",
"title": ""
},
{
"docid": "3e24de04f0b1892b27fc60bb8a405d0d",
"text": "A power factor (PF) corrected single stage, two-switch isolated zeta converter is proposed for arc welding. This modified zeta converter is having two switches and two clamping diodes on the primary side of a high-frequency transformer. This, in turn, results in reduced switch stress. The proposed converter is designed to operate in a discontinuous inductor current mode (DICM) to achieve inherent PF correction at the utility. The DICM operation substantially reduces the complexity of the control and effectively regulates the output dc voltage. The proposed converter offers several features, such as inherent overload current limit and fast parametrical response, to the load and source voltage conditions. This, in turn, results in an improved performance in terms of power quality indices and an enhanced weld bead quality. The proposed modified zeta converter is designed and its performance is simulated in the MATLAB/Simulink environment. Simulated results are also verified experimentally on a developed prototype of the converter. The performance of the system is investigated in terms of its input PF, displacement PF, total harmonic distortion of ac mains current, voltage regulation, and robustness to prove its efficacy in overall performance.",
"title": ""
},
{
"docid": "f65e55d992bff2ce881aaf197a734adf",
"text": "hypervisor as a nondeterministic sequential program prove invariant properties of individual ϋobjects and compose them 14 Phase1 Startup Phase2 Intercept Phase3 Exception Proofs HW initiated concurrent execution Concurrent execution HW initiated sequential execution Sequential execution Intro. Motivating. Ex. Impl. Verif. Results Perf. Concl. Architecture",
"title": ""
},
{
"docid": "50d27a921703202a5fb329d6f615d19f",
"text": "This paper proposes an analytically-based approach for the design of a miniaturized single-band and dual-band two-way Wilkinson power divider. This miniaturization is achieved by realizing the power divider's impedance transformers using slow wave structures. These slow wave structures are designed by periodically loading transmission lines with capacitances, which reduces the phase velocity of the propagating waves and hence engender higher electric lengths using smaller physical lengths. The dispersive analysis of the slow wave structure used is included in the design approach to ensure a smooth nondispersive transmission line operation in the case of dual-band applications. The design methodology is validated with the design of a single-band, reduced size, two-way Wilkinson power divider at 850 and 620 MHz. An approximate length reduction of 25%-35% is achieved with this technique. For dual-band applications, this paper describes the design of a reduced size, two-way Wilkinson power divider for dual-band global system for mobile communications and code division multiple access applications at 850 and 1960 MHz, respectively. An overall reduction factor of 28%, in terms of chip area occupied by the circuit, is achieved. The electromagnetic simulation and experimental results validate the design approach. The circuit is realized with microstrip technology, which can be easily fabricated using conventional printed circuit processes.",
"title": ""
},
{
"docid": "60a0c63f6c1166970d440c1302ca0dbe",
"text": "In vehicle routing problems with time windows (VRPTW), a set of vehicles with limits on capacity and travel time are available to service a set of customers with demands and earliest and latest time for servicing. The objective is to minimize the cost of servicing the set of customers without being tardy or exceeding the capacity or travel time of the vehicles. As finding a feasible solution to the problem is NP-complete, search methods based upon heuristics are most promising for problems of practical size. In this paper we describe GIDEON, a genetic algorithm heuristic for solving the VRPTW. GIDEON consists of a global customer clustering method and a local post-optimization method. The global customer clustering method uses an adaptive search strategy based upon population genetics, to assign vehicles to customers. The best solution obtained from the clustering method is improved by a local post-optimization method. The synergy a between global adaptive clustering method and a local route optimization method produce better results than those obtained by competing heuristic search methods. On a standard set of 56 VRPTW problems obtained from the literature the GIDEON system obtained 41 new best known solutions.",
"title": ""
},
{
"docid": "b191b9829aac1c1e74022c33e2488bbd",
"text": "We investigated the normal and parallel ground reaction forces during downhill and uphill running. Our rationale was that these force data would aid in the understanding of hill running injuries and energetics. Based on a simple spring-mass model, we hypothesized that the normal force peaks, both impact and active, would increase during downhill running and decrease during uphill running. We anticipated that the parallel braking force peaks would increase during downhill running and the parallel propulsive force peaks would increase during uphill running. But, we could not predict the magnitude of these changes. Five male and five female subjects ran at 3m/s on a force treadmill mounted on the level and on 3 degrees, 6 degrees, and 9 degrees wedges. During downhill running, normal impact force peaks and parallel braking force peaks were larger compared to the level. At -9 degrees, the normal impact force peaks increased by 54%, and the parallel braking force peaks increased by 73%. During uphill running, normal impact force peaks were smaller and parallel propulsive force peaks were larger compared to the level. At +9 degrees, normal impact force peaks were absent, and parallel propulsive peaks increased by 75%. Neither downhill nor uphill running affected normal active force peaks. Combined with previous biomechanics studies, our normal impact force data suggest that downhill running substantially increases the probability of overuse running injury. Our parallel force data provide insight into past energetic studies, which show that the metabolic cost increases during downhill running at steep angles.",
"title": ""
},
{
"docid": "b5c2d3295cd563983c81e048e59d6541",
"text": "In this paper, a real-time Human-Computer Interaction (HCI) based on the hand data glove and K-NN classifier for gesture recognition is proposed. HCI is moving more and more natural and intuitive way to be used. One of the important parts of our body is our hand which is most frequently used for the Interaction in Digital Environment and thus complexity and flexibility of motion of hands are the research topics. To recognize these hand gestures more accurately and successfully data glove is used. Here, gloves are used to capture current position of the hand and the angles between the joints and then these features are used to classify the gestures using K-NN classifier. The gestures classified are categorized as clicking, rotating, dragging, pointing and ideal position. Recognizing these gestures relevant actions are taken, such as air writing and 3D sketching by tracking the path helpful in virtual augmented reality (VAR). The results show that glove used for interaction is better than normal static keyboard and mouse as the interaction process is more accurate and natural in dynamic environment with no distance limitations. Also it enhances the user’s interaction and immersion feeling.",
"title": ""
},
{
"docid": "e2280d602e8110dbaf512d6e187ecd9f",
"text": "There are problems in the delimitation/identification of Plectranthus species and this investigation aims to contribute toward solving such problems through structural and histochemical study of the trichomes. Considering the importance of P. zuluensis as restricted to semi-coastal forests of Natal that possess only two fertile stamens not four as the other species of this genus. The objective of this work was to study in detail the distribution, morphology and histochemistry of the foliar trichomes of this species using light and electron microscopy. Distribution and morphology of two types of non-glandular, capitate and peltate glandular trichomes are described on both leaf sides. This study provides a description of the different secretion modes of glandular trichomes. Results of histochemical tests showed a positive reaction to terpenoids, lipids, polysaccharides and phenolics in the glandular trichomes. We demonstrated that the presence, types and structure of glandular and non-glandular trichomes are important systematic criteria for the species delimitation in the genus.",
"title": ""
},
{
"docid": "37c2f0cface4943e6332f29d41ada5b0",
"text": "Although substantial research has explored the emergence of collective intelligence in real-time human-based collaborative systems, much of this work has focused on rigid scenarios such as the Prisoner’s Dilemma (PD). (Pinheiro et al., 2012; Santos et al., 2012). While such work is of great research value, there’s a growing need for a flexible real-world platform that fosters collective intelligence in authentic decision-making situations. This paper introduces a new platform called UNUM that allows groups of online users to collectively answer questions, make decisions, and resolve dilemmas by working together in unified dynamic systems. Modeled after biological swarms, the UNUM platform enables online groups to work in real-time synchrony, collaboratively exploring a decision-space and converging on preferred solutions in a matter of seconds. We call the process “social swarming” and early real-world testing suggests it has great potential for harnessing collective intelligence.",
"title": ""
},
{
"docid": "ee0d89ccd67acc87358fa6dd35f6b798",
"text": "Lessons learned from developing four graph analytics applications reveal good research practices and grand challenges for future research. The application domains include electric-power-grid analytics, social-network and citation analytics, text and document analytics, and knowledge domain analytics.",
"title": ""
},
{
"docid": "05cea038adce7f5ae2a09a7fd5e024a7",
"text": "The paper describes the use TMS320C5402 DSP for single channel active noise cancellation (ANC) in duct system. The canceller uses a feedback control topology and is designed to cancel narrowband periodic tones. The signal is processed with well-known filtered-X least mean square (filtered-X LMS) Algorithm in the digital signal processing. The paper describes the hardware and use chip support libraries for data streaming. The FXLMS algorithm is written in assembly language callable from C main program. The results obtained are compatible to the expected result in the literature available. The paper highlights the features of cancellation and analyzes its performance at different gain and frequency.",
"title": ""
},
{
"docid": "6a96678b14ec12cb4bb3db4e1c4c6d4e",
"text": "Emoticons are widely used to express positive or negative sentiment on Twitter. We report on a study with live users to determine whether emoticons are used to merely emphasize the sentiment of tweets, or whether they are the main elements carrying the sentiment. We found that the sentiment of an emoticon is in substantial agreement with the sentiment of the entire tweet. Thus, emoticons are useful as predictors of tweet sentiment and should not be ignored in sentiment classification. However, the sentiment expressed by an emoticon agrees with the sentiment of the accompanying text only slightly better than random. Thus, using the text accompanying emoticons to train sentiment models is not likely to produce the best results, a fact that we show by comparing lexicons generated using emoticons with others generated using simple textual features.",
"title": ""
},
{
"docid": "3fe2cb22ac6aa37d8f9d16dea97649c5",
"text": "The term biosensors encompasses devices that have the potential to quantify physiological, immunological and behavioural responses of livestock and multiple animal species. Novel biosensing methodologies offer highly specialised monitoring devices for the specific measurement of individual and multiple parameters covering an animal's physiology as well as monitoring of an animal's environment. These devices are not only highly specific and sensitive for the parameters being analysed, but they are also reliable and easy to use, and can accelerate the monitoring process. Novel biosensors in livestock management provide significant benefits and applications in disease detection and isolation, health monitoring and detection of reproductive cycles, as well as monitoring physiological wellbeing of the animal via analysis of the animal's environment. With the development of integrated systems and the Internet of Things, the continuously monitoring devices are expected to become affordable. The data generated from integrated livestock monitoring is anticipated to assist farmers and the agricultural industry to improve animal productivity in the future. The data is expected to reduce the impact of the livestock industry on the environment, while at the same time driving the new wave towards the improvements of viable farming techniques. This review focusses on the emerging technological advancements in monitoring of livestock health for detailed, precise information on productivity, as well as physiology and well-being. Biosensors will contribute to the 4th revolution in agriculture by incorporating innovative technologies into cost-effective diagnostic methods that can mitigate the potentially catastrophic effects of infectious outbreaks in farmed animals.",
"title": ""
},
{
"docid": "088d6f1cd3c19765df8a16cd1a241d18",
"text": "Legged robots need to be able to classify and recognize different terrains to adapt their gait accordingly. Recent works in terrain classification use different types of sensors (like stereovision, 3D laser range, and tactile sensors) and their combination. However, such sensor systems require more computing power, produce extra load to legged robots, and/or might be difficult to install on a small size legged robot. In this work, we present an online terrain classification system. It uses only a monocular camera with a feature-based terrain classification algorithm which is robust to changes in illumination and view points. For this algorithm, we extract local features of terrains using either Scale Invariant Feature Transform (SIFT) or Speed Up Robust Feature (SURF). We encode the features using the Bag of Words (BoW) technique, and then classify the words using Support Vector Machines (SVMs) with a radial basis function kernel. We compare this feature-based approach with a color-based approach on the Caltech-256 benchmark as well as eight different terrain image sets (grass, gravel, pavement, sand, asphalt, floor, mud, and fine gravel). For terrain images, we observe up to 90% accuracy with the feature-based approach. Finally, this online terrain classification system is successfully applied to our small hexapod robot AMOS II. The output of the system providing terrain information is used as an input to its neural locomotion control to trigger an energy-efficient gait while traversing different terrains.",
"title": ""
}
] |
scidocsrr
|
975434c682886d981f6ec79602811241
|
Interest-based personalized search
|
[
{
"docid": "1272563e64ca327aba1be96f2e045c30",
"text": "Current Web search engines are built to serve all users, independent of the special needs of any individual user. Personalization of Web search is to carry out retrieval for each user incorporating his/her interests. We propose a novel technique to learn user profiles from users' search histories. The user profiles are then used to improve retrieval effectiveness in Web search. A user profile and a general profile are learned from the user's search history and a category hierarchy, respectively. These two profiles are combined to map a user query into a set of categories which represent the user's search intention and serve as a context to disambiguate the words in the user's query. Web search is conducted based on both the user query and the set of categories. Several profile learning and category mapping algorithms and a fusion algorithm are provided and evaluated. Experimental results indicate that our technique to personalize Web search is both effective and efficient.",
"title": ""
}
] |
[
{
"docid": "62c71a412a8b715e2fda64cd8b6a2a66",
"text": "We study the design of local algorithms for massive graphs. A local graph algorithm is one that finds a solution containing or near a given vertex without looking at the whole graph. We present a local clustering algorithm. Our algorithm finds a good cluster—a subset of vertices whose internal connections are significantly richer than its external connections—near a given vertex. The running time of our algorithm, when it finds a nonempty local cluster, is nearly linear in the size of the cluster it outputs. The running time of our algorithm also depends polylogarithmically on the size of the graph and polynomially on the conductance of the cluster it produces. Our clustering algorithm could be a useful primitive for handling massive graphs, such as social networks and webgraphs. As an application of this clustering algorithm, we present a partitioning algorithm that finds an approximate sparsest cut with nearly optimal balance. Our algorithm takes time nearly linear in the number edges of the graph. Using the partitioning algorithm of this paper, we have designed a nearly linear time algorithm for constructing spectral sparsifiers of graphs, which we in turn use in a nearly linear time algorithm for solving linear systems in symmetric, diagonally dominant matrices. The linear system solver also leads to a nearly linear time algorithm for approximating the secondsmallest eigenvalue and corresponding eigenvector of the Laplacian matrix of a graph. These other results are presented in two companion papers.",
"title": ""
},
{
"docid": "9be252c72f5f11a391ea180baca6b6dd",
"text": "In a typical cloud computing diverse facilitating components like hardware, software, firmware, networking, and services integrate to offer different computational facilities, while Internet or a private network (or VPN) provides the required backbone to deliver the services. The security risks to the cloud system delimit the benefits of cloud computing like “on-demand, customized resource availability and performance management”. It is understood that current IT and enterprise security solutions are not adequate to address the cloud security issues. This paper explores the challenges and issues of security concerns of cloud computing through different standard and novel solutions. We propose analysis and architecture for incorporating different security schemes, techniques and protocols for cloud computing, particularly in Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS) systems. The proposed architecture is generic in nature, not dependent on the type of cloud deployment, application agnostic and is not coupled with the underlying backbone. This would facilitate to manage the cloud system more effectively and provide the administrator to include the specific solution to counter the threat. We have also shown using experimental data how a cloud service provider can estimate the charging based on the security service it provides and security-related cost-benefit analysis can be estimated.",
"title": ""
},
{
"docid": "28ff3b1e9f29d7ae4b57f6565330cde5",
"text": "To identify the effects of core stabilization exercise on the Cobb angle and lumbar muscle strength of adolescent patients with idiopathic scoliosis. Subjects in the present study consisted of primary school students who were confirmed to have scoliosis on radiologic examination performed during their visit to the National Fitness Center in Seoul, Korea. Depending on whether they participated in a 12-week core stabilization exercise program, subjects were divided into the exercise (n=14, age 12.71±0.72 years) or control (n=15, age 12.80±0.86 years) group. The exercise group participated in three sessions of core stabilization exercise per week for 12 weeks. The Cobb angle, flexibility, and lumbar muscle strength tests were performed before and after core stabilization exercise. Repeated-measure two-way analysis of variance was performed to compare the treatment effects between the exercise and control groups. There was no significant difference in thoracic Cobb angle between the groups. The exercise group had a significant decrease in the lumbar Cobb angle after exercise compared to before exercise (P<0.001). The exercise group also had a significant increase in lumbar flexor and extensor muscles strength after exercise compared to before exercise (P<0.01 and P<0.001, respectively). Core stabilization exercise can be an effective therapeutic exercise to decrease the Cobb angle and improve lumbar muscle strength in adolescents with idiopathic scoliosis.",
"title": ""
},
{
"docid": "445b3f542e785425cd284ad556ef825a",
"text": "Despite the success of neural networks (NNs), there is still a concern among many over their “black box” nature. Why do they work? Yes, we have Universal Approximation Theorems, but these concern statistical consistency, a very weak property, not enough to explain the exceptionally strong performance reports of the method. Here we present a simple analytic argument that NNs are in fact essentially polynomial regression models, with the effective degree of the polynomial growing at each hidden layer. This view will have various implications for NNs, e.g. providing an explanation for why convergence problems arise in NNs, and it gives rough guidance on avoiding overfitting. In addition, we use this phenomenon to predict and confirm a multicollinearity property of NNs not previously reported in the literature. Most importantly, given this loose correspondence, one may choose to routinely use polynomial models instead of NNs, thus avoiding some major problems of the latter, such as having to set many tuning parameters and dealing with convergence issues. We present a number of empirical results; in each case, the accuracy of the polynomial approach matches or exceeds that of NN approaches. A many-featured, open-source software package, polyreg, is available. 1 ar X iv :1 80 6. 06 85 0v 2 [ cs .L G ] 2 9 Ju n 20 18 1 The Mystery of NNs Neural networks (NNs), especially in the currently popular form of many-layered deep learning networks (DNNs), have become many analysts’ go-to method for predictive analytics. Indeed, in the popular press, the term artificial intelligence has become virtually synonymous with NNs.1 Yet there is a feeling among many in the community that NNs are “black boxes”; just what is going on inside? Various explanations have been offered for the success of NNs, a prime example being [Shwartz-Ziv and Tishby(2017)]. However, the present paper will present significant new insights. 2 Contributions of This Paper The contribution of the present work will be as follows:2 (a) We will show that, at each layer of an NY, there is a rough correspondence to some fitted ordinary parametric polynomial regression (PR) model; in essence, NNs are a form of PR. We refer to this loose correspondence here as NNAEPR, Neural Nets Are Essentially Polynomial Models. (b) A very important aspect of NNAEPR is that the degree of the approximating polynomial increases with each hidden layer. In other words, our findings should not be interpreted as merely saying that the end result of an NN can be approximated by some polynomial. (c) We exploit NNAEPR to learn about general properties of NNs via our knowledge of the properties of PR. This will turn out to provide new insights into aspects such as the numbers of hidden layers and numbers of units per layer, as well as how convergence problems arise. For example, we use NNAEPR to predict and confirm a multicollinearity property of NNs not previous reported in the literature. (d) Property (a) suggests that in many applications, one might simply fit a polynomial model in the first place, bypassing NNs. This would have the advantage of avoiding the problems of choosing tuning parameters (the polynomial approach has just one, the degree), nonconvergence and so on. 1There are many different variants of NNs, but for the purposes of this paper, we can consider them as a group. 2 Author listing is alphabetical by surname. 
XC wrote the entire core code for the polyreg package; NM conceived of the main ideas underlying the work, developed the informal mathematical material and wrote support code; BK assembled the brain and kidney cancer data, wrote some of the support code, and provided domain expertise guidance for genetics applications; PM wrote extensive support code, including extending his kerasformula package, and provided specialized expertise on NNs. All authors conducted data experiments.",
"title": ""
},
{
"docid": "4a87e61106125ffdd49c42517ce78b87",
"text": "Due to network effects and switching costs, platform providers often become entrenched. To dislodge them, entrants generally must offer revolutionary products. We explore a second path to platform leadership change that does not rely on Schumpeterian creative destruction: platform envelopment. By leveraging common components and shared user relationships, one platform provider can move into another’s market, combining its own functionality with the target’s in a multi-platform bundle. Dominant firms otherwise sheltered from entry by standalone rivals may be vulnerable to an adjacent platform provider’s envelopment attack. We analyze conditions under which envelopment strategies are likely to succeed.",
"title": ""
},
{
"docid": "0075c4714b8e7bf704381d3a3722ab59",
"text": "This paper surveys the current state of the art in Natural Language Generation (nlg), defined as the task of generating text or speech from non-linguistic input. A survey of nlg is timely in view of the changes that the field has undergone over the past two decades, especially in relation to new (usually data-driven) methods, as well as new applications of nlg technology. This survey therefore aims to (a) give an up-to-date synthesis of research on the core tasks in nlg and the architectures adopted in which such tasks are organised; (b) highlight a number of recent research topics that have arisen partly as a result of growing synergies between nlg and other areas of artificial intelligence; (c) draw attention to the challenges in nlg evaluation, relating them to similar challenges faced in other areas of nlp, with an emphasis on different evaluation methods and the relationships between them.",
"title": ""
},
{
"docid": "e8eba986ab77d519ce8808b3d33b2032",
"text": "In this paper, an implementation of an extended target tracking filter using measurements from high-resolution automotive Radio Detection and Ranging (RADAR) is proposed. Our algorithm uses the Cartesian point measurements from the target's contour as well as the Doppler range rate provided by the RADAR to track a target vehicle's position, orientation, and translational and rotational velocities. We also apply a Gaussian Process (GP) to model the vehicle's shape. To cope with the nonlinear measurement equation, we implement an Extended Kalman Filter (EKF) and provide the necessary derivatives for the Doppler measurement. We then evaluate the effectiveness of incorporating the Doppler rate on simulations and on 2 sets of real data.",
"title": ""
},
{
"docid": "741dbabfa94b787f31bccf12471724a4",
"text": "In this paper is proposed a Takagi-Sugeno Fuzzy controller (TSF) applied to the direct torque control scheme with space vector modulation. In conventional DTC-SVM scheme, two PI controllers are used to generate the reference stator voltage vector. To improve the drawback of this conventional DTC-SVM scheme is proposed the TSF controller to substitute both PI controllers. The proposed controller calculates the reference quadrature components of the stator voltage vector. The rule base for the proposed controller is defined in function of the stator flux error and the electromagnetic torque error using trapezoidal and triangular membership functions. Constant switching frequency and low torque ripple are obtained using space vector modulation technique. Performance of the proposed DTC-SVM with TSF controller is analyzed in terms of several performance measures such as rise time, settling time and torque ripple considering different operating conditions. The simulation results shown that the proposed scheme ensure fast torque response and low torque ripple validating the proposed scheme.",
"title": ""
},
{
"docid": "c2177b7e3cdca3800b3d465229835949",
"text": "BACKGROUND\nIn 2010, the World Health Organization published benchmarks for training in osteopathy in which osteopathic visceral techniques are included. The purpose of this study was to identify and critically appraise the scientific literature concerning the reliability of diagnosis and the clinical efficacy of techniques used in visceral osteopathy.\n\n\nMETHODS\nDatabases MEDLINE, OSTMED.DR, the Cochrane Library, Osteopathic Research Web, Google Scholar, Journal of American Osteopathic Association (JAOA) website, International Journal of Osteopathic Medicine (IJOM) website, and the catalog of Académie d'ostéopathie de France website were searched through December 2017. Only inter-rater reliability studies including at least two raters or the intra-rater reliability studies including at least two assessments by the same rater were included. For efficacy studies, only randomized-controlled-trials (RCT) or crossover studies on unhealthy subjects (any condition, duration and outcome) were included. Risk of bias was determined using a modified version of the quality appraisal tool for studies of diagnostic reliability (QAREL) in reliability studies. For the efficacy studies, the Cochrane risk of bias tool was used to assess their methodological design. Two authors performed data extraction and analysis.\n\n\nRESULTS\nEight reliability studies and six efficacy studies were included. The analysis of reliability studies shows that the diagnostic techniques used in visceral osteopathy are unreliable. Regarding efficacy studies, the least biased study shows no significant difference for the main outcome. The main risks of bias found in the included studies were due to the absence of blinding of the examiners, an unsuitable statistical method or an absence of primary study outcome.\n\n\nCONCLUSIONS\nThe results of the systematic review lead us to conclude that well-conducted and sound evidence on the reliability and the efficacy of techniques in visceral osteopathy is absent.\n\n\nTRIAL REGISTRATION\nThe review is registered PROSPERO 12th of December 2016. Registration number is CRD4201605286 .",
"title": ""
},
{
"docid": "a10a51d1070396e1e8a8b186af18f87d",
"text": "An upcoming trend for automobile manufacturers is to provide firmware updates over the air (FOTA) as a service. Since the firmware controls the functionality of a vehicle, security is important. To this end, several secure FOTA protocols have been developed. However, the secure FOTA protocols only solve the security for the transmission of the firmware binary. Once the firmware is downloaded, an attacker could potentially modify its contents before it is flashed to the corresponding ECU'S ROM. Thus, there is a need to extend the flashing procedure to also verify that the correct firmware has been flashed to the ECU. We present a framework for self-verification of firmware updates over the air. We include a verification code in the transmission to the vehicle, and after the firmware has been flashed, the integrity of the memory contents can be verified using the verification code. The verification procedure entails only simple hash functions and is thus suitable for the limited resources in the vehicle. Virtualization techniques are employed to establish a trusted computing base in the ECU, which is then used to perform the verification. The proposed framework allows the ECU itself to perform self-verification and can thus ensure the successful flashing of the firmware.",
"title": ""
},
{
"docid": "bad5040a740421b3079c3fa7bf598d71",
"text": "Deep Convolutional Neural Networks (CNNs) are a special type of Neural Networks, which have shown state-of-the-art results on various competitive benchmarks. The powerful learning ability of deep CNN is largely achieved with the use of multiple non-linear feature extraction stages that can automatically learn hierarchical representation from the data. Availability of a large amount of data and improvements in the hardware processing units have accelerated the research in CNNs and recently very interesting deep CNN architectures are reported. The recent race in deep CNN architectures for achieving high performance on the challenging benchmarks has shown that the innovative architectural ideas, as well as parameter optimization, can improve the CNN performance on various vision-related tasks. In this regard, different ideas in the CNN design have been explored such as use of different activation and loss functions, parameter optimization, regularization, and restructuring of processing units. However, the major improvement in representational capacity is achieved by the restructuring of the processing units. Especially, the idea of using a block as a structural unit instead of a layer is gaining substantial appreciation. This survey thus focuses on the intrinsic taxonomy present in the recently reported CNN architectures and consequently, classifies the recent innovations in CNN architectures into seven different categories. These seven categories are based on spatial exploitation, depth, multipath, width, feature map exploitation, channel boosting and attention. Additionally, it covers the elementary understanding of the CNN components and sheds light on the current challenges and applications of CNNs.",
"title": ""
},
{
"docid": "c7059c650323a08ac7453ad4185e6c4f",
"text": "Transfer learning is aimed to make use of valuable knowledge in a source domain to help model performance in a target domain. It is particularly important to neural networks, which are very likely to be overfitting. In some fields like image processing, many studies have shown the effectiveness of neural network-based transfer learning. For neural NLP, however, existing studies have only casually applied transfer learning, and conclusions are inconsistent. In this paper, we conduct systematic case studies and provide an illuminating picture on the transferability of neural networks in NLP.1",
"title": ""
},
{
"docid": "c02e7ece958714df34539a909c2adb7d",
"text": "Despite the growing evidence of the association between shame experiences and eating psychopathology, the specific effect of body image-focused shame memories on binge eating remains largely unexplored. The current study examined this association and considered current body image shame and self-criticism as mediators. A multi-group path analysis was conducted to examine gender differences in these relationships. The sample included 222 women and 109 men from the Portuguese general and college student populations who recalled an early body image-focused shame experience and completed measures of the centrality of the shame memory, current body image shame, binge eating symptoms, depressive symptoms, and self-criticism. For both men and women, the effect of the centrality of shame memories related to body image on binge eating symptoms was fully mediated by body image shame and self-criticism. In women, these effects were further mediated by self-criticism focused on a sense of inadequacy and also on self-hatred. In men, only the form of self-criticism focused on a sense of inadequacy mediated these associations. The present study has important implications for the conceptualization and treatment of binge eating symptoms. Findings suggest that, in both genders, body image-focused shame experiences are associated with binge eating symptoms via their effect on current body image shame and self-criticism.",
"title": ""
},
{
"docid": "d5d2e1feeb2d0bf2af49e1d044c9e26a",
"text": "ISSN: 2167-0811 (Print) 2167-082X (Online) Journal homepage: http://www.tandfonline.com/loi/rdij20 Algorithmic Transparency in the News Media Nicholas Diakopoulos & Michael Koliska To cite this article: Nicholas Diakopoulos & Michael Koliska (2016): Algorithmic Transparency in the News Media, Digital Journalism, DOI: 10.1080/21670811.2016.1208053 To link to this article: http://dx.doi.org/10.1080/21670811.2016.1208053",
"title": ""
},
{
"docid": "a6e18aa7f66355fb8407798a37f53f45",
"text": "We review some of the recent advances in level-set methods and their applications. In particular, we discuss how to impose boundary conditions at irregular domains and free boundaries, as well as the extension of level-set methods to adaptive Cartesian grids and parallel architectures. Illustrative applications are taken from the physical and life sciences. Fast sweeping methods are briefly discussed.",
"title": ""
},
{
"docid": "69fb4deab14bd651e20209695c6b50a2",
"text": "An impediment to Web-based retail sales is the impersonal nature of Web-based shopping. A solution to this problem is to use an avatar to deliver product information. An avatar is a graphic representation that can be animated by means of computer technology. Study 1 shows that using an avatar sales agent leads to more satisfaction with the retailer, a more positive attitude toward the product, and a greater purchase intention. Study 2 shows that an attractive avatar is a more effective sales agent at moderate levels of product involvement, but an expert avatar is a more effective sales agent at high levels of product involvement.",
"title": ""
},
{
"docid": "4a85e3b10ecc4c190c45d0dfafafb388",
"text": "The number of malicious applications targeting the Android system has literally exploded in recent years. While the security community, well aware of this fact, has proposed several methods for detection of Android malware, most of these are based on permission and API usage or the identification of expert features. Unfortunately, many of these approaches are susceptible to instruction level obfuscation techniques. Previous research on classic desktop malware has shown that some high level characteristics of the code, such as function call graphs, can be used to find similarities between samples while being more robust against certain obfuscation strategies. However, the identification of similarities in graphs is a non-trivial problem whose complexity hinders the use of these features for malware detection. In this paper, we explore how recent developments in machine learning classification of graphs can be efficiently applied to this problem. We propose a method for malware detection based on efficient embeddings of function call graphs with an explicit feature map inspired by a linear-time graph kernel. In an evaluation with 12,158 malware samples our method, purely based on structural features, outperforms several related approaches and detects 89% of the malware with few false alarms, while also allowing to pin-point malicious code structures within Android applications.",
"title": ""
},
{
"docid": "edd8ac16c7eaebf5b5b06964eacb6e8c",
"text": "The authors examined White and Black participants' emotional, physiological, and behavioral responses to same-race or different-race evaluators, following rejecting social feedback or accepting social feedback. As expected, in ingroup interactions, the authors observed deleterious responses to social rejection and benign responses to social acceptance. Deleterious responses included cardiovascular (CV) reactivity consistent with threat states and poorer performance, whereas benign responses included CV reactivity consistent with challenge states and better performance. In intergroup interactions, however, a more complex pattern of responses emerged. Social rejection from different-race evaluators engendered more anger and activational responses, regardless of participants' race. In contrast, social acceptance produced an asymmetrical race pattern--White participants responded more positively than did Black participants. The latter appeared vigilant and exhibited threat responses. Discussion centers on implications for attributional ambiguity theory and potential pathways from discrimination to health outcomes.",
"title": ""
},
{
"docid": "a564d62de4afc7e6e5c76f1955809b61",
"text": "The implementation of a polycrystalline silicon solar cell as a microwave groundplane in a low-profile, reduced-footprint microstrip patch antenna design for autonomous communication applications is reported. The effects on the antenna/solar performances due to the integration, different electrical conductivities in the silicon layer and variation in incident light intensity are investigated. The antenna sensitivity to the orientation of the anisotropic solar cell geometry is discussed.",
"title": ""
},
{
"docid": "3f72e02928b5fcc6e8a9155f0344e6e1",
"text": "Due to the limitations of power amplifiers or loudspeakers, the echo signals captured in the microphones are not in a linear relationship with the far-end signals even when the echo path is perfectly linear. The nonlinear components of the echo cannot be successfully removed by a linear acoustic echo canceller. Residual echo suppression (RES) is a technique to suppress the remained echo after acoustic echo suppression (AES). Conventional approaches compute RES gain using Wiener filter or spectral subtraction method based on the estimated statistics on related signals. In this paper, we propose a deep neural network (DNN)-based RES gain estimation based on both the far-end and the AES output signals in all frequency bins. A DNN architecture, which is suitable to model a complicated nonlinear mapping between high-dimensional vectors, is employed as a regression function from these signals to the optimal RES gain. The proposed method can suppress the residual components without any explicit double-talk detectors. The experimental results show that our proposed approach outperforms a conventional method in terms of the echo return loss enhancement (ERLE) for single-talk periods and the perceptual evaluation of speech quality (PESQ) score for double-talk periods.",
"title": ""
}
] |
scidocsrr
|
47bb7c744642a9af905bc728025b3552
|
FatCBST: Concurrent Binary Search Tree with Fatnodes
|
[
{
"docid": "3311ef081d181ce715713dacf735d644",
"text": "The advent of multicore processors as the standard computing platform will force major changes in software design.",
"title": ""
},
{
"docid": "3f0b2f3739a6b9fdf3681dd4296405e6",
"text": "One approach to achieving high performance in a database management system is to store the database in main memorv rather than on disk. -One can then design new data structures aid algorithms oriented towards making eflicient use of CPU cycles and memory space rather than minimizing disk accesses and &ing disk space efliciently. In this paper we present some results on index structures from an ongoing study of main memory database management systems. We propose a new index structure, the T Tree, and we compare it to existing index structures in a main memory database environment. Our results indicate that the T Tree provides good overall performance in main memory.",
"title": ""
}
] |
[
{
"docid": "16186ff81d241ecaea28dcf5e78eb106",
"text": "Different kinds of people use computers now than several decades ago, but operating systems have not fully kept pace with this change. It is true that we have point-and-click GUIs now instead of command line interfaces, but the expectation of the average user is different from what it used to be, because the user is different. Thirty or 40 years ago, when operating systems began to solidify into their current form, almost all computer users were programmers, scientists, engineers, or similar professionals doing heavy-duty computation, and they cared a great deal about speed. Few teenagers and even fewer grandmothers spent hours a day behind their terminal. Early users expected the computer to crash often; reboots came as naturally as waiting for the neighborhood TV repairman to come replace the picture tube on their home TVs. All that has changed and operating systems need to change with the times.",
"title": ""
},
{
"docid": "5e2b8d3ed227b71869550d739c61a297",
"text": "Dairy cattle experience a remarkable shift in metabolism after calving, after which milk production typically increases so rapidly that feed intake alone cannot meet energy requirements (Bauman and Currie, 1980; Baird, 1982). Cows with a poor adaptive response to negative energy balance may develop hyperketonemia (ketosis) in early lactation. Cows that develop ketosis in early lactation lose milk yield and are at higher risk for other postpartum diseases and early removal from the herd.",
"title": ""
},
{
"docid": "6bbc6a3f4f8d6f050f4317837cf30144",
"text": "Characterizing driving styles of human drivers using vehicle sensor data, e.g., GPS, is an interesting research problem and an important real-world requirement from automotive industries. A good representation of driving features can be highly valuable for autonomous driving, auto insurance, and many other application scenarios. However, traditional methods mainly rely on handcrafted features, which limit machine learning algorithms to achieve a better performance. In this paper, we propose a novel deep learning solution to this problem, which could be the first attempt of studying deep learning for driving behavior analysis. The proposed approach can effectively extract high level and interpretable features describing complex driving patterns from GPS data. It also requires significantly less human experience and work. The power of the learned driving style representations are validated through the driver identification problem using a large real dataset.",
"title": ""
},
{
"docid": "7d8617c12c24e61b7ef003a5055fbf2f",
"text": "We present the first approximation algorithms for a large class of budgeted learning problems. One classicexample of the above is the budgeted multi-armed bandit problem. In this problem each arm of the bandithas an unknown reward distribution on which a prior isspecified as input. The knowledge about the underlying distribution can be refined in the exploration phase by playing the arm and observing the rewards. However, there is a budget on the total number of plays allowed during exploration. After this exploration phase,the arm with the highest (posterior) expected reward is hosen for exploitation. The goal is to design the adaptive exploration phase subject to a budget constraint on the number of plays, in order to maximize the expected reward of the arm chosen for exploitation. While this problem is reasonably well understood in the infinite horizon discounted reward setting, the budgeted version of the problem is NP-Hard. For this problem and several generalizations, we provide approximate policies that achieve a reward within constant factor of the reward optimal policy. Our algorithms use a novel linear program rounding technique based on stochastic packing.",
"title": ""
},
{
"docid": "c95da5ee6fde5cf23b551375ff01e709",
"text": "The 3GPP has introduced the LTE-M and NB-IoT User Equipment categories and made amendments to LTE release 13 to support the cellular Internet of Things. The contribution of this paper is to analyze the coverage probability, the number of supported devices, and the device battery life in networks equipped with either of the newly standardized technologies. The study is made for a site specific network deployment of a Danish operator, and the simulation is calibrated using drive test measurements. The results show that LTE-M can provide coverage for 99.9 % of outdoor and indoor devices, if the latter is experiencing 10 dB additional loss. However, for deep indoor users NB-IoT is required and provides coverage for about 95 % of the users. The cost is support for more than 10 times fewer devices and a 2-6 times higher device power consumption. Thus both LTE-M and NB- IoT provide extended support for the cellular Internet of Things, but with different trade- offs.",
"title": ""
},
{
"docid": "354b35bb1c51442a7e855824ab7b91e0",
"text": "Educational games and intelligent tutoring systems (ITS) both support learning by doing, although often in different ways. The current classroom experiment compared a popular commercial game for equation solving, DragonBox and a research-based ITS, Lynnette with respect to desirable educational outcomes. The 190 participating 7th and 8th grade students were randomly assigned to work with either system for 5 class periods. We measured out-of-system transfer of learning with a paper and pencil pre- and post-test of students’ equation-solving skill. We measured enjoyment and accuracy of self-assessment with a questionnaire. The students who used DragonBox solved many more problems and enjoyed the experience more, but the students who used Lynnette performed significantly better on the post-test. Our analysis of the design features of both systems suggests possible explanations and spurs ideas for how the strengths of the two systems might be combined. The study shows that intuitions about what works, educationally, can be fallible. Therefore, there is no substitute for rigorous empirical evaluation of educational technologies.",
"title": ""
},
{
"docid": "e72fa6412ba935448c7a7b7a00d64ec2",
"text": "This Critical Review on environmental concerns of desalination plants suggests that planning and monitoring stages are critical aspects of successful management and operation of plants. The site for the desalination plants should be selected carefully and should be away from residential areas particularly for forward planning for possible future expansions. The concerning issues identified are noise pollution, visual pollution, reduction in recreational fishing and swimming areas, emission of materials into the atmosphere, the brine discharge and types of disposal methods used are the main cause of pollution. The reverse osmosis (RO) method is the preferred option in modern times especially when fossil fuels are becoming expensive. The RO has other positives such as better efficiency (30-50%) when compared with distillation type plants (10-30%). However, the RO membranes are susceptible to fouling and scaling and as such they need to be cleaned with chemicals regularly that may be toxic to receiving waters. The input and output water in desalination plants have to be pre and post treated, respectively. This involves treating for pH, coagulants, Cl, Cu, organics, CO(2), H(2)S and hypoxia. The by-product of the plant is mainly brine with concentration at times twice that of seawater. This discharge also includes traces of various chemicals used in cleaning including any anticorrosion products used in the plant and has to be treated to acceptable levels of each chemical before discharge but acceptable levels vary depending on receiving waters and state regulations. The discharge of the brine is usually done by a long pipe far into the sea or at the coastline. Either way the high density of the discharge reaches the bottom layers of receiving waters and may affect marine life particularly at the bottom layers or boundaries. The longer term effects of such discharge concentrate has not been documented but it is possible that small traces of toxic substances used in the cleaning of RO membranes may be harmful to marine life and ecosystem. The plants require saline water and thus the construction of input and discharge output piping is vital. The piping are often lengthy and underground as it is in Tugun (QLD, Australia), passing below the ground. Leakage of the concentrate via cracks in rocks to aquifers is a concern and therefore appropriate monitoring quality is needed. Leakage monitoring devices ought to be attached to such piping during installation. The initial environment impact assessment should identify key parameters for monitoring during discharge processes and should recommend ongoing monitoring with devices attached to structures installed during construction of plants.",
"title": ""
},
{
"docid": "45447ab4e0a8bd84fcf683ac482f5497",
"text": "Most of the current learning analytic techniques have as starting point the data recorded by Learning Management Systems (LMS) about the interactions of the students with the platform and among themselves. But there is a tendency on students to rely less on the functionality offered by the LMS and use more applications that are freely available on the net. This situation is magnified in studies in which students need to interact with a set of tools that are easily installed on their personal computers. This paper shows an approach using Virtual Machines by which a set of events occurring outside of the LMS are recorded and sent to a central server in a scalable and unobtrusive manner.",
"title": ""
},
{
"docid": "6018c84c0e5666b5b4615766a5bb98a9",
"text": "We introduce instancewise feature selection as a methodology for model interpretation. Our method is based on learning a function to extract a subset of features that are most informative for each given example. This feature selector is trained to maximize the mutual information between selected features and the response variable, where the conditional distribution of the response variable given the input is the model to be explained. We develop an efficient variational approximation to the mutual information, and show the effectiveness of our method on a variety of synthetic and real data sets using both quantitative metrics and human evaluation.",
"title": ""
},
{
"docid": "2c93fcf96c71c7c0a8dcad453da53f81",
"text": "Production cars are designed to understeer and rarely do they oversteer. If a car could automatically compensate for an understeer/oversteer problem, the driver would enjoy nearly neutral steering under varying operating conditions. Four-wheel steering is a serious effort on the part of automotive design engineers to provide near-neutral steering. Also in situations like low speed cornering, vehicle parking and driving in city conditions with heavy traffic in tight spaces, driving would be very difficult due to vehicle’s larger wheelbase and track width. Hence there is a requirement of a mechanism which result in less turning radius and it can be achieved by implementing four wheel steering mechanism instead of regular two wheel steering. In this project Maruti Suzuki 800 is considered as a benchmark vehicle. The main aim of this project is to turn the rear wheels out of phase to the front wheels. In order to achieve this, a mechanism which consists of two bevel gears and intermediate shaft which transmit 100% torque as well turns rear wheels in out of phase was developed. The mechanism was modelled using CATIA and the motion simulation was done using ADAMS. A physical prototype was realised. The prototype was tested for its cornering ability through constant radius test and was found 50% reduction in turning radius and the vehicle was operated at low speed of 10 kmph.",
"title": ""
},
{
"docid": "38cccac8ee9371c55a54b2b43c25e2d9",
"text": "Blepharophimosis-ptosis-epicanthus inversus syndrome (BPES) is a rare autosomal dominant disorder whose main features are the abnormal shape, position and alignment of the eyelids. Type I refers to BPES with female infertility from premature ovarian failure while type II is limited to the ocular features. A causative gene, FOXL2, has been localized to 3q23. We report a black female who carried a de novo chromosomal translocation and 3.13 Mb deletion at 3q23, 1.2 Mb 5' to FOXL2. This suggests the presence of distant cis regulatory elements at the extended FOXL2 locus. In spite of 21 protein coding genes in the 3.13 Mb deleted segment, the patient had no other malformation and a strictly normal psychomotor development at age 2.5 years. Our observation confirms panethnicity of BPES and adds to the knowledge of the complex cis regulation of human FOXL2 gene expression.",
"title": ""
},
{
"docid": "2b086723a443020118b7df7f4021b4d9",
"text": "Random undersampling and oversampling are simple but well-known resampling methods applied to solve the problem of class imbalance. In this paper we show that the random oversampling method can produce better classification results than the random undersampling method, since the oversampling can increase the minority class recognition rate by sacrificing less amount of majority class recognition rate than the undersampling method. However, the random oversampling method would increase the computational cost associated with the SVM training largely due to the addition of new training examples. In this paper we present an investigation carried out to develop efficient resampling methods that can produce comparable classification results to the random oversampling results, but with the use of less amount of data. The main idea of the proposed methods is to first select the most informative data examples located closer to the class boundary region by using the separating hyperplane found by training an SVM model on the original imbalanced dataset, and then use only those examples in resampling. We demonstrate that it would be possible to obtain comparable classification results to the random oversampling results through two sets of efficient resampling methods which use 50% less amount of data and 75% less amount of data, respectively, compared to the sizes of the datasets generated by the random oversampling method.",
"title": ""
},
{
"docid": "c88c4097b0cf90031bbf3778d25bb87a",
"text": "In this paper we introduce a new data set consisting of user comments posted to the website of a German-language Austrian newspaper. Professional forum moderators have annotated 11,773 posts according to seven categories they considered crucial for the efficient moderation of online discussions in the context of news articles. In addition to this taxonomy and annotated posts, the data set contains one million unlabeled posts. Our experimental results using six methods establish a first baseline for predicting these categories. The data and our code are available for research purposes from https://ofai.github.io/million-post-corpus.",
"title": ""
},
{
"docid": "32059170608532d89b2d20724f282f4a",
"text": "Functional near infrared spectroscopy (fNIRS) is a rapidly developing neuroimaging modality for exploring cortical brain behaviour. Despite recent advances, the quality of fNIRS experimentation may be compromised in several ways: firstly, by altering the optical properties of the tissues encountered in the path of light; secondly, through adulteration of the recovered biological signals (noise) and finally, by modulating neural activity. Currently, there is no systematic way to guide the researcher regarding these factors when planning fNIRS studies. Conclusions extracted from fNIRS data will only be robust if appropriate methodology and analysis in accordance with the research question under investigation are employed. In order to address these issues and facilitate the quality control process, a taxonomy of factors influencing fNIRS data have been established. For each factor, a detailed description is provided and previous solutions are reviewed. Finally, a series of evidence-based recommendations are made with the aim of improving consistency and quality of fNIRS research.",
"title": ""
},
{
"docid": "abdf1edfb2b93b3991d04d5f6d3d63d3",
"text": "With the rapid growing of internet and networks applications, data security becomes more important than ever before. Encryption algorithms play a crucial role in information security systems. In this paper, we have a study of the two popular encryption algorithms: DES and Blowfish. We overviewed the base functions and analyzed the security for both algorithms. We also evaluated performance in execution speed based on different memory sizes and compared them. The experimental results show the relationship between function run speed and memory size.",
"title": ""
},
{
"docid": "0d1da055e444a90ec298a2926de9fe7b",
"text": "Cryptocurrencies have experienced recent surges in interest and price. It has been discovered that there are time intervals where cryptocurrency prices and certain online and social media factors appear related. In addition it has been noted that cryptocurrencies are prone to experience intervals of bubble-like price growth. The hypothesis investigated here is that relationships between online factors and price are dependent on market regime. In this paper, wavelet coherence is used to study co-movement between a cryptocurrency price and its related factors, for a number of examples. This is used alongside a well-known test for financial asset bubbles to explore whether relationships change dependent on regime. The primary finding of this work is that medium-term positive correlations between online factors and price strengthen significantly during bubble-like regimes of the price series; this explains why these relationships have previously been seen to appear and disappear over time. A secondary finding is that short-term relationships between the chosen factors and price appear to be caused by particular market events (such as hacks / security breaches), and are not consistent from one time interval to another in the effect of the factor upon the price. In addition, for the first time, wavelet coherence is used to explore the relationships between different cryptocurrencies.",
"title": ""
},
{
"docid": "444bcff9a7fdcb80041aeb01b8724eed",
"text": "The morphologic anatomy of the liver is described as 2 main and 2 accessory lobes. The more recent functional anatomy of the liver is based on the distribution of the portal pedicles and the location of the hepatic veins. The liver is divided into 4 sectors, some of them composed of 2 segments. In all, there are 8 segments. According to the anatomy, typical hepatectomies (or “réglées”) are those which are performed along anatomical scissurae. The 2 main technical conceptions of typical hepatectomies are those with preliminary vascular control (Lortat-Jacob's technique) and hepatectomies with primary parenchymatous transection (Ton That Tung's technique). A good knowledge of the anatomy of the liver is a prerequisite for anatomical surgery of this organ. L'anatomie morphologique du foie permet d'individualiser 2 lobes principaux et 2 lobes accessoires. L'anatomie fonctionnelle du foie, plus récemment décrite, est fondée sur la distribution des pédicules portaux et sur la localisation des veines sus-hépatiques. Le foie est divisé en 4 secteurs, eux-mÊmes composés en général de 2 segments. Au total, il y a 8 segments. Selon les données anatomiques, les hépatectomies typiques (ou réglées) sont celles qui sont réalisées le long des scissures anatomiques. Les deux conceptions principales des exérèses hépatiques typiques sont, du point de vue technique, les hépatectomies avec contrÔle vasculaire préalable (technique de Lortat-Jacob) et les hépatectomies avec abord transparenchymateux premier (technique de Ton That Tung). Une connaissance approfondie de l'anatomie du foie est une condition préalable à la réalisation d'une chirurgie anatomique de cet organe.",
"title": ""
},
{
"docid": "a4c80a334a6f9cd70fe5c7000740c18f",
"text": "CMOS SRAM cell is very less power consuming and have less read and write time. Higher cell ratios can decrease the read and write time and improve stability. PMOS transistor with less width reduces the power consumption. This paper implements 6T SRAM cell with reduced read and write time, area and power consumption. It has been noticed often that increased memory capacity increases the bit-line parasitic capacitance which in turn slows down voltage sensing and make bit-line voltage swings energy expensive. This result in slower and more energy hungry memories.. In this paper Two SRAM cell is being designed for 4 Kb of memory core with supply voltage 1.8 V. A technique of global bit line is used for reducing the power consumption and increasing the memory capacity.",
"title": ""
},
{
"docid": "d5a9f4e5cf1f15a7e39e0b49e571b936",
"text": "Article history: With the growth and evolu First received in February 6, 2005 and was under review for 9 months",
"title": ""
},
{
"docid": "c32c1c16aec9bc6dcfb5fa8fb4f25140",
"text": "Logo detection is a challenging task with many practical applications in our daily life and intellectual property protection. The two main obstacles here are lack of public logo datasets and effective design of logo detection structure. In this paper, we first manually collected and annotated 6,400 images and mix them with FlickrLogo-32 dataset, forming a larger dataset. Secondly, we constructed Faster R-CNN frameworks with several widely used classification models for logo detection. Furthermore, the transfer learning method was introduced in the training process. Finally, clustering was used to guarantee suitable hyper-parameters and more precise anchors of RPN. Experimental results show that the proposed framework outper-forms the state of-the-art methods with a noticeable margin.",
"title": ""
}
] |
scidocsrr
|
c4ea876b5e385a6a37bcd2274bff570c
|
Reducing drift in visual odometry by inferring sun direction using a Bayesian Convolutional Neural Network
|
[
{
"docid": "b2ba17cb2e2e2ef878bd87f657e3dd5e",
"text": "We present a robust and real-time monocular six degree of freedom relocalization system. Our system trains a convolutional neural network to regress the 6-DOF camera pose from a single RGB image in an end-to-end manner with no need of additional engineering or graph optimisation. The algorithm can operate indoors and outdoors in real time, taking 5ms per frame to compute. It obtains approximately 2m and 3 degrees accuracy for large scale outdoor scenes and 0.5m and 5 degrees accuracy indoors. This is achieved using an efficient 23 layer deep convnet, demonstrating that convnets can be used to solve complicated out of image plane regression problems. This was made possible by leveraging transfer learning from large scale classification data. We show that the PoseNet localizes from high level features and is robust to difficult lighting, motion blur and different camera intrinsics where point based SIFT registration fails. Furthermore we show how the pose feature that is produced generalizes to other scenes allowing us to regress pose with only a few dozen training examples.",
"title": ""
},
{
"docid": "3b6e3884a9d3b09d221d06f3dea20683",
"text": "Convolutional neural networks (CNNs) work well on large datasets. But labelled data is hard to collect, and in some applications larger amounts of data are not available. The problem then is how to use CNNs with small data – as CNNs overfit quickly. We present an efficient Bayesian CNN, offering better robustness to over-fitting on small data than traditional approaches. This is by placing a probability distribution over the CNN’s kernels. We approximate our model’s intractable posterior with Bernoulli variational distributions, requiring no additional model parameters. On the theoretical side, we cast dropout network training as approximate inference in Bayesian neural networks. This allows us to implement our model using existing tools in deep learning with no increase in time complexity, while highlighting a negative result in the field. We show a considerable improvement in classification accuracy compared to standard techniques and improve on published state-of-theart results for CIFAR-10.",
"title": ""
},
{
"docid": "2353942ce5857a8d7163fce6cb00d509",
"text": "Here, we present a general framework for combining visual odometry and lidar odometry in a fundamental and first principle method. The method shows improvements in performance over the state of the art, particularly in robustness to aggressive motion and temporary lack of visual features. The proposed on-line method starts with visual odometry to estimate the ego-motion and to register point clouds from a scanning lidar at a high frequency but low fidelity. Then, scan matching based lidar odometry refines the motion estimation and point cloud registration simultaneously.We show results with datasets collected in our own experiments as well as using the KITTI odometry benchmark. Our proposed method is ranked #1 on the benchmark in terms of average translation and rotation errors, with a 0.75% of relative position drift. In addition to comparison of the motion estimation accuracy, we evaluate robustness of the method when the sensor suite moves at a high speed and is subject to significant ambient lighting changes.",
"title": ""
}
] |
[
{
"docid": "0868a18f156526ab4f5f1a2648bd3093",
"text": "BACKGROUND\nThe correlation between noninvasive markers with endoscopic activity according to the modified Baron Index in patients with ulcerative colitis (UC) is unknown. We aimed to evaluate the correlation between endoscopic activity and fecal calprotectin (FC), C-reactive protein (CRP), hemoglobin, platelets, blood leukocytes, and the Lichtiger Index (clinical score).\n\n\nMETHODS\nUC patients undergoing complete colonoscopy were prospectively enrolled and scored clinically and endoscopically. Samples from feces and blood were analyzed in UC patients and controls.\n\n\nRESULTS\nWe enrolled 228 UC patients and 52 healthy controls. Endoscopic disease activity correlated best with FC (Spearman's rank correlation coefficient r = 0.821), followed by the Lichtiger Index (r = 0.682), CRP (r = 0.556), platelets (r = 0.488), blood leukocytes (r = 0.401), and hemoglobin (r = -0.388). FC was the only marker that could discriminate between different grades of endoscopic activity (grade 0, 16 [10-30] μg/g; grade 1, 35 [25-48] μg/g; grade 2, 102 [44-159] μg/g; grade 3, 235 [176-319] μg/g; grade 4, 611 [406-868] μg/g; P < 0.001 for discriminating the different grades). FC with a cutoff of 57 μg/g had a sensitivity of 91% and a specificity of 90% to detect endoscopically active disease (modified Baron Index ≥ 2).\n\n\nCONCLUSIONS\nFC correlated better with endoscopic disease activity than clinical activity, CRP, platelets, hemoglobin, and blood leukocytes. The strong correlation with endoscopic disease activity suggests that FC represents a useful biomarker for noninvasive monitoring of disease activity in UC patients.",
"title": ""
},
{
"docid": "799912616c6978f63938bfac6b21b1ec",
"text": "Friction stir welding is a solid state joining process. High strength aluminum alloys are widely used in aircraft and marine industries. Generally, the mechanical properties of fusion welded aluminum joints are poor. As friction stir welding occurs in solid state, no solidification structures are created thereby eliminating the brittle and eutectic phases common in fusion welding of high strength aluminum alloys. In this review the process parameters, microstructural evolution, and effect of friction stir welding on the properties of weld specific to aluminum alloys have been discussed. Keywords—Aluminum alloys, Friction stir welding (FSW), Microstructure, Properties.",
"title": ""
},
{
"docid": "2013fc509f8f6d3fa2966d7d76169f43",
"text": "Graphene, whose discovery won the 2010 Nobel Prize in physics, has been a shining star in the material science in the past few years. Owing to its interesting electrical, optical, mechanical and chemical properties, graphene has found potential applications in a wide range of areas, including biomedicine. In this article, we will summarize the latest progress of using graphene for various biomedical applications, including drug delivery, cancer therapies and biosensing, and discuss the opportunities and challenges in this emerging field.",
"title": ""
},
{
"docid": "23a5152da5142048332c09164bade40f",
"text": "Knowledge bases extracted automatically from the Web present new opportunities for data mining and exploration. Given a large, heterogeneous set of extracted relations, new tools are needed for searching the knowledge and uncovering relationships of interest. We present WikiTables, a Web application that enables users to interactively explore tabular knowledge extracted from Wikipedia.\n In experiments, we show that WikiTables substantially outperforms baselines on the novel task of automatically joining together disparate tables to uncover \"interesting\" relationships between table columns. We find that a \"Semantic Relatedness\" measure that leverages the Wikipedia link structure accounts for a majority of this improvement. Further, on the task of keyword search for tables, we show that WikiTables performs comparably to Google Fusion Tables despite using an order of magnitude fewer tables. Our work also includes the release of a number of public resources, including over 15 million tuples of extracted tabular data, manually annotated evaluation sets, and public APIs.",
"title": ""
},
{
"docid": "b634d8eb5016f93604ed460cebe07468",
"text": "The basis of science is the hypothetico-deductive method and the recording of experiments in sufficient detail to enable reproducibility. We report the development of Robot Scientist \"Adam,\" which advances the automation of both. Adam has autonomously generated functional genomics hypotheses about the yeast Saccharomyces cerevisiae and experimentally tested these hypotheses by using laboratory automation. We have confirmed Adam's conclusions through manual experiments. To describe Adam's research, we have developed an ontology and logical language. The resulting formalization involves over 10,000 different research units in a nested treelike structure, 10 levels deep, that relates the 6.6 million biomass measurements to their logical description. This formalization describes how a machine contributed to scientific knowledge.",
"title": ""
},
{
"docid": "73300c22cc92eac1133d84cdad0d00e7",
"text": "BACKGROUND\nVideo-games are becoming a common tool to guide patients through rehabilitation because of their power of motivating and engaging their users. Video-games may also be integrated into an infrastructure that allows patients, discharged from the hospital, to continue intensive rehabilitation at home under remote monitoring by the hospital itself, as suggested by the recently funded Rewire project.\n\n\nOBJECTIVE\nGoal of this work is to describe a novel low cost platform, based on video-games, targeted to neglect rehabilitation.\n\n\nMETHODS\nThe patient is guided to explore his neglected hemispace by a set of specifically designed games that ask him to reach targets, with an increasing level of difficulties. Visual and auditory cues helped the patient in the task and are progressively removed. A controlled randomization of scenarios, targets and distractors, a balanced reward system and music played in the background, all contribute to make rehabilitation more attractive, thus enabling intensive prolonged treatment.\n\n\nRESULTS\nResults from our first patient, who underwent rehabilitation for half an hour, for five days a week for one month, showed on one side a very positive attitude of the patient towards the platform for the whole period, on the other side a significant improvement was obtained. Importantly, this amelioration was confirmed at a follow up evaluation five months after the last rehabilitation session and generalized to everyday life activities.\n\n\nCONCLUSIONS\nSuch a system could well be integrated into a home based rehabilitation system.",
"title": ""
},
{
"docid": "476aa14f6b71af480e8ab4747849d7e3",
"text": "The present study explored the relationship between risky cybersecurity behaviours, attitudes towards cybersecurity in a business environment, Internet addiction, and impulsivity. 538 participants in part-time or full-time employment in the UK completed an online questionnaire, with responses from 515 being used in the data analysis. The survey included an attitude towards cybercrime and cybersecurity in business scale, a measure of impulsivity, Internet addiction and a 'risky' cybersecurity behaviours scale. The results demonstrated that Internet addiction was a significant predictor for risky cybersecurity behaviours. A positive attitude towards cybersecurity in business was negatively related to risky cybersecurity behaviours. Finally, the measure of impulsivity revealed that both attentional and motor impulsivity were both significant positive predictors of risky cybersecurity behaviours, with non-planning being a significant negative predictor. The results present a further step in understanding the individual differences that may govern good cybersecurity practices, highlighting the need to focus directly on more effective training and awareness mechanisms.",
"title": ""
},
{
"docid": "35b668eeecb71fc1931e139a90f2fd1f",
"text": "In this article we present novel learning methods for estimating the quality of results returned by a search engine in response to a query. Estimation is based on the agreement between the top results of the full query and the top results of its sub-queries. We demonstrate the usefulness of quality estimation for several applications, among them improvement of retrieval, detecting queries for which no relevant content exists in the document collection, and distributed information retrieval. Experiments on TREC data demonstrate the robustness and the effectiveness of our learning algorithms.",
"title": ""
},
{
"docid": "a5cd7d46dc74d15344e2f3e9b79388a3",
"text": "A number of differences have emerged between modern and classic approaches to constituency parsing in recent years, with structural components like grammars and featurerich lexicons becoming less central while recurrent neural network representations rise in popularity. The goal of this work is to analyze the extent to which information provided directly by the model structure in classical systems is still being captured by neural methods. To this end, we propose a high-performance neural model (92.08 F1 on PTB) that is representative of recent work and perform a series of investigative experiments. We find that our model implicitly learns to encode much of the same information that was explicitly provided by grammars and lexicons in the past, indicating that this scaffolding can largely be subsumed by powerful general-purpose neural machinery.",
"title": ""
},
{
"docid": "1f8be01ff656d9414a8bd1e12111081d",
"text": "Gaining an architectural level understanding of a software system is important for many reasons. When the description of a system's architecture does not exist, attempts must be made to recover it. In recent years, researchers have explored the use of clustering for recovering a software system's architecture, given only its source code. The main contributions of this paper are given as follows. First, we review hierarchical clustering research in the context of software architecture recovery and modularization. Second, to employ clustering meaningfully, it is necessary to understand the peculiarities of the software domain, as well as the behavior of clustering measures and algorithms in this domain. To this end, we provide a detailed analysis of the behavior of various similarity and distance measures that may be employed for software clustering. Third, we analyze the clustering process of various well-known clustering algorithms by using multiple criteria, and we show how arbitrary decisions taken by these algorithms during clustering affect the quality of their results. Finally, we present an analysis of two recently proposed clustering algorithms, revealing close similarities in their apparently different clustering approaches. Experiments on four legacy software systems provide insight into the behavior of well-known clustering algorithms and their characteristics in the software domain.",
"title": ""
},
{
"docid": "4f6979ca99ec7fb0010fd102e7796248",
"text": "Cryptographic systems are essential for computer and communication security, for instance, RSA is used in PGP Email clients and AES is employed in full disk encryption. In practice, the cryptographic keys are loaded and stored in RAM as plain-text, and therefore vulnerable to physical memory attacks (e.g., cold-boot attacks). To tackle this problem, we propose Copker, which implements asymmetric cryptosystems entirely within the CPU, without storing plain-text private keys in the RAM. In its active mode, Copker stores kilobytes of sensitive data, including the private key and the intermediate states, only in onchip CPU caches (and registers). Decryption/signing operations are performed without storing sensitive information in system memory. In the suspend mode, Copker stores symmetrically encrypted private keys in memory, while employs existing solutions to keep the key-encryption key securely in CPU registers. Hence, Copker releases the system resources in the suspend mode. In this paper, we implement Copker with the most common asymmetric cryptosystem, RSA, with the support of multiple private keys. We show that Copker provides decryption/signing services that are secure against physical memory attacks. Meanwhile, with intensive experiments, we demonstrate that our implementation of Copker is secure and requires reasonable overhead. Keywords—Cache-as-RAM; cold-boot attack; key management; asymmetric cryptography implementation.",
"title": ""
},
{
"docid": "e7d5dd2926238db52cf406f20947f90e",
"text": "The development of the capital markets is changing the relevance and empirical validity of the efficient market hypothesis. The dynamism of capital markets determines the need for efficiency research. The authors analyse the development and the current status of the efficient market hypothesis with an emphasis on the Baltic stock market. Investors often fail to earn an excess profit, but yet stock market anomalies are observed and market prices often deviate from their intrinsic value. The article presents an analysis of the concept of efficient market. Also, the market efficiency evolution is reviewed and its current status is analysed. This paper presents also an examination of stock market efficiency in the Baltic countries. Finally, the research methods are reviewed and the methodology of testing the weak-form efficiency in a developing market is suggested.",
"title": ""
},
{
"docid": "ec90e30c0ae657f25600378721b82427",
"text": "We use deep max-pooling convolutional neural networks to detect mitosis in breast histology images. The networks are trained to classify each pixel in the images, using as context a patch centered on the pixel. Simple postprocessing is then applied to the network output. Our approach won the ICPR 2012 mitosis detection competition, outperforming other contestants by a significant margin.",
"title": ""
},
{
"docid": "6be88914654c736c8e1575aeb37532a3",
"text": "Coding EMRs with diagnosis and procedure codes is an indispensable task for billing, secondary data analyses, and monitoring health trends. Both speed and accuracy of coding are critical. While coding errors could lead to more patient-side financial burden and mis-interpretation of a patient's well-being, timely coding is also needed to avoid backlogs and additional costs for the healthcare facility. In this paper, we present a new neural network architecture that combines ideas from few-shot learning matching networks, multi-label loss functions, and convolutional neural networks for text classification to significantly outperform other state-of-the-art models. Our evaluations are conducted using a well known deidentified EMR dataset (MIMIC) with a variety of multi-label performance measures.",
"title": ""
},
{
"docid": "caa660feb6bb35ad92f6da6293cb0279",
"text": "Our ability to express and accurately assess emotional states is central to human life. The present study examines how people express and detect emotions during text-based communication, an environment that eliminates the nonverbal cues typically associated with emotion. The results from 40 dyadic interactions suggest that users relied on four strategies to express happiness versus sadness, including disagreement, negative affect terms, punctuation, and verbosity. Contrary to conventional wisdom, communication partners readily distinguished between positive and negative valence emotional communicators in this text-based context. The results are discussed with respect to the Social Information Processing model of strategic relational adaptation in mediated communication.",
"title": ""
},
{
"docid": "d29cca7c16b0e5b43c85e1a8701d735f",
"text": "The sparse matrix solver by LU factorization is a serious bottleneck in Simulation Program with Integrated Circuit Emphasis (SPICE)-based circuit simulators. The state-of-the-art Graphics Processing Units (GPU) have numerous cores sharing the same memory, provide attractive memory bandwidth and compute capability, and support massive thread-level parallelism, so GPUs can potentially accelerate the sparse solver in circuit simulators. In this paper, an efficient GPU-based sparse solver for circuit problems is proposed. We develop a hybrid parallel LU factorization approach combining task-level and data-level parallelism on GPUs. Work partitioning, number of active thread groups, and memory access patterns are optimized based on the GPU architecture. Experiments show that the proposed LU factorization approach on NVIDIA GTX580 attains an average speedup of 7.02× (geometric mean) compared with sequential PARDISO, and 1.55× compared with 16-threaded PARDISO. We also investigate bottlenecks of the proposed approach by a parametric performance model. The performance of the sparse LU factorization on GPUs is constrained by the global memory bandwidth, so the performance can be further improved by future GPUs with larger memory bandwidth.",
"title": ""
},
{
"docid": "1e464db177e96b6746f8f827c582cc31",
"text": "In order to respond correctly to a free form factual question given a large collection of text data, one needs to understand the question to a level that allows determining some of the constraints the question imposes on a possible answer. These constraints may include a semantic classification of the sought after answer and may even suggest using different strategies when looking for and verifying a candidate answer. This work presents the first work on a machine learning approach to question classification. Guided by a layered semantic hierarchy of answer types, we develop a hierarchical classifier that classifies questions into fine-grained classes. This work also performs a systematic study of the use of semantic information sources in natural language classification tasks. It is shown that, in the context of question classification, augmenting the input of the classifier with appropriate semantic category information results in significant improvements to classification accuracy. We show accurate results on a large collection of free-form questions used in TREC 10 and 11.",
"title": ""
},
{
"docid": "019d5deed0ed1e5b50097d5dc9121cb6",
"text": "Within interactive narrative research, agency is largely considered in terms of a player's autonomy in a game, defined as theoretical agency. Rather than in terms of whether or not the player feels they have agency, their perceived agency. An effective interactive narrative needs to provide a player a level of agency that satisfies their desires and must do that without compromising its own structure. Researchers frequently turn to techniques for increasing theoretical agency to accomplish this. This paper proposes an approach to categorize and explore techniques in which a player's level of perceived agency is affected without requiring more or less theoretical agency.",
"title": ""
}
] |
scidocsrr
|
fb7eb6890e9bad6ce0e007dcbea99576
|
Vehicle detection and localization on bird's eye view elevation images using convolutional neural network
|
[
{
"docid": "7d2f5505b2a60fb113524903aa5acc7d",
"text": "Robust object recognition is a crucial skill for robots operating autonomously in real world environments. Range sensors such as LiDAR and RGBD cameras are increasingly found in modern robotic systems, providing a rich source of 3D information that can aid in this task. However, many current systems do not fully utilize this information and have trouble efficiently dealing with large amounts of point cloud data. In this paper, we propose VoxNet, an architecture to tackle this problem by integrating a volumetric Occupancy Grid representation with a supervised 3D Convolutional Neural Network (3D CNN). We evaluate our approach on publicly available benchmarks using LiDAR, RGBD, and CAD data. VoxNet achieves accuracy beyond the state of the art while labeling hundreds of instances per second.",
"title": ""
},
{
"docid": "cc4c58f1bd6e5eb49044353b2ecfb317",
"text": "Today, visual recognition systems are still rarely employed in robotics applications. Perhaps one of the main reasons for this is the lack of demanding benchmarks that mimic such scenarios. In this paper, we take advantage of our autonomous driving platform to develop novel challenging benchmarks for the tasks of stereo, optical flow, visual odometry/SLAM and 3D object detection. Our recording platform is equipped with four high resolution video cameras, a Velodyne laser scanner and a state-of-the-art localization system. Our benchmarks comprise 389 stereo and optical flow image pairs, stereo visual odometry sequences of 39.2 km length, and more than 200k 3D object annotations captured in cluttered scenarios (up to 15 cars and 30 pedestrians are visible per image). Results from state-of-the-art algorithms reveal that methods ranking high on established datasets such as Middlebury perform below average when being moved outside the laboratory to the real world. Our goal is to reduce this bias by providing challenging benchmarks with novel difficulties to the computer vision community. Our benchmarks are available online at: www.cvlibs.net/datasets/kitti.",
"title": ""
},
{
"docid": "e4b02298a2ff6361c0a914250f956911",
"text": "This paper studies efficient means in dealing with intracategory diversity in object detection. Strategies for occlusion and orientation handling are explored by learning an ensemble of detection models from visual and geometrical clusters of object instances. An AdaBoost detection scheme is employed with pixel lookup features for fast detection. The analysis provides insight into the design of a robust vehicle detection system, showing promise in terms of detection performance and orientation estimation accuracy.",
"title": ""
}
] |
[
{
"docid": "19c3bd8d434229d98741b04d3041286b",
"text": "The availability of powerful microprocessors and high-speed networks as commodity components has enabled high performance computing on distributed systems (wide-area cluster computing). In this environment, as the resources are usually distributed geographically at various levels (department, enterprise, or worldwide) there is a great challenge in integrating, coordinating and presenting them as a single resource to the user; thus forming a computational grid. Another challenge comes from the distributed ownership of resources with each resource having its own access policy, cost, and mechanism. The proposed Nimrod/G grid-enabled resource management and scheduling system builds on our earlier work on Nimrod and follows a modular and component-based architecture enabling extensibility, portability, ease of development, and interoperability of independently developed components. It uses the Globus toolkit services and can be easily extended to operate with any other emerging grid middleware services. It focuses on the management and scheduling of computations over dynamic resources scattered geographically across the Internet at department, enterprise, or global level with particular emphasis on developing scheduling schemes based on the concept of computational economy for a real test bed, namely, the Globus testbed (GUSTO).",
"title": ""
},
{
"docid": "97281ba9e6da8460f003bb860836bb10",
"text": "In this letter, a novel miniaturized periodic element for constructing a bandpass frequency selective surface (FSS) is proposed. Compared to previous miniaturized structures, the FSS proposed has better miniaturization performance with the dimension of a unit cell only 0.061 λ × 0.061 λ , where λ represents the wavelength of the resonant frequency. Moreover, the miniaturization characteristic is stable with respect to different polarizations and incident angles of the waves illuminating. Both simulation and measurement are taken, and the results obtained demonstrate the claimed performance.",
"title": ""
},
{
"docid": "634b30b81da7139082927109b4c22d5e",
"text": "Compressive image recovery is a challenging problem that requires fast and accurate algorithms. Recently, neural networks have been applied to this problem with promising results. By exploiting massively parallel GPU processing architectures and oodles of training data, they can run orders of magnitude faster than existing techniques. However, these methods are largely unprincipled black boxes that are difficult to train and often-times specific to a single measurement matrix. It was recently demonstrated that iterative sparse-signal-recovery algorithms can be “unrolled” to form interpretable deep networks. Taking inspiration from this work, we develop a novel neural network architecture that mimics the behavior of the denoising-based approximate message passing (D-AMP) algorithm. We call this new network Learned D-AMP (LDAMP). The LDAMP network is easy to train, can be applied to a variety of different measurement matrices, and comes with a state-evolution heuristic that accurately predicts its performance. Most importantly, it outperforms the state-of-the-art BM3D-AMP and NLR-CS algorithms in terms of both accuracy and run time. At high resolutions, and when used with sensing matrices that have fast implementations, LDAMP runs over 50× faster than BM3D-AMP and hundreds of times faster than NLR-CS.",
"title": ""
},
{
"docid": "6827fc6b1096dbfc7dfbd1886911a4ff",
"text": "This paper proposes a new formulation and solution to image-based 3D modeling (aka “multi-view stereo”) based on generative statistical modeling and inference. The proposed new approach, named statistical inverse ray tracing, models and estimates the occlusion relationship accurately through optimizing a physically sound image generation model based on volumetric ray tracing. Together with geometric priors, they are put together into a Bayesian formulation known as Markov random field (MRF) model. This MRF model is different from typical MRFs used in image analysis in the sense that the ray clique, which models the ray-tracing process, consists of thousands of random variables instead of two to dozens. To handle the computational challenges associated with large clique size, an algorithm with linear computational complexity is developed by exploiting, using dynamic programming, the recursive chain structure of the ray clique. We further demonstrate the benefit of exact modeling and accurate estimation of the occlusion relationship by evaluating the proposed algorithm on several challenging data sets.",
"title": ""
},
{
"docid": "09085472d12ed72d5c0fe27b5eb5e175",
"text": "BACKGROUND\nUse of exergames can complement conventional therapy and increase the amount and intensity of visuospatial neglect (VSN) training. A series of 9 exergames-games based on therapeutic principles-aimed at improving exploration of the neglected space for patients with VSN symptoms poststroke was developed and tested for its feasibility.\n\n\nOBJECTIVES\nThe goal was to determine the feasibility of the exergames with minimal supervision in terms of (1) implementation of the intervention, including adherence, attrition and safety, and (2) limited efficacy testing, aiming to document possible effects on VSN symptoms in a case series of patients early poststroke.\n\n\nMETHODS\nA total of 7 patients attended the 3-week exergames training program on a daily basis. Adherence of the patients was documented in a training diary. For attrition, the number of participants lost during the intervention was registered. Any adverse events related to the exergames intervention were noted to document safety. Changes in cognitive and spatial exploration skills were measured with the Zürich Maxi Mental Status Inventory and the Neglect Test. Additionally, we developed an Eye Tracker Neglect Test (ETNT) using an infrared camera to detect and measure neglect symptoms pre- and postintervention.\n\n\nRESULTS\nThe median was 14 out of 15 (93%) attended sessions, indicating that the adherence to the exergames training sessions was high. There were no adverse events and no drop-outs during the exergame intervention. The individual cognitive and spatial exploration skills slightly improved postintervention (P=.06 to P=.98) and continued improving at follow-up (P=.04 to P=.92) in 5 out of 7 (71%) patients. Calibration of the ETNT was rather error prone. The ETNT showed a trend for a slight median group improvement from 15 to 16 total located targets (+6%).\n\n\nCONCLUSIONS\nThe high adherence rate and absence of adverse events showed that these exergames were feasible and safe for the participants. The results of the amount of exergames use is promising for future applications and warrants further investigations-for example, in the home setting of patients to augment training frequency and intensity. The preliminary results indicate the potential of these exergames to cause improvements in cognitive and spatial exploration skills over the course of training for stroke patients with VSN symptoms. Thus, these exergames are proposed as a motivating training tool to complement usual care. The ETNT showed to be a promising assessment for quantifying spatial exploration skills. However, further adaptations are needed, especially regarding calibration issues, before its use can be justified in a larger study sample.",
"title": ""
},
{
"docid": "ff93e77bb0e0b24a06780a05cc16123d",
"text": "Models in science may be used for various purposes: organizing data, synthesizing information, and making predictions. However, the value of model predictions is undermined by their uncertainty, which arises primarily from the fact that our models of complex natural systems are always open. Models can never fully specify the systems that they describe, and therefore their predictions are always subject to uncertainties that we cannot fully specify. Moreover, the attempt to make models capture the complexities of natural systems leads to a paradox: the more we strive for realism by incorporating as many as possible of the different processes and parameters that we believe to be operating in the system, the more difficult it is for us to know if our tests of the model are meaningful. A complex model may be more realistic, yet it is ironic that as we add more factors to a model, the certainty of its predictions may decrease even as our intuitive faith in the model increases. For this and other reasons, model output should not be viewed as an accurate prediction of the future state of the system. Short timeframe model output can and should be used to evaluate models and suggest avenues for future study. Model output can also generate “what if” scenarios that can help to evaluate alternative courses of action (or inaction), including worst-case and best-case outcomes. But scientists should eschew long-range deterministic predictions, which are likely to be erroneous and may damage the credibility of the communities that generate them.",
"title": ""
},
{
"docid": "bbf6525cac19ca016d8972a1c4bbc8fe",
"text": "The Host Identity Protocol (HIP) is an inter-networking architecture and an associated set of protocols, developed at the IETF since 1999 and reaching their first stable version in 2007. HIP enhances the original Internet architecture by adding a name space used between the IP layer and the transport protocols. This new name space consists of cryptographic identifiers, thereby implementing the so-called identifier/locator split. In the new architecture, the new identifiers are used in naming application level end-points (sockets), replacing the prior identification role of IP addresses in applications, sockets, TCP connections, and UDP-based send and receive system calls. IPv4 and IPv6 addresses are still used, but only as names for topological locations in the network. HIP can be deployed such that no changes are needed in applications or routers. Almost all pre-compiled legacy applications continue to work, without modifications, for communicating with both HIP-enabled and non-HIP-enabled peer hosts. The architectural enhancement implemented by HIP has profound consequences. A number of the previously hard networking problems become suddenly much easier. Mobility, multi-homing, and baseline end-to-end security integrate neatly into the new architecture. The use of cryptographic identifiers allows enhanced accountability, thereby providing a base for easier build up of trust. With privacy enhancements, HIP allows good location anonymity, assuring strong identity only towards relevant trusted parties. Finally, the HIP protocols have been carefully designed to take middle boxes into account, providing for overlay networks and enterprise deployment concerns. This article provides an in-depth look at HIP, discussing its architecture, design, benefits, potential drawbacks, and ongoing work.",
"title": ""
},
{
"docid": "1516f9d674d911cef4b8d5cd8780afe7",
"text": "This paper describes a novel approach to event-based debugging. The approach is based on a (coarsegrained) dataflow view of events: a high-level event is recognized when an appropriate combination of lower-level events on which it depends has occurred. Event recognition is controlled using familiar programming language constructs. This approach is more flexible and powerful than current ones. It allows arbitrary debugger language commands to be executed when attempting to form higher-level events. It also allows users to specify event recognition in much the same way that they write programs. This paper also describes a prototype, Dalek, that employs the dataflow approach for debugging sequential programs. Dalek demonstrates the feasibility and attractiveness of the dataflow approach. One important motivation for this work is that current sequential debugging tools are inadequate. Dalek contributes toward remedying such inadequacies by providing events and a powerful debugging language. Generalizing the dataflow approach so that it can aid in the debugging of concurrent programs is under investigation.",
"title": ""
},
{
"docid": "6e589911b6822ea0ecbf65188e7932bf",
"text": "This work proposes a novel test architecture that combines the advantages of both scan-based and built-in self-test (BIST) designs. The main idea is to record (store) all required compressed test data in a novel scan chain structure such that the stored data can be extracted, reconstructed and decompressed into required deterministic patterns using an on-chip test controller with a test pattern decompressor. The recording of test data is achieved by modifying the connections between scan cells. Techniques to extract test data from the modified scan cells and to deliver decompressed test patterns to the modified scan cells are presented. The on-chip test controller can automatically generate all required control signals for the whole test procedure. This significantly reduces the requirements on external ATE. Experimental results on OpenSPARC T2, a publicly accessible 8-core processor containing 5.7M gates, show that all required test data for 100% testable stuck-at fault coverage can be stored in the scan chains of the processor with less than 3% total area overhead for the whole test architecture.",
"title": ""
},
{
"docid": "a54f2e7a7d00cf5c9879e86009b60221",
"text": "OBJECTIVES\nThis study was aimed to compare the effectiveness of aromatherapy and acupressure massage intervention strategies on the sleep quality and quality of life (QOL) in career women.\n\n\nDESIGN\nThe randomized controlled trial experimental design was used in the present study. One hundred and thirty-two career women (24-55 years) voluntarily participated in this study and they were randomly assigned to (1) placebo (distilled water), (2) lavender essential oil (Lavandula angustifolia), (3) blended essential oil (1:1:1 ratio of L. angustifolia, Salvia sclarea, and Origanum majorana), and (4) acupressure massage groups for a 4-week treatment. The Pittsburgh Sleep Quality Index and Short Form 36 Health Survey were used to evaluate the intervention effects at pre- and postintervention.\n\n\nRESULTS\nAfter a 4-week treatment, all experimental groups (blended essential oil, lavender essential oil, and acupressure massage) showed significant improvements in sleep quality and QOL (p < 0.05). Significantly greater improvement in QOL was observed in the participants with blended essential oil treatment compared with those with lavender essential oil (p < 0.05), and a significantly greater improvement in sleep quality was observed in the acupressure massage and blended essential oil groups compared with the lavender essential oil group (p < 0.05).\n\n\nCONCLUSIONS\nThe blended essential oil exhibited greater dual benefits on improving both QOL and sleep quality compared with the interventions of lavender essential oil and acupressure massage in career women. These results suggest that aromatherapy and acupressure massage improve the sleep and QOL and may serve as the optimal means for career women to improve their sleep and QOL.",
"title": ""
},
{
"docid": "b4b6d9c35542b90eaee5de29664c86db",
"text": "In this paper, two simplified versions of the belief propagation algorithm for fast iterative decoding of low-density parity check codes on the additive white Gaussian noise channel are proposed. Both versions are implemented with real additions only, which greatly simplifies the decoding complexity of belief propagation in which products of probabilities have to be computed. Also, these two algorithms do not require any knowledge about the channel characteristics. Both algorithms yield a good performance–complexity tradeoff and can be efficiently implemented in software as well as in hardware, with possibly quantized received values.",
"title": ""
},
{
"docid": "e13e0a64d9c9ede58590d1cc113fbada",
"text": "Background The blood-brain barrier (BBB) has been hypothesized to play a role in migraine since the late 1970s. Despite this, limited investigation of the BBB in migraine has been conducted. We used the inflammatory soup rat model of trigeminal allodynia, which closely mimics chronic migraine, to determine the impact of repeated dural inflammatory stimulation on BBB permeability. Methods The sodium fluorescein BBB permeability assay was used in multiple brain regions (trigeminal nucleus caudalis (TNC), periaqueductal grey, frontal cortex, sub-cortex, and cortex directly below the area of dural activation) during the episodic and chronic stages of repeated inflammatory dural stimulation. Glial activation was assessed in the TNC via GFAP and OX42 immunoreactivity. Minocycline was tested for its ability to prevent BBB disruption and trigeminal sensitivity. Results No astrocyte or microglial activation was found during the episodic stage, but BBB permeability and trigeminal sensitivity were increased. Astrocyte and microglial activation, BBB permeability, and trigeminal sensitivity were increased during the chronic stage. These changes were only found in the TNC. Minocycline treatment prevented BBB permeability modulation and trigeminal sensitivity during the episodic and chronic stages. Discussion Modulation of BBB permeability occurs centrally within the TNC following repeated dural inflammatory stimulation and may play a role in migraine.",
"title": ""
},
{
"docid": "dcb7fd880064d27028df8412bfcc0422",
"text": "We recorded high-density EEG in a flanker task experiment (31 subjects) and an online BCI control paradigm (4 subjects). On these datasets, we evaluated the use of transfer learning for error decoding with deep convolutional neural networks (deep ConvNets). In comparison with a regularized linear discriminant analysis (rLDA) classifier, ConvNets were significantly better in both intra- and inter-subject decoding, achieving an average accuracy of 84.1 % within subject and 81.7 % on unknown subjects (flanker task). Neither method was, however, able to generalize reliably between paradigms. Visualization of features the ConvNets learned from the data showed plausible patterns of brain activity, revealing both similarities and differences between the different kinds of errors. Our findings indicate that deep learning techniques are useful to infer information about the correctness of action in BCI applications, particularly for the transfer of pre-trained classifiers to new recording sessions or subjects.",
"title": ""
},
{
"docid": "1fa6abe84be2f1240b4c5c077bbbb171",
"text": "The largest eigenvalue of the adjacency matrix of a network (referred to as the spectral radius) is an important metric in its own right. Further, for several models of epidemic spread on networks (e.g., the ‘flu-like’ SIS model), it has been shown that an epidemic dies out quickly if the spectral radius of the graph is below a certain threshold that depends on the model parameters. This motivates a strategy to control epidemic spread by reducing the spectral radius of the underlying network. In this paper, we develop a suite of provable approximation algorithms for reducing the spectral radius by removing the minimum cost set of edges (modeling quarantining) or nodes (modeling vaccinations), with different time and quality tradeoffs. Our main algorithm, GreedyWalk, is based on the idea of hitting closed walks of a given length, and gives an O(log n)-approximation, where n denotes the number of nodes; it also performs much better in practice compared to all prior heuristics proposed for this problem. We further present a novel sparsification method to improve its running time. In addition, we give a new primal-dual based algorithm with an even better approximation guarantee (O(log n)), albeit with slower running time. We also give lower bounds on the worst-case performance of some of the popular heuristics. Finally we demonstrate the applicability of our algorithms and the properties of our solutions via extensive experiments on multiple synthetic and real networks.",
"title": ""
},
{
"docid": "a8202f10b75aaffc3b74a2fdaad20e61",
"text": "Plant nitrogen (N) deficiency often limits crop productivity. Early detection of plant N deficiency is important for improving fertilizer N-use efficiency and crop yield. An experiment was conducted in sunlit, controlled environment chambers in the 2001 growing season to determine responses of corn (Zea mays L. cv. 33A14) growth and leaf hyperspectral reflectance properties to varying N supply. Four N treatments were: (1) half-strength Hoagland's nutrient solution applied throughout the experiment (control); (2) 20% of control N starting 15 days after emergence (DAE); (3) 0% N starting 15 DAE; and (4) 0% N starting 23 DAE (0% NL). Plant height, the number of leaves, and leaf lengths were examined for nine plants per treatment every 3–4 days. Leaf hyperspectral reflectance, concentrations of chlorophyll a, chlorophyll b,and carotenoids, leaf and canopy photosynthesis, leaf area, and leaf N concentration were also determined during the experiment. The various N treatments led to a wide range of N concentrations (11 – 48 g kg−1 DW) in uppermost fully expanded leaves. Nitrogen deficiency suppressed plant growth rate and leaf photosynthesis. At final harvest (42 DAE), plant height, leaf area and shoot biomass were 64–66% of control values for the 20% N treatment, and 46-56% of control values for the 0% N treatment. Nitrogen deficit treatments of 20% N and 0% N (Treatment 3) could be distinguished by changes in leaf spectral reflectance in wavelengths of 552 and 710 nm 7 days after treatment. Leaf reflectance at these two wavebands was negatively correlated with either leaf N (r = –0.72 and –0.75**) or chlorophyll (r = –0.60 and –0.72**) concentrations. In addition, higher correlations were found between leaf N concentration and reflectance ratios. The identified N-specific spectral algorithms may be used for image interpretation and diagnosis of corn N status for site-specific N management.",
"title": ""
},
{
"docid": "10204deb0dfdde9559b7cc97050c9ece",
"text": "We describe the current state of a system that recognizes printed text of various fonts and sizes for the Roman alphabet. The system combines several techniques in order to improve the overall recognition rate. Thinning and shape extraction are performed directly on a graph of the run-length encoding of a binary image. The resulting strokes and other shapes are mapped, using a shape-clustering approach, into binary features which are then fed into a statistical Bayesian classifier. Large-scale trials have shown better than 97 percent top choice correct performance on mixtures of six dissimilar fonts, and over 99 percent on most single fonts, over a range of point sizes. Certain remaining confusion classes are disambiguated through contour analysis, and characters suspected of being merged are broken and reclassified. Finally, layout and linguistic context are applied. The results are illustrated by sample pages.",
"title": ""
},
{
"docid": "e48f641ad2ca9a61611b48e1a6f82a52",
"text": "We present a methodology to design cavity-excited omega-bianisotropic metasurface (O-BMS) antennas capable of producing arbitrary radiation patterns, prescribed by antenna array theory. The method relies on previous work, in which we proved that utilizing the three O-BMS degrees of freedom, namely, electric and magnetic polarizabilities, and magnetoelectric coupling, any field transformation that obeys local power conservation can be implemented via passive lossless components. When the O-BMS acts as the top cover of a metallic cavity excited by a point source, this property allows optimization of the metasurface modal reflection coefficients to establish any desirable power profile on the aperture. Matching in this way the excitation profile to the target power profile corresponding to the desirable aperture fields allows emulation of arbitrary discrete antenna array radiation patterns. The resultant low-profile probed-fed cavity-excited O-BMS antennas offer a new means for meticulous pattern control, without requiring complex, expensive, and often lossy, feed networks.",
"title": ""
},
{
"docid": "e9c523662963a7c609eb59a4c19eff7f",
"text": "We propose a sampling theory for signals that are supported on either directed or undirected graphs. The theory follows the same paradigm as classical sampling theory. We show that perfect recovery is possible for graph signals bandlimited under the graph Fourier transform. The sampled signal coefficients form a new graph signal, whose corresponding graph structure preserves the first-order difference of the original graph signal. For general graphs, an optimal sampling operator based on experimentally designed sampling is proposed to guarantee perfect recovery and robustness to noise; for graphs whose graph Fourier transforms are frames with maximal robustness to erasures as well as for Erdös-Rényi graphs, random sampling leads to perfect recovery with high probability. We further establish the connection to the sampling theory of finite discrete-time signal processing and previous work on signal recovery on graphs. To handle full-band graph signals, we propose a graph filter bank based on sampling theory on graphs. Finally, we apply the proposed sampling theory to semi-supervised classification of online blogs and digit images, where we achieve similar or better performance with fewer labeled samples compared to previous work.",
"title": ""
},
{
"docid": "e92e097189bd6135dd68b787bb4881aa",
"text": "Figure 1: (a) Our method with 6.7M triangles Rungholt scene. 55K shaded samples. Inset picture was taken through the lens of the Oculus Rift HMD. (b) Naı̈ve ray tracing. 1M shaded samples. Visual quality in our method is equivalent to the one produced by the naı̈ve method when seen through the HMD. (c) Our foveated sampling pattern and k-NN filtering method. Each cell corresponds to a sampling point. Real-time rendering over 60 fps is achieved with the OpenCLray tracer, running on four RadeonR9 290X GPUs.",
"title": ""
},
{
"docid": "ff7c790af7eaaea4bf3a354d21fd9189",
"text": "Among the large number of contributions concerning the localization techniques for wireless sensor networks (WSNs), there is still no simple, energy and cost efficient solution suitable in outdoor scenarios. In this paper, a technique based on antenna arrays and angle-ofarrival (AoA) measurements is carefully discussed. While the AoA algorithms are rarely considered for WSNs due to the large dimensions of directional antennas, some system configurations are investigated that can be easily incorporated in pocket-size wireless devices. A heuristic weighting function that enables decreasing the location errors is introduced. Also, the detailed performance analysis of the presented system is provided. The localization accuracy is validated through realistic Monte-Carlo simulations that take into account the specificity of propagation conditions in WSNs as well as the radio noise effects. Finally, trade-offs between the accuracy, localization time and the number of anchors in a network are addressed. 2010 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
3cce1b6f8907c4514d7c7bac11762241
|
Design of low inertia manipulator with high stiffness and strength using tension amplifying mechanisms
|
[
{
"docid": "241a1589619c2db686675327cab1e8da",
"text": "This paper describes a simple computational model of joint torque and impedance in human arm movements that can be used to simulate three-dimensional movements of the (redundant) arm or leg and to design the control of robots and human-machine interfaces. This model, based on recent physiological findings, assumes that (1) the central nervous system learns the force and impedance to perform a task successfully in a given stable or unstable dynamic environment and (2) stiffness is linearly related to the magnitude of the joint torque and increased to compensate for environment instability. Comparison with existing data shows that this simple model is able to predict impedance geometry well.",
"title": ""
},
{
"docid": "e7686824a9449bf793554fcf78b66c0e",
"text": "In this paper, tension propagation analysis of a newly designed multi-DOF robotic platform for single-port access surgery (SPS) is presented. The analysis is based on instantaneous kinematics of the proposed 6-DOF surgical instrument, and provides the decision criteria for estimating the payload of a surgical instrument according to its pose changes and specifications of a driving-wire. Also, the wire-tension and the number of reduction ratio to manage such a payload can be estimated, quantitatively. The analysis begins with derivation of the power transmission efficiency through wire-interfaces from each instrument joint to an actuator. Based on the energy conservation law and the capstan equation, we modeled the degradation of power transmission efficiency due to 1) the reducer called wire-reduction mechanism, 2) bending of proximal instrument joints, and 3) bending of hyper-redundant guide tube. Based on the analysis, the tension of driving-wires was computed according to various manipulation poses and loading conditions. In our experiment, a newly designed surgical instrument successfully managed the external load of 1kgf, which was applied to the end effector of a surgical manipulator.",
"title": ""
},
{
"docid": "619926d6d08a41c1f5d55926f3a2a3d9",
"text": "Transforming research results into marketable products requires considerable endurance and a strong sense of entrepreneurship. The KUKA Lightweight Robot (LWR) is the latest outcome of a bilateral research collaboration between KUKA Roboter, Augsburg, and the Institute of Robotics and Mechatronics at the German Aerospace Center (DLR), Wessling. The LWR has unique characteristics including a low mass-payload ratio and a programmable, active compliance which enables researchers and engineers to develop new industrial and service robotics applications with unprecedented performance, making it a unique reference platform for robotics research and future manufacturing. The stages of product genesis, the most innovative features and first application examples are presented.",
"title": ""
}
] |
[
{
"docid": "af75e646afb0cf67130496397534eddc",
"text": "Prior laboratory studies have shown that PhishGuru, an embedded training system, is an effective way to teach users to identify phishing scams. PhishGuru users are sent simulated phishing attacks and trained after they fall for the attacks. In this current study, we extend the PhishGuru methodology to train users about spear phishing and test it in a real world setting with employees of a Portuguese company. Our results demonstrate that the findings of PhishGuru laboratory studies do indeed hold up in a real world deployment. Specifically, the results from the field study showed that a large percentage of people who clicked on links in simulated emails proceeded to give some form of personal information to fake phishing websites, and that participants who received PhishGuru training were significantly less likely to fall for subsequent simulated phishing attacks one week later. This paper also presents some additional new findings. First, people trained with spear phishing training material did not make better decisions in identifying spear phishing emails compared to people trained with generic training material. Second, we observed that PhishGuru training could be effective in training other people in the organization who did not receive training messages directly from the system. Third, we also observed that employees in technical jobs were not different from employees with non-technical jobs in identifying phishing emails before and after the training. We conclude with some lessons that we learned in conducting the real world study.",
"title": ""
},
{
"docid": "58f093ac65039299c40da33fbce3f7ee",
"text": "Currently computers are changing from single isolated devices into entry points into a worldwide network of information exchange and business transactions. Support in data, information, and knowledge exchange is becoming the key issue in current computer technology. Ontologies will play a major role in supporting information exchange processes in various areas. A prerequisite for such a role is the development of a joint standard for specifying and exchanging ontologies. The purpose of the paper is precisely concerned with this necessity. We will present OIL, which is a proposal for such a standard. It is based on existing proposals such as OKBC, XOL and RDF schema, enriching them with necessary features for expressing ontologies. The paper sketches the main ideas of OIL.",
"title": ""
},
{
"docid": "028f272130f8b03c8f4ac97158c3dd70",
"text": "While the importance of literature studies in the IS discipline is well recognized, little attention has been paid to the underlying structure and method of conducting effective literature reviews. Despite the fact that literature is often used to refine the research context and direct the pathways for successful research outcomes, there is very little evidence of the use of resource management tools to support the literature review process. In this paper we want to contribute to advancing the way in which literature studies in Information Systems are conducted, by proposing a systematic, pre-defined and tool-supported method to extract, analyse and report literature. This paper presents how to best identify relevant IS papers to review within a feasible and justifiable scope, how to extract relevant content from identified papers, how to synthesise and analyse the findings of a literature review and what are ways to effectively write and present the results of a literature review. The paper is specifically targeted towards novice IS researchers, who would seek to conduct a systematic detailed literature review in a focused domain. Specific contributions of our method are extensive tool support, the identification of appropriate papers including primary and secondary paper sets and a precodification scheme. We use a literature study on shared services as an illustrative example to present the proposed approach.",
"title": ""
},
{
"docid": "0ff1837d40bbd6bbfe4f5ec69f83de90",
"text": "Nowadays, Telemarketing is an interactive technique of direct marketing that many banks apply to present a long term deposit to bank customers via the phone. Although the offering like this manner is powerful, it may make the customers annoyed. The data prediction is a popular task in data mining because it can be applied to solve this problem. However, the predictive performance may be decreased in case of the input data have many features like the bank customer information. In this paper, we focus on how to reduce the feature of input data and balance the training set for the predictive model to help the bank to increase the prediction rate. In the system performance evaluation, all accuracy rates of each predictive model based on the proposed approach compared with the original predictive model based on the truth positive and receiver operating characteristic measurement show the high performance in which the smaller number of features.",
"title": ""
},
{
"docid": "a16e16484e3fca05f97916a48f6a6da5",
"text": "A novel integrated magnetic structure suitable for the transformer-linked interleaved boost chopper circuit is proposed in this paper. The coupled inductor is known to be effective for miniaturization in high coupling area because the DC flux in the core can be canceled and the inductor current ripple become to high frequency. However, coupled inductor with E-E core and E-I core are realistically difficult to obtain necessary leakage inductance in high coupling area. The cause is fringing effect and the effects leads to complication of magnetic design. To solve this problem, novel integrated magnetic structure with reduction of fringing flux and high frequency ripple current performance, is proposed. Furthermore, the design method for novel integrated magnetic structure suitable for coupled inductor is proposed from analyzing of the magnetic circuit model. Finally, effectiveness of reduction of fringing flux and design method for novel coupled inductor are discussed from experimental point of view.",
"title": ""
},
{
"docid": "8518dc45e3b0accfc551111489842359",
"text": "PURPOSE\nRobot-assisted surgery has been rapidly adopted in the U.S. for prostate cancer. Its adoption has been driven by market forces and patient preference, and debate continues regarding whether it offers improved outcomes to justify the higher cost relative to open surgery. We examined the comparative effectiveness of robot-assisted vs open radical prostatectomy in cancer control and survival in a nationally representative population.\n\n\nMATERIALS AND METHODS\nThis population based observational cohort study of patients with prostate cancer undergoing robot-assisted radical prostatectomy and open radical prostatectomy during 2003 to 2012 used data captured in the SEER (Surveillance, Epidemiology, and End Results)-Medicare linked database. Propensity score matching and time to event analysis were used to compare all cause mortality, prostate cancer specific mortality and use of additional treatment after surgery.\n\n\nRESULTS\nA total of 6,430 robot-assisted radical prostatectomies and 9,161 open radical prostatectomies performed during 2003 to 2012 were identified. The use of robot-assisted radical prostatectomy increased from 13.6% in 2003 to 2004 to 72.6% in 2011 to 2012. After a median followup of 6.5 years (IQR 5.2-7.9) robot-assisted radical prostatectomy was associated with an equivalent risk of all cause mortality (HR 0.85, 0.72-1.01) and similar cancer specific mortality (HR 0.85, 0.50-1.43) vs open radical prostatectomy. Robot-assisted radical prostatectomy was also associated with less use of additional treatment (HR 0.78, 0.70-0.86).\n\n\nCONCLUSIONS\nRobot-assisted radical prostatectomy has comparable intermediate cancer control as evidenced by less use of additional postoperative cancer therapies and equivalent cancer specific and overall survival. Longer term followup is needed to assess for differences in prostate cancer specific survival, which was similar during intermediate followup. Our findings have significant quality and cost implications, and provide reassurance regarding the adoption of more expensive technology in the absence of randomized controlled trials.",
"title": ""
},
{
"docid": "ee596f4ef7d41c6b627a6990d54b07c2",
"text": "The objective of this study is to develop effective computational models that can predict student learning gains, preferably as early as possible. We compared a series of Bayesian Knowledge Tracing (BKT) models against vanilla RNNs and Long Short Term Memory (LSTM) based models. Our results showed that the LSTM-based model achieved the highest accuracy and the RNN based model have the highest F1-measure. Interestingly, we found that RNN can achieve a reasonably accurate prediction of student final learning gains using only the first 40% of the entire training sequence; using the first 70% of the sequence would produce a result comparable to using the entire sequence.",
"title": ""
},
{
"docid": "fd9461aeac51be30c9d0fbbba298a79b",
"text": "Disaster management is a crucial and urgent research issue. Emergency communication networks (ECNs) provide fundamental functions for disaster management, because communication service is generally unavailable due to large-scale damage and restrictions in communication services. Considering the features of a disaster (e.g., limited resources and dynamic changing of environment), it is always a key problem to use limited resources effectively to provide the best communication services. Big data analytics in the disaster area provides possible solutions to understand the situations happening in disaster areas, so that limited resources can be optimally deployed based on the analysis results. In this paper, we survey existing ECNs and big data analytics from both the content and the spatial points of view. From the content point of view, we survey existing data mining and analysis techniques, and further survey and analyze applications and the possibilities to enhance ECNs. From the spatial point of view, we survey and discuss the most popular methods and further discuss the possibility to enhance ECNs. Finally, we highlight the remaining challenging problems after a systematic survey and studies of the possibilities.",
"title": ""
},
{
"docid": "38570075c31812866646d47d25667a49",
"text": "Mercator is a program that uses hop-limited probes—the same primitive used in traceroute—to infer an Internet map. It uses informed random address probing to carefully exploring the IP address space when determining router adjacencies, uses source-route ca p ble routers wherever possible to enhance the fidelity of the resulting ma p, and employs novel mechanisms for resolvingaliases(interfaces belonging to the same router). This paper describes the design of these heuri stics and our experiences with Mercator, and presents some preliminary a nalysis of the resulting Internet map.",
"title": ""
},
{
"docid": "258269307e097a89fd089cf44ba50ecd",
"text": "The Visual Notation for OWL Ontologies (VOWL) provides a visual language for the representation of ontologies. In contrast to related work, VOWL aims for an intuitive and interactive visualization that is also understandable to users less familiar with ontologies. This paper presents ProtégéVOWL, a first implementation of VOWL realized as a plugin for the ontology editor Protégé. It accesses the internal ontology representation provided by the OWL API and defines graphical mappings according to the VOWL specification. The information visualization toolkit Prefuse is used to render the visual elements and to combine them to a force-directed graph layout. Results from a preliminary user study indicate that ProtégéVOWL does indeed provide a comparatively intuitive and usable ontology visualization.",
"title": ""
},
{
"docid": "8b0c5091847f6d199387616fe9ad1076",
"text": "Humans have rich understanding of liquid containers and their contents; for example, we can effortlessly pour water from a pitcher to a cup. Doing so requires estimating the volume of the cup, approximating the amount of water in the pitcher, and predicting the behavior of water when we tilt the pitcher. Very little attention in computer vision has been made to liquids and their containers. In this paper, we study liquid containers and their contents, and propose methods to estimate the volume of containers, approximate the amount of liquid in them, and perform comparative volume estimations all from a single RGB image. Furthermore, we show the results of the proposed model for predicting the behavior of liquids inside containers when one tilts the containers. We also introduce a new dataset of Containers Of liQuid contEnt (COQE) that contains more than 5,000 images of 10,000 liquid containers in context labelled with volume, amount of content, bounding box annotation, and corresponding similar 3D CAD models.",
"title": ""
},
{
"docid": "af48f00757d8e95d92facca57cd9d13c",
"text": "Remaining useful life (RUL) prediction allows for predictive maintenance of machinery, thus reducing costly unscheduled maintenance. Therefore, RUL prediction of machinery appears to be a hot issue attracting more and more attention as well as being of great challenge. This paper proposes a model-based method for predicting RUL of machinery. The method includes two modules, i.e., indicator construction and RUL prediction. In the first module, a new health indicator named weighted minimum quantization error is constructed, which fuses mutual information from multiple features and properly correlates to the degradation processes of machinery. In the second module, model parameters are initialized using the maximum-likelihood estimation algorithm and RUL is predicted using a particle filtering-based algorithm. The proposed method is demonstrated using vibration signals from accelerated degradation tests of rolling element bearings. The prediction result identifies the effectiveness of the proposed method in predicting RUL of machinery.",
"title": ""
},
{
"docid": "22a57d6a7ec76fa9a9fa761113853e32",
"text": "This paper presents a single-phase 11-level (5 H-bridges) cascade multilevel DC-AC grid-tied inverter. Each inverter bridge is connected to a 200 W solar panel. OPAL-RT lab was used as the hardware in the loop (HIL) real-time control system platform where a Maximum Power Point Tracking (MPPT) algorithm was implemented based on the inverter output power to assure optimal operation of the inverter when connected to the power grid as well as a Phase Locked Loop (PLL) for phase and frequency match. A novel SPWM scheme is proposed in this paper to be used with the solar panels that can account for voltage profile fluctuations among the panels during the day. Simulation and experimental results are shown for voltage and current during synchronization mode and power transferring mode to validate the methodology for grid connection of renewable resources.",
"title": ""
},
{
"docid": "3c60c99cf32bb97129f3d91c7ada383c",
"text": "An adaptive neuro-fuzzy inference system is developed and tested for traffic signal controlling. From a given input data set, the developed adaptive neuro-fuzzy inference system can draw the membership functions and corresponding rules by its own, thus making the designing process easier and reliable compared to standard fuzzy logic controllers. Among useful inputs of fuzzy signal control systems, gap between two vehicles, delay at intersections, vehicle density, flow rate and queue length are often used. By considering the practical applicability, the average vehicle inflow rate of each lane is considered in this work as inputs to model the adaptive neuro-fuzzy signal control system. In order to define the desired objectives of reducing the waiting time of vehicles at the signal control, the combined delay of vehicles within one signal cycle is minimized using a simple mathematical optimization method The performance of the control system was tested further by developing an event driven traffic simulation program in Matlab under Windows environment. As expected, the neuro-fuzzy logic controller performed better than the fixed time controller due to its real time adaptability. The neuro-fuzzy controlling system allows more vehicles to pass the junction in congestion and less number of vehicles when the flow rate is low. In particular, the performance of the developed system was superior when there were abrupt changes in traffic flow rates.",
"title": ""
},
{
"docid": "de4c96ee05fc711253e4dc41edfed07f",
"text": "Face Recognition has been studied for many decades. As opposed to traditional hand-crafted features such as LBP and HOG, much more sophisticated features can be learned automatically by deep learning methods in a data-driven way. In this paper, we propose a two-stage approach that combines a multi-patch deep CNN and deep metric learning, which extracts low dimensional but very discriminative features for face verification and recognition. Experiments show that this method outperforms other state-of-the-art methods on LFW dataset, achieving 99.77% pair-wise verification accuracy and significantly better accuracy under other two more practical protocols. This paper also discusses the importance of data size and the number of patches, showing a clear path to practical high-performance face recognition systems in real world.",
"title": ""
},
{
"docid": "c9a23d1c5618914ea9c8c02d0faf0c8a",
"text": "Channel density is a fundamental factor in determining neuronal firing and is primarily regulated during development through transcriptional and translational regulation. In adult rats, striatal cholinergic interneurons have a prominent A-type current and co-express Kv4.1 and Kv4.2 mRNAs. There is evidence that Kv4.2 plays a primary role in producing the current in adult neurons. The contribution of Kv4.2 and Kv4.1 to the A-type current in cholinergic interneurons during development, however, is not known. Here, using patch-clamp recording and semi-quantitative single-cell reverse transcription-polymerase chain reaction (RT-PCR) techniques, we have examined the postnatal development of A-type current and the expression of Kv4.2 and Kv4.1 in rat striatal cholinergic interneurons. A-type current was detectable at birth, and its amplitude was up-regulated with age, reaching a plateau at about 3 wk after birth. At all ages, the current inactivated with two time constants: one ranging from 15 to 27 ms and the other ranging from 99 to 142 ms. Kv4.2 mRNA was detectable at birth, and the expression level increased exponentially with age, reaching a plateau by 3 wk postnatal. In contrast, Kv4.1 mRNA was not detectable during the first week after birth, and the expression level did not show a clear tendency with age. Taken together, our results suggest that Kv4.2 plays an essential role in producing the A-type current in striatal cholinergic interneurons during the entire course of postnatal development.",
"title": ""
},
{
"docid": "0b0e9d5bedcb24a65a9a43b6b0875860",
"text": "Purpose – This paper summarizes and discusses the results from the LIVING LAB design study, a project within the 7 Framework Programme of the European Union. The aim of this project was to develop the conceptual design of the LIVING LAB Research Infrastructure that will be used to research human interaction with, and stimulate the adoption of, sustainable, smart and healthy innovations around the home. Design/methodology/approach – A LIVING LAB is a combined lab-/household system, analysing existing product-service-systems as well as technical and socioeconomic influences focused on the social needs of people, aiming at the development of integrated technical and social innovations and simultaneously promoting the conditions of sustainable development (highest resource efficiency, highest user orientation, etc.). This approach allows the development and testing of sustainable domestic technologies, while putting the user on centre stage. Findings – As this paper discusses the design study, no actual findings can be presented here but the focus is on presenting the research approach. LIVING LAB: Research and development of sustainable products and services through userdriven innovation in experimental-oriented environments 2 Originality/value – The two elements (real homes and living laboratories) of this approach are what make the LIVING LAB research infrastructure unique. The research conducted in LIVING LAB will be innovative in several respects. First, it will contribute to market innovation by producing breakthroughs in sustainable domestic technologies that will be easy to install, user friendly and that meet environmental performance standards in real life. Second, research from LIVING LAB will contribute to innovation in practice by pioneering new forms of in-context, user-centred research, including long-term and cross-cultural research.",
"title": ""
},
{
"docid": "ebfa889f9ba51267823aac9b92b0ee66",
"text": "8 9 10 A Synthetic Aperture Radar (SAR) is an active sensor transmitting pulses of polarized 11 electromagnetic waves and receiving the backscattered radiation. SAR sensors at different 12 wavelengths and with different polarimetric capabilities are being used in remote sensing of 13 the Earth. The value of an analysis of backscattered energy alone is limited due to ambiguities 14 in the possible ecological factor configurations causing the signal. From two SAR images 15 taken from similar viewing positions with a short time-lag, interference between the two 16 waves can be observed. By subtracting the two phases of the signals, it is feasible to eliminate 17 the random contribution of the scatterers to the phase. The interferometric correlation and the 18 interferometric phase contain additional information on the three-dimensional structure of the 19 scattering elements in the imaged area. 20 A brief review of SAR sensors is given, followed by an outline of the physical foundations of 21 SAR interferometry and the practical data processing steps involved. An overview of 22 applications of InSAR to forest mapping and monitoring is given, covering tree bole volume 23 and biomass, forest types and land cover, fire scars, forest thermal state and forest canopy 24",
"title": ""
},
{
"docid": "baf9e931df45d010c44083973d1281fd",
"text": "Error vector magnitude (EVM) is one of the widely accepted figure of merits used to evaluate the quality of communication systems. In the literature, EVM has been related to signal-to-noise ratio (SNR) for data-aided receivers, where preamble sequences or pilots are used to measure the EVM, or under the assumption of high SNR values. In this paper, this relation is examined for nondata-aided receivers and is shown to perform poorly, especially for low SNR values or high modulation orders. The EVM for nondata-aided receivers is then evaluated and its value is related to the SNR for quadrature amplitude modulation (QAM) and pulse amplitude modulation (PAM) signals over additive white Gaussian noise (AWGN) channels and Rayleigh fading channels, and for systems with IQ imbalances. The results show that derived equations can be used to reliably estimate SNR values using EVM measurements that are made based on detected data symbols. Thus, presented work can be quite useful for measurement devices such as vector signal analyzers (VSA), where EVM measurements are readily available.",
"title": ""
},
{
"docid": "dd2819d0413a1d41c602aef4830888a4",
"text": "Presented here is a fast method that combines curve matching techniques with a surface matching algorithm to estimate the positioning and respective matching error for the joining of three-dimensional fragmented objects. Furthermore, this paper describes how multiple joints are evaluated and how the broken artefacts are clustered and transformed to form potential solutions of the assemblage problem. q 2003 Elsevier Science B.V. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
97e9c7b3d490dc41be471334ed63a541
|
The e-puck , a Robot Designed for Education in Engineering
|
[
{
"docid": "aaba5dc8efc9b6a62255139965b6f98d",
"text": "The interaction of an autonomous mobile robot with the real world critically depends on the robots morphology and on its environment. Building a model of these aspects is extremely complex, making simulation insu cient for accurate validation of control algorithms. If simulation environments are often very e cient, the tools for experimenting with real robots are often inadequate. The traditional programming languages and tools seldom provide enought support for realtime experiments, thus hindering the understanding of the control algorithms and making the experimentation complex and time-consuming. A miniature robot is presented: it has a cylindrical shape measuring 55 mm in diameter and 30 mm in height. Due to its small size, experiments can be performed quickly and cost-e ectively in a small working area. Small peripherals can be designed and connected to the basic module and can take advantage of a versatile communication scheme. A serial-link is provided to run control algorithms on a workstation during debugging, thereby giving the user the opportunity of employing all available graphical tools. Once debugged, the algorithm can be downloaded to the robot and run on its own processor. Experimentation with groups of robots is hardly possible with commercially available hardware. The size and the price of the described robot open the way to cost-e ective investigations into collective behaviour. This aspect of research drives the design of the robot described in this paper. Experiments with some twenty units are planned for the near future.",
"title": ""
}
] |
[
{
"docid": "f1b691a8072eaaaaf0e540a2d24445fa",
"text": "We describe a framework for finding and tracking “trails” for autonomous outdoor robot navigation. Through a combination of visual cues and ladar-derived structural information, the algorithm is able to follow paths which pass through multiple zones of terrain smoothness, border vegetation, tread material, and illumination conditions. Our shape-based visual trail tracker assumes that the approaching trail region is approximately triangular under perspective. It generates region hypotheses from a learned distribution of expected trail width and curvature variation, and scores them using a robust measure of color and brightness contrast with flanking regions. The structural component analogously rewards hypotheses which correspond to empty or low-density regions in a groundstrike-filtered ladar obstacle map. Our system's performance is analyzed on several long sequences with diverse appearance and structural characteristics. Ground-truth segmentations are used to quantify performance where available, and several alternative algorithms are compared on the same data.",
"title": ""
},
{
"docid": "27465b2c8ce92ccfbbda6c802c76838f",
"text": "Nonlinear hyperelastic energies play a key role in capturing the fleshy appearance of virtual characters. Real-world, volume-preserving biological tissues have Poisson’s ratios near 1/2, but numerical simulation within this regime is notoriously challenging. In order to robustly capture these visual characteristics, we present a novel version of Neo-Hookean elasticity. Our model maintains the fleshy appearance of the Neo-Hookean model, exhibits superior volume preservation, and is robust to extreme kinematic rotations and inversions. We obtain closed-form expressions for the eigenvalues and eigenvectors of all of the system’s components, which allows us to directly project the Hessian to semipositive definiteness, and also leads to insights into the numerical behavior of the material. These findings also inform the design of more sophisticated hyperelastic models, which we explore by applying our analysis to Fung and Arruda-Boyce elasticity. We provide extensive comparisons against existing material models.",
"title": ""
},
{
"docid": "02a6e024c1d318862ad4c17b9a56ca36",
"text": "Artificial food colors (AFCs) have not been established as the main cause of attention-deficit hyperactivity disorder (ADHD), but accumulated evidence suggests that a subgroup shows significant symptom improvement when consuming an AFC-free diet and reacts with ADHD-type symptoms on challenge with AFCs. Of children with suspected sensitivities, 65% to 89% reacted when challenged with at least 100 mg of AFC. Oligoantigenic diet studies suggested that some children in addition to being sensitive to AFCs are also sensitive to common nonsalicylate foods (milk, chocolate, soy, eggs, wheat, corn, legumes) as well as salicylate-containing grapes, tomatoes, and orange. Some studies found \"cosensitivity\" to be more the rule than the exception. Recently, 2 large studies demonstrated behavioral sensitivity to AFCs and benzoate in children both with and without ADHD. A trial elimination diet is appropriate for children who have not responded satisfactorily to conventional treatment or whose parents wish to pursue a dietary investigation.",
"title": ""
},
{
"docid": "1ebdcfe9c477e6a29bfce1ddeea960aa",
"text": "Bitcoin—a cryptocurrency built on blockchain technology—was the first currency not controlled by a single entity.1 Initially known to a few nerds and criminals,2 bitcoin is now involved in hundreds of thousands of transactions daily. Bitcoin has achieved values of more than US$15,000 per coin (at the end of 2017), and this rising value has attracted attention. For some, bitcoin is digital fool’s gold. For others, its underlying blockchain technology heralds the dawn of a new digital era. Both views could be right. The fortunes of cryptocurrencies don’t define blockchain. Indeed, the biggest effects of blockchain might lie beyond bitcoin, cryptocurrencies, or even the economy. Of course, the technical questions about blockchain have not all been answered. We still struggle to overcome the high levels of processing intensity and energy use. These questions will no doubt be confronted over time. If the technology fails, the future of blockchain will be different. In this article, I’ll assume technical challenges will be solved, and although I’ll cover some technical issues, these aren’t the main focus of this paper. In a 2015 article, “The Trust Machine,” it was argued that the biggest effects of blockchain are on trust.1 The article referred to public trust in economic institutions, that is, that such organizations and intermediaries will act as expected. When they don’t, trust deteriorates. Trust in economic institutions hasn’t recovered from the recession of 2008.3 Technology can exacerbate distrust: online trades with distant counterparties can make it hard to settle disputes face to face. Trusted intermediaries can be hard to find, and that’s where blockchain can play a part. Permanent record-keeping that can be sequentially updated but not erased creates visible footprints of all activities conducted on the chain. This reduces the uncertainty of alternative facts or truths, thus creating the “trust machine” The Economist describes. As trust changes, so too does governance.4 Vitalik Buterin of the Ethereum blockchain platform calls blockchain “a magic computer” to which anyone can upload self-executing programs.5 All states of every Beyond Bitcoin: The Rise of Blockchain World",
"title": ""
},
{
"docid": "bc3f2f0c2e33668668714dcebe1365a2",
"text": "Our dexterous hand is a fundmanetal human feature that distinguishes us from other animals by enabling us to go beyond grasping to support sophisticated in-hand object manipulation. Our aim was the design of a dexterous anthropomorphic robotic hand that matches the human hand's 24 degrees of freedom, under-actuated by seven motors. With the ability to replicate human hand movements in a naturalistic manner including in-hand object manipulation. Therefore, we focused on the development of a novel thumb and palm articulation that would facilitate in-hand object manipulation while avoiding mechanical design complexity. Our key innovation is the use of a tendon-driven ball joint as a basis for an articulated thumb. The design innovation enables our under-actuated hand to perform complex in-hand object manipulation such as passing a ball between the fingers or even writing text messages on a smartphone with the thumb's end-point while holding the phone in the palm of the same hand. We then proceed to compare the dexterity of our novel robotic hand design to other designs in prosthetics, robotics and humans using simulated and physical kinematic data to demonstrate the enhanced dexterity of our novel articulation exceeding previous designs by a factor of two. Our innovative approach achieves naturalistic movement of the human hand, without requiring translation in the hand joints, and enables teleoperation of complex tasks, such as single (robot) handed messaging on a smartphone without the need for haptic feedback. Our simple, under-actuated design outperforms current state-of-the-art prostheses or robotic and prosthetic hands regarding abilities that encompass from grasps to activities of daily living which involve complex in-hand object manipulation.",
"title": ""
},
{
"docid": "60736095287074c8a81c9ce5afa93f75",
"text": "The visualization of high-quality isosurfaces at interactive rates is an important tool in many simulation and visualization applications. Today, isosurfaces are most often visualized by extracting a polygonal approximation that is then rendered via graphics hardware or by using a special variant of preintegrated volume rendering. However, these approaches have a number of limitations in terms of the quality of the isosurface, lack of performance for complex data sets, or supported shading models. An alternative isosurface rendering method that does not suffer from these limitations is to directly ray trace the isosurface. However, this approach has been much too slow for interactive applications unless massively parallel shared-memory supercomputers have been used. In this paper, we implement interactive isosurface ray tracing on commodity desktop PCs by building on recent advances in real-time ray tracing of polygonal scenes and using those to improve isosurface ray tracing performance as well. The high performance and scalability of our approach will be demonstrated with several practical examples, including the visualization of highly complex isosurface data sets, the interactive rendering of hybrid polygonal/isosurface scenes, including high-quality ray traced shading effects, and even interactive global illumination on isosurfaces.",
"title": ""
},
{
"docid": "467c538a696027d92f1b510d6179f73f",
"text": "We investigated the acute and chronic effects of low-intensity concentric or eccentric resistance training with blood flow restriction (BFR) on muscle size and strength. Ten young men performed 30% of concentric one repetition maximal dumbbell curl exercise (four sets, total 75 reps) 3 days/week for 6 weeks. One arm was randomly chosen for concentric BFR (CON-BFR) exercise only and the other arm performed eccentric BFR (ECC-BFR) exercise only at the same exercise load. During the exercise session, iEMG for biceps brachii muscles increased progressively during CON-BFR, which was greater (p<0.05) than that of the ECC-BFR. Immediately after the exercise, muscle thickness (MTH) of the elbow flexors acutely increased (p<0.01) with both CON-BFR and ECC-BFR, but was greater with CON-BFR (11.7%) (p<0.01) than ECC-BFR (3.9%) at 10-cm above the elbow joint. Following 6-weeks of training, MRI-measured muscle cross-sectional area (CSA) at 10-cm position and mid-upper arm (12.0% and 10.6%, respectively) as well as muscle volume (12.5%) of the elbow flexors were increased (p<0.01) with CON-BFR. Increases in muscle CSA and volume were lower in ECC-BFR (5.1%, 0.8% and 2.9%, respectively) than in the CON-BFR and only muscle CSA at 10-cm position increased significantly (p<0.05) after the training. Maximal voluntary isometric strength of elbow flexors was increased (p<0.05) in CON-BFR (8.6%), but not in ECC (3.8%). These results suggest that CON-BFR training leads to pronounced acute changes in muscle size, an index of muscle cell swelling, the response to which may be an important factor for promoting muscle hypertrophy with BFR resistance training.",
"title": ""
},
{
"docid": "c4a895af5fe46e91f599f71403948a2b",
"text": "The rise in popularity of the Android platform has resulted in an explosion of malware threats targeting it. As both Android malware and the operating system itself constantly evolve, it is very challenging to design robust malware mitigation techniques that can operate for long periods of time without the need for modifications or costly re-training. In this paper, we present MAMADROID, an Android malware detection system that relies on app behavior. MAMADROID builds a behavioral model, in the form of a Markov chain, from the sequence of abstracted API calls performed by an app, and uses it to extract features and perform classification. By abstracting calls to their packages or families, MAMADROID maintains resilience to API changes and keeps the feature set size manageable. We evaluate its accuracy on a dataset of 8.5K benign and 35.5K malicious apps collected over a period of six years, showing that it not only effectively detects malware (with up to 99% F-measure), but also that the model built by the system keeps its detection capabilities for long periods of time (on average, 87% and 73% F-measure, respectively, one and two years after training). Finally, we compare against DROIDAPIMINER, a state-of-the-art system that relies on the frequency of API calls performed by apps, showing that MAMADROID significantly outperforms it.",
"title": ""
},
{
"docid": "c8dc06de68e4706525e98f444e9877e4",
"text": "This study used two field trials with 5 and 34 years of liming histories, respectively, and aimed to elucidate the long-term effect of liming on soil organic C (SOC) in acid soils. It was hypothesized that long-term liming would increase SOC concentration, macro-aggregate stability and SOC concentration within aggregates. Surface soils (0–10 cm) were sampled and separated into four aggregate-size classes: large macro-aggregates (>2 mm), small macro-aggregates (0.25–2 mm), micro-aggregates (0.053–0.25 mm) and silt and clay fraction (<0.053 mm) by wet sieving, and the SOC concentration of each aggregate-size was quantified. Liming decreased SOC in the bulk soil and in aggregates as well as macro-aggregate stability in the low-input and cultivated 34-year-old trial. In contrast, liming did not significantly change the concentration of SOC in the bulk soil or in aggregates but improved macro-aggregate stability in the 5-year-old trial under undisturbed unimproved pastures. Furthermore, the single application of lime to the surface soil increased pH in both topsoil (0–10 cm) and subsurface soil (10–20 cm) and increased K2SO4-extractable C, microbial biomass C (Cmic) and basal respiration (CO2) in both soil layers of both lime trials. Liming increased the percentage of SOC present as microbial biomass C (Cmic/Corg) and decreased the respiration rate per unit biomass (qCO2). The study concludes that despite long-term liming decreased total SOC in the low-input systems, it increased labile C pools and the percentage of SOC present as microbial biomass C.",
"title": ""
},
{
"docid": "cea0f4b7409729fd310024d2e9a31b71",
"text": "Relative ranging between Wireless Sensor Network (WSN) nod es is considered to be an important requirement for a number of dis tributed applications. This paper focuses on a two-way, time of flight (ToF) te chnique which achieves good accuracy in estimating the point-to-point di s ance between two wireless nodes. The underlying idea is to utilize a two-way t ime transfer approach in order to avoid the need for clock synchronization b etween the participating wireless nodes. Moreover, by employing multipl e ToF measurements, sub-clock resolution is achieved. A calibration stage is us ed to estimate the various delays that occur during a message exchange and require subtraction from the initial timed value. The calculation of the range betwee n the nodes takes place on-node making the proposed scheme suitable for distribute d systems. Care has been taken to exclude the erroneous readings from the set of m easurements that are used in the estimation of the desired range. The two-way T oF technique has been implemented on commercial off-the-self (COTS) device s without the need for additional hardware. The system has been deployed in var ous experimental locations both indoors and outdoors and the obtained result s reveal that accuracy between 1m RMS and 2.5m RMS in line-of-sight conditions over a 42m range can be achieved.",
"title": ""
},
{
"docid": "9e84bd8c033bf04592b732e6c6a604c6",
"text": "In recent years, endomicroscopy has become increasingly used for diagnostic purposes and interventional guidance. It can provide intraoperative aids for real-time tissue characterization and can help to perform visual investigations aimed for example to discover epithelial cancers. Due to physical constraints on the acquisition process, endomicroscopy images, still today have a low number of informative pixels which hampers their quality. Post-processing techniques, such as Super-Resolution (SR), are a potential solution to increase the quality of these images. SR techniques are often supervised, requiring aligned pairs of low-resolution (LR) and high-resolution (HR) images patches to train a model. However, in our domain, the lack of HR images hinders the collection of such pairs and makes supervised training unsuitable. For this reason, we propose an unsupervised SR framework based on an adversarial deep neural network with a physically-inspired cycle consistency, designed to impose some acquisition properties on the super-resolved images. Our framework can exploit HR images, regardless of the domain where they are coming from, to transfer the quality of the HR images to the initial LR images. This property can be particularly useful in all situations where pairs of LR/HR are not available during the training. Our quantitative analysis, validated using a database of 238 endomicroscopy video sequences from 143 patients, shows the ability of the pipeline to produce convincing super-resolved images. A Mean Opinion Score (MOS) study also confirms this quantitative image quality assessment.",
"title": ""
},
{
"docid": "25bddb3111da2485c341eec1d7fdf7c0",
"text": "Security protocols are building blocks in secure communications. Security protocols deploy some security mechanisms to provide certain security services. Security protocols are considered abstract when analyzed. They might involve more vulnerabilities when implemented. This manuscript provides a holistic study on security protocols. It reviews foundations of security protocols, taxonomy of attacks on security protocols and their implementations, and different methods and models for security analysis of protocols. Specifically, it clarifies differences between information-theoretic and computational security, and computational and symbolic models. Furthermore, a survey on computational security models for authenticated key exchange (AKE) and passwordauthenticated key exchange (PAKE) protocols, as the most important and well-studied type of security protocols, is provided.",
"title": ""
},
{
"docid": "e01d5be587c73aaa133acb3d8aaed996",
"text": "This paper presents a new optimization-based method to control three micro-scale magnetic agents operating in close proximity to each other for applications in microrobotics. Controlling multiple magnetic microrobots close to each other is difficult due to magnetic interactions between the agents, and here we seek to control those interactions for the creation of desired multi-agent formations. Our control strategy arises from physics that apply force in the negative direction of states errors. The objective is to regulate the inter-agent spacing, heading and position of the set of agents, for motion in two dimensions, while the system is inherently underactuated. Simulation results on three agents and a proof-of-concept experiment on two agents show the feasibility of the idea to shed light on future micro/nanoscale multi-agent explorations. Average tracking error of less than 50 micrometers and 1.85 degrees is accomplished for the regulation of the inter-agent space and the pair heading angle, respectively, for identical spherical-shape agents with nominal radius less than of 250 micrometers operating within several body-lengths of each other.",
"title": ""
},
{
"docid": "c3f1a534afe9f5c48aac88812a51ab09",
"text": "We propose a novel method MultiModal Pseudo Relevance Feedback (MMPRF) for event search in video, which requires no search examples from the user. Pseudo Relevance Feedback has shown great potential in retrieval tasks, but previous works are limited to unimodal tasks with only a single ranked list. To tackle the event search task which is inherently multimodal, our proposed MMPRF takes advantage of multiple modalities and multiple ranked lists to enhance event search performance in a principled way. The approach is unique in that it leverages not only semantic features, but also non-semantic low-level features for event search in the absence of training data. Evaluated on the TRECVID MEDTest dataset, the approach improves the baseline by up to 158% in terms of the mean average precision. It also significantly contributes to CMU Team's final submission in TRECVID-13 Multimedia Event Detection.",
"title": ""
},
{
"docid": "4d9a4cb23ad4ac56a3fbfece57fb6647",
"text": "Gene therapy refers to a rapidly growing field of medicine in which genes are introduced into the body to treat or prevent diseases. Although a variety of methods can be used to deliver the genetic materials into the target cells and tissues, modified viral vectors represent one of the more common delivery routes because of its transduction efficiency for therapeutic genes. Since the introduction of gene therapy concept in the 1970s, the field has advanced considerably with notable clinical successes being demonstrated in many clinical indications in which no standard treatment options are currently available. It is anticipated that the clinical success the field observed in recent years can drive requirements for more scalable, robust, cost effective, and regulatory-compliant manufacturing processes. This review provides a brief overview of the current manufacturing technologies for viral vectors production, drawing attention to the common upstream and downstream production process platform that is applicable across various classes of viral vectors and their unique manufacturing challenges as compared to other biologics. In addition, a case study of an industry-scale cGMP production of an AAV-based gene therapy product performed at 2,000 L-scale is presented. The experience and lessons learned from this largest viral gene therapy vector production run conducted to date as discussed and highlighted in this review should contribute to future development of commercial viable scalable processes for vial gene therapies.",
"title": ""
},
{
"docid": "f92087a8e81c45cd8bedc12fddd682fc",
"text": "This paper presented a novel power conversion method of realizing the galvanic isolation by dual safety capacitors (Y-cap) instead of conventional transformer. With limited capacitance of the Y capacitor, series resonant is proposed to achieve the power transfer. The basic concept is to control the power path impedance, which blocks the dominant low-frequency part of touch current and let the high-frequency power flow freely. Conceptual analysis, simulation and design considerations are mentioned in this paper. An 85W AC/AC prototype is designed and verified to substitute the isolation transformer of a CCFL LCD TV backlight system. Compared with the conventional transformer isolation, the new method is proved to meet the function and safety requirements of its specification while has higher efficiency and smaller size.",
"title": ""
},
{
"docid": "a88e52b2aff5d30a5b4314d59392910e",
"text": "The design and implementation of a compact monopole antenna with broadband circular polarization is presented in this letter. The proposed antenna consists of a simple C-shaped patch and a modified ground plane with the overall size of 0.33 λ × 0.37 λ. By properly embedding a slit in the C-shaped patch and improving the ground plane with two triangular stubs, the measured broadband 3-dB axial-ratio bandwidth of 104.7% (2.05–6.55 GHz) is obtained, while the measured impedance bandwidth of 106.3% (2.25–7.35 GHz), defined by –10-dB return loss, is achieved. The performance for different parameters is analyzed. The proposed antenna is a good candidate for the application of various wireless communication systems.",
"title": ""
},
{
"docid": "1ef0a2569a1e6a4f17bfdc742ad30a7f",
"text": "Internet of Things (IoT) is becoming more and more popular. Increasingly, European projects (CityPulse, IoT.est, IoT-i and IERC), standard development organizations (ETSI M2M, oneM2M and W3C) and developers are involved in integrating Semantic Web technologies to Internet of Things. All of them design IoT application uses cases which are not necessarily interoperable with each other. The main innovative research challenge is providing a unified system to build interoperable semantic-based IoT applications. In this paper, to overcome this challenge, we design the Semantic Web of Things (SWoT) generator to assist IoT projects and developers in: (1) building interoperable Semantic Web of Things (SWoT) applications by providing interoperable semantic-based IoT application templates, (2) easily inferring high-level abstractions from sensor measurements thanks to the rules provided by the template, (3) designing domain-specific or inter-domain IoT applications thanks to the interoperable domain knowledge provided by the template, and (4) encouraging to reuse as much as possible the background knowledge already designed. We demonstrate the usefulness of our contribution though three use cases: (1) cloud-based IoT developers, (2) mobile application developers, and (3) assisting IoT projects. A proof-of concept for providing Semantic Web of Things application templates is available at http://www.sensormeasurement.appspot.com/?p=m3api.",
"title": ""
},
{
"docid": "0ef2a90669c0469df0dc2281a414cf37",
"text": "Web Intelligence is a direction for scientific research that explores practical applications of Artificial Intelligence to the next generation of Web-empowered systems. In this paper, we present a Web-based intelligent tutoring system for computer programming. The decision making process conducted in our intelligent system is guided by Bayesian networks, which are a formal framework for uncertainty management in Artificial Intelligence based on probability theory. Whereas many tutoring systems are static HTML Web pages of a class textbook or lecture notes, our intelligent system can help a student navigate through the online course materials, recommend learning goals, and generate appropriate reading sequences.",
"title": ""
},
{
"docid": "fe2bc36e704b663c8b9a72e7834e6c7e",
"text": "Driven by deep learning, there has been a surge of specialized processors for matrix multiplication, referred to as Tensor Core Units (TCUs). These TCUs are capable of performing matrix multiplications on small matrices (usually 4× 4 or 16×16) to accelerate the convolutional and recurrent neural networks in deep learning workloads. In this paper we leverage NVIDIA’s TCU to express both reduction and scan with matrix multiplication and show the benefits — in terms of program simplicity, efficiency, and performance. Our algorithm exercises the NVIDIA TCUs which would otherwise be idle, achieves 89%− 98% of peak memory copy bandwidth, and is orders of magnitude faster (up to 100× for reduction and 3× for scan) than state-of-the-art methods for small segment sizes — common in machine learning and scientific applications. Our algorithm achieves this while decreasing the power consumption by up to 22% for reduction and 16% for scan.",
"title": ""
}
] |
scidocsrr
|
fc0dd612e5493af4741bdc4dead85fbe
|
Ensuring Security and Privacy Preservation for Cloud Data Services
|
[
{
"docid": "e2e71d3ba1a2cf1b4f0fa2c5d2bf9a10",
"text": "An important problem in public clouds is how to selectively share documents based on fine-grained attribute-based access control policies (acps). An approach is to encrypt documents satisfying different policies with different keys using a public key cryptosystem such as attribute-based encryption, and/or proxy re-encryption. However, such an approach has some weaknesses: it cannot efficiently handle adding/revoking users or identity attributes, and policy changes; it requires to keep multiple encrypted copies of the same documents; it incurs high computational costs. A direct application of a symmetric key cryptosystem, where users are grouped based on the policies they satisfy and unique keys are assigned to each group, also has similar weaknesses. We observe that, without utilizing public key cryptography and by allowing users to dynamically derive the symmetric keys at the time of decryption, one can address the above weaknesses. Based on this idea, we formalize a new key management scheme, called broadcast group key management (BGKM), and then give a secure construction of a BGKM scheme called ACV-BGKM. The idea is to give some secrets to users based on the identity attributes they have and later allow them to derive actual symmetric keys based on their secrets and some public information. A key advantage of the BGKM scheme is that adding users/revoking users or updating acps can be performed efficiently by updating only some public information. Using our BGKM construct, we propose an efficient approach for fine-grained encryption-based access control for documents stored in an untrusted cloud file storage.",
"title": ""
},
{
"docid": "01209a2ace1a4bc71ad4ff848bb8a3f4",
"text": "For data storage outsourcing services, it is important to allow data owners to efficiently and securely verify that the storage server stores their data correctly. To address this issue, several proof-of-retrievability (POR) schemes have been proposed wherein a storage server must prove to a verifier that all of a client's data are stored correctly. While existing POR schemes offer decent solutions addressing various practical issues, they either have a non-trivial (linear or quadratic) communication complexity, or only support private verification, i.e., only the data owner can verify the remotely stored data. It remains open to design a POR scheme that achieves both public verifiability and constant communication cost simultaneously.\n In this paper, we solve this open problem and propose the first POR scheme with public verifiability and constant communication cost: in our proposed scheme, the message exchanged between the prover and verifier is composed of a constant number of group elements; different from existing private POR constructions, our scheme allows public verification and releases the data owners from the burden of staying online. We achieved these by tailoring and uniquely combining techniques such as constant size polynomial commitment and homomorphic linear authenticators. Thorough analysis shows that our proposed scheme is efficient and practical. We prove the security of our scheme based on the Computational Diffie-Hellman Problem, the Strong Diffie-Hellman assumption and the Bilinear Strong Diffie-Hellman assumption.",
"title": ""
}
] |
[
{
"docid": "438d69760d828fe9f94a68dbd426778e",
"text": "Beginning with the assumption that implicit theories of personality are crucial tools for understanding social behavior, the authors tested the hypothesis that perceivers would process person information that violated their predominant theory in a biased manner. Using an attentional probe paradigm (Experiment 1) and a recognition memory paradigm (Experiment 2), the authors presented entity theorists (who believe that human attributes are fixed) and incremental theorists (who believe that human attributes are malleable) with stereotype-relevant information about a target person that supported or violated their respective theory. Both groups of participants showed evidence of motivated, selective processing only with respect to theory-violating information. In Experiment 3, the authors found that after exposure to theory-violating information, participants felt greater anxiety and worked harder to reestablish their sense of prediction and control mastery. The authors discuss the epistemic functions of implicit theories of personality and the impact of violated assumptions.",
"title": ""
},
{
"docid": "8399ff9241f59ce76937536cc8fc04a4",
"text": "NOTES: Basic EHR adoption requires the EHR system to have at least a basic set of EHR functions, including clinician notes, as defined in Table 2. A certified EHR is EHR technology that has been certified as meeting federal requirements for some or all of the hospital objectives of Meaningful Use. Possession means that the hospital has a legal agreement with the EHR vendor, but is not equivalent to adoption. *Significantly different from previous year (p < 0.05). SOURCE: ONC/American Hospital Association (AHA), AHA Annual Survey Information Technology Supplement",
"title": ""
},
{
"docid": "eca2bfe1b96489e155e19d02f65559d6",
"text": "• Oracle experiment: to understand how well these attributes, when used together, can explain persuasiveness, we train 3 linear SVM regressors, one for each component type, to score an arguments persuasiveness using gold attribute’s as features • Two human annotators who were both native speakers of English were first familiarized with the rubrics and definitions and then trained on five essays • 30 essays were doubly annotated for computing inter-annotator agreement • Each of the remaining essays was annotated by one of the annotators • Score/Class distributions by component type: Give me More Feedback: Annotating Argument Persusiveness and Related Attributes in Student Essays",
"title": ""
},
{
"docid": "9157378112fedfd9959683effe7a0a47",
"text": "Studies indicate that substance use among Ethiopian adolescents is considerably rising; in particular college and university students are the most at risk of substance use. The aim of the study was to assess substance use and associated factors among university students. A cross-sectional survey was carried out among 1040 Haramaya University students using self-administered structured questionnaire. Multistage sampling technique was used to select students. Descriptive statistics, bivariate, and multivariate analysis were done. About two-thirds (62.4%) of the participants used at least one substance. The most commonly used substance was alcohol (50.2%). Being male had strong association with substance use (AOR (95% CI), 3.11 (2.20, 4.40)). The odds of substance use behaviour is higher among third year students (AOR (95% CI), 1.48 (1.01, 2.16)). Being a follower of Muslim (AOR (95% CI), 0.62 (0.44, 0.87)) and Protestant (AOR (95% CI), 0.25 (0.17, 0.36)) religions was shown to be protective of substance use. Married (AOR (95% CI), 1.92 (1.12, 3.30)) and depressed (AOR (95% CI), 3.30 (2.31, 4.72)) students were more likely to use substances than others. The magnitude of substance use was high. This demands special attention, emergency preventive measures, and targeted information, education and communication activity.",
"title": ""
},
{
"docid": "b52a29cd426c5861dbb97aeb91efda4b",
"text": "In recent years, inexact computing has been increasingly regarded as one of the most promising approaches for slashing energy consumption in many applications that can tolerate a certain degree of inaccuracy. Driven by the principle of trading tolerable amounts of application accuracy in return for significant resource savings-the energy consumed, the (critical path) delay, and the (silicon) area-this approach has been limited to application-specified integrated circuits (ASICs) so far. These ASIC realizations have a narrow application scope and are often rigid in their tolerance to inaccuracy, as currently designed; the latter often determining the extent of resource savings we would achieve. In this paper, we propose to improve the application scope, error resilience and the energy savings of inexact computing by combining it with hardware neural networks. These neural networks are fast emerging as popular candidate accelerators for future heterogeneous multicore platforms and have flexible error resilience limits owing to their ability to be trained. Our results in 65-nm technology demonstrate that the proposed inexact neural network accelerator could achieve 1.78-2.67× savings in energy consumption (with corresponding delay and area savings being 1.23 and 1.46×, respectively) when compared to the existing baseline neural network implementation, at the cost of a small accuracy loss (mean squared error increases from 0.14 to 0.20 on average).",
"title": ""
},
{
"docid": "3f23f5452c53ae5fcc23d95acdcdafd8",
"text": "Metamorphism is a technique that mutates the binary code using different obfuscations and never keeps the same sequence of opcodes in the memory. This stealth technique provides the capability to a malware for evading detection by simple signature-based (such as instruction sequences, byte sequences and string signatures) anti-malware programs. In this paper, we present a new scheme named Annotated Control Flow Graph (ACFG) to efficiently detect such kinds of malware. ACFG is built by annotating CFG of a binary program and is used for graph and pattern matching to analyse and detect metamorphic malware. We also optimize the runtime of malware detection through parallelization and ACFG reduction, maintaining the same accuracy (without ACFG reduction) for malware detection. ACFG proposed in this paper: (i) captures the control flow semantics of a program; (ii) provides a faster matching of ACFGs and can handle malware with smaller CFGs, compared with other such techniques, without compromising the accuracy; (iii) contains more information and hence provides more accuracy than a CFG. Experimental evaluation of the proposed scheme using an existing dataset yields malware detection rate of 98.9% and false positive rate of 4.5%.",
"title": ""
},
{
"docid": "d8c367a18d7a8248b0600e3f295d14d3",
"text": "The digital world is growing day by day; many new risks have emerged during the exchange of information around the world; and many ways have evolved to protect the information. In this paper, this paper will conceal information into an image by using three methods that concentrate on the compression of the date before hiding it into the image and then compare the results using Peak Signal to Noise Ratio (PSNR). The three methods that will be used are Least Significant Bit (LSB), Huffman Code, and Arithmetic Coding and then the result will be compared.",
"title": ""
},
{
"docid": "45458f6e7160b32a2e82c76568bfe46a",
"text": "PURPOSE\nTo assess the effectiveness and clinical outcomes of catheter-directed thrombolysis in patients with atresia of the inferior vena cava (IVC) and acute iliofemoral deep vein thrombosis (DVT).\n\n\nMATERIALS AND METHODS\nFrom 2001 to 2009, 11 patients (median age, 32 y) with atresia of the IVC and acute iliofemoral DVT in 13 limbs were admitted for catheter-directed thrombolysis. Through a multiple-side hole catheter inserted in the popliteal vein, continuous pulse-spray infusion of tissue plasminogen activator and heparin was performed. Thrombolysis was terminated when all thrombus was resolved and venous outflow through the paravertebral collateral vessels was achieved. After thrombolysis, all patients received lifelong anticoagulation and compression stockings and were followed up at regular intervals.\n\n\nRESULTS\nUltrasound or computed tomography revealed absence of the suprarenal segment of the IVC in two patients, and nine were diagnosed with absence of the infrarenal segment of the IVC. Median treatment time was 58 hours (range, 42-95 h). No deaths or serious complications occurred. Overall, complications were observed in four patients, one of whom required blood transfusion. Three patients were diagnosed with thrombophilia. Median follow-up was 37 months (range, 51 d to 96 mo). All patients had patent deep veins and one developed reflux in the popliteal fossa after 4 years. No thromboembolic recurrences were observed during follow-up.\n\n\nCONCLUSIONS\nCatheter-directed thrombolysis of patients with acute iliofemoral DVT and atresia of the IVC is a viable treatment option, as reasonable clinical outcomes can be obtained.",
"title": ""
},
{
"docid": "39208755abbd92af643d0e30029f6cc0",
"text": "The biomedical community makes extensive use of text mining technology. In the past several years, enormous progress has been made in developing tools and methods, and the community has been witness to some exciting developments. Although the state of the community is regularly reviewed, the sheer volume of work related to biomedical text mining and the rapid pace in which progress continues to be made make this a worthwhile, if not necessary, endeavor. This chapter provides a brief overview of the current state of text mining in the biomedical domain. Emphasis is placed on the resources and tools available to biomedical researchers and practitioners, as well as the major text mining tasks of interest to the community. These tasks include the recognition of explicit facts from biomedical literature, the discovery of previously unknown or implicit facts, document summarization, and question answering. For each topic, its basic challenges and methods are outlined and recent and influential work is reviewed.",
"title": ""
},
{
"docid": "dbc3355eb2b88432a4bd21d42c090ef1",
"text": "With advancement of technology things are becoming simpler and easier for us. Automatic systems are being preferred over manual system. This unit talks about the basic definitions needed to understand the Project better and further defines the technical criteria to be implemented as a part of this project. Keywords-component; Automation, 8051 microcontroller, LDR, LED, ADC, Relays, LCD display, Sensors, Stepper motor",
"title": ""
},
{
"docid": "69058572e8baaef255a3be6ac9eef878",
"text": "Web developers often want to repurpose interactive behaviors from third-party web pages, but struggle to locate the specific source code that implements the behavior. This task is challenging because developers must find and connect all of the non-local interactions between event-based JavaScript code, declarative CSS styles, and web page content that combine to express the behavior.\n The Scry tool embodies a new approach to locating the code that implements interactive behaviors. A developer selects a page element; whenever the element changes, Scry captures the rendering engine's inputs (DOM, CSS) and outputs (screenshot) for the element. For any two captured element states, Scry can compute how the states differ and which lines of JavaScript code were responsible. Using Scry, a developer can locate an interactive behavior's implementation by picking two output states; Scry indicates the JavaScript code directly responsible for their differences.",
"title": ""
},
{
"docid": "2d6718172b83ef2a109f91791af6a0c3",
"text": "BACKGROUND & AIMS\nWe previously established long-term culture conditions under which single crypts or stem cells derived from mouse small intestine expand over long periods. The expanding crypts undergo multiple crypt fission events, simultaneously generating villus-like epithelial domains that contain all differentiated types of cells. We have adapted the culture conditions to grow similar epithelial organoids from mouse colon and human small intestine and colon.\n\n\nMETHODS\nBased on the mouse small intestinal culture system, we optimized the mouse and human colon culture systems.\n\n\nRESULTS\nAddition of Wnt3A to the combination of growth factors applied to mouse colon crypts allowed them to expand indefinitely. Addition of nicotinamide, along with a small molecule inhibitor of Alk and an inhibitor of p38, were required for long-term culture of human small intestine and colon tissues. The culture system also allowed growth of mouse Apc-deficient adenomas, human colorectal cancer cells, and human metaplastic epithelia from regions of Barrett's esophagus.\n\n\nCONCLUSIONS\nWe developed a technology that can be used to study infected, inflammatory, or neoplastic tissues from the human gastrointestinal tract. These tools might have applications in regenerative biology through ex vivo expansion of the intestinal epithelia. Studies of these cultures indicate that there is no inherent restriction in the replicative potential of adult stem cells (or a Hayflick limit) ex vivo.",
"title": ""
},
{
"docid": "e4069b8312b8a273743b31b12b1dfbae",
"text": "Automatic keyphrase extraction techniques play an important role for many tasks including indexing, categorizing, summarizing, and searching. In this paper, we develop and evaluate an automatic keyphrase extraction system for scientific documents. Compared with previous work, our system concentrates on two important issues: (1) more precise location for potential keyphrases: a new candidate phrase generation method is proposed based on the core word expansion algorithm, which can reduce the size of the candidate set by about 75% without increasing the computational complexity; (2) overlap elimination for the output list: when a phrase and its sub-phrases coexist as candidates, an inverse document frequency feature is introduced for selecting the proper granularity. Additional new features are added for phrase weighting. Experiments based on real-world datasets were carried out to evaluate the proposed system. The results show the efficiency and effectiveness of the refined candidate set and demonstrate that the new features improve the accuracy of the system. The overall performance of our system compares favorably with other state-of-the-art keyphrase extraction systems.",
"title": ""
},
{
"docid": "07db8f037ff720c8b8b242879c14531f",
"text": "PURPOSE\nMatriptase-2 (also known as TMPRSS6) is a critical regulator of the iron-regulatory hormone hepcidin in the liver; matriptase-2 cleaves membrane-bound hemojuvelin and consequently alters bone morphogenetic protein (BMP) signaling. Hemojuvelin and hepcidin are expressed in the retina and play a critical role in retinal iron homeostasis. However, no information on the expression and function of matriptase-2 in the retina is available. The purpose of the present study was to examine the retinal expression of matriptase-2 and its role in retinal iron homeostasis.\n\n\nMETHODS\nRT-PCR, quantitative PCR (qPCR), and immunofluorescence were used to analyze the expression of matriptase-2 and other iron-regulatory proteins in the mouse retina. Polarized localization of matriptase-2 in the RPE was evaluated using markers for the apical and basolateral membranes. Morphometric analysis of retinas from wild-type and matriptase-2 knockout (Tmprss6(msk/msk) ) mice was also performed. Retinal iron status in Tmprss6(msk/msk) mice was evaluated by comparing the expression levels of ferritin and transferrin receptor 1 between wild-type and knockout mice. BMP signaling was monitored by the phosphorylation status of Smads1/5/8 and expression levels of Id1 while interleukin-6 signaling was monitored by the phosphorylation status of STAT3.\n\n\nRESULTS\nMatriptase-2 is expressed in the mouse retina with expression detectable in all retinal cell types. Expression of matriptase-2 is restricted to the apical membrane in the RPE where hemojuvelin, the substrate for matriptase-2, is also present. There is no marked difference in retinal morphology between wild-type mice and Tmprss6(msk/msk) mice, except minor differences in specific retinal layers. The knockout mouse retina is iron-deficient, demonstrable by downregulation of the iron-storage protein ferritin and upregulation of transferrin receptor 1 involved in iron uptake. Hepcidin is upregulated in Tmprss6(msk/msk) mouse retinas, particularly in the neural retina. BMP signaling is downregulated while interleukin-6 signaling is upregulated in Tmprss6(msk/msk) mouse retinas, suggesting that the upregulaton of hepcidin in knockout mouse retinas occurs through interleukin-6 signaling and not through BMP signaling.\n\n\nCONCLUSIONS\nThe iron-regulatory serine protease matriptase-2 is expressed in the retina, and absence of this enzyme leads to iron deficiency and increased expression of hemojuvelin and hepcidin in the retina. The upregulation of hepcidin expression in Tmprss6(msk/msk) mouse retinas does not occur via BMP signaling but likely via the proinflammatory cytokine interleukin-6. We conclude that matriptase-2 is a critical participant in retinal iron homeostasis.",
"title": ""
},
{
"docid": "da8a41e844c519842de524d791527ace",
"text": "Advances in NLP techniques have led to a great demand for tagging and analysis of the sentiments from unstructured natural language data over the last few years. A typical approach to sentiment analysis is to start with a lexicon of positive and negative words and phrases. In these lexicons, entries are tagged with their prior out of context polarity. Unfortunately all efforts found in literature deal mostly with English texts. In this squib, we propose a computational technique of generating an equivalent SentiWordNet (Bengali) from publicly available English Sentiment lexicons and English-Bengali bilingual dictionary. The target language for the present task is Bengali, though the methodology could be replicated for any new language. There are two main lexical resources widely used in English for Sentiment analysis: SentiWordNet (Esuli et. al., 2006) and Subjectivity Word List (Wilson et. al., 2005). SentiWordNet is an automatically constructed lexical resource for English which assigns a positivity score and a negativity score to each WordNet synset. The subjectivity lexicon was compiled from manually developed resources augmented with entries learned from corpora. The entries in the Subjectivity lexicon have been labelled for part of speech (POS) as well as either strong or weak subjective tag depending on reliability of the subjective nature of the entry.",
"title": ""
},
{
"docid": "b2c05f820195154dbbb76ee68740b5d9",
"text": "DeConvNet, Guided BackProp, LRP, were invented to better understand deep neural networks. We show that these methods do not produce the theoretically correct explanation for a linear model. Yet they are used on multi-layer networks with millions of parameters. This is a cause for concern since linear models are simple neural networks. We argue that explanation methods for neural nets should work reliably in the limit of simplicity, the linear models. Based on our analysis of linear models we propose a generalization that yields two explanation techniques (PatternNet and PatternAttribution) that are theoretically sound for linear models and produce improved explanations for deep networks.",
"title": ""
},
{
"docid": "9897f5e64b4a5d6d80fadb96cb612515",
"text": "Deep convolutional neural networks (CNNs) are rapidly becoming the dominant approach to computer vision and a major component of many other pervasive machine learning tasks, such as speech recognition, natural language processing, and fraud detection. As a result, accelerators for efficiently evaluating CNNs are rapidly growing in popularity. The conventional approaches to designing such CNN accelerators is to focus on creating accelerators to iteratively process the CNN layers. However, by processing each layer to completion, the accelerator designs must use off-chip memory to store intermediate data between layers, because the intermediate data are too large to fit on chip. In this work, we observe that a previously unexplored dimension exists in the design space of CNN accelerators that focuses on the dataflow across convolutional layers. We find that we are able to fuse the processing of multiple CNN layers by modifying the order in which the input data are brought on chip, enabling caching of intermediate data between the evaluation of adjacent CNN layers. We demonstrate the effectiveness of our approach by constructing a fused-layer CNN accelerator for the first five convolutional layers of the VGGNet-E network and comparing it to the state-of-the-art accelerator implemented on a Xilinx Virtex-7 FPGA. We find that, by using 362KB of on-chip storage, our fused-layer accelerator minimizes off-chip feature map data transfer, reducing the total transfer by 95%, from 77MB down to 3.6MB per image.",
"title": ""
},
{
"docid": "33ae11cfc67a9afe34483444a03bfd5a",
"text": "In today’s interconnected digital world, targeted attacks have become a serious threat to conventional computer systems and critical infrastructure alike. Many researchers contribute to the fight against network intrusions or malicious software by proposing novel detection systems or analysis methods. However, few of these solutions have a particular focus on Advanced Persistent Threats or similarly sophisticated multi-stage attacks. This turns finding domain-appropriate methodologies or developing new approaches into a major research challenge. To overcome these obstacles, we present a structured review of semantics-aware works that have a high potential for contributing to the analysis or detection of targeted attacks. We introduce a detailed literature evaluation schema in addition to a highly granular model for article categorization. Out of 123 identified papers, 60 were found to be relevant in the context of this study. The selected articles are comprehensively reviewed and assessed in accordance to Kitchenham’s guidelines for systematic literature reviews. In conclusion, we combine new insights and the status quo of current research into the concept of an ideal systemic approach capable of semantically processing and evaluating information from different observation points.",
"title": ""
},
{
"docid": "60e3e47f0c12df306b6686ee358c4155",
"text": "Stroke affects 750,000 people annually, and 80% of stroke survivors are left with weakened limbs and hands. Repetitive hand movement is often used as a rehabilitation technique in order to regain hand movement and strength. In order to facilitate this rehabilitation, a robotic glove was designed to aid in the movement and coordination of gripping exercises. This glove utilizes a cable system to open and close a patients hand. The cables are actuated by servomotors, mounted in a backpack weighing 13.2lbs including battery power sources. The glove can be controlled in terms of finger position and grip force through switch interface, software program, or surface myoelectric (sEMG) signal. The primary control modes of the system provide: active assistance, active resistance and a preprogrammed mode. This project developed a working prototype of the rehabilitative robotic glove which actuates the fingers over a full range of motion across one degree-of-freedom, and is capable of generating a maximum 15N grip force.",
"title": ""
},
{
"docid": "fdd59ff419b9613a1370babe64ef1c98",
"text": "The disentangling problem is to discover multiple complex factors of variations hidden in data. One recent approach is to take a dataset with grouping structure and separately estimate a factor common within a group (content) and a factor specific to each group member (transformation). Notably, this approach can learn to represent a continuous space of contents, which allows for generalization to data with unseen contents. In this study, we aim at cultivating this approach within probabilistic deep generative models. Motivated by technical complication in existing groupbased methods, we propose a simpler probabilistic method, called group-contrastive variational autoencoders. Despite its simplicity, our approach achieves reasonable disentanglement with generalizability for three grouped datasets of 3D object images. In comparison with a previous model, although conventional qualitative evaluation shows little difference, our qualitative evaluation using few-shot classification exhibits superior performances for some datasets. We analyze the content representations from different methods and discuss their transformation-dependency and potential performance impacts.",
"title": ""
}
] |
scidocsrr
|
ef835e0a30f2d7a95ae1b824a2af24f0
|
Scalable and Consistent Radio Map Management Scheme for Participatory Sensing-based Wi-Fi Fingerprinting
|
[
{
"docid": "ebc7f54b969eb491afb7032f6c2a46b6",
"text": "The Wi-Fi fingerprinting (WF) technique normally suffers from the RSS (Received Signal Strength) variance problem caused by environmental changes that are inherent in both the training and localization phases. Several calibration algorithms have been proposed but they only focus on the hardware variance problem. Moreover, smartphones were not evaluated and these are now widely used in WF systems. In this paper, we analyze various aspect of the RSS variance problem when using smartphones for WF: device type, device placement, user direction, and environmental changes over time. To overcome the RSS variance problem, we also propose a smartphone-based, indoor pedestrian-tracking system. The scheme uses the location where the maximum RSS is observed, which is preserved even though RSS varies significantly. We experimentally validate that the proposed system is robust to the RSS variance problem.",
"title": ""
},
{
"docid": "9ad145cd939284ed77919b73452236c0",
"text": "While WiFi-based indoor localization is attractive, the need for a significant degree of pre-deployment effort is a key challenge. In this paper, we ask the question: can we perform indoor localization with no pre-deployment effort? Our setting is an indoor space, such as an office building or a mall, with WiFi coverage but where we do not assume knowledge of the physical layout, including the placement of the APs. Users carrying WiFi-enabled devices such as smartphones traverse this space in normal course. The mobile devices record Received Signal Strength (RSS) measurements corresponding to APs in their view at various (unknown) locations and report these to a localization server. Occasionally, a mobile device will also obtain and report a location fix, say by obtaining a GPS lock at the entrance or near a window. The centerpiece of our work is the EZ Localization algorithm, which runs on the localization server. The key intuition is that all of the observations reported to the server, even the many from unknown locations, are constrained by the physics of wireless propagation. EZ models these constraints and then uses a genetic algorithm to solve them. The results from our deployment in two different buildings are promising. Despite the absence of any explicit pre-deployment calibration, EZ yields a median localization error of 2m and 7m, respectively, in a small building and a large building, which is only somewhat worse than the 0.7m and 4m yielded by the best-performing but calibration-intensive Horus scheme [29] from prior work.",
"title": ""
}
] |
[
{
"docid": "2af524d484b7bb82db2dd92727a49fff",
"text": "Computer-based multimedia learning environments — consisting of pictures (such as animation) and words (such as narration) — offer a potentially powerful venue for improving student understanding. How can we use words and pictures to help people understand how scientific systems work, such as how a lightning storm develops, how the human respiratory system operates, or how a bicycle tire pump works? This paper presents a cognitive theory of multimedia learning which draws on dual coding theory, cognitive load theory, and constructivist learning theory. Based on the theory, principles of instructional design for fostering multimedia learning are derived and tested. The multiple representation principle states that it is better to present an explanation in words and pictures than solely in words. The contiguity principle is that it is better to present corresponding words and pictures simultaneously rather than separately when giving a multimedia explanation. The coherence principle is that multimedia explanations are better understood when they include few rather than many extraneous words and sounds. The modality principle is that it is better to present words as auditory narration than as visual on-screen text. The redundancy principle is that it is better to present animation and narration than to present animation, narration, and on-screen text. By beginning with a cognitive theory of how learners process multimedia information, we have been able to conduct focused research that yields some preliminary principles of instructional design for multimedia messages. 2001 Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "789a49a47e7d1a4096e69a08dcf23b5e",
"text": "Osha Saeed Al Neyadi is a B.Ed graduate from Al Ain Women's College. She now teaches at Asim Bin Thabet Primary School for Boys in Al Markaneya, Al Ain. Introduction This is a study of the effects of using games to practice vocabulary in the teaching of English to young learners. Teaching vocabulary through games was chosen as the focus area for my research for several reasons. Firstly, I observed during the course of many teaching practice placements during my undergraduate studies that new vocabulary in English lessons in UAE schools is mostly taught through the use of flashcards. Secondly, I observed that it is often taught out of context, as isolated words, and thirdly, I noticed that there is minimal variation in the teaching style used in English language teaching in UAE schools. The study was conducted with twenty-nine students in Grade Six in a primary girls' school in in the United Arab Emirates (UAE). According to my observations of how vocabulary is taught in schools, it relies on drilling the vocabulary to get the students to produce the correct pronunciation of words. Other strategies such as implementing games are very occasionally used to teach vocabulary; however, they are only used for a limited time. Using games is considered time consuming, so teachers prefer to use drilling as an immediate way of teaching and practicing vocabulary. In the school where the research was conducted, Arabic is the medium of instruction. In English class, students are encouraged to speak in English when they answer, and while they interact with their classmates. Translation is generally avoided, but it is sometimes used to clarify difficult linguistic concepts, and also to clarify meaning.",
"title": ""
},
{
"docid": "9e05a37d781d8a3ee0ecca27510f1ae9",
"text": "Context: Evidence-based software engineering (EBSE) provides a process for solving practical problems based on a rigorous research approach. The primary focus so far was on mapping and aggregating evidence through systematic reviews. Objectives: We extend existing work on evidence-based software engineering by using the EBSE process in an industrial case to help an organization to improve its automotive testing process. With this we contribute in (1) providing experiences on using evidence based processes to analyze a real world automotive test process; and (2) provide evidence of challenges and related solutions for automotive software testing processes. Methods: In this study we perform an in-depth investigation of an automotive test process using an extended EBSE process including case study research (gain an understanding of practical questions to define a research scope), systematic literature review (identify solutions through systematic literature), and value stream mapping (map out an improved automotive test process based on the current situation and improvement suggestions identified). These are followed by reflections on the EBSE process used. Results: In the first step of the EBSE process we identified 10 challenge areas with a total of 26 individual challenges. For 15 out of those 26 challenges our domain specific systematic literature review identified solutions. Based on the input from the challenges and the solutions, we created a value stream map of the current and future process. Conclusions: Overall, we found that the evidence-based process as presented in this study helps in technology transfer of research results to industry, but at the same time some challenges lie ahead (e.g. scoping systematic reviews to focus more on concrete industry problems, and understanding strategies of conducting EBSE with respect to effort and quality of the evidence).",
"title": ""
},
{
"docid": "74ecab402c19d84194e17f43b379bbaa",
"text": "Alopecia areata (AA) is a common form of non-scarring hair loss of scalp and/or body. Genetic predisposition, autoimmunity, and environmental factors play a major role in the etiopathogenesis of AA. Patchy AA is the most common form. Atopy and autoimmune thyroiditis are most common associated conditions. Peribulbar and intrabulbar lymphocytic inflammatory infiltrate resembling \"swarm of bees\" is characteristic on histopathology. Treatment is mainly focused to contain the disease activity. Corticosteroids are the preferred treatments in form of topical, intralesional, or systemic therapy. Camouflage in the form of wigs may be an alternative option in refractory cases.",
"title": ""
},
{
"docid": "4941250a228f9494480d8dd175490671",
"text": "In machine learning often a tradeoff must be made between accuracy and intelligibility. More accurate models such as boosted trees, random forests, and neural nets usually are not intelligible, but more intelligible models such as logistic regression, naive-Bayes, and single decision trees often have significantly worse accuracy. This tradeoff sometimes limits the accuracy of models that can be applied in mission-critical applications such as healthcare where being able to understand, validate, edit, and trust a learned model is important. We present two case studies where high-performance generalized additive models with pairwise interactions (GA2Ms) are applied to real healthcare problems yielding intelligible models with state-of-the-art accuracy. In the pneumonia risk prediction case study, the intelligible model uncovers surprising patterns in the data that previously had prevented complex learned models from being fielded in this domain, but because it is intelligible and modular allows these patterns to be recognized and removed. In the 30-day hospital readmission case study, we show that the same methods scale to large datasets containing hundreds of thousands of patients and thousands of attributes while remaining intelligible and providing accuracy comparable to the best (unintelligible) machine learning methods.",
"title": ""
},
{
"docid": "23d6e2407335a076526df89355b9c7fe",
"text": "In view of the load balancing problem in VM resources scheduling, this paper presents a scheduling strategy on load balancing of VM resources based on genetic algorithm. According to historical data and current state of the system and through genetic algorithm, this strategy computes ahead the influence it will have on the system after the deployment of the needed VM resources and then chooses the least-affective solution, through which it achieves the best load balancing and reduces or avoids dynamic migration. At the same time, this paper brings in variation rate to describe the load variation of system virtual machines, and it also introduces average load distance to measure the overall load balancing effect of the algorithm. The experiment shows that this strategy has fairly good global astringency and efficiency, and the algorithm of this paper is, to a great extent, able to solve the problems of load imbalance and high migration cost after system VM being scheduled. What is more, the average load distance does not grow with the increase of VM load variation rate, and the system scheduling algorithm has quite good resource utility.",
"title": ""
},
{
"docid": "1f3a3c1d3c452c8d4b69270481b74c56",
"text": "A smart city is a growing phenomenon of the last years with a lot of researches as well as implementation activities. A smart city is an interdisciplinary field that requires a high level of cooperation among experts from different fields and a contribution of the latest technologies in order to achieve the best results in six key areas. The six key areas cover economy, environment, mobility, people, living and governance. Following a system development methodology is in general a necessity for a successful implementation of a system or a project. Smart city projects introduce additionally new challenges. There is a need for cooperation across many fields, from technical or economic through legislation to humanitarian, together with sharing of resources. The traditional Systems Engineering methodologies fail with respect to such challenges. This paper provides an overview of the existing Systems Engineering methodologies and their limitations. A new Hybrid-Agile approach is proposed and its advantages with respect to smart city projects are discussed. However, the approach expects changes in our thinking. Customers (typically municipality or governmental organizations) have to become active and engaged in smart city projects. It is demonstrated that a city cannot be smart without smart government.",
"title": ""
},
{
"docid": "faad414eebea949d944e045f9cec3cf4",
"text": "This note introduces practical set invariance notions for physically interconnected, discrete–time systems, subject to additive but bounded disturbances. The developed approach provides a decentralized, non–conservative and computationally tractable way to study desirable robust positive invariance and stability notions for the overall system as well as to guarantee safe and independent operation of the constituting subsystems. These desirable properties are inherited, under mild assumptions, from the classical stability and invariance properties of the associated vector–valued dynamics which capture in a simple but appropriate and non– conservative way the dynamical behavior induced by the underlying set–dynamics of interest.",
"title": ""
},
{
"docid": "0a648f94b608b57827c8d6ce097037b1",
"text": "The emergence of PV inverter and Electric Vehicles (EVs) has created an increased demand for high power densities and high efficiency in power converters. Silicon carbide (SiC) is the candidate of choice to meet this demand, and it has, therefore, been the object of a growing interest over the past decade. The Boost converter is an essential part in most PV inverters and EVs. This paper presents a new generation of 1200V 20A SiC true MOSFET used in a 10KW hard-switching interleaved Boost converter with high switching frequency up to 100KHZ. It compares thermal and efficiency with Silicon high speed H3 IGBT. In both cases, results show a clear advantage for this new generation SiC MOSFET. Keywords—Silicon Cardbide; MOSFET; Interleaved; Hard Switching; Boost converter; IGBT",
"title": ""
},
{
"docid": "d6c490c24aaa6f3798f31e713441ef72",
"text": "High-level synthesis (HLS) has been gaining traction recently as a design methodology for FPGAs, with the promise of raising the productivity of FPGA hardware designers, and ultimately, opening the door to the use of FPGAs as computing devices targetable by software engineers. In this tutorial, we introduce LegUp, an open-source HLS tool for FPGAs developed at the University of Toronto. With LegUp, a user can compile a C program completely to hardware, or alternately, he/she can choose to compile the program to a hybrid hardware/software system comprising a processor along with one or more accelerators. LegUp supports the synthesis of most of the C language to hardware, including loops, structs, multi-dimensional arrays, pointer arithmetic, and floating point operations. The LegUp distribution includes the CHStone HLS benchmark suite, as well as a test suite and associated infrastructure for measuring quality of results, and for verifying the functionality of LegUp-generated circuits. LegUp is freely downloadable at www.legup.org, providing a powerful platform that can be leveraged for new high-level synthesis research.",
"title": ""
},
{
"docid": "7f6edf82ddbe5b63ba5d36a7d8691dda",
"text": "This paper identifies the possibility of using electronic compasses and accelerometers in mobile phones, as a simple and scalable method of localization without war-driving. The idea is not fundamentally different from ship or air navigation systems, known for centuries. Nonetheless, directly applying the idea to human-scale environments is non-trivial. Noisy phone sensors and complicated human movements present practical research challenges. We cope with these challenges by recording a person's walking patterns, and matching it against possible path signatures generated from a local electronic map. Electronic maps enable greater coverage, while eliminating the reliance on WiFi infrastructure and expensive war-driving. Measurements on Nokia phones and evaluation with real users confirm the anticipated benefits. Results show a location accuracy of less than 11m in regions where today's localization services are unsatisfactory or unavailable.",
"title": ""
},
{
"docid": "5df529aca774edb0eb5ac93c9a0ce3b7",
"text": "The GRASP (Graphical Representations of Algorithms, Structures, and Processes) project, which has successfully prototyped a new algorithmic-level graphical representation for software—the control structure diagram (CSD)—is currently focused on the generation of a new fine-grained complexity metric called the complexity profile graph (CPG). The primary impetus for creation and refinement of the CSD and the CPG is to improve the comprehension efficiency of software and, as a result, improve reliability and reduce costs. The current GRASP release provides automatic CSD generation for Ada 95, C, C++, Java, and Very High-Speed Integrated Circuit Hardware Description Language (VHDL) source code, and CPG generation for Ada 95 source code. The examples and discussion in this article are based on using GRASP with Ada 95.",
"title": ""
},
{
"docid": "277edaaf026e541bc9abc83eaabbecbe",
"text": "In most situations, simple techniques for handling missing data (such as complete case analysis, overall mean imputation, and the missing-indicator method) produce biased results, whereas imputation techniques yield valid results without complicating the analysis once the imputations are carried out. Imputation techniques are based on the idea that any subject in a study sample can be replaced by a new randomly chosen subject from the same source population. Imputation of missing data on a variable is replacing that missing by a value that is drawn from an estimate of the distribution of this variable. In single imputation, only one estimate is used. In multiple imputation, various estimates are used, reflecting the uncertainty in the estimation of this distribution. Under the general conditions of so-called missing at random and missing completely at random, both single and multiple imputations result in unbiased estimates of study associations. But single imputation results in too small estimated standard errors, whereas multiple imputation results in correctly estimated standard errors and confidence intervals. In this article we explain why all this is the case, and use a simple simulation study to demonstrate our explanations. We also explain and illustrate why two frequently used methods to handle missing data, i.e., overall mean imputation and the missing-indicator method, almost always result in biased estimates.",
"title": ""
},
{
"docid": "222b853f23cbcea9794c83c1471273b8",
"text": "Automatic summarisation is a popular approach to reduce a document to its main arguments. Recent research in the area has focused on neural approaches to summarisation, which can be very data-hungry. However, few large datasets exist and none for the traditionally popular domain of scientific publications, which opens up challenging research avenues centered on encoding large, complex documents. In this paper, we introduce a new dataset for summarisation of computer science publications by exploiting a large resource of author provided summaries and show straightforward ways of extending it further. We develop models on the dataset making use of both neural sentence encoding and traditionally used summarisation features and show that models which encode sentences as well as their local and global context perform best, significantly outperforming well-established baseline methods.",
"title": ""
},
{
"docid": "2779fabc9c858ba67fa8be2545cec0f1",
"text": "Abst rac t -A meta-analysis of 32 comparative studies showed that computer-based education has generally had positive effects on the achievement of elementary school pupils. These effects have been different, however, for programs of @line computer-managed instruction (CMI) and for interactive computer-assisted instruction (CAI). The average effect in 28 studies of CAI programs was an increase in pupil achievement scores of O. 47 standard deviations, or from the 50th to the 68th percentile. The average effect in four studies of CMI programs, however, was an increase in scores of only O. 07 standard deviations. Study features were not significantly related to study outcomes.",
"title": ""
},
{
"docid": "3e691cf6055eb564dedca955b816a654",
"text": "Many Internet-based services have already been ported to the mobile-based environment, embracing the new services is therefore critical to deriving revenue for services providers. Based on a valence framework and trust transfer theory, we developed a trust-based customer decision-making model of the non-independent, third-party mobile payment services context. We empirically investigated whether a customer’s established trust in Internet payment services is likely to influence his or her initial trust in mobile payment services. We also examined how these trust beliefs might interact with both positive and negative valence factors and affect a customer’s adoption of mobile payment services. Our SEM analysis indicated that trust indeed had a substantial impact on the cross-environment relationship and, further, that trust in combination with the positive and negative valence determinants directly and indirectly influenced behavioral intention. In addition, the magnitudes of these effects on workers and students were significantly different from each other. 2011 Elsevier B.V. All rights reserved. * Corresponding author. Tel.: +86 27 8755 8100; fax: +86 27 8755 6437. E-mail addresses: [email protected] (Y. Lu), [email protected] (S. Yang), [email protected] (Patrick Y.K. Chau), [email protected] (Y. Cao). 1 Tel.: +86 27 8755 6448. 2 Tel.: +852 2859 1025. 3 Tel.: +86 27 8755 8100.",
"title": ""
},
{
"docid": "dfa482fe44d97e3a3812e35a3964b39c",
"text": "This paper illustrates the use of the recently introduced method of partial directed coherence in approaching how interactions among neural structures change over short time spans that characterize well defined behavioral states. Central to the method is its use of multivariate time series modelling in conjunction with the concept of Granger causality. Simulated neural network models were used to illustrate the technique's power and limitations when dealing with neural spiking data. This was followed by the analysis of multi-unit activity data illustrating dynamical change in the interaction of thalamo-cortical structures in a behaving rat.",
"title": ""
},
{
"docid": "58bc5fb67cfb5e4b623b724cb4283a17",
"text": "In recent years, power systems have been very difficult to manage as the load demands increase and environment constraints restrict the distribution network. One another mode used for distribution of Electrical power is making use of underground cables (generally in urban areas only) instead of overhead distribution network. The use of underground cables arise a problem of identifying the fault location as it is not open to view as in case of overhead network. To improve the reliability of a distribution system, accurate identification of a faulted segment is required in order to reduce the interruption time during fault. Speedy and precise fault location plays an important role in accelerating system restoration, reducing outage time, reducing great financial loss and significantly improving system reliability. The objective of this paper is to study the methods of determining the distance of underground cable fault from the base station in kilometers. Underground cable system is a common practice followed in major urban areas. While a fault occurs for some reason, at that time the repairing process related to that particular cable is difficult due to exact unknown location of the fault in the cable. In this paper, a technique for detecting faults in underground distribution system is presented. Proposed system is used to find out the exact location of the fault and to send an SMS with details to a remote mobile phone using GSM module.",
"title": ""
},
{
"docid": "6b7daba104f8e691dd32cba0b4d66ecd",
"text": "This paper presents the first empirical results to our knowledge on learning synchronous grammars that generate logical forms. Using statistical machine translation techniques, a semantic parser based on a synchronous context-free grammar augmented with λoperators is learned given a set of training sentences and their correct logical forms. The resulting parser is shown to be the bestperforming system so far in a database query domain.",
"title": ""
},
{
"docid": "610769d8ac53d5708f3a699f3f4436f9",
"text": "For modeling the 3D world behind 2D images, which 3D representation is most appropriate? A polygon mesh is a promising candidate for its compactness and geometric properties. However, it is not straightforward to model a polygon mesh from 2D images using neural networks because the conversion from a mesh to an image, or rendering, involves a discrete operation called rasterization, which prevents back-propagation. Therefore, in this work, we propose an approximate gradient for rasterization that enables the integration of rendering into neural networks. Using this renderer, we perform single-image 3D mesh reconstruction with silhouette image supervision and our system outperforms the existing voxel-based approach. Additionally, we perform gradient-based 3D mesh editing operations, such as 2D-to-3D style transfer and 3D DeepDream, with 2D supervision for the first time. These applications demonstrate the potential of the integration of a mesh renderer into neural networks and the effectiveness of our proposed renderer.",
"title": ""
}
] |
scidocsrr
|
7f452369d45c64cece868ccc009e04e6
|
Real-Time Temporal Action Localization in Untrimmed Videos by Sub-Action Discovery
|
[
{
"docid": "ee9c0e79b29fbe647e3e0ccb168532b5",
"text": "We propose an effective approach for spatio-temporal action localization in realistic videos. The approach first detects proposals at the frame-level and scores them with a combination of static and motion CNN features. It then tracks high-scoring proposals throughout the video using a tracking-by-detection approach. Our tracker relies simultaneously on instance-level and class-level detectors. The tracks are scored using a spatio-temporal motion histogram, a descriptor at the track level, in combination with the CNN features. Finally, we perform temporal localization of the action using a sliding-window approach at the track level. We present experimental results for spatio-temporal localization on the UCF-Sports, J-HMDB and UCF-101 action localization datasets, where our approach outperforms the state of the art with a margin of 15%, 7% and 12% respectively in mAP.",
"title": ""
},
{
"docid": "4829d8c0dd21f84c3afbe6e1249d6248",
"text": "We present an action recognition and detection system from temporally untrimmed videos by combining motion and appearance features. Motion and appearance are two kinds of complementary cues for human action understanding from video. For motion features, we adopt the Fisher vector representation with improved dense trajectories due to its rich descriptive capacity. For appearance feature, we choose the deep convolutional neural network activations due to its recent success in image based tasks. With this fused feature of iDT and CNN, we train a SVM classifier for each action class in the one-vs-all scheme. We report both the recognition and detection results of our system on Thumos 14 Challenge. From the results, we see that our method rank 4 in the action recognition task and 2 in the action detection task.",
"title": ""
}
] |
[
{
"docid": "4315cbfa13e9a32288c1857f231c6410",
"text": "The likelihood of soft errors increase with system complexity, reduction in operational voltages, exponential growth in transistors per chip, increases in clock frequencies and device shrinking. As the memory bit-cell area is condensed, single event upset that would have formerly despoiled only a single bit-cell are now proficient of upsetting multiple contiguous memory bit-cells per particle strike. While these error types are beyond the error handling capabilities of the frequently used error correction codes (ECCs) for single bit, the overhead associated with moving to more sophisticated codes for multi-bit errors is considered to be too costly. To address this issue, this paper presents a new approach to detect and correct multi-bit soft error by using Horizontal-Vertical-Double-Bit-Diagonal (HVDD) parity bits with a comparatively low overhead.",
"title": ""
},
{
"docid": "d8cd05b5a187e8bc3eacd8777fb36218",
"text": "In this article we review bony changes resulting from alterations in intracranial pressure (ICP) and the implications for ophthalmologists and the patients for whom we care. Before addressing ophthalmic implications, we will begin with a brief overview of bone remodeling. Bony changes seen with chronic intracranial hypotension and hypertension will be discussed. The primary objective of this review was to bring attention to bony changes seen with chronic intracranial hypotension. Intracranial hypotension skull remodeling can result in enophthalmos. In advanced disease enophthalmos develops to a degree that is truly disfiguring. The most common finding for which subjects are referred is ocular surface disease, related to loss of contact between the eyelids and the cornea. Other abnormalities seen include abnormal ocular motility and optic atrophy. Recognition of such changes is important to allow for diagnosis and treatment prior to advanced clinical deterioration. Routine radiographic assessment of bony changes may allow for the identification of patient with abnormal ICP prior to the development of clinically significant disease.",
"title": ""
},
{
"docid": "a24b4546eb2da7ce6ce70f45cd16e07d",
"text": "This paper examines the state of the art in mobile clinical and health-related apps. A 2012 estimate puts the number of health-related apps at no fewer than 40,000, as healthcare professionals and consumers continue to express concerns about the quality of many apps, calling for some form of app regulatory control or certification to be put in place. We describe the range of apps on offer as of 2013, and then present a brief survey of evaluation studies of medical and health-related apps that have been conducted to date, covering a range of clinical disciplines and topics. Our survey includes studies that highlighted risks, negative issues and worrying deficiencies in existing apps. We discuss the concept of 'apps as a medical device' and the relevant regulatory controls that apply in USA and Europe, offering examples of apps that have been formally approved using these mechanisms. We describe the online Health Apps Library run by the National Health Service in England and the calls for a vetted medical and health app store. We discuss the ingredients for successful apps beyond the rather narrow definition of 'apps as a medical device'. These ingredients cover app content quality, usability, the need to match apps to consumers' general and health literacy levels, device connectivity standards (for apps that connect to glucometers, blood pressure monitors, etc.), as well as app security and user privacy. 'Happtique Health App Certification Program' (HACP), a voluntary app certification scheme, successfully captures most of these desiderata, but is solely focused on apps targeting the US market. HACP, while very welcome, is in ways reminiscent of the early days of the Web, when many \"similar\" quality benchmarking tools and codes of conduct for information publishers were proposed to appraise and rate online medical and health information. It is probably impossible to rate and police every app on offer today, much like in those early days of the Web, when people quickly realised the same regarding informational Web pages. The best first line of defence was, is, and will always be to educate consumers regarding the potentially harmful content of (some) apps.",
"title": ""
},
{
"docid": "293e1834eef415f08e427a41e78d818f",
"text": "Autonomous robots are complex systems that require the interaction between numerous heterogeneous components (software and hardware). Because of the increase in complexity of robotic applications and the diverse range of hardware, robotic middleware is designed to manage the complexity and heterogeneity of the hardware and applications, promote the integration of new technologies, simplify software design, hide the complexity of low-level communication and the sensor heterogeneity of the sensors, improve software quality, reuse robotic software infrastructure across multiple research efforts, and to reduce production costs. This paper presents a literature survey and attribute-based bibliography of the current state of the art in robotic middleware design. The main aim of the survey is to assist robotic middleware researchers in evaluating the strengths and weaknesses of current approaches and their appropriateness for their applications. Furthermore, we provide a comprehensive set of appropriate bibliographic references that are classified based on middleware attributes.",
"title": ""
},
{
"docid": "84a2d26a0987a79baf597508543f39b6",
"text": "In order to recommend products to users we must ultimately predict how a user will respond to a new product. To do so we must uncover the implicit tastes of each user as well as the properties of each product. For example, in order to predict whether a user will enjoy Harry Potter, it helps to identify that the book is about wizards, as well as the user's level of interest in wizardry. User feedback is required to discover these latent product and user dimensions. Such feedback often comes in the form of a numeric rating accompanied by review text. However, traditional methods often discard review text, which makes user and product latent dimensions difficult to interpret, since they ignore the very text that justifies a user's rating. In this paper, we aim to combine latent rating dimensions (such as those of latent-factor recommender systems) with latent review topics (such as those learned by topic models like LDA). Our approach has several advantages. Firstly, we obtain highly interpretable textual labels for latent rating dimensions, which helps us to `justify' ratings with text. Secondly, our approach more accurately predicts product ratings by harnessing the information present in review text; this is especially true for new products and users, who may have too few ratings to model their latent factors, yet may still provide substantial information from the text of even a single review. Thirdly, our discovered topics can be used to facilitate other tasks such as automated genre discovery, and to identify useful and representative reviews.",
"title": ""
},
{
"docid": "3a920687e57591c1abfaf10b691132a7",
"text": "BP3TKI Palembang is the government agencies that coordinate, execute and selection of prospective migrants registration and placement. To simplify the existing procedures and improve decision-making is necessary to build a decision support system (DSS) to determine eligibility for employment abroad by applying Fuzzy Multiple Attribute Decision Making (FMADM), using the linear sequential systems development methods. The system is built using Microsoft Visual Basic. Net 2010 and SQL Server 2008 database. The design of the system using use case diagrams and class diagrams to identify the needs of users and systems as well as systems implementation guidelines. Decision Support System which is capable of ranking the dihasialkan to prospective migrants, making it easier for parties to take keputusna BP3TKI the workers who will be flown out of the country.",
"title": ""
},
{
"docid": "359d76f0b4f758c3a58e886e840c5361",
"text": "Cover crops are important components of sustainable agricultural systems. They increase surface residue and aid in the reduction of soil erosion. They improve the structure and water-holding capacity of the soil and thus increase the effectiveness of applied N fertilizer. Legume cover crops such as hairy vetch and crimson clover fix nitrogen and contribute to the nitrogen requirements of subsequent crops. Cover crops can also suppress weeds, provide suitable habitat for beneficial predator insects, and act as non-host crops for nematodes and other pests in crop rotations. This paper reviews the agronomic and economic literature on using cover crops in sustainable food production and reports on past and present research on cover crops and sustainable agriculture at the Beltsville Agricultural Research Center, Maryland. Previous studies suggested that the profitability of cover crops is primarily the result of enhanced crop yields rather than reduced input costs. The experiments at the Beltsville Agricultural Research Center on fresh-market tomato production showed that tomatoes grown with hairy vetch mulch were higher yielding and more profitable than those grown with black polyethylene and no mulch system. Previous studies of cover crops in grain production indicated that legume cover crops such as hairy vetch and crimson clover are more profitable than grass cover crops such as rye or wheat because of the ability of legumes to contribute N to the following crop. A com-",
"title": ""
},
{
"docid": "e0ff61d4b5361c3e2b39265310d02b85",
"text": "This paper presents an adaptive technique for obtaining centers of the hidden layer neurons of radial basis function neural network (RBFNN) for face recognition. The proposed technique uses firefly algorithm to obtain natural sub-clusters of training face images formed due to variations in pose, illumination, expression and occlusion, etc. Movement of fireflies in a hyper-dimensional input space is controlled by tuning the parameter gamma (γ) of firefly algorithm which plays an important role in maintaining the trade-off between effective search space exploration, firefly convergence, overall computational time and the recognition accuracy. The proposed technique is novel as it combines the advantages of evolutionary firefly algorithm and RBFNN in adaptive evolution of number and centers of hidden neurons. The strength of the proposed technique lies in its fast convergence, improved face recognition performance, reduced feature selection overhead and algorithm stability. The proposed technique is validated using benchmark face databases, namely ORL, Yale, AR and LFW. The average face recognition accuracies achieved using proposed algorithm for the above face databases outperform some of the existing techniques in face recognition.",
"title": ""
},
{
"docid": "4f0e454b8274636c56a1617668f08eed",
"text": "Mobile devices are an important part of our everyday lives, and the Android platform has become a market leader. In recent years a number of approaches for Android malware detection have been proposed, using permissions, source code analysis, or dynamic analysis. In this paper, we propose to use a probabilistic discriminative model based on regularized logistic regression for Android malware detection. Through extensive experimental evaluation, we demonstrate that it can generate probabilistic outputs with highly accurate classification results. In particular, we propose to use Android API calls as features extracted from decompiled source code, and analyze and explore issues in feature granularity, feature representation, feature selection, and regularization. We show that the probabilistic discriminative model also works well with permissions, and substantially outperforms the state-of-the-art methods for Android malware detection with application permissions. Furthermore, the discriminative learning model achieves the best detection results by combining both decompiled source code and application permissions. To the best of our knowledge, this is the first research that proposes probabilistic discriminative model for Android malware detection with a thorough study of desired representation of decompiled source code and is the first research work for Android malware detection task that combines both analysis of decompiled source code and application permissions.",
"title": ""
},
{
"docid": "5b134fae94a5cc3a2e1b7cc19c5d29e5",
"text": "We explore making virtual desktops behave in a more physically realistic manner by adding physics simulation and using piling instead of filing as the fundamental organizational structure. Objects can be casually dragged and tossed around, influenced by physical characteristics such as friction and mass, much like we would manipulate lightweight objects in the real world. We present a prototype, called BumpTop, that coherently integrates a variety of interaction and visualization techniques optimized for pen input we have developed to support this new style of desktop organization.",
"title": ""
},
{
"docid": "34c3ba06f9bffddec7a08c8109c7f4b9",
"text": "The role of e-learning technologies entirely depends on the acceptance and execution of required-change in the thinking and behaviour of the users of institutions. The research are constantly reporting that many e-learning projects are falling short of their objectives due to many reasons but on the top is the user resistance to change according to the digital requirements of new era. It is argued that the suitable way for change management in e-learning environment is the training and persuading of users with a view to enhance their digital literacy and thus gradually changing the users’ attitude in positive direction. This paper discusses change management in transition to e-learning system considering pedagogical, cost and technical implications. It also discusses challenges and opportunities for integrating these technologies in higher learning institutions with examples from Turkey GATA (Gülhane Askeri Tıp Akademisi-Gülhane Military Medical Academy).",
"title": ""
},
{
"docid": "851a966bbfee843e5ae1eaf21482ef87",
"text": "The Pittsburgh Sleep Quality Index (PSQI) is a widely used measure of sleep quality in adolescents, but information regarding its psychometric strengths and weaknesses in this population is limited. In particular, questions remain regarding whether it measures one or two sleep quality domains. The aims of the present study were to (a) adapt the PSQI for use in adolescents and young adults, and (b) evaluate the psychometric properties of the adapted measure in this population. The PSQI was slightly modified to make it more appropriate for use in youth populations and was translated into Spanish for administration to the sample population available to the study investigators. It was then administered with validity criterion measures to a community-based sample of Spanish adolescents and young adults (AYA) between 14 and 24 years old (N = 216). The results indicated that the questionnaire (AYA-PSQI-S) assesses a single factor. The total score evidenced good convergent and divergent validity and moderate reliability (Cronbach's alpha = .72). The AYA-PSQI-S demonstrates adequate psychometric properties for use in clinical trials involving adolescents and young adults. Additional research to further evaluate the reliability and validity of the measure for use in clinical settings is warranted.",
"title": ""
},
{
"docid": "10da9f0fd1be99878e280d261ea81ba3",
"text": "The fuzzy vault scheme is a cryptographic primitive being considered for storing fingerprint minutiae protected. A well-known problem of the fuzzy vault scheme is its vulnerability against correlation attack -based cross-matching thereby conflicting with the unlinkability requirement and irreversibility requirement of effective biometric information protection. Yet, it has been demonstrated that in principle a minutiae-based fuzzy vault can be secured against the correlation attack by passing the to-beprotected minutiae through a quantization scheme. Unfortunately, single fingerprints seem not to be capable of providing an acceptable security level against offline attacks. To overcome the aforementioned security issues, this paper shows how an implementation for multiple fingerprints can be derived on base of the implementation for single finger thereby making use of a Guruswami-Sudan algorithm-based decoder for verification. The implementation, of which public C++ source code can be downloaded, is evaluated for single and various multi-finger settings using the MCYTFingerprint-100 database and provides security enhancing features such as the possibility of combination with password and a slow-down mechanism.",
"title": ""
},
{
"docid": "782c8958fa9107b8d1087fe0c79de6ee",
"text": "Credit evaluation is one of the most important and difficult tasks for credit card companies, mortgage companies, banks and other financial institutes. Incorrect credit judgement causes huge financial losses. This work describes the use of an evolutionary-fuzzy system capable of classifying suspicious and non-suspicious credit card transactions. The paper starts with the details of the system used in this work. A series of experiments are described, showing that the complete system is capable of attaining good accuracy and intelligibility levels for real data.",
"title": ""
},
{
"docid": "36776b1372e745f683ca66e7c4421a76",
"text": "This paper presents the analyzed results of rotational torque and suspension force in a bearingless motor with the short-pitch winding, which are based on the computation by finite element method (FEM). The bearingless drive technique is applied to a conventional brushless DC motor, in which the stator windings are arranged at the short-pitch, and encircle only a single stator tooth. At first, the winding arrangement in the stator core, the principle of suspension force generation and the magnetic suspension control method are shown in the bearingless motor with brushless DC structure. The torque and suspension force are computed by FEM using a machine model with the short-pitch winding arrangement, and the computed results are compared between the full-pitch and short-pitch winding arrangements. The advantages of short-pitch winding arrangement are found on the basis of computed results and discussion.",
"title": ""
},
{
"docid": "d18a636768e6aea2e84c7fc59593ec89",
"text": "Enterprise social networking (ESN) techniques have been widely adopted by firms to provide a platform for public communication among employees. This study investigates how the relationships between stressors (i.e., challenge and hindrance stressors) and employee innovation are moderated by task-oriented and relationship-oriented ESN use. Since challenge-hindrance stressors and employee innovation are individual-level variables and task-oriented ESN use and relationship-oriented ESN use are team-level variables, we thus use hierarchical linear model to test this cross-level model. The results of a survey of 191 employees in 50 groups indicate that two ESN use types differentially moderate the relationship between stressors and employee innovation. Specifically, task-oriented ESN use positively moderates the effects of the two stressors on employee innovation, while relationship-oriented ESN use negatively moderates the relationship between the two stressors and employee innovation. In addition, we find that challenge stressors significantly improve employee innovation. Theoretical and practical implications are discussed.",
"title": ""
},
{
"docid": "73ec43c5ed8e245d0a1ff012a6a67f76",
"text": "HERE IS MUCH signal processing devoted to detection and estimation. Detection is the task of detetmitdng if a specific signal set is pteaettt in an obs&tion, whflc estimation is the task of obtaining the va.iues of the parameters derriblng the signal. Often the s@tal is complicated or is corrupted by interfeting signals or noise To facilitate the detection and estimation of signal sets. the obsenation is decomposed by a basis set which spans the signal space [ 1) For many problems of engineering interest, the class of aigttlls being sought are periodic which leads quite natuallv to a decomposition by a basis consistittg of simple petiodic fun=tions, the sines and cosines. The classic Fourier tran.,fot,,, h the mechanism by which we M able to perform this decomposttmn. BY necessity, every observed signal we pmmust be of finite extent. The extent may be adjustable and Axtable. but it must be fire. Proces%ng a fiite-duration observation ~POSCS mteresting and interacting considentior,s on the hamomc analysic rhese consldentions include detectability of tones in the Presence of nearby strong tones, rcoohability of similarstrength nearby tones, tesolvability of Gxifting tona, and biases in estimating the parameten of my of the alonmenhoned signals. For practicality, the data we pare N unifomdy spaced samples of the obsetvcd signal. For convenience. N is highJy composite, and we will zwtme N is evett. The harmottic estm~afes we obtain UtmugJt the discrae Fowie~ tmnsfotm (DFT) arc N mifcwmly spaced samples of the asaciated periodic spectra. This approach in elegant and attnctive when the proce~ scheme is cast as a spectral decomposition in an N-dimensional orthogonal vector space 121. Unfottunately, in mmY practical situations, to obtain meaningful results this elegance must be compmmised. One such t=O,l;..,Nl.N.N+l.",
"title": ""
},
{
"docid": "295212e614cc361b1a5fdd320d39f68b",
"text": "Aiming to meet the explosive growth of mobile data traffic and reduce the network congestion, we study Time Dependent Adaptive Pricing (TDAP) with threshold policies to motivate users to shift their Internet access from peak hours to off-peak hours. With the proposed TDAP scheme, Internet Service Providers (ISPs) will be able to use less network capacity to provide users Internet access service with the same QoS. Simulation and analysis are carried out to investigate the performance of the proposed TDAP scheme based on the real Internet traffic pattern.",
"title": ""
},
{
"docid": "d6a6ee23cd1d863164c79088f75ece30",
"text": "In our work, 3D objects classification has been dealt with convolutional neural networks which is a common paradigm recently in image recognition. In the first phase of experiments, 3D models in ModelNet10 and ModelNet40 data sets were voxelized and scaled with certain parameters. Classical CNN and 3D Dense CNN architectures were designed for training the pre-processed data. In addition, the two trained CNNs were ensembled and the results of them were observed. A success rate of 95.37% achieved on ModelNet10 by using 3D dense CNN, a success rate of 91.24% achieved with ensemble of two CNNs on ModelNet40.",
"title": ""
},
{
"docid": "7279065640e6f2b7aab7a6e91118e0d5",
"text": "Erythrocyte injury such as osmotic shock, oxidative stress or energy depletion stimulates the formation of prostaglandin E2 through activation of cyclooxygenase which in turn activates a Ca2+ permeable cation channel. Increasing cytosolic Ca2+ concentrations activate Ca2+ sensitive K+ channels leading to hyperpolarization, subsequent loss of KCl and (further) cell shrinkage. Ca2+ further stimulates a scramblase shifting phosphatidylserine from the inner to the outer cell membrane. The scramblase is sensitized for the effects of Ca2+ by ceramide which is formed by a sphingomyelinase following several stressors including osmotic shock. The sphingomyelinase is activated by platelet activating factor PAF which is released by activation of phospholipase A2. Phosphatidylserine at the erythrocyte surface is recognised by macrophages which engulf and degrade the affected cells. Moreover, phosphatidylserine exposing erythrocytes may adhere to the vascular wall and thus interfere with microcirculation. Erythrocyte shrinkage and phosphatidylserine exposure ('eryptosis') mimic features of apoptosis in nucleated cells which however, involves several mechanisms lacking in erythrocytes. In kidney medulla, exposure time is usually too short to induce eryptosis despite high osmolarity. Beyond that high Cl- concentrations inhibit the cation channel and high urea concentrations the sphingomyelinase. Eryptosis is inhibited by erythropoietin which thus extends the life span of circulating erythrocytes. Several conditions trigger premature eryptosis thus favouring the development of anemia. On the other hand, eryptosis may be a mechanism of defective erythrocytes to escape hemolysis. Beyond their significance for erythrocyte survival and death the mechanisms involved in 'eryptosis' may similarly contribute to apoptosis of nucleated cells.",
"title": ""
}
] |
scidocsrr
|
ed6afeb80b8b3da85c6d8fa09b6871a3
|
Using Pivots to Speed-Up k-Medoids Clustering
|
[
{
"docid": "1c5f53fe8d663047a3a8240742ba47e4",
"text": "Spatial data mining is the discovery of interesting relationships and characteristics that may exist implicitly in spatial databases. In this paper, we explore whether clustering methods have a role to play in spatial data mining. To this end, we develop a new clustering method called CLAHANS which is based on randomized search. We also develop two spatial data mining algorithms that use CLAHANS. Our analysis and experiments show that with the assistance of CLAHANS, these two algorithms are very effective and can lead to discoveries that are difficult to find with current spatial data mining algorithms. Furthermore, experiments conducted to compare the performance of CLAHANS with that of existing clustering methods show that CLAHANS is the most efficient.",
"title": ""
}
] |
[
{
"docid": "674339928a16b372fb13395f920561e5",
"text": "High-speed, high-efficiency photodetectors play an important role in optical communication links that are increasingly being used in data centres to handle higher volumes of data traffic and higher bandwidths, as big data and cloud computing continue to grow exponentially. Monolithic integration of optical components with signal-processing electronics on a single silicon chip is of paramount importance in the drive to reduce cost and improve performance. We report the first demonstration of microand nanoscale holes enabling light trapping in a silicon photodiode, which exhibits an ultrafast impulse response (full-width at half-maximum) of 30 ps and a high efficiency of more than 50%, for use in data-centre optical communications. The photodiode uses microand nanostructured holes to enhance, by an order of magnitude, the absorption efficiency of a thin intrinsic layer of less than 2 μm thickness and is designed for a data rate of 20 gigabits per second or higher at a wavelength of 850 nm. Further optimization can improve the efficiency to more than 70%.",
"title": ""
},
{
"docid": "590a44ab149b88e536e67622515fdd08",
"text": "Chitosan is considered to be one of the most promising and applicable materials in adsorption applications. The existence of amino and hydroxyl groups in its molecules contributes to many possible adsorption interactions between chitosan and pollutants (dyes, metals, ions, phenols, pharmaceuticals/drugs, pesticides, herbicides, etc.). These functional groups can help in establishing positions for modification. Based on the learning from previously published works in literature, researchers have achieved a modification of chitosan with a number of different functional groups. This work summarizes the published works of the last three years (2012-2014) regarding the modification reactions of chitosans (grafting, cross-linking, etc.) and their application to adsorption of different environmental pollutants (in liquid-phase).",
"title": ""
},
{
"docid": "7eebeb133a9881e69bf3c367b9e20751",
"text": "Advanced driver assistance systems or highly automated driving systems for lane change maneuvers are expected to enhance highway traffic safety, transport efficiency, and driver comfort. To extend the capability of current advanced driver assistance systems, and eventually progress to highly automated highway driving, the task of automatically determine if, when, and how to perform a lane change maneuver, is essential. This paper thereby presents a low-complexity lane change maneuver algorithm which determines whether a lane change maneuver is desirable, and if so, selects an appropriate inter-vehicle traffic gap and time instance to perform the maneuver, and calculates the corresponding longitudinal and lateral control trajectory. The ability of the proposed lane change maneuver algorithm to make appropriate maneuver decisions and generate smooth and safe lane change trajectories in various traffic situations is demonstrated by simulation and experimental results.",
"title": ""
},
{
"docid": "b56cd1e9392976f48dddf7d3a60c5aef",
"text": "This paper presents a novel single-switch converter with high voltage gain and low voltage stress for photovoltaic applications. The proposed converter is composed of coupled-inductor and switched-capacitor techniques to achieve high step-up conversion ratio without adopting extremely high duty ratio or high turns ratio. The capacitors are charged in parallel and discharged in series by the coupled inductor to achieve high step-up voltage gain with an appropriate duty ratio. Besides, the voltage stress on the main switch is reduced with a passive clamp circuit, and the conduction losses are reduced. In addition, the reverse-recovery problem of the diode is alleviated by a coupled inductor. Thus, the efficiency can be further improved. The operating principle, steady state analysis and design of the proposed single switch converter with high step-up gain is carried out. A 24 V input voltage, 400 V output, and 300W maximum output power integrated converter is designed and analysed using MATLAB simulink. Simulation result proves the performance and functionality of the proposed single switch DC-DC converter for validation.",
"title": ""
},
{
"docid": "7db555e42bff7728edb8fb199f063cba",
"text": "The need for more post-secondary students to major and graduate in STEM fields is widely recognized. Students' motivation and strategic self-regulation have been identified as playing crucial roles in their success in STEM classes. But, how students' strategy use, self-regulation, knowledge building, and engagement impact different learning outcomes is not well understood. Our goal in this study was to investigate how motivation, strategic self-regulation, and creative competency were associated with course achievement and long-term learning of computational thinking knowledge and skills in introductory computer science courses. Student grades and long-term retention were positively associated with self-regulated strategy use and knowledge building, and negatively associated with lack of regulation. Grades were associated with higher study effort and knowledge retention was associated with higher study time. For motivation, higher learning- and task-approach goal orientations, endogenous instrumentality, and positive affect and lower learning-, task-, and performance-avoid goal orientations, exogenous instrumentality and negative affect were associated with higher grades and knowledge retention and also with strategic self-regulation and engagement. Implicit intelligence beliefs were associated with strategic self-regulation, but not grades or knowledge retention. Creative competency was associated with knowledge retention, but not grades, and with higher strategic self-regulation. Implications for STEM education are discussed.",
"title": ""
},
{
"docid": "2a7bd6fbce4fef6e319664090755858d",
"text": "AIM\nThis paper is a report of a study conducted to determine which occupational stressors are present in nurses' working environment; to describe and compare occupational stress between two educational groups of nurses; to estimate which stressors and to what extent predict nurses' work ability; and to determine if educational level predicts nurses' work ability.\n\n\nBACKGROUND\nNurses' occupational stress adversely affects their health and nursing quality. Higher educational level has been shown to have positive effects on the preservation of good work ability.\n\n\nMETHOD\nA cross-sectional study was conducted in 2006-2007. Questionnaires were distributed to a convenience sample of 1392 (59%) nurses employed at four university hospitals in Croatia (n = 2364). The response rate was 78% (n = 1086). Data were collected using the Occupational Stress Assessment Questionnaire and Work Ability Index Questionnaire.\n\n\nFINDINGS\nWe identified six major groups of occupational stressors: 'Organization of work and financial issues', 'public criticism', 'hazards at workplace', 'interpersonal conflicts at workplace', 'shift work' and 'professional and intellectual demands'. Nurses with secondary school qualifications perceived Hazards at workplace and Shift work as statistically significantly more stressful than nurses a with college degree. Predictors statistically significantly related with low work ability were: Organization of work and financial issues (odds ratio = 1.69, 95% confidence interval 122-236), lower educational level (odds ratio = 1.69, 95% confidence interval 122-236) and older age (odds ratio = 1.07, 95% confidence interval 1.05-1.09).\n\n\nCONCLUSION\nHospital managers should develop strategies to address and improve the quality of working conditions for nurses in Croatian hospitals. Providing educational and career prospects can contribute to decreasing nurses' occupational stress levels, thus maintaining their work ability.",
"title": ""
},
{
"docid": "e31901738e78728a7376457f7d1acd26",
"text": "Feature selection plays a critical role in biomedical data mining, driven by increasing feature dimensionality in target problems and growing interest in advanced but computationally expensive methodologies able to model complex associations. Specifically, there is a need for feature selection methods that are computationally efficient, yet sensitive to complex patterns of association, e.g. interactions, so that informative features are not mistakenly eliminated prior to downstream modeling. This paper focuses on Relief-based algorithms (RBAs), a unique family of filter-style feature selection algorithms that have gained appeal by striking an effective balance between these objectives while flexibly adapting to various data characteristics, e.g. classification vs. regression. First, this work broadly examines types of feature selection and defines RBAs within that context. Next, we introduce the original Relief algorithm and associated concepts, emphasizing the intuition behind how it works, how feature weights generated by the algorithm can be interpreted, and why it is sensitive to feature interactions without evaluating combinations of features. Lastly, we include an expansive review of RBA methodological research beyond Relief and its popular descendant, ReliefF. In particular, we characterize branches of RBA research, and provide comparative summaries of RBA algorithms including contributions, strategies, functionality, time complexity, adaptation to key data characteristics, and software availability.",
"title": ""
},
{
"docid": "0a5ae1eb45404d6a42678e955c23116c",
"text": "This study assessed the validity of the Balance Scale by examining: how Scale scores related to clinical judgements and self-perceptions of balance, laboratory measures of postural sway and external criteria reflecting balancing ability; if scores could predict falls in the elderly; and how they related to motor and functional performance in stroke patients. Elderly residents (N = 113) were assessed for functional performance and balance regularly over a nine-month period. Occurrence of falls was monitored for a year. Acute stroke patients (N = 70) were periodically rated for functional independence, motor performance and balance for over three months. Thirty-one elderly subjects were assessed by clinical and laboratory indicators reflecting balancing ability. The Scale correlated moderately with caregiver ratings, self-ratings and laboratory measures of sway. Differences in mean Scale scores were consistent with the use of mobility aids by elderly residents and differentiated stroke patients by location of follow-up. Balance scores predicted the occurrence of multiple falls among elderly residents and were strongly correlated with functional and motor performance in stroke patients.",
"title": ""
},
{
"docid": "fcfe75abfde3edbf051ccb78387c3904",
"text": "In this paper a Fuzzy Logic Controller (FLC) for path following of a four-wheel differentially skid steer mobile robot is presented. Fuzzy velocity and fuzzy torque control of the mobile robot is compared with classical controllers. To assess controllers robot kinematics and dynamics are simulated with parameters of P2-AT mobile robot. Results demonstrate the better performance of fuzzy logic controllers in following a predefined path.",
"title": ""
},
{
"docid": "54001ce62d0b571be9fbaf0980aa1b70",
"text": "Due to the large increase of malware samples in the last 10 years, the demand of the antimalware industry for an automated classifier has increased. However, this classifier has to satisfy two restrictions in order to be used in real life situations: high detection rate and very low number of false positives. By modifying the perceptron algorithm and combining existing features, we were able to provide a good solution to the problem, called the one side perceptron. Since the power of the perceptron lies in its features, we will focus our study on improving the feature creation algorithm. This paper presents different methods, including simple mathematical operations and the usage of a restricted Boltzmann machine, for creating features designed for an increased detection rate of the one side perceptron. The analysis is carried out using a large dataset of approximately 3 million files.",
"title": ""
},
{
"docid": "d32887dfac583ed851f607807c2f624e",
"text": "For a through-wall ultrawideband (UWB) random noise radar using array antennas, subtraction of successive frames of the cross-correlation signals between each received element signal and the transmitted signal is able to isolate moving targets in heavy clutter. Images of moving targets are subsequently obtained using the back projection (BP) algorithm. This technique is not constrained to noise radar, but can also be applied to other kinds of radar systems. Different models based on the finite-difference time-domain (FDTD) algorithm are set up to simulate different through-wall scenarios of moving targets. Simulation results show that the heavy clutter is suppressed, and the signal-to-clutter ratio (SCR) is greatly enhanced using this approach. Multiple moving targets can be detected, localized, and tracked for any random movement.",
"title": ""
},
{
"docid": "44402fdc3c9f2c6efaf77a00035f38ad",
"text": "A multi-objective optimization strategy to find optimal designs of composite multi-rim flywheel rotors is presented. Flywheel energy storage systems have been expanding into applications such as rail and automotive transportation, where the construction volume is limited. Common flywheel rotor optimization approaches for these applications are single-objective, aiming to increase the stored energy or stored energy density. The proposed multi-objective optimization offers more information for decision-makers optimizing three objectives separately: stored energy, cost and productivity. A novel approach to model the manufacturing of multi-rim composite rotors facilitates the consideration of manufacturing cost and time within the optimization. An analytical stress calculation for multi-rim rotors is used, which also takes interference fits and residual stresses into account. Constrained by a failure prediction based on the Maximum Strength, Maximum Strain and Tsai-Wu criterion, the discrete and nonlinear optimization was solved. A hybrid optimization strategy is presented that combines a genetic algorithm with a local improvement executed by a sequential quadratic program. The problem was solved for two rotor geometries used for light rail transit applications showing similar design results as in industry.",
"title": ""
},
{
"docid": "9f9268761bd2335303cfe2797d7e9eaa",
"text": "CYBER attacks have risen in recent times. The attack on Sony Pictures by hackers, allegedly from North Korea, has caught worldwide attention. The President of the United States of America issued a statement and “vowed a US response after North Korea’s alleged cyber-attack”.This dangerous malware termed “wiper” could overwrite data and stop important execution processes. An analysis by the FBI showed distinct similarities between this attack and the code used to attack South Korea in 2013, thus confirming that hackers re-use code from already existing malware to create new variants. This attack along with other recently discovered attacks such as Regin, Opcleaver give one clear message: current cyber security defense mechanisms are not sufficient enough to thwart these sophisticated attacks. Today’s defense mechanisms are based on scanning systems for suspicious or malicious activity. If such an activity is found, the files under suspect are either quarantined or the vulnerable system is patched with an update. These scanning methods are based on a variety of techniques such as static analysis, dynamic analysis and other heuristics based techniques, which are often slow to react to new attacks and threats. Static analysis is based on analyzing an executable without executing it, while dynamic analysis executes the binary and studies its behavioral characteristics. Hackers are familiar with these standard methods and come up with ways to evade the current defense mechanisms. They produce new malware variants that easily evade the detection methods. These variants are created from existing malware using inexpensive easily available “factory toolkits” in a “virtual factory” like setting, which then spread over and infect more systems. Once a system is compromised, it either quickly looses control and/or the infection spreads to other networked systems. While security techniques constantly evolve to keep up with new attacks, hackers too change their ways and continue to evade defense mechanisms. As this never-ending billion dollar “cat and mouse game” continues, it may be useful to look at avenues that can bring in novel alternative and/or orthogonal defense approaches to counter the ongoing threats. The hope is to catch these new attacks using orthogonal and complementary methods which may not be well known to hackers, thus making it more difficult and/or expensive for them to evade all detection schemes. This paper focuses on such orthogonal approaches from Signal and Image Processing that complement standard approaches.",
"title": ""
},
{
"docid": "7f5af3806f0baa040a26f258944ad3f9",
"text": "Linear Discriminant Analysis (LDA) is a widely-used supervised dimensionality reduction method in computer vision and pattern recognition. In null space based LDA (NLDA), a well-known LDA extension, between-class distance is maximized in the null space of the within-class scatter matrix. However, there are some limitations in NLDA. Firstly, for many data sets, null space of within-class scatter matrix does not exist, thus NLDA is not applicable to those datasets. Secondly, NLDA uses arithmetic mean of between-class distances and gives equal consideration to all between-class distances, which makes larger between-class distances can dominate the result and thus limits the performance of NLDA. In this paper, we propose a harmonic mean based Linear Discriminant Analysis, Multi-Class Discriminant Analysis (MCDA), for image classification, which minimizes the reciprocal of weighted harmonic mean of pairwise between-class distance. More importantly, MCDA gives higher priority to maximize small between-class distances. MCDA can be extended to multi-label dimension reduction. Results on 7 single-label data sets and 4 multi-label data sets show that MCDA has consistently better performance than 10 other single-label approaches and 4 other multi-label approaches in terms of classification accuracy, macro and micro average F1 score.",
"title": ""
},
{
"docid": "97691304930a85066a15086877473857",
"text": "In the context of modern cryptosystems, a common theme is the creation of distributed trust networks. In most of these designs, permanent storage of a contract is required. However, permanent storage can become a major performance and cost bottleneck. As a result, good code compression schemes are a key factor in scaling these contract based cryptosystems. For this project, we formalize and implement a data structure called the Merkelized Abstract Syntax Tree (MAST) to address both data integrity and compression. MASTs can be used to compactly represent contractual programs that will be executed remotely, and by using some of the properties of Merkle trees, they can also be used to verify the integrity of the code being executed. A concept by the same name has been discussed in the Bitcoin community for a while, the terminology originates from the work of Russel O’Connor and Pieter Wuille, however this discussion was limited to private correspondences. We present a formalization of it and provide an implementation.The project idea was developed with Bitcoin applications in mind, and the experiment we set up uses MASTs in a crypto currency network simulator. Using MASTs in the Bitcoin protocol [2] would increase the complexity (length) of contracts permitted on the network, while simultaneously maintaining the security of broadcasted data. Additionally, contracts may contain privileged, secret branches of execution.",
"title": ""
},
{
"docid": "097879c593aa68602564c176b806a74b",
"text": "We study the recognition of surfaces made from different materials such as concrete, rug, marble, or leather on the basis of their textural appearance. Such natural textures arise from spatial variation of two surface attributes: (1) reflectance and (2) surface normal. In this paper, we provide a unified model to address both these aspects of natural texture. The main idea is to construct a vocabulary of prototype tiny surface patches with associated local geometric and photometric properties. We call these 3D textons. Examples might be ridges, grooves, spots or stripes or combinations thereof. Associated with each texton is an appearance vector, which characterizes the local irradiance distribution, represented as a set of linear Gaussian derivative filter outputs, under different lighting and viewing conditions. Given a large collection of images of different materials, a clustering approach is used to acquire a small (on the order of 100) 3D texton vocabulary. Given a few (1 to 4) images of any material, it can be characterized using these textons. We demonstrate the application of this representation for recognition of the material viewed under novel lighting and viewing conditions. We also illustrate how the 3D texton model can be used to predict the appearance of materials under novel conditions.",
"title": ""
},
{
"docid": "0ccf20f28baf8a11c78d593efb9f6a52",
"text": "From a traction application point of view, proper operation of the synchronous reluctance motor over a wide speed range and mechanical robustness is desired. This paper presents new methods to improve the rotor mechanical integrity and the flux weakening capability at high speed using geometrical and variable ampere-turns concepts. The results from computer-aided analysis and experiment are compared to evaluate the methods. It is shown that, to achieve a proper design at high speed, the magnetic and mechanical performances need to be simultaneously analyzed due to their mutual effect.",
"title": ""
},
{
"docid": "f1cfb30b328725121ed232381d43ac3a",
"text": "High-performance object detection relies on expensive convolutional networks to compute features, often leading to significant challenges in applications, e.g. those that require detecting objects from video streams in real time. The key to this problem is to trade accuracy for efficiency in an effective way, i.e. reducing the computing cost while maintaining competitive performance. To seek a good balance, previous efforts usually focus on optimizing the model architectures. This paper explores an alternative approach, that is, to reallocate the computation over a scale-time space. The basic idea is to perform expensive detection sparsely and propagate the results across both scales and time with substantially cheaper networks, by exploiting the strong correlations among them. Specifically, we present a unified framework that integrates detection, temporal propagation, and across-scale refinement on a Scale-Time Lattice. On this framework, one can explore various strategies to balance performance and cost. Taking advantage of this flexibility, we further develop an adaptive scheme with the detector invoked on demand and thus obtain improved tradeoff. On ImageNet VID dataset, the proposed method can achieve a competitive mAP 79.6% at 20 fps, or 79.0% at 62 fps as a performance/speed tradeoff.1",
"title": ""
},
{
"docid": "a41d40d8349c1071c6f532b6b8e11be3",
"text": "A novel wideband slotline antenna is proposed using the multimode resonance concept. By symmetrically introducing two slot stubs along the slotline radiator near the nulls of electric-field distribution of the second odd-order mode, two radiation modes are excited in a single slotline resonator. With the help of the two stubs, the second odd-order mode gradually merges with its first counterpart and results into a wideband radiation with two resonances. Prototype antennas are then fabricated to experimentally validate the principle and design approach of the proposed slotline antenna. It is shown that the proposed slotline antenna's impedance bandwidth could be effectively increased to 32.7% while keeping an inherent narrow slot structure.",
"title": ""
},
{
"docid": "ebe91d4e3559439af5dd729e7321883d",
"text": "Performance of data analytics in Internet of Things (IoTs) depends on effective transport services offered by the underlying network. Fog computing enables independent data-plane computational features at the edge-switches, which serves as a platform for performing certain critical analytics required at the IoT source. To this end, in this paper, we implement a working prototype of Fog computing node based on Software-Defined Networking (SDN). Message Queuing Telemetry Transport (MQTT) is chosen as the candidate IoT protocol that transports data generated from IoT devices (a:k:a: MQTT publishers) to a remote host (called MQTT broker). We implement the MQTT broker functionalities integrated at the edge-switches, that serves as a platform to perform simple message-based analytics at the switches, and also deliver messages in a reliable manner to the end-host for post-delivery analytics. We mathematically validate the improved delivery performance as offered by the proposed switch-embedded brokers.",
"title": ""
}
] |
scidocsrr
|
fc572685aa55c813ea4803ee813b4801
|
Proposal: Scalable, Active and Flexible Learning on Distributions
|
[
{
"docid": "9e3057c25630bfdf5e7ebcc53b6995b0",
"text": "We present a new solution to the ``ecological inference'' problem, of learning individual-level associations from aggregate data. This problem has a long history and has attracted much attention, debate, claims that it is unsolvable, and purported solutions. Unlike other ecological inference techniques, our method makes use of unlabeled individual-level data by embedding the distribution over these predictors into a vector in Hilbert space. Our approach relies on recent learning theory results for distribution regression, using kernel embeddings of distributions. Our novel approach to distribution regression exploits the connection between Gaussian process regression and kernel ridge regression, giving us a coherent, Bayesian approach to learning and inference and a convenient way to include prior information in the form of a spatial covariance function. Our approach is highly scalable as it relies on FastFood, a randomized explicit feature representation for kernel embeddings. We apply our approach to the challenging political science problem of modeling the voting behavior of demographic groups based on aggregate voting data. We consider the 2012 US Presidential election, and ask: what was the probability that members of various demographic groups supported Barack Obama, and how did this vary spatially across the country? Our results match standard survey-based exit polling data for the small number of states for which it is available, and serve to fill in the large gaps in this data, at a much higher degree of granularity.",
"title": ""
},
{
"docid": "09df260d26638f84ec3bd309786a8080",
"text": "If we take an existing supervised NLP system, a simple and general way to improve accuracy is to use unsupervised word representations as extra word features. We evaluate Brown clusters, Collobert and Weston (2008) embeddings, and HLBL (Mnih & Hinton, 2009) embeddings of words on both NER and chunking. We use near state-of-the-art supervised baselines, and find that each of the three word representations improves the accuracy of these baselines. We find further improvements by combining different word representations. You can download our word features, for off-the-shelf use in existing NLP systems, as well as our code, here: http://metaoptimize. com/projects/wordreprs/",
"title": ""
},
{
"docid": "2bdaaeb18db927e2140c53fcc8d4fa30",
"text": "Many information gathering problems require determining the set of points, for which an unknown function takes value above or below some given threshold level. As a concrete example, in the context of environmental monitoring of Lake Zurich we would like to estimate the regions of the lake where the concentration of chlorophyll or algae is greater than some critical value, which would serve as an indicator of algal bloom phenomena. A critical factor in such applications is the high cost in terms of time, baery power, etc. that is associated with each measurement, therefore it is important to be careful about selecting “informative” locations to sample, in order to reduce the total sampling effort required. We formalize the task of level set estimation as a classification problem with sequential measurements, where the unknown function is modeled as a sample from a Gaussian process (GP). We propose LSE, an active learning algorithm that guides both sampling and classification based on GP-derived confidence bounds, and provide theoretical guarantees about its sample complexity. Furthermore, we extend LSE and its theory to two more natural seings: (1) where the threshold level is implicitly defined as a percentage of the (unknown) maximum of the target function and (2) where samples are selected in batches. Based on the laer extension we also propose a simple path planning algorithm. We evaluate the effectiveness of our proposed methods on two problems of practical interest, namely the aforementioned autonomous monitoring of algal populations in Lake Zurich and geolocating network latency.",
"title": ""
}
] |
[
{
"docid": "5b0e33ede34f6532a48782e423128f49",
"text": "The literature on globalisation reveals wide agreement concerning the relevance of international sourcing strategies as key competitive factors for companies seeking globalisation, considering such strategies to be a purchasing management approach focusing on supplies from vendors in the world market, rather than relying exclusively on domestic offerings (Petersen, Frayer, & Scannel, 2000; Stevens, 1995; Trent & Monczka, 1998). Thus, the notion of “international sourcing” mentioned by these authors describes the level of supply globalisation in companies’ purchasing strategy, as related to supplier source (Giunipero & Pearcy, 2000; Levy, 1995; Trent & Monczka, 2003b).",
"title": ""
},
{
"docid": "0a3d4b02d2273087c50b8b0d77fb8c36",
"text": "Circulation. 2017;135:e867–e884. DOI: 10.1161/CIR.0000000000000482 April 11, 2017 e867 ABSTRACT: Multiple randomized controlled trials (RCTs) have assessed the effects of supplementation with eicosapentaenoic acid plus docosahexaenoic acid (omega-3 polyunsaturated fatty acids, commonly called fish oils) on the occurrence of clinical cardiovascular diseases. Although the effects of supplementation for the primary prevention of clinical cardiovascular events in the general population have not been examined, RCTs have assessed the role of supplementation in secondary prevention among patients with diabetes mellitus and prediabetes, patients at high risk of cardiovascular disease, and those with prevalent coronary heart disease. In this scientific advisory, we take a clinical approach and focus on common indications for omega-3 polyunsaturated fatty acid supplements related to the prevention of clinical cardiovascular events. We limited the scope of our review to large RCTs of supplementation with major clinical cardiovascular disease end points; meta-analyses were considered secondarily. We discuss the features of available RCTs and provide the rationale for our recommendations. We then use existing American Heart Association criteria to assess the strength of the recommendation and the level of evidence. On the basis of our review of the cumulative evidence from RCTs designed to assess the effect of omega-3 polyunsaturated fatty acid supplementation on clinical cardiovascular events, we update prior recommendations for patients with prevalent coronary heart disease, and we offer recommendations, when data are available, for patients with other clinical indications, including patients with diabetes mellitus and prediabetes and those with high risk of cardiovascular disease, stroke, heart failure, and atrial fibrillation. David S. Siscovick, MD, MPH, FAHA, Chair Thomas A. Barringer, MD, FAHA Amanda M. Fretts, PhD, MPH Jason H.Y. Wu, PhD, MSc, FAHA Alice H. Lichtenstein, DSc, FAHA Rebecca B. Costello, PhD, FAHA Penny M. Kris-Etherton, PhD, RD, FAHA Terry A. Jacobson, MD, FAHA Mary B. Engler, PhD, RN, MS, FAHA Heather M. Alger, PhD Lawrence J. Appel, MD, MPH, FAHA Dariush Mozaffarian, MD, DrPH, FAHA On behalf of the American Heart Association Nutrition Committee of the Council on Lifestyle and Cardiometabolic Health; Council on Epidemiology and Prevention; Council on Cardiovascular Disease in the Young; Council on Cardiovascular and Stroke Nursing; and Council on Clinical Cardiology Omega-3 Polyunsaturated Fatty Acid (Fish Oil) Supplementation and the Prevention of Clinical Cardiovascular Disease",
"title": ""
},
{
"docid": "da61794b9ffa1f6f4bc39cef9655bf77",
"text": "This manuscript analyzes the effects of design parameters, such as aspect ratio, doping concentration and bias, on the performance of a general CMOS Hall sensor, with insight on current-related sensitivity, power consumption, and bandwidth. The article focuses on rectangular-shaped Hall probes since this is the most general geometry leading to shape-independent results. The devices are analyzed by means of 3D-TCAD simulations embedding galvanomagnetic transport model, which takes into account the Lorentz force acting on carriers due to a magnetic field. Simulation results define a set of trade-offs and design rules that can be used by electronic designers to conceive their own Hall probes.",
"title": ""
},
{
"docid": "fe4954b2b96a0ab95f5eedfca9b12066",
"text": "Marketing historically has undergone various shifts in emphasis from production through sales to marketing orientation. However, the various orientations have failed to engage customers in meaningful relationship mutually beneficial to organisations and customers, with all forms of the shift still exhibiting the transactional approach inherit in traditional marketing (Kubil & Doku, 2010). However, Coltman (2006) indicates that in strategy and marketing literature, scholars have long suggested that a customer centred strategy is fundamental to competitive advantage and that customer relationship management (CRM) programmes are increasingly being used by organisations to support the type of customer understanding and interdepartmental connectedness required to effectively execute a customer strategy.",
"title": ""
},
{
"docid": "f3fdc63904e2bf79df8b6ca30a864fd3",
"text": "Although the potential benefits of a powered ankle-foot prosthesis have been well documented, no one has successfully developed and verified that such a prosthesis can improve amputee gait compared to a conventional passive-elastic prosthesis. One of the main hurdles that hinder such a development is the challenge of building an ankle-foot prosthesis that matches the size and weight of the intact ankle, but still provides a sufficiently large instantaneous power output and torque to propel an amputee. In this paper, we present a novel, powered ankle-foot prosthesis that overcomes these design challenges. The prosthesis comprises an unidirectional spring, configured in parallel with a force-controllable actuator with series elasticity. With this architecture, the ankle-foot prosthesis matches the size and weight of the human ankle, and is shown to be satisfying the restrictive design specifications dictated by normal human ankle walking biomechanics.",
"title": ""
},
{
"docid": "e8fee9f93106ce292c89c26be373030f",
"text": "As a non-invasive imaging modality, optical coherence tomography (OCT) can provide micrometer-resolution 3D images of retinal structures. Therefore it is commonly used in the diagnosis of retinal diseases associated with edema in and under the retinal layers. In this paper, a new framework is proposed for the task of fluid segmentation and detection in retinal OCT images. Based on the raw images and layers segmented by a graph-cut algorithm, a fully convolutional neural network was trained to recognize and label the fluid pixels. Random forest classification was performed on the segmented fluid regions to detect and reject the falsely labeled fluid regions. The leave-one-out cross validation experiments on the RETOUCH database show that our method performs well in both segmentation (mean Dice: 0.7317) and detection (mean AUC: 0.985) tasks.",
"title": ""
},
{
"docid": "181356b104a26d1d300d10619fb78f45",
"text": "Recent advances in combining deep neural network architectures with reinforcement learning techniques have shown promising potential results in solving complex control problems with high dimensional state and action spaces. Inspired by these successes, in this paper, we build two kinds of reinforcement learning algorithms: deep policy-gradient and value-function based agents which can predict the best possible traffic signal for a traffic intersection. At each time step, these adaptive traffic light control agents receive a snapshot of the current state of a graphical traffic simulator and produce control signals. The policy-gradient based agent maps its observation directly to the control signal, however the value-function based agent first estimates values for all legal control signals. The agent then selects the optimal control action with the highest value. Our methods show promising results in a traffic network simulated in the SUMO traffic simulator, without suffering from instability issues during the training process.",
"title": ""
},
{
"docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "3f225efbccb63d0c5170fce44fadb3c6",
"text": "Pelvic pain is a common gynaecological complaint, sometimes without any obvious etiology. We report a case of pelvic congestion syndrome, an often overlooked cause of pelvic pain, diagnosed by helical computed tomography. This seems to be an effective and noninvasive imaging modality. RID=\"\"ID=\"\"<e5>Correspondence to:</e5> J. H. Desimpelaere",
"title": ""
},
{
"docid": "3ca2d95885303f1ab395bd31d32df0c2",
"text": "Curiosity to predict personality, behavior and need for this is not as new as invent of social media. Personality prediction to better accuracy could be very useful for society. There are many papers and researches conducted on usefulness of the data for various purposes like in marketing, dating suggestions, organization development, personalized recommendations and health care to name a few. With the introduction and extreme popularity of Online Social Networking Sites like Facebook, Twitter and LinkedIn numerous researches were conducted based on public data available, online social networking applications and social behavior towards friends and followers to predict the personality. Structured mining of the social media content can provide us the ability to predict some personality traits. This survey aims at providing researchers with an overview of various strategies used for studies and research concentrating on predicting user personality and behavior using online social networking site content. There positives, limitations are well summarized as reported in the literature. Finally, a brief discussion including open issues for further research in the area of social networking site based personality prediction preceding conclusion.",
"title": ""
},
{
"docid": "5fa0ae0baaa954fb2ab356719f8ca629",
"text": "Estimating the pose of a camera (virtual or real) in which some augmentation takes place is one of the most important parts of an augmented reality (AR) system. Availability of powerful processors and fast frame grabbers have made vision-based trackers commonly used due to their accuracy as well as flexibility and ease of use. Current vision-based trackers are based on tracking of markers. The use of markers increases robustness and reduces computational requirements. However, their use can be very complicated, as they require certain maintenance. Direct use of scene features for tracking, therefore, is desirable. To this end, we describe a general system that tracks the position and orientation of a camera observing a scene without any visual markers. Our method is based on a two-stage process. In the first stage, a set of features is learned with the help of an external tracking system while in action. The second stage uses these learned features for camera tracking when the system in the first stage decides that it is possible to do so. The system is very general so that it can employ any available feature tracking and pose estimation system for learning and tracking. We experimentally demonstrate the viability of the method in real-life examples.",
"title": ""
},
{
"docid": "5ca29a94ac01f9ad20249021802b1746",
"text": "Big Data has become a very popular term. It refers to the enormous amount of structured, semi-structured and unstructured data that are exponentially generated by high-performance applications in many domains: biochemistry, genetics, molecular biology, physics, astronomy, business, to mention a few. Since the literature of Big Data has increased significantly in recent years, it becomes necessary to develop an overview of the state-of-the-art in Big Data. This paper aims to provide a comprehensive review of Big Data literature of the last 4 years, to identify the main challenges, areas of application, tools and emergent trends of Big Data. To meet this objective, we have analyzed and classified 457 papers concerning Big Data. This review gives relevant information to practitioners and researchers about the main trends in research and application of Big Data in different technical domains, as well as a reference overview of Big Data tools.",
"title": ""
},
{
"docid": "f8ea80edbb4f31d5c0d1a2da5e8aae13",
"text": "BACKGROUND\nPremenstrual syndrome (PMS) is a common condition, and for 5% of women, the influence is so severe as to interfere with their mental health, interpersonal relationships, or studies. Severe PMS may result in decreased occupational productivity. The aim of this study was to investigate the influence of perception of PMS on evaluation of work performance.\n\n\nMETHODS\nA total of 1971 incoming female university students were recruited in September 2009. A simulated clinical scenario was used, with a test battery including measurement of psychological symptoms and the Chinese Premenstrual Symptom Questionnaire.\n\n\nRESULTS\nWhen evaluating employee performance in the simulated scenario, 1565 (79.4%) students neglected the impact of PMS, while 136 (6.9%) students considered it. Multivariate logistic regression showed that perception of daily function impairment due to PMS and frequency of measuring body weight were significantly associated with consideration of the influence of PMS on evaluation of work performance.\n\n\nCONCLUSION\nIt is important to increase the awareness of functional impairments related to severe PMS.",
"title": ""
},
{
"docid": "f7c2ebd19c41b697d52850a225bfe8a0",
"text": "There is currently a misconception among designers and users of free space laser communication (lasercom) equipment that 1550 nm light suffers from less atmospheric attenuation than 785 or 850 nm light in all weather conditions. This misconception is based upon a published equation for atmospheric attenuation as a function of wavelength, which is used frequently in the free-space lasercom literature. In hazy weather (visibility > 2 km), the prediction of less atmospheric attenuation at 1550 nm is most likely true. However, in foggy weather (visibility < 500 m), it appears that the attenuation of laser light is independent of wavelength, ie. 785 nm, 850 nm, and 1550 nm are all attenuated equally by fog. This same wavelength independence is also observed in snow and rain. This observation is based on an extensive literature search, and from full Mie scattering calculations. A modification to the published equation describing the atmospheric attenuation of laser power, which more accurately describes the effects of fog, is offered. This observation of wavelength-independent attenuation in fog is important, because fog, heavy snow, and extreme rain are the only types of weather that are likely to disrupt short (<500 m) lasercom links. Short lasercom links will be necessary to meet the high availability requirements of the telecommunications industry.",
"title": ""
},
{
"docid": "485270200008a292cefdb1e952441113",
"text": "This paper describes the prototype design, specimen design, experimental setup, and experimental results of three steel plate shear wall concepts. Prototype light-gauge steel plate shear walls are designed as seismic retrofits for a hospital st area of high seismicity, and emphasis is placed on minimizing their impact on the existing framing. Three single-story test spe designed using these prototypes as a basis, two specimens with flat infill plates (thicknesses of 0.9 mm ) and a third using a corrugat infill plate (thickness of 0.7 mm). Connection of the infill plates to the boundary frames is achieved through the use of b combination with industrial strength epoxy or welds, allowing for mobility of the infills if desired. Testing of the systems is don quasi-static conditions. It is shown that one of the flat infill plate specimens, as well as the specimen utilizing a corrugated in achieve significant ductility and energy dissipation while minimizing the demands placed on the surrounding framing. Exp results are compared to monotonic pushover predictions from computer analysis using a simple model and good agreement DOI: 10.1061/ (ASCE)0733-9445(2005)131:2(259) CE Database subject headings: Shear walls; Experimentation; Retrofitting; Seismic design; Cyclic design; Steel plates . d the field g of be a ; 993; rot are have ds ts istexfrom ctive is to y seis eintrofit reatn to the ular are r light-",
"title": ""
},
{
"docid": "3564cf609cf1b9666eaff7edcd12a540",
"text": "Recent advances in neural network modeling have enabled major strides in computer vision and other artificial intelligence applications. Human-level visual recognition abilities are coming within reach of artificial systems. Artificial neural networks are inspired by the brain, and their computations could be implemented in biological neurons. Convolutional feedforward networks, which now dominate computer vision, take further inspiration from the architecture of the primate visual hierarchy. However, the current models are designed with engineering goals, not to model brain computations. Nevertheless, initial studies comparing internal representations between these models and primate brains find surprisingly similar representational spaces. With human-level performance no longer out of reach, we are entering an exciting new era, in which we will be able to build biologically faithful feedforward and recurrent computational models of how biological brains perform high-level feats of intelligence, including vision.",
"title": ""
},
{
"docid": "176d0bf9525d6dd9bd4837b174e4f769",
"text": "Prader-Willi syndrome (PWS) is a genetic disorder frequently characterized by obesity, growth hormone deficiency, genital abnormalities, and hypogonadotropic hypogonadism. Incomplete or delayed pubertal development as well as premature adrenarche are usually found in PWS, whereas central precocious puberty (CPP) is very rare. This study aimed to report the clinical and biochemical follow-up of a PWS boy with CPP and to discuss the management of pubertal growth. By the age of 6, he had obesity, short stature, and many clinical criteria of PWS diagnosis, which was confirmed by DNA methylation test. Therapy with recombinant human growth hormone (rhGH) replacement (0.15 IU/kg/day) was started. Later, he presented psychomotor agitation, aggressive behavior, and increased testicular volume. Laboratory analyses were consistent with the diagnosis of CPP (gonadorelin-stimulated LH peak 15.8 IU/L, testosterone 54.7 ng/dL). The patient was then treated with gonadotropin-releasing hormone analog (GnRHa). Hypothalamic dysfunctions have been implicated in hormonal disturbances related to pubertal development, but no morphologic abnormalities were detected in the present case. Additional methylation analysis (MS-MLPA) of the chromosome 15q11 locus confirmed PWS diagnosis. We presented the fifth case of CPP in a genetically-confirmed PWS male. Combined therapy with GnRHa and rhGH may be beneficial in this rare condition of precocious pubertal development in PWS.",
"title": ""
},
{
"docid": "3a3f3e1c0eac36d53a40d7639c3d65cc",
"text": "The aim of this paper is to present a hybrid approach to accurate quantification of vascular structures from magnetic resonance angiography (MRA) images using level set methods and deformable geometric models constructed with 3-D Delaunay triangulation. Multiple scale filtering based on the analysis of local intensity structure using the Hessian matrix is used to effectively enhance vessel structures with various diameters. The level set method is then applied to automatically segment vessels enhanced by the filtering with a speed function derived from enhanced MRA images. Since the goal of this paper is to obtain highly accurate vessel borders, suitable for use in fluid flow simulations, in a subsequent step, the vessel surface determined by the level set method is triangulated using 3-D Delaunay triangulation and the resulting surface is used as a parametric deformable model. Energy minimization is then performed within a variational setting with a first-order internal energy; the external energy is derived from 3-D image gradients. Using the proposed method, vessels are accurately segmented from MRA data.",
"title": ""
},
{
"docid": "9f469cdc1864aad2026630a29c210c1f",
"text": "This paper proposes an asymptotically optimal hybrid beamforming solution for large antenna arrays by exploiting the properties of the singular vectors of the channel matrix. It is shown that the elements of the channel matrix with Rayleigh fading follow a normal distribution when large antenna arrays are employed. The proposed beamforming algorithm is effective in both sparse and rich propagation environments, and is applicable for both point-to-point and multiuser scenarios. In addition, a closed-form expression and a lower bound for the achievable rates are derived when analog and digital phase shifters are employed. It is shown that the performance of the hybrid beamformers using phase shifters with more than 2-bit resolution is comparable with analog phase shifting. A novel phase shifter selection scheme that reduces the power consumption at the phase shifter network is proposed when the wireless channel is modeled by Rayleigh fading. Using this selection scheme, the spectral efficiency can be increased as the power consumption in the phase shifter network reduces. Compared with the scenario that all of the phase shifters are in operation, the simulation results indicate that the spectral efficiency increases when up to 50% of phase shifters are turned OFF.",
"title": ""
},
{
"docid": "1e493440a61578c8c6ca8fbe63f475d6",
"text": "3D object detection is an essential task in autonomous driving. Recent techniques excel with highly accurate detection rates, provided the 3D input data is obtained from precise but expensive LiDAR technology. Approaches based on cheaper monocular or stereo imagery data have, until now, resulted in drastically lower accuracies — a gap that is commonly attributed to poor image-based depth estimation. However, in this paper we argue that data representation (rather than its quality) accounts for the majority of the difference. Taking the inner workings of convolutional neural networks into consideration, we propose to convert imagebased depth maps to pseudo-LiDAR representations — essentially mimicking LiDAR signal. With this representation we can apply different existing LiDAR-based detection algorithms. On the popular KITTI benchmark, our approach achieves impressive improvements over the existing stateof-the-art in image-based performance — raising the detection accuracy of objects within 30m range from the previous state-of-the-art of 22% to an unprecedented 74%. At the time of submission our algorithm holds the highest entry on the KITTI 3D object detection leaderboard for stereo image based approaches.",
"title": ""
}
] |
scidocsrr
|
b50725324e44b8548ecc10451e59ec09
|
Logical Physical Clocks and Consistent Snapshots in Globally Distributed Databases
|
[
{
"docid": "f481f0ba70ce16587f7c5639360bc2f9",
"text": "We describe the design and implementation of Walter, a key-value store that supports transactions and replicates data across distant sites. A key feature behind Walter is a new property called Parallel Snapshot Isolation (PSI). PSI allows Walter to replicate data asynchronously, while providing strong guarantees within each site. PSI precludes write-write conflicts, so that developers need not worry about conflict-resolution logic. To prevent write-write conflicts and implement PSI, Walter uses two new and simple techniques: preferred sites and counting sets. We use Walter to build a social networking application and port a Twitter-like application.",
"title": ""
},
{
"docid": "1ac8e84ada32efd6f6c7c9fdfd969ec0",
"text": "Spanner is Google's scalable, multi-version, globally-distributed, and synchronously-replicated database. It provides strong transactional semantics, consistent replication, and high performance reads and writes for a variety of Google's applications. I'll discuss the design and implementation of Spanner, as well as some of the lessons we have learned along the way. I'll also discuss some open challenges that we still see in building scalable distributed storage systems.",
"title": ""
},
{
"docid": "457684e85d51869692aab90231a711a1",
"text": "Cassandra is a distributed storage system for managing structured data that is designed to scale to a very large size across many commodity servers, with no single point of failure. Reliability at massive scale is a very big challenge. Outages in the service can have significant negative impact. Hence Cassandra aims to run on top of an infrastructure of hundreds of nodes (possibly spread across different datacenters). At this scale, small and large components fail continuously; the way Cassandra manages the persistent state in the face of these failures drives the reliability and scalability of the software systems relying on this service. Cassandra has achieved several goals--scalability, high performance, high availability and applicability. In many ways Cassandra resembles a database and shares many design and implementation strategies with databases. Cassandra does not support a full relational data model; instead, it provides clients with a simple data model that supports dynamic control over data layout and format.",
"title": ""
}
] |
[
{
"docid": "6147c993e4c7f5b9daf18f99c374b129",
"text": "We propose an efficient text summarization technique that involves two basic operations. The first operation involves finding coherent chunks in the document and the second operation involves ranking the text in the individual coherent chunks and picking the sentences that rank above a given threshold. The coherent chunks are formed by exploiting the lexical relationship between adjacent sentences in the document. Occurrence of words through repetition or relatedness by sense relation plays a major role in forming a cohesive tie. The proposed text ranking approach is based on a graph theoretic ranking model applied to text summarization task.",
"title": ""
},
{
"docid": "c2305233c8ec74913196a6d8a832d582",
"text": "Almost a decade has passed since the objectives and benefits of autonomic computing were stated, yet even the latest system designs and deployments exhibit only limited and isolated elements of autonomic functionality. In previous work, we identified several of the key challenges behind this delay in the adoption of autonomic solutions, and proposed a generic framework for the development of autonomic computing systems that overcomes these challenges. In this article, we describe how existing technologies and standards can be used to realise our autonomic computing framework, and present its implementation as a service-oriented architecture. We show how this implementation employs a combination of automated code generation, model-based and object-oriented development techniques to ensure that the framework can be used to add autonomic capabilities to systems whose characteristics are unknown until runtime. We then use our framework to develop two autonomic solutions for the allocation of server capacity to services of different priorities and variable workloads, thus illustrating its application in the context of a typical data-centre resource management problem.",
"title": ""
},
{
"docid": "6a3bb84e7b8486692611aaa790609099",
"text": "As ubiquitous commerce using IT convergence technologies is coming, it is important for the strategy of cosmetic sales to investigate the sensibility and the degree of preference in the environment for which the makeup style has changed focusing on being consumer centric. The users caused the diversification of the facial makeup styles, because they seek makeup and individuality to satisfy their needs. In this paper, we proposed the effect of the facial makeup style recommendation on visual sensibility. Development of the facial makeup style recommendation system used a user interface, sensibility analysis, weather forecast, and collaborative filtering for the facial makeup styles to satisfy the user’s needs in the cosmetic industry. Collaborative filtering was adopted to recommend facial makeup style of interest for users based on the predictive relationship discovered between the current user and other previous users. We used makeup styles in the survey questionnaire. The pictures of makeup style details, such as foundation, color lens, eye shadow, blusher, eyelash, lipstick, hairstyle, hairpin, necklace, earring, and hair length were evaluated in terms of sensibility. The data were analyzed by SPSS using ANOVA and factor analysis to discover the most effective types of details from the consumer’s sensibility viewpoint. Sensibility was composed of three concepts: contemporary, mature, and individual. The details of facial makeup styles were positioned in 3D-concept space to relate each type of detail to the makeup concept regarding a woman’s cosmetics. Ultimately, this paper suggests empirical applications to verify the adequacy and the validity of this system.",
"title": ""
},
{
"docid": "df2c576e7cc3259ae1e0c29b3e3b4d35",
"text": "The use of previous direct interactions is probably the best way to calculate a reputation but, unfortunately this information is not always available. This is especially true in large multi-agent systems where interaction is scarce. In this paper we present a reputation system that takes advantage, among other things, of social relations between agents to overcome this problem.",
"title": ""
},
{
"docid": "d5d2b61493ed11ee74d566b7713b57ba",
"text": "BACKGROUND\nSymptomatic breakthrough in proton pump inhibitor (PPI)-treated gastro-oesophageal reflux disease (GERD) patients is a common problem with a range of underlying causes. The nonsystemic, raft-forming action of alginates may help resolve symptoms.\n\n\nAIM\nTo assess alginate-antacid (Gaviscon Double Action, RB, Slough, UK) as add-on therapy to once-daily PPI for suppression of breakthrough reflux symptoms.\n\n\nMETHODS\nIn two randomised, double-blind studies (exploratory, n=52; confirmatory, n=262), patients taking standard-dose PPI who had breakthrough symptoms, assessed by Heartburn Reflux Dyspepsia Questionnaire (HRDQ), were randomised to add-on Gaviscon or placebo (20 mL after meals and bedtime). The exploratory study endpoint was change in HRDQ score during treatment vs run-in. The confirmatory study endpoint was \"response\" defined as ≥3 days reduction in the number of \"bad\" days (HRDQ [heartburn/regurgitation] >0.70) during treatment vs run-in.\n\n\nRESULTS\nIn the exploratory study, significantly greater reductions in HRDQ scores (heartburn/regurgitation) were observed in the Gaviscon vs placebo (least squares mean difference [95% CI] -2.10 [-3.71 to -0.48]; P=.012). Post hoc \"responder\" analysis of the exploratory study also revealed significantly more Gaviscon patients (75%) achieved ≥3 days reduction in \"bad\" days vs placebo patients (36%), P=.005. In the confirmatory study, symptomatic improvement was observed with add-on Gaviscon (51%) but there was no significant difference in response vs placebo (48%) (OR (95% CI) 1.15 (0.69-1.91), P=.5939).\n\n\nCONCLUSIONS\nAdding Gaviscon to PPI reduced breakthrough GERD symptoms but a nearly equal response was observed for placebo. Response to intervention may vary according to whether symptoms are functional in origin.",
"title": ""
},
{
"docid": "f87b87af157de5bd5229f3e20a0d12a2",
"text": "The paper describes an improvement of the chopper method for elimination of parasitic voltages in a low resistance comparison and measurement procedure. The basic circuit diagram along with a short description of the working principle are presented and the appropriate low resistance comparator prototype was designed and realized. Preliminary examinations confirm the possibility of measuring extremely low voltages. Very high accuracy in resistance comparison and measurement is achieved (0.08 ppm for 1,000 attempts). Some special critical features in the design are discussed and solutions for overcoming the problems are described.",
"title": ""
},
{
"docid": "8fcc1b7e4602649f66817c4c50e10b3d",
"text": "Conventional wisdom suggests that praising a child as a whole or praising his or her traits is beneficial. Two studies tested the hypothesis that both criticism and praise that conveyed person or trait judgments could send a message of contingent worth and undermine subsequent coping. In Study 1, 67 children (ages 5-6 years) role-played tasks involving a setback and received 1 of 3 forms of criticism after each task: person, outcome, or process criticism. In Study 2, 64 children role-played successful tasks and received either person, outcome, or process praise. In both studies, self-assessments, affect, and persistence were measured on a subsequent task involving a setback. Results indicated that children displayed significantly more \"helpless\" responses (including self-blame) on all dependent measures after person criticism or praise than after process criticism or praise. Thus person feedback, even when positive, can create vulnerability and a sense of contingent self-worth.",
"title": ""
},
{
"docid": "93133be6094bba6e939cef14a72fa610",
"text": "We systematically searched available databases. We reviewed 6,143 studies published from 1833 to 2017. Reports in English, French, German, Italian, and Spanish were considered, as were publications in other languages if definitive treatment and recurrence at specific follow-up times were described in an English abstract. We assessed data in the manner of a meta-analysis of RCTs; further we assessed non-RCTs in the manner of a merged data analysis. In the RCT analysis including 11,730 patients, Limberg & Dufourmentel operations were associated with low recurrence of 0.6% (95%CI 0.3–0.9%) 12 months and 1.8% (95%CI 1.1–2.4%) respectively 24 months postoperatively. Analysing 89,583 patients from RCTs and non-RCTs, the Karydakis & Bascom approaches were associated with recurrence of only 0.2% (95%CI 0.1–0.3%) 12 months and 0.6% (95%CI 0.5–0.8%) 24 months postoperatively. Primary midline closure exhibited long-term recurrence up to 67.9% (95%CI 53.3–82.4%) 240 months post-surgery. For most procedures, only a few RCTs without long term follow up data exist, but substitute data from numerous non-RCTs are available. Recurrence in PSD is highly dependent on surgical procedure and by follow-up time; both must be considered when drawing conclusions regarding the efficacy of a procedure.",
"title": ""
},
{
"docid": "3b125237578f4505a0ca6c9477e2b766",
"text": "With today’s technology, elderly users could be supported in living independently in their own homes for a prolonged period of time. Commercially available products enable remote monitoring of the state of the user, enhance social networks, and even support elderly citizens in their everyday routines. Whereas technology seems to be in place to support elderly users, one might question the value of present solutions in terms of solving real user problems such as loneliness and self-efficacy. Furthermore, products tend to be complex in use and do not relate to the reference framework of elderly users. Consequently, acceptability of many present solutions tends to be low. This paper presents a design vision of assisted living solutions that elderly love to use. Based on earlier work, five concrete design goals have been identified that are specific to assisted living services for elderly users. The vision is illustrated by three examples of ongoing work; these cases present the design process of prototypes that are being tested in the field with elderly users. Even though the example cases are limited in terms of number of participants and quantitative data, the qualitative feedback and design experiences can serve as inspiration for designers of assisted living services.",
"title": ""
},
{
"docid": "0ede49c216f911cd01b3bfcf0c539d6e",
"text": "Distribution patterns along a slope and vertical root distribution were compared among seven major woody species in a secondary forest of the warm-temperate zone in central Japan in relation to differences in soil moisture profiles through a growing season among different positions along the slope. Pinus densiflora, Juniperus rigida, Ilex pedunculosa and Lyonia ovalifolia, growing mostly on the upper part of the slope with shallow soil depth had shallower roots. Quercus serrata and Quercus glauca, occurring mostly on the lower slope with deep soil showed deeper rooting. Styrax japonica, mainly restricted to the foot slope, had shallower roots in spite of growing on the deepest soil. These relations can be explained by the soil moisture profile under drought at each position on the slope. On the upper part of the slope and the foot slope, deep rooting brings little advantage in water uptake from the soil due to the total drying of the soil and no period of drying even in the shallow soil, respectively. However, deep rooting is useful on the lower slope where only the deep soil layer keeps moist. This was supported by better diameter growth of a deep-rooting species on deeper soil sites than on shallower soil sites, although a shallow-rooting species showed little difference between them.",
"title": ""
},
{
"docid": "9f60376e3371ac489b4af90026041fa7",
"text": "There is a substantive body of research focusing on women's experiences of intimate partner violence (IPV), but a lack of qualitative studies focusing on men's experiences as victims of IPV. This article addresses this gap in the literature by paying particular attention to hegemonic masculinities and men's perceptions of IPV. Men ( N = 9) participated in in-depth interviews. Interview data were rigorously subjected to thematic analysis, which revealed five key themes in the men's narratives: fear of IPV, maintaining power and control, victimization as a forbidden narrative, critical understanding of IPV, and breaking the silence. Although the men share similar stories of victimization as women, the way this is influenced by their gendered histories is different. While some men reveal a willingness to disclose their victimization and share similar fear to women victims, others reframe their victim status in a way that sustains their own power and control. The men also draw attention to the contextual realities that frame abuse, including histories of violence against the women who used violence and the realities of communities suffering intergenerational affects of colonized histories. The findings reinforce the importance of in-depth qualitative work toward revealing the context of violence, understanding the impact of fear, victimization, and power/control on men's mental health as well as the outcome of legal and support services and lack thereof. A critical discussion regarding the gendered context of violence, power within relationships, and addressing men's need for support without redefining victimization or taking away from policies and support for women's ongoing victimization concludes the work.",
"title": ""
},
{
"docid": "283c6f04a5409a56fa366832c8a93c9c",
"text": "A substantial body of work has examined how exploitative and exploratory learning processes need to be balanced within an organization in order to increase innovation, productivity, and firm performance. Since exploration and exploitation require different resources, structures, and processes, several approaches to balancing these activities have been suggested; one of which is simultaneous implementation which is termed ambidexterity. In this paper, we adjust the lens and suggest that equally crucial issues to resolve are (a) defining ‘balance’ and (b) determining criteria for assessing ‘appropriate.’ We argue that balance does not necessarily require identical proportions of exploration and exploitation and propose different mixes of these two processes leading to different ambidexterity configurations. Three specific ambidexterity configurations are examined in terms of their distinct contributions to strategic objectives. In addition we argue that several contingency factors (organizational and environmental) influence the relation between particular ambidexterity configurations and performance. Therefore an ambidexterity configurations need to change and evolve to achieve optimum performance over time. We contribute to emerging research in contingency theory, organizational learning, and strategic management.",
"title": ""
},
{
"docid": "e42a1faf3d983bac59c0bfdd79212093",
"text": "L eadership matters, according to prominent leadership scholars (see also Bennis, 2007). But what is leadership? That turns out to be a challenging question to answer. Leadership is a complex and diverse topic, and trying to make sense of leadership research can be an intimidating endeavor. One comprehensive handbook of leadership (Bass, 2008), covering more than a century of scientific study, comprises more than 1,200 pages of text and more than 200 additional pages of references! There is clearly a substantial scholarly body of leadership theory and research that continues to grow each year. Given the sheer volume of leadership scholarship that is available, our purpose is not to try to review it all. That is why our focus is on the nature or essence of leadership as we and our chapter authors see it. But to fully understand and appreciate the nature of leadership, it is essential that readers have some background knowledge of the history of leadership research, the various theoretical streams that have evolved over the years, and emerging issues that are pushing the boundaries of the leadership frontier. Further complicating our task is that more than one hundred years of leadership research have led to several paradigm shifts and a voluminous body of knowledge. On several occasions, scholars of leadership became quite frustrated by the large amount of false starts, incremental theoretical advances, and contradictory findings. As stated more than five decades ago by Warren Bennis (1959, pp. 259–260), “Of all the hazy and confounding areas in social psychology, leadership theory undoubtedly contends for Leadership: Past, Present, and Future",
"title": ""
},
{
"docid": "5b79a4fcedaebf0e64b7627b2d944e22",
"text": "Self-replication is a key aspect of biological life that has been largely overlooked in Artificial Intelligence systems. Here we describe how to build and train self-replicating neural networks. The network replicates itself by learning to output its own weights. The network is designed using a loss function that can be optimized with either gradient-based or nongradient-based methods. We also describe a method we call regeneration to train the network without explicit optimization, by injecting the network with predictions of its own parameters. The best solution for a self-replicating network was found by alternating between regeneration and optimization steps. Finally, we describe a design for a self-replicating neural network that can solve an auxiliary task such as MNIST image classification. We observe that there is a trade-off between the network’s ability to classify images and its ability to replicate, but training is biased towards increasing its specialization at image classification at the expense of replication. This is analogous to the trade-off between reproduction and other tasks observed in nature. We suggest that a selfreplication mechanism for artificial intelligence is useful because it introduces the possibility of continual improvement through natural selection.",
"title": ""
},
{
"docid": "a691642e6d27c0df3508a2ab953e4392",
"text": "Deep Learning has enabled remarkable progress over the last years on a variety of tasks, such as image recognition, speech recognition, and machine translation. One crucial aspect for this progress are novel neural architectures. Currently employed architectures have mostly been developed manually by human experts, which is a time-consuming and error-prone process. Because of this, there is growing interest in automated neural architecture search methods. We provide an overview of existing work in this field of research and categorize them according to three dimensions: search space, search strategy, and performance estima-",
"title": ""
},
{
"docid": "d96237fca40ac097e52146549672fbdf",
"text": "Cannabidiol (CBD) is a phytocannabinoid with therapeutic properties for numerous disorders exerted through molecular mechanisms that are yet to be completely identified. CBD acts in some experimental models as an anti-inflammatory, anticonvulsant, anti-oxidant, anti-emetic, anxiolytic and antipsychotic agent, and is therefore a potential medicine for the treatment of neuroinflammation, epilepsy, oxidative injury, vomiting and nausea, anxiety and schizophrenia, respectively. The neuroprotective potential of CBD, based on the combination of its anti-inflammatory and anti-oxidant properties, is of particular interest and is presently under intense preclinical research in numerous neurodegenerative disorders. In fact, CBD combined with Δ(9)-tetrahydrocannabinol is already under clinical evaluation in patients with Huntington's disease to determine its potential as a disease-modifying therapy. The neuroprotective properties of CBD do not appear to be exerted by the activation of key targets within the endocannabinoid system for plant-derived cannabinoids like Δ(9)-tetrahydrocannabinol, i.e. CB(1) and CB(2) receptors, as CBD has negligible activity at these cannabinoid receptors, although certain activity at the CB(2) receptor has been documented in specific pathological conditions (i.e. damage of immature brain). Within the endocannabinoid system, CBD has been shown to have an inhibitory effect on the inactivation of endocannabinoids (i.e. inhibition of FAAH enzyme), thereby enhancing the action of these endogenous molecules on cannabinoid receptors, which is also noted in certain pathological conditions. CBD acts not only through the endocannabinoid system, but also causes direct or indirect activation of metabotropic receptors for serotonin or adenosine, and can target nuclear receptors of the PPAR family and also ion channels.",
"title": ""
},
{
"docid": "609806e76f3f919da03900165c2727b8",
"text": "Modern and powerful mobile devices comprise an attractive target for any potential intruder or malicious code. The usual goal of an attack is to acquire users’ sensitive data or compromise the device so as to use it as a stepping stone (or bot) to unleash a number of attacks to other targets. In this paper, we focus on the popular iPhone device. We create a new stealth and airborne malware namely iSAM able to wirelessly infect and self-propagate to iPhone devices. iSAM incorporates six different malware mechanisms, and is able to connect back to the iSAM bot master server to update its programming logic or to obey commands and unleash a synchronized attack. Our analysis unveils the internal mechanics of iSAM and discusses the way all iSAM components contribute towards achieving its goals. Although iSAM has been specifically designed for iPhone it can be easily modified to attack any iOS-based device.",
"title": ""
},
{
"docid": "6f049f55c1b6f65284c390bd9a2d7511",
"text": "Thanks to their state-of-the-art performance, deep neural networks are increasingly used for object recognition. To achieve these results, they use millions of parameters to be trained. However, when targetting embedded applications the size of these models becomes problematic. As a consequence, their usage on smartphones or other resource limited devices is prohibited. In this paper we introduce a novel compression method for deep neural networks that is performed during the learning phase. It consists in adding an extra regularization term to the cost function of fully-connected layers. We combine this method with Product Quantization (PQ) of the trained weights for higher savings in storage consumption. We evaluate our method on two data sets (MNIST and CIFAR10), on which we achieve significantly larger compression rates than state-of-the-art methods.",
"title": ""
},
{
"docid": "ef0e2fb10fe5a3a5b2676f7630989d14",
"text": "This paper presents a novel method of characterizing optically transparent diamond-grid unit cells at millimeter-wave (mmWave) spectrum. The unit cell consists of Ag-alloy grids featuring 2000-Å thickness and $3 \\mu \\mathrm{m}$ grid-width, resulting in 88 % optical transmittance and sheet resistance of $\\pmb{3.22 \\Omega/\\mathrm{sq}}$. The devised characterization method enables accurate and efficient modeling of transparent circuits at mmWave. The validity of this approach is studied by devising an optically transparent patch antenna operating at 30.34 GHz with a measured gain of 3.2 dBi. The featured analysis and demonstration paves way to a novel concept of integrating optically transparent antennas within the active region of display panels in the future.",
"title": ""
},
{
"docid": "f05718832e9e8611b4cd45b68d0f80e3",
"text": "Conflict occurs frequently in any workplace; health care is not an exception. The negative consequences include dysfunctional team work, decreased patient satisfaction, and increased employee turnover. Research demonstrates that training in conflict resolution skills can result in improved teamwork, productivity, and patient and employee satisfaction. Strategies to address a disruptive physician, a particularly difficult conflict situation in healthcare, are addressed.",
"title": ""
}
] |
scidocsrr
|
2e9015433f83b79fb13724ffacc0bdad
|
Robot Faces that Follow Gaze Facilitate Attentional Engagement and Increase Their Likeability
|
[
{
"docid": "ad7f49832562d27534f11b162e28f51b",
"text": "Gaze is an important component of social interaction. The function, evolution and neurobiology of gaze processing are therefore of interest to a number of researchers. This review discusses the evolutionary role of social gaze in vertebrates (focusing on primates), and a hypothesis that this role has changed substantially for primates compared to other animals. This change may have been driven by morphological changes to the face and eyes of primates, limitations in the facial anatomy of other vertebrates, changes in the ecology of the environment in which primates live, and a necessity to communicate information about the environment, emotional and mental states. The eyes represent different levels of signal value depending on the status, disposition and emotional state of the sender and receiver of such signals. There are regions in the monkey and human brain which contain neurons that respond selectively to faces, bodies and eye gaze. The ability to follow another individual's gaze direction is affected in individuals with autism and other psychopathological disorders, and after particular localized brain lesions. The hypothesis that gaze following is \"hard-wired\" in the brain, and may be localized within a circuit linking the superior temporal sulcus, amygdala and orbitofrontal cortex is discussed.",
"title": ""
}
] |
[
{
"docid": "45eb2d7b74f485e9eeef584555e38316",
"text": "With the increasing demand of massive multimodal data storage and organization, cross-modal retrieval based on hashing technique has drawn much attention nowadays. It takes the binary codes of one modality as the query to retrieve the relevant hashing codes of another modality. However, the existing binary constraint makes it difficult to find the optimal cross-modal hashing function. Most approaches choose to relax the constraint and perform thresholding strategy on the real-value representation instead of directly solving the original objective. In this paper, we first provide a concrete analysis about the effectiveness of multimodal networks in preserving the inter- and intra-modal consistency. Based on the analysis, we provide a so-called Deep Binary Reconstruction (DBRC) network that can directly learn the binary hashing codes in an unsupervised fashion. The superiority comes from a proposed simple but efficient activation function, named as Adaptive Tanh (ATanh). The ATanh function can adaptively learn the binary codes and be trained via back-propagation. Extensive experiments on three benchmark datasets demonstrate that DBRC outperforms several state-of-the-art methods in both image2text and text2image retrieval task.",
"title": ""
},
{
"docid": "23cc8b190e9de5177cccf2f918c1ad45",
"text": "NFC is a standardised technology providing short-range RFID communication channels for mobile devices. Peer-to-peer applications for mobile devices are receiving increased interest and in some cases these services are relying on NFC communication. It has been suggested that NFC systems are particularly vulnerable to relay attacks, and that the attacker’s proxy devices could even be implemented using off-the-shelf NFC-enabled devices. This paper describes how a relay attack can be implemented against systems using legitimate peer-to-peer NFC communication by developing and installing suitable MIDlets on the attacker’s own NFC-enabled mobile phones. The attack does not need to access secure program memory nor use any code signing, and can use publicly available APIs. We go on to discuss how relay attack countermeasures using device location could be used in the mobile environment. These countermeasures could also be applied to prevent relay attacks on contactless applications using ‘passive’ NFC on mobile phones.",
"title": ""
},
{
"docid": "e94afab2ce61d7426510a5bcc88f7ca8",
"text": "Community detection is an important task in network analysis, in which we aim to learn a network partition that groups together vertices with similar community-level connectivity patterns. By finding such groups of vertices with similar structural roles, we extract a compact representation of the network’s large-scale structure, which can facilitate its scientific interpretation and the prediction of unknown or future interactions. Popular approaches, including the stochastic block model, assume edges are unweighted, which limits their utility by discarding potentially useful information. We introduce the weighted stochastic block model (WSBM), which generalizes the stochastic block model to networks with edge weights drawn from any exponential family distribution. This model learns from both the presence and weight of edges, allowing it to discover structure that would otherwise be hidden when weights are discarded or thresholded. We describe a Bayesian variational algorithm for efficiently approximating this model’s posterior distribution over latent block structures. We then evaluate the WSBM’s performance on both edge-existence and edge-weight prediction tasks for a set of real-world weighted networks. In all cases, the WSBM performs as well or better than the best alternatives on these tasks. community detection, weighted relational data, block models, exponential family, variational Bayes.",
"title": ""
},
{
"docid": "de99a984795645bc2e9fb4b3e3173807",
"text": "Neural networks are a family of powerful machine learning models. is book focuses on the application of neural network models to natural language data. e first half of the book (Parts I and II) covers the basics of supervised machine learning and feed-forward neural networks, the basics of working with machine learning over language data, and the use of vector-based rather than symbolic representations for words. It also covers the computation-graph abstraction, which allows to easily define and train arbitrary neural networks, and is the basis behind the design of contemporary neural network software libraries. e second part of the book (Parts III and IV) introduces more specialized neural network architectures, including 1D convolutional neural networks, recurrent neural networks, conditioned-generation models, and attention-based models. ese architectures and techniques are the driving force behind state-of-the-art algorithms for machine translation, syntactic parsing, and many other applications. Finally, we also discuss tree-shaped networks, structured prediction, and the prospects of multi-task learning.",
"title": ""
},
{
"docid": "2be58a0a458115fb9ef00627cc0580e0",
"text": "OBJECTIVE\nTo determine the physical and psychosocial impact of macromastia on adolescents considering reduction mammaplasty in comparison with healthy adolescents.\n\n\nMETHODS\nThe following surveys were administered to adolescents with macromastia and control subjects, aged 12 to 21 years: Short-Form 36v2, Rosenberg Self-Esteem Scale, Breast-Related Symptoms Questionnaire, and Eating-Attitudes Test-26 (EAT-26). Demographic variables and self-reported breast symptoms were compared between the 2 groups. Linear regression models, unadjusted and adjusted for BMI category (normal weight, overweight, obese), were fit to determine the effect of case status on survey score. Odds ratios for the risk of disordered eating behaviors (EAT-26 score ≥ 20) in cases versus controls were also determined.\n\n\nRESULTS\nNinety-six subjects with macromastia and 103 control subjects participated in this study. Age was similar between groups, but subjects with macromastia had a higher BMI (P = .02). Adolescents with macromastia had lower Short-Form 36v2 domain, Rosenberg Self-Esteem Scale, and Breast-Related Symptoms Questionnaire scores and higher EAT-26 scores compared with controls. Macromastia was also associated with a higher risk of disordered eating behaviors. In almost all cases, the impact of macromastia was independent of BMI category.\n\n\nCONCLUSIONS\nMacromastia has a substantial negative impact on health-related quality of life, self-esteem, physical symptoms, and eating behaviors in adolescents with this condition. These observations were largely independent of BMI category. Health care providers should be aware of these important negative health outcomes that are associated with macromastia and consider early evaluation for adolescents with this condition.",
"title": ""
},
{
"docid": "d43dc521d3f0f17ccd4840d6081dcbfe",
"text": "In Vehicular Ad hoc NETworks (VANETs), authentication is a crucial security service for both inter-vehicle and vehicle-roadside communications. On the other hand, vehicles have to be protected from the misuse of their private data and the attacks on their privacy, as well as to be capable of being investigated for accidents or liabilities from non-repudiation. In this paper, we investigate the authentication issues with privacy preservation and non-repudiation in VANETs. We propose a novel framework with preservation and repudiation (ACPN) for VANETs. In ACPN, we introduce the public-key cryptography (PKC) to the pseudonym generation, which ensures legitimate third parties to achieve the non-repudiation of vehicles by obtaining vehicles' real IDs. The self-generated PKCbased pseudonyms are also used as identifiers instead of vehicle IDs for the privacy-preserving authentication, while the update of the pseudonyms depends on vehicular demands. The existing ID-based signature (IBS) scheme and the ID-based online/offline signature (IBOOS) scheme are used, for the authentication between the road side units (RSUs) and vehicles, and the authentication among vehicles, respectively. Authentication, privacy preservation, non-repudiation and other objectives of ACPN have been analyzed for VANETs. Typical performance evaluation has been conducted using efficient IBS and IBOOS schemes. We show that the proposed ACPN is feasible and adequate to be used efficiently in the VANET environment.",
"title": ""
},
{
"docid": "447c5b2db5b1d7555cba2430c6d73a35",
"text": "Recent years have seen a proliferation of complex Advanced Driver Assistance Systems (ADAS), in particular, for use in autonomous cars. These systems consist of sensors and cameras as well as image processing and decision support software components. They are meant to help drivers by providing proper warnings or by preventing dangerous situations. In this paper, we focus on the problem of design time testing of ADAS in a simulated environment. We provide a testing approach for ADAS by combining multi-objective search with surrogate models developed based on neural networks. We use multi-objective search to guide testing towards the most critical behaviors of ADAS. Surrogate modeling enables our testing approach to explore a larger part of the input search space within limited computational resources. We characterize the condition under which the multi-objective search algorithm behaves the same with and without surrogate modeling, thus showing the accuracy of our approach. We evaluate our approach by applying it to an industrial ADAS system. Our experiment shows that our approach automatically identifies test cases indicating critical ADAS behaviors. Further, we show that combining our search algorithm with surrogate modeling improves the quality of the generated test cases, especially under tight and realistic computational resources.",
"title": ""
},
{
"docid": "47bf54c0d51596f39929e8f3e572a051",
"text": "Parameterizations of triangulated surfaces are used in an increasing number of mesh processing applications for various purposes. Although demands vary, they are often required to preserve the surface metric and thus minimize angle, area and length deformation. However, most of the existing techniques primarily target at angle preservation while disregarding global area deformation. In this paper an energy functional is proposed, that quantifies angle and global area deformations simultaneously, while the relative importance between angle and area preservation can be controlled by the user through a parameter. We show how this parameter can be chosen to obtain parameterizations, that are optimized for an uniform sampling of the surface of a model. Maps obtained by minimizing this energy are well suited for applications that desire an uniform surface sampling, like re-meshing or mapping regularly patterned textures. Besides being invariant under rotation and translation of the domain, the energy is designed to prevent face flips during minimization and does not require a fixed boundary in the parameter domain. Although the energy is nonlinear, we show how it can be minimized efficiently using non-linear conjugate gradient methods in a hierarchical optimization framework and prove the convergence of the algorithm. The ability to control the tradeoff between the degree of angle and global area preservation is demonstrated for several models of varying complexity.",
"title": ""
},
{
"docid": "e1bee61b205d29db6b2ebbaf95e9c20b",
"text": "Despite the fact that there are thousands of programming languages existing there is a huge controversy about what language is better to solve a particular problem. In this paper we discuss requirements for programming language with respect to AGI research. In this article new language will be presented. Unconventional features (e.g. probabilistic programming and partial evaluation) are discussed as important parts of language design and implementation. Besides, we consider possible applications to particular problems related to AGI. Language interpreter for Lisp-like probabilistic mixed paradigm programming language is implemented in Haskell.",
"title": ""
},
{
"docid": "3a1019c31ff34f8a45c65703c1528fc4",
"text": "The increasing trend of studying the innate softness of robotic structures and amalgamating it with the benefits of the extensive developments in the field of embodied intelligence has led to sprouting of a relatively new yet extremely rewarding sphere of technology. The fusion of current deep reinforcement algorithms with physical advantages of a soft bio-inspired structure certainly directs us to a fruitful prospect of designing completely self-sufficient agents that are capable of learning from observations collected from their environment to achieve a task they have been assigned. For soft robotics structure possessing countless degrees of freedom, it is often not easy (something not even possible) to formulate mathematical constraints necessary for training a deep reinforcement learning (DRL) agent for the task in hand, hence, we resolve to imitation learning techniques due to ease of manually performing such tasks like manipulation that could be comfortably mimicked by our agent. Deploying current imitation learning algorithms on soft robotic systems have been observed to provide satisfactory results but there are still challenges in doing so. This review article thus posits an overview of various such algorithms along with instances of them being applied to real world scenarios and yielding state-of-the-art results followed by brief descriptions on various pristine branches of DRL research that may be centers of future research in this field of interest.",
"title": ""
},
{
"docid": "4d73c50244d16dab6d3773dbeebbae98",
"text": "We describe the latest version of Microsoft's conversational speech recognition system for the Switchboard and CallHome domains. The system adds a CNN-BLSTM acoustic model to the set of model architectures we combined previously, and includes character-based and dialog session aware LSTM language models in rescoring. For system combination we adopt a two-stage approach, whereby acoustic model posteriors are first combined at the senone/frame level, followed by a word-level voting via confusion networks. We also added another language model rescoring step following the confusion network combination. The resulting system yields a 5.1% word error rate on the NIST 2000 Switchboard test set, and 9.8% on the CallHome subset.",
"title": ""
},
{
"docid": "79ad27cffbbcbe3a49124abd82c6e477",
"text": "In this paper we address the following problem in web document and information retrieval (IR): How can we use long-term context information to gain better IR performance? Unlike common IR methods that use bag of words representation for queries and documents, we treat them as a sequence of words and use long short term memory (LSTM) to capture contextual dependencies. To the best of our knowledge, this is the first time that LSTM is applied to information retrieval tasks. Unlike training traditional LSTMs, the training strategy is different due to the special nature of information retrieval problem. Experimental evaluation on an IR task derived from the Bing web search demonstrates the ability of the proposed method in addressing both lexical mismatch and long-term context modelling issues, thereby, significantly outperforming existing state of the art methods for web document retrieval task.",
"title": ""
},
{
"docid": "b0766f310c4926b475bb646911a27f34",
"text": "Currently, two frameworks of causal reasoning compete: Whereas dependency theories focus on dependencies between causes and effects, dispositional theories model causation as an interaction between agents and patients endowed with intrinsic dispositions. One important finding providing a bridge between these two frameworks is that failures of causes to generate their effects tend to be differentially attributed to agents and patients regardless of their location on either the cause or the effect side. To model different types of error attribution, we augmented a causal Bayes net model with separate error sources for causes and effects. In several experiments, we tested this new model using the size of Markov violations as the empirical indicator of differential assumptions about the sources of error. As predicted by the model, the size of Markov violations was influenced by the location of the agents and was moderated by the causal structure and the type of causal variables.",
"title": ""
},
{
"docid": "569700bd1114b1b93a13af25b2051631",
"text": "Empathy and sympathy play crucial roles in much of human social interaction and are necessary components for healthy coexistence. Sympathy is thought to be a proxy for motivating prosocial behavior and providing the affective and motivational base for moral development. The purpose of the present study was to use functional MRI to characterize developmental changes in brain activation in the neural circuits underpinning empathy and sympathy. Fifty-seven individuals, whose age ranged from 7 to 40 years old, were presented with short animated visual stimuli depicting painful and non-painful situations. These situations involved either a person whose pain was accidentally caused or a person whose pain was intentionally inflicted by another individual to elicit empathic (feeling as the other) or sympathetic (feeling concern for the other) emotions, respectively. Results demonstrate monotonic age-related changes in the amygdala, supplementary motor area, and posterior insula when participants were exposed to painful situations that were accidentally caused. When participants observed painful situations intentionally inflicted by another individual, age-related changes were detected in the dorsolateral prefrontal and ventromedial prefrontal cortex, with a gradual shift in that latter region from its medial to its lateral portion. This pattern of activation reflects a change from a visceral emotional response critical for the analysis of the affective significance of stimuli to a more evaluative function. Further, these data provide evidence for partially distinct neural mechanisms subserving empathy and sympathy, and demonstrate the usefulness of a developmental neurobiological approach to the new emerging area of moral neuroscience.",
"title": ""
},
{
"docid": "023302562ddfe48ac81943fedcf881b7",
"text": "Knitty is an interactive design system for creating knitted animals. The user designs a 3D surface model using a sketching interface. The system automatically generates a knitting pattern and then visualizes the shape of the resulting 3D animal model by applying a simple physics simulation. The user can see the resulting shape before beginning the actual knitting. The system also provides a production assistant interface for novices. The user can easily understand how to knit each stitch and what to do in each step. In a workshop for novices, we observed that even children can design their own knitted animals using our system.",
"title": ""
},
{
"docid": "691032ab4d9bcc1f536b1b8a5d8e73ae",
"text": "Many decisions must be made under stress, and many decision situations elicit stress responses themselves. Thus, stress and decision making are intricately connected, not only on the behavioral level, but also on the neural level, i.e., the brain regions that underlie intact decision making are regions that are sensitive to stress-induced changes. The purpose of this review is to summarize the findings from studies that investigated the impact of stress on decision making. The review includes those studies that examined decision making under stress in humans and were published between 1985 and October 2011. The reviewed studies were found using PubMed and PsycInfo searches. The review focuses on studies that have examined the influence of acutely induced laboratory stress on decision making and that measured both decision-making performance and stress responses. Additionally, some studies that investigated decision making under naturally occurring stress levels and decision-making abilities in patients who suffer from stress-related disorders are described. The results from the studies that were included in the review support the assumption that stress affects decision making. If stress confers an advantage or disadvantage in terms of outcome depends on the specific task or situation. The results also emphasize the role of mediating and moderating variables. The results are discussed with respect to underlying psychological and neural mechanisms, implications for everyday decision making and future research directions.",
"title": ""
},
{
"docid": "ea765da47c4280f846fe144570a755dc",
"text": "A new nonlinear noise reduction method is presented that uses the discrete wavelet transform. Similar to Donoho (1995) and Donohoe and Johnstone (1994, 1995), the authors employ thresholding in the wavelet transform domain but, following a suggestion by Coifman, they use an undecimated, shift-invariant, nonorthogonal wavelet transform instead of the usual orthogonal one. This new approach can be interpreted as a repeated application of the original Donoho and Johnstone method for different shifts. The main feature of the new algorithm is a significantly improved noise reduction compared to the original wavelet based approach. This holds for a large class of signals, both visually and in the l/sub 2/ sense, and is shown theoretically as well as by experimental results.",
"title": ""
},
{
"docid": "4427f79777bfe5ea1617f06a5aa6f0cc",
"text": "Despite decades of sustained effort, memory corruption attacks continue to be one of the most serious security threats faced today. They are highly sought after by attackers, as they provide ultimate control --- the ability to execute arbitrary low-level code. Attackers have shown time and again their ability to overcome widely deployed countermeasures such as Address Space Layout Randomization (ASLR) and Data Execution Prevention (DEP) by crafting Return Oriented Programming (ROP) attacks. Although Turing-complete ROP attacks have been demonstrated in research papers, real-world ROP payloads have had a more limited objective: that of disabling DEP so that injected native code attacks can be carried out. In this paper, we provide a systematic defense, called Control Flow and Code Integrity (CFCI), that makes injected native code attacks impossible. CFCI achieves this without sacrificing compatibility with existing software, the need to replace system programs such as the dynamic loader, and without significant performance penalty. We will release CFCI as open-source software by the time of this conference.",
"title": ""
},
{
"docid": "3969a0156c558020ca1de3b978c3ab4e",
"text": "Silver-Russell syndrome (SRS) and Beckwith-Wiedemann syndrome (BWS) are 2 clinically opposite growth-affecting disorders belonging to the group of congenital imprinting disorders. The expression of both syndromes usually depends on the parental origin of the chromosome in which the imprinted genes reside. SRS is characterized by severe intrauterine and postnatal growth retardation with various additional clinical features such as hemihypertrophy, relative macrocephaly, fifth finger clinodactyly, and triangular facies. BWS is an overgrowth syndrome with many additional clinical features such as macroglossia, organomegaly, and an increased risk of childhood tumors. Both SRS and BWS are clinically and genetically heterogeneous, and for clinical diagnosis, different diagnostic scoring systems have been developed. Six diagnostic scoring systems for SRS and 4 for BWS have been previously published. However, neither syndrome has common consensus diagnostic criteria yet. Most cases of SRS and BWS are associated with opposite epigenetic or genetic abnormalities in the 11p15 chromosomal region leading to opposite imbalances in the expression of imprinted genes. SRS is also caused by maternal uniparental disomy 7, which is usually identified in 5-10% of the cases, and is therefore the first imprinting disorder that affects 2 different chromosomes. In this review, we describe in detail the clinical diagnostic criteria and scoring systems as well as molecular causes in both SRS and BWS.",
"title": ""
},
{
"docid": "65aa27cc08fd1f3532f376b536c452ba",
"text": "Design work and design knowledge in Information Systems (IS) is important for both research and practice. Yet there has been comparatively little critical attention paid to the problem of specifying design theory so that it can be communicated, justified, and developed cumulatively. In this essay we focus on the structural components or anatomy of design theories in IS as a special class of theory. In doing so, we aim to extend the work of Walls, Widemeyer and El Sawy (1992) on the specification of information systems design theories (ISDT), drawing on other streams of thought on design research and theory to provide a basis for a more systematic and useable formulation of these theories. We identify eight separate components of design theories: (1) purpose and scope, (2) constructs, (3) principles of form and function, (4) artifact mutability, (5) testable propositions, (6) justificatory knowledge (kernel theories), (7) principles of implementation, and (8) an expository instantiation. This specification includes components missing in the Walls et al. adaptation of Dubin (1978) and Simon (1969) and also addresses explicitly problems associated with the role of instantiations and the specification of design theories for methodologies and interventions as well as for products and applications. The essay is significant as the unambiguous establishment of design knowledge as theory gives a sounder base for arguments for the rigor and legitimacy of IS as an applied discipline and for its continuing progress. A craft can proceed with the copying of one example of a design artifact by one artisan after another. A discipline cannot.",
"title": ""
}
] |
scidocsrr
|
14306881432e7b8363e84157717369f4
|
Performance considerations of network functions virtualization using containers
|
[
{
"docid": "b6c62936aef87ab2cce565f6142424bf",
"text": "Concerns have been raised about the performance of PC-based virtual routers as they do packet processing in software. Furthermore, it becomes challenging to maintain isolation among virtual routers due to resource contention in a shared environment. Hardware vendors recognize this issue and PC hardware with virtualization support (SR-IOV and Intel-VTd) has been introduced in recent years. In this paper, we investigate how such hardware features can be integrated with two different virtualization technologies (LXC and KVM) to enhance performance and isolation of virtual routers on shared environments. We compare LXC and KVM and our results indicate that KVM in combination with hardware support can provide better trade-offs between performance and isolation. We notice that KVM has slightly lower throughput, but has superior isolation properties by providing more explicit control of CPU resources. We demonstrate that KVM allows defining a CPU share for a virtual router, something that is difficult to achieve in LXC, where packet forwarding is done in a kernel shared by all virtual routers.",
"title": ""
}
] |
[
{
"docid": "19fe8c6452dd827ffdd6b4c6e28bc875",
"text": "Motivation for the investigation of position and waypoint controllers is the demand for Unattended Aerial Systems (UAS) capable of fulfilling e.g. surveillance tasks in contaminated or in inaccessible areas. Hence, this paper deals with the development of a 2D GPS-based position control system for 4 Rotor Helicopters able to keep positions above given destinations as well as to navigate between waypoints while minimizing trajectory errors. Additionally, the novel control system enables permanent full speed flight with reliable altitude keeping considering that the resulting lift is decreasing while changing pitch or roll angles for position control. In the following chapters the control procedure for position control and waypoint navigation is described. The dynamic behavior was simulated by means of Matlab/Simulink and results are shown. Further, the control strategies were implemented on a flight demonstrator for validation, experimental results are provided and a comparison is discussed.",
"title": ""
},
{
"docid": "ca1729ffc67b37c39eca7d98115a55ec",
"text": "Causal inference is one of the fundamental problems in science. In recent years, several methods have been proposed for discovering causal structure from observational data. These methods, however, focus specifically on numeric data, and are not applicable on nominal or binary data. In this work, we focus on causal inference for binary data. Simply put, we propose causal inference by compression. To this end we propose an inference framework based on solid information theoretic foundations, i.e. Kolmogorov complexity. However, Kolmogorov complexity is not computable, and hence we propose a practical and computable instantiation based on the Minimum Description Length (MDL) principle. To apply the framework in practice, we propose ORIGO, an efficient method for inferring the causal direction from binary data. ORIGO employs the lossless PACK compressor, works directly on the data and does not require assumptions about neither distributions nor the type of causal relations. Extensive evaluation on synthetic, benchmark, and real-world data shows that ORIGO discovers meaningful causal relations, and outperforms state-of-the-art methods by a wide margin.",
"title": ""
},
{
"docid": "17cd4876c5189cf91fbe1ad4cfd1c962",
"text": "Ad click prediction is a task to estimate the click-through rate (CTR) in sponsored ads, the accuracy of which impacts user search experience and businesses' revenue. State-of-the-art sponsored search systems typically model it as a classification problem and employ machine learning approaches to predict the CTR per ad. In this paper, we propose a new approach to predict ad CTR in sequence which considers user browsing behavior and the impact of top ads quality to the current one. To the best of our knowledge, this is the first attempt in the literature to predict ad CTR by using Recurrent Neural Networks (RNN) with Long Short-Term Memory (LSTM) cells. The proposed model is evaluated on a real dataset and we show that LSTM-RNN outperforms DNN model on both AUC and RIG. Since the RNN inference is time consuming, a simplified version is also proposed, which can achieve more than half of the gain with the overall serving cost almost unchanged.",
"title": ""
},
{
"docid": "670b1d7cf683732c38d197126e094a74",
"text": "Deep learning software demands reliability and performance. However, many of the existing deep learning frameworks are software libraries that act as an unsafe DSL in Python and a computation graph interpreter. We present DLVM, a design and implementation of a compiler infrastructure with a linear algebra intermediate representation, algorithmic differentiation by adjoint code generation, domainspecific optimizations and a code generator targeting GPU via LLVM. Designed as a modern compiler infrastructure inspired by LLVM, DLVM is more modular and more generic than existing deep learning compiler frameworks, and supports tensor DSLs with high expressivity. With our prototypical staged DSL embedded in Swift, we argue that the DLVM system enables a form of modular, safe and performant frameworks for deep learning.",
"title": ""
},
{
"docid": "420a3d0059a91e78719955b4cc163086",
"text": "The superior skills of experts, such as accomplished musicians and chess masters, can be amazing to most spectators. For example, club-level chess players are often puzzled by the chess moves of grandmasters and world champions. Similarly, many recreational athletes find it inconceivable that most other adults – regardless of the amount or type of training – have the potential ever to reach the performance levels of international competitors. Especially puzzling to philosophers and scientists has been the question of the extent to which expertise requires innate gifts versus specialized acquired skills and abilities. One of the most widely used and simplest methods of gathering data on exceptional performance is to interview the experts themselves. But are experts always capable of describing their thoughts, their behaviors, and their strategies in a manner that would allow less-skilled individuals to understand how the experts do what they do, and perhaps also understand how they might reach expert level through appropriate training? To date, there has been considerable controversy over the extent to which experts are capable of explaining the nature and structure of their exceptional performance. Some pioneering scientists, such as Binet (1893 / 1966), questioned the validity of the experts’ descriptions when they found that some experts gave reports inconsistent with those of other experts. To make matters worse, in those rare cases that allowed verification of the strategy by observing the performance, discrepancies were found between the reported strategies and the observations (Watson, 1913). Some of these discrepancies were explained, in part, by the hypothesis that some processes were not normally mediated by awareness/attention and that the mere act of engaging in self-observation (introspection) during performance changed the content of ongoing thought processes. These problems led most psychologists in first half of the 20th century to reject all types of introspective verbal reports as valid scientific evidence, and they focused almost exclusively on observable behavior (Boring, 1950). In response to the problems with the careful introspective analysis of images and perceptions, investigators such as John B.",
"title": ""
},
{
"docid": "18762f4c3115ae53b2b88aafde77856c",
"text": "BACKGROUND\nReconstruction of the skin defects of malar region poses some challenging problems including obvious scar formation, dog-ear formation, trapdoor deformity and displacement of surrounding anatomic landmarks such as the lower eyelid, oral commissure, ala nasi, and sideburn.\n\n\nPURPOSE\nHere, a new local flap procedure, namely the reading man procedure, for reconstruction of large malar skin defects is described.\n\n\nMATERIALS AND METHODS\nIn this technique, 2 flaps designed in an unequal Z-plasty manner are used. The first flap is transposed to the defect area, whereas the second flap is used for closure of the first flap's donor site. In the last 5 years, this technique has been used for closure of the large malar defects in 18 patients (11 men and 7 women) aged 21 to 95 years. The defect size was ranging between 3 and 8.5 cm in diameter.\n\n\nRESULTS\nA tension-free defect closure was obtained in all patients. There was no patient with dog-ear formation, ectropion, or distortion of the surrounding anatomic structures. No tumor recurrence was observed. A mean follow-up of 26 months (range, 5 mo to 3.5 y) revealed a cosmetically acceptable scar formation in all patients.\n\n\nCONCLUSIONS\nThe reading man procedure was found to be a useful and easygoing technique for the closure of malar defects, which allows defect closure without any additional excision of surrounding healthy tissue. It provides a tension-free closure of considerably large malar defects without creating distortions of the mobile anatomic structures.",
"title": ""
},
{
"docid": "737dda9cc50e5cf42523e6cadabf524e",
"text": "Maintaining incisor alignment is an important goal of orthodontic retention and can only be guaranteed by placement of an intact, passive and permanent fixed retainer. Here we describe a reliable technique for bonding maxillary retainers and demonstrate all the steps necessary for both technician and clinician. The importance of increasing the surface roughness of the wire and teeth to be bonded, maintaining passivity of the retainer, especially during bonding, the use of a stiff wire and correct placement of the retainer are all discussed. Examples of adverse tooth movement from retainers with twisted and multistrand wires are shown.",
"title": ""
},
{
"docid": "a74b091706f4aeb384d2bf3d477da67d",
"text": "Amazon's Echo and its conversational agent Alexa open exciting opportunities for understanding how people perceive and interact with virtual agents. Drawing from user reviews of the Echo posted to Amazon.com, this case study explores the degree to which user reviews indicate personification of the device, sociability level of interactions, factors linked with personification, and influences on user satisfaction. Results indicate marked variance in how people refer to the device, with over half using the personified name Alexa but most referencing the device with object pronouns. Degree of device personification is linked with sociability of interactions: greater personification co-occurs with more social interactions with the Echo. Reviewers mentioning multiple member households are more likely to personify the device than reviewers mentioning living alone. Even after controlling for technical issues, personification predicts user satisfaction with the Echo.",
"title": ""
},
{
"docid": "7abe1fd1b0f2a89bf51447eaef7aa989",
"text": "End users increasingly expect ubiquitous connectivity while on the move. With a variety of wireless access technologies available, we expect to always be connected to the technology that best matches our performance goals and price points. Meanwhile, sophisticated onboard units (OBUs) enable geolocation and complex computation in support of handover. In this paper, we present an overview of vertical handover techniques and propose an algorithm empowered by the IEEE 802.21 standard, which considers the particularities of the vehicular networks (VNs), the surrounding context, the application requirements, the user preferences, and the different available wireless networks [i.e., Wireless Fidelity (Wi-Fi), Worldwide Interoperability for Microwave Access (WiMAX), and Universal Mobile Telecommunications System (UMTS)] to improve users' quality of experience (QoE). Our results demonstrate that our approach, under the considered scenario, is able to meet application requirements while ensuring user preferences are also met.",
"title": ""
},
{
"docid": "5e3d770390e03445c079c05a097fb891",
"text": "Electronic Commerce has increased the global reach of small and medium scale enterprises (SMEs); its acceptance as an IT infrastructure depends on the users’ conscious assessment of the influencing constructs as depicted in Technology Acceptance Model (TAM), Theory of Reasoned Action (TRA), Theory of Planned Behaviour (TPB), and Technology-Organization-Environment (T-O-E) model. The original TAM assumes the constructs of perceived usefulness (PU) and perceived ease of use (PEOU); TPB perceived behavioural control and subjective norms; and T-O-E firm’s size, consumer readiness, trading partners’ readiness, competitive pressure, and scope of business operation. This paper reviewed and synthesized the constructs of these models and proposed an improved TAM through T-O-E. The improved TAM and T-O-E integrated more constructs than the original TAM, T-O-E, TPB, and IDT, leading to eighteen propositions to promote and facilitate future research, and to guide explanation and prediction of IT adoption in an organized system. The integrated constructscompany mission, individual difference factors, perceived trust, and perceived service quality improve existing knowledge on EC acceptance and provide bases for informed decision(s).",
"title": ""
},
{
"docid": "654f50ccb20720fdb49a2326ae014ba9",
"text": "OBJECTIVE\nThis study was undertaken to describe the distribution of pelvic organ support stages in a population of women seen at outpatient gynecology clinics for routine gynecologic health care.\n\n\nSTUDY DESIGN\nThis was an observational study. Women seen for routine gynecologic health care at four outpatient gynecology clinics were recruited to participate. After informed consent was obtained general biographic data were collected regarding obstetric history, medical history, and surgical history. Women then underwent a pelvic examination. Pelvic organ support was measured and described according to the pelvic organ prolapse quantification system. Stages of support were evaluated by variable for trends with Pearson chi(2) statistics.\n\n\nRESULTS\nA total of 497 women were examined. The average age was 44 years, with a range of 18 to 82 years. The overall distribution of pelvic organ prolapse quantification system stages was as follows: stage 0, 6.4%; stage 1, 43.3%; stage 2, 47.7%; and stage 3, 2.6%. No subjects examined had pelvic organ prolapse quantification system stage 4 prolapse. Variables with a statistically significant trend toward increased pelvic organ prolapse quantification system stage were advancing age, increasing gravidity and parity, increasing number of vaginal births, delivery of a macrosomic infant, history of hysterectomy or pelvic organ prolapse operations, postmenopausal status, and hypertension.\n\n\nCONCLUSION\nThe distribution of the pelvic organ prolapse quantification system stages in the population revealed a bell-shaped curve, with most subjects having stage 1 or 2 support. Few subjects had either stage 0 (excellent support) or stage 3 (moderate to severe pelvic support defects) results. There was a statistically significant trend toward increased pelvic organ prolapse quantification system stage of support among women with many of the historically quoted etiologic factors for the development of pelvic organ prolapse.",
"title": ""
},
{
"docid": "5569fa921ab298e25a70d92489b273fc",
"text": "We present Centiman, a system for high performance, elastic transaction processing in the cloud. Centiman provides serializability on top of a key-value store with a lightweight protocol based on optimistic concurrency control (OCC).\n Centiman is designed for the cloud setting, with an architecture that is loosely coupled and avoids synchronization wherever possible. Centiman supports sharded transaction validation; validators can be added or removed on-the-fly in an elastic manner. Processors and validators scale independently of each other and recover from failure transparently to each other. Centiman's loosely coupled design creates some challenges: it can cause spurious aborts and it makes it difficult to implement common performance optimizations for read-only transactions. To deal with these issues, Centiman uses a watermark abstraction to asynchronously propagate information about transaction commits through the system.\n In an extensive evaluation we show that Centiman provides fast elastic scaling, low-overhead serializability for read-heavy workloads, and scales to millions of operations per second.",
"title": ""
},
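The passage above describes validation under optimistic concurrency control (OCC). The following is a minimal, illustrative sketch of backward-validation OCC with a single in-memory validator; the class names, the single-validator design, and the logical-timestamp bookkeeping are assumptions for illustration only and do not reproduce Centiman's sharded validators or watermark protocol.

```python
# Sketch of backward-validation OCC: a transaction aborts if any transaction that
# committed after it started wrote a key it read.
class Transaction:
    def __init__(self, start_ts, read_set, write_set):
        self.start_ts = start_ts          # logical timestamp when the transaction began
        self.read_set = set(read_set)     # keys read
        self.write_set = set(write_set)   # keys written

class Validator:
    def __init__(self):
        self.commit_log = []              # (commit_ts, write_set) of committed transactions
        self.next_ts = 1

    def validate_and_commit(self, txn):
        for commit_ts, write_set in self.commit_log:
            if commit_ts > txn.start_ts and (txn.read_set & write_set):
                return None               # read-write conflict: abort
        commit_ts = self.next_ts
        self.next_ts += 1
        self.commit_log.append((commit_ts, txn.write_set))
        return commit_ts                  # commit timestamp on success

v = Validator()
t1 = Transaction(start_ts=0, read_set={"x"}, write_set={"x"})
t2 = Transaction(start_ts=0, read_set={"x"}, write_set={"y"})
print(v.validate_and_commit(t1))          # commits (timestamp 1)
print(v.validate_and_commit(t2))          # aborts: t1 wrote "x" after t2 started
```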
{
"docid": "09a23ea8fc94178fdde98cc2774abc54",
"text": "Heating, Ventilation, and Air Conditioning (HVAC) accounts for about half of the energy consumption in buildings. HVAC energy consumption can be reduced by changing the indoor air temperature setpoint, but changing the setpoint too aggressively can overly reduce user comfort. We have therefore designed and implemented SPOT: a Smart Personalized Office Thermal control system that balances energy conservation with personal thermal comfort in an office environment. SPOT relies on a new model for personal thermal comfort that we call the Predicted Personal Vote model. This model quantitatively predicts human comfort based on a set of underlying measurable environmental and personal parameters. SPOT uses a set of sensors, including a Microsoft Kinect, to measure the parameters underlying the PPV model, then controls heating and cooling elements to dynamically adjust indoor temperature to maintain comfort. Based on a deployment of SPOT in a real office environment, we find that SPOT can accurately maintain personal comfort despite environmental fluctuations and allows a worker to balance personal comfort with energy use.",
"title": ""
},
{
"docid": "9950daef3ca18eeee0482717c5e5fe5e",
"text": "Rapidly growing rate of industry of earth moving machines is assured through the high performance construction machineries with complex mechanism and automation of construction activity. Design of backhoe link mechanism is critical task in context of digging force developed through actuators during the digging operation. The digging forces developed by actuators must be greater than that of the resistive forces offered by the terrain to be excavated. This paper focuses on the evaluation method of bucket capacity and digging forces required to dig the terrain for light duty construction work. This method provides the prediction of digging forces and can be applied for autonomous operation of excavation task. The evaluated digging forces can be used as boundary condition and loading conditions to carry out Finite Element Analysis of the backhoe mechanism for strength and stress analysis. A generalized breakout force and digging force model also developed using the fundamentals of kinematics of backhoe mechanism in context of robotics. An analytical approach provided for static force analysis of mini hydraulic backhoe excavator attachment.",
"title": ""
},
{
"docid": "9b5224b94b448d5dabbd545aedd293f8",
"text": "the topic (a) has been dedicated to extolling its use as a decisionmaking criterion; (b) has presented isolated numerical examples of its calculation/determination; and (c) has considered it as part of the general discussions of profitability and discussed its role in customer acquisition decisions and customer acquisition/retention trade-offs. There has been a dearth of general modeling of the topic. This paper presents a series of mathematical models for determination of customer lifetime value. The choice of the models is based on a systematic theoretical taxonomy and on assumptions grounded in customer behavior. In NADA I. NASR is a doctoral student in Marketing at the School addition, selected managerial applications of these general models of of Management, Boston University. customer lifetime value are offered. 1998 John Wiley & Sons, Inc. and Direct Marketing Educational Foundation, Inc. CCC 1094-9968/98/010017-14",
"title": ""
},
{
"docid": "2ad80de5642ab11f6aaf079bc09f4c42",
"text": "We examine the relationship between geography and ethnic homophily in Estonia, a linguistically divided country. Analyzing the physical locations and cellular communications of tens of thousands of individuals, we document a strong relationship between the ethnic concentration of an individual's geographic neighborhood and the ethnic composition of the people with whom he interacts. The empirical evidence is consistent with a theoretical model in which individuals prefer to form ties with others living close by and of the same ethnicity. Exploiting variation in the data caused by migrants and quasi-exogenous settlement patterns, we nd suggestive evidence that the ethnic composition of geographic neighborhoods has a causal in uence on the ethnic structure of social networks.",
"title": ""
},
{
"docid": "5ef325cffe20a0337eca258fa7ad8392",
"text": "DEAP (Distributed Evolutionary Algorithms in Python) is a novel volutionary computation framework for rapid prototyping and testing of ideas. Its design departs from most other existing frameworks in that it seeks to make algorithms explicit and data structures transparent, as opposed to the more common black box type of frameworks. It also incorporates easy parallelism where users need not concern themselves with gory implementation details like synchronization and load balancing, only functional decomposition. Several examples illustrate the multiple properties of DEAP.",
"title": ""
},
{
"docid": "2c2281551bc085a12e9b9bf15ff092c5",
"text": "Clustering aims at discovering groups and identifying interesting distributions and patterns in data sets. Researchers have extensively studied clustering since it arises in many application domains in engineering and social sciences. In the last years the availability of huge transactional and experimental data sets and the arising requirements for data mining created needs for clustering algorithms that scale and can be applied in diverse domains. This paper surveys clustering methods and approaches available in literature in a comparative way. It also presents the basic concepts, principles and assumptions upon which the clustering algorithms are based. Another important issue is the validity of the clustering schemes resulting from applying algorithms. This is also related to the inherent features of the data set under concern. We review and compare clustering validity measures available in the literature. Furthermore, we illustrate the issues that are underaddressed by the recent algorithms and we address new research directions.",
"title": ""
},
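The clustering survey above discusses both clustering algorithms and cluster-validity measures. As a small generic illustration (not tied to any particular algorithm from the survey), the sketch below fits k-means for several values of k on synthetic data and compares an internal validity index (silhouette); the dataset and parameter values are made up.

```python
# Compare clusterings with different k using the silhouette validity index.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=300, centers=4, cluster_std=0.8, random_state=0)

for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    print(k, round(silhouette_score(X, labels), 3))   # higher is better; peaks near k=4
```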
{
"docid": "f560dbe8f3ff47731061d67b596ec7b0",
"text": "This paper considers the problem of fixed priority scheduling of periodic tasks with arbitrary deadlines. A general criterion for the schedulability of such a task set is given. Worst case bounds are given which generalize the Liu and Layland bound. The results are shown to provide a basis for developing predictable distributed real-time systems.",
"title": ""
},
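The passage above refers to the classic Liu and Layland utilization bound, which the paper generalizes. A worked check of that bound for rate-monotonic fixed-priority scheduling is sketched below; the task set (execution times and periods) is invented for illustration.

```python
# A periodic task set passes the Liu-Layland sufficient test if its total
# utilization is at most n * (2**(1/n) - 1).
def ll_bound(n):
    return n * (2 ** (1.0 / n) - 1)

tasks = [(1, 4), (1, 5), (2, 10)]          # (worst-case execution time, period)
u = sum(c / t for c, t in tasks)           # total utilization = 0.25 + 0.2 + 0.2 = 0.65
n = len(tasks)
print(round(u, 3), round(ll_bound(n), 3))  # 0.65 <= 0.780 -> schedulable by the bound
```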
{
"docid": "9a43387bb85efe85e9395a90a7934b5f",
"text": "0. Introduction This is a manual for coding Centering Theory (Grosz et al., 1995) in Spanish. The manual is still under revision. The coding is being done on two sets of corpora: • ISL corpus. A set of task-oriented dialogues in which participants try to find a date where they can meet. Distributed by the Interactive Systems Lab at Carnegie Mellon University. Transcription conventions for this corpus can be found in Appendix A. • CallHome corpus. Spontaneous telephone conversations, distributed by the Linguistics Data Consortium at the University of Pennsylvania. Information about this corpus can be obtained from the LDC. This manual provides guidelines for how to segment discourse (Section 1), what to include in the list of forward-looking centers (Section 2), and how to rank the list (Section 3). In Section 4, we list some unresolved issues. 1. Utterance segmentation 1.1 Utterance In this section, we discuss how to segment discourse into utterances. Besides general segmentation of coordinated and subordinated clauses, we discuss how to treat some spoken language phenomena, such as false starts. In general, an utterance U is a tensed clause. Because we are analyzing telephone conversations, a turn may be a clause or it may be not. For those cases in which the turn is not a clause, a turn is considered an utterance if it contains entities. The first pass in segmentation is to break the speech into intonation units. For the ISL corpus, an utterance U is defined as an intonation unit marked by either {period}, {quest} or {seos} (see Appendix A for details on transcription). Note that {comma}, unless it is followed by {seos}, does not define an utterance. In the example below, (1c.) corresponds to the beginning of a turn by a different speaker. However, even though (1c.) is not a tensed clause, it is treated as an utterance because it contains entities, it is followed by {comma} {seos}, and it does not seem to belong to the following utterance.",
"title": ""
}
] |
scidocsrr
|
5dfb326a4efdde7ec72d6d40d07c3e74
|
Ontology Engineering Methodology
|
[
{
"docid": "8eb96ae8116a16e24e6a3b60190cc632",
"text": "IT professionals are finding that more of their IT investments are being measured against a knowledge management (KM) metric. Those who want to deploy foundation technologies such as groupware, CRM or decision support tools, but fail to justify them on the basis of their contribution to KM, may find it difficult to get funding unless they can frame them within the KM context. Determining KM's pervasiveness and impact is analogous to measuring the contribution of marketing, employee development, or any other management or organizational competency. This paper addresses the problem of developing measurement models for KM metrics and discusses what current KM metrics are in use, and examine their sustainability and soundness in assessing knowledge utilization and retention of generating revenue. The paper will then discuss the use of a Balanced Scorecard approach to determine a business-oriented relationship between strategic KM usage and IT strategy and implementation.",
"title": ""
}
] |
[
{
"docid": "2a173d5f62b9fb1db3afe6f36b64ba5b",
"text": "any health care facilities have incorporated an antiseptic skin cleansing protocol, often referred to as preoperative bathing and cleansing, to reduce the endogenous microbial burden on the skin of patients undergoing elective surgery, with the aim of reducing the risk of surgical site infections (SSIs). According to a recent study by Injean et al, 91% of all facilities that perform coronary artery bypass surgery in California have a standardized preoperative bathing and cleansing protocol for patients. Historically, this practice has been endorsed by national and international organizations, such as the Hospital Infection Control Practice Advisory Committee and the Centers for Disease Control and Prevention, the Association for Professionals in Infection Control and Epidemiology (APIC), AORN, the Institute for Healthcare Improvement (IHI), and the National Institute for Health and Care Excellence (NICE), which recommend bathing and/or cleansing with an antiseptic agent before surgery as a component of a broader strategy to reduce SSIs. The 2008 Society for Healthcare Epidemiology of America (SHEA)/ Infectious Diseases Society of America (IDSA)/Surgical Infection Society (SIS) strategies to prevent SSIs in acute care hospitals declined to recommend a specific application policy regarding selection of an antiseptic agent for preoperative bathing but acknowledged that the (maximal) antiseptic benefits of chlorhexidine gluconate (CHG) are dependent on achieving adequate skin surface concentrations.",
"title": ""
},
{
"docid": "f330cfad6e7815b1b0670217cd09b12e",
"text": "In this paper we study the effect of false data injection attacks on state estimation carried over a sensor network monitoring a discrete-time linear time-invariant Gaussian system. The steady state Kalman filter is used to perform state estimation while a failure detector is employed to detect anomalies in the system. An attacker wishes to compromise the integrity of the state estimator by hijacking a subset of sensors and sending altered readings. In order to inject fake sensor measurements without being detected the attacker will need to carefully design his actions to fool the estimator as abnormal sensor measurements would result in an alarm. It is important for a designer to determine the set of all the estimation biases that an attacker can inject into the system without being detected, providing a quantitative measure of the resilience of the system to such attacks. To this end, we will provide an ellipsoidal algorithm to compute its inner and outer approximations of such set. A numerical example is presented to further illustrate the effect of false data injection attack on state estimation.",
"title": ""
},
{
"docid": "6220113006a0314017fa9f5c7243842e",
"text": "This paper presents a temperature sensor based on a frequency-to-digital converter with digitally controlled process compensation. The proposed temperature sensor utilizes ring oscillators to generate a temperature dependent frequency. The adjusted linear frequency difference slope is used to improve the linearity of the temperature sensor and to compensate for process variations. Furthermore, an additional process compensation scheme is proposed to enhance the accuracy under one point calibration. With one point calibration, the resolution of the temperature sensor is 0.18 <sup>°</sup>C/LSB and the maximum inaccuracy of 20 measured samples is less than ±1.5<sup>°</sup>C over a temperature range of 0<sup>°</sup>C ~ 110<sup>°</sup>C. The entire block occupies 0.008 mm<sup>2</sup> in 65 nm CMOS and consumes 500 μW at a conversion rate of 469 kS/s.",
"title": ""
},
{
"docid": "866f7fa780b24fe420623573482df984",
"text": "We present the prenatal ultrasound findings of massive macroglossia in a fetus with prenatally diagnosed Beckwith-Wiedemann syndrome. Three-dimensional surface mode ultrasound was utilized for enhanced visualization of the macroglossia.",
"title": ""
},
{
"docid": "00669cc35f09b699e08fa7c8cc3701c8",
"text": "Want to get experience? Want to get any ideas to create new things in your life? Read interpolation of spatial data some theory for kriging now! By reading this book as soon as possible, you can renew the situation to get the inspirations. Yeah, this way will lead you to always think more and more. In this case, this book will be always right for you. When you can observe more about the book, you will know why you need this.",
"title": ""
},
{
"docid": "6954c2a51c589987ba7e37bd81289ba1",
"text": "TYAs paper looks at some of the algorithms that can be used for effective detection and tracking of vehicles, in particular for statistical analysis. The main methods for tracking discussed and implemented are blob analysis, optical flow and foreground detection. A further analysis is also done testing two of the techniques using a number of video sequences that include different levels of difficulties.",
"title": ""
},
{
"docid": "28ebea841f4495f3e15e2aad94989122",
"text": "This is equivalent to minimizing the sum of squares with a constraint of the form Σ |βj| s. It is similar to ridge regression, which has constraint Σjβ j t. Because of the form of the l1-penalty, the lasso does variable selection and shrinkage, whereas ridge regression, in contrast, only shrinks. If we consider a more general penalty of the form .Σpj=1β q j / 1=q, then the lasso uses q = 1 and ridge regression has q = 2. Subset selection emerges as q → 0, and the lasso uses the smallest value of q (i.e. closest to subset selection) that yields a convex problem. Convexity is very attractive for computational purposes.",
"title": ""
},
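The lasso passage above contrasts the l1 and l2 penalties: the l1 form selects variables (some coefficients become exactly zero) while the l2 form only shrinks. A quick illustration using the penalized (Lagrangian) form of both problems is sketched below; the simulated data and the alpha values are arbitrary choices for demonstration.

```python
# Lasso zeroes out inactive coefficients; ridge shrinks them but keeps them nonzero.
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
beta = np.array([3.0, -2.0, 0, 0, 0, 0, 0, 0, 0, 0])   # only two truly active predictors
y = X @ beta + rng.normal(scale=0.5, size=200)

print(np.round(Lasso(alpha=0.1).fit(X, y).coef_, 2))    # most coefficients exactly 0
print(np.round(Ridge(alpha=10.0).fit(X, y).coef_, 2))   # all shrunk, none exactly 0
```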
{
"docid": "3e24de04f0b1892b27fc60bb8a405d0d",
"text": "A power factor (PF) corrected single stage, two-switch isolated zeta converter is proposed for arc welding. This modified zeta converter is having two switches and two clamping diodes on the primary side of a high-frequency transformer. This, in turn, results in reduced switch stress. The proposed converter is designed to operate in a discontinuous inductor current mode (DICM) to achieve inherent PF correction at the utility. The DICM operation substantially reduces the complexity of the control and effectively regulates the output dc voltage. The proposed converter offers several features, such as inherent overload current limit and fast parametrical response, to the load and source voltage conditions. This, in turn, results in an improved performance in terms of power quality indices and an enhanced weld bead quality. The proposed modified zeta converter is designed and its performance is simulated in the MATLAB/Simulink environment. Simulated results are also verified experimentally on a developed prototype of the converter. The performance of the system is investigated in terms of its input PF, displacement PF, total harmonic distortion of ac mains current, voltage regulation, and robustness to prove its efficacy in overall performance.",
"title": ""
},
{
"docid": "95fa8dea9960f1ecdebef3c195819821",
"text": "Microemulsions are clear, stable, isotropic mixtures of oil, water and surfactant, frequently in combination with a cosurfactant. These systems are currently of interest to the pharmaceutical scientist because of their considerable potential to act as drug delivery vehicles by incorporating a wide range of drug molecules. In order to appreciate the potential of microemulsions as delivery vehicles, this review gives an overview of the formation and phase behaviour and characterization of microemulsions. The use of microemulsions and closely related microemulsion-based systems as drug delivery vehicles is reviewed, with particular emphasis being placed on recent developments and future directions.",
"title": ""
},
{
"docid": "1f52a93eff0c020564acc986b2fef0e7",
"text": "The performance of a predictive model is overestimated when simply determined on the sample of subjects that was used to construct the model. Several internal validation methods are available that aim to provide a more accurate estimate of model performance in new subjects. We evaluated several variants of split-sample, cross-validation and bootstrapping methods with a logistic regression model that included eight predictors for 30-day mortality after an acute myocardial infarction. Random samples with a size between n = 572 and n = 9165 were drawn from a large data set (GUSTO-I; n = 40,830; 2851 deaths) to reflect modeling in data sets with between 5 and 80 events per variable. Independent performance was determined on the remaining subjects. Performance measures included discriminative ability, calibration and overall accuracy. We found that split-sample analyses gave overly pessimistic estimates of performance, with large variability. Cross-validation on 10% of the sample had low bias and low variability, but was not suitable for all performance measures. Internal validity could best be estimated with bootstrapping, which provided stable estimates with low bias. We conclude that split-sample validation is inefficient, and recommend bootstrapping for estimation of internal validity of a predictive logistic regression model.",
"title": ""
},
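The internal-validation passage above recommends bootstrapping to estimate how optimistic the apparent performance of a logistic regression is. The sketch below shows the standard optimism-corrected bootstrap for the c-statistic (AUC); the simulated data, the number of resamples, and the eight-predictor setup are assumptions standing in for a real study sample.

```python
# Bootstrap optimism correction: apparent AUC minus average (bootstrap - test-on-original) gap.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 8))
y = (rng.random(500) < 1 / (1 + np.exp(-(X[:, 0] - 0.5 * X[:, 1])))).astype(int)

def fit_auc(Xtr, ytr, Xte, yte):
    m = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
    return roc_auc_score(yte, m.predict_proba(Xte)[:, 1])

apparent = fit_auc(X, y, X, y)
optimism = []
for _ in range(200):                        # bootstrap resamples
    idx = rng.integers(0, len(y), len(y))
    Xb, yb = X[idx], y[idx]
    # optimism = performance on the bootstrap sample minus performance on the original data
    optimism.append(fit_auc(Xb, yb, Xb, yb) - fit_auc(Xb, yb, X, y))

print(round(apparent, 3), round(apparent - float(np.mean(optimism)), 3))  # apparent vs corrected AUC
```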
{
"docid": "1952280e21067c36faf64a3ba4b64506",
"text": "OBJECTIVE\nThe Sophysa Pressio (Sophysa Ltd., Orsay, France) is a new intracranial pressure monitoring system. This study aimed to evaluate its accuracy and compare it with the popular Codman intracranial pressure transducer (Codman/Johnson & Johnson, Raynham, MA) in vitro.\n\n\nMETHODS\nA computerized rig was used to test the Pressio and Codman transducers simultaneously. Properties that were tested included drift over 7 days, the effect of temperature on drift, frequency response, the accuracy of measurement of static and pulsatile pressures, and connectivity of the system.\n\n\nRESULTS\nLong-term (7 d) relative zero drift was less than 0.05 mmHg. The temperature drift was low (0.3 mmHg/207C). Absolute static accuracy was better than 0.5 mmHg over the range of 0 to 100 mmHg. Pulse waveform accuracy, relative to the Codman transducer, was better than 0.2 mmHg over the range of 1 to 20 mmHg. The frequency bandwidth of the Pressio transducer was 22 Hz. The Pressio monitor can transmit data directly to an external computer without the use of a pressure bridge amplifier.\n\n\nCONCLUSION\nThe new Pressio transducer proved to be accurate for measuring static and dynamic pressure during in vitro evaluation.",
"title": ""
},
{
"docid": "3e8757d33a9131941ae5f3ecde3f714e",
"text": "Probabilistic graphical model (PGM) is a generic model that represents the probability-based relationships among random variables by a graph, and is a general method for knowledge representation and inference involving uncertainty. In recent years, PGM provides an important means for solving the uncertainty of intelligent information field, and becomes research focus in the fields of machine learning and artificial intelligence etc. In the paper, PGM and its three types of basic models are reviewed, including the learning and inference theory, research status, application and promotion.",
"title": ""
},
{
"docid": "80b514540933a9cc31136c8cb86ec9b3",
"text": "We tackle the problem of detecting occluded regions in a video stream. Under assumptions of Lambertian reflection and static illumination, the task can be posed as a variational optimization problem, and its solution approximated using convex minimization. We describe efficient numerical schemes that reach the global optimum of the relaxed cost functional, for any number of independently moving objects, and any number of occlusion layers. We test the proposed algorithm on benchmark datasets, expanded to enable evaluation of occlusion detection performance, in addition to optical flow.",
"title": ""
},
{
"docid": "a07acba9f691e2fb338014d92bdb6dd1",
"text": "This article introduces a conceptual tool for analyzing video game temporality, the temporal frame, and a methodology by which new temporal frames can be constructed as needed during analysis. A temporal frame is a set of events, along with the temporality induced by the relationships between those events. The authors discuss four common temporal frames: real-world time (events taking place in the physical world), gameworld time (events within the represented gameworld, including events associated with gameplay actions), coordination time (events that coordinate the actions of players and agents), and fictive time (applying sociocultural labels to events, as well as narrated event sequences). They use frames to analyze the real-time/turn-based distinction, various temporal anomalies, and temporal manipulations as a form of gameplay. These discussions illustrate how temporal frames are useful for gaining a more nuanced understanding of temporal phenomena in games. Additionally, their relationist characterization of temporal frames supports analysis and design.",
"title": ""
},
{
"docid": "64e54dc578b5e6c3faa2d910ba2b808c",
"text": "A design of a radome-covered slot antenna array based on Substrate Integrated Waveguide (SIW) technology is presented in this paper. The design method consists of the analysis of an isolated radiating element, the synthesis of a linear array, the optimization of a planar array, and the development of a power divider with a transition from a feeding metal waveguide to an SIW. The antenna array is designed using a full-wave electromagnetic solver (CST Microwave Studio) for the operating frequency band 26.5-27.5 GHz. The paper describes simulation results of the antenna array consisting of 10×10 longitudinal slots and simulated and measured results of a fabricated antenna array consisting of 4×4 slots.",
"title": ""
},
{
"docid": "64f2091b23a82fae56751a78d433047c",
"text": "Aging variation poses a serious problem to automatic face recognition systems. Most of the face recognition studies that have addressed the aging problem are focused on age estimation or aging simulation. Designing an appropriate feature representation and an effective matching framework for age invariant face recognition remains an open problem. In this paper, we propose a discriminative model to address face matching in the presence of age variation. In this framework, we first represent each face by designing a densely sampled local feature description scheme, in which scale invariant feature transform (SIFT) and multi-scale local binary patterns (MLBP) serve as the local descriptors. By densely sampling the two kinds of local descriptors from the entire facial image, sufficient discriminatory information, including the distribution of the edge direction in the face image (that is expected to be age invariant) can be extracted for further analysis. Since both SIFT-based local features and MLBP-based local features span a high-dimensional feature space, to avoid the overfitting problem, we develop an algorithm, called multi-feature discriminant analysis (MFDA) to process these two local feature spaces in a unified framework. The MFDA is an extension and improvement of the LDA using multiple features combined with two different random sampling methods in feature and sample space. By random sampling the training set as well as the feature space, multiple LDA-based classifiers are constructed and then combined to generate a robust decision via a fusion rule. Experimental results show that our approach outperforms a state-of-the-art commercial face recognition engine on two public domain face aging data sets: MORPH and FG-NET. We also compare the performance of the proposed discriminative model with a generative aging model. A fusion of discriminative and generative models further improves the face matching accuracy in the presence of aging.",
"title": ""
},
{
"docid": "8a9191c256f62b7efce93033752059e6",
"text": "Food products fermented by lactic acid bacteria have long been used for their proposed health promoting properties. In recent years, selected probiotic strains have been thoroughly investigated for specific health effects. Properties like relief of lactose intolerance symptoms and shortening of rotavirus diarrhoea are now widely accepted for selected probiotics. Some areas, such as the treatment and prevention of atopy hold great promise. However, many proposed health effects still need additional investigation. In particular the potential benefits for the healthy consumer, the main market for probiotic products, requires more attention. Also, the potential use of probiotics outside the gastrointestinal tract deserves to be explored further. Results from well conducted clinical studies will expand and increase the acceptance of probiotics for the treatment and prevention of selected diseases.",
"title": ""
},
{
"docid": "eca48600132b8c43f2c221b28e275455",
"text": "This paper presents a deep Convolutional Neural Network (CNN) based approach for document image classification. One of the main requirement of deep CNN architecture is that they need huge number of samples for training. To overcome this problem we adopt a deep CNN which is trained using big image dataset containing millions of samples i.e., ImageNet. The proposed work outperforms both the traditional structure similarity methods and the CNN based approaches proposed earlier. The accuracy of the proposed approach with merely 20 images per class outperforms the state-of-the-art by achieving classification accuracy of 68.25%. The best results on Tobbacoo-3428 dataset show that our proposed method outperforms the state-of-the-art method by a significant margin and achieved a median accuracy of 77.6% with 100 samples per class used for training and validation.",
"title": ""
},
{
"docid": "b6a600ea1c277bc3bf8f2452b8aef3f1",
"text": "Fusion of data from multiple sensors can enable robust navigation in varied environments. However, for optimal performance, the sensors must calibrated relative to one another. Full sensor-to-sensor calibration is a spatiotemporal problem: we require an accurate estimate of the relative timing of measurements for each pair of sensors, in addition to the 6-DOF sensor-to-sensor transform. In this paper, we examine the problem of determining the time delays between multiple proprioceptive and exteroceptive sensor data streams. The primary difficultly is that the correspondences between measurements from different sensors are unknown, and hence the delays cannot be computed directly. We instead formulate temporal calibration as a registration task. Our algorithm operates by aligning curves in a three-dimensional orientation space, and, as such, can be considered as a variant of Iterative Closest Point (ICP). We present results from simulation studies and from experiments with a PR2 robot, which demonstrate accurate calibration of the time delays between measurements from multiple, heterogeneous sensors.",
"title": ""
},
{
"docid": "10cf5eed6ed3a153b8302ab2de3ebca7",
"text": "Olive is one of the most ancient crop plants and the World Olive Germplasm Bank of Cordoba (WOGBC), Spain, is one of the world’s largest collections of olive germplasm. We used 33 SSR (Simple Sequence Repeats) markers and 11 morphological characteristics of the endocarp to characterise, identify and authenticate 824 trees, representing 499 accessions from 21 countries of origin, from the WOGBC collection. The SSR markers exhibited high variability and information content. Of 332 cultivars identified in this study based on unique combinations of SSR genotypes and endocarp morphologies, 200 were authenticated by genotypic and morphological markers matches with authentic control samples. We found 130 SSR genotypes that we considered as molecular variants because they showed minimal molecular differences but the same morphological profile than 48 catalogued cultivars. We reported 15 previously described and 37 new cases of synonyms as well as 26 previously described and seven new cases of homonyms. We detected several errors in accession labelling, which may have occurred at any step during establishment of plants in the collection. Nested sets of 5, 10 and 17 SSRs were proposed to progressively and efficiently identify all of the genotypes studied here. The study provides a useful protocol for the characterisation, identification and authentication of any olive germplasm bank that has facilitated the establishment of a repository of true-to-type cultivars at the WOGBC.",
"title": ""
}
] |
scidocsrr
|
b27001b8f4a0f7d2953e8b647afb775c
|
Physiotherapy Exercises Recognition Based on RGB-D Human Skeleton Models
|
[
{
"docid": "29e1ecb7b1dfbf4ca2a229726dcab12e",
"text": "The recently developed depth sensors, e.g., the Kinect sensor, have provided new opportunities for human-computer interaction (HCI). Although great progress has been made by leveraging the Kinect sensor, e.g., in human body tracking, face recognition and human action recognition, robust hand gesture recognition remains an open problem. Compared to the entire human body, the hand is a smaller object with more complex articulations and more easily affected by segmentation errors. It is thus a very challenging problem to recognize hand gestures. This paper focuses on building a robust part-based hand gesture recognition system using Kinect sensor. To handle the noisy hand shapes obtained from the Kinect sensor, we propose a novel distance metric, Finger-Earth Mover's Distance (FEMD), to measure the dissimilarity between hand shapes. As it only matches the finger parts while not the whole hand, it can better distinguish the hand gestures of slight differences. The extensive experiments demonstrate that our hand gesture recognition system is accurate (a 93.2% mean accuracy on a challenging 10-gesture dataset), efficient (average 0.0750 s per frame), robust to hand articulations, distortions and orientation or scale changes, and can work in uncontrolled environments (cluttered backgrounds and lighting conditions). The superiority of our system is further demonstrated in two real-life HCI applications.",
"title": ""
},
{
"docid": "749728f5301311db9aec203ab54248c3",
"text": "Human posture recognition is an attractive and challenging topic in computer vision because of its wide range of application. The coming of low cost device Kinect with its SDK gives us a possibility to resolve with ease some difficult problems encountered when working with conventional cameras. In this paper, we explore the capacity of using skeleton information provided by Kinect for human posture recognition in a context of a health monitoring framework. We conduct 7 different experiments with 4 types of features extracted from human skeleton. The obtained results show that this device can detect with high accuracy four interested postures (lying, sitting, standing, bending).",
"title": ""
}
] |
[
{
"docid": "fe014ab328ff093deadca25eab9d965f",
"text": "Since conventional microstrip hairpin filter and diplexer are inherently formed by coupled-line resonators, spurious response and poor isolation performance are unavoidable. This letter presents a simple technique that is suitable for an inhomogeneous structure such as microstrip to cure such poor performances. The technique is based on the stepped impedance coupled-line resonator and is verified by the experimental results of the designed 0.9GHz/1.8GHz microstrip hairpin diplexer.",
"title": ""
},
{
"docid": "06f99b18bae3f15e77db8ff2d8c159cc",
"text": "The exact nature of the relationship among species range sizes, speciation, and extinction events is not well understood. The factors that promote larger ranges, such as broad niche widths and high dispersal abilities, could increase the likelihood of encountering new habitats but also prevent local adaptation due to high gene flow. Similarly, low dispersal abilities or narrower niche widths could cause populations to be isolated, but such populations may lack advantageous mutations due to low population sizes. Here we present a large-scale, spatially explicit, individual-based model addressing the relationships between species ranges, speciation, and extinction. We followed the evolutionary dynamics of hundreds of thousands of diploid individuals for 200,000 generations. Individuals adapted to multiple resources and formed ecological species in a multidimensional trait space. These species varied in niche widths, and we observed the coexistence of generalists and specialists on a few resources. Our model shows that species ranges correlate with dispersal abilities but do not change with the strength of fitness trade-offs; however, high dispersal abilities and low resource utilization costs, which favored broad niche widths, have a strong negative effect on speciation rates. An unexpected result of our model is the strong effect of underlying resource distributions on speciation: in highly fragmented landscapes, speciation rates are reduced.",
"title": ""
},
{
"docid": "20fafc2ea5ae88eff0ed98ac031963ab",
"text": "Outpatient scheduling is considered as a complex problem. Efficient solutions to this problem are required by many health care facilities. This paper proposes an efficient approach to outpatient scheduling by specifying a bidding method and converting it to a group role assignment problem. The proposed approach is validated by conducting simulations and experiments with randomly generated patient requests for available time slots. The major contribution of this paper is an efficient outpatient scheduling approach making automatic outpatient scheduling practical. The exciting result is due to the consideration of outpatient scheduling as a collaborative activity and the creation of a qualification matrix in order to apply the group role assignment algorithm.",
"title": ""
},
{
"docid": "5e530aefee0a4b1ef986a086a17078fd",
"text": "One key property of word embeddings currently under study is their capacity to encode hypernymy. Previous works have used supervised models to recover hypernymy structures from embeddings. However, the overall results do not clearly show how well we can recover such structures. We conduct the first dataset-centric analysis that shows how only the Baroni dataset provides consistent results. We empirically show that a possible reason for its good performance is its alignment to dimensions specific of hypernymy: generality and similarity.",
"title": ""
},
{
"docid": "04065494023ed79211af3ba0b5bc4c7e",
"text": "The glucagon-like peptides include glucagon, GLP-1, and GLP-2, and exert diverse actions on nutrient intake, gastrointestinal motility, islet hormone secretion, cell proliferation and apoptosis, nutrient absorption, and nutrient assimilation. GIP, a related member of the glucagon peptide superfamily, also regulates nutrient disposal via stimulation of insulin secretion. The actions of these peptides are mediated by distinct members of the glucagon receptor superfamily of G protein-coupled receptors. These receptors exhibit unique patterns of tissue-specific expression, exhibit considerable amino acid sequence identity, and share similar structural and functional properties with respect to ligand binding and signal transduction. This article provides an overview of the biology of these receptors with an emphasis on understanding the unique actions of glucagon-related peptides through studies of the biology of their cognate receptors.",
"title": ""
},
{
"docid": "ec6b6463fdbabbaade4c9186b14e7acf",
"text": "In order for robots to learn from people with no machine learning expertise, robots should learn from natural human instruction. Most machine learning techniques that incorporate explanations require people to use a limited vocabulary and provide state information, even if it is not intuitive. This paper discusses a software agent that learned to play the Mario Bros. game using explanations. Our goals to improve learning from explanations were twofold: 1) to filter explanations into advice and warnings and 2) to learn policies from sentences without state information. We used sentiment analysis to filter explanations into advice of what to do and warnings of what to avoid. We developed object-focused advice to represent what actions the agent should take when dealing with objects. A reinforcement learning agent used object-focused advice to learn policies that maximized its reward. After mitigating false negatives, using sentiment as a filter was approximately 85% accurate. object-focused advice performed better than when no advice was given, the agent learned where to apply the advice, and the agent could recover from adversarial advice. We also found the method of interaction should be designed to ease the cognitive load of the human teacher or the advice may be of poor quality.",
"title": ""
},
{
"docid": "5bdf4585df04c00ebcf00ce94a86ab38",
"text": "High-voltage pulse-generators can be used effectively for bacterial decontamination in water treatment applications. Applying a pulsed electric field to the infected water sample guarantees killing of harmful germs and bacteria. In this paper, a modular high-voltage pulse-generator with sequential charging is proposed for water treatment via underwater pulsed streamer corona discharge. The proposed generator consists of series-connected modules similar to an arm of a modular multilevel converter. The modules' capacitors are charged sequentially from a relatively low-voltage dc supply, then they are connected in series and discharged into the load. Two configurations are proposed in this paper, one for low repetitive pulse rate applications, and the other for high repetitive pulse rates. In the first topology, the equivalent resistance of the infected water sample is used as a charging resistance for the generator's capacitors during the charging process. While in the second topology, the water resistance is bypassed during the charging process, and an external charging resistance with proper value is used instead. In this paper, detailed designs for the proposed pulse-generators are presented and validated by simulation results using MATLAB. A scaled down experimental setup has been built to show the viability of the proposed concept.",
"title": ""
},
{
"docid": "1364758783c75a39112d01db7e7cfc63",
"text": "Steganography plays an important role in secret communication in digital worlds and open environments like Internet. Undetectability and imperceptibility of confidential data are major challenges of steganography methods. This article presents a secure steganography method in frequency domain based on partitioning approach. The cover image is partitioned into 8×8 blocks and then integer wavelet transform through lifting scheme is performed for each block. The symmetric RC4 encryption method is applied to secret message to obtain high security and authentication. Tree Scan Order is performed in frequency domain to find proper location for embedding secret message. Secret message is embedded in cover image with minimal degrading of the quality. Experimental results demonstrate that the proposed method has achieved superior performance in terms of high imperceptibility of stego-image and it is secure against statistical attack in comparison with existing methods.",
"title": ""
},
{
"docid": "1c9a14804cd1bd673c2547642f9b6683",
"text": "In this paper we applied multilabel classification algorithms to the EUR-Lex database of legal documents of the European Union. On this document collection, we studied three different multilabel classification problems, the largest being the categorization into the EUROVOC concept hierarchy with almost 4000 classes. We evaluated three algorithms: (i) the binary relevance approach which independently trains one classifier per label; (ii) the multiclass multilabel perceptron algorithm, which respects dependencies between the base classifiers; and (iii) the multilabel pairwise perceptron algorithm, which trains one classifier for each pair of labels. All algorithms use the simple but very efficient perceptron algorithm as the underlying classifier, which makes them very suitable for large-scale multilabel classification problems. The main challenge we had to face was that the almost 8,000,000 perceptrons that had to be trained in the pairwise setting could no longer be stored in memory. We solve this problem by resorting to the dual representation of the perceptron, which makes the pairwise approach feasible for problems of this size. The results on the EUR-Lex database confirm the good predictive performance of the pairwise approach and demonstrates the feasibility of this approach for large-scale tasks.",
"title": ""
},
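The EUR-Lex passage above compares binary relevance, the multiclass multilabel perceptron, and the pairwise perceptron. The sketch below shows only the binary-relevance baseline (one independent perceptron per label); the tiny synthetic multilabel data and parameter values are stand-ins, and the pairwise/dual-representation variants are not implemented here.

```python
# Binary relevance: train one perceptron per label independently, score with micro-F1.
import numpy as np
from sklearn.datasets import make_multilabel_classification
from sklearn.linear_model import Perceptron
from sklearn.multiclass import OneVsRestClassifier
from sklearn.metrics import f1_score

X, Y = make_multilabel_classification(n_samples=400, n_features=50,
                                       n_classes=10, random_state=0)
clf = OneVsRestClassifier(Perceptron(max_iter=50, tol=1e-3))   # one binary classifier per label
clf.fit(X[:300], Y[:300])
print(round(f1_score(Y[300:], clf.predict(X[300:]), average="micro"), 3))
```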
{
"docid": "b69e6bf80ad13a60819ae2ebbcc93ae0",
"text": "Computational manufacturing technologies such as 3D printing hold the potential for creating objects with previously undreamed-of combinations of functionality and physical properties. Human designers, however, typically cannot exploit the full geometric (and often material) complexity of which these devices are capable. This STAR examines recent systems developed by the computer graphics community in which designers specify higher-level goals ranging from structural integrity and deformation to appearance and aesthetics, with the final detailed shape and manufacturing instructions emerging as the result of computation. It summarizes frameworks for interaction, simulation, and optimization, as well as documents the range of general objectives and domain-specific goals that have been considered. An important unifying thread in this analysis is that different underlying geometric and physical representations are necessary for different tasks: we document over a dozen classes of representations that have been used for fabrication-aware design in the literature. We analyze how these classes possess obvious advantages for some needs, but have also been used in creative manners to facilitate unexpected problem solutions.",
"title": ""
},
{
"docid": "ff952443eef41fb430ff2831b5ee33d5",
"text": "The increasing activity in the Intelligent Transportation Systems (ITS) area faces a strong limitation: the slow pace at which the automotive industry is making cars \"smarter\". On the contrary, the smartphone industry is advancing quickly. Existing smartphones are endowed with multiple wireless interfaces and high computational power, being able to perform a wide variety of tasks. By combining smartphones with existing vehicles through an appropriate interface we are able to move closer to the smart vehicle paradigm, offering the user new functionalities and services when driving. In this paper we propose an Android-based application that monitors the vehicle through an On Board Diagnostics (OBD-II) interface, being able to detect accidents. Our proposed application estimates the G force experienced by the passengers in case of a frontal collision, which is used together with airbag triggers to detect accidents. The application reacts to positive detection by sending details about the accident through either e-mail or SMS to pre-defined destinations, immediately followed by an automatic phone call to the emergency services. Experimental results using a real vehicle show that the application is able to react to accident events in less than 3 seconds, a very low time, validating the feasibility of smartphone based solutions for improving safety on the road.",
"title": ""
},
{
"docid": "f79b5057cf1bd621f8a3a69efcd5e100",
"text": "A novel, tri-band, planar plate-type antenna made of a compact metal plate for wireless local area network (WLAN) applications in the 2.4GHz (2400–2484MHz), 5.2GHz (5150– 5350MHz), and 5.8 GHz (5725–5825 MHz) bands is presented. The antenna was designed in a way that the operating principle includes dipole and loop resonant modes to cover the 2.4/5.2 and 5.8 GHz bands, respectively. The antenna comprises a larger radiating arm and a smaller loop radiating arm, which are connected to each other at the signal ground point. The antenna can easily be fed by using a 50 Ω mini-coaxial cable and shows good radiation performance. Details of the design are described and discussed in the article.",
"title": ""
},
{
"docid": "0c67bd1867014053a5bec3869f3b4f8c",
"text": "BACKGROUND AND PURPOSE\nConstraint-induced movement therapy (CI therapy) has previously been shown to produce large improvements in actual amount of use of a more affected upper extremity in the \"real-world\" environment in patients with chronic stroke (ie, >1 year after the event). This work was carried out in an American laboratory. Our aim was to determine whether these results could be replicated in another laboratory located in Germany, operating within the context of a healthcare system in which administration of conventional types of physical therapy is generally more extensive than in the United States.\n\n\nMETHODS\nFifteen chronic stroke patients were given CI therapy, involving restriction of movement of the intact upper extremity by placing it in a sling for 90% of waking hours for 12 days and training (by shaping) of the more affected extremity for 7 hours on the 8 weekdays during that period.\n\n\nRESULTS\nPatients showed a significant and very large degree of improvement from before to after treatment on a laboratory motor test and on a test assessing amount of use of the affected extremity in activities of daily living in the life setting (effect sizes, 0.9 and 2.2, respectively), with no decrement in performance at 6-month follow-up. During a pretreatment control test-retest interval, there were no significant changes on these tests.\n\n\nCONCLUSIONS\nResults replicate in Germany the findings with CI therapy in an American laboratory, suggesting that the intervention has general applicability.",
"title": ""
},
{
"docid": "bbedbe2d901f63e3f163ea0f24a2e2d7",
"text": "a r t i c l e i n f o a b s t r a c t The leader trait perspective is perhaps the most venerable intellectual tradition in leadership research. Despite its early prominence in leadership research, it quickly fell out of favor among leadership scholars. Thus, despite recent empirical support for the perspective, conceptual work in the area lags behind other theoretical perspectives. Accordingly, the present review attempts to place the leader trait perspective in the context of supporting intellectual traditions, including evolutionary psychology and behavioral genetics. We present a conceptual model that considers the source of leader traits, mediators and moderators of their effects on leader emergence and leadership effectiveness, and distinguish between perceived and actual leadership effectiveness. We consider both the positive and negative effects of specific \" bright side \" personality traits: the Big Five traits, core self-evaluations, intelligence, and charisma. We also consider the positive and negative effects of \" dark side \" leader traits: Narcissism, hubris, dominance, and Machiavellianism. If one sought to find singular conditions that existed across species, one might find few universals. One universal that does exist, at least those species that have brains and nervous systems, is leadership. From insects to reptiles to mammals, leadership exists as surely as collective activity exists. There is the queen bee, and there is the alpha male. Though the centrality of leadership may vary by species (it seems more important to mammals than, say, to avians and reptiles), it is fair to surmise that whenever there is social activity, a social structure develops, and one (perhaps the) defining characteristic of that structure is the emergence of a leader or leaders. The universality of leadership, however, does not deny the importance of individual differences — indeed the emergence of leadership itself is proof of individual differences. Moreover, even casual observation of animal (including human) collective behavior shows the existence of a leader. Among a herd of 100 cattle or a pride of 20 lions, one is able to detect a leadership structure (especially at times of eating, mating, and attack). One quickly wonders: What has caused this leadership structure to emerge? Why has one animal (the alpha) emerged to lead the collective? And how does this leadership cause this collective to flourish — or founder? Given these questions, it is of no surprise that the earliest conceptions of leadership focused on individual …",
"title": ""
},
{
"docid": "f5e6df40898a5b84f8e39784f9b56788",
"text": "OBJECTIVE\nTo determine the prevalence of anxiety and depression among medical students at Nishtar Medical College, Multan.\n\n\nMETHODS\nA cross-sectional study was carried out at Nishtar Medical College, Multan in 2008. The questionnaire was administered to 815 medical students who had spent more than 6 months in college and had no self reported physical illness. They were present at the time of distribution of the questionnaires and consented. Prevalence of anxiety and depression was assessed using a structured validated questionnaire, the Aga Khan University Anxiety and Depression Scale with a cut-off score of 19. Data Analysis was done using SPSS v. 14.\n\n\nRESULTS\nOut of 815 students, 482 completed the questionnaire with a response rate of 59.14%. The mean age of students was 20.66 +/- 1.8 years. A high prevalence of anxiety and depression (43.89%) was found amongst medical students. Prevalence of anxiety and depression among students of first, second, third, fourth and final years was 45.86%, 52.58%, 47.14%, 28.75% and 45.10% respectively. Female students were found to be more depressed than male students (OR = 2.05, 95% CI = 1.42-2.95, p = 0.0001). There was a significant association between the prevalence of anxiety and depression and the respective year of medical college (p = 0.0276). It was seen that age, marital status, locality and total family income did not significantly affect the prevalence of anxiety and depression.\n\n\nCONCLUSIONS\nThe results showed that medical students constitute a vulnerable group that has a high prevalence of psychiatric morbidity comprising of anxiety and depression.",
"title": ""
},
{
"docid": "3f45d5b611b59e0bcaa0ff527d11f5af",
"text": "Ensemble methods use multiple models to get better performance. Ensemble methods have been used in multiple research fields such as computational intelligence, statistics and machine learning. This paper reviews traditional as well as state-of-the-art ensemble methods and thus can serve as an extensive summary for practitioners and beginners. The ensemble methods are categorized into conventional ensemble methods such as bagging, boosting and random forest, decomposition methods, negative correlation learning methods, multi-objective optimization based ensemble methods, fuzzy ensemble methods, multiple kernel learning ensemble methods and deep learning based ensemble methods. Variations, improvements and typical applications are discussed. Finally this paper gives some recommendations for future research directions.",
"title": ""
},
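The review above names bagging, boosting, and random forest as the conventional ensemble methods. The sketch below compares off-the-shelf versions of the three on one synthetic task purely for illustration; the dataset, estimator counts, and cross-validation setup are arbitrary choices.

```python
# Compare three conventional ensembles with 5-fold cross-validated accuracy.
from sklearn.datasets import make_classification
from sklearn.ensemble import (AdaBoostClassifier, BaggingClassifier,
                              RandomForestClassifier)
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=600, n_features=20, random_state=0)
models = {
    "bagging": BaggingClassifier(n_estimators=50, random_state=0),
    "boosting": AdaBoostClassifier(n_estimators=50, random_state=0),
    "random forest": RandomForestClassifier(n_estimators=50, random_state=0),
}
for name, model in models.items():
    print(name, round(cross_val_score(model, X, y, cv=5).mean(), 3))
```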
{
"docid": "171fd68f380f445723b024f290a02d69",
"text": "Cytokines, produced at the site of entry of a pathogen, drive inflammatory signals that regulate the capacity of resident and newly arrived phagocytes to destroy the invading pathogen. They also regulate antigen presenting cells (APCs), and their migration to lymph nodes to initiate the adaptive immune response. When naive CD4+ T cells recognize a foreign antigen-derived peptide presented in the context of major histocompatibility complex class II on APCs, they undergo massive proliferation and differentiation into at least four different T-helper (Th) cell subsets (Th1, Th2, Th17, and induced T-regulatory (iTreg) cells in mammals. Each cell subset expresses a unique set of signature cytokines. The profile and magnitude of cytokines produced in response to invasion of a foreign organism or to other danger signals by activated CD4+ T cells themselves, and/or other cell types during the course of differentiation, define to a large extent whether subsequent immune responses will have beneficial or detrimental effects to the host. The major players of the cytokine network of adaptive immunity in fish are described in this review with a focus on the salmonid cytokine network. We highlight the molecular, and increasing cellular, evidence for the existence of T-helper cells in fish. Whether these cells will match exactly to the mammalian paradigm remains to be seen, but the early evidence suggests that there will be many similarities to known subsets. Alternative or additional Th populations may also exist in fish, perhaps influenced by the types of pathogen encountered by a particular species and/or fish group. These Th cells are crucial for eliciting disease resistance post-vaccination, and hopefully will help resolve some of the difficulties in producing efficacious vaccines to certain fish diseases.",
"title": ""
},
{
"docid": "ba69b4c09bbcd6cfd50632a8d4bea877",
"text": "In this report we consider the current status of the coverage of computer science in education at the lowest levels of education in multiple countries. Our focus is on computational thinking (CT), a term meant to encompass a set of concepts and thought processes that aid in formulating problems and their solutions in different fields in a way that could involve computers [130].\n The main goal of this report is to help teachers, those involved in teacher education, and decision makers to make informed decisions about how and when CT can be included in their local institutions. We begin by defining CT and then discuss the current state of CT in K-9 education in multiple countries in Europe as well as the United States. Since many students are exposed to CT outside of school, we also discuss the current state of informal educational initiatives in the same set of countries.\n An important contribution of the report is a survey distributed to K-9 teachers, aiming at revealing to what extent different aspects of CT are already part of teachers' classroom practice and how this is done. The survey data suggest that some teachers are already involved in activities that have strong potential for introducing some aspects of CT. In addition to the examples given by teachers participating in the survey, we present some additional sample activities and lesson plans for working with aspects of CT in different subjects. We also discuss ways in which teacher training can be coordinated as well as the issue of repositories. We conclude with future directions for research in CT at school.",
"title": ""
},
{
"docid": "3907bddf6a56b96c4e474d46ddd04359",
"text": "The aim of this review is to discuss the accumulating evidence that suggests that grape extracts and purified grape polyphenols possess a diverse array of biological actions and may be beneficial in the prevention of some inflammatory-mediated diseases including cardiovascular disease. The active components from grape extracts, which include the grape seed, grape skin, and grape juice, that have been identified thus far include polyphenols such as resveratrol, phenolic acids, anthocyanins, and flavonoids. All possess potent antioxidant properties and have been shown to decrease low-density lipoprotein-cholesterol oxidation and platelet aggregation. These compounds also possess a range of additional cardioprotective and vasoprotective properties including antiatherosclerotic, antiarrhythmic, and vasorelaxation actions. Although not exclusive, antioxidant properties of grape polyphenols are likely to be central to their mechanism(s) of action, which also include cellular signaling mechanisms and interactions at the genomic level. This review discusses some of the evidence favoring the consumption of grape extracts rich in polyphenols in the prevention of cardiovascular disease. Consumption of grape and grape extracts and/or grape products such as red wine may be beneficial in preventing the development of chronic degenerative diseases such as cardiovascular disease.",
"title": ""
},
{
"docid": "c700a8a3dc4aa81c475e84fc1bbf9516",
"text": "A Monte Carlo study compared 14 methods to test the statistical significance of the intervening variable effect. An intervening variable (mediator) transmits the effect of an independent variable to a dependent variable. The commonly used R. M. Baron and D. A. Kenny (1986) approach has low statistical power. Two methods based on the distribution of the product and 2 difference-in-coefficients methods have the most accurate Type I error rates and greatest statistical power except in 1 important case in which Type I error rates are too high. The best balance of Type I error and statistical power across all cases is the test of the joint significance of the two effects comprising the intervening variable effect.",
"title": ""
}
] |
scidocsrr
|
bc674a5d6ee37a7ba716400b4af9d722
|
Automatic Argumentative-Zoning Using Word2vec
|
[
{
"docid": "8921cffb633b0ea350b88a57ef0d4437",
"text": "This paper addresses the problem of identifying likely topics of texts by their position in the text. It describes the automated training and evaluation of an Optimal Position Policy, a method of locating the likely positions of topic-bearing sentences based on genre-speci c regularities of discourse structure. This method can be used in applications such as information retrieval, routing, and text summarization.",
"title": ""
},
{
"docid": "062c970a14ac0715ccf96cee464a4fec",
"text": "A goal of statistical language modeling is to learn the joint probability function of sequences of words in a language. This is intrinsically difficult because of the curse of dimensionality: a word sequence on which the model will be tested is likely to be different from all the word sequences seen during training. Traditional but very successful approaches based on n-grams obtain generalization by concatenating very short overlapping sequences seen in the training set. We propose to fight the curse of dimensionality by learning a distributed representation for words which allows each training sentence to inform the model about an exponential number of semantically neighboring sentences. The model learns simultaneously (1) a distributed representation for each word along with (2) the probability function for word sequences, expressed in terms of these representations. Generalization is obtained because a sequence of words that has never been seen before gets high probability if it is made of words that are similar (in the sense of having a nearby representation) to words forming an already seen sentence. Training such large models (with millions of parameters) within a reasonable time is itself a significant challenge. We report on experiments using neural networks for the probability function, showing on two text corpora that the proposed approach significantly improves on state-of-the-art n-gram models, and that the proposed approach allows to take advantage of longer contexts.",
"title": ""
},
{
"docid": "cd89079c74f5bb0218be67bf680b410f",
"text": "This paper illustrates a sentiment analysis approach to extract sentiments associated with polarities of positive or negative for specific subjects from a document, instead of classifying the whole document into positive or negative.The essential issues in sentiment analysis are to identify how sentiments are expressed in texts and whether the expressions indicate positive (favorable) or negative (unfavorable) opinions toward the subject. In order to improve the accuracy of the sentiment analysis, it is important to properly identify the semantic relationships between the sentiment expressions and the subject. By applying semantic analysis with a syntactic parser and sentiment lexicon, our prototype system achieved high precision (75-95%, depending on the data) in finding sentiments within Web pages and news articles.",
"title": ""
},
{
"docid": "80b173cf8dbd0bc31ba8789298bab0fa",
"text": "This paper presents a novel statistical method for factor analysis of binary and count data which is closely related to a technique known as Latent Semantic Analysis. In contrast to the latter method which stems from linear algebra and performs a Singular Value Decomposition of co-occurrence tables, the proposed technique uses a generative latent class model to perform a probabilistic mixture decomposition. This results in a more principled approach with a solid foundation in statistical inference. More precisely, we propose to make use of a temperature controlled version of the Expectation Maximization algorithm for model fitting, which has shown excellent performance in practice. Probabilistic Latent Semantic Analysis has many applications, most prominently in information retrieval, natural language processing, machine learning from text, and in related areas. The paper presents perplexity results for different types of text and linguistic data collections and discusses an application in automated document indexing. The experiments indicate substantial and consistent improvements of the probabilistic method over standard Latent Semantic Analysis.",
"title": ""
}
] |
[
{
"docid": "d23649c81665bc76134c09b7d84382d0",
"text": "This paper demonstrates the advantages of using controlled mobility in wireless sensor networks (WSNs) for increasing their lifetime, i.e., the period of time the network is able to provide its intended functionalities. More specifically, for WSNs that comprise a large number of statically placed sensor nodes transmitting data to a collection point (the sink), we show that by controlling the sink movements we can obtain remarkable lifetime improvements. In order to determine sink movements, we first define a Mixed Integer Linear Programming (MILP) analytical model whose solution determines those sink routes that maximize network lifetime. Our contribution expands further by defining the first heuristics for controlled sink movements that are fully distributed and localized. Our Greedy Maximum Residual Energy (GMRE) heuristic moves the sink from its current location to a new site as if drawn toward the area where nodes have the highest residual energy. We also introduce a simple distributed mobility scheme (Random Movement or S. Basagni ( ) Department of Electrical and Computer Engineering, Northeastern University e-mail: [email protected] A. Carosi · C. Petrioli Dipartimento di Informatica, Università di Roma “La Sapienza” e-mail: [email protected] C. Petrioli e-mail: [email protected] E. Melachrinoudis · Z. M. Wang Department of Mechanical and Industrial Engineering, Northeastern University e-mail: [email protected] Z. M. Wang e-mail: [email protected] RM) according to which the sink moves uncontrolled and randomly throughout the network. The different mobility schemes are compared through extensive ns2-based simulations in networks with different nodes deployment, data routing protocols, and constraints on the sink movements. In all considered scenarios, we observe that moving the sink always increases network lifetime. In particular, our experiments show that controlling the mobility of the sink leads to remarkable improvements, which are as high as sixfold compared to having the sink statically (and optimally) placed, and as high as twofold compared to uncontrolled mobility.",
"title": ""
},
{
"docid": "4c12c08d72960b3b75662e9459e23079",
"text": "Graph structures play a critical role in computer vision, but they are inconvenient to use in pattern recognition tasks because of their combinatorial nature and the consequent difficulty in constructing feature vectors. Spectral representations have been used for this task which are based on the eigensystem of the graph Laplacian matrix. However, graphs of different sizes produce eigensystems of different sizes where not all eigenmodes are present in both graphs. We use the Levenshtein distance to compare spectral representations under graph edit operations which add or delete vertices. The spectral representations are therefore of different sizes. We use the concept of the string-edit distance to allow for the missing eigenmodes and compare the correct modes to each other. We evaluate the method by first using generated graphs to compare the effect of vertex deletion operations. We then examine the performance of the method on graphs from a shape database.",
"title": ""
},
{
"docid": "81cf3581955988c71b58e7a097ea00bd",
"text": "Graph coloring has been employed since the 1980s to efficiently compute sparse Jacobian and Hessian matrices using either finite differences or automatic differentiation. Several coloring problems occur in this context, depending on whether the matrix is a Jacobian or a Hessian, and on the specifics of the computational techniques employed. We consider eight variant vertex coloring problems here. This article begins with a gentle introduction to the problem of computing a sparse Jacobian, followed by an overview of the historical development of the research area. Then we present a unifying framework for the graph models of the variant matrix estimation problems. The framework is based upon the viewpoint that a partition of a matrix into structurally orthogonal groups of columns corresponds to distance-2 coloring an appropriate graph representation. The unified framework helps integrate earlier work and leads to fresh insights; enables the design of more efficient algorithms for many problems; leads to new algorithms for others; and eases the task of building graph models for new problems. We report computational results on two of the coloring problems to support our claims. Most of the methods for these problems treat a column or a row of a matrix as an atomic entity, and partition the columns or rows (or both). A brief review of methods that do not fit these criteria is provided. We also discuss results in discrete mathematics and theoretical computer science that intersect with the topics considered here.",
"title": ""
},
{
"docid": "8096886eff1b288561cbe75302e8c578",
"text": "In this paper, we develop a framework to classify supply chain risk management problems and approaches for the solution of these problems. We argue that risk management problems need to be handled at three levels strategic, operational and tactical. In addition, risk within the supply chain might manifest itself in the form of deviations, disruptions and disasters. To handle unforeseen events in the supply chain there are two obvious approaches: (1) to design chains with built in risk-tolerance and (2) to contain the damage once the undesirable event has occurred. Both of these approaches require a clear understanding of undesirable events that may take place in the supply chain and also the associated consequences and impacts from these events. We can then focus our efforts on mapping out the propagation of events in the supply chain due to supplier non-performance, and employ our insight to develop two mathematical programming based preventive models for strategic level deviation and disruption management. The first model, a simple integer quadratic optimization model, adapted from the Markowitz model, determines optimal partner selection with the objective of minimizing both the operational cost and the variability of total operational cost. The second model, a simple mixed integer programming optimization model, adapted from the credit risk minimization model, determines optimal partner selection such that the supply shortfall is minimized even in the face of supplier disruptions. Hence, both of these models offer possible approaches to robust supply chain design.",
"title": ""
},
{
"docid": "8593882a00d738151c8cba1a99e94898",
"text": "Multimodality image registration plays a crucial role in various clinical and research applications. The aim of this study is to present an optimized MR to CT whole-body deformable image registration algorithm and its validation using clinical studies. A 3D intermodality registration technique based on B-spline transformation was performed using optimized parameters of the elastix package based on the Insight Toolkit (ITK) framework. Twenty-eight (17 male and 11 female) clinical studies were used in this work. The registration was evaluated using anatomical landmarks and segmented organs. In addition to 16 anatomical landmarks, three key organs (brain, lungs, and kidneys) and the entire body volume were segmented for evaluation. Several parameters--such as the Euclidean distance between anatomical landmarks, target overlap, Dice and Jaccard coefficients, false positives and false negatives, volume similarity, distance error, and Hausdorff distance--were calculated to quantify the quality of the registration algorithm. Dice coefficients for the majority of patients (> 75%) were in the 0.8-1 range for the whole body, brain, and lungs, which satisfies the criteria to achieve excellent alignment. On the other hand, for kidneys, Dice coefficients for volumes of 25% of the patients meet excellent volume agreement requirement, while the majority of patients satisfy good agreement criteria (> 0.6). For all patients, the distance error was in 0-10 mm range for all segmented organs. In summary, we optimized and evaluated the accuracy of an MR to CT deformable registration algorithm. The registered images constitute a useful 3D whole-body MR-CT atlas suitable for the development and evaluation of novel MR-guided attenuation correction procedures on hybrid PET-MR systems.",
"title": ""
},
{
"docid": "c3fb97edabf2c4fa68cf45bb888e5883",
"text": "Multi-armed bandit problems are the predominant theoretical model of exploration-exploitation tradeoffs in learning, and they have countless applications ranging from medical trials, to communication networks, to Web search and advertising. In many of these application domains, the learner may be constrained by one or more supply (or budget) limits, in addition to the customary limitation on the time horizon. The literature lacks a general model encompassing these sorts of problems. We introduce such a model, called bandits with knapsacks, that combines bandit learning with aspects of stochastic integer programming. In particular, a bandit algorithm needs to solve a stochastic version of the well-known knapsack problem, which is concerned with packing items into a limited-size knapsack. A distinctive feature of our problem, in comparison to the existing regret-minimization literature, is that the optimal policy for a given latent distribution may significantly outperform the policy that plays the optimal fixed arm. Consequently, achieving sublinear regret in the bandits-with-knapsacks problem is significantly more challenging than in conventional bandit problems.\n We present two algorithms whose reward is close to the information-theoretic optimum: one is based on a novel “balanced exploration” paradigm, while the other is a primal-dual algorithm that uses multiplicative updates. Further, we prove that the regret achieved by both algorithms is optimal up to polylogarithmic factors. We illustrate the generality of the problem by presenting applications in a number of different domains, including electronic commerce, routing, and scheduling. As one example of a concrete application, we consider the problem of dynamic posted pricing with limited supply and obtain the first algorithm whose regret, with respect to the optimal dynamic policy, is sublinear in the supply.",
"title": ""
},
{
"docid": "e743bfe8c4f19f1f9a233106919c99a7",
"text": "We propose a general framework called Network Dissection for quantifying the interpretability of latent representations of CNNs by evaluating the alignment between individual hidden units and a set of semantic concepts. Given any CNN model, the proposed method draws on a data set of concepts to score the semantics of hidden units at each intermediate convolutional layer. The units with semantics are labeled across a broad range of visual concepts including objects, parts, scenes, textures, materials, and colors. We use the proposed method to test the hypothesis that interpretability is an axis-independent property of the representation space, then we apply the method to compare the latent representations of various networks when trained to solve different classification problems. We further analyze the effect of training iterations, compare networks trained with different initializations, and measure the effect of dropout and batch normalization on the interpretability of deep visual representations. We demonstrate that the proposed method can shed light on characteristics of CNN models and training methods that go beyond measurements of their discriminative power.",
"title": ""
},
{
"docid": "a14a9e61d9a13041d095e3db05b0900c",
"text": "Constructing distributed representations for words through neural language models and using the resulting vector spaces for analysis has become a crucial component of natural language processing (NLP). However, despite their widespread application, little is known about the structure and properties of these spaces. To gain insights into the relationship between words, the NLP community has begun to adapt high-dimensional visualization techniques. In particular, researchers commonly use t-distributed stochastic neighbor embeddings (t-SNE) and principal component analysis (PCA) to create two-dimensional embeddings for assessing the overall structure and exploring linear relationships (e.g., word analogies), respectively. Unfortunately, these techniques often produce mediocre or even misleading results and cannot address domain-specific visualization challenges that are crucial for understanding semantic relationships in word embeddings. Here, we introduce new embedding techniques for visualizing semantic and syntactic analogies, and the corresponding tests to determine whether the resulting views capture salient structures. Additionally, we introduce two novel views for a comprehensive study of analogy relationships. Finally, we augment t-SNE embeddings to convey uncertainty information in order to allow a reliable interpretation. Combined, the different views address a number of domain-specific tasks difficult to solve with existing tools.",
"title": ""
},
{
"docid": "636b0dd2a23a87f91b2820d70d687a37",
"text": "KNOWLEDGE is neither data nor information, though it is related to both, and the differences between these terms are often a matter of degree. We start with those more familiar terms both because they are more familiar and because we can understand knowledge best with reference to them. Confusion about what data, information, and knowledge are -how they differ, what those words mean -has resulted in enormous expenditures on technology initiatives that rarely deliver what the firms spending the money needed or thought they were getting. Often firms don't understand what they need until they invest heavily in a system that fails to provide it.",
"title": ""
},
{
"docid": "d22c8390e6ea9ea8c7a84e188cd10ba5",
"text": "BACKGROUND\nNutrition interventions targeted to individuals are unlikely to significantly shift US dietary patterns as a whole. Environmental and policy interventions are more promising for shifting these patterns. We review interventions that influenced the environment through food availability, access, pricing, or information at the point-of-purchase in worksites, universities, grocery stores, and restaurants.\n\n\nMETHODS\nThirty-eight nutrition environmental intervention studies in adult populations, published between 1970 and June 2003, were reviewed and evaluated on quality of intervention design, methods, and description (e.g., sample size, randomization). No policy interventions that met inclusion criteria were found.\n\n\nRESULTS\nMany interventions were not thoroughly evaluated or lacked important evaluation information. Direct comparison of studies across settings was not possible, but available data suggest that worksite and university interventions have the most potential for success. Interventions in grocery stores appear to be the least effective. The dual concerns of health and taste of foods promoted were rarely considered. Sustainability of environmental change was never addressed.\n\n\nCONCLUSIONS\nInterventions in \"limited access\" sites (i.e., where few other choices were available) had the greatest effect on food choices. Research is needed using consistent methods, better assessment tools, and longer durations; targeting diverse populations; and examining sustainability. Future interventions should influence access and availability, policies, and macroenvironments.",
"title": ""
},
{
"docid": "da18fa8e30c58f6b0039d8b1dc4b11a0",
"text": "Customer churn prediction is one of the key steps to maximize the value of customers for an enterprise. It is difficult to get satisfactory prediction effect by traditional models constructed on the assumption that the training and test data are subject to the same distribution, because the customers usually come from different districts and may be subject to different distributions in reality. This study proposes a feature-selection-based dynamic transfer ensemble (FSDTE) model that aims to introduce transfer learning theory for utilizing the customer data in both the target and related source domains. The model mainly conducts a two-layer feature selection. In the first layer, an initial feature subset is selected by GMDH-type neural network only in the target domain. In the second layer, several appropriate patterns from the source domain to target training set are selected, and some features with higher mutual information between them and the class variable are combined with the initial subset to construct a new feature subset. The selection in the second layer is repeated several times to generate a series of new feature subsets, and then, we train a base classifier in each one. Finally, a best base classifier is selected dynamically for each test pattern. The experimental results in two customer churn prediction datasets show that FSDTE can achieve better performance compared with the traditional churn prediction strategies, as well as three existing transfer learning strategies.",
"title": ""
},
{
"docid": "96356639b8df06ff61b3a33563b24a8b",
"text": "the objects, such as movie reviews, book reviews, and product reviews etc. Sentiment analysis is the mining the sentiment or opinion words and identification and analysis of the opinion and arguments in the text. In this paper, we proposed an ontology based combination approach to enhance the exits approaches of sentiment classifications and use supervised learning techniques for classifications.",
"title": ""
},
{
"docid": "e30df718ca1981175e888755cce3ce90",
"text": "Human identification at distance by analysis of gait patterns extracted from video has recently become very popular research in biometrics. This paper presents multi-projections based approach to extract gait patterns for human recognition. Binarized silhouette of a motion object is represented by 1-D signals which are the basic image features called the distance vectors. The distance vectors are differences between the bounding box and silhouette, and extracted using four projections to silhouette. Eigenspace transformation is applied to time-varying distance vectors and the statistical distance based supervised pattern classification is then performed in the lower-dimensional eigenspace for human identification. A fusion strategy developed is finally executed to produce final decision. Based on normalized correlation on the distance vectors, gait cycle estimation is also performed to extract the gait cycle. Experimental results on four databases demonstrate that the right person in top two matches 100% of the times for the cases where training and testing sets corresponds to the same walking styles, and in top three-four matches 100% of the times for training and testing sets corresponds to the different walking styles.",
"title": ""
},
{
"docid": "6445e510d1e3806b878ae07288d2578b",
"text": "The functionalization of polymeric substances is of great interest for the development of 15 innovative materials for advanced applications. For many decades, the functionalization of 16 chitosan has been a convenient way to improve its properties with the aim to prepare new 17 materials with specialized characteristics. In the present article, we summarize the latest methods 18 for the modification and derivatization of chitin and chitosan, trying to introduce specific 19 functional groups under experimental conditions, which allow a control over the macromolecular 20 architecture. This is motivated because an understanding of the interdependence between chemical 21 structure and properties is an important condition for proposing innovative materials. New 22 advances in methods and strategies of functionalization such as click chemistry approach, grafting 23 onto copolymerization, coupling with cyclodextrins and reactions in ionic liquids are discussed. 24",
"title": ""
},
{
"docid": "6d00ae440b45ddad03fb04f480c8c78c",
"text": "Collaborative Filtering (CF) is widely applied to personalized recommendation systems. Traditional collaborative filtering techniques make predictions through a user-item matrix of ratings which explicitly presents user preference. With the increasingly growing number of users and items, insufficient rating data still leads to the decreasing predictive accuracy with traditional collaborative filtering approaches. In the real world, however, many different types of user feedback, e.g. review, like or not, votes etc., co-exist in many online content providers. In this paper we integrate rating data with some other new types of user feedback and propose a multi-task matrix factorization model in order for flexibly using multiple data. We use a common user feature space shared across sub-models in this model and thus the model can simultaneously train the corresponding sub-models with every training sample. Our experiments indicate that new types of user feedback really work and show improvements on predictive accuracy compared to state-of-the-art algorithms.",
"title": ""
},
{
"docid": "471471cfc90e7f212dd7bbbee08d714e",
"text": "Every year, a large number of children in the United States enter the foster care system. Many of them are eventually reunited with their biological parents or quickly adopted. A significant number, however, face long-term foster care, and some of these children are eventually adopted by their foster parents. The decision by foster parents to adopt their foster child carries significant economic consequences, including forfeiting foster care payments while also assuming responsibility for medical, legal, and educational expenses, to name a few. Since 1980, U.S. states have begun to offer adoption subsidies to offset some of these expenses, significantly lowering the cost of adopting a child who is in the foster care system. This article presents empirical evidence of the role that these economic incentives play in foster parents’ decision of when, or if, to adopt their foster child. We find that adoption subsidies increase adoptions through two distinct price mechanisms: by lowering the absolute cost of adoption, and by lowering the relative cost of adoption versus long-term foster care.",
"title": ""
},
{
"docid": "647b76de7edbca25accdd65fed64d34e",
"text": "Despite the evidence that social video conveys rich human personality information, research investigating the automatic prediction of personality impressions in vlogging has shown that, amongst the Big-Five traits, automatic nonverbal behavioral cues are useful to predict mainly the Extraversion trait. This finding, also reported in other conversational settings, indicates that personality information may be coded in other behavioral dimensions like the verbal channel, which has been less studied in multimodal interaction research. In this paper, we address the task of predicting personality impressions from vloggers based on what they say in their YouTube videos. First, we use manual transcripts of vlogs and verbal content analysis techniques to understand the ability of verbal content for the prediction of crowdsourced Big-Five personality impressions. Second, we explore the feasibility of a fully-automatic framework in which transcripts are obtained using automatic speech recognition (ASR). Our results show that the analysis of error-free verbal content is useful to predict four of the Big-Five traits, three of them better than using nonverbal cues, and that the errors caused by the ASR system decrease the performance significantly.",
"title": ""
},
{
"docid": "ad558d1f3d5ab563ade2e606464b7ca0",
"text": "Recently, densified small cell deployment with overlay coverage through coexisting heterogeneous networks has emerged as a viable solution for 5G mobile networks. However, this multi-tier architecture along with stringent latency requirements in 5G brings new challenges in security provisioning due to the potential frequent handovers and authentications in 5G small cells and HetNets. In this article, we review related studies and introduce SDN into 5G as a platform to enable efficient authentication hand-over and privacy protection. Our objective is to simplify authentication handover by global management of 5G HetNets through sharing of userdependent security context information among related access points. We demonstrate that SDN-enabled security solutions are highly efficient through its centralized control capability, which is essential for delay-constrained 5G communications.",
"title": ""
},
{
"docid": "3c017a50302e8a09eff32b97474433a1",
"text": "Few concepts embody the goals of artificial intelligence as well as fully autonomous robots. Countless films and stories have been made that focus on a future filled with autonomous agents that complete menial tasks or run errands that humans do not want or are too busy to carry out. One such task is driving automobiles. In this paper, we summarize the work we have done towards a future of fully-autonomous vehicles, specifically coordinating such vehicles safely and efficiently at intersections. We then discuss the implications this work has for other areas of AI, including planning, multiagent learning, and computer vision.",
"title": ""
},
{
"docid": "ba89a62ac2d1b36738e521d4c5664de2",
"text": "Currently, the network traffic control systems are mainly composed of the Internet core and wired/wireless heterogeneous backbone networks. Recently, these packet-switched systems are experiencing an explosive network traffic growth due to the rapid development of communication technologies. The existing network policies are not sophisticated enough to cope with the continually varying network conditions arising from the tremendous traffic growth. Deep learning, with the recent breakthrough in the machine learning/intelligence area, appears to be a viable approach for the network operators to configure and manage their networks in a more intelligent and autonomous fashion. While deep learning has received a significant research attention in a number of other domains such as computer vision, speech recognition, robotics, and so forth, its applications in network traffic control systems are relatively recent and garnered rather little attention. In this paper, we address this point and indicate the necessity of surveying the scattered works on deep learning applications for various network traffic control aspects. In this vein, we provide an overview of the state-of-the-art deep learning architectures and algorithms relevant to the network traffic control systems. Also, we discuss the deep learning enablers for network systems. In addition, we discuss, in detail, a new use case, i.e., deep learning based intelligent routing. We demonstrate the effectiveness of the deep learning-based routing approach in contrast with the conventional routing strategy. Furthermore, we discuss a number of open research issues, which researchers may find useful in the future.",
"title": ""
}
] |
scidocsrr
|
2faf063cd213d639c8aaad3b0a2722e4
|
Gender identity development in adolescence
|
[
{
"docid": "1cdd599b49d9122077a480a75391aae8",
"text": "Two aspects of children's early gender development-the spontaneous production of gender labels and gender-typed play-were examined longitudinally in a sample of 82 children. Survival analysis, a statistical technique well suited to questions involving developmental transitions, was used to investigate the timing of the onset of children's gender labeling as based on mothers' biweekly telephone interviews regarding their children's language from 9 through 21 months. Videotapes of children's play both alone and with mother during home visits at 17 and 21 months were independently analyzed for play with gender-stereotyped and gender-neutral toys. Finally, the relation between gender labeling and gender-typed play was examined. Children transitioned to using gender labels at approximately 19 months, on average. Although girls and boys showed similar patterns in the development of gender labeling, girls began labeling significantly earlier than boys. Modest sex differences in play were present at 17 months and increased at 21 months. Gender labeling predicted increases in gender-typed play, suggesting that knowledge of gender categories might influence gender typing before the age of 2.",
"title": ""
},
{
"docid": "558abc8028d1d5b6956d2cf046efb983",
"text": "A key question concerns the extent to which sexual differentiation of human behavior is influenced by sex hormones present during sensitive periods of development (organizational effects), as occurs in other mammalian species. The most important sensitive period has been considered to be prenatal, but there is increasing attention to puberty as another organizational period, with the possibility of decreasing sensitivity to sex hormones across the pubertal transition. In this paper, we review evidence that sex hormones present during the prenatal and pubertal periods produce permanent changes to behavior. There is good evidence that exposure to high levels of androgens during prenatal development results in masculinization of activity and occupational interests, sexual orientation, and some spatial abilities; prenatal androgens have a smaller effect on gender identity, and there is insufficient information about androgen effects on sex-linked behavior problems. There is little good evidence regarding long-lasting behavioral effects of pubertal hormones, but there is some suggestion that they influence gender identity and perhaps some sex-linked forms of psychopathology, and there are many opportunities to study this issue.",
"title": ""
},
{
"docid": "6d45e9d4d1f46debcbf1b95429be60fd",
"text": "Sex differences in cortical thickness (CTh) have been extensively investigated but as yet there are no reports on CTh in transsexuals. Our aim was to determine whether the CTh pattern in transsexuals before hormonal treatment follows their biological sex or their gender identity. We performed brain magnetic resonance imaging on 94 subjects: 24 untreated female-to-male transsexuals (FtMs), 18 untreated male-to-female transsexuals (MtFs), and 29 male and 23 female controls in a 3-T TIM-TRIO Siemens scanner. T1-weighted images were analyzed to obtain CTh and volumetric subcortical measurements with FreeSurfer software. CTh maps showed control females have thicker cortex than control males in the frontal and parietal regions. In contrast, males have greater right putamen volume. FtMs had a similar CTh to control females and greater CTh than males in the parietal and temporal cortices. FtMs had larger right putamen than females but did not differ from males. MtFs did not differ in CTh from female controls but had greater CTh than control males in the orbitofrontal, insular, and medial occipital regions. In conclusion, FtMs showed evidence of subcortical gray matter masculinization, while MtFs showed evidence of CTh feminization. In both types of transsexuals, the differences with respect to their biological sex are located in the right hemisphere.",
"title": ""
},
{
"docid": "2b8296f8760e826046cd039c58026f83",
"text": "This study provided a descriptive and quantitative comparative analysis of data from an assessment protocol for adolescents referred clinically for gender identity disorder (n = 192; 105 boys, 87 girls) or transvestic fetishism (n = 137, all boys). The protocol included information on demographics, behavior problems, and psychosexual measures. Gender identity disorder and transvestic fetishism youth had high rates of general behavior problems and poor peer relations. On the psychosexual measures, gender identity disorder patients had considerably greater cross-gender behavior and gender dysphoria than did transvestic fetishism youth and other control youth. Male gender identity disorder patients classified as having a nonhomosexual sexual orientation (in relation to birth sex) reported more indicators of transvestic fetishism than did male gender identity disorder patients classified as having a homosexual sexual orientation (in relation to birth sex). The percentage of transvestic fetishism youth and male gender identity disorder patients with a nonhomosexual sexual orientation self-reported similar degrees of behaviors pertaining to transvestic fetishism. Last, male and female gender identity disorder patients with a homosexual sexual orientation had more recalled cross-gender behavior during childhood and more concurrent cross-gender behavior and gender dysphoria than did patients with a nonhomosexual sexual orientation. The authors discuss the clinical utility of their assessment protocol.",
"title": ""
}
] |
[
{
"docid": "ee19f23ddd9aaf77923cb3a7607b67fa",
"text": "With worldwide shipments of smartphones (487.7 million) exceeding PCs (414.6 million including tablets) in 2011, and in the US alone, more users predicted to access the Internet from mobile devices than from PCs by 2015, clearly there is a desire to be able to use mobile devices and networks like we use PCs and wireline networks today. However, in spite of advances in the capabilities of mobile devices, a gap will continue to exist, and may even widen, with the requirements of rich multimedia applications. Mobile cloud computing can help bridge this gap, providing mobile applications the capabilities of cloud servers and storage together with the benefits of mobile devices and mobile connectivity, possibly enabling a new generation of truly ubiquitous multimedia applications on mobile devices: Cloud Mobile Media (CMM) applications.",
"title": ""
},
{
"docid": "66d24e13c8ac0dc5c0e85b3e2873346c",
"text": "In advanced CMOS technologies, the negative bias temperature instability (NBTI) phenomenon in pMOSFETs is a major reliability concern as well as a limiting factor in future device scaling. Recently, much effort has been expended to further the basic understanding of this mechanism. This tutorial gives an overview of the physics of NBTI. Discussions include such topics as the impact of NBTI on the observed changes in the device characteristics as well as the impact of gate oxide processes on the physics of NBTI. Current experimental results, exploring various NBTI effects such as frequency dependence and relaxation, are also discussed. Since some of the recent work on the various NBTI effects seems contradictory, focus is placed on highlighting our current understanding, our open questions and our future challenges.",
"title": ""
},
{
"docid": "e7f771269ee99c04c69d1a7625a4196f",
"text": "This report is a summary of Device-associated (DA) Module data collected by hospitals participating in the National Healthcare Safety Network (NHSN) for events occurring from January through December 2010 and re ported to the Centers for Disease Control and Prevention (CDC) by July 7, 2011. This report updates previously published DA Module data from the NHSN and provides contemporary comparative rates. This report comple ments other NHSN reports, including national and state-specific reports of standardized infection ratios for select health care-associated infections (HAIs). The NHSN was established in 2005 to integrate and supersede 3 legacy surveillance systems at the CDC: the National Nosocomial Infections Surveillance system, the Dialysis Surveillance Network, and the National Sur veillance System for Healthcare Workers. NHSN data col lection, reporting, and analysis are organized into 3 components—Patient Safety, Healthcare Personnel",
"title": ""
},
{
"docid": "28a86caf1d86c58941f72c71699fabb1",
"text": "Dicing of ultrathin (e.g. <; 75um thick) “via-middle” 3DI/TSV semiconductor wafers proves to be challenging because the process flow requires the dicing step to occur after wafer thinning and back side processing. This eliminates the possibility of using any type of “dice-before-grind” techniques. In addition, the presence of back side alignment marks, TSVs, or other features in the dicing street can add challenges for the dicing process. In this presentation, we will review different dicing processes used for 3DI/TSV via-middle products. Examples showing the optimization process for a 3DI/TSV memory device wafer product are provided.",
"title": ""
},
{
"docid": "6087ad77caa9947591eb9a3f8b9b342d",
"text": "Geobacter sulfurreducens is a well-studied representative of the Geobacteraceae, which play a critical role in organic matter oxidation coupled to Fe(III) reduction, bioremediation of groundwater contaminated with organics or metals, and electricity production from waste organic matter. In order to investigate G. sulfurreducens central metabolism and electron transport, a metabolic model which integrated genome-based predictions with available genetic and physiological data was developed via the constraint-based modeling approach. Evaluation of the rates of proton production and consumption in the extracellular and cytoplasmic compartments revealed that energy conservation with extracellular electron acceptors, such as Fe(III), was limited relative to that associated with intracellular acceptors. This limitation was attributed to lack of cytoplasmic proton consumption during reduction of extracellular electron acceptors. Model-based analysis of the metabolic cost of producing an extracellular electron shuttle to promote electron transfer to insoluble Fe(III) oxides demonstrated why Geobacter species, which do not produce shuttles, have an energetic advantage over shuttle-producing Fe(III) reducers in subsurface environments. In silico analysis also revealed that the metabolic network of G. sulfurreducens could synthesize amino acids more efficiently than that of Escherichia coli due to the presence of a pyruvate-ferredoxin oxidoreductase, which catalyzes synthesis of pyruvate from acetate and carbon dioxide in a single step. In silico phenotypic analysis of deletion mutants demonstrated the capability of the model to explore the flexibility of G. sulfurreducens central metabolism and correctly predict mutant phenotypes. These results demonstrate that iterative modeling coupled with experimentation can accelerate the understanding of the physiology of poorly studied but environmentally relevant organisms and may help optimize their practical applications.",
"title": ""
},
{
"docid": "b1789c3522ae188b3838a09d764e460f",
"text": "Saliency in Context (SALICON) is an ongoing effort that aims at understanding and predicting visual attention. Conventional saliency models typically rely on low-level image statistics to predict human fixations. While these models perform significantly better than chance, there is still a large gap between model prediction and human behavior. This gap is largely due to the limited capability of models in predicting eye fixations with strong semantic content, the so-called semantic gap. This paper presents a focused study to narrow the semantic gap with an architecture based on Deep Neural Network (DNN). It leverages the representational power of high-level semantics encoded in DNNs pretrained for object recognition. Two key components are fine-tuning the DNNs fully convolutionally with an objective function based on the saliency evaluation metrics, and integrating information at different image scales. We compare our method with 14 saliency models on 6 public eye tracking benchmark datasets. Results demonstrate that our DNNs can automatically learn features particularly for saliency prediction that surpass by a big margin the state-of-the-art. In addition, our model ranks top to date under all seven metrics on the MIT300 challenge set.",
"title": ""
},
{
"docid": "bb547f90a98aa25d0824dc63b9de952d",
"text": "When designing distributed web services, there are three properties that are commonly desired: consistency, availability, and partition tolerance. It is impossible to achieve all three. In this note, we prove this conjecture in the asynchronous network model, and then discuss solutions to this dilemma in the partially synchronous model.",
"title": ""
},
{
"docid": "e05f857b063275500cf54d4596c646d4",
"text": "This paper is a contribution to the electric modeling of electrochemical cells. Specifically, cells for a new copper electrowinning process, which uses bipolar electrodes, are studied. Electrowinning is used together with solvent extraction and has gained great importance, due to its significant cost and environmental advantages, as compared to other copper reduction methods. Current electrowinning cells use unipolar electrodes connected electrically in parallel. Instead, bipolar electrodes, are connected in series. They are also called floating, because they are not wire-connected, but just immersed in the electrolyte. The main advantage of this technology is that, for the same copper production, a cell requires a much lower DC current, as compared with the unipolar case. This allows the cell to be supplied from a modular and compact PWM rectifier instead of a bulk high current thyristor rectifier, having a significant economic impact. In order to study the quality of the copper, finite difference algorithms in two dimensions are derived to obtain the distribution of the potential and the electric field inside the cell. Different geometrical configurations of cell and floating electrodes are analyzed. The proposed method is a useful tool for analysis and design of electrowinning cells, reducing the time-consuming laboratory implementations.",
"title": ""
},
{
"docid": "ee4288bcddc046ae5e9bcc330264dc4f",
"text": "Emerging recognition of two fundamental errors underpinning past polices for natural resource issues heralds awareness of the need for a worldwide fundamental change in thinking and in practice of environmental management. The first error has been an implicit assumption that ecosystem responses to human use are linear, predictable and controllable. The second has been an assumption that human and natural systems can be treated independently. However, evidence that has been accumulating in diverse regions all over the world suggests that natural and social systems behave in nonlinear ways, exhibit marked thresholds in their dynamics, and that social-ecological systems act as strongly coupled, complex and evolving integrated systems. This article is a summary of a report prepared on behalf of the Environmental Advisory Council to the Swedish Government, as input to the process of the World Summit on Sustainable Development (WSSD) in Johannesburg, South Africa in 26 August 4 September 2002. We use the concept of resilience--the capacity to buffer change, learn and develop--as a framework for understanding how to sustain and enhance adaptive capacity in a complex world of rapid transformations. Two useful tools for resilience-building in social-ecological systems are structured scenarios and active adaptive management. These tools require and facilitate a social context with flexible and open institutions and multi-level governance systems that allow for learning and increase adaptive capacity without foreclosing future development options.",
"title": ""
},
{
"docid": "a208187fc81a633ac9332ee11567b1a7",
"text": "Hardware implementations of spiking neurons can be extremely useful for a large variety of applications, ranging from high-speed modeling of large-scale neural systems to real-time behaving systems, to bidirectional brain-machine interfaces. The specific circuit solutions used to implement silicon neurons depend on the application requirements. In this paper we describe the most common building blocks and techniques used to implement these circuits, and present an overview of a wide range of neuromorphic silicon neurons, which implement different computational models, ranging from biophysically realistic and conductance-based Hodgkin-Huxley models to bi-dimensional generalized adaptive integrate and fire models. We compare the different design methodologies used for each silicon neuron design described, and demonstrate their features with experimental results, measured from a wide range of fabricated VLSI chips.",
"title": ""
},
{
"docid": "c0e99b3b346ef219e8898c3608d2664f",
"text": "A depth image-based rendering (DIBR) technique is one of the rendering processes of virtual views with a color image and the corresponding depth map. The most important issue of DIBR is that the virtual view has no information at newly exposed areas, so called disocclusion. The general solution is to smooth the depth map using a Gaussian smoothing filter before 3D warping. However, the filtered depth map causes geometric distortion and the depth quality is seriously degraded. Therefore, we propose a new depth map filtering algorithm to solve the disocclusion problem while maintaining the depth quality. In order to preserve the visual quality of the virtual view, we smooth the depth map with further reduced deformation. After extracting object boundaries depending on the position of the virtual view, we apply a discontinuity-adaptive smoothing filter according to the distance of the object boundary and the amount of depth discontinuities. Finally, we obtain the depth map with higher quality compared to other methods. Experimental results showed that the disocclusion is efficiently removed and the visual quality of the virtual view is maintained.",
"title": ""
},
{
"docid": "cc220d8ae1fa77b9e045022bef4a6621",
"text": "Cuneiform tablets appertain to the oldest textual artifacts and are in extent comparable to texts written in Latin or ancient Greek. The Cuneiform Commentaries Project (CPP) from Yale University provides tracings of cuneiform tablets with annotated transliterations and translations. As a part of our work analyzing cuneiform script computationally with 3D-acquisition and word-spotting, we present a first approach for automatized learning of transliterations of cuneiform tablets based on a corpus of parallel lines. These consist of manually drawn cuneiform characters and their transliteration into an alphanumeric code. Since the Cuneiform script is only available as raster-data, we segment lines with a projection profile, extract Histogram of oriented Gradients (HoG) features, detect outliers caused by tablet damage, and align those features with the transliteration. We apply methods from part-of-speech tagging to learn a correspondence between features and transliteration tokens. We evaluate point-wise classification with K-Nearest Neighbors (KNN) and a Support Vector Machine (SVM); sequence classification with a Hidden Markov Model (HMM) and a Structured Support Vector Machine (SVM-HMM). Analyzing our findings, we reach the conclusion that the sparsity of data, inconsistent labeling and the variety of tracing styles do currently not allow for fully automatized transliterations with the presented approach. However, the pursuit of automated learning of transliterations is of great relevance as manual annotation in larger quantities is not viable, given the few experts capable of transcribing cuneiform tablets.",
"title": ""
},
{
"docid": "5b9ca6d2cec03c771e89fe8e5dd23012",
"text": "Posttraumatic agitation is a challenging problem for acute and rehabilitation staff, persons with traumatic brain injury, and their families. Specific variables for evaluation and care remain elusive. Clinical trials have not yielded a strong foundation for evidence-based practice in this arena. This review seeks to evaluate the present literature (with a focus on the decade 1995-2005) and employ previous clinical experience to deliver a review of the topic. We will discuss definitions, pathophysiology, evaluation techniques, and treatment regimens. A recommended approach to the evaluation and treatment of the person with posttraumatic agitation will be presented. The authors hope that this review will spur discussion and assist in facilitating clinical care paradigms and research programs.",
"title": ""
},
{
"docid": "39991ac199197e44aaf1a0d656175963",
"text": "Weakly-supervised object localization methods tend to fail for object classes that consistently co-occur with the same background elements, e.g. trains on tracks. We propose a method to overcome these failures by adding a very small amount of modelspecific additional annotation. The main idea is to cluster a deep network’s mid-level representations and assign object or distractor labels to each cluster. Experiments show substantially improved localization results on the challenging ILSVC2014 dataset for bounding box detection and the PASCAL VOC2012 dataset for semantic segmentation.",
"title": ""
},
{
"docid": "64c44342abbce474e21df67c0a5cc646",
"text": "In this paper it is shown that the principal eigenvector is a necessary representation of the priorities derived from a positive reciprocal pairwise comparison judgment matrix A 1⁄4 ðaijÞ when A is a small perturbation of a consistent matrix. When providing numerical judgments, an individual attempts to estimate sequentially an underlying ratio scale and its equivalent consistent matrix of ratios. Near consistent matrices are essential because when dealing with intangibles, human judgment is of necessity inconsistent, and if with new information one is able to improve inconsistency to near consistency, then that could improve the validity of the priorities of a decision. In addition, judgment is much more sensitive and responsive to large rather than to small perturbations, and hence once near consistency is attained, it becomes uncertain which coefficients should be perturbed by small amounts to transform a near consistent matrix to a consistent one. If such perturbations were forced, they could be arbitrary and thus distort the validity of the derived priority vector in representing the underlying decision. 2002 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "0bce954374d27d4679eb7562350674fc",
"text": "Humanoid robotics is attracting the interest of many research groups world-wide. In particular, developing humanoids requires the implementation of manipulation capabilities, which is still a most complex problem in robotics. This paper presents an overview of current activities in the development of humanoid robots, with special focus on manipulation. Then we discuss our current approach to the design and development of anthropomorphic sensorized hand and of anthropomorphic control and sensory-motor coordination schemes. Current achievements in the development of a robotic human hand prosthesis are described, together with preliminary experimental results, as well as in the implementation of biologically-inspired schemes for control and sensory-motor co-ordination in manipulation, derived from models of well-identified human brain areas.",
"title": ""
},
{
"docid": "ebd8e2cfc51e78fbf6772128d8e4e479",
"text": "This paper uses delaying functions, functions that require signiicant calculation time, in the development of a one-pass lottery scheme in which winners are chosen fairly using only internal information. Since all this information may be published (even before the lottery closes), anyone can do the calculation and therefore verify that the winner was chosen correctly. Since the calculation uses a delaying function, ticket purchasers cannot take advantage of this information. Fraud on the part of the lottery agent is detectable and no single ticket purchaser needs to be trusted. Coalitions of purchasers attempting to control the winning ticket calculation are either unsuccessful or are detected. The scheme can be made resistant to coalitions of arbitrary size. Since we assume that coalitions of larger size are harder to assemble, the probability that the lottery is fair can be made arbitrarily high. The paper deenes delaying functions and contrasts them with pricing functions 8] and time-lock puzzles 16].",
"title": ""
},
{
"docid": "f4e7e0ea60d9697e8fea434990409c16",
"text": "Prognostics is very useful to predict the degradation trend of machinery and to provide an alarm before a fault reaches critical levels. This paper proposes an ARIMA approach to predict the future machine status with accuracy improvement by an improved forecasting strategy and an automatic prediction algorithm. Improved forecasting strategy increases the times of model building and creates datasets for modeling dynamically to avoid using the previous values predicted to forecast and generate the predictions only based on the true observations. Automatic prediction algorithm can satisfy the requirement of real-time prognostics by automates the whole process of ARIMA modeling and forecasting based on the Box-Jenkins's methodology and the improved forecasting strategy. The feasibility and effectiveness of the approach proposed is demonstrated through the prediction of the vibration characteristic in rotating machinery. The experimental results show that the approach can be applied successfully and effectively for prognostics of machine health condition.",
"title": ""
},
{
"docid": "06907205e1fd513f0d1ddef33b92e40c",
"text": "Better shape priors improve the mask accuracy and reduce false removal. Moving down the table from the no prior case to the box priors and then to the class specific shape priors from the Pascal dataset masks the masks smaller, improves the mIoU and also reduces the false removal rate. Input image Global loss Local loss Qualitative comparison of global vs local loss. Local real-fake loss improves the in-painting results producing sharper, texture-rich images, compared to smooth blurry results obtained by the global loss. References",
"title": ""
},
{
"docid": "1fe0bfec531eac34bd81a11b3d5cf1ab",
"text": "We demonstrate an advanced ReRAM based analog artificial synapse for neuromorphic systems. Nitrogen doped TiN/PCMO based artificial synapse is proposed to improve the performance and reliability of the neuromorphic systems by using simple identical spikes. For the first time, we develop fully unsupervised learning with proposed analog synapses which is illustrated with the help of auditory and electroencephalography (EEG) applications.",
"title": ""
}
] |
scidocsrr
|
29138495be0fcf49833b85c6b3ba3b1a
|
Government-Driven Participation and Collective Intelligence: A Case of the Government 3.0 Initiative in Korea
|
[
{
"docid": "0f208f26191386dd5c868fa3cc7c7b31",
"text": "This paper revisits the data–information–knowledge–wisdom (DIKW) hierarchy by examining the articulation of the hierarchy in a number of widely read textbooks, and analysing their statements about the nature of data, information, knowledge, and wisdom. The hierarchy referred to variously as the ‘Knowledge Hierarchy’, the ‘Information Hierarchy’ and the ‘Knowledge Pyramid’ is one of the fundamental, widely recognized and ‘taken-for-granted’ models in the information and knowledge literatures. It is often quoted, or used implicitly, in definitions of data, information and knowledge in the information management, information systems and knowledge management literatures, but there has been limited direct discussion of the hierarchy. After revisiting Ackoff’s original articulation of the hierarchy, definitions of data, information, knowledge and wisdom as articulated in recent textbooks in information systems and knowledge management are reviewed and assessed, in pursuit of a consensus on definitions and transformation processes. This process brings to the surface the extent of agreement and dissent in relation to these definitions, and provides a basis for a discussion as to whether these articulations present an adequate distinction between data, information, and knowledge. Typically information is defined in terms of data, knowledge in terms of information, and wisdom in terms of knowledge, but there is less consensus in the description of the processes that transform elements lower in the hierarchy into those above them, leading to a lack of definitional clarity. In addition, there is limited reference to wisdom in these texts.",
"title": ""
}
] |
[
{
"docid": "19c439bd0a7e9b5287ad56b9321dd081",
"text": "Recommendations of products to customers are proved to boost sales, increase customer satisfaction and improve user experience, making recommender systems an important tool for retail businesses. With recent technological advancements in AmI and Ubiquitous Computing, the benefits of recommender systems can be enjoyed not only in e-commerce, but in the physical store scenario as well. However, developing effective context-aware recommender systems by non-expert practitioners is not an easy task due to the complexity of building the necessary data models and selecting and configuring recommendation algorithms. In this paper we apply the Model Driven Development paradigm on the physical commerce recommendation domain by defining a UbiCARS Domain Specific Modelling Language, a modelling editor and a system, that aim to reduce complexity, abstract the technical details and expedite the development and application of State-of-the-Art recommender systems in ubiquitous environments (physical retail stores), as well as to enable practitioners to utilize additional data resulting from ubiquitous user-product interaction in the recommendation process to improve recommendation accuracy.",
"title": ""
},
{
"docid": "125353c682f076f7ad4f75b08b97280b",
"text": "This paper describes a novel conformal surface wave (CSW) launcher that can excite electromagnetic surface waves along unshielded power line cables nonintrusively. This CSW launcher can detect open circuit faults on power cables. Unlike conventional horn-type launchers, this CSW launcher is small, lightweight, and cost effective, and can be placed easily on a power cable. For a nonintrusive open fault detection, the error is <; 5% when the cable length is <; 10 m, which is comparable with other direct-connect fault-finding techniques. For a cable length of 15.14 m, 7.6% error is noted. Besides cable fault detection, the potential applications of the proposed launcher include broadband power line communication and high-frequency power transmission.",
"title": ""
},
{
"docid": "b269bb721ca2a75fd6291295493b7af8",
"text": "This publication contains reprint articles for which IEEE does not hold copyright. Full text is not available on IEEE Xplore for these articles.",
"title": ""
},
{
"docid": "8ee9ae8afd88a761d9db6128f736bbea",
"text": "Semantic relatedness measures quantify the degree in which some words or concepts are related, considering not only similarity but any possible semantic relationship among them. Relatedness computation is of great interest in different areas, such as Natural Language Processing, Information Retrieval, or the Semantic Web. Different methods have been proposed in the past; however, current relatedness measures lack some desirable properties for a new generation of Semantic Web applications: maximum coverage, domain independence, and universality. In this paper, we explore the use of a semantic relatedness measure between words, that uses the Web as knowledge source. This measure exploits the information about frequencies of use provided by existing search engines. Furthermore, taking this measure as basis, we define a new semantic relatedness measure among ontology terms. The proposed measure fulfils the above mentioned desirable properties to be used on the Semantic Web. We have tested extensively this semantic measure to show that it correlates well with human judgment, and helps solving some particular tasks, as word sense disambiguation or ontology matching.",
"title": ""
},
{
"docid": "bc8780078bef1e7c602e16dcf3ccf0bc",
"text": "In this paper, we deal with the problem of authentication and tamper-proofing of text documents that can be distributed in electronic or printed forms. We advocate the combination of robust text hashing and text data-hiding technologies as an efficient solution to this problem. First, we consider the problem of text data-hiding in the scope of the Gel'fand-Pinsker data-hiding framework. For illustration, two modern text data-hiding methods, namely color index modulation (CIM) and location index modulation (LIM), are explained. Second, we study two approaches to robust text hashing that are well suited for the considered problem. In particular, both approaches are compatible with CIM and LIM. The first approach makes use of optical character recognition (OCR) and a classical cryptographic message authentication code (MAC). The second approach is new and can be used in some scenarios where OCR does not produce consistent results. The experimental work compares both approaches and shows their robustness against typical intentional/unintentional document distortions including electronic format conversion, printing, scanning, [...] VILLAN SEBASTIAN, Renato Fisher, et al. Tamper-proofing of Electronic and Printed Text Documents via Robust Hashing and Data-Hiding. In: Proceedings of SPIE-IS&T Electronic Imaging 2007, Security, Steganography, and Watermarking of Multimedia",
"title": ""
},
{
"docid": "0d669a684c2c65afef96438f88a9a84d",
"text": "STUDY OBJECTIVE\nTo describe the daily routine application of a new telemonitoring system in a large population of cardiac device recipients.\n\n\nMETHODS\nData transmitted daily and automatically by a remote, wireless Home Monitoring system (HM) were analyzed. The average time gained in the detection of events using HM versus standard practice and the impact of HM on physician workload were examined. The mean interval between device interrogations was used to compare the rates of follow-up visits versus that recommended in guidelines.\n\n\nRESULTS\n3,004,763 transmissions were made by 11,624 recipients of pacemakers (n = 4,631), defibrillators (ICD; n = 6,548), and combined ICD + cardiac resynchronization therapy (CRT-D) systems (n = 445) worldwide. The duration of monitoring/patient ranged from 1 to 49 months, representing 10,057 years. The vast majority (86%) of events were disease-related. The mean interval between last follow-up and occurrence of events notified by HM was 26 days, representing a putative temporal gain of 154 and 64 days in patients usually followed at 6- and 3-month intervals, respectively. The mean numbers of events per patient per month reported to the caregivers for the overall population was 0.6. On average, 47.6% of the patients were event-free. The mean interval between follow-up visits in patients with pacemakers, single-chamber ICDs, dual chamber ICDs, and CRT-D systems were 5.9 +/- 2.1, 3.6 +/- 3.3, 3.3 +/- 3.5, and 1.9 +/- 2.9 months, respectively.\n\n\nCONCLUSIONS\nThis broad clinical application of a new monitoring system strongly supports its capability to improve the care of cardiac device recipients, enhance their safety, and optimize the allocation of health resources.",
"title": ""
},
{
"docid": "4c48f4912937f429c80e52d66609f657",
"text": "Fetus in fetu is a rare developmental aberration, characterized by encasement of partially developed monozygotic, diamniotic, and monochorionic fetus into the normally developing host. A 4-month-old boy presented with abdominal mass. Radiological investigations gave the suspicion of fetus in fetu. At surgery a fetus enclosed in an amnion like membrane at upper retroperitoneal location was found and excised. The patient is doing well after the operation.",
"title": ""
},
{
"docid": "1352bb015fea7badea4e9d15f3af4030",
"text": "We present an overview of the QUT plant classification system submitted to LifeCLEF 2014. This system uses generic features extracted from a convolutional neural network previously used to perform general object classification. We examine the effectiveness of these features to perform plant classification when used in combination with an extremely randomised forest. Using this system, with minimal tuning, we obtained relatively good results with a score of 0.249 on the test set of LifeCLEF 2014.",
"title": ""
},
{
"docid": "44aa302a4fcb1793666b6aedc9aa5798",
"text": "Unite neuroscience, supercomputing, and nanotechnology to discover, demonstrate, and deliver the brain's core algorithms.",
"title": ""
},
{
"docid": "e6c7713b9ff08aa01d98c9fec77ebf7a",
"text": "Everyday many users purchases product, book travel tickets, buy goods and services through web. Users also share their views about product, hotel, news, and topic on web in the form of reviews, blogs, comments etc. Many users read review information given on web to take decisions such as buying products, watching movie, going to restaurant etc. Reviews contain user's opinion about product, event or topic. It is difficult for web users to read and understand contents from large number of reviews. Important and useful information can be extracted from reviews through opinion mining and summarization process. We presented machine learning and Senti Word Net based method for opinion mining from hotel reviews and sentence relevance score based method for opinion summarization of hotel reviews. We obtained about 87% of accuracy of hotel review classification as positive or negative review by machine learning method. The classified and summarized hotel review information helps web users to understand review contents easily in a short time.",
"title": ""
},
{
"docid": "ad3437a7458e9152f3eb451e5c1af10f",
"text": "In recent years the number of academic publication increased strongly. As this information flood grows, it becomes more difficult for researchers to find relevant literature effectively. To overcome this difficulty, recommendation systems can be used which often utilize text similarity to find related documents. To improve those systems we add scientometrics as a ranking measure for popularity into these algorithms. In this paper we analyse whether and how scientometrics are useful in a recommender system.",
"title": ""
},
{
"docid": "3a549571e281b9b381a347fb49953d2c",
"text": "Social media has been gaining popularity among university students who use social media at higher rates than the general population. Students consequently spend a significant amount of time on social media, which may inevitably have an effect on their academic engagement. Subsequently, scholars have been intrigued to examine the impact of social media on students' academic engagement. Research that has directly explored the use of social media and its impact on students in tertiary institutions has revealed limited and mixed findings, particularly within a South African context; thus leaving a window of opportunity to further investigate the impact that social media has on students' academic engagement. This study therefore aims to investigate the use of social media in tertiary institutions, the impact that the use thereof has on students' academic engagement and to suggest effective ways of using social media in tertiary institutions to improve students' academic engagement from students' perspectives. This study used an interpretivist (inductive) approach in order to determine and comprehend student's perspectives and experiences towards the use of social media and the effects thereof on their academic engagement. A single case study design at Rhodes University was used to determine students' perceptions and data was collected using an online survey. The findings reveal that students use social media for both social and academic purposes. Students further perceived that social media has a positive impact on their academic engagement and suggest that using social media at tertiary level could be advantageous and could enhance students' academic engagement.",
"title": ""
},
{
"docid": "45f2599c6a256b55ee466c258ba93f48",
"text": "Functional turnover of transcription factor binding sites (TFBSs), such as whole-motif loss or gain, are common events during genome evolution. Conventional probabilistic phylogenetic shadowing methods model the evolution of genomes only at nucleotide level, and lack the ability to capture the evolutionary dynamics of functional turnover of aligned sequence entities. As a result, comparative genomic search of non-conserved motifs across evolutionarily related taxa remains a difficult challenge, especially in higher eukaryotes, where the cis-regulatory regions containing motifs can be long and divergent; existing methods rely heavily on specialized pattern-driven heuristic search or sampling algorithms, which can be difficult to generalize and hard to interpret based on phylogenetic principles. We propose a new method: Conditional Shadowing via Multi-resolution Evolutionary Trees, or CSMET, which uses a context-dependent probabilistic graphical model that allows aligned sites from different taxa in a multiple alignment to be modeled by either a background or an appropriate motif phylogeny conditioning on the functional specifications of each taxon. The functional specifications themselves are the output of a phylogeny which models the evolution not of individual nucleotides, but of the overall functionality (e.g., functional retention or loss) of the aligned sequence segments over lineages. Combining this method with a hidden Markov model that autocorrelates evolutionary rates on successive sites in the genome, CSMET offers a principled way to take into consideration lineage-specific evolution of TFBSs during motif detection, and a readily computable analytical form of the posterior distribution of motifs under TFBS turnover. On both simulated and real Drosophila cis-regulatory modules, CSMET outperforms other state-of-the-art comparative genomic motif finders.",
"title": ""
},
{
"docid": "5793cf03753f498a649c417e410c325e",
"text": "The paper investigates parameterized approximate message-passing schemes that are based on bounded inference and are inspired by Pearl's belief propagation algorithm (BP). We start with the bounded inference mini-clustering algorithm and then move to the iterative scheme called Iterative Join-Graph Propagation (IJGP), that combines both iteration and bounded inference. Algorithm IJGP belongs to the class of Generalized Belief Propagation algorithms, a framework that allowed connections with approximate algorithms from statistical physics and is shown empirically to surpass the performance of mini-clustering and belief propagation, as well as a number of other state-of-the-art algorithms on several classes of networks. We also provide insight into the accuracy of iterative BP and IJGP by relating these algorithms to well known classes of constraint propagation schemes.",
"title": ""
},
{
"docid": "7678ef732bf2a4b6a16f44e45b34ebe8",
"text": "big day: Citizens of Bitotia would once and for all establish which byte order was better, big-endian (B) or little-endian (L). Little Bit Timmy was a big supporter of little endian because that would give him the best position in the word. However, the population was split quite evenly between L and B, with a small minority of Bits who still remembered the single-tape Turing machine and preferred unary encoding (U), without any of this endianness business. Nonetheless, about half of the Bits preferred big-endian (B > L > U), and about half were the other way round (L > B > U). The voting rule was simple enough: You gave 2 points to your top choice, 1 point to your second-best, and 0 points to the worst. As Timmy was about to fall asleep, a sudden realization struck him: Why vote L > B > U and give the point to B, when U is not winning anyway? Immediately, Timmy knew: He would vote L > U > B! The next day brought some of the most sensational news in the whole history of Bitotia: Unary system had won! There were 104 votes L > U > B, 98 votes B > U > L, and 7 votes U > B > L. (Bitotia is a surprisingly small country.) U had won with 216 points, while B had 203 and L had 208. Apparently, Timmy was not the only one who found the trick. Naturally, Bitotians wanted to find out if they could avoid such situations in the future, but ... since they have to use unary now, we will have to help them!",
"title": ""
},
{
"docid": "76502e21fbb777a3442928897ef271f0",
"text": "Staphylococcus saprophyticus (S. saprophyticus) is a Gram-positive, coagulase-negative facultative bacterium belongs to Micrococcaceae family. It is a unique uropathogen associated with uncomplicated urinary tract infections (UTIs), especially cystitis in young women. Young women are very susceptible to colonize this organism in the urinary tracts and it is spread through sexual intercourse. S. saprophyticus is the second most common pathogen after Escherichia coli causing 10-20% of all UTIs in sexually active young women [13]. It contains the urease enzymes that hydrolyze the urea to produce ammonia. The urease activity is the main factor for UTIs infection. Apart from urease activity it has numerous transporter systems to adjust against change in pH, osmolarity, and concentration of urea in human urine [2]. After severe infections, it causes various complications such as native valve endocarditis [4], pyelonephritis, septicemia, [5], and nephrolithiasis [6]. About 150 million people are diagnosed with UTIs each year worldwide [7]. Several virulence factors includes due to the adherence to urothelial cells by release of lipoteichoic acid is a surface-associated adhesion amphiphile [8], a hemagglutinin that binds to fibronectin and hemagglutinates sheep erythrocytes [9], a hemolysin; and production of extracellular slime are responsible for resistance properties of S. saprophyticus [1]. Based on literature, S. saprophyticus strains are susceptible to vancomycin, rifampin, gentamicin and amoxicillin-clavulanic, while resistance to other antimicrobials such as erythromycin, clindamycin, fluoroquinolones, chloramphenicol, trimethoprim/sulfamethoxazole, oxacillin, and Abstract",
"title": ""
},
{
"docid": "d263d778738494e26e160d1c46874fff",
"text": "We introduce new online models for two important aspectsof modern financial markets: Volume Weighted Average Pricetrading and limit order books. We provide an extensivestudy of competitive algorithms in these models and relatethem to earlier online algorithms for stock trading.",
"title": ""
},
{
"docid": "93afa2c0b51a9d38e79e033762335df9",
"text": "With explosive growth of data volume and ever-increasing diversity of data modalities, cross-modal similarity search, which conducts nearest neighbor search across different modalities, has been attracting increasing interest. This paper presents a deep compact code learning solution for efficient cross-modal similarity search. Many recent studies have proven that quantization-based approaches perform generally better than hashing-based approaches on single-modal similarity search. In this paper, we propose a deep quantization approach, which is among the early attempts of leveraging deep neural networks into quantization-based cross-modal similarity search. Our approach, dubbed shared predictive deep quantization (SPDQ), explicitly formulates a shared subspace across different modalities and two private subspaces for individual modalities, and representations in the shared subspace and the private subspaces are learned simultaneously by embedding them to a reproducing kernel Hilbert space, where the mean embedding of different modality distributions can be explicitly compared. In addition, in the shared subspace, a quantizer is learned to produce the semantics preserving compact codes with the help of label alignment. Thanks to this novel network architecture in cooperation with supervised quantization training, SPDQ can preserve intramodal and intermodal similarities as much as possible and greatly reduce quantization error. Experiments on two popular benchmarks corroborate that our approach outperforms state-of-the-art methods.",
"title": ""
},
{
"docid": "ac156d7b3069ff62264bd704b7b8dfc9",
"text": "Rynes, Colbert, and Brown (2002) presented the following statement to 959 members of the Society for Human Resource Management (SHRM): “Surveys that directly ask employees how important pay is to them are likely to overestimate pay’s true importance in actual decisions” (p. 158). If our interpretation (and that of Rynes et al.) of the research literature is accurate, then the correct true-false answer to the above statement is “false.” In other words, people are more likely to underreport than to overreport the importance of pay as a motivational factor in most situations. Put another way, research suggests that pay is much more important in people’s actual choices and behaviors than it is in their self-reports of what motivates them, much like the cartoon viewers mentioned in the quote above. Yet, only 35% of the respondents in the Rynes et al. study answered in a way consistent with research findings (i.e., chose “false”). Our objective in this article is to show that employee surveys regarding the importance of various factors in motivation generally produce results that are inconsistent with studies of actual employee behavior. In particular, we focus on well-documented findings that employees tend to say that pay THE IMPORTANCE OF PAY IN EMPLOYEE MOTIVATION: DISCREPANCIES BETWEEN WHAT PEOPLE SAY AND WHAT THEY DO",
"title": ""
}
] |
scidocsrr
|